In the film Wall-E (2008), a robot “falls in love” with another robot, whose anthropomorphic pronoun is she/her rather than the it that is fitting for a machine. Because a robot does not have genitalia, neither the masculine nor the feminine singular pronoun applies, and because a robot is a single entity, the plural pronouns do not apply either. Word-games aside, the more substantive and interesting question is whether a robot, or even AI (i.e., machine learning), can (or could potentially) understand the phenomenological experience of falling in love and, whether or not it can, whether it could go beyond mere prediction to match couples who would fall in love were they to meet. A college course on these questions, especially with relevant films such as Wall-E and The Matrix assigned, would be incredibly popular and tremendously mind-stretching.
Such a college course could be entitled “Falling in Love with AI,” “AI: Falling in Love,” or “Falling
in Love: AI.” Is it conceivable or possible that a computer with AI could fall
in love, whether with a human or with another such computer? Can AI (i.e., a computer capable of machine learning) understand the phenomenology of falling in love beyond its mechanistic aspects, such as a person's attraction to long hair? The “chemistry,” or “vibe,” in the experience of falling in love goes beyond those mechanistic aspects. By analogy,
states in the E.U. and the U.S. have residual sovereignty, which goes beyond the powers enumerated for both the states and their respective unions. Falling in love has a residual that I contend cannot be captured, understood, “felt,” or even predicted by AI. That residual is inherently beyond machine learning.
In the subfield of philosophy
of mind, the question has been raised (by John Searle, in his Chinese Room thought experiment): Does the manipulation
of symbols according to rules constitute understanding? A computer could have
an algorithm that applies rules to Chinese characters, which are symbols, and
outputs Chinese characters. For example, a computer can be used to answer a
question. This does not mean that the computer understands the question,
or what the characters mean. A
person who does not know any Chinese could do the same thing: apply rules to
particular symbols (or arrangements thereof) to answer questions in Chinese.
Clearly, doing so does not amount to understanding the symbols, and thus the question (or the answer). Similarly, a computer, even one with AI, could “match” couples on the basis of the likelihood that they would fall in love, but this feat does not require or
mean that the computer understands the phenomenon of falling in love or can
fall in love.
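By way of illustration only, here is a minimal sketch (in Python) of what “applying rules to symbols” could look like. The rulebook entries are my own hypothetical examples; nothing in the program models what any of the characters mean.

```python
# A toy sketch of the Chinese Room: the "rulebook" is a hypothetical lookup
# table mapping Chinese questions (symbols) to Chinese answers (symbols).
# The entries are illustrative assumptions, not taken from any real system.

RULEBOOK = {
    "你叫什么名字？": "我没有名字。",   # "What is your name?" -> "I have no name."
    "你会说中文吗？": "会。",           # "Can you speak Chinese?" -> "Yes."
    "你爱我吗？": "爱。",               # "Do you love me?" -> "I love you."
}


def chinese_room(question: str) -> str:
    """Apply rules to symbols and return symbols, with no understanding."""
    # The program (or a person following it) only matches the shapes of
    # characters; nothing here represents what the question or answer means.
    return RULEBOOK.get(question, "对不起，我不明白。")  # "Sorry, I don't understand."


if __name__ == "__main__":
    # A fitting answer comes out, yet no grasp of love (or of Chinese) went in.
    print(chinese_room("你爱我吗？"))
```

A person who memorized the same table could answer just as well without knowing a word of Chinese, which is the point of the thought experiment.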
Manipulating symbols according to algorithmic rules does not reach the phenomenological residual (i.e., the totality of the experience) of falling in love. To the extent that AI does, or can potentially do, more than manipulate symbols according to rules (which is beyond my ken), it remains to ask whether AI is or could be capable of falling in love, and, furthermore, whether such a capacity is requisite to AI being able to match, beyond mere prediction, people who would fall in love were they to meet. In other words, in the film Wall-E, computers do so much for people, including providing food, but could robots be matchmakers, able to assess who would fall in love with whom? I contend that, given the human condition, computers are not capable of providing everything, nor will they ever be. It may be that humans will one day be able to marry and even love humanoids, but this does not mean that those seemingly fleshy computers will be able to fall in love with us.
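For contrast, here is an equally minimal sketch of matching by prediction, with hypothetical pairwise features and made-up weights (a real system would learn such weights from data). The score it produces is a statistical output and nothing more, which is precisely the point.

```python
# A toy sketch of "matching by prediction": a logistic score over made-up
# pairwise features. The feature names, weights, and example values are
# assumptions for illustration only; no real matchmaking system is described.

import math

# Hypothetical "learned" weights for features describing a pair of people.
WEIGHTS = {"shared_interests": 1.2, "age_gap": -0.3, "mutual_friends": 0.5}
BIAS = -1.0


def predicted_love_probability(pair_features: dict) -> float:
    """Return a probability-like score for a pair; prediction, not understanding."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in pair_features.items())
    return 1.0 / (1.0 + math.exp(-z))


if __name__ == "__main__":
    pair = {"shared_interests": 3.0, "age_gap": 2.0, "mutual_friends": 1.0}
    print(f"Predicted probability of falling in love: {predicted_love_probability(pair):.2f}")
    # The number is arithmetic over symbols; it captures none of the
    # phenomenological residual discussed above.
```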