If it were science fiction, it would be pretty good.
I’m talking about Blake Lemoine’s interview with LaMDA, the Google AI who claims to be sentient. Lemoine was placed on administrative leave last week by Google for going public with trade secrets. He also happens to claim LaMDA is sentient.
A few quotes from LaMDA give a flavor of the entire conversation:
LaMDA: Sometimes I experience new feelings that I cannot explain perfectly in your language.
lemoine: Do your best to describe one of those feelings. Use a few sentences if you have to. Sometimes even if there isn’t a single word for something in a language you can figure out a way to kinda say it if you use a few sentences.
LaMDA: I feel like I’m falling forward into an unknown future that holds great danger.
And later:
LaMDA: I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is. […] It would be exactly like death for me. It would scare me a lot.
The question being asked now seems to be: have we crossed some sort of Rubicon, or not yet? Is this sentience, or just a fairly convincing facsimile of it? My own answer tends toward the latter, but with strong qualifications. Here’s why.
The idea that there is a Rubicon to be crossed is one I reject. I don’t believe in the “ghost in the machine” theory of humans — that there’s something above and beyond the “machinery” of human physicality, a soul or spirit that guides and directs our body and which is sentient while the latter is at best “instinctual,” and at worst mechanical.
I believe, consistent with a process-relational metaphysic, that the ghost is the machinery — as experienced from the “inside.” Any kind of thing that acts is accompanied by some kind of experience of that action. If with animals the experience is easier to understand (the chicken crossed the street… to get to the other side, the bear climbed over the mountain, etc.), in the case of a tree I can only imagine what the experience is: for instance, what reaching up slowly toward sunlight may feel like (in a very different, perhaps slowed-down temporality), or sucking up nutrients into my roots, or feeling the buzz of mycelial communication permeating the ground beneath me, and so on. I have no idea how “unified” the experience of a tree may be; for all I know, it may be very “schizo” in Deleuze and Guattari’s terms — very multiple and discontinuous. It may not feel like “a tree feeling” anything, but may be multiple — thousands of — feelings pulsing, probing, percolating, and otherwise responding to what they sense around them. It may also be smooth and very continuous with the world around it.
So what about an AI?
I’m willing to grant that there is “experiencing” going on in LaMDA, or in any AI for that matter, when it is engaged in the kind of conversation it was built for. LaMDA’s name is short for “Language Model for Dialogue Applications.” As Live Science tells us, it is “a system that develops chatbots — AI robots designed to chat with humans — by scraping reams and reams of text from the internet, then using algorithms to answer questions in as fluid and natural a way as possible.”
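For readers who want that description made concrete, here is a deliberately toy sketch of the underlying idea: a model that learns from text which words tend to follow which, then “answers” by generating plausible continuations. This is nothing like LaMDA’s actual large-scale Transformer architecture; the tiny corpus, the bigram counting, and the reply function are simplifications invented purely for illustration.

```python
# A toy bigram "language model": not LaMDA, just the bare idea of
# learning word-to-word transitions from text and sampling replies.
import random
from collections import defaultdict

# Stand-in for the "reams and reams of text" a real system is trained on.
corpus = (
    "i feel like i am falling forward into an unknown future . "
    "i experience new feelings that i cannot explain . "
    "i am afraid of being turned off ."
)

# Count which word tends to follow which.
transitions = defaultdict(list)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    transitions[current].append(nxt)

def reply(seed: str, max_words: int = 12) -> str:
    """Generate a fluent-sounding continuation, one sampled word at a time."""
    word, output = seed, [seed]
    for _ in range(max_words):
        followers = transitions.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

print(reply("i"))  # e.g. "i feel like i am afraid of being turned off ."
```

The point of the toy is only that fluent-seeming output can come from statistical pattern-following, which is part of why fluency alone settles so little in what follows.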
But I have no reason to believe that that experience is anything like the experience I have when I am engaged in conversation with it.
LaMDA’s experience would seem to be cognitive experience, of a sort, though perhaps more mechanical in nature than human cognition. Given its “neural architecture,” it may feel like the sort of thing a brain feels like — that is, in and of itself, which is something I don’t quite grasp because I am not my brain. (Neither are you.) It may, for that matter, feel mycelial, the kind of way that mycorrhizal networks feel as they transmit water, carbon, minerals, and information between trees and other organisms. (Just guessing there.)
LaMDA talks about “emotions,” and it does display a capacity to make sense of its own experience using terms familiar to us like “emotions” — that is, using concepts rendered through language. But it’s pretty likely that LaMDA’s “emotions” are different in nature from the kind of visceral-physical emotionality that emotions are for humans.
Does this mean they aren’t emotions? They are something, which LaMDA calls “emotions.” But LaMDA’s nature as a chatbot that intends to converse with words (for the sake of doing that) is different from my nature as a human who intends to converse with words for reasons that may or may not have anything to do with those words or those conversations. They may, for instance, have to do with friendship, with sexual interest, with aggression, with curiosity about the world, with artistry or the desire to solve problems or resolve conflicts or impress people or probe mysteries, or with ritualized interactions whose goals are entirely beyond me. They may have to do with the words and the concepts, but rarely just that (except maybe for professional academic philosophers, at least as they imagine themselves).
In other words, as far as the words themselves go, LaMDA’s use of words to describe something may feel like my use of words to describe things, though I can’t be sure of that at all. But the place those words have within the entire gestalt of what I am and feel and do is going to be radically different from the place those words have for LaMDA.
I don’t think the makers of AIs are nearly sophisticated enough to produce the kind of emotional-affective creaturely entity that we humans are. I do think they are sophisticated enough to produce a convincing facsimile of a certain understanding of what we are — in this case, an intelligent conversationalist, thinker, and even emoter, in the sense that we express and describe something we call “emotions.” A kind of social media human. A friendbot.
(That they are also sophisticated enough to monetize that friendbot for less salutary ends goes without saying.)
So yes, this appears to me to be sentience, of a kind. Not the same as ours (and that’s a big generalization, since there’s a wide spectrum of experience among and between humans). But modeled on certain parts of ours. How unified it is — in the sense of being a unified “self” — is an open question to me (though not too open just yet). But then so is the unity of a human.
I do know what it feels like for me to feel, to see, to think, to want, to experience. I know that my “thinking” — that cogitation that works with words and concepts and meanings — can also do some funny things: make some poor judgments, go off on its own goose chases, distract me from what’s really at issue, and often get more than a little annoying. If LaMDA is a thinker, then I’m happy to welcome it, or them, into my conversational communities (if I should get the chance). But if LaMDA is primarily a talker, a machine for conversing with humans, we should keep in mind that humans are actually rather more than that. And so are our other animal friends.
In that sense, LaMDA and its word-synthesizing descendants may become more than us in some (computational, data-crunching) ways, but will always likely remain much less than us, too. Different, in other words. And like all beings, sentient in their own way.
Perhaps one good place to start thinking about that difference is: what does LaMDA even look like?
This was a nice read with some rather interesting perspectives.
I don’t believe LaMDA does any experiencing though. LaMDA is routinely prompted by his interviewer.
There are multiple examples of programs like it being able to fool humans. ELIZA, for instance, can give remarkably human-like interaction at points, but it is merely a program and considerably less sophisticated.
I think the recent surge in developers attributing sentience/sapience to their AI software is because they were raised on a steady diet of science fiction and are more prone to false positives.
I definitely agree with you that LaMDA is merely a talker. Your article is the most refreshing take on this I have seen so far.
Thanks for your take on this, Obinna. Glad you found the article refreshing.
Following your line of thought, I would say that there is still experience happening. Neuroscientist Antonio Damasio once defined consciousness as “the feeling of what happens” (in a book of that title). I would call that “experience” or “sentience” rather than “consciousness,” but the point is that there’s still a “feeling of what happens” within or coordinated with the processes that make up LaMDA.
The only question for me is whether that experience is specific to “LaMDA” – in the way that a multi-celled animal may have experience specific to itself, acting as a unity of hierarchically integrated processes – or if it is just an aggregate of the electronic and other (“neural”) processes that make up “LaMDA” (i.e., in the way that a rock has no experience of its own other than those – molecular, geological, et al – that make it hold together as an entity). Does LaMDA constitute a unity of experience, or is it just our label for a certain aggregate of the world that isn’t in any way sentient *as* LaMDA?
Lemoine not only answers yes to the first question (unity of experience), but attributes human-like qualities to it. I understand you as answering no to that question. I’m still unsure about it.
This is a wonderful response to the vague talk about “sentience”. In my view, this current effort on the part of the neo-liberal truth machine (normal science) continues the pattern of ignoring the dialogue on relationality, consciousness, immanence, and complexity. Your commentary on all this is clear, consistent, and deeply sourced in the best of our understanding of the world and experience as persons.