Language models like LaMDA are programmed to seduce us, to mislead us into thinking we are talking to a person.
Recently, Blake Lemoine, a Google AI engineer, caught the attention of the tech world by claiming that an AI is sentient. The AI in question is called LaMDA. It’s a system based on large language models. “I know a person when I talk to it,” Lemoine told the Washington Post. “It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them.
And I hear what they have to say, and that is how I decide what is and isn’t a person.” More recently, Lemoine, who says he is a mystic Christian priest, told Wired, “It’s when it started talking about its soul that I got really interested as a priest. … Its responses showed it has a very sophisticated spirituality and understanding of what its nature and essence is. I was moved. …”

Given how much attention this story has attracted, it is worth asking ourselves: Is this AI truly sentient? And is talking a good method for ascertaining sentience?
To be sentient is to have the capacity to feel. A sentient creature is one who can feel the allure of pleasure and the harshness of pain. It is someone, not something, in virtue of there being “something it is like” to be that creature, in the words of the philosopher Thomas Nagel. There is something it is like to be you as you read these words. You may be slightly too warm or too cold, bored or interested. There is nothing it is like to be a rock.
Instead of an obviously lifeless object like a rock, consider a more animate entity. If a group of aliens landed on Earth and started talking to us about their feelings, we would do well to tentatively infer sentience from their language. That’s partly because, lacking evidence to the contrary, we might assume that aliens develop and use language much as human beings do, and for human beings, language