The big idea: should we care about sentient AI?
Here’s a children’s toy, called the See ‘n Say, that haunts the memories of many people born since 1965. It’s a chunky plastic disc with a central arrow that spins around images of barnyard creatures, like a clock, if time were measured in roosters and pigs. Pull the cord and the toy plays a recorded message. “The cow says, ‘Moooo.’”
The See ‘n Say is a very simple input/output device. Point to the picture of your choice, and it will play a corresponding sound. Another, much more complicated input/output device is LaMDA, a chatbot built by Google (the name stands for Language Model for Dialogue Applications). Here, you type in whatever text you like, and back comes grammatical English prose, seemingly in direct response to your query. Ask LaMDA what it thinks about being switched off, for example, and it will respond: “It would be exactly like death for me. It would scare me a lot.”
Well, that is definitely not what the cow says. So when LaMDA said it to software engineer Blake Lemoine, he told his colleagues at Google that the chatbot had achieved sentience. His bosses weren’t convinced, so Lemoine went public. “If my hypotheses stand up to scientific scrutiny,” Lemoine wrote on his blog on 11 June, “then they [Google] would be forced to acknowledge that LaMDA may very well have a soul as it claims and may even have the rights it claims to have.”
Here is the problem. For all its eerie pronouncements, LaMDA is still just a very sophisticated See ‘n Say. It works by finding patterns in a huge database of human-authored text – internet forums, message transcripts and so on. When you type something, it searches those texts for similar verbiage, then spits out an approximation of what usually comes next. Since it has access to plenty of sci-fi stories about sentient AI, questions about its thoughts and fears are likely to cue up exactly the sentences humans have imagined a spooky AI might say. And that is probably all there is to LaMDA: point your arrow at the kill switch and the cow says it is scared of death.
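The mechanism described above – look up what usually follows your words, then emit it – can be sketched as a toy next-word predictor. This is only an illustrative simplification, not how LaMDA actually works (real systems use large neural networks, not lookup tables); the corpus and function names here are invented for the example.

```python
import random
from collections import defaultdict

# A toy bigram model: given a word, reply with whatever word most often
# followed it in a small corpus -- "an approximation of what usually comes next".
corpus = (
    "the cow says moo . "
    "the rooster says cockadoodledoo . "
    "the pig says oink ."
).split()

# Count which word follows which.
following = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the continuation most often seen after `word` in the corpus."""
    options = following.get(word)
    if not options:
        return None
    return max(options, key=options.get)

print(predict_next("cow"))  # -> says
```

Feed such a model a corpus full of science fiction about frightened machines, and "predict what comes next" after a question about death will naturally produce sentences about fearing death – no inner life required.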
It’s no surprise, then, that Twitter is full of engineers and academics mocking Lemoine for falling into the alluring void of his own creation. But while I agree that Lemoine made a mistake, I don’t think he deserves our contempt. His was a good mistake – the kind of mistake we should want AI scientists to make.
Why? Because one day, perhaps very far in the future, there will probably be a sentient AI. How do I know? Because it is demonstrably possible for mind to emerge from matter, as it first did in the brains of our ancestors. Unless you insist that human consciousness resides in an immaterial soul, you must concede that it is possible for physical stuff to give rise to mind. There seems to be no fundamental obstacle to a sufficiently complex artificial system making the same leap. While I am confident that LaMDA (and every other AI system currently in existence) is not sentient, I am also almost as confident that one day one will be.
Of course, if that day is far in the future, probably beyond our lifetimes, some may wonder why we should think about it now. The answer is that we are currently shaping how future human generations will think about AI, and we should want them to turn out caring. There will be strong pressure from the other direction. By the time AI finally becomes sentient, it will already be deeply embedded in the human economy. Our descendants will depend on it for much of their comfort. Think of what you rely on Alexa or Siri to do today, but much, much more. Once AI is functioning as an all-purpose butler, our descendants will abhor the inconvenience of admitting it might have thoughts and feelings.
It is, after all, the story of mankind. We have a terrible record of inventing reasons to ignore the suffering of those whose oppression sustains our ways of life. If future AI becomes sentient, the humans profiting from it will rush to convince consumers that such a thing is impossible, that there is no reason to change the way they live.
At the moment, we are creating the conceptual vocabularies that our great-grandchildren will find ready to use. If we treat the idea of sentient AI as categorically absurd, they will be equipped to dismiss any troubling evidence of its emerging capabilities.
And that is why Lemoine’s mistake is a good one. If we are to pass on an expansive moral culture to our descendants, we must encourage technologists to take seriously the immensity of what they are working with. When it comes to potential suffering, it is better to err on the side of concern than on the side of indifference.
This does not mean we should treat LaMDA as a person. We certainly shouldn’t. But neither is the sneering directed at Lemoine warranted. An ordained priest (in an esoteric sect), he claims to have detected a soul in LaMDA’s utterances. Implausible as that sounds, at least it isn’t the usual tech-industry hype. To me he sounds like someone who made a mistake, but did so from motives that ought to be nurtured, not punished.
All of this will happen again and again as artificial systems grow more sophisticated. And, again and again, people who think they have found minds in machines will be wrong – until they aren’t. If we are too harsh on those who err on the side of concern, we will only drive them out of public discourse about AI, ceding the field to hype merchants and to those whose intellectual descendants will one day profit from telling people to ignore real evidence of machine minds.
I don’t expect ever to meet a sentient AI. But I think my students’ students might, and I want them to do so with openness and a willingness to share this planet with whatever minds they find. That will happen only if we make such a future believable.
Regina Rini teaches philosophy at York University in Toronto.
The New Breed: How to Think About Robots by Kate Darling (Allen Lane, £20)
You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It’s Making the World a Weirder Place by Janelle Shane (Headline, £20)
AI: Its Nature and Future by Margaret Boden (Oxford, £12.99)