“Machines can learn to behave”

Blaise Agüera y Arcas, vice president of research at Google Research.

Blaise Agüera y Arcas (47) is a world authority on artificial intelligence. He serves as vice president of research at Google Research, the division that centralizes the company’s R&D, where he leads a team of around 500 people. Among them was the engineer Blake Lemoine, who had his moment of fame last June: he claimed, in a story published by The Washington Post, that LaMDA, the conversational AI system he was working on, had become self-aware. “If I didn’t know this was a computer program that we recently developed, I would have thought I was talking to a seven- or eight-year-old with a background in physics,” he declared. He was immediately suspended and then fired. Agüera y Arcas, in a telephone interview from Seattle, clarifies that the reason for Lemoine’s dismissal was not “his statements, but the leaking of confidential documents.”

Born in Rhode Island, he has Catalan blood. “I know how to say ‘collons de Déu’ [God’s balls] and a few other things,” he says, laughing. His father, “a young communist from Reus,” in Tarragona, met his mother, an American, on a kibbutz in Israel. Some of that ideology rubbed off on him: “If you ask me whether I believe in capitalism, I will say no. Its ultimate goal is important; we need a change,” he says. Although the projects he works on are confidential, Agüera agreed to speak with EL PAÍS about the Lemoine and LaMDA affair.

Question. Can an artificial intelligence be conscious, as Blake Lemoine claims?

Answer. It depends. What does being conscious mean to you?

Q. Being able to express your will, your goals, and your own ideas.

A. Yes, that is a definition based on the ability to discern between good and evil. There are others, too. For some, being conscious simply means being intelligent; for others, it means being able to feel emotions. And for the philosopher Thomas Nagel, it also implies being something for someone, that there is a subjective experience behind it. I don’t think human beings are supernatural: we are made of atoms that interact with each other; there is nothing magical about us. As a computational neuroscientist, I believe it is possible for a machine to behave like us, in the sense that computation is capable of simulating any kind of physical process.

Blaise Agüera y Arcas, then a senior researcher at Google, at a TED talk in 2016.

Q. Do you agree with Blake Lemoine’s statements?

A. No. Blake said specifically that LaMDA was sentient, but he also made it very clear that, for him, there is something supernatural about it; he believes it has a soul. So there are parts of his argument I can agree with, but I do not share his spiritual convictions.

Q. Have you spoken to him since he was fired?

A. No. I have no personal problem with Blake; I think he’s a really interesting guy, and he was very brave to go public with his opinion of LaMDA. But he leaked confidential documents. He has always been an unusual guy.

Q. In a column you published in The Economist, you said that when you spoke to LaMDA you felt “the ground move under your feet” and that “you might think you were talking to something intelligent.” What exactly does that mean?

A. I mean that it is very easy for us to think we are talking to someone rather than something. We have a very strong social instinct to humanize animals and things. I have interacted with many, many such systems over the years, and with LaMDA there is a world of difference. You think, “It really understands the concepts!” Most of the time, it feels like you are having a real conversation. If the dialogue goes on long enough and you push it, it will eventually say strange or meaningless things. But most of the time it shows a deep understanding of what you are saying and responds in creative ways. I had never seen anything like it. It gave me the feeling that we are much closer to the dream of general artificial intelligence [AI that equals or surpasses human beings].

“Where is the bar that marks that there is understanding?”

Q. Which response from LaMDA shocked you the most?

A. I asked it if it was a philosophical zombie and it said, “Of course not, I feel things, just like you. In fact, how do I know you’re not a philosophical zombie?” It is easy to explain away this answer by saying that it may have found something similar among the thousands of conversations about philosophy it learned from. But we should start asking ourselves when we can consider a machine to be intelligent, and whether there is a bar it must clear to count as intelligent.

Q. Do you think recognition is important?

A. It is important to determine what we are talking about. We can distinguish between the ability to discern right from wrong, which has to do with obligations, and the capacity to bear moral responsibility, which has to do with rights. When something or someone has the latter, it can be judged morally. We make these judgments about people, not about animals, and also about companies or governments. I don’t think a tool like LaMDA could ever have the capacity for moral judgment.

Q. You say that conversational machines can understand concepts. How is that possible?

A. To claim otherwise seems risky to me. Where is the bar that marks that there is understanding? One answer might be that the system does not say stupid or random things. That is tricky, because people certainly do not always meet that requirement. Another possible argument is that a system trained only on language cannot understand the real world because it has no eyes or ears. But that runs into trouble too, because many people lack sight or hearing. Another answer would be to argue that it is simply not possible for machines to really understand anything. But then you are going against the fundamental premise on which computational neuroscience rests, which for the past 70 years has helped us better understand how the brain works.

Q. Many experts say that conversational systems simply spit out statistically probable answers, without any semantic understanding.

A. Those who make this argument rely on the fact that LaMDA-like systems are simply predictive models. They calculate the most likely continuation of a text, based on the millions of examples they have been given. The idea that a sequence of predictions might contain intelligence or understanding can be shocking. But neuroscientists say that prediction is the brain’s key function.
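To give a rough sense of what “calculating the most likely continuation of a text” means, here is a minimal toy sketch, not Google’s code and vastly simpler than a neural network like LaMDA: it merely counts which word tends to follow which in a tiny corpus and predicts the most frequent follower.

```python
# Toy next-word prediction: count word pairs and pick the most frequent follower.
# Illustration only; LaMDA-scale systems learn these statistics with large neural
# networks trained on enormous amounts of text, not with simple counts.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# For each word, count how often each other word follows it.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(token):
    """Return the statistically most likely continuation of `token`."""
    counts = following.get(token)
    if not counts:
        return "<unknown>"
    return counts.most_common(1)[0][0]

print(predict_next("the"))  # "cat" (one of several equally frequent continuations)
print(predict_next("sat"))  # "on"
```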

Q. So we don’t know whether machines understand what they are told, but we do know that they are capable of producing output that appears to show they have understood.

A. And what is the difference? I struggle to find a definition of understanding that makes it possible to say with certainty that machines do not have it.

Q. Can machines learn to behave?

A. Yes. Being able to behave is a function of understanding and motivation. The understanding part rests on ideas such as the principle that people should not be hurt. And that can be built into the model, so that if you ask one of these algorithms whether a character in a story behaved well or badly, the model can understand the relevant concepts and give appropriate answers. You can also motivate a machine to behave one way or another by giving it a bunch of examples and pointing out which ones are good and which ones are not.
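As a very loose illustration of that last point, the sketch below, my own example rather than anything from Google, trains a tiny bag-of-words perceptron on responses labeled good or bad, so it learns to prefer one style over another. Real systems steer large language models with far richer human feedback, but the principle of learning from labeled examples is the same.

```python
# Toy example of "motivating" a model with labeled examples: responses marked
# good (1) or bad (0), and a perceptron that learns to score new text accordingly.
examples = [
    ("happy to help you with that", 1),       # labeled good
    ("let me explain it step by step", 1),    # labeled good
    ("that is a stupid question", 0),         # labeled bad
    ("go away and stop bothering me", 0),     # labeled bad
]

weights = {}  # word -> learned weight

def score(text):
    """Sum the learned weights of the words in `text`; higher means 'better'."""
    return sum(weights.get(w, 0.0) for w in text.split())

# Perceptron-style updates: nudge word weights toward the labels we provided.
for _ in range(20):
    for text, label in examples:
        predicted = 1 if score(text) > 0 else 0
        for w in text.split():
            weights[w] = weights.get(w, 0.0) + 0.1 * (label - predicted)

print(score("happy to explain"))            # positive: resembles the good examples
print(score("stop asking stupid things"))   # negative: resembles the bad examples
```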

Q. What will LaMDA be capable of in ten years?

A. The next ten years will continue to be a period of very rapid progress. There are things that are still missing, including the formation of memories. Conversational machines cannot do that: they can retain something in the short term, but they cannot create narrative memories, which is what we use the hippocampus for. The next five years will be full of surprises.

