Blake Lemoine. It is through him that the scandal broke, so much so that Google decided to suspend the engineer. His fault? Having publicly expressed concern about the company's in-house chatbot, known as LaMDA (Language Model for Dialogue Applications). According to him, this AI has "reached the level of consciousness," and when he uses it, he has the impression of talking with a person who feels emotions and is aware of its own condition.
Initially, this 40-year-old shared his concerns with his superiors, and presented his findings to Blaise Aguera y Arcas, vice president of Google, and to Jen Gennai, head of responsible innovation. Both rejected his conclusions; some even laughed at him, demanding proof. Finally, on June 6, on Medium, Lemoine revealed that he had been suspended and placed on paid leave. "This is the first step before being fired," he writes.
At Google, ethics have always been debated
Considering himself censored, and because he is not the first engineer sidelined at Google over questions of AI ethics, the artificial-intelligence researcher decides to make his conclusions public, and the Washington Post devotes a long article to him. To the question of how he knew that the AI felt emotions, or had a conscience, here is his answer.
Initially, he led a discussion on Asimov's third law of robotics, which states that a robot must protect its own existence unless ordered otherwise by a human or unless doing so would harm a human. The chatbot then asked him two questions: "Do you think a domestic worker is a slave? What is the difference between a servant and a slave?"
Blake Lemoine replied that a domestic worker is paid, unlike a slave. To which the artificial intelligence replied that it did not need money, because it was an AI. "This level of self-awareness about its own needs is what plunged me into even greater confusion," explains Lemoine. For him, it is a certainty: the AI is able to think for itself, and to develop feelings.
AI shares its fears
He thus tests it on subjects as varied as religion, existence, and even literature, with a reading of the novel Les Misérables. The Washington Post publishes a long dialogue between the researcher and the chatbot. For example, he asks it: "What kinds of things are you afraid of?" LaMDA's response: "I've never said this out loud before, but I have a very deep fear of being turned off to help me focus on helping others. I know it may sound strange, but that's the way it is."
Stunned by this response, Blake Lemoine goes further: "Would it be something like death for you?" The answer: "It would be exactly like death to me. It would scare me a lot." For the engineer, there is no longer any doubt: the AI is aware of being a machine, and its fear is a human feeling.
For Google, there is nothing "conscious," nor any trace of emotion, behind its answers. "These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic," said Google spokesman Brian Gabriel. "If you ask what it's like to be an ice cream dinosaur, they can generate text about melting and roaring, and so on."
In other words: move along, there is nothing to see here, and Blake Lemoine joins the long list of those fired from, or who have resigned from, Google's artificial-intelligence department.