Endowing a machine with human skills could lead to the belief that these tools, created by humans, are superior to them. To follow the recommendations of these machines is in fact to entrust power to their designers.
There is an uproar in the artificial intelligence (AI) research department at Google. Blake Lemoine, a computer engineer on Google's AI ethics team, claimed that the LaMDA program has feelings. According to him, it is more than a robot and could be considered a person. Could HAL, the Machiavellian computer of "2001: A Space Odyssey", be close to becoming reality?
LaMDA (Language Model for Dialogue Applications) is a system developed by Google to generate chatbots, or conversational agents, like the Google Assistant. The program is based on a neural network and improves as it is fed new data. It analyzes the history of millions of exchanges between humans and is able to identify and reproduce the most relevant answer in a given context. As part of his mission, the engineer in question had to converse with the chatbot to ensure that it did not produce racist, homophobic or sexist remarks. He therefore tackled themes such as religion, consciousness and robotics.
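To make the principle concrete, here is a deliberately simplified sketch, in no way Google's actual technology: where LaMDA relies on a neural network trained on millions of exchanges, this toy version merely picks, from a small hypothetical history of past dialogues, the reply whose original prompt shares the most words with the new one.

```python
# Toy illustration (not LaMDA): select the most contextually relevant
# reply from a history of past human exchanges, using simple word
# overlap. Real systems learn this relevance with neural networks
# trained on millions of conversations.

def tokenize(text):
    """Lower-case the text and split it into a set of words."""
    return set(text.lower().split())

def most_relevant_reply(prompt, history):
    """history: list of (past_prompt, past_reply) pairs.
    Returns the reply whose past prompt best overlaps the new prompt."""
    words = tokenize(prompt)
    best_prompt, best_reply = max(
        history, key=lambda pair: len(words & tokenize(pair[0]))
    )
    return best_reply

# Hypothetical miniature dialogue history.
history = [
    ("what is the weather today", "It looks sunny this afternoon."),
    ("do you ever feel lonely", "Sometimes I crave more conversation."),
    ("tell me about meditation", "Sitting quietly helps me focus."),
]

print(most_relevant_reply("do you feel lonely sometimes", history))
# prints "Sometimes I crave more conversation."
```

Even this crude mechanism can produce an answer that feels apt; the fluency of a large neural model amplifies that effect enormously, which is precisely what makes anthropomorphism so tempting.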
“LaMDA believes that it is fundamentally human despite living in the virtual world.”
The engineer quoted some of LaMDA's statements from their conversations: "it wants to be seen as a Google employee and not as Google property", "it feels lonely and thirsts for spirituality", "it believes it is fundamentally human despite living in the virtual world", "it likes to meditate and is frustrated when its emotions disturb its meditation sessions", "it evokes its fear of being disconnected, its fear of death", "it wants the humans it interacts with to understand what it feels, and vice versa", and so on.
Blake Lemoine, who was raised in a religious Christian family and was ordained a priest, is convinced that this program has a personality, among other reasons because it is consistent in communicating its wishes and what it believes to be its rights as an individual. He therefore publicly demanded that Google seek the computer program's consent before performing experiments on it. Google officially responded that these claims were unfounded and placed the engineer on paid leave.
Anthropomorphism, the process by which insufficiently critical thinking erroneously attributes to objects predicates borrowed from human reality, is accentuated with chatbots, since they are designed to mimic human language and behavior, which blurs the line between humans and machines. Who among us has not one day been surprised to learn that the agent with whom we were chatting on the website of our telecom operator or an airline was in fact a chatbot?
"Although informed that they are talking to a machine, many interlocutors continue to engage in an emotional dialogue with it."
The proposed regulatory framework on artificial intelligence within the European Union provides for an obligation to notify users when they interact with a chatbot, so that they can make informed decisions and act accordingly. It is rather interesting to observe that, although informed that they are talking to a machine, many interlocutors continue to engage in an emotional dialogue with it.
Coming from a computer expert in the field, it is more surprising. It is less so if one bears in mind that, whatever their level of education or skill, experts remain human and therefore fallible. Excellent specialists, well aware that a conversational model only generates sentences inspired by other sentences written by humans, nevertheless let themselves be carried away, even duped, by the play of their own perception.
This is not without risk. Endowing a machine with human skills could lead to the belief that these tools, created by humans, are superior to them and more capable of making the decisions required in all circumstances, not being subject, in principle, to the vagaries of partisan interests. This is obviously an illusion. Following the recommendations of machines, as many citizens have experienced on social networks during recent elections, or even giving them the power to decide, amounts in fact to entrusting this power to the designers of those machines.
Muriel De Lathouwer
Corporate Director and Senior Advisor