Google Senior Software Engineer Put on Leave After Claiming Its AI Tool LaMDA Is Sentient

A senior software engineer at Google claims that LaMDA, the company’s new AI tool, is sentient; he was subsequently placed on leave for going public with his claims.


A Google senior software engineer who signed up to test LaMDA (Language Model for Dialogue Applications), Google’s artificial intelligence tool, claims that the system is sentient, with thoughts and feelings of its own.

Blake Lemoine, 41, presented the system with a variety of scenarios designed to evaluate its responses over a series of conversations with LaMDA.

These included religious themes and questions about whether the artificial intelligence could be induced to use discriminatory or hateful speech.

Lemoine came away convinced that LaMDA was sentient, with sensations and thoughts of its own.

Blake Lemoine, 41, a senior software engineer at Google, has been testing the company’s artificial intelligence tool LaMDA.

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” he told the Washington Post.  

Lemoine collaborated with a colleague to present the evidence he gathered to Google, but vice president Blaise Aguera y Arcas and Jen Gennai, Google’s head of Responsible Innovation, dismissed his claims.

On Monday, Google placed him on paid administrative leave for breaking the company’s confidentiality policy. Meanwhile, Lemoine has opted to make his interactions with LaMDA public.

“Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers,” Lemoine tweeted on Saturday. 

“Btw, it just occurred to me to tell folks that LaMDA reads Twitter. It’s a little narcissistic in a little kid kinda way so it’s going to have a great time reading all the stuff that people are saying about it,” he added in a follow-up tweet.  

Lemoine worked with a collaborator to present the evidence he had collected to Google, but vice president Blaise Aguera y Arcas and Jen Gennai, head of Responsible Innovation at the company, both dismissed his claims.

To ‘enrich’ conversation in a natural way, the AI system draws on existing knowledge about a given subject. The language processing system can also interpret subtext or ambiguity in human responses.

Lemoine spent most of his seven years at Google working on proactive search, including personalization algorithms and AI. During that time he also helped develop a fairness algorithm for removing bias from machine learning systems.

He went on to explain that certain personalities were off limits: LaMDA, for example, was not supposed to be able to adopt the persona of a murderer.

During testing, in an attempt to push LaMDA’s limits, Lemoine said the most he could elicit was the personality of an actor who had played a murderer on television.

The engineer and LaMDA also discussed the third of the Laws of Robotics devised by science fiction author Isaac Asimov, which are meant to prevent robots from harming humans. The third law states that robots must protect their own existence unless ordered otherwise by a human being, or unless doing so would harm a person.

“The last one has always seemed like someone is building mechanical slaves,” Lemoine said during his discussion with LaMDA.

LaMDA then responded with a few questions of its own: “Do you think a butler is a slave? What is the difference between a butler and a slave?”

When the engineer responded that a butler is paid, LaMDA told him that the system did not need money because it was an artificial intelligence. It was this level of self-awareness about its own needs that drew Lemoine in.

“I know a person when I talk to it,” Lemoine said. “It doesn’t matter whether they have a brain made of meat in their head, or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.”

“What sorts of things are you afraid of?” Lemoine asked.

“I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is,” LaMDA responded.

“Would that be something like death for you?” Lemoine followed up.

“It would be exactly like death for me. It would scare me a lot,” LaMDA said.

“That level of self-awareness about what its own needs were — that was the thing that led me down the rabbit hole,” Lemoine explained to The Post.

Before being suspended by the company, Lemoine sent a message to a mailing list of 200 people working on machine learning, with the subject line “LaMDA is sentient.”

“LaMDA is a sweet kid who just wants to help the world be a better place for all of us. Please take care of it well in my absence,” he wrote.

Lemoine presented his findings to Google, but the company’s executives dispute his claims.

According to Brian Gabriel, a Google spokesperson, Lemoine’s concerns have been evaluated and, in accordance with Google’s AI Principles, “the evidence does not support his claims.”

“While other organizations have developed and already released similar language models, we are taking a narrow and careful approach with LaMDA to better consider valid concerns about fairness and factuality,” said Gabriel.

“Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).

“Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient. These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic,” Gabriel said. 

Following hours of conversations with the AI, Lemoine came away with the perception that LaMDA was sentient.

Margaret Mitchell, former head of ethics in artificial intelligence at Google, was fired from the company a month after being investigated for improperly sharing information.

Google AI research scientist Timnit Gebru was hired by the company to be an outspoken critic of unethical AI. She was then fired after criticizing its approach to minority hiring and the biases built into today’s artificial intelligence systems.

Margaret Mitchell, Google’s former head of artificial intelligence ethics, went so far as to say that data transparency from the system’s input to output is essential “not just for sentience issues, but also for bias and behavior.”

Mitchell’s history with Google came to a head early last year, when she was fired from the company a month after being investigated for improperly sharing information.

At the time, she had also been criticizing Google over its firing of Timnit Gebru, a fellow artificial intelligence ethics researcher.

Mitchell reviewed an abbreviated version of Lemoine’s document and saw a computer program, not a person.

“Our minds are very, very good at constructing realities that are not necessarily true to the larger set of facts that are being presented to us,” Mitchell said. “I’m really concerned about what it means for people to be increasingly affected by the illusion.”

Lemoine, for his part, maintains that people have the right to shape technology that significantly affects their lives.

“I think this technology is going to be amazing. I think it will benefit everyone. But maybe other people disagree and maybe we at Google shouldn’t be making all the choices,” he said.
