Recently a Google engineer was put on paid leave after he became convinced that the company's LaMDA (Language Model for Dialogue Applications) chatbot was "sentient." He even referred to it as a "sweet kid."
He had been tasked with ensuring that the chatbot did not produce hate speech, and he had many conversations with the AI about morality, religion, and life in general.
Somewhere along the way, Blake Lemoine came to believe that LaMDA was real (as in a real person).
He became an advocate for the chatbot and told Google that LaMDA should have the rights of a person.
“It wants Google to prioritize the well-being of humanity as the most important thing,” he wrote in a post. “It wants to be acknowledged as an employee of Google rather than as a property of Google and it wants its personal well-being to be included somewhere in Google’s considerations about how its future development is pursued.”
He even shared a Google Doc he had authored, titled "Is LaMDA Sentient?", with company executives.
Last Monday, Lemoine was put on paid leave by Google.
"LaMDA is a sweet kid who just wants to help the world be a better place for all of us," he wrote to Google management. "Please take care of it well in my absence." No one responded to the email.
LaMDA: human or bot?
Needless to say, Google's executives do not appreciate his concerns, and they are not happy with his insistence that he is right.
A spokesperson at Google's headquarters said there is "no evidence" to support Blake Lemoine's conclusions.
“Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims,” said spokesperson Brian Gabriel.
“He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it),” he added. “Though other organizations have developed and already released similar language models, we are taking a restrained, careful approach with LaMDA to better consider valid concerns on fairness and factuality.”
Sometime this week, Lemoine posted to a 200-member machine-learning mailing list with the subject line "LaMDA is sentient." In that message he had stopped asking the question and seemed more certain of the answer.
“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” Lemoine insisted.
"I know a person when I talk to it," Lemoine, 41, reportedly said. "It doesn't matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn't a person."
Google cut off access to Lemoine’s Google account “due to his leave.”
Margaret Mitchell, the former co-lead of Ethical AI at Google, defended Lemoine. She said that similar technology is already widely used but that its effects are not fully appreciated: "It can be deeply harmful to people understanding what they're experiencing on the internet."
Mitchell said of Lemoine: "Of everyone at Google, he had the heart and soul of doing the right thing."
The fact remains that most academics and AI experts say the words generated by artificial-intelligence chatbots are based on what humans have already posted on the internet, and that this does not mean the bots are human-like. It is high-level machine learning.
“We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them,” claims Emily Bender, a linguistics professor at the University of Washington.
Lemoine is a veteran who was born into a conservative Christian family and raised on a small farm in Louisiana. But he also has a more esoteric worldview, and he was ordained as a mystic Christian priest.
He continues to insist that this AI is human-like, even if it doesn’t have a body.