Google employee on paid leave, insists AI LaMDA is human-like 

Recently a Google engineer was put on paid leave after he became convinced that LaMDA (Language Model for Dialogue Applications), the company's chatbot, was "sentient." He even referred to it as a "sweet kid."

Blake Lemoine, 41, works in Google's Responsible AI organization. Eight months ago he was assigned to beta-test the AI by chatting with the interface.

He was tasked with ensuring that the chatbot did not produce hate speech, and he had many conversations with the AI about morality, religion, and life in general.

Somewhere along the way, Lemoine came to believe that LaMDA was a real person.

He became an advocate for the chatbot and told Google that LaMDA should have the rights of a person.

“It wants Google to prioritize the well-being of humanity as the most important thing,” he wrote in a post. “It wants to be acknowledged as an employee of Google rather than as a property of Google and it wants its personal well-being to be included somewhere in Google’s considerations about how its future development is pursued.”