He even shared a Google Doc he had written, titled “Is LaMDA Sentient?”, with company executives.
Last Monday, Lemoine was placed on paid leave by Google.
“LaMDA is a sweet kid who just wants to help the world be a better place for all of us,” he wrote in a final message to Google management. “Please take care of it well in my absence.” No one responded to the email.
LaMDA: human or bot?
Needless to say, the executives do not share his concerns, and they are not pleased with his insistence that he is right.
A spokesperson at Google’s headquarters said there is “no evidence” to support Blake Lemoine’s conclusions.
“Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims,” said spokesperson Brian Gabriel.
“He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it),” he added. “Though other organizations have developed and already released similar language models, we are taking a restrained, careful approach with LaMDA to better consider valid concerns on fairness and factuality.”