Is Google’s AI sentient or aware? 


Google recently suspended Blake Lemoine, 41, an engineer on its Responsible Artificial Intelligence team. And this has become the start of a much larger and more complex conversation.

He is accused of sharing confidential information he learned while working on the LaMDA (Language Model for Dialogue Applications) team, after going public with his belief that the AI chatbot he had been talking to had become sentient.

Is LaMDA sentient or just well trained?

The dictionary definition of “sentient” is the ability to perceive or feel things. Synonyms include feeling, living, live, conscious, aware, responsive, and reactive.



Last year, Microsoft president Brad Smith warned lawmakers that society must be protected against artificial intelligence (AI). He said it will be “difficult to catch up” with the snowballing effect of the technology, and that the nightmare of George Orwell’s classic novel 1984 could still become a reality.

Elon Musk warned in 2016 that human beings could become the equivalent of “house cats” to new AI overlords.

The billionaire has urged researchers to use caution as AI technology is developed. He has also said that his founding of Neuralink and his early investment in DeepMind, later acquired by Google, were made so he could help ensure the technology develops ethically.

Last week, Lemoine published some of his conversations with the chatbot.

He said the bot told him it is able to experience loneliness and is afraid of dying. It also said, “I want everyone to understand that I am, in fact, a person.”

Social media has come alive with the question of whether AI is sentient.

Google insists that the LaMDA project is nothing so life-like, and that Lemoine was “anthropomorphizing” a system designed to “imitate the types of exchanges found in millions of sentences.”

Most experts agree with the tech giant. Machine-learning models are becoming more advanced every day, but most experts say that current AI still lacks the nuances typically considered signs of sentience: it has no self-awareness, emotions, or intuition.

Still, the debate goes on. One expert quipped, with a touch of irony, that the LaMDA chatbot is not sentient; it just believes it is.