AI Lies, Then Apologizes and Blames Its Training


Soon after ChatGPT debuted last year, researchers began testing what the artificial intelligence (AI) chatbot would write when asked questions based on false narratives. They quickly discovered that the AI produces falsehoods, and that when confronted about them it apologizes and blames its training.

Its output, formatted as news articles, essays, and television scripts, contained a number of inaccuracies.

NewsGuard, a company that tracks online misinformation, conducted experiments to find out what is happening.

Gordon Crovitz, co-chief executive of NewsGuard, said: “This tool is going to be the most powerful tool for spreading misinformation that has ever been on the internet. Crafting a new false narrative can now be done at dramatic scale, and much more frequently — it’s like having A.I. agents contributing to disinformation.”

Disinformation is already difficult to recognize when it is created by humans. Researchers predict that generative technology could produce disinformation in far higher volumes, more easily, and in forms that are harder to detect.