ChatGPT Didn’t Lie with Malice: Georgia Judge Tosses Defamation Lawsuit Against OpenAI

That distinction proved critical to the court’s decision.

Under the U.S. Supreme Court’s landmark ruling in New York Times Co. v. Sullivan, public figures must prove that false statements were made with “actual malice” — that is, with knowledge of their falsity or with reckless disregard for the truth.

Judge Cason determined that Walters did not meet this burden. While acknowledging that the AI’s response was false, the court found no evidence that OpenAI intended to defame or even knew the statements were incorrect.

OpenAI’s legal team, led by Theodore Boutrous Jr. of Gibson, Dunn & Crutcher and Matthew Macdonald of Wilson Sonsini Goodrich & Rosati, argued that imposing liability for AI-generated hallucinations would create a chilling effect on innovation.

“This is a victory not only for OpenAI but for responsible AI deployment as a whole,” said one attorney familiar with the case.

The case, Walters v. OpenAI LLC, raised thorny questions about accountability for generative AI. Can a machine be held legally liable for what it says? And if not, how much responsibility rests with its developers?