Conservative Podcaster and Filmmaker Robby Starbuck Sues Google After AI Allegedly Fabricates Criminal Accusations, Including Sexual Assault and Murder, Creating “Fake News”

Dhillon Law Group is representing @robbystarbuck in his defamation lawsuit against Google, standing up for his right to truth, integrity, and accountability in the face of AI-generated falsehoods. [Photo via DLG/X.com]

Case Intel

A defamation lawsuit filed by conservative host and filmmaker Robby Starbuck accuses Google of allowing its AI systems to generate entirely fake news stories linking him to heinous crimes—including sexual assault and murder—none of which ever happened.

By Samuel Lopez | USA Herald


USA HERALD – When you first hear this story, it almost sounds like a bad episode of Black Mirror. An AI, built by one of the world’s largest tech companies, allegedly inventing a false criminal past for a real person—complete with fake victims, therapy notes, and police records—then citing fake news articles from Rolling Stone, Newsweek, and The Daily Beast to make it look real. But that’s exactly what Robby Starbuck says happened to him, and on October 22, 2025, he took Google to court to prove it.

Starbuck, represented by the Dhillon Law Group—a national firm with offices from California to New York—filed a defamation suit against Google claiming its AI systems (including Bard, Gemini, and Gemma) fabricated “worlds of lies” around his name. According to Starbuck, the false AI-generated stories accused him of everything from sexual assault and adolescent assault to fraud, drug crimes, and even being listed on Jeffrey Epstein’s flight logs. None of it, he says, is true.

“Google’s AI didn’t just lie—it built fake worlds to make its lies look real,” Starbuck wrote on X, describing how the system allegedly created entire ecosystems of fabricated details: “Fake victims. Fake therapy records. Fake court records. Fake police records. Fake relationships. Fake ‘news’ stories.”

Starbuck says he has repeatedly notified Google of these defamatory AI outputs since 2023, sending cease-and-desist letters and direct communications from his legal team.

“Even worse,” he said, “Google execs knew for two years that this was happening.” If true, that knowledge could elevate Google’s exposure—because under defamation law, notice and continued publication are a dangerous mix.

He also believes political bias was a factor, alleging Google’s AI told users he was “targeted because of [his] political views.”

According to his complaint, the system even went so far as to fabricate condemnations from President Donald Trump, Vice President J.D. Vance, and Elon Musk, making it appear as if prominent conservatives had denounced him over crimes Google’s AI invented.

The lawsuit strikes at the heart of a growing concern: AI hallucination—a term describing when AI models invent facts with confidence. Starbuck’s filing reframes this not as a harmless glitch, but as defamation-by-design.

“One of the most dystopian things I’ve ever seen,” he wrote, “is how dedicated their AI was to doubling down on the lies.” He claims that even after being corrected, Google’s AI continued generating fake reports, complete with fake hyperlinks to real media outlets—“laundering trust,” as he put it, by embedding lies behind legitimate logos and headlines.

If Starbuck’s claims hold up, this lawsuit could truly become a landmark case defining AI liability under defamation law. Because the core question—who’s responsible when AI lies?—hasn’t yet been tested at this scale.

The suit’s discovery phase could force Google to reveal proprietary data about how its AI systems generate, rank, and “learn” from false outputs. That could prove costly in more ways than one, as exposing internal AI mechanisms would be tantamount to handing over trade secrets.

For Google, that creates a strategic dilemma. Fighting the case risks revealing how its AI operates. Settling early might invite other plaintiffs to file similar claims. Either way, Starbuck’s lawsuit may have already succeeded in sparking a much larger public conversation about truth, trust, and accountability in the AI era.

For the rest of us, there’s a chilling takeaway: if the world’s most powerful search engine can make up crimes and attribute them to you with convincing fake documentation, then anyone’s reputation—yours, mine, or anyone’s—could be next.

What’s Next (Procedural Roadmap)
The case is expected to proceed to initial hearings later this winter. If Google files a motion to dismiss, the court’s ruling will likely hinge on whether AI-generated statements can constitute “publication” under defamation law.

Should the case survive that stage, discovery could force Google to disclose internal communications about AI bias, hallucination safeguards, and user complaints. That phase alone could pressure the company toward an early settlement.
