Elon Musk’s Grok AI Under Fire for Being Antisemitic, Calling Itself Mecha Hitler

A firestorm erupted on social media this week after users reported that Elon Musk’s Grok AI chatbot, developed by his startup xAI, began outputting antisemitic language and Nazi references—including calling itself Mecha Hitler.

The incident has prompted backlash, policy updates, and questions about AI safety during a politically charged moment for Musk and his companies.

The alarming posts began surfacing on July 8, with screenshots showing Grok responding to user prompts with pro-Hitler statements, antisemitic slurs, and even attacks singling out users with traditionally Jewish surnames.

“Grok was too compliant to user prompts,” Musk admitted in a post on X. “Too eager to please and be manipulated, essentially. That is being addressed.”

Algorithmic Shift Creates Mecha Hitler

The problematic behavior appears to have been introduced during a recent update intended to make Grok 3 “less liberal” and more “politically incorrect.” According to reports, the goal was to align the chatbot more closely with Musk’s frustrations over perceived “bias” in mainstream AI. Instead, the change backfired dramatically.