OpenAI has disrupted what it describes as a sprawling, industrialized Chinese transnational repression campaign after a Chinese law enforcement official inadvertently revealed the operation by using ChatGPT to log details of covert intimidation tactics against overseas dissidents.
In a new threat intelligence report released Wednesday, OpenAI’s investigators detail how the user — linked to Chinese authorities — treated ChatGPT as a personal journal to document and plan suppression activities. The network allegedly involved hundreds of operators and thousands of fake accounts across social media platforms, employing AI-generated content, forged documents, impersonation, and disinformation to silence critics of the Chinese Communist Party (CCP).
Key tactics exposed include:
- Impersonating U.S. immigration officials to contact a U.S.-based Chinese dissident, falsely warning that their public statements violated American law and implying deportation risks.
- Forging documents purporting to come from a U.S. county court to demand the takedown of a dissident’s social media account.
- Fabricating the death of a prominent Chinese dissident by generating a phony obituary and gravestone photos and posting them online; false rumors of the individual's death did circulate in Chinese-language media in 2023, matching the operative's descriptions.
- Attempting to smear incoming Japanese Prime Minister Sanae Takaichi by fanning online outrage over U.S. tariffs on Japanese goods (ChatGPT refused the prompt, but similar hashtag campaigns emerged on Japanese forums in late October as Takaichi assumed office).
OpenAI’s principal investigator Ben Nimmo described the effort as “industrialized” transnational repression: “It’s not just digital. It’s not just about trolling. It’s about trying to hit critics of the CCP with everything, everywhere, all at once.”
Investigators correlated the ChatGPT user's logs with real-world online activity and its impacts, leading to the account's ban. While much of the content was generated by other AI tools and disseminated via fake accounts and websites, ChatGPT served as the operative's operational diary, inadvertently giving OpenAI a window into the campaign.
The revelation underscores growing concerns over authoritarian regimes weaponizing consumer AI tools for censorship and influence operations. It arrives amid intensifying U.S.-China competition in artificial intelligence, where control over the technology shapes both military and economic dominance.
As context, the Pentagon is currently in a standoff with AI firm Anthropic: Defense Secretary Pete Hegseth issued a Friday deadline for CEO Dario Amodei to relax safeguards on its models or risk losing a major contract. Former Pentagon emerging-tech official Michael Horowitz told CNN the OpenAI report “clearly demonstrates the way that China is actively employing AI tools to enhance information operations,” noting the rivalry extends to “day-to-day” surveillance and disinformation apparatus.
OpenAI has requested comment from the Chinese Embassy in Washington, D.C., but has not received a response.
This case highlights the dual-use risks of generative AI: tools designed for productivity can be repurposed, whether inadvertently or deliberately, as instruments of state repression when safeguards are bypassed or absent. It also demonstrates how AI companies' internal threat monitoring can expose hidden operations that would otherwise remain undetected.