OpenAI banned multiple accounts linked to Chinese government entities on Tuesday after discovering they used ChatGPT to draft surveillance proposals targeting ethnic minorities and political dissidents across social media platforms from X to TikTok.
The banned users asked the artificial intelligence (AI) chatbot to help design a “High-Risk Uyghur-Related Inflow Warning Model” to track the movements and police records of Uyghur Muslims, according to OpenAI’s quarterly threat report. Another account sought help creating promotional materials for tools that scan social media for what they called “extremist speech.”
“Cases like these are limited snapshots, but they do give us important insights into how authoritarian regimes might abuse future AI capabilities,” Ben Nimmo, OpenAI’s principal investigator, told reporters during a Tuesday briefing.
The revelations come as China faces international condemnation for its repression of Uyghurs in the Xinjiang region. The U.S. State Department has accused Beijing of genocide against the Muslim minority group, charges China denies.
What makes these attempts particularly concerning is how carefully users crafted their requests to avoid triggering ChatGPT’s safety filters, framing surveillance tool designs as academic inquiries or requests for technical documentation.
The accounts never asked ChatGPT to conduct surveillance directly. Instead, they sought help writing proposals and creating project plans that could later be implemented using other technologies.
“It’s not last year that the Chinese Communist Party started surveilling its own population. But now they’ve heard of AI and they’re thinking, oh maybe we can use this to get a little bit better,” Nimmo told CNN.
Michael Flossman, who leads OpenAI’s threat intelligence team, described a troubling emerging pattern. “We’re seeing adversaries routinely use multiple AI tools, hopping between models for small gains in speed or automation,” he said.
One China-linked cluster first used ChatGPT to draft phishing materials, then explored DeepSeek, a Chinese AI model, to automate mass targeting campaigns.
OpenAI has disrupted more than 40 networks violating its usage policies since February 2024. The company said it cannot independently verify whether Chinese authorities ultimately deployed any surveillance tools conceived through ChatGPT.
Outside China, the report documented Russian-speaking criminal groups using ChatGPT to refine malware code and North Korean hackers testing phishing techniques. Networks in Cambodia, Myanmar, and Nigeria employed the chatbot to craft investment scams.
Liu Pengyu, spokesperson for the Chinese Embassy in Washington, rejected OpenAI’s findings as “groundless attacks and slanders against China.” He said China is building an AI governance system that balances development with security.
OpenAI estimates that ChatGPT users identify scams three times as often as they create them, suggesting that defensive applications currently outpace malicious uses.