OpenAI has confirmed that it has banned multiple ChatGPT accounts believed to be linked to government or state-affiliated actors in China after those accounts requested help designing surveillance and social media monitoring tools.
According to OpenAI’s October 2025 threat intelligence report, several of the banned users sought project plans, promotional materials, or architectural proposals for “social media listening” tools, sometimes described as probes that would scan platforms such as X, Facebook, Instagram, Reddit, TikTok, and YouTube for political, ethnic, or religious content.
One example involved a request to design a “High-Risk Uyghur-Related Inflow Warning Model,” which would match travel booking data against police records to detect “high-risk” individuals. Though OpenAI says it found no evidence that these tools were implemented, it banned the associated accounts for violating its national security and misuse policies.
Other banned accounts reportedly asked for help identifying the funding sources of a social media account critical of the Chinese government, or locating the organizers of a petition in Mongolia, tasks that edge into profiling and intelligence gathering. OpenAI states that its responses in these cases were limited to publicly available information.
The removal of China-linked accounts is part of a broader set of actions OpenAI has taken against misuse of its models. In the same report, the company also disclosed bans on accounts linked to Russian-speaking criminal groups that used ChatGPT to generate malware, phishing code, and other components of their campaigns.
OpenAI asserts that while some threat actors attempted to use its models to assist with malicious development, it found no evidence that the models gave those actors entirely new offensive capabilities.
Since it began publishing threat reports in February 2024, OpenAI says it has disrupted more than 40 networks or clusters violating its usage policies.
Context and implications
OpenAI’s actions come amid growing international scrutiny of how generative AI can be misused by authoritarian regimes or malicious actors.
The U.S.–China competition over AI development and regulation adds further weight to decisions around access, oversight, and enforcement of policy boundaries.
As of now, the Chinese government has not publicly responded to OpenAI’s disclosures.