OpenAI said on Friday that it had uncovered evidence of a Chinese security operation using artificial intelligence to power a surveillance tool. The system was designed to track and analyze anti-China sentiment in real time across social media platforms in Western countries, marking a notable escalation in state-directed digital surveillance.
The company also banned five ChatGPT accounts that it said generated a “small number of tweets and articles that were then posted on third party assets publicly linked to known Iranian influence operations.” One of these operations is known as the International Union of Virtual Media, or IUVM; Microsoft tracks the other as STORM-2035. OpenAI said it had disrupted and reported earlier activity by both operations last year.
“Threat actors sometimes give us a glimpse of what they are doing in other parts of the internet because of the way they use our A.I. models,” said Ben Nimmo, a principal investigator for OpenAI.
OpenAI also banned accounts potentially tied to publicly reported North Korean threat actors. Some of these accounts exhibited tactics, techniques, and procedures (TTPs) consistent with the cyber-espionage group VELVET CHOLLIMA (also known as Kimsuky or Emerald Sleet). Others were potentially connected to an entity that a credible source has assessed as linked to STARDUST CHOLLIMA (also known as APT38 or Sapphire Sleet), a group known for cyber heists and intelligence gathering. OpenAI said it uncovered the accounts after receiving a tip from a trusted industry partner.
“OpenAI’s policies strictly prohibit use of output from our tools for fraud or scams,” said the company. “We are dedicated to collaborating with industry peers and authorities to understand how AI is influencing adversarial behaviors and to actively disrupt scam activities abusing our services. In line with this commitment, we have shared information about the scam networks we disrupted with industry peers and the relevant authorities to enhance our shared safety.”
