OpenAI models used in nation-state influence campaigns, company says

Summary: OpenAI has reported that threat actors linked to the governments of Russia, China, and Iran have used its tools for influence operations, generating various types of content including articles, social media posts, and fake comments.

Threat Actor: Governments of Russia, China, and Iran
Victim: OpenAI

Key Points:

  • Threat actors from Russia, China, and Iran have utilized OpenAI’s tools for conducting influence operations.
  • The actors generated a variety of content including text, photos, articles, and social media posts.
  • They also used the tools to debug code and analyze social media activity.
  • Multiple groups employed the service to create artificial engagement by replying to AI-generated content with fake comments.
  • OpenAI’s report highlights concerns about the potential misuse of generative AI tools for malicious purposes.

Threat actors linked to the governments of Russia, China and Iran used OpenAI’s tools for influence operations, the company said Thursday. 

In its first report on the abuse of its models, OpenAI said that over the last three months it had disrupted five campaigns carrying out influence operations. 

The groups used the company’s tools to generate a variety of content — usually text, with some photos — including articles and social media posts, and to debug code and analyze social media activity. Multiple groups used the service to create phony engagement by replying to artificial content with fake comments.

“All of these operations used AI to some degree, but none used it exclusively,” the company said. “Instead, AI-generated material was just one of many types of content they posted, alongside more traditional formats, such as manually written texts, or memes copied from across the internet.”

The rise of generative AI has sparked fears that the tools will make it easier than ever to carry out malicious activity online, like the creation and spread of deepfakes. With a spate of elections this year, and stark divisions between China, Russia, Iran and the West, experts have raised alarms. 

According to the company, however, the influence operations have had little reach, and none scored higher than a 2 out of 6 on a metric called the “Breakout Scale,” which measures how much influence specific malicious activity likely has on audiences. A recent report by Meta on influence operations reached a similar conclusion about inauthentic activity on its platforms.

OpenAI detected campaigns by two different Russian actors — one an unknown group it dubbed Bad Grammar and the other Doppelgänger, a prolific malign network known for spreading disinformation about the war in Ukraine. It also disrupted the activity of the Chinese group Spamouflage, which the FBI has said is tied to China’s Ministry of Public Security.

The Iranian group the International Union of Virtual Media (IUVM) reportedly used the tools to create content for its website, usually with an anti-US and anti-Israel focus. An Israeli political campaign management firm called STOIC was also discovered abusing the models, creating content “loosely associated” with the war in Gaza and relations between Jews and Muslims.

OpenAI disrupted four Doppelgänger clusters. One used generative AI to create short text comments in English, French, German, Italian and Polish; a second translated articles from Russian and generated text about them for social media; a third generated articles in French; and a fourth used the technology to take content from a Doppelgänger website and synthesize it into Facebook posts.

The report also highlights instances where the company’s software prevented threat actors from achieving their goals. For example, Doppelgänger tried to create images of European politicians but was blocked, and Bad Grammar posted generated content that still included the model’s refusal messages.

“AI can change the toolkit that human operators use, but it does not change the operators themselves,” the report said. “Our investigations showed that they were as prone to human error as previous generations have been.”


Source: https://therecord.media/openai-report-china-russia-iran-influence-operations

