OpenAI recently announced that it has dismantled five covert influence operations originating from Russia, China, Iran, and Israel that sought to manipulate public opinion using the company’s AI products. The maker of ChatGPT has expressed concern about the role of AI in global elections and about how these influence networks used AI to deceive people more efficiently.
Ben Nimmo, principal investigator on OpenAI’s Intelligence and Investigations team, emphasized the need to address the potential consequences of influence operations that use generative AI. “With this report, we really want to start filling in some of the blanks,” Nimmo said. OpenAI defined its targets as covert “influence operations”: attempts to manipulate public opinion or influence political outcomes without revealing the actors’ true identities or intentions.
Unlike disinformation networks, these influence operations often promote factually accurate information, but present it in a deceptive manner. OpenAI noted that the use of generative AI in propaganda networks is a relatively new development, and that the operations paired AI tools with traditional formats such as manually written texts and memes posted on major social media sites.
The report identified five covert networks, including the pro-Russian “Doppelganger,” the pro-Chinese network “Spamouflage,” and an Iranian operation known as the International Union of Virtual Media (IUVM). OpenAI also flagged previously unknown networks originating from Russia and Israel. For example, the new Russian group, labeled “Bad Grammar” by OpenAI, used AI models and the messaging app Telegram to set up a content-spamming pipeline.
Despite these efforts, most of the networks’ messaging failed to gain widespread traction, and human users were often able to identify the posted content as AI-generated. Still, Nimmo stressed the need for vigilance: “History shows that influence operations which spent years failing to get anywhere can suddenly break out if nobody’s looking for them.”
OpenAI also acknowledged that other groups may be using its AI tools without the company’s knowledge, highlighting the ongoing challenge of combating the misuse of AI for covert influence operations. The company is proactively sharing threat indicators with industry peers and intends to release further reports to support this detection work.
In conclusion, OpenAI’s efforts to identify and counter covert influence networks underscore the importance of continued vigilance in the face of evolving tactics to manipulate public opinion. As AI technology continues to advance, it is crucial for companies to remain proactive in addressing the potential misuse of AI for deceptive purposes.