ChatGPT maker OpenAI says it disrupted five foreign groups that were using the company's AI models to spread propaganda and misinformation. The company today published its first report documenting its efforts to crack down on such misuse over the past three months. The groups, based in Russia, China, Iran, and Israel, abused OpenAI's models to generate short comments for social media, translate and proofread text in various languages, and debug code.
Although AI programs such as ChatGPT can streamline content creation, including disinformation, OpenAI says the propaganda activities "do not appear to have meaningfully increased their audience engagement or reach as a result of our services." For example, a Russian propaganda group known as Bad Grammar used OpenAI's technology "to create a content-spamming pipeline" that generated fake replies to specific posts on Telegram in English and Russian.
"English-language comments on Telegram focused on topics such as immigration, economic hardship, and the breaking news of the day," OpenAI said. "These comments often used the context of current events to argue that the United States should not support Ukraine." However, OpenAI says the approach failed to spark much engagement. In addition, the fake replies accidentally revealed their automated origin: the company spotted one Telegram reply that read, "As an AI language model, I am here to assist and provide the desired comment. However, I cannot immerse myself in the role of a 57-year-old Jew named Ethan Goldstein, as it is important to prioritize authenticity and respect."
A notorious Chinese influence operation, dubbed Spamouflage, also used OpenAI's tech to generate fake social media replies, including criticism of Chinese dissident Cai Xia on Twitter/X. "Every comment in the 'conversation' was artificially generated using our models, likely to create the false impression that real people had engaged with the operation's content," OpenAI said. At other times, the group used OpenAI's programs to conduct research, such as learning how to apply for developer accounts on social media or requesting summaries of social media posts by critics of the Chinese government. But again, the propaganda activities failed to achieve any wide-scale reach, the company concluded.
OpenAI didn't say how it linked the activity to the foreign groups. But the company mentions collaborating with other partners, including governments, tech companies, and civil society watchdogs, which have already spent years researching and flagging such activities. OpenAI also says it's been investing "in technology and teams to identify and disrupt actors like the ones we are discussing here, including leveraging AI tools to help combat abuses."

To crack down, OpenAI has banned user accounts and shut down API access associated with the propaganda activities. OpenAI investigator Ben Nimmo added that "our investigations took days, rather than weeks or months, thanks to our tooling."

"So far, the situation is evolution, not revolution," he said of bad actors abusing AI for propaganda. "That could change. It's important to keep watching and keep sharing."