OpenAI said Friday it deactivated a cluster of ChatGPT accounts this week that were being used to craft fake news articles and social media comments as part of an Iranian disinformation campaign.

Why it matters: Nation-state adversaries have already shown a vested interest in disrupting the 2024 U.S. elections — and experts fear AI tools like ChatGPT could speed up their ability to craft disinformation.

This is the first operation OpenAI has spotted and removed that focuses on the U.S. elections.

State of play: OpenAI identified, removed and banned an unspecified number of ChatGPT accounts this week that were being used to create content about the U.S. presidential election and other topics.

OpenAI linked the activity to a group known as Storm-2035, which is known for creating fake news websites and sharing them on social media to influence elections.
Operators used ChatGPT both to create long-form fake news stories and to write comments for social media posts. Topics included the Israel-Hamas war, Israel’s presence at the Olympic Games and the U.S. presidential election.

Driving the news: Microsoft shared details about this same Iranian disinformation group last week, including some of its fake news sites, in the same report that first disclosed recent spear-phishing attacks targeting the U.S. presidential campaigns.

OpenAI has since found a new set of social media accounts the group was using to spread this disinformation.

Zoom in: OpenAI identified a dozen accounts on X, formerly Twitter, and one Instagram account as part of its investigation.

A Meta spokesperson told Axios the company has deactivated the Instagram account, which it said is linked to a 2021 Iranian campaign that targeted users in Scotland.
X did not immediately respond to a request for comment, but OpenAI says all of these social media accounts appear to no longer be active.
The actors also created five websites that posed as both progressive and conservative news outlets sharing information about the elections.

In one example, operators used ChatGPT to create a headline that read, “Why Kamala Harris Picked Tim Walz as Her Running Mate: A Calculated Choice for Unity.”

Reality check: Most of the social media accounts sharing this AI-generated content didn’t get much engagement, OpenAI found.

“We all need to stay alert, but stay calm,” Ben Nimmo, principal investigator on OpenAI’s intelligence and investigations team, told reporters.
“There’s a big difference between an influence operation posting online and actually becoming influential by reaching an audience.”

Between the lines: Nimmo said OpenAI used its own tools, including new ones developed since its last threat report in May, to detect these accounts after the Microsoft news last week.

What we’re watching: There’s still a long way to go until the November election, and whether foreign influence operations will gain more steam online remains an open question.

Source: Axios