Researchers uncover an information operation threatening the 2024 U.S. Presidential Election

November 1, 2024

A new study from USC ISI’s Election Integrity Initiative finds evidence of a multi-platform network spreading biased political content

[Image: top of the White House. Credit: Douglas Rissing/iStock]

In the lead-up to the 2024 U.S. presidential election, coordinated efforts to manipulate online political discourse are already underway, according to a new study from USC Viterbi’s Information Sciences Institute (ISI).

Researchers from the USC HUMANS Lab uncovered a network of 34 coordinated accounts exhibiting politically biased behavior across multiple platforms and websites. The accounts amplified conservative narratives and promoted Donald Trump’s presidential campaign.

“We found evidence of a cross-platform information operation,” said Luca Luceri, a research assistant professor at ISI, who worked on the study. “The activity spans not only Twitter/X but also YouTube and various mock websites.”

In today’s digital age, information operations pose a serious threat to democratic processes. Coordinated by ideologically driven individuals or state-backed agents alike, these networks spread fake news and harmful disinformation to manipulate public opinion and influence voters. 

“The main risk is that coordinated activity might create an illusion of public consensus or widespread agreement about certain narratives,” said Luceri. “People who might be susceptible to this activity could be influenced not only in their online behavior, but also in their offline behavior, potentially affecting their voting decisions.”

To detect the coordinated network, researchers employed a machine learning framework to analyze 17,826,799 tweets shared from May to July 2024. They examined patterns in link-sharing behavior on X, creating a “similarity network” that visually maps connections between users based on shared content. This analysis revealed distinct clusters of coordinated accounts.
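To give a sense of how this kind of analysis works, here is a minimal sketch of building a link-sharing similarity network. The toy data, TF-IDF weighting, similarity threshold, and library choices are illustrative assumptions, not the study’s actual pipeline: each user is represented by the URLs they shared, pairwise similarity is computed, and highly similar users are linked, so that densely connected components surface as candidate coordinated clusters.

```python
# Minimal sketch of a link-sharing similarity network (illustrative only;
# the data, weighting, and threshold below are assumptions, not the
# study's actual pipeline).
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy input: (user, shared_url) pairs extracted from tweets.
shares = [
    ("user_a", "site1.example"), ("user_a", "site2.example"),
    ("user_b", "site1.example"), ("user_b", "site2.example"),
    ("user_c", "other.example"),
]

# Treat each user's shared URLs as one "document".
users = sorted({u for u, _ in shares})
docs = [" ".join(url for u, url in shares if u == user) for user in users]

# TF-IDF down-weights URLs that many users share; cosine similarity then
# scores how alike two users' link-sharing patterns are.
vectors = TfidfVectorizer(token_pattern=r"\S+").fit_transform(docs)
sim = cosine_similarity(vectors)

# Connect user pairs whose similarity exceeds a (conservative) threshold.
THRESHOLD = 0.9  # assumed value; a real study would tune this carefully
G = nx.Graph()
G.add_nodes_from(users)
for i in range(len(users)):
    for j in range(i + 1, len(users)):
        if sim[i, j] >= THRESHOLD:
            G.add_edge(users[i], users[j], weight=float(sim[i, j]))

# Connected components with more than one account are candidate
# coordinated clusters, to be vetted by manual review.
clusters = [c for c in nx.connected_components(G) if len(c) > 1]
print(clusters)  # -> [{'user_a', 'user_b'}]
```

A high similarity threshold like this trades recall for precision, which mirrors the researchers’ deliberately conservative labeling described later in the article.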

The researchers then manually reviewed the identified accounts’ content. This closer look revealed key signs of coordination, such as nearly identical profiles, shared posts and images, and links to mock right-wing websites.

Notably, 15 accounts used the exact same phrase in their bios: “Are you tired of fake news? Click on the link below.” The links directed users to the same YouTube channel, a right-wing political opinion outlet with more than 30,000 views.

Over time, the coordinated network garnered substantial traction, according to an additional analysis performed by the team. “The suspicious websites promoted by this coordinated network are shared thousands of times per day by the Twitter/X user base, further amplifying their reach and potential impact,” Luceri said.

Additionally concerning was the network’s use of generative AI to create biased images portraying political figures in contrasting lights: Donald Trump as powerful, Joe Biden as weak. (Study data was collected beginning in May 2024, when Biden was still the Democratic candidate.) Luceri noted that content created with generative AI, whether text or image, can amplify the impact of coordinated campaigns. “AI can be used to quickly scale up content production, or create personalized propaganda that can sway specific groups of people,” he said.

While Twitter/X has suspended eight of the 34 accounts identified in the study, the majority remain active. The researchers also emphasized that they used a conservative methodology, aiming to minimize the number of users who might be mislabeled as being part of the network. This likely means that their findings represent just a small sample of the overall coordinated activity occurring online.

Going forward, the ISI team plans to expand their analysis to include a longer timeframe and additional platforms, such as Telegram and TikTok. As part of the lab’s 2024 Election Integrity Initiative, this research aims to track evolving tactics used to manipulate election-related conversations online, with the goal of helping tech companies, regulators, and users alike become more aware of coordinated efforts to spread misinformation.

“As election day approaches, we expect to see increased levels and more varied forms of interference in election-related online chatter,” said Emilio Ferrara, a USC professor and ISI principal scientist, who leads the HUMANS Lab. “We are keeping our eyes peeled to understand what types of influence operations are happening, including which ones are most effective, what tools threat actors are using to manipulate public perception, and what countermeasures might work.” 


Last updated on November 4th, 2024
