Learning From Lies to Spread the Truth

July 27, 2021

USC Viterbi researchers use new language analysis tools to combat misinformation

Artist's rendition of dice

The researchers combined social media posts that used similar language to create networks of misinformation they could track as they evolved (PHOTO CREDIT: USC Viterbi)

In the almost two decades that social media has been around, society’s relationship with the medium has certainly changed. What we once naively saw as a useful new technology meant to bring people together, we now approach with the level of trust one might give a philandering partner. But while most people agree that misinformation on social media is a real problem, we are only now beginning to understand exactly how misinformation behaves as an almost living organism on these platforms.

Now, a recent paper published by USC Viterbi researchers in Scientific Reports is helping us see how networks of misinformation organically arise on social media, how they evolve over time, and how we can stop them from spreading. Even more exciting, they may have found a way to use the same mechanisms currently used to spread lies to start spreading the truth.

The study, led by two PhD students, Mingxi Cheng and Chenzhong Yin, looked at how people discussed the COVID-19 pandemic online and analyzed the language they used in their posts. Specifically, the researchers identified a piece of misinformation in, say, a tweet, then combined that individual post with others that used similar language. They connected these posts into networks, which they could observe growing and evolving like living organisms.
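The paper’s full pipeline is more involved than a news story can capture, but a minimal sketch of the general idea, assuming TF-IDF text similarity, an illustrative cutoff, and the networkx library (all assumptions for the sake of the example, not the authors’ exact method), might look like this:

```python
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical posts already flagged as misinformation.
posts = [
    "miracle cure stops covid overnight",
    "this miracle cure ends covid in one day",
    "unrelated rumor about something else",
]

# Vectorize the text and measure pairwise language similarity.
vectors = TfidfVectorizer().fit_transform(posts)
similarity = cosine_similarity(vectors)

# Link posts whose language is similar enough; 0.3 is an illustrative cutoff.
graph = nx.Graph()
graph.add_nodes_from(range(len(posts)))
for i in range(len(posts)):
    for j in range(i + 1, len(posts)):
        if similarity[i, j] > 0.3:
            graph.add_edge(i, j, weight=float(similarity[i, j]))

print(graph.edges(data=True))  # the two "miracle cure" posts end up connected
```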

You might think of words and phrases of misinformation as something like individual genes gaining dominance in an organism. “The networks grow very fast and as they do, the features of the networks change,” said Cheng. “As certain words and phrases draw more attention than others, they then become dominant nodes within the network.”
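In graph terms, that kind of dominance can be read as centrality. A toy illustration, where the terms, the edges, and the choice of degree centrality are all assumptions made for this example:

```python
import networkx as nx

# Hypothetical word/phrase network: edges link terms that co-occur in posts.
g = nx.Graph()
g.add_edges_from([
    ("miracle cure", "covid"),
    ("miracle cure", "5g"),
    ("miracle cure", "hoax"),
    ("5g", "hoax"),
])

# Degree centrality is one simple proxy for which terms draw the most
# attention: high-degree terms sit at the hubs of the network.
centrality = nx.degree_centrality(g)
ranked = sorted(centrality, key=centrality.get, reverse=True)
print(ranked[0])  # 'miracle cure', the dominant node in this toy graph
```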

Cheng, Yin, and their colleagues observed these networks over a 60-day period using new algorithms they wrote to help capture the data they were looking for. The pace at which misinformation grew and evolved was often shocking. “In one case, we built a network on day one from 300 tweets and other information we identified as being related and being false. By the second day, that network had grown to 600 problematic words and phrases,” said Yin. As more language gets added to the network, it becomes harder and harder to stop. In fact, the researchers saw some of their networks grow to forty-two times their original size over that two-month period. “This highlights the need for the development of new statistical physics and discovery of the laws of social network-transcendent misinformation that can, in turn, help us develop real-time countermeasures,” she said.
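The bookkeeping behind such figures can be as simple as comparing daily snapshots of the network. A toy version, in which the snapshot graphs are stand-ins and only the 2x and 42x ratios come from the article:

```python
import networkx as nx

# Toy stand-ins for daily snapshots of a growing misinformation network.
day_one = nx.path_graph(300)   # ~300 related posts/terms on day one
day_two = nx.path_graph(600)   # doubled by day two, as in the article

def size_ratio(earlier: nx.Graph, later: nx.Graph) -> float:
    """How many times larger the later snapshot is than the earlier one."""
    return later.number_of_nodes() / earlier.number_of_nodes()

print(size_ratio(day_one, day_two))  # 2.0; the article reports up to 42x over 60 days
```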

At first, the pace of network growth was a bit overwhelming. But as more and more data were collected, strategies for combating the misinformation began to emerge. Once the researchers understood, in such detail, how a network was evolving and which of its nodes were becoming influential, they could delete misinformation more strategically and with better results.
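One way to picture what “strategically” might mean in practice: take out the most influential nodes first. The sketch below is a stand-in built on assumptions (a synthetic hub-heavy graph, degree centrality as the influence measure, a removal budget of ten), not the authors’ actual procedure:

```python
import networkx as nx

def targeted_removal(graph: nx.Graph, budget: int) -> nx.Graph:
    """Repeatedly remove the current highest-degree node, up to `budget` removals."""
    g = graph.copy()
    for _ in range(budget):
        centrality = nx.degree_centrality(g)
        hub = max(centrality, key=centrality.get)
        g.remove_node(hub)
    return g

# A synthetic hub-heavy network standing in for a misinformation network.
g = nx.barabasi_albert_graph(200, 2, seed=42)

before = len(max(nx.connected_components(g), key=len))
after = len(max(nx.connected_components(targeted_removal(g, budget=10)), key=len))
print(before, after)  # removing a handful of hubs fragments the network fastest
```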

And while the researchers have already proven that this can be done, their planned next steps are even more promising. As they develop even better models, they hope to do more than simply delete online misinformation. “We hope to more actively combat misinformation by strategically inserting true information nodes in the right place at the right time and watching them grow – like injecting more white blood cells directly into an infection,” said Cheng. After all, if lies can take advantage of the natural infrastructure of social media to spread, why can’t truth do the same?
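That “white blood cell” idea can be sketched in the same toy terms: attach a true-information node next to the busiest hubs so it travels the same paths the lie did. Everything below, from the graph to the node name to the attachment rule, is a speculative illustration rather than the team’s implementation:

```python
import networkx as nx

# A synthetic network standing in for an established misinformation network.
g = nx.barabasi_albert_graph(100, 2, seed=7)

# Find the most influential hubs by degree centrality.
centrality = nx.degree_centrality(g)
top_hubs = sorted(centrality, key=centrality.get, reverse=True)[:5]

# Inject a hypothetical "true information" node and wire it to the hubs,
# placing the truth one hop from the network's busiest crossroads.
g.add_node("fact_check")
g.add_edges_from(("fact_check", hub) for hub in top_hubs)

print(g.degree("fact_check"))  # 5: connected to the five biggest hubs
```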
