Emilio Ferrara’s research on social bots was compelling even before the current Congressional investigations into likely Russian hacks surrounding the 2016 US presidential election.
More than 100 media outlets, including The New York Times, CBS News, The Wall Street Journal and the UK’s Daily Mail, have covered Ferrara’s work in the past month. And in a March 2017 Senate intelligence hearing on cybersecurity vulnerabilities, experts flagged the steep rise in social media bots. Ferrara had first reported those suspicious activities, in the journal First Monday, one day before last year’s election.
Bots are scripts that impersonate real people while operating automatically, often with a specific agenda – in this case, tearing down or building up Hillary Clinton, Donald Trump or their respective political platforms. According to Ferrara, bots are now so pervasive that they may distort public opinion far more than is commonly believed, and reflect planted, predetermined sentiments toward their targets.
Where Twitter estimated in 2015 that five percent of its accounts were bots, Ferrara concluded that 10 percent is a conservative estimate – and 15 percent is more realistic. What’s more, he found that bots, which operate around the clock, generate a far larger percentage of Twitter activity than their human counterparts. People also turn out to be mediocre at distinguishing human- and bot-generated tweets.
In March, Ferrara gave ISIers a deeper look into the methodology and analytics behind his team’s results. “Bots and Human Behavior in Techno-social Systems” was the latest in ISI’s “What’s Going On” series, which expands researchers’ knowledge of work taking place Institute-wide. The lively talk was attended by about 30 colleagues in ISI’s Marina del Rey, California and Arlington, Virginia locales.
Ferrara launched his presentation with a story about famed physicist Richard Feynman, who once asked an audience which science is the most difficult. His colleagues assumed the question meant math versus physics. No, said Feynman, since both disciplines adhere to rigorous laws. He found human systems far less predictable. Hence the complexity of Ferrara’s field, computational social science, which operates at the intersection of cognitive systems and human behavior by using technology to understand society.
As for why bots matter, Ferrara cited Cynk Technology, a company with no assets or revenue. In 2014, someone nabbed $5 billion using social media bots that temporarily amped Cynk’s share price. Syrian hackers claim to have falsified an Associated Press story that briefly erased $136 billion in stock market value. ISIS and its counterparts deploy bots to identify and recruit likely prospective jihadis.
Ferrara has tracked bot behavior for six years, during which fake tweeting has become ubiquitous and far more sophisticated. Artificial intelligence-driven bots now may deploy natural language strategies that make detection much more difficult. And while simple bots are easily detected by both humans and computers, more complex bots are hard for either people or machines to catch.
To help discern bots from real sources, Ferrara’s team created the first free detection tool. BotOrNot checks Twitter users and rates the likelihood that each account is governed by a script, not a person.
Among the identifiers: Bots retweet more than humans, but are retweeted less. Bot accounts tend to have longer user names and be more recent, since Twitter attempts to purge those it discovers. Unlike humans, who have wide ranges of interests, bots tend to be single-minded. The vast majority of never-Hillary tweets turn out to have been generated by bots. Sentiment is another tell: humans produce both positive and negative tweets, but the bulk of bot-generated tweets in support of a presidential candidate were positive, particularly those allied with Republican causes, which generated almost no negative tweets.
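The identifiers above lend themselves to a simple feature-based scorer. The sketch below is purely illustrative and is not Ferrara’s actual model; the feature names, thresholds and weights are all hypothetical assumptions, chosen only to show how cues like retweet ratio, account age, user-name length and topical focus could combine into a bot-likelihood score.

```python
from dataclasses import dataclass

@dataclass
class AccountStats:
    """Summary features for one account (hypothetical inputs)."""
    retweets_made: int       # tweets by this account that are retweets
    retweets_received: int   # times this account's tweets were retweeted
    username_length: int     # characters in the handle
    account_age_days: int    # days since account creation
    distinct_topics: int     # rough count of distinct subjects tweeted about

def bot_score(a: AccountStats) -> float:
    """Heuristic 0-1 bot likelihood from the cues in the article: bots
    retweet more than they are retweeted, have longer user names and
    newer accounts, and are single-minded. Weights are arbitrary."""
    score = 0.0
    total = a.retweets_made + a.retweets_received
    if total and a.retweets_made / total > 0.8:   # retweets far more than retweeted
        score += 0.35
    if a.username_length > 12:                    # unusually long handle
        score += 0.20
    if a.account_age_days < 90:                   # recently created account
        score += 0.25
    if a.distinct_topics <= 2:                    # narrow, single-minded content
        score += 0.20
    return min(score, 1.0)
```

A real classifier such as BotOrNot draws on far richer signals (timing, network structure, language), but the thresholding idea is the same: many weak cues accumulate into a confidence score.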
To determine bots’ election role, the team analyzed more than 20 million messages tweeted between mid-September and mid-October last year. While bots spiked and fell roughly in line with human comments, people retweeted the more sophisticated bots and human messages equally. That suggests most Twitter users are incapable of distinguishing advanced bots from real users. Surprisingly, bots also can garner followers, and may retweet people to enhance their own credibility.
Ferrara was clear, though, that suggesting bots influence voting behavior requires a leap of faith. He’s now working with a group of political scientists to find out. If the answer is “yes”, Ferrara may need to create an army of bots just to answer press inquiries from around the globe.
Published on April 27th, 2017
Last updated on June 3rd, 2021