by PETER DORFMAN
Online news consumption can be complicated, but it has become even more so with the introduction of “astroturf” campaigns—coordinated attempts that use armies of bots (automated software applications) to convince readers there is grassroots support for fraudulent narratives on social media.
To detect and counter astroturfing scams, informatics researchers from the Indiana University Observatory on Social Media have launched BotSlayer, a free software program designed to flag bots and root out fake news campaigns. It’s already in use at The New York Times, CNN, The Associated Press, and other news outlets.
Generating an astroturf campaign is easy, says Filippo Menczer, a professor in the IU School of Informatics, Computing, and Engineering, and Observatory director. “Any novice can buy thousands of fake Twitter accounts,” Menczer explains. “There are literally factories for fake accounts in low-wage countries. The individual posts something, and it’s automatically reposted by all these fake accounts. Or fake accounts retweet someone to suggest that person is more popular than he is, or to promote a fake narrative to distract attention from an important story.”
Bot scammers push spam or malware through malicious links, manipulate the stock market, attack the reputations of commercial brands, or try to influence political opinions. Social media platforms try to fight these activities, but definitively identifying abuse is difficult, particularly with tens of millions of accounts to manage or, in Facebook’s case, billions of accounts.
It’s especially difficult for a lay person to recognize fake accounts. Bots are designed to discover people who share a sympathetic viewpoint and attract them as followers, creating an aura of credibility.
But bots have certain earmarks. “If the account never posts but only retweets, that’s suspicious,” Menczer says. “Very long usernames or names with lots of digits are likely bots. Very new accounts that follow many more accounts than they have following them are suspicious. Our software looks for those features that make an account look highly automated.”
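The earmarks Menczer lists can be combined into a simple heuristic score. Here is a minimal sketch in Python; the weights, thresholds, and field names are illustrative assumptions for this article, not the Observatory's actual model:

```python
from dataclasses import dataclass

@dataclass
class Account:
    username: str
    age_days: int     # how old the account is
    followers: int    # accounts following this one
    following: int    # accounts this one follows
    posts: int        # original posts
    retweets: int     # reposts of others' content

def heuristic_bot_score(acct: Account) -> float:
    """Return a 0.0-1.0 score from the earmarks described above.
    Weights and thresholds are illustrative guesses, not a real model."""
    score = 0.0
    activity = acct.posts + acct.retweets
    # Earmark 1: the account never posts, only retweets
    if activity > 0 and acct.posts == 0:
        score += 0.3
    # Earmark 2: very long username, or a name with lots of digits
    digits = sum(ch.isdigit() for ch in acct.username)
    if len(acct.username) > 15 or digits >= 4:
        score += 0.3
    # Earmark 3: very new account following far more than follow it back
    if acct.age_days < 30 and acct.following > 5 * max(acct.followers, 1):
        score += 0.4
    return min(score, 1.0)
```

An account five days old with a digit-heavy name that only retweets and follows 900 accounts while having 12 followers would score 1.0 under this toy rubric, while a long-established account with original posts scores 0.0.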
BotSlayer looks at usernames, plus memes, phrases, hashtags, and links that are trending, then analyzes the accounts that are promoting them, Menczer explains. Using artificial intelligence, it is designed to recognize automated campaigns that push a suspicious narrative, and it assigns high “bot scores” to accounts, links, and hashtags entangled in such campaigns.
Once BotSlayer has identified a suspicious topic, Menczer says, the user can, with a single click, call up a second tool, called Hoaxy, to instantly generate a network map that illustrates how that topic is spreading over time. Hoaxy assigns a bot score to each account in the network, providing an easy way to see the most influential accounts (real or fake) in the conversation. The software also scores how quickly the story is spreading.
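The kind of diffusion map Hoaxy draws can be approximated from raw retweet data: treat each retweet as a directed edge and rank accounts by how often others amplify them. A toy sketch, with invented sample data (this is a simplification for illustration, not Hoaxy's implementation):

```python
from collections import Counter

# Each record: (retweeter, original_author) — invented sample data
retweets = [
    ("bot_a", "seed"), ("bot_b", "seed"), ("bot_c", "seed"),
    ("alice", "seed"), ("bot_a", "alice"),
]

def influence_ranking(edges):
    """Count how many times each account was retweeted; the most
    amplified accounts are the most influential in the conversation."""
    amplified = Counter(author for _, author in edges)
    return amplified.most_common()

print(influence_ranking(retweets))  # → [('seed', 4), ('alice', 1)]
```

In a real campaign, a "seed" account amplified almost entirely by high-bot-score retweeters is the pattern such a map makes visible at a glance.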
BotSlayer is designed for newsrooms, political campaigns, and businesses. If you are personally concerned about the integrity of the news you’re seeing on Twitter, you can use the researchers’ online software tools, Botometer (which determines the likelihood a Twitter account is a bot) and Hoaxy (which helps track how stories are spreading online) free of charge [see below for URLs].
But Menczer counsels against relying on such measures to calm your nerves about fake news. “If you’re getting upset about what you’re seeing on Twitter, you probably should just turn it off,” he suggests.