Bots, trolls and fake news: Social media is a minefield for U.S. midterms
Social media manipulation threatens to influence midterm elections despite efforts to curb it
Erin Gallagher doesn't fancy herself much of a detective or a hunter, but from her home in Northeast Pennsylvania she spends hours tracking down and exposing nefarious activity on social media.
She's part of a cottage industry that's emerged in the United States since the 2016 election. From academic researchers to tech companies to individuals like Gallagher working in their spare time, all are trying to get a grip on the vast and nebulous network of websites, automated accounts and social media pages spreading misinformation online.
"I see my own family and friends struggling with what to believe," says Gallagher, a freelance graphic designer who posts her research on Medium.
Her main targets are bots — automated accounts that can spread misinformation, as well as amplify and distort regular conversations that take place online.
Using her love of visual arts, Gallagher creates illustrations of what happens online around a major topic, allowing her to expose activity that isn't "organic" or natural.
She first extracts thousands of tweets from Twitter using the platform's open-source software. She then organizes them using Gephi, a visualization and analysis program.
What emerges is a colourful web showing who is talking to whom online, and the popular hashtags, or topics, being discussed around a certain event.
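The "who is talking to whom" web Gallagher builds can be sketched in a few lines of Python. The sample mentions below are invented for illustration; a real dataset would come from Twitter's API, and a tool like Gephi would render the weighted edges as the coloured web described above.

```python
from collections import Counter

# Invented sample data: each record is (author, mentioned_account).
# A real dataset would be thousands of tweets pulled from Twitter's API.
mentions = [
    ("user_a", "user_b"),
    ("user_a", "user_c"),
    ("user_b", "user_c"),
    ("user_a", "user_b"),
]

# Count each directed "who talks to whom" edge. In a visualization tool,
# the edge weight controls how thick each line in the web appears.
edge_weights = Counter(mentions)

print(edge_weights[("user_a", "user_b")])  # this pair appears twice: 2
```

Accounts that mention the same targets far more often than everyone else stand out immediately once the edge weights are drawn.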
She created one such visualization to examine the online chatter around the death of 32-year-old Kate Steinle, who was fatally shot on a pier in San Francisco in 2015. An undocumented Mexican immigrant was charged in her death, leading to her name becoming a rallying cry for those advocating for stricter border and immigration controls. When the man was acquitted, the already-smouldering controversy exploded online across social media.
The storm on social media seemed genuine enough, but as the #KateSteinle hashtag trended, Gallagher became disturbed by the tone and tenor of the conversation.
"I noticed that there were tweets calling to kick in doors of illegal immigrants, undocumented people," she says.
As Gallagher mapped out the conversation online, she started to see patterns emerge.
Accounts that tweet at a high volume were easy to pick out, appearing as thick lines amidst the colourful spider web on her screen. While it wasn't necessarily a sign of something bad, it was out of the ordinary, warranting a closer look.
The accounts tweeted about building a wall and Make America Great Again, taking what was already a controversial topic and inflaming and exaggerating emotions.
"This is an account that frequently amplifies anti-immigration hashtags, anti-Muslim hashtags," she says, referring to a now-defunct account that was sending out more than 700 messages a day. "When you can see that there are accounts that are tweeting more than humanly possible, that is a good indication that the conversation is not entirely organic."
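The "more than humanly possible" heuristic amounts to a simple volume threshold. A minimal sketch, with invented account names and the 700-messages-a-day figure from the article used as a hypothetical cutoff:

```python
from collections import Counter

# Hypothetical threshold: the article cites an account sending more than
# 700 messages a day as beyond plausible human output.
HUMAN_DAILY_LIMIT = 700

# Invented sample data: tweets observed per account over one day.
daily_counts = Counter({"acct_1": 950, "acct_2": 42, "acct_3": 1200})

# Flag accounts whose volume exceeds what a human could realistically post.
suspected_bots = [a for a, n in daily_counts.items() if n > HUMAN_DAILY_LIMIT]

print(sorted(suspected_bots))  # ['acct_1', 'acct_3']
```

Volume alone isn't proof of automation, as Gallagher notes, but it is a cheap first filter that narrows thousands of accounts down to a handful worth inspecting by hand.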
Gallagher first became interested in bots and social media influence while translating Mexican news sources in 2014. After seeing what activists there had become accustomed to, she was surprised when Americans were slow to catch on to the fake news phenomenon during the 2016 election.
Out of the fog of the 2016 election, there's now a clearer picture of what took place online while Donald Trump and Hillary Clinton battled it out for the presidency. A group linked to Russian intelligence, the Internet Research Agency, created more than 400 pages, which were seen by more than 126 million Americans.
The pages spread fake news, attacked candidates, even organized rallies that would be attended by unsuspecting Americans. In one case in Houston in May 2016, the Russians organized an anti-Islam protest in front of an Islamic cultural centre while simultaneously organizing a counter-protest.
On Twitter, 50,000 Russian-linked bots tweeted about the election, exaggerating conversations, amplifying debate and generally sowing discord.
In Indiana, Josh Russell found himself believing a lot of the fake news and misinformation being shared by his friends in the gamer community during the 2016 election. But as election day approached, he began taking a closer look at what he was reading.
Like Gallagher, he now spends his free time hunting and exposing bots on social media.
"I've seen them talk about Charlottesville, Colin Kaepernick, all the usual issues. Anything that gets people to fight online are the type of things that they're going to get on," he says.
Russell has assembled a database of trolls and potential malicious accounts, often working with journalists to track accounts spreading misinformation. He says while it's now easier to spot and expose automated accounts, the social media landscape remains littered with them.
"They're still out there. There's not nearly as many as there was, but they're still floating around and they're doing the same thing they did before [the 2016 election]."
In the wake of the presidential election, social media platforms have been called on to account for what happened and how they'll stop the spread of disinformation.
Twitter CEO Jack Dorsey told the U.S. Senate Intelligence Committee in September that the company is removing 214 per cent more accounts for platform manipulation than last year. He says the company is also identifying and challenging as many as 10 million suspicious accounts a week.
This week the company released data from 3,841 accounts affiliated with the Russian-based Internet Research Agency, including more than 10 million tweets and 2 million images, GIFs, videos and Periscope broadcasts dating as far back as 2009.
Like Twitter, Facebook has tweaked its algorithm to try to stymie fake news accounts. It blocks ads from suspicious pages and removes suspect accounts. Facebook has partnered with third-party fact checkers, and officials say the company is using machine learning and algorithms to detect and shut down millions of fake accounts.
But a new study of disinformation and influence campaigns on Twitter by the Knight Foundation says those efforts may not be enough.
It found that 80 per cent of the Twitter accounts responsible for most of the fake news in 2016 were still active as recently as early October.
The study also estimates that as many as 70 per cent of the accounts spreading fake news and conspiracies are bots or semi-automated.
Further evidence of the ongoing nature of the problem was provided on Friday when the U.S. Department of Justice charged a Russian woman with conspiring to interfere in the upcoming midterms. The complaint unsealed in a Virginia court lays out in detail how "information warfare" has been and is still being used to create social and political polarization in the U.S.
"The good news is that platforms have gotten much better at detecting these types of manipulation. The bad is that there's more and more actors that are using these types of manipulations," says Camille Francois, an analyst with Graphika, which helped the Knight Foundation with its report.
Francois says the biggest challenge ahead of November's midterm elections is not just Russian and foreign influence campaigns, but domestic actors. Those could be anything from ordinary candidates using automated accounts, to far-right or far-left groups mimicking the Russian playbook.
"We're going to have to realize that this is our new normal. We live in an information space where there is manipulation. We need to not overreact to it, either. We need to make sure we focus on the parts that matter," she says.
Francois adds that this means focusing on the source of the material you read online: asking who wrote it, whether the site is legitimate and whether there is an editor. Above all, it means recognizing that there are forces out there looking to manipulate what you see online.
In July, Facebook removed 32 pages for what it called "coordinated inauthentic behaviour."
And last week, Facebook shut down more than 200 accounts and removed more than 500 pages, all for spreading homegrown disinformation. One site used a network of fake pages and accounts across multiple platforms to spread false stories around the Supreme Court nomination of Brett Kavanaugh, targeting Christine Blasey Ford, the woman who accused Kavanaugh of sexual assault.
Complicating matters, experts like former State Department official Brett Bruen say, is that the Russians are adapting their strategies and aiming to fund unsuspecting individuals or activists to amplify their messaging.
"The Russians and other actors have refined their game, they've updated their playbook, and this time it will be much more of a microtargeting effort. They're going to look at specific districts," says Bruen, Barack Obama's former director of global engagement.
Bruen says he also believes the Russians will stick with successful tactics, like releasing hacked emails to hijack news headlines ahead of the election.
"I have little doubt that we are going to see an October surprise drop," Bruen says.
Erin Gallagher wrote a blog post advising voters on how to deal with social media manipulation. Her suggestion: Think twice about everything you read online.
"I think it's so easy to hit that share button. But really, we all I think need to slow down."
Former gamer Josh Russell says the work never stops. He just helped Twitter take down a bot network that was trying to influence the conversation around the death of Saudi journalist Jamal Khashoggi. He says second-guessing what he reads online is now second nature.
"Don't be so quick to share things that are really partisan and that you might really agree with, because there's a good chance it might actually not be real."