
Weaponizing Social Media: Heinz Experts on Troll Farms and Fake News


By Scottie Barsotti

Social media platforms have proven to be a scarily efficient means of propagating disinformation and sowing division. This innovation has developed into an urgent problem, for which Heinz College experts prescribe technological, policy, and human remedies.

Without dropping a bomb or firing a shot, nations can do tremendous harm to one another through cyberwarfare. But even without, say, hacking into a power plant or weapons system, malicious actors can erode trust in institutions and breed an atmosphere of contempt, distrust, and even violence among citizens.

The 2016 U.S. presidential election brought this new threat into the public consciousness, with Russia using Facebook and Twitter as both frontlines and weapons in a new kind of information warfare.

“In 2014, Russia started to invest small amounts of money in troll farms. Nothing on the scale of a typical military operation. [The intelligence services would spend] a million here, a million there,” said Ambassador Sarah E. Mendelson, Distinguished Service Professor of Public Policy at Heinz College and head of Heinz College in Washington, D.C. She noted that until that point, Russian president Vladimir Putin had shown little interest in social media as a tool. That changed with the protests of 2011 and 2012, when Russian citizens demonstrated against corruption and against Putin’s return to presidential power; those uprisings developed spontaneously and organically, in part on social media.

Russian “troll farms”—groups of organized online agitators—identify grievances in other countries and then insert themselves into those debates with the aim of inflaming them. Rather than promoting any one political ideology, professional Russian trolls instead focus on fanning Americans’ emotions around heated topics such as gun control or immigration, and then pitting Americans against Americans. The tactic is—literally—divide and conquer.

“They’re not making these things up. They’re finding tensions that exist on Facebook or Twitter, and they’re amplifying,” said Mendelson. “It’s pretty basic social marketing, using social media in ways that are hugely successful. And not terribly expensive, in the scheme of things, for the amount of chaos that they created.”

In February, 13 Russian nationals and three Russian organizations—including the Internet Research Agency in St. Petersburg, a company with ties to the Kremlin that has been named as a troll farm by the U.S. Intelligence Community—were indicted by a federal grand jury for spreading disinformation with the intent to influence and interfere in the 2016 election. A separate indictment in July charged 12 Russian intelligence officers in connection with the hack of the Democratic National Committee.

This month, a Russian national was charged in connection with efforts to interfere in the upcoming 2018 midterm elections.

“This is ongoing. This is an investment that they continue to make,” said Mendelson. “And it’s easy to recruit for these positions. There are many young people both inside and outside of Russia who are going to work for these organizations. Perhaps they need money and see it as easy money, but many of them have anti-Western feelings.”

Social Media Is Uniquely Powerful—for Good and for Bad

Social media weaponization intersects data analytics, data security and privacy, technology, consumer behavior, policy, and ethics. That makes it an extremely complex problem, says Heinz College professor Ari Lightman, and one that Heinz College is uniquely suited to address.

“As a school we focus on data. Some of us focus more on data for policy decisions, and some on data for business value, but the two are becoming increasingly intertwined,” he said.

Lightman teaches several courses that focus on understanding and harnessing the power of social media data.

“Facebook can be exploited due to its popularity and vastness,” he said. Facebook has 2.23 billion monthly active users (204 million in the U.S.); Twitter has 335 million monthly active users (49.35 million in the U.S.). “People are continually checking their News Feed, more often than they may check in on actual news, and that’s an issue.”

On Facebook, user preferences and content targeting often create “echo chambers” of like-minded people sharing content with each other and “filter bubbles” in which users rarely see viewpoints that oppose their own.

“It’s very easy to unfriend people who have a different political belief than you, so you can become surrounded by people who think and believe like you. And then because Facebook wants to show you things that you will like, you’ll be shown content that reinforces your belief system,” said Lightman.
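To make that feedback loop concrete, here is a minimal, purely illustrative Python sketch. It is not Facebook’s actual ranking system; the one-dimensional “leaning” score and the affinity ranking are invented assumptions. It simply shows how optimizing for predicted engagement narrows the range of viewpoints a user sees compared with a random feed.

```python
import random

random.seed(0)

# Each post carries an invented opinion "leaning" score in [-1, 1].
posts = [random.uniform(-1, 1) for _ in range(500)]
user_leaning = 0.2  # the platform's inference from the user's past likes

def spread(feed):
    """Range of viewpoints in a feed: max leaning minus min leaning."""
    return max(feed) - min(feed)

# A chronological/random feed: the user sees the whole spectrum.
random_feed = random.sample(posts, 10)

# An affinity-ranked feed: the 10 posts the model predicts the user
# will engage with, i.e. those closest to the inferred leaning.
ranked_feed = sorted(posts, key=lambda p: abs(p - user_leaning))[:10]

print(f"random feed leanings span {spread(random_feed):.2f}")
print(f"ranked feed leanings span {spread(ranked_feed):.2f}")
```

Running the sketch, the random feed spans most of the spectrum while the ranked feed collapses to a narrow band around the user’s inferred leaning: the filter bubble in miniature.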

Bad actors who understand those mechanisms and user tendencies have used that knowledge to weaponize information in various ways, such as swaying public opinion or sowing chaos in the lead-up to an election. Between fake accounts and social bots—specialized computer programs that can autonomously post messages on social platforms—false information spreads with incredible velocity.

“Bots exacerbate the problem. People spread disinformation, and then bots spread it in a million directions,” said Lightman.
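Researchers often flag likely bots with behavioral signals such as account age, posting rate, and repetitive content. The sketch below is a hedged illustration of that idea; the Account fields, thresholds, and weights are invented for demonstration, not taken from any real detection system.

```python
from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    age_days: int           # how old the account is
    posts_per_day: float    # average posting rate
    duplicate_ratio: float  # share of posts that are verbatim copies

def bot_likeness(acct: Account) -> float:
    """Crude 0-to-1 score built from commonly cited red flags:
    brand-new accounts, superhuman posting rates, and heavy
    copy-paste behavior. All thresholds here are illustrative."""
    score = 0.0
    if acct.age_days < 30:
        score += 0.3
    if acct.posts_per_day > 50:  # roughly a post every 20 waking minutes
        score += 0.4
    score += 0.3 * acct.duplicate_ratio
    return min(score, 1.0)

accounts = [
    Account("@patriot_news_1234", age_days=12, posts_per_day=180, duplicate_ratio=0.9),
    Account("@jane_doe", age_days=2400, posts_per_day=3, duplicate_ratio=0.05),
]
for a in accounts:
    print(f"{a.handle}: bot-likeness {bot_likeness(a):.2f}")
```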

It doesn’t help that social media exists in a regulatory grey area. Lightman suggests that government and social media executives work together to find a way forward and combat this problem. First, legislators and regulators need to ramp up their knowledge about the medium.

“I don’t think legislators sufficiently understand the rules of engagement, the business model, the community, or the social nuances associated with social media. They need to get educated,” he said. “Then we need an alliance to be created between legislators, regulators, and social media executives to understand the scope of this problem and identify mechanisms that don’t require anything drastic like platforms getting shut down. Because social media provides a valuable utility, and that part of the story often gets swept under the rug because there’s so much focus on the abuses.”

Lightman adds that even though public awareness of this problem is growing, the solution goes beyond platforms simply deleting fake accounts or using machine learning techniques to identify fake news. Some of the responsibility falls on users to be more vigilant about disinformation and more cognizant of the information they share and how it might be misused.
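As one example of the machine-learning approach Lightman mentions, the sketch below trains a toy text classifier. The handful of headlines and labels are fabricated for illustration, and production systems rely on far larger corpora plus account and network signals, but TF-IDF features with logistic regression is a common baseline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set, purely for illustration.
headlines = [
    "Scientists publish peer-reviewed study on vaccine safety",
    "City council approves new budget after public hearing",
    "Federal report details quarterly employment figures",
    "SHOCKING: miracle cure THEY don't want you to know about",
    "You won't BELIEVE what this politician secretly did",
    "Anonymous insider reveals plot behind global conspiracy",
]
labels = ["real", "real", "real", "fake", "fake", "fake"]

# Turn each headline into word/bigram frequencies, then fit a
# linear classifier on those features.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(headlines, labels)

test = "SHOCKING secret cure revealed by anonymous insider"
proba = model.predict_proba([test])[0]
print(dict(zip(model.classes_, proba.round(2))))
```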

To that end, Mendelson says American schools need to be teaching digital literacy.

“We’re already in a very different situation than we were two years ago. People know this is going on now,” she said. “The Russians may be continuing to do it, but it is less successful than in 2016 when people didn’t even know it was happening.”

[RELATED: See our feature interview with Sarah Mendelson on Russian politics]

And because we all need a reminder sometimes: please post, share, and retweet responsibly! While everyone loves a good meme, it’s up to all of us to prevent the spread of disinformation.

Here are a few quick tips from NBC News for spotting fakery in your news feed and balancing your own perspective:

  • Question the source; follow a story to its origin
  • Look for confirmation across other reputable media sites
  • Use third-party fact checkers like Snopes and PolitiFact (see the sketch after this list)
  • Call out fake news shared by your network
  • Don’t assume all video is real
  • Learn to recognize bots
  • Identify your own biases
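For the fact-checker tip, here is a hedged sketch of one way to automate a lookup, using Google’s Fact Check Tools API, which aggregates reviews from outlets such as PolitiFact. The API key is a placeholder you would replace with your own, and the parsing assumes the documented shape of the claims:search response.

```python
import requests

API_KEY = "YOUR_GOOGLE_API_KEY"  # placeholder: obtain a key from Google Cloud Console
claim = "Pope endorses U.S. presidential candidate"

resp = requests.get(
    "https://factchecktools.googleapis.com/v1alpha1/claims:search",
    params={"query": claim, "key": API_KEY, "languageCode": "en"},
    timeout=10,
)
resp.raise_for_status()

# Each matched claim can carry reviews from multiple fact checkers.
for c in resp.json().get("claims", []):
    for review in c.get("claimReview", []):
        publisher = review.get("publisher", {}).get("name", "unknown")
        rating = review.get("textualRating", "n/a")
        print(f"{publisher}: {rating} ({review.get('url', '')})")
```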