Social media propaganda is being used to manipulate public opinion around the world, a new set of studies from the University of Oxford reveals.
From Russia, where over half of all political content on Twitter comes from bot accounts, to Taiwan, where a campaign against President Tsai Ing-wen involved thousands of heavily co-ordinated – but not fully automated – accounts sharing Chinese mainland propaganda, the studies show that social media is an international battleground for dirty politics.
The reports, part of the Oxford Internet Institute’s Computational Propaganda Research Project, cover nine nations including China, Poland and the United States. They found that “the lies, the junk, the misinformation” of traditional propaganda is widespread online and “supported by Facebook or Twitter’s algorithms”, according to Philip Howard, Professor of Internet Studies at Oxford.
At the simpler end, the techniques used include automated accounts that like, share and post on the social networks. Such accounts can serve to game algorithms and push content on to curated social feeds. They can drown out real, reasoned debate between humans in favour of a social network populated by argument and soundbites, and they can simply make online measures of support, such as the number of likes, look larger – crucial in creating the illusion of popularity.
The researchers found that in the US this took the form of what Samuel Woolley, the project’s director of research, calls “manufacturing consensus” – creating the illusion of popularity so that a political candidate can have viability where they might not have had it before.
The US report says: “The illusion of online support for a candidate can spur actual support through a bandwagon effect. Trump made Twitter centre stage in this election, and voters paid attention.”
While the report finds some evidence of institutional support for the use of bots, even if only in an “experimental” fashion by party campaign managers, Woolley emphasises that it’s just as powerful coming from individuals. “Bots massively multiply the ability of one person to attempt to manipulate people,” he says. “Picture your annoying friend on Facebook, who’s always picking political fights. If they had an army of 5,000 bots, that would be a lot worse, right?”
Russian propaganda on social media is well known in the west for its external-facing arm, including allegations of state involvement in the US and French presidential elections. But the nation’s social media is also heavily infiltrated with digital propaganda domestically, according to the report on that country.
It shows that Russia first developed its digital propaganda expertise for dealing with internal threats to stability and drowning out dissent to Putin’s regime, while providing the same illusion of overwhelming consensus that was used in the US election years later. “Political competition in Putin’s Russia created the demand for online propaganda tools,” the report’s author, Sergey Sanovich, writes, “and … market competition was allowed to efficiently meet this demand and create tools that were later deployed in foreign operations”.
Woolley adds: “Russia is the case to look to to see how a particularly powerful authoritarian regime uses social media to control people.”
If Russia is the progenitor of many of the techniques seen worldwide, then Ukraine is the example of how the conflict might progress. There, says Woolley, “we’re seeing how computational propaganda will be in five years, because the country is a testing ground for current Russian tactics.” As a result, however, civil society organisations dedicated to tackling the problem are similarly advanced.
The report on the country’s efforts to tackle Russian misinformation highlights the StopFake project, a collaborative effort to tackle fake stories “produced mainly by the Russian media”. It also mentions a Chrome extension that allowed automatic blocking of thousands of Russian websites, and even a straightforward ban from the government aimed at certain Russian social networks, including VKontakte and Yandex, as part of the country’s sanctions against Russia.
Facebook and Twitter must act
The reports suggested the social media firms were largely indifferent to how their networks were being used. Facebook, for instance, leaves most of its anti-propaganda work to external organisations such as Snopes and the Associated Press, which operate semi-autonomous fact-checking teams aimed at marking viral news stories as true or false. Twitter’s anti-bot systems, meanwhile, are effective at fighting commercial activity on the site, but seem less able or willing to take down automated accounts engaging in political activity.
The researchers are presenting their findings to a group of “senior” representatives from the technology industry in Palo Alto. They say that the social networks need to do more, and fast.
“For the most part, they leave it to the user community to police themselves, and flag accounts,” Howard says. He points out that while social networks tend to comply only with the minimum legal requirements, occasionally they’ll be ahead of public opinion – as happened with the decision to ban adverts for payday loans. “Of all the public policy issues, I don’t know why they landed on that one. They clearly can have an impact, and between violent extremism and payday loans there’s a span of issues.”
The researchers did find one country to be significantly different to the others. In Germany, fear of online destabilisation has outpaced the actual arrival of automated political attacks, and has led to the proposal and implementation of world-leading laws requiring social networks to take responsibility for what gets posted on their sites.
“Germany leads the way as a cautionary authority over computational propaganda, seeking to prevent online manipulation of opinion rather than addressing already present issues,” the report says, although it adds that “many of those measures lack legitimacy and suitable enforcement, and some are disproportionate responses considering their implications for freedom of expression”.