Researchers investigated large-scale manipulation of public opinion by governments and political parties, finding that the practice is now widespread in 70 nations worldwide, up from just 28 in 2017.
“Computational propaganda – the use of algorithms, automation, and big data to shape public life – is becoming a pervasive and ubiquitous part of everyday life,” the researchers warn.
The research examined activity in nations including the US, China, and Russia, as well as European Union member states such as Austria, the Czech Republic, Germany, Greece, Hungary, Italy, Poland, Spain, Sweden, and the UK.
According to the report:
“In an information environment characterized by high volumes of information and limited levels of user attention and trust, the tools and techniques of computational propaganda are becoming a common – and arguably essential – part of digital campaigning and public diplomacy.”
Despite the wider variety of social media platforms now available, the researchers found that Facebook remains the primary focus for manipulation, used for “organized computational propaganda campaigns” in 56 of the 70 countries.
On YouTube and Instagram, the researchers say, visual content such as viral photos, videos, and memes is driving the spread of misinformation.
“Memes and videos are so easy to consume in an attention-short environment,” according to Samantha Bradshaw, one of the authors of the report.
The use of visual content also makes it harder for platforms to identify and remove harmful or fake content.
“It’s easier to automatically analyze words than it is an image,” Bradshaw notes.
While platforms have been working to combat such activity by regularly removing content that violates their rules, the report suggests that this piecemeal approach is unlikely to stem the tide of large-scale manipulation that now pervades social media.
According to the report, China is a “major player” in the spread of global computational propaganda, targeting international audiences with disinformation campaigns. It’s one of seven state actors using dedicated online workers to manipulate public opinion abroad, including attempts to influence the results of elections. Other state actors named in the report include India, Iran, Pakistan, Russia, Saudi Arabia, and Venezuela.
Such actors use bots to amplify hate speech and other manipulative content, data harvesting to target users, and armies of “trolls” to harass dissenters and journalists.
The researchers hope to not only document online manipulation, but “to drive public and scholarly debate about how we define and understand the changing nature of politics online, and how technologies can and should be used to enhance democracy and the expression of human rights online.”