
Published on December 16th, 2016 | by EJC


3 Ways To Stop False News Spreading Like Wildfires

We are witnessing furore over the posting of ‘fake’ news online and its propagation on social media through retweets and shares. Concerns over the potential negative impacts of false content have become crystallised in comments that certain ‘news’ stories helped to secure the presidential election for Donald Trump; however, these concerns have existed for some time, for instance in relation to false but widely repeated claims about the number of Syrian refugees set to enter the US, or to a tweet of approval posted by Marine Le Pen following a Theresa May speech on immigration.

The current attention given to this phenomenon creates an opportunity for debate over the accountability of ‘fake’ news sites, the editorial responsibilities of social media platforms in identifying and/or blocking this kind of content, and the obligations of individual users in deciding whether to share something that may or may not be true. Suggested solutions to the apparent problem of fake news include a role for (human or non-human) ‘editors’ to vet content on social media feeds or the use of automated fact checkers. However, it is important to note that the spread of unverified content is nothing new. The sociology of rumour is a well-established field that examines the spread of (potentially) false claims as a form of collective behaviour. Spreading a rumour often fills a certain function – perhaps to overcome a knowledge deficit or produce a narrative to make sense of a troublesome event. So if we are concerned about the contemporary spread of false news on social media, it is worth asking what this particular phenomenon tells us about the wider social world. Are we observing the consequences of diminishing trust in traditional news media organisations – which might explain the recent Pew Research Center findings that between 2013 and 2015 the proportion of Twitter and Facebook users relying on each platform as a source for news increased from 52% to 63% and from 47% to 63% respectively – or a rising commitment to post-truth narratives in which claims and counter-claims are treated as equivalent?

Although rumours are nothing new, it is undoubtedly true that the growth of user-generated content can greatly enhance the capacity for false news to spread. The more content that is posted and shared online – combined with the ease with which well-established news reporting styles can be mimicked and images faked – the more opportunities there are to persuade recipients of the credibility of unverified claims. Because social media platforms make spontaneous reposting so easy, this content can then spread at a rapid pace. Work conducted within the computational sciences has begun investigating what features increase the likelihood of a claim being reposted. Focusing on Twitter, research has highlighted that the age of the account posting a claim and the inclusion of a hashtag and URL within a tweet are factors, among others, that influence the probability of reposting. It has also begun to illustrate the ways in which disagreeing posts or ‘counter speech’ from other users might serve to slow or limit the spread of unverified content. As an open platform, Twitter is relatively easy for researchers to collect data from. It is harder to collect data from sites such as Facebook, and this makes it difficult to examine how fake news might spread across platforms. That is before we even consider how we might trace and measure the relationship between online content and offline impact.
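To make this concrete, here is a minimal sketch of the kind of model such studies fit, assuming a Python environment with scikit-learn. The features follow those named above (account age, hashtag, URL), but the data and the resulting probability are invented for illustration and do not come from any published study.

```python
# A toy illustration (not a reproduction of any published study): relating
# simple tweet features to the chance of being reposted with logistic
# regression. Assumes scikit-learn and NumPy are installed; all data invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: account age in days, contains a hashtag (0/1), contains a URL (0/1)
X = np.array([
    [30,   0, 0],
    [400,  1, 1],
    [1200, 1, 0],
    [10,   0, 1],
    [900,  1, 1],
    [60,   0, 0],
])
# 1 = the tweet was reposted, 0 = it was not (invented labels)
y = np.array([0, 1, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# Estimated reposting probability for a tweet from a two-year-old account
# that includes both a hashtag and a URL, under this toy model.
print(model.predict_proba(np.array([[730, 1, 1]]))[0, 1])
```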

Bolstered by research, we might begin to ask what could be done to counteract the spread of fake news. This is of interest to three current interdisciplinary projects:

1) The EU-funded Pheme project, which aims to develop machine learning techniques to help journalists determine the veracity of information posted on social media. To do this automatically and in real time, large volumes of user-generated content need to be analysed very rapidly, which is a major challenge. The project aims to model, identify, and verify rumours as they spread across media, languages and social networks. Modelling is based on the information content of the posts, the reputation of the originating information sources (if known), the information diffusion pattern, similarities to historical data, and temporal dynamics.

Pheme logo
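Pheme’s actual models are not reproduced here, but the sketch below illustrates, in simplified form, how signals of the kind listed above – post stances, source reputation and temporal dynamics – might be combined into a rough veracity score. The feature names, weights and thresholds are assumptions made purely for illustration.

```python
# Hypothetical sketch only: combining post stances, source reputation and
# temporal dynamics into a rough 0..1 veracity score. The features, weights
# and thresholds below are assumptions for illustration, not Pheme's model.
from dataclasses import dataclass

@dataclass
class Post:
    author_reputation: float   # 0.0 (unknown source) to 1.0 (well established)
    stance: str                # "support", "deny" or "query" towards the claim
    minutes_after_source: int  # when the post appeared, relative to the claim

def veracity_score(thread: list[Post]) -> float:
    """Higher scores mean the claim looks more credible in this toy model."""
    if not thread:
        return 0.5  # no evidence either way
    n = len(thread)
    support = sum(p.stance == "support" for p in thread) / n
    deny = sum(p.stance == "deny" for p in thread) / n
    reputation = sum(p.author_reputation for p in thread) / n
    # Treat a very rapid early burst of reactions as a weak negative signal.
    early_burst = sum(p.minutes_after_source < 10 for p in thread) / n
    score = 0.3 + 0.4 * support - 0.3 * deny + 0.3 * reputation - 0.1 * early_burst
    return max(0.0, min(1.0, score))

# Example thread reacting to a single claim.
print(veracity_score([
    Post(author_reputation=0.9, stance="deny", minutes_after_source=5),
    Post(author_reputation=0.2, stance="support", minutes_after_source=3),
    Post(author_reputation=0.4, stance="query", minutes_after_source=40),
]))
```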

2) The ESRC-funded Digital Wildfire project, which asks how we can encourage the responsible governance of social media. For instance, can user self-governance form an effective means to limit the spread of harmful content whilst upholding rights to freedom of speech? Is it possible to promote an ethos of personal responsibility in which users take greater care about what they post and share, and use counter speech to challenge what they feel to be untrue or inflammatory?

DW logo

3) The UnBias project, which investigates user experiences of algorithm-driven internet services. It asks whether it is possible to achieve ‘fairness’ in the operation of algorithms online. For example, should users have the right to expect some kind of transparency regarding the ways that algorithms rank and sort the news items they see on their personal social media feeds? Should the Editors’ Code of Practice apply to these algorithms? Would increased transparency, or even control, over exactly how these algorithms function assist users in making informed decisions about how they consume and share the content they see?

UnBias logo
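As a purely hypothetical illustration of what such transparency could look like, the sketch below ranks a toy feed and returns, alongside each item, the weighted signals that produced its position. The signal names and weights are invented for this example and do not describe any real platform’s algorithm.

```python
# Hypothetical sketch of 'explainable' feed ranking: the ranker returns the
# weighted signals behind each item's position. Signal names and weights are
# invented for this example and do not describe any real platform.
from dataclasses import dataclass

@dataclass
class Item:
    headline: str
    hours_old: float
    friend_shares: int
    source_reliability: float  # 0.0 to 1.0

WEIGHTS = {"hours_old": -0.05, "friend_shares": 0.4, "source_reliability": 1.0}

def rank_with_explanations(feed: list[Item]):
    """Return (score, headline, per-signal contributions), highest score first."""
    scored = []
    for item in feed:
        contributions = {
            "hours_old": WEIGHTS["hours_old"] * item.hours_old,
            "friend_shares": WEIGHTS["friend_shares"] * item.friend_shares,
            "source_reliability": WEIGHTS["source_reliability"] * item.source_reliability,
        }
        scored.append((sum(contributions.values()), item.headline, contributions))
    return sorted(scored, key=lambda s: s[0], reverse=True)

for score, headline, why in rank_with_explanations([
    Item("Claim A", hours_old=2, friend_shares=12, source_reliability=0.3),
    Item("Report B", hours_old=10, friend_shares=3, source_reliability=0.9),
]):
    print(f"{headline}: score {score:.2f}, because {why}")
```

Surfacing per-item contributions in this way is one conceivable route towards the kind of informed sharing decisions the project asks about.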

Underpinning all of these projects is an even more fundamental question. The recent debates around ‘fake’ news and the US presidential election highlight that agreement over what is the ‘truth’ is frequently elusive. If we want to reduce untruths online, we must first work out what is true. But is this always possible and who might we trust to make this decision? A government? A platform? An algorithm? Or the individual user? Perhaps the best we can hope to do is to provide individual users with enough contextually relevant evidence – including results of algorithmically derived veracity predictions – so that they can make up their own minds.

About the Authors:

This article was prepared by the Digital Wildfire project team (Marina Jirotka and Helena Webb – University of Oxford; Rob Procter – University of Warwick; Bernd Stahl – De Montfort University; William Housley, Adam Edwards, Omer Rana, Matt Williams and Pete Burnap – Cardiff University) and Ansgar Koene (University of Nottingham) from the UnBias project.

Image: M.A.S.K. Productions.


