Communications of the ACM

Viewpoint

Reason-Checking Fake News


Figure. Mobile phone displaying a news page. Credit: R. Classen

While deliberate misinformation and deception are by no means new societal phenomena, the recent rise of fake news5 and information silos2 has become a growing international concern, with politicians, governments, and media organizations regularly lamenting the issue. A remedy to this situation, we argue, could be found in technology that empowers people to critically assess the quality of information, reasoning, and argumentation. Recent empirical findings suggest "false news spreads more than the truth because humans, not robots, are more likely to spread it."10 Thus, instead of continuing to focus on ways of limiting the efficacy of bots, educating human users to better recognize fake news stories could prove more effective in mitigating the potentially devastating social impact of misinformation. While technology certainly contributes to the distribution of fake news and similar attacks on reasonable decision-making and debate, we posit that technology—argument technology in particular—can equally be employed to counterbalance these deliberately misleading or outright false reports made to look like genuine news.

From Fact-Checking to Reason-Checking

The ability to properly assess the quality of premises and reasoning in persuasive or explanatory texts—critical literacy—is a powerful tool in combating the problem posed by fake news. According to a 2017 Knight-Gallup survey, one in five U.S. adults feels "not too confident" or "not confident at all" in distinguishing fact from opinion in news reporting.a Similarly, in the U.K., the National Literacy Trust recently reported that one in five British children cannot properly distinguish between reliable online news sources and fake news, concluding that strengthening critical literacy skills would help in identifying fake news.b

Efforts to combat the effects of fake news too often focus exclusively on the factual correctness of the information provided. To counter factually incorrect—or incomplete, or biased—news, a whole industry of fact-checkers has developed. While the truth of the information that forms the basis of a news article is clearly of crucial importance, there is another, often overlooked, aspect to fake news. Successfully recognizing fake news depends not only on understanding whether factual statements are true, but also on interpreting and critically assessing the reasoning and arguments provided in support of conclusions. It is, after all, entirely possible to produce fake news by starting from true factual statements and drawing false conclusions through skewed, biased, or otherwise defective reasoning. We therefore argue that fact-checking should be supplemented with reason-checking: evaluating whether the complete argumentative reasoning is acceptable, relevant, and sufficient.3
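
To make the distinction concrete, the following minimal Python sketch (our illustration, with hypothetical names rather than any deployed system) contrasts fact-checking, which only tests whether premises are acceptable, with reason-checking, which additionally tests relevance and, very crudely, sufficiency.

    from dataclasses import dataclass, field

    @dataclass
    class Reason:
        text: str
        acceptable: bool  # is the premise itself believable or verified?
        relevant: bool    # does the premise actually bear on the claim?

    @dataclass
    class ArgumentCheck:
        claim: str
        reasons: list = field(default_factory=list)

        def fact_check(self) -> bool:
            # Fact-checking asks only whether the premises are acceptable.
            return all(r.acceptable for r in self.reasons)

        def reason_check(self, min_relevant: int = 2) -> bool:
            # Reason-checking additionally asks whether the premises are
            # relevant to the claim and jointly sufficient to support it;
            # sufficiency is crudely approximated here by a count threshold.
            relevant = [r for r in self.reasons if r.relevant]
            return (self.fact_check()
                    and len(relevant) == len(self.reasons)
                    and len(relevant) >= min_relevant)

    # True premises combined with defective reasoning: passes a fact-check,
    # fails a reason-check.
    article = ArgumentCheck(
        claim="Policy X caused the rise in crime.",
        reasons=[
            Reason("Crime rose last year.", acceptable=True, relevant=True),
            Reason("The author dislikes Policy X.", acceptable=True, relevant=False),
        ])
    print(article.fact_check())    # True
    print(article.reason_check())  # False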

Argument Technology for Critical Literacy

Seven years ago, we introduced the Argument Web in Communications: an integrated platform of resources and software for visualizing, analyzing, evaluating, or otherwise engaging with reasoned argument and debate.1 Since then, argument technology has matured into an established research field, attracting widespread academic and industrial interest. Recently, for example, IBM presented the results of its 'grand challenge' on argument technology in a live debate between the Project Debater system and two human debating champions.c

Commissioned by the BBC, we have developed a suite of argument technologies built on the infrastructure of the Argument Web. The resulting software tools aim to provide insight into argumentative debate and to instill the critical literacy skills needed to appraise reasoned persuasive and explanatory communication. In addition to identifying reasoning patterns and fallacies, our software addresses the problem of echo chambers, in which people are less exposed to opinions diverging from their own while already-held views are reinforced. Several cognitive biases contribute to this dynamic, such as confirmation bias,6 which discourages the consideration of alternative positions in a dispute, and the backfire effect,8 which further entrenches viewpoints when people are presented with conflicting facts.

The Polemicist applicationd addresses this looming one-sidedness of argumentative positions. The application lets the user take on the role of moderator in a virtual radio debate: selecting topics, controlling the flow of the dialogue, and thus exploring issues from various angles. The textual data is drawn from the Argument Web database of analyzed episodes of BBC Radio 4's Moral Maze.e On this weekly radio program, recurring panelists and invited subject experts debate a morally divisive current-affairs topic. The ensuing debate is often lively, combative, and provocative, producing a wealth of intricate argumentative content. Polemicist retrieves responses given by the actual Moral Maze participants from the Argument Web database and assigns them to software agents modeled on those participants. Playing the role of moderator lets the user rearrange the arguments and create wholly novel virtual discussions between the contributions of participants who did not directly engage with each other in the original debate, while still reflecting their stated opinions.
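
The retrieval step behind such a virtual debate can be pictured as follows; this is a simplified sketch with hypothetical data and function names, whereas the real application draws on the Argument Web's analyzed Moral Maze transcripts.

    import random
    from collections import defaultdict

    # Hypothetical stand-in for the Argument Web database: each record is
    # (speaker, topic, contribution text) from an analyzed debate.
    CONTRIBUTIONS = [
        ("Panelist A", "press freedom", "Without a free press, power goes unchecked."),
        ("Panelist B", "press freedom", "Freedom is not a licence to publish falsehoods."),
        ("Panelist A", "privacy", "Privacy is the precondition of an autonomous life."),
    ]

    def build_index(records):
        """Group contributions by (speaker, topic) so agents can 'answer' on demand."""
        index = defaultdict(list)
        for speaker, topic, text in records:
            index[(speaker, topic)].append(text)
        return index

    def virtual_turn(index, speaker, topic):
        """Return something the real participant actually said on this topic, if anything."""
        options = index.get((speaker, topic))
        return random.choice(options) if options else None

    index = build_index(CONTRIBUTIONS)
    # The user, acting as moderator, chooses the topic and who speaks next.
    print(virtual_turn(index, "Panelist B", "press freedom"))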

Test Your Argumentf aims both to foster critical literacy skills and to prompt users to consider alternative viewpoints. The software challenges users with a number of argumentation puzzles designed to help develop an understanding of the core principles of strengthening and critiquing arguments. The examples are again drawn from debates on BBC Radio 4's Moral Maze. Test Your Argument was launched on BBC Taster in December 2017 and has since been visited over 10,000 times, earning a rating of 4/5, with 88% of evaluators saying the BBC should do more along these lines.g

Argument Analytics serves as an online second-screen supplement to BBC Radio and Television broadcasts. Trialed in 2017 on selected episodes of Moral Maze, the data-driven infographics are designed to provide deeper insight into the debate. For instance, the interactions between arguments pro and contra are visualized diagrammatically, the alignment between participants' stances is mapped out, and a timeline shows which parts of the debate led to the most conflict. An example of Argument Analytics is available online for the October 11, 2017 episode of Moral Maze, dedicated to the 50-year anniversary of the Abortion Act in the U.K.h
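
The conflict timeline, for instance, can be thought of as a simple aggregation over the analyzed argument structure. The sketch below is our own simplified rendering with made-up data, not the BBC implementation.

    from collections import Counter

    # Hypothetical analyzed debate: (minute, relation) pairs, where the relation
    # records whether a contribution supports or attacks an earlier one.
    RELATIONS = [(2, "support"), (5, "attack"), (6, "attack"), (6, "support"),
                 (14, "attack"), (15, "attack"), (15, "attack"), (21, "support")]

    def conflict_timeline(relations, bin_minutes=5):
        """Count attack relations per time bin to locate the most contentious stretches."""
        bins = Counter()
        for minute, relation in relations:
            if relation == "attack":
                bins[minute // bin_minutes] += 1
        return sorted(bins.items())

    bin_size = 5
    for b, attacks in conflict_timeline(RELATIONS, bin_minutes=bin_size):
        start = b * bin_size
        print(f"minutes {start:2d}-{start + bin_size - 1:2d}: {attacks} attack(s)")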

Argument Mining for Reason-Checking

The latest addition to the suite of argument technologies developed for the BBC is The Evidence Toolkit.i This online application is designed to encourage users to dissect and critically appraise the internal reasoning structure of news reports. The Evidence Toolkit launched in March 2018j as part of the BBC's Young Reporter initiative (formerly called 'BBC School Report'). BBC Young Reporter is a U.K.-wide opportunity offered to some 60,000 11- to 18-year-old students to develop their media literacy skills by engaging firsthand with journalism and newsmaking.k The 2018 initiative addressed the issue of fake news. To help students develop the means to distinguish real news from fake news, the BBC commissioned the iReporter gamel from Aardman Animations, targeted at 11- to 15-year-olds, and The Evidence Toolkit from the Centre for Argument Technology,m aimed at 16- to 18-year-olds.

Figure. The Evidence Toolkit interface.

The Evidence Toolkit guides students through a series of steps to identify claims, arguments, counter-arguments, reasoning types, and evaluation criteria on the basis of scholarship in Critical Thinking and Argumentation Theory.3,9 To help students understand the theoretical concepts, examples are given from episodes of BBC Radio 4's Moral Maze. Since BBC Young Reporter is intended primarily for classroom use and teachers will often not be argumentation experts themselves, The Evidence Toolkit comes with teacher notes, available through the BBC website.n

Once the main claim and reasons in a news article have been identified, the software helps users classify the reasoning based on the type of evidence provided. Reasons can be connected to claims in many different ways. Drawing on theories of argumentation and persuasion,9 users are presented with a compact set of options that does not require any specific theoretical background knowledge. Reasoning is classified as fact-based or opinion-based, each of which can be subdivided further. Opinion-based reasoning, for example, subdivides into evidence drawn from experts (providing authoritative backing), from popular sentiment (of the masses or of a particular community), and from personal experience (whether the author's own or that of a witness).
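
One plausible way to encode this menu of evidence types is as a small decision procedure; the question wording below is our illustration, not the Toolkit's exact text.

    from enum import Enum

    class Reasoning(Enum):
        """Evidence types as described above: fact-based, or one of three opinion-based kinds."""
        FACT_BASED = "fact-based"
        EXPERT_OPINION = "opinion: expert authority"
        POPULAR_OPINION = "opinion: popular sentiment"
        PERSONAL_EXPERIENCE = "opinion: personal experience"

    def classify(answers: dict) -> Reasoning:
        """Map a user's yes/no answers about a reason to a reasoning type."""
        if answers.get("Is the reason a checkable factual statement?"):
            return Reasoning.FACT_BASED
        if answers.get("Is the reason attributed to an expert?"):
            return Reasoning.EXPERT_OPINION
        if answers.get("Does the reason appeal to what most people think?"):
            return Reasoning.POPULAR_OPINION
        return Reasoning.PERSONAL_EXPERIENCE

    print(classify({"Is the reason attributed to an expert?": True}))
    # Reasoning.EXPERT_OPINION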

Each type of reasoning is associated with a specific template of critical questions pointing at the evaluation criteria for the reasoning.11 In answering these questions, the user builds a confidence level in the support for the claim. Identifying any counter-considerations is another essential step in judging the impartiality of news articles. Again, the software helps students identify any such objections, often linguistically marked with indicative phrases such as "on the other hand," "admittedly," or "to some extent."
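
The pairing of reasoning types with critical questions, the resulting confidence score, and the marker-based spotting of counter-considerations might be sketched as follows. The question wording follows the spirit of the argumentation-schemes literature11 rather than the Toolkit verbatim.

    # Illustrative critical-question templates keyed by reasoning type.
    CRITICAL_QUESTIONS = {
        "opinion: expert authority": [
            "Is the cited source a genuine expert in this field?",
            "Do other experts agree?",
            "Is the expert free of bias or vested interest?",
        ],
        "opinion: personal experience": [
            "Was the witness in a position to know?",
            "Is the experience typical rather than an isolated case?",
        ],
    }

    COUNTER_MARKERS = ("on the other hand", "admittedly", "to some extent")

    def confidence(reasoning_type: str, answers: list) -> float:
        """Share of critical questions answered favourably, as a rough confidence score."""
        questions = CRITICAL_QUESTIONS.get(reasoning_type, [])
        if not questions:
            return 0.0
        return sum(answers[:len(questions)]) / len(questions)

    def flag_counter_considerations(text: str) -> list:
        """Return sentences containing typical concessive or contrastive markers."""
        return [s.strip() for s in text.split(".")
                if any(m in s.lower() for m in COUNTER_MARKERS)]

    print(round(confidence("opinion: expert authority", [True, True, False]), 2))  # 0.67
    print(flag_counter_considerations(
        "The plan will cut costs. Admittedly, the evidence is thin."))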

The Evidence Toolkit offers a choice of five articles from news sources across the political spectrum, manually pre-analyzed by a team of experts to identify claims, reasons, and objections. In addition, it employs automated methods for argument mining (also called argumentation mining in the literature) so that students can select an article of their own choosing from the BBC News archives. The implemented argument mining technology automatically extracts the argumentative content from the news article—provided the chosen article has explicit argumentative content to begin with. Argument mining builds on the successes of Opinion Mining and Sentiment Analysis7 to identify not only what views are being expressed in a text, but also why those views are held4—the software automatically processes the natural language text to produce the analysis otherwise performed by human experts.
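
State-of-the-art argument mining relies on trained statistical and neural models,4 but the shape of the task can be illustrated with a naive marker-based baseline; this is our simplification, not the deployed system.

    import re

    # Discourse markers that often signal conclusions ("what view is expressed")
    # and premises ("why that view is held"). Real systems learn such cues from data.
    CONCLUSION_MARKERS = ("therefore", "thus", "hence", "consequently")
    PREMISE_MARKERS = ("because", "since", "given that", "as shown by")

    def mine_arguments(text: str):
        """Split text into sentences and tag each as claim, premise, or other."""
        tagged = []
        for sentence in re.split(r"(?<=[.!?])\s+", text.strip()):
            lower = sentence.lower()
            if any(m in lower for m in CONCLUSION_MARKERS):
                tagged.append(("claim", sentence))
            elif any(m in lower for m in PREMISE_MARKERS):
                tagged.append(("premise", sentence))
            else:
                tagged.append(("other", sentence))
        return tagged

    sample = ("Turnout fell because polling stations were moved. "
              "Therefore the result does not reflect the public mood.")
    for label, sentence in mine_arguments(sample):
        print(label, "->", sentence)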

At the time of this writing, The Evidence Toolkit has accumulated over 22,000 tries. The software has been well received, earning a rating of 4.15 out of 5 (where the average for applications on BBC Taster lies around 3.5). User feedback moreover indicated not only an accessible experience (78% found the tool easy to use), but an effective one: 84% said the critical thinking tools explained in The Evidence Toolkit help to check the reliability of news, and 75% said it made them think more deeply about the topics at issue in the news articles. Putting critical literacy high on the BBC's agenda and applying argument technology to drive it also appears to reflect positively on the organization itself, with 73% stating that The Evidence Toolkit positively changed their view of the BBC.

The suite of argument technologies developed for the BBC aims to address the major societal challenge posed by intentional obfuscation and misinformation in the modern media landscape. In collaboration with the BBC—and with the producers of BBC Radio 4's Moral Maze in particular—we have approached critical literacy from several angles, developing quantitative debate analytics, interactive ways of engaging with argumentative material, and argument mining technology. With a distribution to over 3,000 educational institutions in the U.K., The Evidence Toolkit constitutes, to the best of our knowledge, the first public deployment of argument mining technology at scale. The further development of argument technology for reason-checking could provide a much-needed weapon in combating fake news and reinforcing reasonable social discourse.

References

1. Bex, F. et al. Implementing the Argument Web. Commun. ACM 56, 10 (Oct. 2013), 66–73.

2. Flaxman, S., Goel, S., and Rao, J.M. Filter bubbles, echo chambers, and online news consumption. Public Opinion Quarterly 80 (2016), 298–320.

3. Johnson, R.H. and Blair, J.A. Logical Self-Defense. McGraw-Hill Ryerson, 1977.

4. Lawrence, J. and Reed, C. Argument mining: A survey. Computational Linguistics 45, 4 (2020), 765–818.

5. Lazer, D.M.J. et al. The science of fake news. Science 359, 6380 (2018), 1094–1096.

6. Nickerson, R.S. Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology 2, 2 (1998), 175–220.

7. Pang, B. and Lee, L. Opinion mining and sentiment analysis. Foundations and Trends in Information Retrieval. Now Publishers, 2008.

8. Sethi, R.J., Rangaraju, R., and Shurts, B. Fact checking misinformation using recommendations from emotional pedagogical agents. In A. Coy, Y. Hayashi, and M. Chang, Eds., Intelligent Tutoring Systems (2019), 99–104.

9. van Eemeren, F.H. et al. Handbook of Argumentation Theory. Springer, Cham, 2014.

10. Vosoughi, S., Roy, D., and Aral, S. The spread of true and false news online. Science 359, 6380 (2018), 1146–1151.

11. Walton, D., Reed, C., and Macagno, F. Argumentation Schemes. Cambridge University Press, 2008.

Authors

Jacky Visser ([email protected]) is a Lecturer in Computing with the Centre for Argument Technology, at the University of Dundee, in the U.K.

John Lawrence ([email protected]) is a Lecturer in Computing with the Centre for Argument Technology, at the University of Dundee, in the U.K.

Chris Reed ([email protected]) is Chair of Computer Science and Philosophy with the Centre for Argument Technology, at the University of Dundee, in the U.K.

Footnotes

a. See https://kng.ht/3iy9z6p

b. See https://bit.ly/3mxjR9s

c. See https://ibm.co/2Rzb7RW

d. Online at http://polemici.st

e. See https://bbc.in/3kqh7J2

f. See https://bbc.in/3iDTRGW

g. All user statistics reported in this Viewpoint are obtained directly from the host and are correct at the time of writing.

h. See http://bbc.arg.tech.

i. Available at https://bbc.in/2FFNQen

j. See https://bbc.in/2FETOvU

k. See https://bbc.in/3mqsP89

l. See https://bbc.in/2H9u1wH

m. See http://arg.tech

n. See https://bbc.in/33CPFkm

This research was supported in part by EPSRC in the U.K. under grant EP/N014871/1, and in part by funding from the BBC. The authors would like to thank the entire ARG-tech team that supported the development of software and resources for The Evidence Toolkit, and to recognize the input and support from Sharon Stokes, Head of BBC Young Reporter. The authors also want to acknowledge their enormous debt to Christine Morgan, Head of Radio, Religion and Ethics at the BBC, and to her production team on Radio 4's Moral Maze for their long-term support and enthusiasm for this initiative.


Copyright held by authors.
