Dr Irini Katsirea is an ILPC Research Associate and Reader in International Media Law at the University of Sheffield, specialising in International, European and comparative media law and has published extensively in these areas.

In this edition of the Spotlight Series, Irini examines the various meanings and history behind the term ‘fake news’ and assesses the efficacy of measures to combat ‘fake news’ from self-regulation to a code of practice.

1. The term ‘fake news’ rose to prominence during the 2016 US presidential election and the Brexit referendum. Almost four years on, the phrase has gained currency again as a whirlwind of false news stories has flooded our timelines on social media platforms with regard to the current pandemic. 

The phrase encompasses a myriad of meanings, from news stories that have been misreported to completely fabricated stories – what would be an appropriate definition for the term?

The term ‘fake news’ is notoriously vague and highly politicised. On the one hand, it has been used to describe foreign interference in elections and referendums, sparking fears over the threat posed to democracy. On the other hand, it has been employed by the US President but also by nationalist, far-right parties such as the German party Alternative for Germany (AfD) for political advantage. The Trump administration and nationalist parties who lambast the mainstream media in their tweets, election campaigns and demonstrations join a long tradition of press victimisation. In the First World War, the notion of ‘Lügenpresse’ was enlisted in the effort to discredit reporting by the enemy. Before the NS party’s seizure of power, this concept was weaponised against the ‘unpatriotic’ press of the Weimar Republic, which failed to stand up to the demeaning Versailles Treaty; later it was used against foreign media, not least by the chief Nazi propagandist Joseph Goebbels. It is against this backdrop of historic and recent abuse of the term ‘fake news’ for political ends that the Department for Digital, Culture, Media & Sport (DCMS) recommended that the term ‘fake news’ be rejected, and that an agreed definition of the terms ‘misinformation’ and ‘disinformation’ be put forward.

In response to this recommendation, the Government distinguished between disinformation as the ‘deliberate creation and sharing of false and/or manipulated information that is intended to deceive and mislead audiences, either for the purposes of causing harm, or for political, personal or financial gain’, and misinformation as the inadvertent sharing of false information. The distinction between these two types of information challenges draws on Wardle and Derakhshan’s typology of ‘information disorder’. It attempts to separate inaccurate content on the basis of the disseminating agent’s motivation. Indeed, intent to deceive is key when attempting to draw a line between calculated falsehoods and legitimate forms of political expression such as ‘news satire’, which ordinarily aims to mock, not to deceive.

This distinction is pertinent in so far as the methods used to tackle different forms of untruthful expression may need to vary depending on the motivation of the actors involved. Media literacy is a long-term solution for misinformation, while blocking financial incentives is a possible remedy against the spread of disinformation. However, both disinformation and misinformation can potentially pose similar risks. In the context of the current pandemic, a report by the European External Action Service concluded that misinformation was ‘the more pressing challenge’ for public health. While the terms ‘misinformation’ and ‘disinformation’ are less politically loaded and more amenable to definition than the term ‘fake news’, one needs to pay heed to the fact that ‘fake news’ is likely here to stay as ‘part of the vernacular that helps people express their frustration with the media environment’.

2. What are some key false stories that have circulated with regards to Covid-19?

Since the start of the pandemic, there has been evidence of state and non-state actors spreading false stories about the origins and spread of the disease; its symptoms, diagnosis and treatment; its financial and societal impact; as well as the measures taken to contain it. In February 2020, the World Health Organisation raised the alarm about the emergence of a so-called ‘infodemic’ as a result of the circulation of misleading information about Covid-19. 

Some of the key false stories that have gained considerable traction on social media concern the supposed link between Covid-19 and the 5G network, and advice on self-medication, inter alia by way of toxic chemicals. The former allegation has inspired numerous arson attacks on 5G infrastructure across Europe, while the latter has led to hundreds of deaths in Iran and multiple instances of poisoning in other parts of the world.

3. Self-regulation and the employment of fact-checkers appear to be the preferred methods used by technology companies for dealing with ‘fake news’. How effective do you think these measures are?

Technology companies have responded to the challenge of the ‘infodemic’ by adopting a two-pronged strategy of promoting accurate information, and of flagging or removing false claims and conspiracy theories with the help of a network of certified third-party fact-checkers. Facebook, for example, places educational pop-ups from the WHO and national health authorities at the top of result pages, while removing conspiracy theories that have been ‘flagged by leading global health organizations and local health authorities that could cause harm to people who believe them.’ The rather broad focus is on claims that are ‘designed to discourage treatment or taking appropriate precautions’, as well as on ‘claims related to false cures and prevention methods… or claims that create confusion about health resources that are available’.

The promotion of accurate information in cooperation with the WHO and national health ministries is a promising way of countering misinformation. More caution is called for with regard to the removal of false claims and conspiracy theories. Freedom of expression does not only protect truthful information, but may also extend to untruthful statements. It is important that individuals feel empowered to discuss their concerns about the spread of the disease and to criticise the response of public authorities, especially in view of the political uncertainty as to what an optimum response should entail. Public health is one of the narrow grounds for the restriction of free speech. However, under the principle of proportionality, it is imperative that there is a direct and immediate link between the expression and the alleged threat, and that the chosen method of restricting expression is necessary and proportionate. In the US context, the removal of false claims should be reserved for speech that is likely to incite imminent lawless action, as recognised in First Amendment doctrine. In cases that do not meet this threshold, the flagging of clearly inaccurate information is a more proportionate response than its outright removal. Still, the identification of such false information with the help of independent fact-checkers also bears risks. The characterisation of a fact-checking service as ‘independent’ is not cast in stone and can become a matter of contention. Facebook has cooperated with partisan fact-checkers in the past, and other platforms might tread the same path. Nor is it implausible to assume that erstwhile neutral fact-checkers could become subject to media capture. Even greater risks are posed by reliance on automated content moderation and on automated appeal and review processes. Recourse to automation has increased markedly of late as a result of workforces depleted by the pandemic.

4. Do you believe that these platforms should be held accountable through a code of practice or legislation? If so, how would that balance with the right to freedom of speech?

In recent years, there has been a reconsideration of the continued validity of the liability exemptions that have facilitated the platforms’ unbridled growth, especially with regard to potentially dangerous or illegal content. In the EU, the revision of the rules concerning online platforms is being discussed in the context of the Digital Services Act, which is intended to set ‘global standards which could be promoted at international level’, underpinned by a ‘duty of care’ and backed by a ‘European regulatory oversight structure’. It reflects the perceived need to harmonise the existing unwieldy ‘patchwork of national rules’, such as the German Network Enforcement Act or the French Avia Law, and possibly to influence the forthcoming UK Online Harms Bill. At the same time, the fact remains that platforms do not ordinarily curate content actively in the way editorial desks would. Social media platforms rely on their users to post or share content, and search engines depend on websites to make their offering accessible to them. Their editorial decision-making mostly relates to the organisation of content rather than its production. This difference needs to be taken into account when balancing increased accountability with the right to freedom of speech.

5. In your opinion, what would be an effective solution?

An effective solution would need to focus on the organisation of content rather than its production, building on the experience gained by audiovisual regulators under the Audiovisual Media Services Directive rules for video-sharing platforms. The organisation of content by social media platforms is a focal point of the German draft Interstate Media Treaty (Medienstaatsvertrag, MStV), which has been adopted after lengthy negotiations. The new Media Treaty is the first legislative attempt in Europe to regulate social media platforms’ algorithms for diversity and transparency. The extent to which the Treaty’s requirements will succeed in penetrating the opacity of algorithms, and in delivering information about the aggregation, selection and presentation of content that is intelligible yet detailed enough to be useful, remains to be seen. It is important that such transparency obligations also extend to platforms’ moderation policies, including those on misinformation. The criteria on the basis of which platforms police users’ content, as well as their user notification and appeal procedures, need to be informed by human rights and subject to scrutiny. Notably, the new Media Treaty also extends journalistic due diligence obligations to professional journalistic-editorial services that regularly contain news or political information, such as blogs of a non-private nature. Greater transparency for, and oversight over, platforms’ content policies, coupled with greater supervision of online news providers, are a starting point towards tackling the current information disorder. The findability of public interest content, as well as the provision of financial support for, and partnership with, trustworthy news media, also provide a powerful bastion against misinformation.

6. With ‘filter bubbles’ and ‘echo chambers’ playing a part in what users see, what measures can individuals take to be able to discern what’s a genuine news story and what isn’t?

The extent to which social media leads to the creation of hermetically sealed filter bubbles is contentious. There is evidence to the effect that users of social media, aggregators and search engines often enjoy a more diverse and balanced news diet than non-users. Doubtlessly, some social media users exhibit narrow, partisan consumption patterns. However, this might be a reflection of their conscious choices rather than a consequence of personalisation filters imposed on them by social media platforms, as implied by the filter bubble theory. In any case, individuals should resist the temptation to consume news that confirms their existing beliefs – a phenomenon described as ‘confirmation bias’ – regardless of the reliability of the news content. The development of critical thinking and digital verification skills is crucial to being able to discern what is a genuine news story. Measures that individuals can take to avoid falling into the trap of misinformation include: visiting trustworthy information sites, such as those of the WHO or the NHS; being vigilant when getting news from social media; checking a site’s URL and consulting different sources on a given story to critically evaluate its trustworthiness; and pausing and reading past the headline before sharing a story.