
Disinformation takes to the streets: Lessons from other countries that could inspire discussions in the UK

Published on 09 August 2024

The chronic problem of online disinformation has once again entered an acute phase as the world watches in disbelief as street riots unfold in several cities across the UK. Over the past months, Diplo, with the support of GIZ Moldova, has conducted comprehensive research on online content governance, shedding light on the interplay between disinformation and other digital policy areas, such as online business models, cybersecurity, and human rights. The research included case studies of four countries: Finland, Sweden, Lithuania, and Singapore. The takeaways could offer valuable insights into the ongoing crisis in the UK.

Riot police hold back protesters near a burning police vehicle in Southport, England, on July 30. (Getty Images)

The crisis: The spread of disinformation in the UK

UK protests erupted after false rumours were spread on social media about the identity and background of British-born Axel Rudakubana, who was responsible for the brutal mass stabbing of children in the town of Southport. He was falsely portrayed as an asylum seeker who had supposedly arrived in the UK by boat in 2023.

The authorities quickly disputed the false information, but the country’s far-right movements seized the opportunity to inflame the population. Online disinformation, amplified by social media, escalated into racially based hate speech and incitement to violence against immigrants. The unrest spilled over into the streets, leading to the arrest of almost 500 people.

Ofcom, the regulator of the UK communications industries, will have enhanced powers over content moderation and combating disinformation – including the application of heavy fines and the possibility of blocking non-compliant platforms – when the UK Online Safety Act of 2023 becomes enforceable. In the meantime, the government has put pressure on social media companies to act responsibly and remove harmful online content fuelling the protests.

The responses from platforms have varied widely: while some have pledged to cooperate, others have ignored the government’s requests or even exacerbated the protests. This has led to discussions on amending the Online Safety Act to strengthen governmental regulation over ‘legal but harmful content’. The UK is also examining the role of foreign states in sowing discord that leads to riots, but so far, there is no strong indication that this was a determining factor.


Disinformation as a common challenge: Lessons that could be relevant to the present crisis 

The complexity and scale of information pollution in the digitally connected world present an unprecedented challenge. Against this backdrop, a considerable number of initiatives to combat misinformation and disinformation have been introduced by governments and the private sector. Some conclusions can be drawn from the analysis of this broader picture:

  1. Any action to combat disinformation should be aligned with international human rights law, in order to protect the pillars of democratic societies.
  2. Any advocacy of national, racial, or religious hatred that constitutes incitement to discrimination, hostility, or violence is prohibited by international human rights law, regardless of any assessment of its truthfulness. Public authorities and companies alike are under an obligation to act against such content.
  3. Laws on disinformation that are vague, or that confer excessive discretion on governments to fight disinformation, are concerning and have led to censorship in some countries.
  4. Some countries have achieved good results in combating disinformation without enacting specific domestic laws. Sweden, for example, has focused on the Psychological Defence Agency, an independent body with the resources and capabilities to monitor threats and map social vulnerabilities (such as growing discontent, which makes a society particularly prone to fall prey to disinformation campaigns). The Agency also seeks to build long-term societal resilience against disinformation. Finland has achieved good results with an emphasis on media literacy, while Lithuania has relied on online civic engagement to debunk and prebunk disinformation.
  5. Concerns about disinformation increasingly relate to influence operations originating from abroad. Dissociating external information influence campaigns from legitimate domestic opinion is difficult, especially in the context of astroturfing (the practice of hiding the sponsors of a message or organisation to make it appear as though it originates from, and is supported by, grassroots participants). Moreover, focusing on external operations may lead actors to overlook genuine domestic societal vulnerabilities that need to be addressed before they are maliciously exploited by domestic or foreign actors.
  6. The introduction of laws on disinformation should seek to protect legitimate and fundamental aims – respecting the rights and reputations of others, and protecting national security, public order, or public health or morals – and must be legal, proportionate, and necessary. Any limitation imposed on freedom of expression must be exceptional and narrowly construed.
  7. More should be done to curb the economic incentives for disinformation. Companies are expected to conduct human rights risk assessments and due diligence, ensuring their business models and operations do not negatively impact human rights. This includes sharing data and information on algorithms, which could make it possible to assess the correlation between the spread of disinformation and ‘ad tech’ business models.
  8. Companies should ensure that their moderation practices are transparent, consistent, and based on clear guidelines that respect human rights. A consistent and harmonised approach should also be fostered across platforms in order to avoid safe havens for disinformation. For example, the EU Strengthened Code of Practice on Disinformation provides valuable suggestions on how platforms can assist in combating disinformation while respecting human rights standards.

Short-term firefighting and long-term solutions 

In moments of acute crisis, public authorities are under pressure to provide quick solutions to difficult problems. While immediate action is needed to curb incitement to discrimination, hostility, or violence, sustainable solutions require time and the involvement of society. People are not passive recipients but co-creators of messages: they choose what to read, ‘like’, and share. Just as physical resistance training helps to ward off chronic disease, boosting societal resistance to chronic disinformation is necessary. While laws can be an important piece of the puzzle, governments should engage in transparent policymaking processes, incorporating feedback from all relevant stakeholders, to create balanced, sustainable, and effective disinformation strategies.


In September 2024, Diplo will present the report from comprehensive research on the governance of online content and disinformation, conducted with the support of GIZ Moldova. If you would like to receive updates about the presentation and forthcoming discussions on this topic, please register below.

Note: By clicking on the register button, you are agreeing to our Privacy policy.
