
Countering terrorist narratives online: A balancing act

Published on 04 July 2016
Updated on 05 April 2024

Terrorists are using the Internet for a wide range of purposes. On the operational side, terrorist organisations use ICTs for internal communication and fundraising, while on the public relations side, the Internet has proven to be an effective vehicle for disseminating and promoting terrorist ideologies. The spread of extremist propaganda over the Internet can have severe effects in the offline world, with last month’s shooting in Orlando a grave example of what online radicalisation can lead to.

As terrorists become increasingly sophisticated in their use of social media, and as these online platforms reach more and more people around the world, the threat of youth radicalisation by terrorists has come into focus for many decision-makers. However, countering radicalisation and extremist messages is a different undertaking from countering operational terrorist activities. Questions have arisen concerning the line between free speech and countering terrorist content, as well as which actors should be involved in making this effort and striking this balance.

The developments over the last three months signal heightened concern about online radicalisation. In April, official statements by the foreign ministers of Russia, India, and China highlighted the need to counter the rise of online terrorist content. In the same month, a closer look was taken at the operations of the British counter-terrorism unit, which reported that removals of extremist online content had risen from fewer than 2,000 instances in 2012 to a rate of almost 300 pieces a day this year. In May, the topic reached the level of the UN Security Council, which held an open debate on countering the narratives and ideologies of terrorism. The topic was further addressed by the G7 leaders in Japan.

In addition to discussions at the political level, countering terrorist narratives has also become a concern for the private sector, most notably the Internet industry. On 23 May, Microsoft published its policies related to online terrorist content, stating that it feels ‘a responsibility…not to contribute, however indirectly, to terrible acts’. Although tensions between Silicon Valley and US authorities have been growing in different areas of digital policy (e.g. on topics such as encryption, privacy, and taxation), Internet companies seem increasingly willing to engage with the public sector in the fight against online terrorist narratives. This was also evidenced by the speech given by Steve Crown, Microsoft’s Vice-President, during the Security Council meeting. He stressed the need for public-private partnerships, arguing that ‘if there were an elegant solution, industry would have adopted it.’ Furthermore, the European Union has developed a Code of Conduct on illegal online hate speech in cooperation with Facebook, Twitter, YouTube, and Microsoft.

The evident willingness of both the public and the private sectors to counter online radicalisation together presents a hopeful message. Yet the practical operation of counter-extremist campaigns needs to be very carefully balanced with the right to freedom of expression. There is a delicate line between protecting security and promoting online censorship, and the location of this line is very much open to interpretation. This concern was highlighted by David Kaye, UN Special Rapporteur on freedom of expression, who argued that ‘violent extremism’ could be used as the ‘perfect excuse’ by governments to limit freedom of expression. This caution was reiterated in his recently published report on freedom of expression in the digital age. The right formula for content policy, one that ensures the maximum possible level of freedom of expression while keeping radicalisation to a minimum, can only be found through continued dialogue between the security and human rights communities. This was also recognised by the G7, which stressed ‘the need for continued cooperation with the private sector, civil society and communities in investigating, disrupting and prosecuting terrorists’ illegal activities online while respecting human rights and fundamental freedoms’.

Apart from ensuring a continuous consideration of free speech in the design of counter-radicalisation policies, the topic of online radicalisation and extremism itself needs to be explored in further detail. As John Crowley, UNESCO’s Chief of Research, explained during the WSIS Forum in May, too many of the assumptions underlying counter-extremist measures have never been validated by proper investigation. According to him, we need serious, interdisciplinary, scientific research and evidence to understand whether our assumptions about online radicalisation are valid, so that we do not create policies that do more harm than good. A proper understanding of the phenomenon might be enhanced with insights from the domains of psychology and anthropology, and not only from the political and tech communities.

In June, the topic gained a legal dimension, as the father of one of the victims of last November’s Paris attacks accused Google, Facebook, and Twitter of offering ‘material support’ to terrorists. He claims that the companies ‘knowingly permitted’ terrorists’ recruitment, fundraising, and the spread of ‘extremist propaganda’. The court is unlikely to rule in the father’s favour, as the tech companies ‘are generally exempt from liability for the materials users post on their networks’. Yet it will be interesting to see how the legal debate is framed, and whether the case sets a precedent for others.

In sum, the commitment of world leaders and industry chiefs is an important step towards counter-radicalisation at a time when terrorists have great potential to influence and radicalise youth online. At the same time, the search for solutions needs to involve constant attention to the protection of free speech and a commitment by researchers to understand the complexity of the phenomenon. In the end, sustained dialogue is needed to strike the right balance between freedom of expression and security; this is not a balance that can be struck in one go, but rather a process that requires continuous attention.

Barbara Rosen Jacobson is a Curator for the GIP Digital Watch observatory. This blog was adapted from an article originally published in Issue 11 of the Geneva Digital Watch newsletter. Issue 12 is out now. Download your copy.

Want to know more about cyberterrorism? Join our online course on Cybersecurity (starting in October 2016)
