
Guest blog: Cybersecurity vs privacy: Dissecting ‘good’ and ‘bad’

Published on 28 April 2016
Updated on 05 April 2024

The Apple vs FBI case is undoubtedly one of the biggest cybersecurity issues since the Snowden revelations, both in terms of what was exposed and the resultant effects. Now that the FBI has opened the iPhone 5c used by the San Bernardino shooters in California and the United States Department of Justice (DoJ) has withdrawn its court request for Apple to unlock the phone, many long-term implications will likely linger, as the larger dispute is far from over. In fact, the FBI has since paid more than $1 million to unlock the phone – and found no additional information that would assist the investigation – technology companies such as WhatsApp have rolled out end-to-end encryption for their services, others such as WordPress have enabled HTTPS encryption for free, and the DoJ may still require Apple’s assistance to unlock phones in pending cases.

I am not a cybersecurity expert and will not explore the facts of the case (for that, I suggest reviewing the 17 March webinar hosted by the Geneva Internet Platform). Looking critically at the commentary on the debate, however, exposes two particularly concerning trends: the framing of actors as ‘good’ versus ‘bad’, and the larger relationship between security and privacy, which are often juxtaposed as antagonists.

Given the amount of online content made available every day, it is easy to understand why relevant, readable, and concise information is crucial. Such information, though, should not come at the expense of perpetuating black-and-white thinking. This is especially true as the various sides of contentious Internet- and technology-related topics such as the Apple/FBI debate continue to become more polarised. While it may not have been the intention of the authors, examples including this post, this post, and this post all employ language that either oversimplifies the intentions and actions of both government and private companies, or criticises the use of simplified language when discussing the Apple/FBI issue.

I argue, rather, that there is no inherent ‘good’ or ‘bad’ when it comes to the intentions of the actors in question. It is the competing interests, and the values attached to those interests, that should be evaluated more thoroughly. By reducing the arguments on both sides to good or bad, we lose sight of the underlying reasons this ballooned into what Snowden called ‘the most important tech case in a decade’. For those who supported Apple’s stance, it is important to acknowledge the FBI’s larger interest in uncovering as much information as possible so that it and related governmental or security bodies could continue to investigate and thwart those who want to harm others. In addition, democratic societies such as the USA must respect the rule of law, regardless of whether they agree with it, and look to the judiciary to help them discern the most prudent course of action. This point particularly resonates with me as I live in a country where the president and other officials continuously disregard the rule of law, something that has had disastrous consequences for human rights and democracy.

Yet it is also important for those who supported the FBI to understand the fundamental, long-term dangers involved had Apple been ordered to comply – especially when the FBI’s own investigators made significant technical errors. The interests of those who supported Apple in defending privacy, cybersecurity, encryption, and human rights also need to be recognised. In an environment of deep distrust towards government surveillance and monitoring, it is important to remember that intrusive actions that violate human rights or infringe on personal privacy generate fear and insecurity. This, in turn, undermines and erodes confidence and participation in our vibrant digital spaces for dissent and expression. In the wake of the Snowden revelations, one commentator compared such a system to Michel Foucault’s analogy of the panopticon, a prison complex whose inmates must assume they are being watched at all times. Thus, regardless of whether we are doing something ‘bad’ online, simply knowing that the state may be spying on our phones or PCs could ultimately lead us to tailor our online habits to reflect that constant feeling of being watched. And as the abovementioned commentator asserted, this consequence ‘could be subtly corrosive of exactly the sorts of freedoms of expression and self-identity that liberal democracies purportedly protected absolutely.’

In fact, between the time I originally wrote this post and its publication on the DiploFoundation blog, new research published in Journalism and Mass Communication Quarterly concluded that ‘knowledge of government surveillance causes people to self-censor their dissenting opinions online.’ Even more troubling, the study’s author noted that participants who were steadfast in their belief that they had ‘nothing to hide’ – those who also tended to support mass surveillance as a necessity for national security – were the most likely to silence their minority opinions.

Another implicit assertion that encourages black-and-white thinking is that security and privacy are inherently mutually exclusive, when in fact they are not. For the sake of transparency, I agree that Apple or any other technology company creating a backdoor for its software would set a dangerous precedent, as Apple CEO Tim Cook continuously stressed. However, this case also highlighted that there are multiple avenues for cooperation between those who advocate for security, represented here by the FBI and the US Department of Justice, and those who advocate for privacy, represented by Apple, other private-sector technology companies, and certain civil society groups – specifically around solutions that promote both privacy and security while upholding human rights online. One suggestion is increased transparency on both sides to strengthen trust. Another is expanding open, sincere, and facilitated dialogue – in particular among government bodies, the private sector, and human rights organisations – that genuinely strives for human rights-centric, win/win outcomes and is rooted in fact and mutual understanding rather than coercion or scepticism.

The concluding message I want readers of this post to take away is that by polarising our thinking and simplistically labelling one side or position as good or bad, we lose sight of the ultimate purpose of such debates: creating a more secure cyberspace that encourages vibrant debate and respects human rights while helping the relevant authorities maintain a more peaceful world.
 


Michael J. Oghia is an Istanbul-based journalist and Internet governance enthusiast who represented the Internet Society (ISOC) as a 2015 IGF Ambassador in João Pessoa, Brazil. He completed DiploFoundation’s Introduction to Internet Governance course in April 2016. Twitter: @mikeoghia

Disclaimer: The author wrote this post as part of an assignment for DiploFoundation’s Introduction to Internet Governance course.

 
