
Will algorithms make safe decisions in foreign affairs?

Published on 17 December 2019
Updated on 05 April 2024

Artificial intelligence (AI) is starting to influence decision-making in foreign affairs and diplomacy. It has been reported that in recent years a Chinese AI system has reviewed almost every foreign investment project. The Chinese Department of External Security Affairs (涉外安全事务司), under the Ministry of Foreign Affairs, has used AI systems in several ways. For instance, AI was applied to decision-making on foreign investment connected to China’s Belt and Road Initiative, where a total investment of around US$900 billion carries high political, economic, and environmental risks. The Chinese government has also been considering applying AI systems to the defence sector, which demands accurate judgement and precise technology.

The Chinese Ministry of Foreign Affairs has also unveiled an AI system for foreign policy suggestions, built by the Chinese Academy of Sciences. Because the ministry believes that AI, big data, and other advanced technologies are increasingly being applied across industries and sectors, it intends to keep applying new technologies to its own work. While China aims to overtake the USA as the global leader in AI innovation, its AI drive has been affected by US sanctions, as the US government has moved to curb China’s ambition to lead the global AI market.

In addition, the Israeli diplomat Elad Ratson has introduced an algorithmic approach to the practice of digital diplomacy. This so-called algorithmic diplomacy relies on harnessing algorithms to influence the flow of country-related narratives online. An algorithmic foreign policy system analyses vast amounts of data gathered from sources ranging from diplomatic cocktail parties to video footage captured by spy satellites. Based on these datasets, the AI system suggests plausible strategies that can be applied to the actual practice of diplomacy.

As the use of AI becomes an integral part of foreign policy decisions, many believe that AI is capable of predicting international events and will therefore have clout in shifting geopolitics. It would be fair to say that AI will be able to analyse a myriad of cases and datasets in the highly strategic game of diplomacy without being influenced by emotions. Unlike human operators, AI systems will not be swayed by favours or by emotions such as fear, and will not act on short-sighted impulses to retaliate against an adversary. This recalls Alan Turing’s prediction that machines would eventually ‘outstrip our feeble powers’.

Against the backdrop of the AI race, many nation states are – nolens volens – developing AI-based predictive capabilities. AI is widely regarded as a cost-effective tool, and in the digitalisation of diplomacy it can reduce human error to a certain extent. Machines, however, lack the capacity to perceive the value of human life or the calamity of military operations. Does AI understand the iniquitous disregard of the rights of small nations? Probably not, unless this has been built into the system explicitly. Moreover, as Taylor Owen has pointed out, much of the data used in AI is inputted and tagged by humans, so AI systems are bound to be riddled with human bias and error. For now, then, AI systems are powerful but unreliable. Concern over the risks of an AI arms race ranges from algorithmic foreign policy suggestions to AI-based defence systems and killer robots. The central worry is that AI’s bias, or the so-called overconfidence trap, could pave the way towards disastrous decisions. What if, for instance, AI systems prioritise ‘winner-takes-all’ options in foreign policy?

Science-fiction films commonly depict doomsday scenarios that begin with fierce military attacks between hostile countries. In the future, such a scenario might stem from the misclassification or miscalculation of an AI system. It often seems that the human factor is what stands between us and disaster on a global scale. Back in October 1962, would an AI system have handled the Cuban missile crisis as John F. Kennedy did? Probably not: suggestions made by machine learning algorithms could have been biased, or could have misdiagnosed the situation by disregarding subtle clues. Making AI accountable – AI explainability – is a quest that still has a long way to go. By contrast, the White House staff at the time had to weigh the importance of each informative clue, reading the intention and context behind every move. Those who made the decisions were also those who bore responsibility for the geopolitical tension of the moment.

In September 1983, the world stood at the precipice of nuclear war because of false alarms from a Soviet computer system. Just after midnight, Stanislav Petrov’s computer screen showed several missiles launched toward the Soviet Union. He judged this to be a false alarm and, on that judgement, disobeyed orders and Soviet military protocol. In doing so, he probably averted the beginning of the Third World War. Similar incidents can recur in the age of AI. Would an AI system be able to act as wisely as Petrov did? Worse still, AI systems make it easy to transfer human responsibility for foreign affairs decisions onto the machine itself. The buck will not stop with an AI system; rather, AI provides a convenient pretext for passing it. Who, then, would take responsibility for such a decision?

AI-powered strategy for foreign relations is already on its way. According to a RAND report, AI even has the potential to upend the foundations of nuclear deterrence by 2040 – a change that could have dire consequences for humanity. How can we be sure that safety measures or self-restraint mechanisms are in place? Deterrence is a strategy that dissuades an adversary from acting by threatening reprisal; a psychological approach to deterrence, however, will not work on an AI system, because it has no fear of retribution. In addition, the black box problem – AI making decisions that no human can explain – lurks in the methods developers use to train their algorithms. If developers cannot work out exactly how a miscalculation occurred, flawed decisions may follow: the real risk of miscalculation. The Group of Governmental Experts on Lethal Autonomous Weapons Systems (CCW GGE) is aware of the military applications of AI technologies, but there is still no working definition of lethal autonomous weapons systems. Recognising the impasse, in March 2019 UN Secretary-General António Guterres openly urged that no country or armed force should favour fully autonomous weapons systems capable of taking human life. This highlights the need for new global institutions and agreements to cope with these emerging technological challenges. To allay fears about AI, a researcher at the Shanghai Institute for International Studies explained that an AI policy system is a planning tool that merely supports strategic decisions: AI assists humans, and the final decision always rests with those in authority.

In a situation of simmering regional conflicts or geopolitical tensions, a compulsive move to seek competitive advantage could justify blind trust in an AI system. The worst-case scenarios are not far behind. Relying too heavily on a faulty AI system could trigger autonomous or semi-autonomous missile-defence systems, or launch a large-scale cyberattack, unleashing a far wider disaster for everyone involved. Once such a scenario has begun, nobody can stop it from unfolding. The real threat comes from over-relying on AI rather than from the technology itself.

Geopolitics may be implicitly transformed by a series of suggestions based on simulation and prediction in foreign policy. AI can predict some shifts in geopolitics, but we often have no idea how it arrives at its predictions: the calculations of an AI system take place inside a ‘black box’ and rest heavily on datasets. Many data scientists have demonstrated that classifiers are highly vulnerable to adversarial perturbations. If adversaries feed in fake data – for example, false air strike images on radar – they can cause misclassifications or false alarms that the AI system perceives as actual threats. Yet the shortcomings of AI systems are often outstripped by ungrounded expectations of their accuracy. Short-sighted countries may acclaim AI predictions in foreign policy as a game changer, but at the same time we need to confront the apocalyptic risks of such a scenario.
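To make the point about adversarial perturbations concrete, here is a minimal, hypothetical sketch in Python. Everything in it – the synthetic ‘radar images’, the toy logistic regression classifier, and the numbers – is invented purely for illustration and does not describe any real system. It only shows the underlying mechanism: a small, systematic nudge to every input pixel, chosen in the direction that raises the ‘threat’ score, can push a benign input across the decision threshold and produce a false alarm.

```python
import numpy as np

# Minimal, hypothetical sketch of an adversarial perturbation: all data and
# parameters here are synthetic and invented for illustration only.
rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy 'radar images': 64 pixels each; label 1 ('incoming object') if the pixels sum above zero.
X = rng.normal(size=(400, 64))
y = (X.sum(axis=1) > 0).astype(float)

# Train a simple logistic regression classifier with gradient descent.
w, b = np.zeros(64), 0.0
for _ in range(500):
    p = sigmoid(X @ w + b)
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

# Take a benign example the model scores as 'no threat' (score below 0.5).
x = X[y == 0][0]
clean_score = sigmoid(x @ w + b)

# Adversarial nudge: move every pixel slightly in the direction that raises the
# threat score (for a linear model, the sign of the weights), using the smallest
# uniform per-pixel step that pushes the score past the 0.5 decision threshold.
margin = float(x @ w + b)                        # negative for a 'no threat' example
epsilon = abs(margin) / np.abs(w).sum() + 1e-3   # just enough to cross the boundary
x_adv = x + epsilon * np.sign(w)

print("clean threat score:    ", clean_score)
print("per-pixel perturbation:", epsilon)        # typically a small fraction of the pixel scale
print("perturbed threat score:", sigmoid(x_adv @ w + b))  # now above 0.5: a false alarm
```

The same accumulation of many tiny, deliberately chosen changes is what makes far larger, real-world classifiers vulnerable, and it is why fabricated inputs fed to an AI system can be perceived as genuine threats.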

Eun Chang Choi is a Korean legal scholar focused on information law, data governance, AI-powered strategy for foreign relations, and the governance of AI. He has conducted research on internet regulation at the Information Society Project, Yale Law School, and the Centre for Socio-Legal Studies, University of Oxford. He is the author of the books ‘The Future of Fake News’ and ‘Layered Model of Regulation’, and has taught at the ITU Centres of Excellence Network for Asia-Pacific Region and the Korea University Graduate School of International Studies. He serves on the steering committee of the Korea Internet Governance Alliance.

 
