From algorithms to Armageddon: The rise of AI in nuclear decision-making
The year 2022 marked a turning point in the development of artificial intelligence (AI): it was the year OpenAI launched ChatGPT, which, according to Sam Altman, acquired over 1 million users within five days of launch. Earlier that same year, the war in Ukraine began, bringing the narrative of nuclear Armageddon back to the forefront of global politics. Two seemingly unrelated topics, AI and nuclear weapons, have since sparked numerous debates among the world’s brightest minds, becoming the subject of scientific conferences, media headlines, and apocalyptic predictions. What binds them is a shared narrative: each has the potential to end human civilisation. The possible integration of doomsday weapons and superintelligence represents a complex challenge humanity may one day face. Let us not forget that more than 12,100 nuclear weapons currently exist worldwide, with no signs of reduction; we remain capable of destroying the planet many times over. We are caught between the fear that nuclear weapons could be used, whether in Ukraine or another volatile region (the Middle East, the Korean Peninsula, or the Indian subcontinent), and a growing dread of reaching the point of singularity, where humanity becomes intellectually inferior to artificial intelligence. What does the symbiosis of these two fears look like?
AI and nuclear sector integration
Integrating AI into the nuclear domain has the potential to significantly enhance nuclear decision-making and crisis management, establishing more effective and safer methodologies. AI can analyse vast quantities of data in real time, from satellite imagery to patterns of military activity and monitored communications, facilitating better-informed decisions at pivotal moments. AI can also model potential situations and outcomes based on historical data, providing leaders with predictive analytics that help them see the consequences of different courses of action. Such foresight could profoundly shape the formulation of nuclear strategy, where erroneous calculations may lead to catastrophic consequences. AI could further facilitate communication, keeping military and diplomatic channels synchronised and informed without the need for real-time human presence. Nevertheless, are ethical considerations adequately addressed if AI is used in such high-stakes decisions, with humans relying completely on machines that bear no responsibility? AI promises to improve nuclear decision-making, but it requires careful, continuous, and properly assessed implementation to mitigate risk in this sensitive area.
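As a purely illustrative sketch of what such predictive analytics might look like in miniature, the toy Python simulation below runs Monte Carlo trials over invented escalation probabilities for hypothetical courses of action. Every scenario name and number is an assumption made up for illustration, not an estimate drawn from real crisis data or any actual decision-support system.

```python
# Toy Monte Carlo sketch of "predictive analytics" for crisis outcomes.
# All action names and probabilities are invented placeholders; this only
# illustrates the idea of simulating outcomes many times over.
import random

# Hypothetical per-step escalation probabilities, as if estimated
# from historical crisis data (pure assumptions for illustration).
ACTIONS = {
    "blockade":   0.05,   # chance a single step escalates the crisis
    "airstrike":  0.30,
    "do_nothing": 0.15,
}

def simulate(action: str, steps: int = 10, trials: int = 100_000) -> float:
    """Return the fraction of simulated crises that escalate within `steps` moves."""
    p = ACTIONS[action]
    escalated = 0
    for _ in range(trials):
        # The crisis escalates if any single step tips over.
        if any(random.random() < p for _ in range(steps)):
            escalated += 1
    return escalated / trials

for action in ACTIONS:
    print(f"{action:>12}: estimated escalation risk = {simulate(action):.1%}")
```

The point of the sketch is only the shape of the idea: outcomes are simulated many times under different actions, so leaders see aggregate risk estimates for each option rather than a single prediction.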
Beyond decision-making, AI could improve operational efficiency and safety across the broader nuclear sector, including nuclear medicine, electricity production, and nuclear weapons management. For instance, deep learning methods are already in practice to augment diagnostic systems in nuclear power plants, enabling more reliable detection of incipient accidents and operational anomalies. AI systems such as GRU autoencoders (GRU-AE) and LightGBM offer deeper insight into the operational status of nuclear power plants, supporting early-stage intervention before faults escalate into exposure risks. Los Alamos National Laboratory is running three projects that apply such techniques to improve the performance of particle accelerators, machines that propel particles to enormous speeds for research across diverse fields, while another project applies deep learning to complex challenges in fusion reactor design.
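To make the anomaly-detection idea concrete, here is a minimal sketch of how a GRU autoencoder (GRU-AE) can flag unusual sensor readings: the model learns to reconstruct windows of normal operation, and any window it reconstructs poorly is treated as anomalous. The architecture sizes, synthetic data, and threshold below are all illustrative assumptions, not a description of any deployed plant monitoring system.

```python
# Minimal GRU-autoencoder anomaly-detection sketch on sensor time series.
# Dimensions, data, and threshold are illustrative placeholders only.
import torch
import torch.nn as nn

class GRUAutoencoder(nn.Module):
    def __init__(self, n_sensors: int = 8, hidden: int = 32):
        super().__init__()
        self.encoder = nn.GRU(n_sensors, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_sensors)

    def forward(self, x):                    # x: (batch, time, n_sensors)
        _, h = self.encoder(x)               # compress the window into a latent state
        z = h[-1].unsqueeze(1).repeat(1, x.size(1), 1)  # repeat latent across time
        out, _ = self.decoder(z)
        return self.head(out)                # reconstructed window

model = GRUAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Train only on windows of *normal* operation (synthetic stand-in data here),
# so the model learns to reconstruct normal behaviour well.
normal = torch.randn(256, 50, 8) * 0.1
for _ in range(20):
    opt.zero_grad()
    loss = loss_fn(model(normal), normal)
    loss.backward()
    opt.step()

# At monitoring time, a window that reconstructs poorly is flagged as anomalous.
with torch.no_grad():
    window = torch.randn(1, 50, 8)           # an unseen sensor window
    err = loss_fn(model(window), window).item()
threshold = 0.05                             # would be calibrated on validation data
print("anomaly" if err > threshold else "normal", f"(reconstruction error={err:.4f})")
```

In practice the threshold would be calibrated on held-out normal data, and a gradient-boosting model such as LightGBM could complement this by classifying labelled fault types; the sketch shows only the core reconstruction-error mechanism.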
The impact of AI on nuclear deterrence: navigating the complexities
There is intense debate among specialists about the effects AI might have on nuclear deterrence, owing to its potential to automate decision-making in military contexts. AI might indeed render crisis decision-making more rational and could help prevent accidental launches resulting from unintentional escalation. But AI-based decision-making about the use of weapons carries huge ethical and practical implications. Experts disagree on how AI will develop and how it is likely to influence nuclear stability. Some, often labelled ‘subversionists’, believe that adversaries will feed false data into AI systems, leaving decision makers misled and diverted. Beyond that, sophisticated AI algorithms may completely change how warfare is designed, how forces are shaped, and how command and control systems operate. The uncertainty is potentially enormous: nuclear deterrence rests on a fragile network of risk perceptions, and AI could either strengthen or undermine stability. Policymakers may not grasp the logic underpinning an AI recommendation when it conflicts with their own intuition, creating a crisis of trust. Humans also have a poor intuitive grasp of probability and weigh costs and benefits inconsistently, which makes adhering to AI advice difficult. As AI evolves further within nuclear systems, it becomes increasingly urgent to involve the public, the research community, and policymakers in discussions of these complexities, with serious consideration given to the responsible development and deployment of the technology.
The Cuban Missile Crisis of 1962 offers an unfortunate encyclopaedia of the complexities of nuclear decision-making. During that event, two superpowers came to the brink of a nuclear strike that nearly ignited through misperception and miscalculation. Introducing AI into such a volatile arrangement could have worsened conditions and led to catastrophic results. As we later learned, there were several occasions when decision makers set aside standard operating procedures and relied on human intuition to avoid nuclear exchange (most famously, Soviet naval officer Vasili Arkhipov’s refusal to approve a nuclear torpedo launch from the submarine B-59). The case suggests that genuine autonomy requires adaptability in evolving situations, not merely an automated decision procedure. Is artificial intelligence truly autonomous? The human mental system is a closed system, and actions based on reading each other’s intentions are, in the ultimate sense, unpredictable. In other words, ‘unknowability’ is the basis of the human capacity to make choices and decisions free from external predetermination (so-called ‘free will’). While AI may appear on the surface to be endowed with human-like free will, it is in fact an adapted, heteronomous system, which makes it incapable of deciding in an entirely autonomous way. When options are ambiguous and nothing indicates which one should be preferred, the social outcome produced by a human (governmental) decision for one particular option is what we link to ‘responsibility’. ‘Free will’ and ‘responsibility’ are therefore concepts associated with closed autonomous systems, and cannot be attributed to heteronomous systems such as AI.
The justified fear of losing human control over nuclear weapons systems has prompted concrete action by the world’s most powerful countries. The United States is today leading efforts to preserve sufficient human control over the use of AI in the military domain and to limit the use of AI in nuclear weapons systems.
The application of AI in the military is fraught with difficulties and risks. States hold a monopoly on force and, in the last instance, will very likely succeed in limiting AI’s participation in decision-making that could lead to the destruction of the planet. The US initiative on responsible military use of AI calls not only on states but on all stakeholders, including companies, international organisations, universities, and civil society, to put in place measures that increase transparency and communication and reduce the risks of inadvertent conflict and escalation.
On the sidelines of the APEC summit in November 2024, the leaders of the USA and China, two adversaries, agreed that artificial intelligence should not be granted command over nuclear weapons systems. No matter how at odds they are, states will always agree on one thing: suppressing the development of mechanisms that are not under their control.
Human values vs machine logic: ethical tensions in AI-managed nuclear systems
The integration of AI into nuclear deterrence systems poses significant ethical challenges that cannot be ignored. AI systems, driven by machine logic, can make decisions that conflict with fundamental human values such as the sanctity of life, freedom, proportionality, and responsibility. A purely logical reasoning process based on value-free calculation could have disastrous consequences: the machine has no compassion for massive human casualties and, in some situations, may recommend the use of nuclear weapons guided solely by a rational calculation of costs and benefits. Yet if a human hand inserts into the AI an algorithm declaring that human life is priceless and the most important variable in deciding whether to press the nuclear button, the AI’s autonomy is violated and the very rationale for introducing it becomes meaningless. In the final analysis, homo sapiens will always better understand other homo sapiens and the decisions they make, which only in rare cases are a pure reflection of rationality.
Unquestioning human obedience to the decisions of artificial intelligence raises serious questions about the ethical responsibility of human decision makers. The possibility, mentioned above, that adversaries could manipulate AI inputs to mislead decision makers is among the most acute of these challenges. Striking a balance between the possible benefits of AI and the serious harm it could cause requires a delicate, well-informed approach, grounded in a deep understanding of both technology and ethics.