On 26 March 2025, DiploFoundation, together with the Permanent Missions of Kenya, the Kingdom of the Netherlands, Pakistan, the Republic of Korea, and Switzerland to the United Nations in Geneva, and the Permanent Representation of France to the Conference on Disarmament, hosted a diplomatic dialogue on AI and international peace and security.
About
Artificial intelligence (AI) and its implications for international peace and security have become increasingly prominent on the global diplomatic agenda. Beyond discussions on lethal autonomous weapons systems, held within the framework of a dedicated group of governmental experts since 2017, broader debates have emerged on the impact of AI on the international security environment.
AI’s role in the military domain has become an increasingly important area of focus as technology advances and military strategies evolve. Its impact on conflict dynamics, its benefits and risks in military applications, and the principles necessary for its responsible use remain key concerns for the international community.
In 2024, the global conversation on AI and international peace and security gained momentum with several major milestones. At the UN level, the Pact for the Future outlined a commitment by member states to assess the risks and possible opportunities associated with the military applications of AI throughout their life cycle. Furthermore, the 79th session of the UN General Assembly adopted the first resolution on AI in the military domain, which also encouraged member states to convene exchanges on the responsible application of AI in the military field.
Outside of the UN system, multistakeholder initiatives such as the Responsible AI in the Military Domain (REAIM) conference aim to ensure that military AI applications are developed, deployed, and used responsibly. The second REAIM Summit in 2024, for instance, saw the adoption of a Blueprint for Action endorsed by 63 countries. Looking ahead, the UN Secretary-General will prepare a report on AI’s opportunities and challenges for international peace and security, and international initiatives such as the UNIDIR Global Conference on AI, Security, and Ethics (March 2025) and the third REAIM Summit in Spain later in the year will further shape discussions.
Programme
10:00 – 10:15 | Welcoming remarks
10:15 – 11:15 | Setting the scene
Overview of recent and ongoing processes, including UNGA resolutions, the Pact for the Future, the REAIM Blueprint for Action, and upcoming initiatives.
11:15 – 11:30 | Break
11:30 – 13:00 | International cooperation on responsible AI in the military domain
An exchange of views on the role of member states, the UN, and other stakeholders in advancing international cooperation on the responsible application of AI in the military domain. Discussion on challenges, opportunities, and Geneva’s potential role.
***
This is an invitation-only event. It is part of the Diplomatic Dialogues on AI series, launched in January 2025 to provide a space for Geneva-based diplomats to engage in open and informal debates on AI governance issues.
AI Assistant on AI in the military domain
Disclaimer: This AI assistant has been built for illustrative purposes, to showcase capabilities of generative AI. The training dataset behind this assistant is not the result of extensive research; other relevant resources may be missing. As such, the assistant may not always provide as detailed or as comprehensive answers as expected.
The assistant can be used with GPT-4o, DeepSeek V3 (chat mode), or Mixtral 8x7B Instruct v0.1.
Sources used by AI Assistant
The following resolutions, reports, and publications by various UN entities are used by the AI assistant. Additional resources such as outcomes of REAIM summits are also included here.
Title: AI Beyond Weapons: Application and Impact of AI in the Military Domain
Description: This report explores the diverse applications of artificial intelligence in the military, beyond autonomous weapons, including logistics, intelligence gathering, and decision-making support. It assesses the implications and potential impacts on military operations and global security.
Link
Title: AI in the Military Domain: A Briefing Note for States
Description: This briefing note offers a concise overview of critical issues related to military AI, targeting state representatives and policymakers. It addresses the current technological landscape, ethical concerns, and challenges in international governance.
Link
Title: The Global Kaleidoscope of Military AI Governance
Description: This document examines the varied and evolving governance frameworks for military AI worldwide, showcasing different national and international approaches. It discusses the complexities of achieving unified global governance in this field.
Link
Title: Background on LAWS in the CCW
Description: This webpage details the Convention on Certain Conventional Weapons (CCW) and its discussions on lethal autonomous weapons systems (LAWS), providing historical context, key debates, and the current state of international law.
Link
Title: Joint Call for Prohibitions and Restrictions on Autonomous Weapons Systems
Description: A joint UN SG and ICRC appeal urging global leaders to establish norms and regulations for autonomous weapons systems, stressing the urgency of protecting humanity from their risks.
Link
Title: State Positions on Autonomous Weapons Negotiations
Description: This webpage compiles various countries’ official stances on negotiating international agreements for autonomous weapons systems, offering insights into diverse perspectives in ongoing talks.
Link
Title: Revised Rolling Text on LAWS (November 2024)
Description: This document presents the latest draft from the Group of Governmental Experts on Lethal Autonomous Weapons Systems as of November 2024, reflecting current negotiations and areas of consensus or contention.
Link
Title: UN General Assembly Resolution 79/62
Description: This resolution text from the UN General Assembly addresses disarmament issues, potentially including autonomous weapons or military AI, detailing decisions and recommendations adopted.
Link
Title: Our Common Agenda: A New Agenda for Peace
Description: This UN policy brief outlines a vision for peace, tackling global challenges like AI in conflict and security contexts, and proposing strategies for international cooperation and governance.
Link
Title: Pact for the Future
Description: Adopted at the UN Summit of the Future, this document outlines commitments to address global challenges, possibly including AI governance in military contexts, with actionable plans.
Link
Title: Governing AI for Humanity: Final Report
Description: This report provides a thorough analysis and recommendations for AI governance to benefit humanity, covering ethics, policy, and international cooperation, with relevance to military uses.
Link
Title: UN Security Council Meeting on Artificial Intelligence
Description: This page details a UN Security Council meeting exploring AI’s implications for peace and security, including the agenda, participants, and key discussion points.
Link
Title: Press Release: Security Council Meeting on AI (2023)
Description: This press release summarizes a 2023 UN Security Council meeting on AI’s global security impact, highlighting key statements and positions from member states.
Link
Title: Verbatim Record of Security Council Meeting 9381
Description: This official transcript records the 9381st UN Security Council meeting, discussing AI’s role in international peace and security.
Link
Title: 9821st Security Council Meeting on AI
Description: This page covers the 9821st UN Security Council meeting focused on AI in security, with summaries and links to related documents.
Link
Title: Security Council, 79th year: 9821st meeting
Description: This is a record of a UN Security Council session on AI and security, detailing proceedings and statements made.
Link
Title: Press Release: Security Council Meeting on AI (2024)
Description: This 2024 press release recaps a UN Security Council meeting on AI, summarizing key points, outcomes, and any issued statements.
Link
Title: UNGA resolution 79/239: Artificial intelligence in the military domain and its implications for international peace and security
Description: This is the first resolution adopted by UNGA on AI in the military domain.
Link
Title: Chair’s Summary of UNDC Working Group II | April 2024 session
Description: This summary from the April 2024 discussions of Working Group II of the UN Disarmament Commission captures key discussions and outcomes on disarmament topics, including AI and autonomous weapons.
Link
Title: UN General Assembly Resolution 79/23
Description: This UN General Assembly resolution covers issues related to the role of science and technology in the context of international security and disarmament.
Link
Title: UN SG report on science, technology and international security
Description: This report by the UN Secretary-General explores current developments in science and technology and their potential impact on international security and disarmament efforts.
Link
Title: Human Rights Council Advisory Committee Session 32
Description: This page outlines the thirty-second session of the UN Human Rights Council Advisory Committee, including a Draft report on human rights implications of new and emerging technologies in the military domain.
Link
Title: UNIDIR Focus Area: Artificial Intelligence
Description: This UNIDIR section researches AI’s impact on disarmament, security, and arms control, offering publications and resources on AI governance.
Link
Title: UNIDIR Work on AI and Autonomy
Description: This UNIDIR page examines AI and autonomy in weapons systems, addressing technical, ethical, and legal challenges in the military domain.
Link
Title: UNIDIR 2024 ABDM Report
Description: This 2024 UNIDIR report analyzes AI’s role in disarmament and arms control, exploring trends, challenges, and opportunities for global cooperation.
Link
Title: UNIDIR RAISE Initiative
Description: UNIDIR’s RAISE initiative promotes responsible AI in security contexts through research, dialogue, and capacity-building to address governance challenges.
Link
Title: Global Conference on AI, Security, and Ethics 2025
Description: This UNIDIR-hosted 2025 conference page details themes, objectives, and participation info, focusing on AI, security, and ethics intersections.
Link
Title: AI Action Summit Military Talks
Description: This site for the AI Action Summit Military Talks offers details about the event.
Link
The following ICRC reports are used by the AI assistant:
Title: 2024 ICRC Report on IHL Challenges
Description: This report examines the challenges posed by artificial intelligence and other emerging technologies to international humanitarian law (IHL). It explores the ethical, legal, and operational implications of these technologies in armed conflict, emphasizing the need for updated regulations and compliance mechanisms.
Link
Title: Expert Consultation Report on AI and Related Technologies in the Military
Description: This report compiles insights from a diverse group of experts on the use of AI and related technologies in military contexts. It addresses technical, ethical, and policy considerations, offering recommendations for the responsible development and deployment of these technologies.
Link
Title: ICRC Position on Autonomous Weapon Systems
Description: This document presents the ICRC’s official stance on autonomous weapon systems. It stresses the necessity of maintaining human control over the use of force in warfare and advocates for strict regulations to ensure compliance with international humanitarian law.
Link
Title: Publication 4283: Autonomous Weapons Systems
Description: This comprehensive publication provides an in-depth overview of autonomous weapons systems, including their definitions, technical capabilities, and the ethical and legal challenges they pose. It serves as a valuable resource for understanding the ICRC’s perspective and the broader implications of these technologies.
Link
The following reports and studies by governments, think tanks and other entities are used by the AI assistant:
Title: REAIM 2023 Call to Action
Description: Issued at the 2023 REAIM conference, this document outlines commitments and recommendations for ethical military AI use.
Link
Title: REAIM 2024 Conference Detail
Description: This REAIM 2024 conference page details a specific event segment, like a panel on responsible military AI, with agendas and outcomes.
Link
Title: Global Commission on Responsible AI in the Military Domain
Description: This page introduces a commission developing ethical guidelines for military AI, detailing its mandate, members, and activities.
Link
Title: Political Declaration on Responsible Military Use of AI and Autonomy
Description: This declaration sets principles for the responsible military use of AI and autonomy, reflecting its endorsers’ shared commitment to ethical and legal standards.
Link
Title: Artificial Intelligence and National Security
Description: This RAND report offers an in-depth analysis of AI’s role in national security, addressing its potential to enhance military capabilities, the risks posed by adversarial AI, and the need for governance frameworks to balance innovation and security.
Link
Title: Artificial Intelligence and National Security
Description: Published by the Belfer Center, this report examines AI’s strategic implications for national security, including its impact on military power, intelligence, and decision-making, while offering policy suggestions to address its dual-use challenges.
Link
Title: Killer Robots: UN Vote Should Spur Action on Treaty
Description: This news article covers a UN vote on autonomous weapons, dubbed “killer robots,” advocating for the vote to drive efforts toward an international treaty to regulate or ban them, emphasizing human rights concerns.
Link
Title: A Guide for Policy Makers on Autonomous Weapons
Description: This guide targets policymakers, offering an overview of autonomous weapons systems and recommendations for addressing their ethical, legal, and security challenges through policy and international collaboration.
Link
Title: How Modern Militaries Are Leveraging AI
Description: The Atlantic Council details how global militaries are adopting AI for logistics, surveillance, and autonomous weapons, highlighting key trends and ethical considerations influencing modern warfare.
Link
Title: Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy
Description: This declaration outlines principles for ethical military AI use, emphasizing human accountability, transparency, and international collaboration to prevent misuse.
Link
Title: Ethics and Insights: The Ethics of AI in Warfare
Description: This Naval Postgraduate School document explores ethical challenges of AI in warfare, such as accountability and proportionality, proposing frameworks to align AI with humanitarian and military ethics.
Link
Title: The Ethics of Autonomous Weapons
Description: This Stanford news article discusses ethical concerns about autonomous weapons, featuring expert views on accountability, morality, and the urgency of regulatory measures.
Link
Title: Artificial Intelligence and International Affairs
Description: Chatham House analyzes AI’s impact on global politics, security, and governance, stressing the need for multilateral efforts to manage its transformative effects on international relations.
Link
Title: Artificial Intelligence and Security
Description: ETH Zurich’s analysis highlights AI-related security threats like cyber risks and autonomous weapons, proposing technical and normative solutions to mitigate these challenges.
Link
Title: AI in UK Politics
Description: This BBC article covers UK political debates on AI, focusing on its defense and security applications, alongside government efforts to establish ethical and regulatory frameworks.
Link
Title: The Most Useful Military Applications of AI
Description: SDI’s blog post highlights practical AI uses in the military, including predictive maintenance and intelligence analysis, with examples of operational enhancements.
Link
Title: 15 Examples of AI in the Military and Defence Sector Industry in London, UK, Europe, and the USA 2024-2025
Description: IoT World Magazine lists 15 examples of AI in defense across key regions, showcasing innovative technologies and companies advancing military AI applications.
Link
Title: Military Artificial Intelligence Can Be Easily and Dangerously Fooled
Description: Technology Review warns of military AI vulnerabilities, detailing how adversarial attacks can compromise systems and underscoring the need for stronger safeguards.
Link
Title: Chinese Perspectives on AI
Description: This CSET report examines China’s military AI strategy, ethical stance, and global governance approach, contrasting it with Western perspectives.
Link
Title: Stop Killer Robots Campaign
Description: This advocacy site campaigns against autonomous weapons, arguing for human control in warfare and detailing ethical, legal, and security risks while calling for a global ban.
Link
Title: Trust but Verify: U.S. Troops, Artificial Intelligence, and an Uneasy Partnership
Description: Brookings explores challenges in integrating AI with U.S. troops, focusing on trust, verification, and maintaining human oversight to ensure accountability.
Link
Title: Regulating the Use of Artificial Intelligence (AI) on the Battlefield
Description: This Brookings article proposes legal and ethical frameworks for regulating battlefield AI, supported by case studies and principles for international law compliance.
Link
Title: Artificial Intelligence, International Security, and the Risk of War
Description: Brookings examines how AI influences global stability, potentially increasing conflict risks through miscalculation and deterrence challenges.
Link
Title: The Global AI Race: Will U.S. Innovation Lead or Lag?
Description: This Brookings piece analyzes the global AI race, focusing on U.S. competitiveness, policy challenges, and strategic investments needed to counter rivals like China.
Link
Title: AI Rivalries Redefining Global Power Dynamics
Description: Trends Research explores how AI reshapes global power, detailing strategic rivalries, economic shifts, and geopolitical implications of AI dominance.
Link
Title: Artificial Intelligence and the Security Dilemma
Description: Brookings applies the security dilemma to AI, showing how state pursuits of AI capabilities heighten mistrust and strategic tensions, risking arms races.
Link
Title: AI and Future Warfare
Description: This Brookings article predicts AI’s role in future warfare, from autonomous systems to decision-making tools, addressing strategic and ethical challenges.
Link
Title: Artificial Intelligence Arms Race
Description: Wikipedia provides an overview of the AI arms race, detailing historical milestones, key players, and the current global competition in military AI.
Link
Answers to guiding questions, by DiploAI
During the dialogue, several guiding questions were proposed for discussion, but only the first two were explored. After the dialogue, we asked DiploAI to provide answers to all of the questions. These AI-generated answers are presented below.
What does responsible AI in the military domain mean?
Introduction
Responsible AI in the military domain encompasses the ethical, legal, and operational frameworks that ensure AI technologies are developed, deployed, and utilized in ways that align with international law, humanitarian principles, and accountability standards. This concept is crucial for mitigating risks while harnessing AI’s potential benefits in military applications.
Key Facets of Responsible AI
UNGA Resolution 79/239 and various reports outline several vital components of responsible AI in the military domain:
- Preservation of the Human Element: Ensuring that human judgment, oversight, and control are maintained over AI systems, particularly in critical military functions.
- Compliance with International Law: Adhering to humanitarian law, which includes the protection of civilians and the principles of distinction, proportionality, and precaution in military operations.
- Lifecycle Accountability: Maintaining accountability throughout the entire lifecycle of AI technologies, from design and development to deployment and decommissioning.
- Risk Identification and Mitigation: Proactively addressing risks associated with AI, such as algorithmic bias, dual-use technologies, and security vulnerabilities.
- Promotion of Ethics and Equity: Ensuring fairness, transparency, and inclusivity in AI systems while respecting human rights.
- Capacity Building: Supporting education and regional dialogue to bridge technological divides and enhance understanding of responsible AI practices.
(See Figure 1 on page 19 of the UNIDIR report for a visual summary of these facets.)
Operational Priorities
To operationalize responsible AI, several priorities have been identified:
- Human-Machine Teaming: Commanders must retain ultimate accountability, even when AI assists in decision-making processes such as target selection.
- Training and Sensitization: Developers and end-users need to understand the capabilities and limitations of AI to prevent misuse and ensure effective deployment.
- Multi-Stakeholder Engagement: Collaboration among governments, industry, and civil society is essential to align technical and policy efforts for responsible AI use.
Challenges and Divergences
While there is a consensus on the importance of responsible AI, challenges remain, including:
- Governance Models: Disagreements exist on whether new treaties are necessary or if existing laws can be adapted to govern military AI.
- Risk Tolerance: Balancing the operational advantages of AI against ethical risks, such as algorithmic bias and potential humanitarian impacts.
Conclusion
Responsible AI in the military domain is about balancing innovation with safeguards to ensure compliance with international norms, human oversight, and equitable outcomes. Ongoing multilateral dialogue, as emphasized in UNGA Resolution 79/239, remains essential to achieving this balance.
Which principles (would) contribute to this vision? And how to translate such principles into meaningful and concrete action?
Introduction
The vision for responsible AI in the military domain revolves around ensuring that AI technologies are developed, deployed, and used in a manner that aligns with ethical standards, legal frameworks, and humanitarian principles. This vision is crucial for mitigating risks while maximizing the benefits of AI in military applications. Translating these principles into concrete actions is essential for effective governance and operationalization.
Key Principles Contributing to the Vision of Responsible AI
Based on the UNGA Resolution 79/239 and related documents, the following principles are vital for fostering responsible AI in the military domain:
- Preservation of the Human Element: Ensuring that human judgment and oversight remain integral to AI systems, particularly in critical military functions. This principle emphasizes accountability and the importance of human decision-making in the use of force.
- Compliance with International Law: Adhering to international humanitarian law and human rights law, which includes the protection of civilians and adherence to principles such as distinction, proportionality, and precaution in military operations.
- Lifecycle Accountability: Maintaining accountability throughout the entire lifecycle of AI technologies, from design and development to deployment and decommissioning. This principle ensures continuous evaluation and responsibility for AI systems.
- Risk Identification and Mitigation: Proactively identifying and addressing risks associated with AI, including algorithmic bias, dual-use technologies, and security vulnerabilities. This principle is crucial for preventing unintended consequences of AI applications.
- Promotion of Ethics, Equity, and Human Rights: Ensuring fairness, transparency, and inclusivity in AI systems, while respecting human rights. This principle advocates for equitable outcomes and the avoidance of discrimination in AI applications.
- Capacity Building and Information Exchange: Supporting education and regional dialogue to bridge technological divides and enhance understanding of responsible AI practices. This principle emphasizes the need for collaboration among states and stakeholders.
(Refer to UNGA Resolution 79/239 and related reports for detailed discussions on these principles.)
Translating Principles into Meaningful Actions
To effectively translate these principles into concrete actions, the following steps can be taken:
- Establish Formal Frameworks for Collaboration: Create platforms for regular exchanges among states, academia, civil society, and the private sector to share knowledge, best practices, and lessons learned regarding responsible AI applications in the military.
- Develop Comprehensive Training Programs: Implement standardized training and certification programs for military personnel and AI developers to ensure they understand the ethical, legal, and operational implications of AI technologies.
- Implement Robust Governance Structures: Establish governance frameworks that outline roles and responsibilities for stakeholders involved in AI development and deployment, ensuring that these frameworks are adaptable to evolving technologies.
- Conduct Ongoing Risk Assessments: Regularly assess the risks associated with AI applications in military contexts, including ethical implications, humanitarian impacts, and potential biases in AI systems.
- Promote Multilateral Dialogue: Encourage ongoing discussions at international forums, such as the United Nations, to address challenges and opportunities related to AI in the military domain, ensuring that diverse perspectives are considered.
- Foster Research and Innovation: Invest in research initiatives that explore underexplored areas related to AI governance, such as data governance, dual-use technologies, and the implications of AI on human and environmental security.
Conclusion
The vision for responsible AI in the military domain is supported by key principles that guide ethical and accountable practices. By translating these principles into actionable steps, states and stakeholders can ensure that AI technologies are developed and used in ways that align with international norms, enhance security, and protect human rights. Ongoing collaboration and dialogue are essential for achieving these goals.
What does the application of international law in the military domain mean in practice? How to ensure compliance with international law in the development, acquisition, deployment, use of AI in the military domain?
The application of international law in the military domain, particularly concerning the development, acquisition, deployment, and use of artificial intelligence (AI), is essential for ensuring that military operations adhere to established legal frameworks. This application seeks to uphold humanitarian principles, protect civilian lives, and maintain accountability within military operations.
Practical Implications of International Law in the Military Domain
The application of international law in military contexts encompasses several key aspects:
- Compliance with Humanitarian Law: Military operations must adhere to international humanitarian law (IHL), which includes principles like distinction (differentiating between combatants and civilians), proportionality (ensuring military actions are proportional to the military advantage gained), and precaution (taking all feasible precautions to minimize harm to civilians). This requires that AI systems used in military operations are designed to comply with these principles from the outset.
- Lifecycle Accountability: The entire lifecycle of AI technologies, from design and development to deployment and decommissioning, must be scrutinized for compliance with international law. This involves integrating legal considerations during the early stages of AI development, ensuring that procurement processes reflect these legal obligations, and establishing mechanisms for accountability throughout the lifecycle (see page 20 of the UNIDIR report).
- Monitoring and Enforcement Mechanisms: There is a need for realistic monitoring and enforcement mechanisms that can ensure compliance with international law during military operations involving AI. This includes maintaining human oversight and control over AI systems, particularly in critical decision-making processes (see page 21 of the UNIDIR report).
- Addressing Non-State Actors: The growing use of AI by non-state armed groups raises significant legal questions regarding accountability and compliance with international law. States must consider how to hold these groups accountable and ensure that their actions are in line with international legal standards (see page 32 of the UNIDIR report).
Ensuring Compliance with International Law in AI Development and Use
To ensure compliance with international law in the military domain, the following steps can be taken:
- Establish Clear Legal Frameworks: States should develop comprehensive legal frameworks that outline the obligations and responsibilities associated with the development and use of AI in military contexts. This includes translating international law into technical requirements for AI systems.
- Implement Training and Education Programs: Training programs for military personnel and AI developers should emphasize the importance of international law and ethics in military operations. This will help ensure that those involved in AI development and deployment understand their legal obligations (see page 30 of the UNIDIR report).
- Promote Multilateral Cooperation: International collaboration is crucial for sharing best practices and lessons learned regarding AI compliance with international law. Regular exchanges among states, civil society, and the private sector can facilitate knowledge sharing and foster a collective approach to responsible AI governance.
- Conduct Regular Risk Assessments: Ongoing assessments of the risks associated with AI applications in military contexts should be conducted to identify potential legal and ethical implications. This proactive approach can help mitigate risks before they manifest in operational settings.
- Develop Guidance and Best Practices: States should create guidance documents that outline best practices for legal compliance in AI application within the military domain, especially for those with limited resources. These documents can provide a roadmap for ensuring adherence to international law (see page 21 of the UNIDIR report).
Conclusion
The application of international law in the military domain, particularly concerning AI, is pivotal for maintaining accountability, protecting civilians, and upholding humanitarian principles. By establishing robust legal frameworks, implementing comprehensive training programs, promoting multilateral cooperation, conducting regular risk assessments, and developing guidance documents, states can ensure compliance with international law throughout the lifecycle of AI technologies in military contexts.
What roles and responsibilities do various actors (member states, the private sector, …) have – or should have – when it comes to advancing responsible AI in the military domain?
The advancement of responsible AI in the military domain involves a collaborative effort among various actors, including member states, the private sector, civil society, and international organizations. Each of these stakeholders has distinct roles and responsibilities that contribute to the ethical development, deployment, and use of AI technologies in military contexts.
Roles and Responsibilities of Various Actors
- Member States: Member states have a crucial role in advancing responsible AI in the military domain by ensuring compliance with international humanitarian law and human rights law throughout the AI technology lifecycle. They are responsible for establishing legal frameworks and governance structures that promote ethical practices and accountability in AI deployment. Furthermore, states should engage in multi-stakeholder dialogues to share knowledge and best practices, fostering regional cooperation and capacity-building initiatives to address the unique challenges posed by AI technologies in military applications.
- Private Sector: The private sector, particularly technology companies, must prioritize ethical considerations in the development of AI systems for military use. Companies should ensure that their products comply with legal obligations and ethical standards, as their technologies can directly impact military operations and humanitarian outcomes. Additionally, the private sector should actively participate in governance discussions, providing technical expertise and insights to shape responsible AI practices. Establishing partnerships with states and civil society can enhance transparency and accountability in the development and deployment of military AI systems.
- Civil Society and Academia: Civil society organizations and academic institutions play a vital role in advocating for responsible AI in the military domain by raising awareness of the ethical, legal, and humanitarian implications of AI technologies. They can contribute to the development of ethical frameworks and guidelines, ensuring diverse perspectives are considered in policy-making processes. Furthermore, these actors can engage in research and capacity-building initiatives that promote understanding of AI technologies and their potential risks, fostering informed public discourse and encouraging accountability from both states and industries.
- International Organizations: International organizations, such as the United Nations, are essential for facilitating global dialogue and cooperation on responsible AI in the military domain. They can provide platforms for member states and other stakeholders to share best practices, develop normative frameworks, and address the challenges posed by AI technologies. By promoting inclusive discussions and capacity-building efforts, international organizations can help bridge the digital divide and ensure that all states, particularly developing ones, can responsibly harness AI for military applications while upholding international law and humanitarian principles.
Conclusion
The advancement of responsible AI in the military domain requires a collaborative effort among member states, the private sector, civil society, and international organizations. Each actor has specific roles and responsibilities that contribute to ensuring that AI technologies are developed and used in ways that comply with international law, uphold humanitarian principles, and promote ethical standards. Ongoing dialogue and cooperation are essential for fostering a secure and responsible AI landscape in military applications.
How to enhance international cooperation on the responsible use of AI in the military domain, at both intergovernmental and multistakeholder levels?
Enhancing international cooperation on the responsible use of AI in the military domain is crucial for ensuring that AI technologies are developed and deployed in a manner that adheres to international law and ethical standards. This cooperation must occur at both intergovernmental and multistakeholder levels, involving a diverse array of actors, including states, private sectors, civil society, and international organizations.
Strategies for Enhancing International Cooperation
- Establishing Multilateral Frameworks: To enhance international cooperation on the responsible use of AI in the military domain, establishing multilateral frameworks is essential. These frameworks can provide a platform for member states to agree on common norms, standards, and principles that govern the development and deployment of military AI technologies. Such frameworks should facilitate dialogue among states, allowing for the exchange of best practices and the harmonization of legal and ethical standards across regions. This approach can help ensure that all states, regardless of their technological capabilities, adhere to shared responsibilities and accountability in the military use of AI.
- Promoting Capacity-Building Initiatives: Capacity-building initiatives are crucial for fostering responsible AI practices in the military domain. These initiatives should focus on enhancing technical literacy and expertise among military personnel and policymakers. By investing in education and training programs, states can ensure that their armed forces are equipped to understand and manage AI technologies effectively. Furthermore, regional cooperation can be promoted through joint training exercises and workshops, allowing countries to share resources and knowledge, ultimately leading to more robust and responsible AI governance.
- Facilitating Knowledge Sharing: Facilitating knowledge sharing among states and stakeholders is vital for advancing responsible AI in the military domain. This can be achieved through the establishment of formalized processes and platforms for information exchange, where best practices, lessons learned, and technical insights can be discussed. Regularly convened forums and workshops can help bridge the gap between different stakeholders, including military officials, technologists, and legal experts, fostering a comprehensive understanding of AI’s implications and promoting responsible use.
- Encouraging Cross-Sectoral Engagement: Encouraging cross-sectoral engagement is essential for addressing the multifaceted challenges posed by military AI. By involving diverse stakeholders, including industry, civil society, and academia, states can benefit from a wider range of perspectives and expertise. Multi-stakeholder dialogues can help translate ethical considerations into technical requirements, ensuring that AI systems are developed with compliance in mind. Such engagements can also foster collaborative efforts to mitigate risks associated with AI technologies and promote shared accountability for their use in military contexts.
- Building Regional Cooperation Mechanisms: Building regional cooperation mechanisms can significantly enhance the responsible use of AI in the military domain. States should focus on establishing regional working groups that address specific challenges related to AI technologies, such as data governance and interoperability. These mechanisms can facilitate collaborative research and development efforts, promote joint investments in AI capabilities, and enable states to share resources and expertise. By fostering regional partnerships, countries can collectively address the unique challenges posed by AI in their respective security environments while ensuring compliance with international standards.
Conclusion
Enhancing international cooperation on the responsible use of AI in the military domain requires a multifaceted approach that includes establishing legal frameworks, promoting capacity-building, facilitating knowledge sharing, encouraging cross-sectoral engagement, and building regional cooperation mechanisms. By fostering collaboration among diverse stakeholders, states can ensure that AI technologies are developed and deployed responsibly, in compliance with international law and ethical standards.
What does a flexible, balanced, and realistic approach to the governance of AI in the military domain mean? How do we get there? And what role can Geneva play?
A flexible, balanced, and realistic approach to the governance of AI in the military domain is essential for ensuring that AI technologies are developed and used responsibly, while also accommodating the diverse needs and concerns of various stakeholders. Achieving this requires collaboration among states, the private sector, civil society, and international organizations, particularly in forums like Geneva that facilitate dialogue and cooperation.
Meaning of a Flexible, Balanced, and Realistic Approach
- Flexibility: A flexible approach to the governance of AI in the military domain means adapting regulations and frameworks to the rapidly evolving nature of AI technologies and military operations. This involves being open to modifying existing policies and creating new ones that can respond to unforeseen challenges and opportunities presented by AI advancements. Flexibility allows for innovation while ensuring that ethical and legal standards are maintained.
- Balance: Balance refers to finding an equilibrium between the need for security and the ethical implications of deploying AI in military contexts. This involves weighing the benefits of AI technologies against potential risks and harms, ensuring that military applications do not undermine humanitarian principles or international law. A balanced approach seeks to integrate diverse perspectives from various stakeholders, including states, civil society, and the private sector, to create a comprehensive governance framework.
- Realism: Realism in this context emphasizes the importance of grounding governance frameworks in practical considerations and the realities of military operations. This means recognizing the limitations of current technologies, the varying capacities of states, and the complex geopolitical landscape. A realistic approach advocates for achievable goals and solutions that can be implemented effectively, rather than idealistic visions that may not be feasible in practice.
Pathways to Achieve This Approach
- Multilateral Dialogues: Engaging in multilateral dialogues is crucial for fostering collaboration among states and stakeholders. These dialogues can help establish common norms and principles for AI governance, allowing for the sharing of best practices and the harmonization of legal and ethical standards across different regions. Regular forums can facilitate ongoing discussions that adapt to new developments in AI technology and military applications.
- Capacity Building: Capacity building is essential for equipping states and stakeholders with the necessary knowledge and skills to implement responsible AI practices. This includes training military personnel, policymakers, and industry representatives on ethical AI use and compliance with international law. Collaborative regional training programs and workshops can enhance understanding and operational capabilities, ultimately leading to more robust governance frameworks.
- Cross-Sectoral Engagement: Encouraging cross-sectoral engagement involves integrating perspectives from various sectors, including industry, civil society, and academia, into the governance of military AI. Multi-stakeholder platforms can facilitate discussions that translate ethical considerations into technical requirements, ensuring that AI systems are developed responsibly. This engagement can also foster innovative solutions to mitigate risks associated with AI technologies in military contexts.
- Regional Cooperation Mechanisms: Building regional cooperation mechanisms can significantly enhance the governance of AI in the military domain. States can establish working groups focused on specific challenges, such as data governance and interoperability, allowing for collaborative research and development efforts. By pooling resources and expertise, countries can address unique regional security challenges while ensuring compliance with international standards.
Role of Geneva
Geneva plays a pivotal role in facilitating international cooperation and dialogue on AI governance in the military domain. As a hub for multilateral negotiations, Geneva can:
- Host Forums for Dialogue
- Support Capacity Building
- Promote Inclusivity
Conclusion
A flexible, balanced, and realistic approach to AI governance in the military domain is essential for navigating the complexities of modern warfare and ensuring compliance with international law. Achieving this requires collaborative efforts among various stakeholders, with Geneva playing a crucial role in facilitating dialogue and cooperation. By prioritizing flexibility, balance, and realism, the international community can develop governance frameworks that effectively address the challenges posed by AI technologies in military applications.
What role can various UN fora or treaty bodies play in contributing to the governance of AI in the military domain, while ensuring complementarity and avoiding duplication and overlap?
The governance of AI in the military domain is a complex issue that requires coordinated efforts across various UN fora and treaty bodies. These platforms can play a critical role in establishing frameworks that ensure responsible use of AI technologies while promoting complementarity and avoiding duplication of efforts.
Roles of UN Fora and Treaty Bodies
- Facilitating Multilateral Dialogues:
- Establishing Normative Frameworks:
- Enhancing Capacity-Building Initiatives:
- Promoting Complementarity Among Processes:
- Addressing Transboundary Challenges:
- Facilitating Research and Knowledge Exchange:
Conclusion
UN fora and treaty bodies have a vital role to play in the governance of AI in the military domain. By facilitating dialogues, establishing normative frameworks, enhancing capacity-building, promoting complementarity, addressing transboundary challenges, and facilitating research, these platforms can contribute significantly to responsible AI governance. Ensuring that these efforts are coordinated and complementary will be essential for managing the complexities associated with AI technologies in military applications.
How to ensure multistakeholder dialogue and engagement on AI in the military domain? And what should such engagement entail?
Ensuring multistakeholder dialogue and engagement on AI in the military domain is crucial for developing responsible governance frameworks. This dialogue should encompass a variety of stakeholders, including states, industry representatives, civil society, and academia, to foster a comprehensive understanding of the implications of AI technologies in military applications.
Key Strategies for Ensuring Multistakeholder Dialogue
- Establishing Structured Platforms:
- Promoting Inclusivity:
- Capacity-Building Initiatives:
- Cross-Sectoral Engagement:
- Regular Consultative Meetings:
What Engagement Should Entail
- Evidence-Based Discussions:
- Risk Assessment and Mitigation:
- Normative Framework Development:
- Feedback Mechanisms:
- Awareness Raising:
Conclusion
Multistakeholder dialogue and engagement on AI in the military domain are essential for developing comprehensive and responsible governance frameworks. By implementing structured platforms, promoting inclusivity, and ensuring evidence-based discussions, the international community can effectively address the complexities and challenges posed by AI technologies in military applications.
How can states make progress on risk reduction and confidence building measures on AI in the military domain?
Progress on risk reduction and confidence-building measures regarding AI in the military domain is essential for ensuring responsible governance and mitigating potential threats associated with military applications of AI. States can adopt various strategies to enhance cooperation and establish frameworks that promote safety and accountability.
Key Strategies for Progressing on Risk Reduction and Confidence Building
- Political Commitment
- Multilateral Dialogues
- Development of Normative Frameworks
- Capacity-Building Initiatives
- Regular Consultative Meetings
- Confidence-Building Measures
Conclusion
By implementing these strategies, states can make significant progress in risk reduction and confidence-building measures regarding AI in the military domain. A collaborative approach that emphasizes political commitment, multilateral dialogue, and capacity-building will be essential for addressing the complexities and challenges posed by AI technologies in military applications.
How to advance capacity development efforts to enable broad participation in discussions on responsible AI in the military domain and its future governance?
Advancing capacity development efforts is crucial for enabling broad participation in discussions on responsible AI in the military domain and its future governance. By fostering an inclusive environment, states and stakeholders can better address the complexities and challenges posed by AI technologies in military applications.
Key Strategies for Advancing Capacity Development
- Investment in Education and Training
- Establishment of Formalized Processes for Information Sharing
- Inclusive Multistakeholder Engagement
- Capacity-Building Initiatives for Developing Countries
- Regular Consultative Meetings and Workshops
- Promoting Cross-Sectoral Collaboration
Conclusion
To advance capacity development efforts for broad participation in discussions on responsible AI in the military domain, it is essential to invest in education, establish inclusive platforms for engagement, and foster collaboration across sectors. By implementing these strategies, states can create an environment conducive to meaningful dialogue and effective governance of AI technologies in military applications.
What other forms of capacity building are needed when it comes to responsible AI, and how can international cooperation be advanced in these areas?
Capacity building for responsible AI in the military domain is essential to ensure that all stakeholders can effectively participate in discussions and governance. This includes not only technical training but also broader efforts to enhance understanding, collaboration, and resource sharing among nations.
Key Forms of Capacity Building Needed
- Education and Training Programs
- Technical Literacy Initiatives
- Public-Private Partnerships
- Regional and Thematic Working Groups
- Joint Investment Frameworks
Advancing International Cooperation through:
- Multilateral Dialogues
- Shared Knowledge Platforms
- Capacity Development Network
- Collaborative Data Pools
- Cross-Domain Training
Conclusion
To effectively build capacity for responsible AI in the military domain, a multifaceted approach is necessary. This includes education, public-private partnerships, regional cooperation, and international dialogues. By advancing these efforts collaboratively, states can ensure that all stakeholders are equipped to participate meaningfully in discussions on AI governance.
To what extent are other AI governance processes relevant for the discussion on AI in the military domain (e.g. Scientific Panel)?
The discussion on AI governance in the military domain is intricately linked to various other AI governance processes, including scientific panels and international cooperation frameworks. These connections are crucial for establishing comprehensive governance mechanisms that address the unique challenges posed by AI technologies in military applications.
Relevance of Other AI Governance Processes
- Scientific Panels: Scientific panels can play a crucial role in the governance of AI in the military domain by providing an independent and expert analysis of AI technologies and their implications. Such panels can issue reports that survey AI capabilities, opportunities, risks, and uncertainties, helping to create a common understanding among member states. This is particularly important in addressing the complexities of AI, where timely and reliable scientific knowledge is essential for informed policymaking. An independent panel could also identify areas where further research is needed, thus guiding future governance efforts.
- Multilateral Dialogues: Multilateral dialogues are essential for fostering collaboration among states and other stakeholders regarding military AI governance. These dialogues can establish common norms and principles that reflect diverse perspectives, helping to harmonize approaches to AI technologies across different regions. By creating platforms for ongoing discussions, the UN can facilitate the exchange of best practices and ensure that governance frameworks are responsive to the evolving landscape of AI in military applications.
- Capacity-Building Initiatives: Capacity-building initiatives are vital for equipping states with the knowledge and skills necessary to implement responsible AI practices in the military domain. These initiatives can include training programs for military personnel and policymakers, enhancing technical literacy and ethical understanding. By investing in education and training, states can ensure that stakeholders are prepared to address the challenges posed by AI technologies, leading to more effective governance frameworks.
- Public-Private Partnerships: Public-private partnerships can enhance the governance of AI in the military domain by fostering collaboration between governments and industry. These partnerships can facilitate the sharing of technical expertise and resources, ensuring that AI technologies are developed and deployed responsibly. Engaging industry representatives in governance discussions allows for the integration of practical insights into policymaking, helping to align military objectives with ethical considerations.
- Information Sharing Frameworks: Establishing robust information-sharing frameworks is critical for the governance of AI in the military domain. These frameworks can enable states to exchange knowledge and best practices, facilitating a deeper understanding of the technical and ethical challenges associated with AI. By formalizing processes for information sharing, states can ensure that governance approaches are grounded in collective learning and that they address the transboundary nature of AI technologies.
Conclusion
The relevance of other AI governance processes, such as scientific panels, multilateral dialogues, and capacity-building initiatives, is significant for discussions on AI in the military domain. By leveraging these processes, states can enhance their governance frameworks, ensuring that military AI applications are developed and deployed responsibly and ethically. This interconnected approach will help address the multifaceted challenges posed by AI technologies in military settings.
Brief summary of discussions
Following an introduction to AI and an overview of key issues and processes related to AI and international peace and security, participants explored a set of guiding questions focusing in particular on responsible AI in the military domain.
The definition of responsible AI within the military domain was the central theme of discussion. Participants sought to identify guiding principles and actionable strategies, recognising the need to bridge diverse stakeholder perspectives.
From commonalities to the unique nature
The dialogue highlighted the presence of numerous commonalities across general (i.e. civilian) AI applications and those specifically within the military domain. However, a crucial question arose: What specific problems are we trying to solve on the military side, distinct from the broader AI challenges?
The unique nature of military applications, particularly concerning the use of force (an exclusive state competence), necessitates careful consideration to avoid parallel and potentially conflicting solutions.
Dual nature of AI
Participants raised numerous questions regarding the scope of AI repurposing, as the lines seem less clear-cut than traditionally perceived. The comparison to cybersecurity discussions serves as guidance for understanding this blurred line, as dual-use technologies can both jeopardise and strengthen security. It was said that policy discussions on AI in the military domain stand where discussions on cybersecurity were some 10 to 15 years ago.
Finding a common language
It was observed that while diverse stakeholders grapple with similar questions, their approaches vary significantly, influenced by cultural backgrounds and individual perspectives. The discussion even turned to whether it is easier to talk to diplomats across the table than to colleagues back home: while diplomats share the same framing system, fostering regional dialogue is beneficial on several levels.
Key issues highlighted include:
- Bridging diplomatic and national-level discussions.
- Regional engagement and understanding.
- Data governance and transparency.
- Ethical considerations.
Interdisciplinary capacity building
The discussion reinforced the importance of capacity building and convergence. There is a need to build technical knowledge while retaining talent within nations. In more advanced countries, the tendency is towards advancing cross-conversations and collaboration. Diplo’s AI Apprenticeship was offered as an example of an initiative that fosters cross-conversation by bringing tech and governance people together.
From trust to convergence
Trust enables deeper discussions. At the regional and national levels, partners know each other better, and knowledge of one's partner is important for understanding their approach. For example, a region might concentrate on cybercrime rather than cybersecurity, as this track was more intuitive to follow given the threat landscape.
Convergence in this discussion refers to identifying challenges and issues that cut across seemingly separate tracks and should ideally be addressed together. For example, bias and discrimination pose challenges in both civilian and military applications of AI.
Advancing debates on governance frameworks
How can governance frameworks be designed to address challenges across three distinct but connected layers: AI in general, AI and its implications for international peace and security, and AI in the military domain? For missions that engage in these discussions, the question arises of what to report back to their capitals: which layer is most pressing, given national circumstances and priorities?
In general, securitisation becomes more pronounced as the discussion moves closer to national interests, and tech companies are increasingly moving towards military applications.
There is a blurred line between non-military and military data, emphasising the need for comprehensive data governance approaches.
When discussing governance, it is important to look at the entire AI lifecycle, something that is mentioned in the UNGA resolution 79/239. Speaking different (expert) languages remains a challenge for collaboration between various stakeholders.
The measures taken by private sector companies, such as commitments to responsible AI, are often not robust due to their dependence on financial interests and possible management changes. Furthermore, tech companies’ openness to collaboration has changed over the past few years, and it is rather difficult to engage with them in a meaningful way.
The conversation then shifted to stakeholder engagement in governance discussions, highlighting the need for agile and flexible frameworks. The UN’s leading role was emphasised, though agile solutions are needed. Participants were urged to engage beyond their comfort zones and strive for value-driven AI development. The first step to take is to develop AI based on our values.
Next steps
Participants were encouraged to:
- Provide input to contribute to the UN Secretary-General’s report on the opportunities and challenges posed to international peace and security by the application of AI in the military domain. The deadline for contributions is 11 April.
- Participate in UNIDIR’s Global Conference on AI, Security and Ethics, on 27 and 28 March 2025.
- Participate in the REAIM Summit, to be hosted by Spain in September 2025.