Ethics and AI | Part 4
Principles for the Ethical Use of Artificial Intelligence in the United Nations system: new wine in old bottles
Building on UNESCO’s work, the United Nations System Chief Executives Board for Coordination (CEB)1 produced its own “ethical approach”, consisting of ten “Principles for the Ethical Use of Artificial Intelligence in the United Nations system”:
- Do not harm
- Defined purpose, necessity and proportionality
- Safety and security
- Fairness and non-discrimination
- Sustainability
- Right to privacy, data protection and data governance
- Human autonomy and oversight
- Transparency and explainability
- Responsibility and accountability
- Inclusion and participation.
As the UNESCO and CEB texts are not identical, we will highlight three key principles from the CEB list that differ in formulation and content.
Do not harm: AI systems should not be used in ways that cause or exacerbate harm, whether individual or collective, and including harm to social, cultural, economic, natural, and political environments. […] The intended and unintended impact of AI systems, at any stage in their lifecycle, should be monitored in order to avoid causing or contributing to harm.
Sustainability: Any use of AI should aim to promote environmental, economic and social sustainability. To this end, impacts of AI technologies should continuously be assessed and appropriate mitigation and/or prevention measures should be taken to address adverse impacts, including on future generations.
Human autonomy and oversight: The United Nations system organizations should ensure that AI systems do not overrule freedom and autonomy of human beings and should guarantee human oversight. All stages of the AI system lifecycle should follow and incorporate human-centric design practices and leave meaningful opportunity for human decision-making. Human oversight must ensure human capability to oversee the overall activity of the AI system and the ability to decide when and how to use the system in any particular situation, and the ability to override a decision made by a system. As a rule, life and death decisions or other decisions affecting fundamental human rights of individuals must not be ceded to AI systems, as these decisions require human intervention.2
An initiative by the United Nations Secretary-General
The Secretary-General of the United Nations, António Guterres, could not miss the opportunity to leave his own imprint on the series of attempts to define the role of the world organization in the global governance of AI. The Secretary-General convened a High-Level Advisory Body on Artificial Intelligence, whose work culminated in the adoption of its final report, “Governing AI for Humanity”, which establishes a comprehensive framework for global AI governance.
The report emphasizes the need for an inclusive and cooperative approach to AI governance, recognizing that current frameworks are insufficient, and that the development of AI is largely controlled by a few multinational companies.
The report issued several recommendations for establishing a robust global governance framework, among them:
- Establishing an independent panel to provide reliable scientific knowledge about AI, helping to inform policy decisions.
- Creating a platform to ensure technical interoperability of AI systems across borders, involving various stakeholders including tech companies and civil society.
- Standardizing data-related definitions and principles to ensure transparency and accountability in AI systems.
Risks associated with AI
The report does not deal with the issue of ethics as such, but it includes a list of risks associated with AI that is highly relevant from an ethical perspective.
- Damage to information integrity (mis/disinformation, impersonation)
- Intentional use of AI in armed conflict by state actors (autonomous weapons)
- Inequalities arising from differential control and ownership over AI technologies (increased concentration of wealth / power among individuals, corporations)
- Intentional malicious use of AI by non-state actors (crime, terrorism)
- Discrimination / disenfranchisement, particularly against marginalized communities (use of biased AI in hiring or criminal justice decisions)
- Intentional use of AI by state actors that harms individuals (mass surveillance)
- Human rights violations
- Inaccurate information / analysis provided by AI in critical fields (misdiagnoses by medical AI)
- Intentional use of AI by corporate actors that harms customers / users (hyper-targeted advertising, AI-driven addictive products)
- Violation of intellectual property rights
- Environmental harms (accelerating energy consumption and carbon emissions)
- Harms to labour from adoption of AI (disruption of labour markets, increased unemployment)
- Unintended autonomous actions by AI systems (loss of human control over autonomous agents, deceptive / manipulative actions)
- Unintended multi-agent interactions among AI systems (trading AIs engaging in collusive signalling)3
AI and military uses?
The UNESCO Recommendation on the Ethics of Artificial Intelligence appears to be the central piece produced by the United Nations system and recognized as such by the General Assembly in its first resolution on AI, titled “Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development”.4
However, the adoption of this resolution should not be overestimated. First of all, we are dealing again only with a recommendation, a political declaration, a framework for international cooperation. No binding rules. The industry will always have the upper hand at all stages of the AI lifecycle. The international machinery will always need more time to keep up with new developments and will have to beg for resources to do anything meaningful. Just raising the flag of ethics and drawing the border between right and wrong will never be enough.
Secondly, and even worse, the governments of major powers and their military establishments will keep their hands free to develop AI systems for military purposes. Resolution 78/265 does not hide this truth and makes it clear: the resolution “refers to artificial intelligence systems in the non-military domain” (sixth preamble paragraph). Pandora’s box will be open forever.
[Part 4 of 6-part series]
1. The UN System Chief Executives Board for Coordination (CEB) is the highest-level coordination body of the United Nations system. It is chaired by the UN Secretary-General and meets twice a year. The CEB’s main responsibility is to serve as an internal coordination mechanism that provides system-wide strategic guidance and promotes coherent leadership, a shared vision, and enhanced cooperation among member organizations.
2. United Nations System, CEB, High-Level Committee on Programmes (HLCP), Inter-Agency Working Group on Artificial Intelligence, Principles for the Ethical Use of Artificial Intelligence in the United Nations System, 20 September 2022.
3. United Nations, AI Advisory Body, Governing AI for Humanity, Final Report, September 2024.
4. United Nations, General Assembly, Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development, adopted on 21 March 2024, doc. A/RES/78/265.
Dr Petru Dumitriu was a member of the Joint Inspection Unit (JIU) of the UN system and former ambassador of the Council of Europe to the United Nations Office at Geneva. He is the author of the JIU reports on ‘Knowledge Management in the United Nations System’, ‘The United Nations – Private Sector Partnership Arrangements in the Context of the 2030 Agenda’, ‘Strengthening Policy Research Uptake’, ‘Cloud Computing in the United Nations System’, and ‘Policies and Platforms in Support of Learning’. He received the Knowledge Management Award in 2017 and the Sustainable Development Award in 2019 for his reports. He is also the author of the Multilateral Diplomacy online course at DiploFoundation.