Ethics and AI | Part 6

Published on 20 March 2025

The European Union AI Act: calling a spade a spade

The EU Artificial Intelligence Act

Another “first” is the European Union’s Artificial Intelligence Act, also known as the “EU AI Act”, the first comprehensive horizontal legal framework for the regulation of AI systems across the EU.1

Regulation (EU) 2024/1689 differs radically from the Council of Europe Convention on Artificial Intelligence in that it addresses the entire complexity of artificial intelligence. It lays down a uniform legal framework, in particular for the development, the placing on the market, the putting into service, and the use of artificial intelligence systems in the Union, in accordance with Union values, to promote the uptake of human-centric and trustworthy artificial intelligence while ensuring a high level of protection of health, safety, and fundamental rights as enshrined in the Charter of Fundamental Rights of the European Union, including democracy, the rule of law, and environmental protection.

The key goals include: a) harmonization of rules (establishing a uniform legal framework for the development, marketing, and use of AI systems across the EU, and preventing fragmentation due to divergent national regulations); b) promotion of trustworthy AI (encouraging the uptake of human-centric and trustworthy AI technologies that align with EU values); and c) protection of users and society at large against the potentially harmful effects of AI systems.

The EU AI Act develops a risk-based approach by introducing a risk classification for AI systems, categorizing them based on their potential impact and imposing specific obligations accordingly.

Notably, the EU AI Act offers the same two-sentence definition of an AI system as the Council of Europe, merged into a single sentence (Article 3(1)):

“a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.

Unlike the Council of Europe Convention, the EU AI Act concerns all actors involved in the production and marketing of AI systems and defines them in legal terms.

The EU AI Act does not explicitly use the terms “ethics” or “ethical” in its text, but it incorporates ethical considerations throughout its framework. The Act emphasizes principles that align with ethical standards, such as human rights, transparency, and accountability.

The EU AI Act promotes a human-centric approach to artificial intelligence, which aspires to ensure that AI systems respect fundamental rights and European values. This approach is rooted in the belief that human dignity and autonomy must be central to AI development and deployment. 


The EU AI Act categorizes AI systems into four risk levels (unacceptable, high, limited, and minimal), each with corresponding regulatory requirements. Unacceptable AI practices, such as those that manipulate human behavior or violate privacy rights, are outright banned. High-risk systems must comply with stringent standards that include transparency, accountability, and human oversight. This risk-based approach ensures that ethical considerations are integrated into the design and implementation of AI technologies.
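For readers approaching the Act from an engineering side, the tiered logic can be pictured as a simple lookup from use case to headline obligations. The Python sketch below is purely illustrative: the example use cases, tier labels, and compressed obligation summaries are assumptions made for the illustration, not a legal classification under the Act itself.

```python
from enum import Enum

class RiskTier(Enum):
    """Simplified encoding of the Act's four risk tiers (illustrative only)."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment, transparency, human oversight"
    LIMITED = "transparency obligations (e.g. disclose AI interaction)"
    MINIMAL = "no mandatory obligations; voluntary codes of conduct"

# Hypothetical mapping from use cases to tiers; a real classification
# requires legal analysis of the Act, not a dictionary lookup.
EXAMPLE_CLASSIFICATION = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "CV screening for recruitment": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def obligations_for(use_case: str) -> str:
    """Look up the illustrative tier and its headline obligations."""
    tier = EXAMPLE_CLASSIFICATION.get(use_case, RiskTier.MINIMAL)
    return f"{use_case}: {tier.name} -> {tier.value}"

if __name__ == "__main__":
    for case in EXAMPLE_CLASSIFICATION:
        print(obligations_for(case))
```

The point of the sketch is only to show how the four tiers map to escalating duties; in real compliance work, the classification step is a legal judgement.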


A significant focus of the Act is placed on transparency. It mandates that users be informed when they are interacting with an AI system rather than a human. High-risk systems must also explain their decision-making processes, allowing users to understand how outcomes are derived. This requirement aims to foster trust in AI technologies by ensuring accountability.
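To make the disclosure duty concrete, here is a minimal, hypothetical sketch of how a chatbot service might attach the required notice to every reply. The function and the message wording are invented for illustration; the Act imposes the obligation but does not prescribe any particular implementation.

```python
from dataclasses import dataclass

# Hypothetical disclosure text; the Act requires informing users that they
# are interacting with an AI system, but the wording is up to the provider.
AI_DISCLOSURE = "You are chatting with an AI system, not a human."

@dataclass
class ChatReply:
    disclosure: str  # shown to the user together with the answer
    answer: str

def reply_with_disclosure(generate_answer, user_message: str) -> ChatReply:
    """Wrap a stand-in model call so that every reply carries the notice."""
    return ChatReply(disclosure=AI_DISCLOSURE, answer=generate_answer(user_message))

if __name__ == "__main__":
    fake_model = lambda msg: f"(model output for: {msg})"  # placeholder model
    print(reply_with_disclosure(fake_model, "What is the EU AI Act?"))
```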

Even if the Act itself makes no direct reference to “ethics”, it is closely tied to the broader context of ethical guidelines established by the EU, known as the EU Ethics Guidelines for Trustworthy AI.2 These guidelines advocate for a human-centric approach to AI that is lawful, ethical, and robust, ensuring adherence to fundamental rights and values. The key ethical requirements outlined in these guidelines include human agency, oversight, robustness, safety, privacy, data governance, transparency, and fairness.

The guidelines are addressed to all AI stakeholders designing, developing, deploying, implementing, using or being affected by AI in the EU, including companies, researchers, public services, government agencies, institutions, civil society organisations, individuals, workers and consumers.

  • Developers and users should make sure that an AI system does not hamper EU fundamental rights; a fundamental rights impact assessment should be undertaken prior to its development. Mechanisms should be put in place afterwards to allow for external feedback on any potential infringement of fundamental rights.
  • Human agency should be ensured, i.e. users should be able to understand and interact with AI systems to a satisfactory degree. The right of end users not to be subject to a decision based solely on automated processing (when this produces a legal effect on users or significantly affects them) should be enforced in the EU.
  • A machine cannot be in full control. Therefore, there should always be human oversight, and humans should always have the possibility to ultimately override a decision made by a system. When designing an AI product or service, AI developers should consider the type of technical measures to be implemented to ensure human oversight. For instance, they should provide a stop button or a procedure to abort an operation to ensure human control (a minimal sketch of such a mechanism follows this list).
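The “stop button” mentioned in the last bullet can be illustrated with a minimal sketch: a long-running AI task that periodically checks an abort flag that a human operator can set. This is one hypothetical pattern, assuming a threaded Python service; the guidelines do not mandate any specific mechanism.

```python
import threading
import time

stop_requested = threading.Event()  # the human operator's "stop button"

def ai_task(steps: int = 100) -> None:
    """A stand-in for a long-running AI operation that honours an abort signal."""
    for step in range(steps):
        if stop_requested.is_set():
            print(f"Aborted by human operator at step {step}.")
            return
        time.sleep(0.1)  # placeholder for one unit of model work
    print("Task completed without intervention.")

if __name__ == "__main__":
    worker = threading.Thread(target=ai_task)
    worker.start()
    time.sleep(0.5)       # the operator watches for a moment...
    stop_requested.set()  # ...then presses the stop button
    worker.join()
```

The design choice worth noting is that the task checks the flag at every step, so control rests with the human at all times rather than only at start-up.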

The EU AI Act explicitly enumerates several unethical practices that are deemed to pose an “unacceptable risk” and are therefore prohibited. These practices include:

  • Manipulative techniques: Using AI-based systems that employ manipulative, deceptive, or subliminal techniques to influence individuals to make decisions they would not have made otherwise, particularly if this could cause significant harm to them or others.
  • Exploitation of vulnerabilities: Exploiting the vulnerabilities of individuals based on their age, disability, or socio-economic status to influence their behavior in a harmful manner.
  • Biometric data misuse: Utilizing biometric data to categorize individuals based on sensitive attributes such as race, political opinions, religious beliefs, sexual orientation, or other personal characteristics.
  • Facial recognition practices: Creating or expanding facial recognition databases through untargeted scraping of images from the internet or closed-circuit television footage.

Conclusion: cleaning up our own courtyard?

Like other visions inspired by technologies, AI has crossed the border between science fiction and reality, and it did so in a very turbulent way. At the level of consumers, generative artificial intelligence has invaded our computers, whether invited or not. AI assistants intrude into many daily routines, pressing for attention. We are already victims of large-scale marketing campaigns whose aim is to fuel curiosity, gradually create dependence, and in time monetize services we did not really ask for. At the level of society, the fascination with AI increases, whether fueled by fear and uncertainty or by the promise of a better and more comfortable life. The social body will assimilate AI with all its virtues and risks, and it will succumb to its charm. Human institutions will witness processes that outpace their capacity to adapt and react. People will have to live with the consequences if they remain passive and disregard the trespassing of moral and ethical codes.

Against this background, attempts to harness the power of AI and prevent wrongdoing are welcome but never sufficient. As was the case with ICTs in general, no one could stop the negative phenomena that proliferate in cyberspace, despite the good intentions of governments and international organizations, their declarations, and their plans of action.


Yet it is important that we are aware of the problems, the risks, and the uncertainties, and that we insist on the absolute necessity of ethical norms throughout the entire life cycle of AI. After all, no technology could proliferate without the massive presence of users. Raising awareness and educating billions of future users of AI systems is indeed a way to avoid or mitigate excesses and abuses. To what extent this would translate into a transfer of power from the industry to the people remains to be seen. The codification of international law, as modest and slow as it may be, is a positive and necessary step ahead.

In the meantime, while expecting international cooperation to solve the ethical deficit in the use of AI systems, the academic world could start cleaning up its own institutions. Statistics everywhere already show that about 50 percent of students cheat on admissions, exams, and essay writing by using AI sources without verifiable references, or by shamelessly signing AI-generated texts. Even worse, the imposture extends to aspirants to university positions and credits. I have seen texts clearly produced by AI assistants simply copied and pasted onto the internet, with a name and a photo attached. If cheating, plagiarizing, and creating false academic reputations start in education, there will be no hope of stopping some AI evils by law. Whatever we now call knowledge would lose its meaning.

[Part 6 of 6-part series]

Artificial Intelligence: Technology, Governance, and Policy Frameworks online course

Read Ethics and AI series

Part 1 | Part 2 | Part 3 | Part 4 | Part 5


  1. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence; it entered into force on 1 August 2024. ↩︎
  2. European Parliament, EU guidelines on ethics in artificial intelligence: Context and implementation, European Parliamentary Research Service, PE 640.163, September 2019. ↩︎

Dr Petru Dumitriu was a member of the Joint Inspection Unit (JIU) of the UN system and former ambassador of the Council of Europe to the United Nations Office at Geneva. He is the author of the JIU reports on ‘Knowledge Management in the United Nations System’, ‘The United Nations – Private Sector Partnership Arrangements in the Context of the 2030 Agenda’, ‘Strengthening Policy Research Uptake’, ‘Cloud Computing in the United Nations System’, and ‘Policies and Platforms in Support of Learning’. He received the Knowledge Management Award in 2017 and the Sustainable Development Award in 2019 for his reports. He is also the author of the Multilateral Diplomacy online course at DiploFoundation.
