Do we really need specialised AI regulation?
Most likely not. Many existing laws already apply to AI. This text aims to spark debate and bring greater clarity to society’s response to AI. Before enacting new AI laws, we must first ask whether readily available regulations can regulate AI effectively. This has long been, and remains, the case with consumer protection, data governance, tort, and liability law, among others. The principle of legal accountability, which is of critical relevance for AI, has stood the test of time for nearly 4,000 years, dating back to the Code of Hammurabi.
The Code of Hammurabi
Law regulates relationships among humans and the entities they create—governments, corporations, and international organisations. From the Code of Hammurabi (circa 1754 BCE) to modern legal systems, the core function of law remains unchanged: to hold individuals and entities accountable for their actions. As long as humans retain control over societal affairs, existing legal frameworks are sufficient to address AI’s challenges.
Applying ancient legal principles to AI
The Code of Hammurabi provides a foundational principle for accountability. For instance, Law 229 states:
‘If a builder builds a house for someone and does not construct it properly, and the house which he built collapses and causes the death of the owner of the house, that builder shall be put to death.’
By substituting ‘builder’ with ‘AI developer’ and ‘house’ with ‘AI system’, we see that the principle of accountability remains relevant. While contemporary legal systems would not impose such severe penalties, the underlying idea—holding creators responsible for their creations—is timeless and applicable to AI.
Legal precedent in practice
History demonstrates the resilience of legal principles in adapting to new technologies. For example, when the internet emerged in the 1990s, Judge Frank Easterbrook famously questioned the need for a separate ‘internet law’, likening it to a ‘law of the horse’—a framing later debated by Prof Lawrence Lessig. Instead of a new legal field, existing frameworks—property, tort, and commercial law—were adapted to address new challenges. Similarly, AI does not necessitate a unique regulatory regime. Existing laws can be extended to regulate AI systems, with adjustments made only where necessary.
The internet precedent
The internet serves as a valuable precedent for AI regulation. Courts have successfully applied existing laws to issues such as data protection and cybercrime, even though these introduced fundamentally new types of harm. Similarly, AI-generated content distributed via the internet—such as deepfakes or other harmful material—can be regulated under existing frameworks. The main exception is Section 230 of the 1996 Communications Decency Act (CDA), which grants tech platforms immunity for third-party content. This deviation from the principle of accountability, however, highlights the need to enforce existing laws more rigorously rather than to create new ones.
Addressing long-term risks
In 2023, extinction risks were used as the main justification for strict AI regulation. However, as the figure below shows, the AI risk landscape has evolved over the last two years towards a more balanced coverage of exclusion and existing risks, both of which can be addressed by existing legislation. The exclusion risk of the monopolisation of AI knowledge can be addressed by anti-trust regulation. Existing risks, from torts to jobs and media, are likewise covered by existing laws.
[Figure: the evolution of the AI risk landscape in May 2023, January 2024, and January 2025]
Concerns about AI’s long-term risks should be approached cautiously. The precautionary principle, which advocates preventive measures in the face of uncertainty, can guide regulatory efforts.
Legal accountability should prevail over ethical discussions
The inflation of AI ethics frameworks—over 1,000 codes, declarations, and guidelines—risks overshadowing enforceable legal standards. While ethical discussions are valuable, they cannot replace the binding force of law. For example, the moral imperative ‘Thou shalt not kill’ underpins criminal laws against murder. Similarly, international humanitarian law holds military commanders accountable for civilian deaths, even in complex battlefield scenarios. Rather than debating the ‘ethics of killer drone algorithms’, we should enforce existing laws to prosecute unlawful actions.
Ethics frameworks for AI are akin to safety seminars for arsonists: well-intentioned but ineffective without legal consequences. When harm occurs, the primary question should not be ‘Was this algorithm ethical?’ but ‘Who broke the law?’. Prioritising accountability over abstract ethical debates is not cynical—it is practical.
What needs to be regulated in the AI domain?
Picture the world of AI as a towering pyramid, each layer representing a critical dimension of its functionality, as described below. As we ascend this pyramid, we uncover whether AI requires its own specialised regulations—or if existing frameworks are sufficient to keep it in check.
Layer 1: AI’s hardware and computational power are already governed by a web of technical standards. Think of AI farms—massive data centres buzzing with activity—regulated by environmental laws that monitor energy and water consumption. Even the global flow of semiconductors is tightly controlled, with the USA spearheading export restrictions to certain nations.
The verdict? No new regulations are needed here. Sufficient rules are in place; the challenge is enforcing them effectively.
Layer 2: Algorithms and AI capabilities have been at the heart of regulatory debates, with concerns ranging from AI safety to alignment and bias. Initially, the focus was on quantitative metrics such as the number of parameters or FLOPs (the total number of floating-point operations used in training). However, models like DeepSeek have turned this approach on its head, demonstrating that powerful AI does not always require massive computational resources.
Layer 3: Data and knowledge, the lifeblood of AI, are already heavily regulated by data protection and intellectual property laws. Yet, the courtroom drama unfolding today reveals the cracks in these frameworks. In the USA, the New York Times is suing OpenAI, while Universal Music Group is suing Anthropic. Across the Atlantic, Getty Images is taking Stability AI to court.
These cases highlight the tension between innovation and ownership. While existing laws provide a foundation, the rapid evolution of AI is testing their limits. The question isn’t whether we need new rules but how to adapt the old ones to a world where AI can generate, remix, and repurpose content at scale.
The momentum for strict ‘algorithmic governance’ has waned since 2023, when fears of AI posing an ‘extinction risk’ dominated discussions. President Trump’s dismantling of Biden’s AI safety executive order further eroded this push. Now, governments and organisations are grappling with how to regulate AI capabilities in a landscape where smaller and smarter models are outpacing brute-force computation.
This is where AI’s societal, legal, and ethical consequences come into sharp focus. Whether it’s deepfakes, biased hiring algorithms, or autonomous weapons, the risks stem not from the technology itself but from how it’s applied.
The Apex: AI uses. Here, the entire legal system comes into play—contract law, tort law, labour law, criminal law, and more. A foundational principle applies: those who develop or benefit from a technology must bear responsibility for its risks. This means holding companies accountable for harm caused by their AI systems, whether through negligence, misuse, or malicious intent.
The pyramid reveals a clear pattern: most layers of AI are already regulated. Hardware is controlled, data is protected (albeit imperfectly), and algorithms are evolving beyond the reach of rigid metrics. The real challenge lies at the apex—governing AI’s uses. Rather than crafting new, AI-specific laws, the focus should be on adapting and enforcing existing frameworks.
Conclusion: Humans rule, machines follow
AI is a tool, like a hammer or a horse. Hammurabi didn’t regulate hammers; he held builders accountable. We don’t need any ‘AI law’ because the law already binds the humans behind the machines. If we discard the logic of Section 230’s misguided immunity and recommit to timeless principles—liability, transparency, and justice—we’ll govern AI just fine.
After all, 4,000 years of legal wisdom can’t be wrong.