Part 5: Rethinking legal governance in the metaverse
This post is part of the series UN 2.0 and the Metaverse: Are We Seeing What Is Possible?
- Part 1: Harnessing technology, driving SDGs
- Part 2: CitiVerse: Turning the world into a global village (or rather sandbox?)
- Part 3: Readiness across the spectrum: Countries
- Part 4: SDGs as ethical, human rights-based, and technological boundaries of the metaverse
- Part 5: Rethinking legal governance in the metaverse
The legal world is often criticised for lagging behind technological progress or even slowing it down. But is this perception justified, or are we caught in a ‘Groundhog Day’ cycle, repeating past mistakes instead of learning from them? Are we endlessly navigating the Collingridge Dilemma, struggling to govern technology effectively as it rapidly evolves? This article examines the complexities of legal governance in the metaverse, where outdated regulatory paradigms collide with emerging technological realities. It explores how trust in legal systems is eroding, why confidence in governance must be redefined, and how the Global Digital Compact signals a shift towards multi-stakeholder responsibility in shaping a sustainable digital future.
A. Recap of Parts 1 to 4
In the previous articles, we explored the relationship between UN 2.0 and the concept of the metaverse. We learnt how UN 2.0 consciously integrates technology into our societies and how countries are mapping out their paths into this future. Different benchmarking approaches provide the framework for understanding and measuring progress in this socio-technological transformation. Part 4 highlighted the role of the Sustainable Development Goals (SDGs) as more than just use cases for virtual worlds: they also serve as boundaries to ensure the ethical and human-rights-based development of these worlds. Although still regarded as nascent, the metaverse is already confronting the legal world with the familiar accusation that it lags behind.
B. What’s wrong with the legal world?
The legal world often faces blame, shame, and the looming threat of technological disruption. In this article, ‘the legal world’ and ‘the law’ are used broadly to refer to various aspects of legal systems. If this seems exaggerated, consider a moment from the morning session of the UN Virtual Worlds Day. During the discussion, Karl-Filip Coenegrachts from Open & Agile Smart Cities (OASC) introduced himself by saying, ‘I’m a lawyer. Apologies for that.’
1. Law as laggard
The legal world is frequently accused of lagging behind technological progress, or worse, of actively slowing it down. But is this criticism justified? Karl-Filip Coenegrachts highlighted the complexity of legal frameworks, admitting that even as a lawyer, he struggles to keep track of all the European regulations relevant to the CitiVerse and virtual worlds.
‘Much of it has already been regulated,’ he noted, ‘but at the city and community levels, no one knows how to navigate these regulations. As a result, they tend to ignore them and simply do whatever they believe is best.’
To address this issue, Coenegrachts called for clearer, globally coordinated regulations that align with the international nature of these initiatives. He emphasised that such collaboration should take place at the UN level.
The legal world faces two seemingly contradictory challenges at once: too much and too little regulation. Ignorance lies at the core of both. It results from our limited human capacity to process information, whether because of its sheer volume or its complexity. Legal tech and legal informatics experts have long tried to address this problem through automation. Even though they lack genuine human semantic understanding, current large language models (LLMs) can already assist human experts when used properly.
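To make this concrete, the sketch below illustrates (in Python) one common pattern behind such legal-tech assistance: before any language model or lawyer reads anything, candidate regulations are ranked by simple lexical overlap with a practitioner’s question, so that only a handful of plausibly relevant texts need to be reviewed. This is a minimal, hypothetical illustration; the regulation snippets, scoring function, and query are placeholders, not a description of any specific tool mentioned at the UN Virtual Worlds Day.

```python
from collections import Counter
import re

# Hypothetical, abbreviated regulation snippets (placeholders, not real legal text).
REGULATIONS = {
    "DSA": "online platforms must assess and mitigate systemic risks from illegal content",
    "AI Act": "providers of high-risk AI systems must ensure transparency and human oversight",
    "GDPR": "personal data must be processed lawfully, fairly and in a transparent manner",
}

def tokenize(text: str) -> Counter:
    """Lower-case a text and count its word occurrences."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def relevance(query: str, doc: str) -> int:
    """Score a document by how many query words it shares (crude lexical overlap)."""
    q, d = tokenize(query), tokenize(doc)
    return sum(min(q[w], d[w]) for w in q)

def shortlist(query: str, top_n: int = 2) -> list[str]:
    """Return the regulations most lexically similar to the query."""
    ranked = sorted(REGULATIONS, key=lambda name: relevance(query, REGULATIONS[name]), reverse=True)
    return ranked[:top_n]

if __name__ == "__main__":
    question = "Which rules oblige platforms to deal with illegal content and ensure transparency?"
    print(shortlist(question))  # e.g. ['DSA', 'AI Act'] – candidates a lawyer or an LLM would then read in full
```

Real legal-tech systems use far richer retrieval (embeddings, citation graphs, curated databases), but the division of labour is the point: machines narrow the search space, while humans supply the semantic and legal judgement.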
When even lawyers struggle to keep track of all existing regulations – let alone anticipate those needed – we have a profound, systematic problem. Ignorance erodes trust in the system: trust in our legal systems, our broader social structures, and even our social contracts.
2. The erosion of trust
This ignorance directly contributes to the erosion of trust in our legal systems. Trust is lost when a system cannot protect its members from harm – not even from crime. Cyberspace, social media, and (generative) AI are challenging our social contracts worldwide by undermining trust in our systems’ capability to protect people. To illustrate the severity of the situation, Madan M. Oberoi from Interpol asked participants at the UN Virtual Worlds Day to guess the percentage of convictions for crimes committed in cyberspace. Even the lowest estimate of 7% turned out to be overoptimistic.
According to Interpol, only 0.33% of cybercrimes end in a conviction. Put differently, criminals can be 99.7% confident that nothing will happen to them. Oberoi highlighted that the decisive factor for safety – whether in physical space or in cyberspace – is the certainty of consequences. For cybercrime, that certainty amounts to just 0.33%. This figure does not even include harmful acts that are not classified as crimes but nonetheless cause considerable harm.
3. Virtual harm – Real trauma
An example mentioned during the event was the virtual sexual assault of a girl’s avatar on a virtual reality (VR) platform. Being sexually assaulted in VR does not inflict physical harm but constitutes a profound violation of one’s integrity – a form of dehumanisation and virtual disintegration. The trauma experienced by the victim can be similar to the trauma inflicted by physical rape. This was not an isolated case. According to Fabio Maggiore, head of Cybersecurity Governance at UNICC, many such cases exist.
According to the Interpol expert, harassment and abuse are the main security threats in the metaverse (UN Virtual Worlds Day). Virtual spaces minimise our sense of uncertainty and perceived risk. The online disinhibition effect is well known, particularly on social media platforms. People behave differently when communicating online – for better or worse. A sufficient degree of certainty of consequences is vital for designing regulatory frameworks and (re)establishing the trust that is needed.
4. Presence and embodiment
If you have never been immersed in a virtual space with a VR headset, the sense of presence in such an environment is difficult to grasp. The fear triggered by a perceived fall, for example, closely resembles the reaction in real life. From a psychological perspective, the critical elements are the concepts of presence and embodiment: the more realistic the virtual environment becomes, the stronger the experience. ‘As technology becomes better and, I would say, more convincing to our brains, that immersion has consequences that we may not realize,’ stated Jaimie Stuart from the United Nations University.
We quickly feel present in the virtual space and identify with our avatars. We experience embodiment, meaning it feels as though we are in this virtual world despite the current modest quality of many VR spaces. The effect, which can be profoundly harmful, is also harnessed to raise awareness, improve learning, and enable people to experience digital information in a more impactful way, as outlined in the case studies for the SDGs in Part 4.
5. Stuck in an outdated paradigm
The increasing realism of virtual spaces, powered by advances in AI simulation, further complicates the challenge of maintaining human integrity. According to Brent Milliken from Inverse, advances in generative AI have been genuinely groundbreaking – and their significance is widely misread. The scale of technological progress over the past few years has largely been ignored: leading tech companies are not focused on animation; they are focused on simulation.
‘What truly makes it groundbreaking is that it’s not about animation; it’s about physics. […] A deepfake will be part of our lives regardless of who or where we are. It is just near reality now’, stated Milliken.
Behavioural avatars are not just static animations; they can exhibit complex behaviours, learn from their interactions and experiences with human users, and adapt. Their actions are driven by underlying simulation and AI, not merely pre-programmed scripts. The speaker highlighted the dangers of manipulating humans into unintended behaviour – not only through avatars but also via the artificial environment itself.
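To illustrate the difference between a scripted animation and behaviour that adapts, the minimal sketch below (in Python, with entirely hypothetical names and an intentionally toy learning rule) shows an avatar that shifts its conversational style depending on how a user has reacted so far – the kind of feedback-driven adaptation described above, at a trivially simplified scale.

```python
import random

class BehaviouralAvatar:
    """Toy avatar that adapts its style from user feedback (hypothetical illustration)."""

    STYLES = ["formal", "casual", "playful"]

    def __init__(self):
        # Start with no preference: every style is equally likely to be chosen.
        self.scores = {style: 1.0 for style in self.STYLES}

    def choose_style(self) -> str:
        """Pick a style, favouring those that received positive feedback before."""
        total = sum(self.scores.values())
        weights = [self.scores[s] / total for s in self.STYLES]
        return random.choices(self.STYLES, weights=weights, k=1)[0]

    def observe_feedback(self, style: str, liked: bool) -> None:
        """Reinforce or dampen a style based on how the user reacted."""
        self.scores[style] += 1.0 if liked else -0.5
        self.scores[style] = max(self.scores[style], 0.1)  # never rule a style out entirely

if __name__ == "__main__":
    avatar = BehaviouralAvatar()
    for _ in range(20):  # simulate 20 exchanges with a user who responds well to 'playful'
        style = avatar.choose_style()
        avatar.observe_feedback(style, liked=(style == "playful"))
    print(avatar.scores)  # 'playful' ends up with the highest score
```

A purely scripted avatar would always respond the same way; the regulatory difficulty arises precisely because adaptive behaviour of this kind, scaled up by modern simulation and generative models, cannot be fully reviewed in advance.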
The mistake made over the past 30 years was absolving providers of any responsibility for the content they host – most notably through Section 230 of the 1996 Communications Decency Act in the USA.
6. From Section 230 to the EU’s AI Act
Section 230 is a piece of United States legislation that provides immunity from liability for online platforms regarding third-party content. In essence, it protects websites and internet service providers from being held responsible for content posted by their users. This legislation was designed to help ‘jump-start’ the emerging industry behind the evolving internet (or, more specifically, the World Wide Web). This protection has been instrumental in the growth of the internet, but it has also been criticised for enabling the spread of harmful content, such as misinformation and hate speech.

The 2022 Digital Services Act (DSA) is a landmark piece of legislation in the European Union aimed at creating a safer and more accountable online environment. Unlike the USA’s approach with Section 230, the DSA places greater responsibility on online platforms for moderating illegal and harmful content.
Another landmark piece of EU legislation is the 2024 Artificial Intelligence Act (AI Act). For instance, Article 5 of the AI Act prohibits harmful manipulation and deception, the exploitation of vulnerabilities, and emotion recognition in workplaces and educational settings. However, will this legislation be sufficient to ensure the safe development and use of behavioural avatars and adaptive virtual environments?
7. The metaverse as a gigantic foresight exercise
The visions of the metaverse, virtual worlds, and Web 4.0 are immense exercises in foresight. By imagining what could be possible, leading tech and social media companies have revealed how they envisage the future – and have been actively building it for at least four years. Nothing we are seeing now should surprise us, yet it does. The metaverse’s vision is so vast and utopian that it leaves us confused and, consequently, in a state of uncertainty – a highly uncomfortable mental state that often leads to ignorance of developments like the metaverse.
However, UN 2.0 is about fostering a forward-thinking culture and incorporating technology that benefits everyone, with input from all stakeholders. The Global Initiative on Virtual Worlds and AI – Discovering the CitiVerse aims to realise this vision by promoting open, interoperable, and AI-powered virtual worlds that can be used safely and with confidence. A forward-thinking culture can only be built on trust. The question is: how can we (re)establish trust in such a situation?
C. Implied confidence
A different approach is needed to re-establish confidence in the metaverse and to address the prevailing confusion and uncertainty. Experts from the Global Initiative propose such a framework, built around the notion of ‘implied confidence’. Confidence here means being certain of one’s own abilities and trusting the visions of the future that have been outlined.
For this purpose, three progressively detailed confidence frameworks have been developed. These frameworks build upon each other, starting with general ethical principles (FGMV-06), then outlining models for user participation (FGMV-23), and culminating in a comprehensive structure for security and governance (FGMV-24).
1. FGMV-06: Ethical guidelines for confidence and security
The technical report Guidelines for Consideration of Ethical Issues in Standards That Build Confidence and Security in the Metaverse (FGMV-06) introduces a ‘user implied contract of confidence’, based on the broader ethical guidelines of human rights principles (Universal Declaration of Human Rights) and the SDGs. This concept refers to an implicit agreement between a user and a platform provider, which is neither formally written nor spoken but is instead implied through the user’s actions and participation in the metaverse environment.
Trust and confidence should be established from three dimensions: co-ownership, co-responsibility, and transparency. Co-ownership is achieved by granting users control over digital assets. Co-responsibility is fulfilled through shared accountability between users and platforms. Furthermore, transparency is ensured by clearly communicating risks and safeguards.
Certainty and the belief in reliability are considered key factors in establishing the necessary user confidence. The report highlights that co-creation is not merely an option but a fundamental aspect of user engagement in the metaverse. Compared to the current web landscape, the metaverse is characterised by much deeper engagement, blurred boundaries, increased access to personal data, the potential to redefine reality, and an evolving participatory culture.
2. ITU FGMV-23: Online and offline implications of confidence
The technical report titled Technical Report on Considering Online and Offline Implications in Efforts to Build Confidence and Security in the Metaverse (FGMV-23) introduces the metaverse participation realms, which distinguish between different spaces within the metaverse based on the scope of participation in its digital and physical components:
- The intra-metaverse refers to fully digital participation.
- The peri-metaverse encompasses interactions that bridge the digital and physical realms.
- The extra-metaverse covers offline individuals who are nonetheless affected by metaverse policies.
3. ITU FGMV-24: A structured framework for confidence
The third technical report, titled A Framework for Confidence in the Metaverse (FGMV-24), is a pre-standardisation approach to confidence in the metaverse. It further expands the user confidence framework with structured security and safety dimensions. It extends from trust dimensions such as privacy, security, resilience, and intellectual property to human dimensions such as safety, inclusion, sustainability, and well-being. By defining personhood in the metaverse, the framework acknowledges digital identities and avatars as legal and ethical entities. The confidence governance model is a multistakeholder approach that involves platforms, users, and policymakers.
4. Facing the dilemma
Let’s make sense of this user confidence framework and implied confidence.
a. Business as usual: One-sided trust
Based on past experience with platform responsibility, this framework could be seen as a pragmatic acceptance of a tech-driven utopian vision – trusting in its vast potential while acknowledging the sacrifices made along the way for the sake of a better future. This reflects a technocratic narrative that has gained traction under the label ‘long-termism’, particularly in relation to AI development.
b. A new approach: Co-creation
This article series aims to explore the relationship between UN 2.0 and the metaverse. Rather than endorsing the framework outright, we will place it within the broader context of UN 2.0 and the AI Action Summit 2025, which reflected the UN’s emphasis on non-binary, multistakeholder governance and pointed towards rebalancing the AI risk topography by prioritising tangible short-term risks.
The user confidence framework puts the user in the foreground. It highlights the necessity of co-creation and co-responsibility to restore the confidence needed to bring the vision of the metaverse into reality. As we noted in Part 3, engagement is vital in developing the metaverse’s ecosystem. Although the new framework focuses on the user, the implication of trust is embedded within the multistakeholder perspective.
This also applies to other stakeholders, such as lawmakers and tech companies, signalling that visionary plans will fail unless this confidence is secured. In our discussion of lagging behind, this means that companies must acknowledge that they are, in fact, falling behind by clinging to an outdated narrative from the last century – the narrative of uncontrolled technological development, which underpins the aforementioned Section 230.
D. Back to the 1980s: The Collingridge Dilemma
To better understand the current governance challenges, we revisit a classic analysis of technology control: the Collingridge Dilemma. In 1980, David Collingridge examined the difficulty of regulating technology by considering two aspects (‘horns’) in search of a more effective approach.
The first aspect concerns the predictability of a technology’s social impact. When a technology is still in its infancy, the interaction between technology and society is not yet strong enough to predict the harmful social consequences of its full development. However, this stage would be the ideal point at which to justify controlling the technology.
The second aspect arises once a technology is sufficiently developed and widespread for its unintended social consequences to become apparent. By this stage, however, regulation becomes far more difficult. The technology is already embedded in society, the economy, and other technological systems, making regulation disruptive, costly, and slow (The Social Control of Technology).
1. Confidence and the essence of control
As the working group emphasises confidence, we will go back in time to examine its role in Collingridge’s The Social Control of Technology. The term ‘confidence’ is used in relation to the first aspect of the control dilemma.
As early as 1980, the author criticised the fact that the high uncertainty of future developments and the limitations of forecasting did not allow for the required confidence to sufficiently justify the control of technology. Instead of attempting to foresee social consequences, he recommended addressing the second ‘horn’. If harm detection is only possible in retrospect, then the objective must be to ensure early detection, and that technology remains controllable.
The essence of controlling technology is not in forecasting its social consequences, but in retaining the ability to change a technology, even when it is fully developed and diffused, so that any unwanted social consequences it may prove to have can be eliminated or ameliorated. It is, therefore, of the greatest importance to learn what obstacles exist to maintenance of this freedom to control technology.
– David Collingridge, The Social Control of Technology
2. The roots of the dilemma
The roots of the dilemma extend from systemic constraints and temporal and scalability challenges to ideological biases and expert advocacy.
a. Systemic constraints
Entrenchment and competitive pressures create systemic constraints: the more a technology becomes integrated into society, the harder it is to change. Furthermore, when technology is driven by competition – whether military, economic, or corporate – it becomes self-reinforcing and difficult to control, as competitive pressures often prioritise rapid deployment over careful consideration of social impacts.
b. Temporal and scalability challenges
Challenges related to planning, lead times, and expansion arise: the tendency to hedge against uncertainty can lead to technological overcommitment, making control more difficult. Long development times trap policymakers in outdated decisions, as technology evolves faster than regulatory frameworks. Moreover, the sheer scale and complexity of modern technologies make them increasingly difficult to adjust or replace.
c. Ideological biases and expert advocacy
Dogmatism and the role of expertise introduce ideological biases: political and ideological momentum can blind decision makers, making bad technologies difficult to reverse. Moreover, experts are not neutral or objective; they advocate for their interpretation of data. Collingridge challenged the belief that experts provide disinterested, purely factual advice, arguing that they should be seen as participants in debate rather than ultimate authorities. Biases and interests shape their views, making them more like advocates than neutral experts. To Collingridge, advocacy itself is not a problem.
3. Monitoring as control
Collingridge advises making these biases and interests visible so that a ‘science court’ can make informed decisions that can later be adapted to changing circumstances. In this view, experts who advocate for their interests also, each from their own vantage point, monitor technological progress – and it is this distributed scrutiny that helps keep technology controllable.
The science court would approach decisions differently, recognising the biases and interests underlying various experts’ interpretations of data. To Collingridge, monitoring is the best way to control technology, as predicting consequences is impossible. However, monitoring is effective only if the government acts on early warning signs, despite uncertainty and industry opposition.
4. Groundhog Day: AI dilemma
Nearly half a century later, these foundations still feel remarkably relevant, particularly in the case of (generative) AI, a key metaverse technology.
a. AI’s systemic constraints
AI is rapidly becoming entrenched in sectors such as healthcare, finance, and media, making it difficult to reverse or modify its use, even when unintended consequences emerge. Meanwhile, competitive pressures in AI development often favour rapid deployment and market dominance over ethical considerations and social impact assessment. This can lead to a ‘race to the bottom’, where companies prioritise speed over safety and responsibility.
b. AI’s temporal and scalability challenges
AI development presents unique challenges for planning, lead times, and expansion. Its rapid evolution makes it difficult for policymakers to keep pace. While regulations take years to develop and implement, AI capabilities can change drastically within months, often rendering regulations outdated before they take effect. Additionally, the global scale of AI systems – characterised by decentralised and interconnected models – transcends national borders. This makes effective regulation by individual countries difficult, necessitating international cooperation and coordination.
c. AI’s ideological biases and expert advocacy
Ideological biases and expert advocacy influence AI governance. Strong ideological positions on AI – ranging from unfettered innovation to existential risk mitigation – can hinder the development of balanced and effective regulations. While experts play a crucial role in informing policy decisions, they often represent specific interests and viewpoints. This can lead to biased interpretations of data and distorted regulation unless experts are recognised as advocates, with their biases explicitly framed as their point of view.
E. Establishing a feedback loop
Crucially, when tech companies prioritise rapid advancement and dismiss concerns about potential harms, they effectively sever the feedback loop necessary for responsible technological development. This loop is essential for integrating technology into social systems in a way that benefits humanity and mitigates unintended consequences. We risk repeating past mistakes and failing to learn from previous technological revolutions by suppressing this feedback loop.
1. The narrative of Groundhog Day
While the feedback loop remains disrupted, the establishment of effective governance cannot progress. We are not merely lagging behind – we are trapped in a socio-technical ‘Groundhog Day’ narrative. Groundhog Day, a 1993 film, depicts a weatherman forced to relive the same day repeatedly. Similarly, we find ourselves caught in a recurring cycle with each technological advancement. The promise of progress is obscured by our failure to learn from past regulatory shortcomings. This ‘Groundhog Day’ scenario highlights the urgent need to break free from this repetitive pattern.

F. Changing habits – Shifting mindsets
1. The narrative behind implied confidence
To escape this ‘Groundhog Day’ scenario, we must fundamentally shift our mindsets and adopt new approaches to technology governance. UN 2.0 and the Global Initiative on Virtual Worlds offer a potential framework for addressing the Collingridge Dilemma. By consciously integrating technology into the fabric of reality and promoting a multistakeholder approach, UN 2.0 seeks to foster a more proactive and collaborative model of technology governance. This approach recognises that governing technology is not merely about controlling it but about shaping its development and deployment to serve society as a whole.
Unlike in 1980, we now have extensive knowledge of technology’s societal effects – including the most tragic cases, such as teenage suicides (see Part 4). This awareness should give us the confidence to justify regulation. The nefarious use of deepfake technology, the harmful effects of disinhibition, and the lack of practical enforceability of existing norms – whether due to the massive scope of technology or issues of sovereignty – are no longer matters of speculation but foreseeable crises.
2. A new twist in the plot
The user confidence framework introduces a subtle yet crucial shift in the dominant narrative of confidence. It reorients the focus towards user empowerment and co-responsibility as essential conditions for the metaverse’s development. From this perspective, the framework acknowledges that true confidence in the metaverse does not arise from blind faith but from active participation, shared ownership, and a collective commitment to shaping the future of technology.
3. A unified effort
For such a unified effort, multistakeholder governance is essential. The governance of technology must involve governments, private sector actors, civil society, and academia. Rather than treating regulation as a race against time and blaming the legal (governance) world for lagging, governance should incorporate flexibility, participation, and continuous reassessment to prevent technological irreversibility. The question is not whether technology can be controlled, but how governance can remain adaptive enough to retain control.
In 2000, the UN launched the Global Compact, an initiative designed to voluntarily bring together companies, foundations, associations, and academic entities committed to implementing universal sustainability principles and supporting United Nations goals. The mission of the UN Global Compact is to contribute to the creation of a sustainable and inclusive global economy that delivers lasting benefits to people, communities, and markets.
4. The Global Digital Compact
More than two decades later, in 2024, this spirit of collaboration was extended to the digital sphere through the Global Digital Compact (GDC), a framework for guiding international cooperation and establishing shared principles on digital governance. The GDC was initially envisioned as a more binding agreement than the adopted version. Its original objectives included establishing a comprehensive international framework for data governance and imposing binding guidelines on member states for AI governance (Unpacking Global Digital Compact: Actors, Issues, and Processes).
As negotiations progressed, however, it became clear that member states varied in their readiness to commit to such ambitious frameworks. This led to a softening of the language and commitments, ultimately resulting in a non-binding agreement that promotes international cooperation rather than enforcing strict legal obligations. Although the GDC sets out principles and commitments, it allows for flexibility in implementation, recognising the diverse national contexts of member states. The commitments serve as guiding actions rather than enforceable legal requirements.
5. The UN Virtual Worlds Day: A momentum for human collaboration?
The UN Virtual Worlds Day took place in June 2024, just a few months before the adoption of the Global Digital Compact (GDC) in September 2024. This timing suggests a strategic effort to leverage the event as a platform for raising awareness and building momentum for digital governance discussions leading up to the Summit.
In fact, UN Virtual Worlds Day serves as a case study for achieving SDG 17: Partnerships for the Goals, highlighting the essential role of collaboration between governments, the private sector, and civil society in shaping the future of technology and ensuring its benefits are equitably shared. However, while creating momentum is important, we must remain cautious not to fall into the ‘Groundhog Day’ dilemma – where political and ideological momentum blinds decision makers, making harmful technologies difficult to reverse.
G. Conclusion of Part 5
Humanity is drawn to technology both as a means of improving our lives and as a source of wealth and power. As the film Groundhog Day illustrates, the passage of time is meaningless without learning; if we fail to adapt, we remain trapped in a cycle of repetition. Rather than shaming and blaming Justitia – the embodiment of justice and the legal system – technology facilitators must embrace impactful governance and adapt what is necessary to sustain human life.
What, then, is wrong with the legal system? The issue is not the system itself but the overwhelming volume of existing regulations, outdated paradigms that hinder effective governance, and the lack of a unified, multistakeholder approach to tackling the challenges of the digital age. We have already witnessed a pivotal shift in AI governance at the Paris AI Summit 2025 – a move away from speculative risks toward innovation, job creation, and the public good. The question now is whether we will move beyond the blame game and collaborate to shape a future where technology serves humanity and contributes to a more equitable and sustainable world.
H. Next up
As we navigate metaverse development, it’s crucial to remember that standards are not just about technology – they’re about people. Our next article will explore how the standard-setting process can help establish a new relationship between technology and society. We’ll discuss how lessons from disaster regulation and insights from the UN Virtual Worlds Day can contribute to building an innovative and beneficial digital ecosystem for all.
I. Ask Diplo’s AI Assistant
Are you curious to explore the 52 technical reports by the Focus Group on Metaverse and other relevant documents? To make research more accessible for our readers, we have developed a dedicated DiploAI Assistant for UN Virtual Worlds. If you have any questions, simply ask the DiploAI Assistant.