The year of AI clarity: 10 Forecasts for 2025

Published on 05 January 2025

Clarity is the keyword for AI and digital developments in 2025.

Clarity follows on the hype of 2023 and the grounding of 2024.

In 2025, we will gain a better understanding of AI’s risks and opportunities, and of the policy issues that must be regulated. By clarity, we also mean a return to digital basics. It’s easy to forget that even the most cutting-edge AI is built on decades-old foundations, like the humble TCP/IP protocol that underpins our digital reality.

Our 2025 forecast begins with the evolution of AI technology itself, exploring how geostrategic interests and positions are shaping its development. From there, we delve into governance, where these interests crystallise into policies and regulations. With the stage set, we turn to key issues: security, human rights, the economy, standards, content, the environment, and global development.

The first reality check for this forecast comes on 20 January, when we will test whether our predictions about President Trump’s tech priorities were right, followed by an outlook for the rest of the year.

Throughout the year, we will continuously monitor developments against the monitoring questions listed in each section below. You can also submit questions and topics on AI and digitalisation that you want us to monitor in 2025.

Best wishes for 2025!

Jovan Kurbalija

Be careful with AI predictions and forecasts!

Any AI prediction, including this one, should be approached with caution. The history of AI predictions is riddled with inaccuracies. Take Geoffrey Hinton, the 2024 Nobel Prize Laureate, who declared back in 2016:

‘We should stop training radiologists now. It’s completely obvious that within five years, deep learning will outperform radiologists.’

Yet, radiology—like many other professions—remains alive and well.

The list of flawed AI predictions is extensive, ranging from the exaggerated risks of open-source AI to AI’s impact on elections.

Why are Hinton’s and other AI predictions often false?

Hinton’s false prediction illustrates a common misconception about AI’s capabilities and limitations. Here’s why radiology wasn’t the “low-hanging fruit” for AI many thought it would be—and what we can learn from this miscalculation:

Quality of data: AI thrives on vast amounts of high-quality, annotated data. But in radiology, getting that data is no easy feat. Medical images are sensitive, requiring strict privacy protections and expert labeling. Plus, the diversity of images—based on patient demographics, diseases, and imaging techniques—makes it hard for AI models to generalize effectively. What works in one scenario often fails in another.

Lack of ‘ground truth’: Unlike identifying cats or dogs, interpreting medical images is complex. Radiologists often disagree on findings, and images can contain multiple abnormalities that need precise detection and analysis. This lack of a clear ‘ground truth’ makes it tough to evaluate AI’s performance (the sketch after this list makes the point concrete).

Workflow woes: Even if an AI model performs well in a lab, integrating it into the real-world radiology workflow is a whole other challenge. Radiologists need tools that are trustworthy, explainable, and seamlessly integrated into their systems. Add to that ethical, legal, and regulatory hurdles, and it’s clear why AI adoption in radiology has been slower than expected.
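To make the ‘ground truth’ problem concrete, here is a minimal sketch that scores agreement between two annotators using Cohen’s kappa, a standard inter-rater statistic. The labels are hypothetical, not real radiology data: when the experts themselves disagree (kappa well below 1), there is no clean label for an AI model to learn from or be evaluated against.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical findings from two radiologists reading the same 10 scans
# (1 = abnormality present, 0 = absent). Illustrative only, not clinical data.
radiologist_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
radiologist_b = [1, 0, 0, 1, 0, 1, 1, 0, 1, 1]

# Cohen's kappa corrects raw agreement for agreement expected by chance:
# 1.0 = perfect agreement, 0.0 = no better than chance.
kappa = cohen_kappa_score(radiologist_a, radiologist_b)
print(f"Inter-rater agreement (kappa): {kappa:.2f}")  # -> 0.40 here
# With contested labels like these, both training data and evaluation
# benchmarks inherit the experts' disagreement.
```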

So, what’s the takeaway? Hinton’s false prediction reminds us that AI’s potential is vast, but its path is riddled with complexities. Radiology isn’t disappearing—it’s evolving, with AI as a powerful tool rather than a replacement (read more). 

And perhaps the biggest lesson is that predicting AI’s impact is as much about understanding human systems as it is about the technology itself. The future of AI isn’t just about what it can do—it’s about how we choose to use it. 

While even incorrect predictions might seem like harmless speculation, they can have real-world consequences. For example, the overblown fears of existential risks from AI have led to a tsunami of AI governance initiatives, some of which may be premature or misdirected.

How can we guard against the risks of false predictions?

We have three main suggestions:

Avoid overreacting to uncertain predictions: Governance and business initiatives should not be based on highly uncertain predictions. The past two years have seen numerous AI governance initiatives driven by speculative risks, which may have diverted resources from more pressing issues.

Develop a balanced risk toolkit: A combination of tools and approaches is essential to address various risks while focusing on key ones. While many have come to see the 2023 emphasis on existential risk as misguided and overinflated, it is crucial not to swing to the opposite extreme and disregard existential risks entirely.

Include non-technical expertise: While technical expertise on how AI functions is important, it must be paired with deep insights into how societies adopt, adapt to, and integrate new technologies. Balancing these perspectives will be key to shaping AI’s role in a way that is both innovative and socially responsible.

What can we learn from Diplo’s annual predictions since 2011?

Diplo’s experience in predicting digital governance trends since 2011 shows that change often occurs much slower than perceived. Areas like data governance and content policies are examples of glacial progress in technology regulation.

With hindsight, the “slow governance for fast technology” approach has, despite its challenges, facilitated technological growth.

For more insights, consult Diplo’s previous predictions: 2011 | 2012 | 2013 | 2014 | 2015 | 2016 | 2017 | 2018 | 2019 | 2020 | 2021 | 2022 | 2023 | 2024

Trump and Tech: More of the same, but with a twist

As President Trump prepares to take office on 20 January, we can expect a mix of continuity and subtle policy shifts in the tech realm, as outlined here:

🔹 Historical continuity: Trump’s pro-business stance aligns with the US’s long-standing tradition of private sector-led innovation, resisting international regulations that could constrain American tech businesses.

🔹 Content regulation: Expect a de-emphasis on combating misinformation, as already signalled by policy changes at X and Meta. Key issues will be the future of Section 230 and dealing with the worldwide trend towards stricter content regulation.

🔹 AI policy: Trump will likely scrap Biden’s Executive Order on AI safety, focusing instead on innovation, workforce skilling, and global competitiveness. This will align with a global shift from AI safety narratives to ones of opportunity and development.

🔹 Geostrategy: Continuity in relations with China, with more export restrictions on advanced technology and further consolidation of a tech bloc of like-minded countries.

🔹 Digital taxes: Unresolved tensions over tech taxation will resurface after the failure of the OECD to introduce global solutions. Countries like Germany and France will revisit digital tax policies, potentially clashing with US tech giants and Trump’s administration.

🔹 Cryptocurrencies: The crypto industry stands to benefit, with Trump expected to introduce crypto-friendly regulations, including a strategic crypto reserve and improved access to banking services.

🔹 TikTok’s future: Trump will be open to a TikTok deal, avoiding major internal disruptions and a risky precedent that could expose US tech companies to ‘nationalisation’ pressure in other jurisdictions.

READ MORE

AI technology

In 2025, the ‘bigger is better’ paradigm in AI will be challenged. Smaller models, such as DeepSeek, are increasingly outperforming larger ones. These smaller models are cost-effective to train and use. For instance, the cost of training DeepSeek was USD 5.6 million: 100 times less than training models like Claude 3.5 Sonnet or Meta’s Llama 3, and 500 times less than Elon Musk’s Grok, trained on 100,000 Nvidia H100 GPUs.

Smaller AI models are also more efficient because their inference (generating answers) is faster and more affordable. Additionally, they require less energy for training and operations, making them more environmentally friendly. This shift from “bigger is better” to “small is beautiful” in AI technologies will have several significant impacts:

Reality-check of artificial general intelligence (AGI) narrative: Since the launch of ChatGPT in November 2022, there has been widespread speculation about the arrival of AI that can think and act like humans in any field. As AGI is not likely to emerge soon, there is an increasing narrative shift towards AI agents (see below) as a substitute for the envisaged AGI. This reframing of the AGI discussion will accelerate in 2025. 

The power of AI agents: An AI agent is a system or program capable of autonomously performing tasks on behalf of a user or another system by designing its own workflow and utilising available tools. This encompasses a wide range of functionalities beyond mere natural language processing, including decision-making, problem-solving, interacting with external environments, and executing actions (IBM, 3 July 2024).

A combination of AI, human expertise, and specific use cases will converge around AI agents, which will dominate the AI landscape in 2025. Similar to previous technological phases, there will be a “tech tsunami” of AI agents promising solutions to all our problems. The best way to navigate this tsunami is to choose AI agents based on practical needs—whether a new tool is nice or genuinely helpful in tasks ranging from drafting diplomatic agreements to arranging protocol details for official dinners or summarising meeting reports.
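To make the agent pattern concrete, here is a minimal, self-contained sketch of an agent loop in Python. The tools and the keyword-based ‘planner’ are hypothetical stand-ins: a real agent would use an LLM to choose tools, chain calls, and verify its own output.

```python
from typing import Callable

# Hypothetical tools an agent might call (stand-ins for real integrations).
def summarise_report(task: str) -> str:
    return f"Summary of requested material: {task[:40]}..."

def draft_agreement(task: str) -> str:
    return f"Draft text prepared for: {task[:40]}..."

TOOLS: dict[str, Callable[[str], str]] = {
    "summarise": summarise_report,
    "draft": draft_agreement,
}

def agent(task: str) -> str:
    """Toy agent loop: pick a tool based on the task, run it, return the result.
    Real agents plan with an LLM, call several tools, and check intermediate steps."""
    for keyword, tool in TOOLS.items():
        if keyword in task.lower():
            return tool(task)
    return "No suitable tool found; escalating to a human."

print(agent("Please summarise the meeting report on digital trade"))
print(agent("Draft an agreement outline for the protocol dinner"))
```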

Bottom-up growth of AI: Affordable and smaller AI models will facilitate the growth of bottom-up AI, addressing the specific needs of communities, companies, and countries. This approach is both technically feasible and financially viable. Moreover, it grounds increasingly abstract discussions about ethics and biases in local cultural contexts. Local communities can develop AI that reflects their unique cultural and ethical values while protecting data and knowledge in simple yet effective ways.

Open-source AI prevails: The open-source approach has won the competition against closed models. The 2023 narrative that open-source AI could pose dangers has proven unfounded. While the AI industry initially pushed for closed models, citing concerns about misuse, the digital world is returning to openness as the preferred approach. In addition to platforms in the USA and Europe, China is becoming a major player in open-source AI: today, Qwen 2.5 is among the most powerful open-source AI models, and open source will be established as the dominant approach.

Necessity remains the mother of invention in the AI era: DeepSeek developed one of the most powerful AI models using a fraction of the funding available to major companies and older-generation Nvidia GPUs, which are still exportable to China. These limitations were overcome through the innovation and creativity of the development team.

AI is becoming a commodity: Advances in technology have made it possible to develop large language models (LLMs) and AI platforms with limited resources. The almost daily appearance of new LLMs has led to what is described in China as the ‘war of a hundred models’. Affordable AI is growing rapidly worldwide. Yet…

AI transformation is a complex task: In 2025, many businesses and organisations will search for a formula for shortcuts along the Gartner hype cycle towards the ‘plateau of productivity’.

AI Transformation: The commodity within, the cultural shift beyond

AI is an affordable commodity, but AI transformation is very ‘expensive’.

At first glance, this statement seems paradoxical. How can something so readily available be so ‘expensive’ to put to work? Yet it captures the dilemma facing businesses, governments, and organisations today.

AI has become accessible to many – you can create an AI chatbot in hours – but unlocking its potential requires far more than technology. It demands a shift in professional cultures, a break from old routines, and embracing new ways of thinking and problem-solving.

Effective AI adoption isn’t about purchasing the latest software or algorithms; it’s about reshaping how we work, collaborate, and innovate.

AI transformation requires challenging the status quo, rethinking long-held practices, and fostering a culture of continuous learning and adaptability. But the rewards are immense. 

The good news? This journey isn’t just about efficiency, profit, or technological shift; it’s a cultural and philosophical evolution that contributes to the well-being of our communities, countries, and humanity as a whole. Moreover, AI nudges us to reflect on what it means to be human – both ontologically and spiritually.

As we use AI effectively, we may find ourselves closer to answering some of humanity’s eternal questions of purpose and happiness and dealing with our predicaments.

How much are new(er) technologies developed as ‘solutions to problems’ and how much as ‘solutions in search of problems’?

Monitoring update will be provided in February 2025.

Why do we want AGI or superintelligence? And why do humans tend to be obsessed with building AI that matches human intelligence and has human attributes?

Monitoring update will be provided in February 2025.

How to best leverage the power of AI agents while not seeing them as a substitute for human expertise?

Monitoring update will be provided in February 2025.

What does it take for local communities to be able to develop AI that reflects their cultural and ethical values while protecting their data and knowledge?

Monitoring update will be provided in February 2025.

Geostrategy

In 2025, geography will play an increasingly significant role in shaping global politics and economies. The strength of national borders as barriers to the flow of capital, people, and goods will intensify. A central question will revolve around the global flow of data. So far, internet traffic has resisted significant fragmentation, but will this remain the case in 2025?

As digital geostrategy gains prominence, the influence of China and the USA in the digital realm will grow. However, the geostrategic landscape will not be purely bipolar. In certain sectors, such as digital trade, new power centres are emerging, particularly in the Global South, reflecting a more multipolar digital economy.

Geopolitics often dominates media coverage, focusing on the use of technology to advance security and other non-economic interests. Geoeconomics, on the other hand, centres on accumulating wealth and expanding markets. Meanwhile, emotions and perceptions help to explain why societies embrace or resist technology, reflecting a spectrum of enthusiasm, bias and fear. 

Together, these dimensions—geopolitics, geoeconomics, and geoemotions—shape the complex interplay between technology, society, and global power dynamics in the 21st century.


Geopolitics

Digital networks and AI developments are critical assets for countries worldwide. Thus, they become central to national security, the projection of power, and the protection of national interests. Over the last few years, political considerations have been prioritised over economic interests. This is particularly noticeable in various export restriction regimes on semiconductors, limits on market access, the deployment of submarine cables, and the launching of satellites.

Semiconductors

The growth rate of the semiconductor industry will slow in 2025 to 12.5%, down from 16% in 2024, according to World Semiconductor Trade Statistics (WSTS).

Lower demand, particularly for GPUs, could follow from new AI models that require less processing power for training and, in turn, fewer GPUs.

The USA is starting to bring the semiconductor industry back home with the opening of a fabrication plant (or ‘fab’) in Arizona. As the USA restricted the export of semiconductors, China invested heavily in its local industry; it will focus on producing less advanced but essential, so-called ‘mature-node’ chips with wider economic use.

Until now, AI has been processed in powerful data centres. In 2025, we will see the emergence of ‘AI factories’ purpose-built to train AI models. Nvidia is preparing its new Blackwell GPU for AI processing.

EVENTS

First meeting of the International Advisory Body for Submarine Cable Resilience | late February 2025, Abuja

IPCC Plenary Meeting | 15-17 April 2025, Montreal


Geoeconomics

In 2025, the trend of ‘securitisation’ of the economy will significantly impact the tech sector, compelling companies to align their economic interests more closely with their home countries’ political and security priorities. This shift is driven by the growing recognition of technology as a strategic asset in geopolitics. 

Currently, tech companies wield unprecedented power, often surpassing the GDPs of entire nations. For instance, Apple’s market capitalisation in January 2025 was $3.524 trillion, Nvidia’s was $3.262 trillion, and Microsoft’s was $3.101 trillion; each is comparable to the total 2023 GDP of the entire African continent ($3.1 trillion) and close to the GDPs of the UK ($2.27 trillion), France ($3.03 trillion), and India ($3.57 trillion). Other tech giants like Amazon, Meta, Alphabet, Alibaba, and Tencent command similarly vast valuations and profits, further emphasising the economic clout of the tech industry.

The power of tech companies extends far beyond technology, permeating various aspects of global society and governance, including social influence, data centralisation, and political impact. No company in the history of humanity, including the East India Company, has had such combined power extending beyond the economy to social and political realms.

In 2025, tech giants will likely face pushback from national governments and local companies. For example, Flipkart in India, which controls one-third of the market, and Mercado Libre, an Argentinian firm, are challenging global tech giants in their respective regions. According to The Economist, this trend is also supported by bottom-up financial innovation from local players such as M-Pesa in Africa and Nubank in Brazil and across South America.

Furthermore, governments worldwide are increasingly imposing ‘data localisation’ requirements, which will significantly impact the business models of tech companies that rely on the free flow of data across national borders. 

In summary, 2025 will be a pivotal year for the tech sector as it navigates the pressures of aligning with national security priorities, adapting to regulatory changes, facing competition from regional players, and addressing the financial inclusion needs of underserved populations in the Global South.


Geoemotions

In 2025, geoemotions will shape the acceptance and growth of AI. If societies fear AI, the use of the technology will be limited. A 2024 Ipsos study positions countries along two dimensions: nervousness and excitement about AI.

[Figure: 2024 Ipsos study plotting countries by nervousness and excitement about AI]

Generally speaking, societies from the Anglosphere, including the United States, are highly nervous and display low excitement about AI. This view is probably shaped by the strong media campaign about AI risks in 2023. On the opposite side are Asian societies with high excitement and low nervousness about AI. 

Similar YouGov research from September 2024 showed that the most positive attitudes towards AI are in the UAE (60%), while the least enthusiastic are in the USA (17%).

[Figure: September 2024 YouGov survey of attitudes towards AI across countries]

In 2025, these trends will likely change as Trump’s administration shifts focus from safety issues towards AI opportunities. This political shift is likely to influence media and academic coverage of AI.

How close or far are we from a significant fragmentation of the internet?

Monitoring update will be provided in February 2025.

What will be the consequences of the ongoing chip war between the world’s largest technological powers? 

Monitoring update will be provided in February 2025.

Beyond statements and principles, which concrete actions will help strengthen the resilience of submarine cables? Who could or should take such actions? 

Monitoring update will be provided in February 2025.

How much is too much regarding the power held by big tech companies? How to keep this power in check?

Monitoring update will be provided in February 2025.


Governance

After 2024 – a year of intense negotiations on AI and digital governance – 2025 will shift focus to implementation as the world works to put resolutions, agreements, and treaties into action. This should be a year of clarity and consolidation, with two key themes echoing across the AI and digital realms:

  • Avoiding governance duplication by synchronising the Global Digital Compact (GDC) and the World Summit on the Information Society (WSIS) framework, particularly in the context of the WSIS+20 review.
  • Deflating the ‘AI governance bubble’ that has ballooned in recent years.

Sync between GDC and WSIS

The World Summit on the Information Society (WSIS) framework, shaped between 2003 and 2005, has been tested and refined over two decades. The Global Digital Compact (GDC), introduced in 2024, represents a fresh, dynamic approach to digital governance. Metaphorically, WSIS is a marathon, while the GDC is a sprint.

The main challenge in 2025 will be to sync the experience and expertise of the WSIS framework with the GDC’s new energy. This alignment is crucial to avoid duplication and ensure that both frameworks complement each other. The WSIS+20 review in 2025 will provide the perfect context for this synchronisation, offering an opportunity to integrate lessons from the past with the urgency of the present.


Sorina Teleanu provides a detailed analysis of the Global Digital Compact, including:

  • main issues covered
  • negotiating history
  • link to other governance processes
  • actors
  • implementation and follow-up
  • AI governance initiatives

READ MORE

EVENTS

28th CSTD session | 7-11 April 2025, Geneva


IGF 2025 | 23 – 27 June 2025, Lillestrøm 

WSIS Forum (titled ‘WSIS+20 High-Level Event 2025’) | 7–11 June 2025, Geneva

Other digital negotiations and processes

Beyond these overarching themes, several specific initiatives will shape the year:

Global cybersecurity framework: In February and July, the Open-Ended Working Group on cybersecurity will finalise its proposal for a global future mechanism to continue cybersecurity negotiations beyond 2025. Key sticking points include creating dedicated thematic groups, modalities of stakeholder engagement in the future process, and whether a new binding treaty is necessary to regulate cyberspace.

UN convention against cybercrime: Following a formal signing ceremony hosted by Vietnam in 2025, the Hanoi Convention on Cybercrime will be opened for signature; it will enter into force 90 days after ratification by the 40th signatory. Once in force, the focus will be on translating agreed provisions into actionable measures to combat cybercrime while safeguarding human rights.

IBSA digital dynamics: South Africa’s G20 presidency in 2025 will build on the momentum of the IBSA (India-Brazil-South Africa) partnership. India and Brazil made significant strides in linking AI and digital growth to development goals. South Africa is expected to extend this focus to Africa, home to the world’s largest concentration of developing nations, ensuring that tech advancements benefit those who need them most.

Deflating the ‘AI governance bubble’

Since the launch of ChatGPT, AI governance has become a fashionable topic, attracting significant public interest and funding. Much of this attention was driven by the ‘extinction risk’ narrative that dominated 2023, painting AI as an existential threat to humanity. However, AI won’t destroy humanity on its own – unless humans misuse it to that end. The real risk lies not in the technology itself but in how we wield it.

This realisation calls for a return to basics, including ancient texts. For example, Hammurabi’s Code – one of humanity’s earliest legal texts – established that responsibility for societal impact lies with those who develop or benefit from tools and activities. We can address real-world risks in education, trade, media, and beyond by grounding AI governance in such timeless principles.

The challenge in 2025 will be to streamline the proliferation of AI governance initiatives – dozens of commissions, expert groups, and overlapping efforts – into a cohesive, actionable framework. The goal? To tackle concrete risks rather than chasing speculative doomsday scenarios.

What are the AI governance issues?

The search for clarity in AI governance includes identifying issues to be addressed in developing and using AI, as illustrated by the pyramid below.

[Figure: pyramid of AI governance issues, spanning the hardware, data, algorithmic, and uses levels]

Hardware level: While technical standards govern most hardware, the environmental impact of AI’s energy consumption will invite stricter regulation. Additionally, AI’s security implications have placed semiconductors and related infrastructure under export controls and sanctions in certain countries.

Data level: Existing frameworks for personal data protection and intellectual property are already in place, but legal battles highlight ongoing tensions. In the U.S., OpenAI and Microsoft face lawsuits from the New York Times, while entities like Universal Music Group pursue Anthropic. In the UK, Getty Images is suing Stability AI.

Meanwhile, Tennessee passed the ELVIS Act to protect performers’ likenesses, and California has enacted laws against political deepfakes.

Most nations seek a balance, allowing the use of protected data and knowledge while acknowledging ownership, often through opt-out clauses.

Algorithmic level: In 2025, the assumption that AI governance should focus mainly on quantitative metrics like parameters or FLOPs is being revisited. Platforms like DeepSeek demonstrate that high-quality data and innovative solutions can compensate for lower computational power. This shift calls for a deeper focus on using AI rather than just its technical capabilities.
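For readers unfamiliar with how such quantitative thresholds work in practice: a widely used rule of thumb estimates training compute as roughly 6 FLOPs per model parameter per training token. The sketch below applies it to two illustrative (hypothetical) model scales against the 10^25 FLOPs threshold that the EU AI Act uses to trigger ‘systemic risk’ obligations.

```python
def training_flops(params: float, tokens: float) -> float:
    # Rule of thumb: ~6 FLOPs per parameter per training token
    # (forward + backward pass). An approximation, not an exact count.
    return 6 * params * tokens

THRESHOLD = 1e25  # EU AI Act compute threshold for 'systemic risk' models

# Illustrative model scales (hypothetical, not vendor figures)
models = {
    "small (7B params, 2T tokens)": training_flops(7e9, 2e12),
    "frontier (400B params, 15T tokens)": training_flops(400e9, 15e12),
}
for name, flops in models.items():
    print(f"{name}: {flops:.1e} FLOPs, above threshold: {flops > THRESHOLD}")
```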

Uses level: The most critical area for governance is AI’s applications. AI’s societal, legal, and ethical consequences stem from its uses. A foundational legal principle holds that those who develop or benefit from a technology should bear responsibility for its risks. 

However, this principle was undermined by Section 230 of the U.S. Communications Decency Act, which grants tech platforms immunity for hosted content. Extending such immunity to AI could pose greater risks than uncontrolled algorithms.

Perhaps it’s time to return to the simple, enduring principle that individuals and entities are responsible for their actions; by doing so, we can better address AI’s governance and regulatory challenges.

How do we deal with AI risks?

In 2025, the AI risk landscape will continue shifting from long-term, existential concerns to more immediate and tangible risks. Existing risks related to jobs, education, and misinformation will dominate the discourse, while exclusion risks stemming from the monopolisation of AI technologies will gain more attention. This evolution is illustrated below.

[Figures: Venn diagrams from May 2023 and January 2024 showing the shifting balance of attention across three types of AI risk. The May 2023 diagram, a prediction of AI risk coverage in 2024, shows existing risks (AI’s impact on jobs, information, and education) as the largest circle, with extinction risks (AI destroying humanity) and exclusion risks (AI tech monopolising global knowledge) smaller and roughly equal in size.]

January 2025

Existing risks (short-term): These will dominate the AI risk discourse in 2025. Concerns about AI’s impact on jobs, education, misinformation, and cybersecurity will remain at the forefront. Governments and regulatory bodies will prioritise addressing these immediate threats, as they are more tangible and directly affect society.

  • Key concerns: Job displacement, data protection, intellectual property rights, misuse of AI in education, and the proliferation of deepfakes and misinformation.
  • Regulatory tools: Existing frameworks will be adapted and expanded to address these risks, focusing on enforcement and accountability.

Exclusion risks (medium-term): As AI knowledge and capabilities become increasingly centralised among a few major tech companies, the risk of exclusion will grow. This could lead to a scenario where access to AI-generated knowledge and benefits is limited to a select few, exacerbating global inequalities.

  • Key concerns: Monopolisation of AI technologies, limited access to AI benefits for smaller communities or countries, and the potential for dystopian outcomes where AI knowledge is controlled by a handful of entities.
  • Regulatory tools: Antitrust laws, data protection regulations, and intellectual property rights will be key tools in mitigating these risks. Governments may also push for more open AI platforms and equitable access to AI technologies.

Extinction risks (long-term): While existential risks (e.g., AI surpassing human control and threatening humanity’s survival) will continue to receive attention, they will likely take a back seat to more immediate concerns.

Extinction risks will remain part of the global AI governance debate but will be balanced with more immediate concerns. International bodies like the International Network of AI Safety Institutes will continue to address these risks in a more informed and measured way.

New risks: The AI industry will continue to experience significant market volatility, as seen in the case of Alphabet’s $80bn loss in market value. Overpriced AI companies with high capital expenditure but unclear revenue streams will pose a risk to investors and the broader economy. The sustainability of AI investments will be questioned, particularly if companies fail to demonstrate solid business models. This could lead to market corrections and increased scrutiny of AI companies’ financial health.


EVENTS

AI Action Summit | 10-11 February 2025, Paris

28th CSTD session | 7-11 April 2025, Geneva

IGF 2025 | 23 – 27 June 2025, Lillestrøm 

WSIS Forum (titled ‘WSIS+20 High-Level Event 2025’) | 7–11 June 2025, Geneva

What does building a meaningful synchronisation between WSIS and GDC implementation and follow-up processes take? What are the risks and challenges of having two parallel processes moving forward? 

Monitoring update will be provided in February 2025.

What will South Africa’s presidency of G20 mean for the African continent regarding digital governance and digital transformation?

Monitoring update will be provided in February 2025.

How can the proliferation of AI governance initiatives be streamlined into a cohesive, actionable framework? 

Monitoring update will be provided in February 2025.

What unintended consequences might arise from the rush to develop new regulations for AI, and how can we proactively address them?

Monitoring update will be provided in February 2025.

What are the implications of treating algorithms as ‘black boxes’ beyond human comprehension? How might this opacity erode public trust in AI?

Monitoring update will be provided in February 2025.

How do we reconcile the need for global AI governance with the vastly different cultural and ethical perspectives on AI across regions?

Monitoring update will be provided in February 2025.

Security

Hanoi cybercrime convention

Following Budapest, host of the 2001 Council of Europe Cybercrime Convention, Hanoi will emerge as the next toponym in the language of cybercrime. The Vietnamese capital will host the signing ceremony for the new UN Cybercrime Convention, which will remain open for signature until 31 December 2026. The convention will enter into force 90 days after the 40th ratification.

At the same time, the Ad Hoc Committee on Cybercrime decided that it would complete its work on the Convention by holding a session in Vienna, lasting up to five days, one year after the Convention’s adoption. Since the Convention was adopted on 24 December 2024, this follow-up may occur by the end of 2025. During this session, the Committee will draft the rules of procedure for the Conference of the States Parties and other rules outlined in Article 57 of the Convention. 

We expect the Convention to enter into force in 2025, with 40+ ratifications. Though this might be tricky, there is considerable diplomatic and political momentum, a pressing need to address growing cybercrime, and relatively general satisfaction with the truly global nature of this binding agreement, adopted by consensus in the UN (which might not be so common these days).

UN Cybersecurity partnership framework

2025 will be an important year for cybersecurity negotiations: the mandate of the UN Open-Ended Working Group (OEWG) on the security of and in the use of information and communications technologies ends in July 2025 with its eleventh session. What follows will be a new mechanism for dealing with cybersecurity under UN auspices.

Currently, states disagree on the scope of thematic groups in the future mechanism: while some countries insist on keeping traditional pillars of the OEWG agenda (threats, norms, international law, confidence-building measures (CBMs) and capacity building), others advocate for a more cross-cutting and policy-orientated nature of such groups. There is also uncertainty regarding the modalities of multistakeholder engagement in the future mechanism. Agreements on these issues are key if states want to hit the ground running and not get tangled in red tape at the beginning of the next mechanism. 

In the OEWG report, which should be adopted by July 2025, there are a few points where we can expect consensus. CBMs are gaining wider support, including establishing the Global Points of Contact (POC) Directory and capacity-building portal. We may also have some implementation checklists for the existing 11 cyber norms. Support for a voluntary fund for capacity building is not yet certain. The protection of critical infrastructure and the impact of AI are likely to feature highly in follow-up processes. 

The main controversies in the remaining OEWG negotiations will be around modalities for the future cybersecurity process at the UN (i.e. institutional dialogue): should a future cybersecurity architecture deal mainly with implementing existing norms or with negotiating new norms and legally binding instruments? Other open issues include decisions on the topics and number of thematic groups, and the inclusion of other actors through multistakeholder provisions (which will remain a central stumbling block). One thing appears clear: the process will be continued, and the next one will likely be a permanent mechanism rather than time-limited.

By July, the following outcomes are possible:

  • Consensus around a least-common-denominator future process, especially on implementing existing norms vs. negotiating new (and binding) ones, left open to interpretation by future negotiators.
  • In the absence of such consensus, two resolutions might be tabled, as happened a few years ago: one (sponsored by the US, the EU, and their partners) establishing a POA-like mechanism focused on norms implementation and capacity building, the other (sponsored by Russia and its partners) establishing a continuous OEWG focused on negotiating binding norms and agreements.

Although time is not on the side of negotiators, there will be quite a few activities in the next six months, giving states more opportunities for discussion. An informal town hall meeting to discuss the next mechanism will be held before the tenth substantive session scheduled for February. The OEWG’s schedule for the first quarter of 2025 includes the Global POC Directory simulation exercise, an example template for the Global POC Directory, and reports on the Global ICT Security Cooperation and Capacity-Building Portal and the Voluntary Fund. Further, the chair can schedule additional intersessionals if deemed necessary.

Encryption

Race between cryptography and quantum computing

In 2025, governments and companies will ramp up preparations for quantum computing, a technology poised to render current encryption obsolete. To address this, the US National Institute of Standards and Technology (NIST) introduced a post-quantum cryptography standard in 2024, featuring algorithms designed to withstand quantum attacks. This proactive policy approach demonstrates how society can tackle the uncertainties of emerging technologies like quantum computing.
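To illustrate what adopting the new standard involves, here is a minimal sketch of a post-quantum key exchange using ML-KEM (the algorithm NIST standardised in 2024 as FIPS 203, formerly Kyber), assuming the open-source liboqs-python bindings; the algorithm identifier and API should be checked against the library version in use.

```python
import oqs  # liboqs-python: open-source post-quantum crypto bindings

ALG = "ML-KEM-768"  # NIST-standardised key encapsulation mechanism (FIPS 203)

with oqs.KeyEncapsulation(ALG) as client, oqs.KeyEncapsulation(ALG) as server:
    # Client generates a keypair and shares the public key.
    public_key = client.generate_keypair()
    # Server encapsulates a fresh shared secret against that public key.
    ciphertext, server_secret = server.encap_secret(public_key)
    # Client decapsulates the ciphertext to recover the same secret.
    client_secret = client.decap_secret(ciphertext)
    assert client_secret == server_secret  # both sides now hold a shared key
```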

Meanwhile, the EU’s ‘Chat Control’ initiative dominates current encryption policy debates. It would mandate that tech platforms scan content for illegal activities, but it faces strong opposition over privacy and human rights concerns. In December 2024, 10 member states rejected the proposal. Efforts to revive it through ‘Chat Control 2.0’, which introduces ‘upload moderation’ requiring user consent for message scanning, are unlikely to succeed. Critics argue it fails to address the core problem of undermining encryption and creating security vulnerabilities. Major platforms like WhatsApp, Signal, Telegram, and Threema have threatened to exit the EU market if forced to weaken encryption protections.

Military uses of AI and LAWS

The implications of AI for international peace and security are typically tackled separately from the broader discussions on AI governance at the UN level. In December 2024, the UN General Assembly adopted the first-ever resolution on AI in the military domain, affirming that international law applies to AI-enabled military systems and encouraging member states to convene multilateral and multistakeholder exchanges on the responsible application of AI in the military domain.

In 2025, the UN Secretary-General will have to follow up on this resolution, as he was requested to ‘seek the views of member states and observer States on the opportunities and challenges posed to international peace and security by the application of AI in the military domain, with specific focus on areas other than lethal autonomous weapons systems (LAWS)’;  a substantive report summarising those views and cataloguing existing and emerging normative proposals will have to be submitted to the General Assembly at its eightieth session, starting in September 2025. 

The GGE on emerging technologies in the areas of LAWS (convened yearly as a group of the High Contracting Parties to the Convention on Certain Conventional Weapons since 2017) will continue its work in 2025, with two meetings planned for March and September. The group is tasked with ‘considering and formulating, by consensus, a set of elements of an instrument, without prejudging its nature, and other possible measures to address emerging technologies in the area of LAWS’.

In 2024, the GGE worked on a so-called ‘rolling text’, which outlines provisional rough consensus on several formulations on issues such as the characterisation of a LAWS; applicability of international humanitarian law; human control and judgement as essential with regard to the use and effects of LAWS; several prohibitions on the use of LAWS, including a prohibition of employing LAWS that operate without context-appropriate human control and judgement; obligations for states prior to potential employment and as applicable throughout the entire life cycle of LAWS; and obligations for states to ensure human responsibility and accountability. 

EVENTS

2025 GGE sessions | 3-7 March & 1-5 September 2025

International humanitarian law

With the increasing ‘digitalisation’ of ongoing conflicts, the applicability of international humanitarian law (IHL) in the cyber realm is set to become more prominent. A 2024 report by the International Committee of the Red Cross (ICRC) highlights two key challenges:

  1. Clarifying legal grey zones, such as those involving hybrid warfare and proxy warfare.
  2. Applying IHL principles to emerging technologies used in warfare (ICRC, 2024).

In 2025, IHL and cyber conflicts are likely to be central to several cases before the International Court of Justice (ICJ), including South Africa vs. Israel, Nicaragua vs. Germany, and Ukraine vs. Russia.

With the conclusion of OEWG in 2025, what will the new mechanism for cybersecurity at the UN level look like?

Monitoring update will be provided in February 2025.

How can we operationalise international norms on cybersecurity and critical infrastructure protection?

Monitoring update will be provided in February 2025.

What will implementing the UN Convention against cybercrime look like in practice? 

Monitoring update will be provided in February 2025.

Will the 2024 UNGA resolution on AI in the military domain translate into concrete actions towards a responsible application of AI in the military domain?

Monitoring update will be provided in February 2025.

How can international law obligations effectively translate into technical requirements for AI systems in military applications? And how can liability be determined when AI systems are involved in military actions that violate international law?

Monitoring update will be provided in February 2025.

As end-to-end encryption becomes more widespread, how can we balance the need for privacy and security with the challenges it poses for combating child exploitation online? Are current proposals for ‘client-side scanning’ a viable solution or a dangerous precedent?

Monitoring update will be provided in February 2025.

With the increasing complexity of supply chains in technology manufacturing, how can we effectively implement ‘security by design’ principles when multiple actors across various jurisdictions are involved in the production process?

Monitoring update will be provided in February 2025.

How to establish universal baseline or minimum cybersecurity requirements for critical infrastructure protection across jurisdictions?

Monitoring update will be provided in February 2025.

Human rights

In 2025, significant political, technological, and societal shifts will shape the landscape of human rights and digitalisation. 

All indications suggest that Trump’s presidency will deprioritise human rights compared to the Biden administration. As tech companies retreat from content moderation, they will likely downplay the impact of their platforms on human rights.

EU countries will likely increase their focus on human rights in the digital age, addressing issues such as AI ethics, surveillance, and the impact of technology on privacy and freedom of expression. The EU will aim to ensure the global relevance of its digital regulations: the GDPR, the AI Act, the DSA, etc.

AI will rise up the agenda of the UN Human Rights Council and other initiatives and organisations dealing with human rights. AI will bring new angles to reshaping ‘traditional’ human rights, such as freedom of expression and privacy protection. In addition, it will foster new types of human rights dilemmas. For instance, human identity will become highly relevant as AI is used to impersonate individuals by mimicking their looks and voices. Here, the dilemma is whether identity can be covered by privacy protection or will require a new set of legal and policy rules.

The rapid development of neurotechnologies, spurred by AI and biotech advancements, will bring neurorights to the forefront of human rights agendas. The 2024 report on neurotechnology and human rights of the UN Human Rights Council Advisory Committee, along with UNESCO’s work on the ethics of neurotechnology, will likely catalyse new international norms and regulations to protect cognitive liberty, mental privacy, and the integrity of the human mind.

While AI will bring risks across the spectrum of human rights, there are areas where it can help realise them. For example, AI can and should play a crucial role in increasing the well-being of people with disabilities. In the governance realm, the main focus should be on developing usability standards for people with disabilities.

The main regulatory development on technology and disabilities will be the entry into force, on 28 June 2025, of the European Accessibility Act, which will make businesses legally responsible for providing equal access to digital products and services.

EVENTS

58th session of the Human Rights Council | 24 February – 04 April 2025, Geneva

Are new policies and/or rules needed to address the impact of advanced technologies on issues like human identity, human agency, and the integrity of the human mind?

Monitoring update will be provided in February 2025.

How do we leverage technology to help realise human rights?

Monitoring update will be provided in February 2025.

How can we enhance data collection efforts to capture the diversity among persons with disabilities better, ensuring the development of more accurate and inclusive policies and interventions?

Monitoring update will be provided in February 2025.

Economy

In 2025, geopolitical tensions, technological developments, trade barriers, and industrial policies will affect the digital economy. The resilience of the digital economy will face three critical tests in 2025:

First, can data flow freely in an economically fractured world? So far, the internet and digital networks have resisted the significant fragmentation seen in the flows of capital, goods, and services. The outcome of this test will shape not only the future of the digital economy but, more importantly, the internet itself.

Second, will the AI bubble burst in 2025? This risk stems from massive investments in AI and its limited impact on businesses and productivity. While significant funding has fueled the development of AI models, driving the market capitalisation of companies like Nvidia to new heights, the real-world adoption of AI in business and productivity remains low. The risk of an “AI bubble burst” grows with the emergence of cost-effective models, such as DeepSeek, which are developed and deployed at a fraction of the cost compared to those by OpenAI, Anthropic, and other mainstream AI platforms.

Third, will the digital economy become securitised? Current geopolitical trends are increasingly integrating tech companies into nation-states’ security and military frameworks. The growing securitisation of the tech industry will likely trigger pushback worldwide, as the involvement of foreign tech companies in internal markets will no longer be evaluated solely on economic grounds.

Digital taxation 

After the OECD’s failed digital tax negotiations (Pillar One) in mid-2024, countries like Canada, India, France, and Germany will likely roll out digital services taxes (DSTs). This patchwork of regulations can spark tensions, especially with the US, as the Trump administration shields tech giants from foreign taxation. DSTs won’t stand alone; they’ll become bargaining chips in broader trade wars, entangled in Trump’s tariffs and restrictions. The digital economy, once a unifying force, risks becoming a battleground in a fragmented world.

Worth paying attention to in 2025 is an intergovernmental negotiating committee tasked with drafting a UN Framework Convention on International Tax Cooperation and two early protocols. The UN General Assembly decided to establish this committee in December 2024; the committee is to meet in 2025, 2026, and 2027, and will hold an organisational session in February 2025.

Its work will be guided by UNGA-approved terms of reference (ToR) for the UN Framework Convention on International Tax Cooperation, developed by a dedicated Ad Hoc Committee. According to the ToR, the convention is expected to tackle issues highly relevant to the digital economy, such as the fair allocation of taxing rights (including equitable taxation of multinational enterprises) and addressing tax avoidance and evasion.

Moreover, the two early protocols to be developed simultaneously with the convention deal specifically with digital issues: one should address the taxation of income derived from the provision of cross-border services in an increasingly digitalised and globalised economy, while the second will have taxation of the digitalised economy among its priority areas.

Digital trade

The Joint Statement Initiative (JSI) on e-commerce at the WTO hangs in the balance. After five years of talks, 82 JSI members expressed acquiescence to a stabilised ‘Agreement on Electronic Commerce’. Nevertheless, a final agreement remains elusive. Although some digital economy powerhouses, such as the EU and China, are on board, other countries, such as the United States, Brazil, and Indonesia, have not yet agreed.

Turkey has joined the group of countries that oppose JSIs as a negotiating instrument. In addition, even if Australia, Japan, and Singapore (the three facilitators of the JSI) broker a deal, it will be far from the ambitious vision set years ago, which included topics such as data flows and source code. In 2025, these topics will likely continue to be regulated outside the WTO through preferential trade agreements and Digital Economy Agreements (DEAs). 

A JSI agreement would still be valuable in fostering the harmonisation of global rules, especially in enabling and facilitating e-commerce. In the present fragmentation scenario, an agreement would be a victory for multilateralism amid global failures like the OECD tax collapse.

Anti-trust

As anti-trust processes take time, most of the trends from last year will continue in 2025.

In the United States, anti-trust pressure on tech companies will decrease, as the incoming Trump administration has already hinted. 

The main development will be a court ruling, expected by mid-2025, on the divestiture of Chrome and Android from Google. In Japan, Google also faces an anti-trust investigation centred mainly on the dominance of its Chrome browser and search engine; Canada’s Competition Bureau has initiated a similar case against Google. Google has announced further changes to its search results in Europe in response to complaints from smaller competitors.

Meta’s anti-trust lawyers will also have a busy year. The EU hit Meta with a fine of nearly EUR 800 million for anti-competitive practices tied to its Marketplace feature. India’s Competition Commission imposed a $25.4 million fine and restricted data-sharing between WhatsApp and other Meta-owned applications for five years. Apple also faces an anti-monopoly probe in India.

In the EU, tech companies face anti-trust challenges under the Digital Markets Act (DMA).

The EU is expanding anti-monopoly action into the AI realm by scrutinising two partnerships: Microsoft-OpenAI and Google-Samsung.

Anti-monopoly action is also a weapon in battles among tech companies themselves. Elon Musk has expanded his legal battle against OpenAI by adding Microsoft to his lawsuit, accusing both companies of engaging in illegal practices to monopolise the generative AI market.

Digital currencies

Overall, the digitalisation of currencies and finance will receive a new boost with a ‘crypto-friendly’ administration in the United States. Countries will continue introducing digital versions of their currencies.

Research by the Atlantic Council reveals that all G20 nations are now exploring central bank digital currencies (CBDCs), with 44 countries currently piloting them, up from 36 last year. Authorities are accelerating these efforts in response to decreasing cash usage and the potential threat from cryptocurrencies like Bitcoin and big tech companies.

Notable growth has been observed in the CBDCs of the Bahamas, Jamaica, and Nigeria, while China’s digital yuan (e-CNY) has seen its transaction value almost quadruple to 7 trillion yuan ($987 billion). The European Central Bank has also launched a multi-year digital euro pilot.

Future of jobs 

So far, AI has led to more jobs being created than displaced. According to the WEF’s Future of Jobs Report 2025, this trend will likely continue, with 170 million new roles set to be created and 92 million displaced, resulting in a net increase of 78 million jobs by 2030. 

The main challenge in AI transformation will be closing the skills gap as demand shifts towards AI, big data, and cybersecurity skills. According to the WEF, 59% of the workforce will require re-skilling and training by 2030, so investment in training and education will be high on the policy agenda of entities such as UNESCO. In addition, the International Labour Organization should prioritise creating standards and policies for AI and automation to ensure that AI technology complements rather than replaces human workers.

 Text, Logo

In 2025, technical standards will become more relevant as a ‘safety net’ for global networks during economic and political fragmentation. Standards ensure the interoperability of apps and services across political and economic divides. TCP/IP (Transmission Control Protocol/Internet Protocol) remains the glue that keeps the internet together despite political and economic fragmentation.
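As a reminder of how simple and durable that foundation is, the sketch below runs a complete TCP exchange over localhost. Any two endpoints that speak TCP/IP can do the same across vendors, operating systems, and borders (the port and payload here are arbitrary).

```python
import socket
import threading

# Server: bind to a free local port and echo back whatever arrives.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

def echo_once() -> None:
    conn, _ = srv.accept()
    with conn:
        conn.sendall(conn.recv(1024))

threading.Thread(target=echo_once, daemon=True).start()

# Client: connect, send bytes, read the echo. No coordination beyond TCP/IP.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect(("127.0.0.1", port))
    cli.sendall(b"hello from a 1970s-vintage protocol")
    print(cli.recv(1024).decode())
srv.close()
```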

AI standardisation will gain additional momentum in 2025, as the three main international standard development organisations – the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC), and the International Telecommunication Union (ITU) – have announced a set of initiatives, including an International AI Standards Summit and the creation of an AI standards database.

Other notable standardisation initiatives will focus on open-source AI, digital public infrastructure, mobile networks like 6G, and brain-computer interfaces. 

EVENTS

AI Standards Hub Global Summit | 17-18 March 2025, London

International AI Standards Summit | 2-3 December 2025, Seoul

Can technical standardisation processes be kept separate from geopolitical tensions?

Monitoring update will be provided in February 2025.

What does it take to ensure that technical standards for AI can serve as a meaningful governance tool?

Monitoring update will be provided in February 2025.

Content

In 2025, content governance will be altered by social media platforms’ shift from fact-checking to a community notes approach. The content policy landscape will move from its current heavy focus on content regulation to a more relaxed approach.

Tech companies have enthusiastically adopted the ‘hands off’ approach, as it reduces the cost of maintaining the complex system of policies, organisations, and networks formed over the last 10 years, involving, according to some estimates, close to 30,000 people who monitor content on social media platforms. In addition, it transfers moderation responsibilities to users.

The underlying question is whether this shift from fact-checking to a system of community notes will address the problem of quality and reliability of content on social media platforms. We will monitor developments in 2025 around the following inquiry questions. 

Are community notes enough to avoid misuse of social media platforms?

Community notes, such as X’s Community Notes (formerly Birdwatch), are designed to provide additional context or corrections to potentially misleading content. While they are a step forward in promoting transparency and user-driven moderation, many argue that they alone are not sufficient to prevent the misuse of social media platforms.
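For context on how such a system decides which notes to display: X’s open-sourced Community Notes scorer is built on matrix factorisation, modelling each rating as a global mean plus user and note intercepts plus a product of latent user and note factors. The factor term absorbs partisan alignment, so only notes with a high intercept (endorsed across viewpoints) are shown. The sketch below is a simplified illustration of that idea on synthetic data, not the production algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_notes = 40, 10

# Synthetic ratings: the first 5 notes are genuinely helpful (endorsed across
# leanings); the last 5 are partisan, endorsed only by the 70% majority camp.
user_lean = rng.choice([-1.0, 1.0], n_users, p=[0.3, 0.7])
note_quality = np.array([0.9] * 5 + [0.0] * 5)
note_lean = np.array([0.0] * 5 + [1.0] * 5)
logits = note_quality + np.outer(user_lean, note_lean)
ratings = (logits + rng.normal(0, 0.3, (n_users, n_notes)) > 0.5).astype(float)

# Fit: rating ≈ mu + b_u + b_n + f_u * g_n, by simple gradient descent.
mu, b_u, b_n = 0.0, np.zeros(n_users), np.zeros(n_notes)
f_u, g_n = rng.normal(0, 0.1, n_users), rng.normal(0, 0.1, n_notes)
lr, reg = 0.1, 0.02
for _ in range(3000):
    err = mu + b_u[:, None] + b_n[None, :] + np.outer(f_u, g_n) - ratings
    mu -= lr * err.mean()
    b_u -= lr * (err.mean(axis=1) + reg * b_u)
    b_n -= lr * (err.mean(axis=0) + reg * b_n)
    f_u -= lr * (err @ g_n / n_notes + reg * f_u)
    g_n -= lr * (err.T @ f_u / n_users + reg * g_n)

# Partisan popularity is soaked up by the factor term, so the note intercepts
# (cross-viewpoint helpfulness) stay higher for the first five notes.
print("note intercepts:", np.round(b_n, 2))
```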

What is the reaction of the EU and other countries to the change of content moderation by social media platforms? 

The EU has a regulatory tool, the Digital Services Act (DSA), which can be applied to the main social media platforms, with potential fines of up to 6% of worldwide turnover for breaches of its provisions.

The question of content moderation goes beyond immediate policy issues to deeper cultural roots and historical context. European societies, for example, have a lower threshold of tolerance for hate speech and disinformation, and European courts addressed some of the first cases related to online content, including the CompuServe case in Germany and the Yahoo! case in France.

Currently, one of the main focuses is the impact of TikTok on the results of the Romanian elections. The European Commission has also initiated formal proceedings on whether the Community Notes system provides effective content moderation under the DSA’s provisions on the mitigation of systemic risks.

What are age verification regulations and policies for access to social media platforms?

1. United Kingdom

  • Current Policy: The UK’s Online Safety Act 2023 mandates “highly effective age assurance” to prevent children from accessing harmful content such as pornography, self-harm, and suicide-related material. Ofcom has issued guidance and statutory codes, with the main implementation date (“AV-Day”) set for July 2025.

2. United States

  • Current Policy: Several states, including Florida and South Carolina, have enacted age verification laws for pornographic websites. The Supreme Court is reviewing the constitutionality of Texas Law HB 1181, with a decision expected by July 2025.

3. European Union

  • Current Policy: The Digital Services Act (DSA) regulates Very Large Online Platforms (VLOPs) like Pornhub and xVideos, requiring robust age assurance measures. The EU is also piloting the EUDI Wallet for age verification.

4. Canada

  • Current Policy: Canada is reviewing Bill S-210, which mandates age verification for accessing adult content. The Office of the Privacy Commissioner (OPC) supports privacy-preserving age assurance methods.

5. Australia

  • Current Policy: Australia is conducting a landmark trial of age assurance technologies, with results expected by June 2025. The trial evaluates methods like biometrics, parental consent, and app store age checks.

6. International Initiatives

  • Global Standards: The ISO/IEC 27566-1 framework for age assurance is nearing finalisation, with parts 2 and 3 expected in 2025. This standard will provide a unified approach to age verification, estimation, and inference.
  • Collaborative Efforts: The Global Age Assurance Standards Summit and the International Age Assurance Working Group are promoting interoperability, privacy, and regulatory consistency across jurisdictions.

Key Themes for 2025

  1. Legislation Implementation: Jurisdictions like the UK, US, and EU will see new age assurance laws come into force, requiring platforms to adopt robust verification methods.
  2. Interoperability: Initiatives like AgeAware® will enable seamless, privacy-preserving age verification across platforms and borders.
  3. Global Standards: The adoption of ISO/IEC 27566-1 and IEEE 2089.1 will provide a unified framework for age assurance, ensuring consistency and reliability.
  4. Complementary Measures: Digital literacy, parental controls, and app store age checks will play a crucial role in protecting children online.

  5. Innovation and Advocacy: Continued innovation in age estimation and verification technologies, coupled with advocacy for child safety, will drive progress in 2025; a minimal sketch of the privacy-preserving pattern follows below.
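As a rough illustration of the privacy-preserving pattern these initiatives aim for, the sketch below shows a hypothetical verifier that issues only a signed over-18 attestation, so the platform never sees a birthdate. All names (attest_over_18, platform_accepts) are illustrative, not a real API, and a production scheme would use asymmetric signatures rather than a shared key.

```python
# Hypothetical sketch of privacy-preserving age assurance: the verifier
# learns a user's date of birth, but the platform receives only a signed
# yes/no claim ("over 18"), never the birthdate itself.
import hashlib
import hmac
from datetime import date

# NOTE: a shared-key HMAC keeps the sketch short; a real scheme would use
# asymmetric signatures so platforms cannot forge attestations.
SECRET_KEY = b"verifier-signing-key"

def attest_over_18(birthdate: date, today: date) -> dict:
    """Issued by the trusted verifier; contains no personal data."""
    age = today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day))
    claim = b"over_18=true" if age >= 18 else b"over_18=false"
    sig = hmac.new(SECRET_KEY, claim, hashlib.sha256).hexdigest()
    return {"claim": claim.decode(), "signature": sig}

def platform_accepts(attestation: dict) -> bool:
    """The platform checks the signature, not the user's identity or DOB."""
    expected = hmac.new(SECRET_KEY, attestation["claim"].encode(),
                        hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, attestation["signature"])
            and attestation["claim"] == "over_18=true")

token = attest_over_18(date(2000, 5, 1), date(2025, 1, 5))
print(platform_accepts(token))  # True: access granted, no DOB disclosed
```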

Who are the national authorities in charge of supervising social media platforms?

National authorities responsible for the supervision of social media platforms vary by country and region. Below is an overview of key authorities:

1. China

  • Cyberspace Administration of China (CAC): The CAC is China’s top internet watchdog, responsible for regulating online content, including social media platforms. It issues guidelines and enforces rules to ensure compliance with national laws, such as cracking down on misinformation, fake accounts, and illegal content.

2. European Union (EU)

  • European Data Protection Board (EDPB): The EDPB provides guidelines and oversees the implementation of data protection laws, including those affecting social media platforms, under the General Data Protection Regulation (GDPR).
  • National Regulatory Authorities: Each EU member state has its own regulatory body. For example, Germany’s Federal Network Agency (BNetzA) enforces the Network Enforcement Act (NetzDG), which requires social media platforms to remove illegal content promptly.

3. United States

  • Federal Communications Commission (FCC): While the FCC primarily regulates telecommunications, it also plays a role in overseeing aspects of online communication, including social media platforms, particularly in areas like net neutrality and broadband access.
  • Federal Trade Commission (FTC): The FTC enforces consumer protection laws and addresses issues like privacy violations and deceptive practices on social media platforms.

4. United Kingdom

  • Office of Communications (Ofcom): Ofcom is the UK’s communications regulator, which has been granted expanded powers to regulate online harms under the Online Safety Act 2023. It oversees social media platforms to ensure they remove illegal and harmful content.

5. India

  • Ministry of Electronics and Information Technology (MeitY): MeitY oversees the implementation of the Information Technology Act and related rules, including intermediary guidelines for social media platforms. It works with platforms to ensure compliance with content removal and data localization requirements.

6. Germany

  • Federal Network Agency (BNetzA): BNetzA enforces the Network Enforcement Act (NetzDG), which mandates that social media platforms remove illegal content within strict timeframes. Non-compliance can result in significant fines.

7. Australia

  • Australian Communications and Media Authority (ACMA): ACMA regulates online content, including social media platforms, under the Broadcasting Services Act. It enforces rules related to harmful content and misinformation.

8. Brazil

  • National Telecommunications Agency (Anatel): Anatel oversees telecommunications and internet services, including social media platforms, ensuring compliance with national regulations.

9. Singapore

  • Infocomm Media Development Authority (IMDA): IMDA regulates online content and enforces the Protection from Online Falsehoods and Manipulation Act (POFMA), which targets misinformation on social media platforms.

10. South Africa

  • Independent Communications Authority of South Africa (ICASA): ICASA regulates electronic communications, including social media platforms, to ensure compliance with national laws.

What national policies exist for licensing and legal incorporation of social media platforms?

Several jurisdictions around the world require social media platforms to obtain licenses or establish local legal entities to operate within their borders. These requirements are often tied to national security, content moderation, and data localization concerns. Below is a summary of key jurisdictions and their specific licensing or local entity requirements:


1. Malaysia

  • Licensing Requirement: Malaysia mandates that social media platforms and messaging services with over 8 million active users obtain an Application Service Provider Class (ASP(C)) license from the Malaysian Communications and Multimedia Commission (MCMC). This regulation took effect on January 1, 2025, and aims to combat cybercrime and harmful content. Non-compliance can result in fines of up to RM500,000 and/or imprisonment for up to 5 years.

2. India

  • Local Entity Requirement: India’s Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 require social media platforms with significant user bases to appoint local compliance officers, establish a physical presence in India, and comply with content removal requests within 36 hours. Platforms must also publish compliance reports every six months.

3. Turkey

  • Local Representation: Turkey requires social media platforms with more than 1 million daily users to appoint a local representative and store user data within the country. Failure to comply can result in fines, bandwidth throttling, or outright bans.

4. Russia

  • Data Localisation and Licensing: Russia mandates that social media platforms store user data locally and comply with content removal requests. Platforms must also register with Roskomnadzor, the federal communications regulator, and face fines or restrictions for non-compliance.

5. China

  • Strict Licensing and Localisation: China requires foreign social media platforms to obtain licenses and establish local entities to operate. However, most Western platforms like Facebook and Twitter are blocked, while domestic platforms like WeChat and Weibo are heavily regulated under China’s strict content moderation laws.

6. European Union (EU)

  • Conditional Immunity: While the EU does not mandate licensing, its Digital Services Act (DSA) requires platforms to establish local points of contact and comply with strict content moderation and transparency rules. Platforms must also adhere to the General Data Protection Regulation (GDPR) for data handling.

7. Brazil

  • Local Legal Representation: Brazil requires social media platforms to appoint local legal representatives and comply with content removal requests, especially during elections. Failure to comply can result in fines or temporary bans.

8. South Korea

  • Licensing and Compliance: South Korea requires platforms to comply with its Information and Communications Network Act, which includes content moderation and data protection requirements. Platforms must also register with the Korea Communications Commission (KCC).

9. Singapore

  • Licensing for News Platforms: Singapore’s Protection from Online Falsehoods and Manipulation Act (POFMA) empowers authorities to issue content correction orders, and online news platforms are subject to separate licensing requirements. While not specific to social media, these rules affect platforms like Facebook and Twitter.

10. Australia

  • Age Verification and Local Compliance: Australia’s Online Safety Act requires platforms to comply with content removal requests and implement age verification systems. Platforms must also appoint local representatives to handle regulatory matters.

What are the liabilities of social media companies for the following types of content: disinformation, incitement to violence, hate speech, pornography, copyright infringement, scams, blasphemy, and impersonation?

More monitoring information will be provided.

What national policies are in place concerning companies paying news platforms for content made available on social media platforms?

More monitoring information will be provided.

What regulations are in place for AI-generated content?

EU: The AI Act requires labelling of AI-generated content.

USA: The Federal Trade Commission (FTC) has issued guidelines urging companies to disclose when content is AI-generated, particularly in advertising and marketing.

China: China has implemented strict regulations requiring platforms to label AI-generated content, especially deepfakes, and to obtain consent from individuals before using their likenesses.
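To illustrate what such labelling obligations could look like in practice, here is a minimal sketch that attaches a machine-readable ‘AI-generated’ manifest to a piece of content, loosely inspired by content-credential schemes; the field names are hypothetical and not drawn from any of the regulations above.

```python
# Illustrative only: a machine-readable "AI-generated" label for content.
# Field names are hypothetical, not taken from the AI Act, FTC guidance,
# or Chinese labelling rules.
import hashlib
import json
from datetime import datetime, timezone

def label_ai_content(content: bytes, model_name: str) -> dict:
    """Build a provenance manifest that travels with the content."""
    return {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "ai_generated": True,
        "generator": model_name,
        "labelled_at": datetime.now(timezone.utc).isoformat(),
    }

manifest = label_ai_content(b"A synthetic news photo...", "example-model-v1")
print(json.dumps(manifest, indent=2))
```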

What are the practices of social media companies of using technical solutions such as geolocation for filtering content according to jurisdiction?

Social media companies employ various technical solutions, including geolocation, to filter content according to jurisdictional regulations. These practices are essential for complying with local laws, addressing cultural sensitivities, and managing licensing agreements.

For example, social media companies use geolocation to identify users from Germany when they have to remove hate speech or illegal content within 24 hours, as required under the NetzDG law.
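A minimal sketch of how such jurisdiction-aware filtering might be wired up: a geolocated country code selects a rule, and the rule decides between a statutory removal window and a geo-block. Only the German 24-hour deadline comes from the text above; the rest of the rule table is invented for illustration.

```python
# Sketch of jurisdiction-aware content handling via geolocation.
# In practice the country code would come from IP geolocation; the rule
# table here is illustrative, except Germany's 24h NetzDG deadline.
from dataclasses import dataclass
from typing import Optional

@dataclass
class JurisdictionRule:
    removal_deadline_hours: Optional[int]  # None = no statutory deadline
    geo_block: bool                        # hide content for this country only

RULES = {
    "DE": JurisdictionRule(removal_deadline_hours=24, geo_block=True),
    "FR": JurisdictionRule(removal_deadline_hours=None, geo_block=True),
}
DEFAULT = JurisdictionRule(removal_deadline_hours=None, geo_block=False)

def handle_flagged_content(user_country: str, content_id: str) -> str:
    rule = RULES.get(user_country, DEFAULT)
    if rule.removal_deadline_hours is not None:
        return (f"queue {content_id} for review; statutory removal window: "
                f"{rule.removal_deadline_hours}h")
    if rule.geo_block:
        return f"geo-block {content_id} in {user_country}"
    return f"apply global policy to {content_id}"

print(handle_flagged_content("DE", "post-123"))
# -> queue post-123 for review; statutory removal window: 24h
```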


In 2025, digital development will remain a central theme in international cooperation, particularly through the WSIS+20 process. The WSIS+20 High-Level Event, scheduled for July 2025 in Geneva, will discuss issues such as bridging the digital divide and advancing the use of digital technologies for development.

The formal WSIS+20 review meeting at the UNGA level (likely in December 2025) will not only assess 20 years of implementation of WSIS action lines in support of an inclusive information society but will also outline future priorities.

Additionally, the Hamburg Declaration on Responsible AI for the SDGs will introduce new frameworks for ethical AI development, emphasising inclusivity and sustainability.

Inclusion is a cross-cutting development issue and a cornerstone of the 2030 Agenda for Sustainable Development. Digital inclusion has the following main aspects:

Connectivity inclusion

In 2025, efforts to ensure equal access to the internet and digital technologies will intensify, particularly in rural and underserved areas. The WSIS+20 process will play a pivotal role in bridging the digital divide by outlining development priorities and concrete actions.

For example, governments and private sector players are expected to invest in affordable connectivity solutions, such as low-cost satellite internet and community Wi-Fi networks, to ensure that marginalised communities can participate in the digital economy.

Financial inclusion

The financial inclusion sector is transforming in 2025, moving beyond mere access to financial services to focus on financial health and well-being. Initiatives like CGAP’s Financial Inclusion 2.0 emphasise integrating resilience, equity, and broader development goals, such as climate change mitigation and gender inclusion.

For example, sustainable finance models, such as green bonds and ESG-linked financial products, are expected to grow significantly, with the green bond market projected to reach $2 trillion by 2025. However, targeted policies will be crucial in less developed financial systems to prevent digital financial inclusion from exacerbating gender disparities.

Economic inclusion

Economic inclusion in 2025 will focus on enabling full participation in the labour market and entrepreneurship opportunities. Digital platforms will be key in supporting small and medium-sized enterprises (SMEs), particularly in developing regions.

For example, open finance ecosystems will provide SMEs with access to credit, savings, and insurance services, fostering inclusive digital economies. Additionally, competency-based education models will align workforce skills with market demands, ensuring that individuals from diverse backgrounds can thrive in the digital economy.

Work inclusion

Work inclusion efforts in 2025 will prioritise equal access to careers in the tech industry and beyond. According to the WEF Future of Jobs Report 2025, 19% of companies are planning to shift from credential-based to skills-based hiring, a tendency particularly noticeable in the AI and tech sectors. Reskilling and upskilling will gain momentum, as securing jobs amid the major shift generated by AI and digital advancements will be (or should be) a priority.

Gender inclusion

Educating and empowering women and girls in the digital and tech realms. This includes initiatives to increase female participation in STEM fields, provide digital skills training, and ensure that women have equal access to digital tools and resources.

Policy inclusion

Encouraging the participation of stakeholders in digital policy processes at the local, national, regional, and international levels. This includes fostering multistakeholder collaboration to ensure that digital policies reflect the needs and perspectives of diverse communities. The WSIS+20 process, for example, involves consultations with governments, private sector entities, civil society, and international organisations to shape inclusive digital governance frameworks.

Knowledge inclusion

Contributing to knowledge diversity, innovation, and learning on the internet. The rise of AI brings new relevance to knowledge diversity, as current AI models are often based on limited datasets, primarily from Western sources. In the coming years, communities will aim to develop bottom-up AI solutions that reflect their cultural and knowledge heritage. This includes initiatives to create diverse datasets, promote local AI innovation, and ensure that AI technologies are inclusive and representative of global perspectives.

Commons

AI and digital commons will feature prominently in 2025, starting with the AI Action Summit in Paris in February. Commons could be realised through specific initiatives such as a potential global data governance framework; open data initiatives in climate, health, and education; knowledge inclusion initiatives; and open-source AI platforms.

EVENTS

World Economic Forum 2025 | 20-24 January 2025, Davos

AI Action Summit | 10-11 February 2025, Paris

Generative AI Summit | 31 March – 2 April 2025, London

WSIS Forum (titled ‘WSIS+20 High-Level Event 2025’) | 7–11 July 2025, Geneva

AI for Good Global Summit 2025 | 8-11 July 2025, Geneva

2025 High-Level Political Forum on Sustainable Development (HLPF) | 14-23 July 2025, New York

World Summit AI | 8-9 October 2025, Amsterdam

What are the implications of over-emphasising the role of technology in achieving sustainable development goals? How to ensure that the broader systemic challenges (social and cultural) are not neglected in pursuing technological advancements?

Monitoring update will be provided in February 2025.

What is missing in our current approaches to addressing digital divides, and why are we not there yet?

Monitoring update will be provided in February 2025.

Given the slow progress in addressing digital divides despite years of effort, what fundamental assumptions about digital inclusion might we need to challenge or rethink to make meaningful progress in the coming decade?

Monitoring update will be provided in February 2025.

How do we balance the growing emphasis on AI divides and governance with the need to address broader issues of digital inequality and infrastructure gaps, ensuring that the focus on AI does not overshadow other critical areas of digital policy that require attention?

Monitoring update will be provided in February 2025.


AI, digitalisation, and energy

The year 2025 will see digitalisation and environmental sustainability increasingly intertwined, with AI and digital technologies driving innovation while posing new challenges.

Key focus areas will include energy efficiency, circular economy practices, water security, and enhanced sustainability reporting.

Collaboration across sectors, robust governance, and strategic investments will be critical to achieving a sustainable and resilient future.

AI has significantly increased energy consumption, with data centres now consuming approximately 2% of global electricity, a figure comparable to the airline industry. By 2025, the energy demand from data centres is expected to double, reaching 1,000 terawatt-hours (TWh) annually—equivalent to the electricity consumption of Japan. This surge is driven by the exponential growth of AI workloads, particularly generative AI, which requires vast computational resources and energy-intensive cooling systems.

To address this, companies are exploring innovative solutions such as power capping (limiting processor power to 60-80% of capacity) and carbon-aware computing, which shifts workloads to times or locations with lower carbon intensity. Additionally, there is a growing emphasis on renewable energy sources. For instance, Microsoft plans to restart a nuclear power station at Three Mile Island to power its data centres, while Google has ordered advanced nuclear reactors from Kairos Power. These efforts aim to balance AI’s energy demands with sustainability goals.
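The two mitigations named above are simple to sketch. Assuming made-up carbon-intensity figures (a real scheduler would pull live grid data), the snippet below picks the cleanest region for a batch job and caps a processor at a fraction of its rated power.

```python
# Sketch of power capping and carbon-aware workload placement.
# Carbon-intensity numbers and the 700 W rated GPU are placeholders.
POWER_CAP_FRACTION = 0.7   # cap processors at 70% of rated power (60-80% range)

grid_carbon_gco2_per_kwh = {   # hypothetical snapshot, gCO2 per kWh
    "eu-north": 45,
    "us-east": 390,
    "asia-se": 520,
}

def pick_region(regions: dict) -> str:
    """Carbon-aware placement: run the batch job where the grid is cleanest."""
    return min(regions, key=regions.get)

def capped_power(rated_watts: float) -> float:
    """Power capping: limit a processor to a fraction of its rated draw."""
    return rated_watts * POWER_CAP_FRACTION

region = pick_region(grid_carbon_gco2_per_kwh)
print(f"schedule in {region}, GPU capped at {capped_power(700):.0f} W")
# -> schedule in eu-north, GPU capped at 490 W
```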

Circular economy and e-waste management

The adoption of circular economy principles will accelerate in 2025, focusing on product longevity, repairability, and recycling. This shift is critical, as global e-waste is projected to reach 82 million tonnes by 2030. Companies like Cisco are leading the way by reusing and recycling nearly 100% of returned products, setting a benchmark for sustainable practices in the tech industry.

Moreover, AI-driven e-waste management strategies are emerging. For example, AI can optimise recycling by identifying reusable components and reducing waste generation. These innovations are expected to reduce e-waste by 16-86% through proactive management and circular economy practices.

However, in 2025, e-waste management remains a major challenge despite updated regulations. New Basel Convention amendments, effective 1 January 2025, require prior consent from importing and transit countries to prevent illegal dumping. OECD countries have also updated their e-waste guidelines to align with circular economy goals, offering a framework for trade with non-Basel parties like the US. Yet disagreements over adopting the Basel amendments have left OECD countries to choose between existing OECD rules or stricter Basel controls, creating inconsistencies and complicating enforcement. These gaps risk enabling illegal dumping, especially in regions with weaker oversight.

Alarmingly, only about a quarter of e-waste is recycled properly, with most ending up in informal sectors or landfills. There is still a significant amount of work to be done before the above regulations become a reality.

Data centres, AI, and water consumption

AI facilities are particularly water-intensive due to the high heat generated by GPUs and other hardware. A small 1-megawatt data centre can consume up to 6.6 million gallons of water annually, primarily for cooling purposes. In 2025, global AI demand could account for the withdrawal of 4.2–6.6 billion cubic metres of water annually, roughly half of the UK’s yearly water usage.
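A quick back-of-envelope conversion of these figures (unit conversion only, no external data):

```python
# Sanity-check the water figures above via unit conversion.
GALLON_M3 = 0.003785          # 1 US gallon in cubic metres

per_mw_gallons = 6.6e6        # small 1 MW data centre, gallons per year
per_mw_m3 = per_mw_gallons * GALLON_M3
per_day_litres = per_mw_m3 * 1000 / 365

print(f"1 MW facility: ~{per_mw_m3:,.0f} m3 of water per year")
print(f"             = ~{per_day_litres:,.0f} litres per day")
# -> ~24,981 m3 per year, i.e. roughly 68,000 litres per day

global_low, global_high = 4.2e9, 6.6e9   # projected AI withdrawal, m3/year
print(f"global AI demand: {global_low:.1e}-{global_high:.1e} m3/year")
```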

To mitigate this, innovative cooling technologies such as immersion cooling and liquid-to-liquid heat exchangers are gaining traction. These methods can reduce water consumption by up to 55% and improve energy efficiency by 10-20%. Regulatory pressure is also mounting, with the EU’s Energy Efficiency Directive and California’s Climate Disclosure Laws pushing for greater transparency and stricter water and energy use regulations in data centres.

Collaboration and governance

Achieving a sustainable and resilient future in 2025 will require collaboration across sectors, robust governance, and strategic investments. Governments and industry leaders increasingly recognise the need for binding renewable energy and efficiency targets for data centres. For example, Germany’s Energy Efficiency Act mandates that data centres achieve 100% renewable energy reliance by 2027.

Additionally, public-private partnerships are essential for scaling sustainability initiatives. Companies are investing in on-site renewable energy generation and advanced energy storage solutions to ensure a stable power supply. Hyperscalers such as Google and Microsoft are spearheading this effort by entering into long-term power purchase agreements (PPAs) with renewable energy providers.

In 2025, the International Court of Justice (ICJ) is expected to issue an advisory opinion on the obligations of states concerning climate change, as requested in a 2023 UNGA resolution. While non-binding, the advisory opinion is expected to address the human right to a clean, healthy, and sustainable environment, along with other human rights and related obligations of states, and it could influence intergovernmental processes dealing with climate change.

What is missing in our approaches to addressing the environmental impact of digital technologies?

Monitoring update will be provided in February 2025.

How do water governance initiatives address water consumption of data centres?

Monitoring update will be provided in February 2025.

