
Governments vs ChatGPT: Regulation around the world

Published on 20 April 2023
Updated on 16 June 2023

ChatGPT, the AI-powered tool that has taken the world by storm, has now caught governments’ eyes. In reaction to user complaints and worrying reports (such as those by Europol and the OECD) over data privacy, security, transparency, and other risks, several countries have initiated investigations into the practices of OpenAI, the company behind ChatGPT. 

Governments are also ramping up efforts to introduce regulation that can tackle AI tools such as ChatGPT, collectively known as general-purpose AI. These tools might not be designed with high-risk uses in mind, but could nonetheless be deployed in settings that lead to unintended consequences.

The three main hotspots for new rules are the EU, China, and the USA. Here are the latest developments:

CHINA

Right now, China is at the front of the regulation race. The Cyberspace Administration of China (CAC) wasted no time in proposing new measures for regulating generative AI services on 11 April, which are open for public comments until 10 May. 

The rules. Providers must ensure that content reflects the country’s core values and does not include anything that might disrupt the economic and social order. Discrimination, false information, and intellectual property infringements are not allowed. Tools must undergo a security assessment before being launched.

Who they apply to. The onus of responsibility falls on organisations and individuals that use these tools to generate text, images, and sounds for public consumption. They are also responsible for making sure that pre-training data is correctly cited and lawfully sourced.

China’s no-nonsense approach to regulating tech companies, most evident in recent years, is a good indication that its rules will be the first to be implemented.

The industry is also calling for prudence. For example, the Payment & Clearing Association of China has advised its industry members to avoid uploading confidential information to ChatGPT and similar AI tools, over risks of cross-border data leaks.

EUROPE

Well-known for its tough rules on data protection and digital services/markets, the EU is inching closer to seeing its AI Act – proposed by the European Commission two years ago – materialise. While the European Council has adopted its negotiating position, the draft is still being debated by the European Parliament. The act will then need to be negotiated between the three EU institutions (Parliament, Council, and Commission) in the so-called trilogues. Progress is slow, but sure.

As policymakers debate the text, a group of international AI experts argues that general-purpose AI systems carry serious risks and must not be exempted from the new EU legislation. Under the proposed rules, certain accountability requirements apply only to high-risk systems. The experts argue that software such as ChatGPT needs to be assessed for its potential to cause harm and must have commensurate safety measures in place. The rules must also cover the entire life cycle of a product.

What does this mean? If the rules are updated to cover, for instance, the development phase of a product, regulators won’t just check after the fact whether an AI model was trained on copyrighted material or private data. Rather, a product will be audited before its launch. This is quite similar to what China is proposing and what the USA will soon consider.

The draft rules on general-purpose AI are still up for debate at the European Parliament, so things might still change. 

USA

Well-known for its laissez-faire approach to regulating technological innovation, the USA is taking (baby) steps towards new AI rules.

The Biden administration is studying potential accountability measures for AI systems, such as ChatGPT. In its request for public feedback (which runs until 10 June), the National Telecommunications and Information Administration (NTIA) of the Department of Commerce is looking into new policies for AI audits and assessments that tackle bias, discrimination, data protection, privacy, and transparency. 

What this exercise covers. Everything and anything that falls under the definition of AI systems and automated systems, including technology that can ‘generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments’ (definition by the US National Institute of Standards and Technology). 

What’s next? There’s already a growing interest in AI governance tools, the NTIA writes, so this exercise will help it advise the White House on how to develop an ecosystem of accountability rules. 

Separately, US Senate Democrats are working on new legislation spearheaded by Majority Leader Chuck Schumer. A draft is circulating, but don’t expect anything tangible anytime soon unless the initiative secures bipartisan support.

In a bid to protect against intellectual property infringements, music company Universal Music Group has ordered streaming platforms, including Spotify and Apple, to block AI services from scraping melodies and lyrics from copyrighted songs, according to the Financial Times. The company fears AI systems are being trained on artists’ intellectual property. IPR lawsuits are looming.

And if you’re a music fan, here’s Heart on My Sleeve, an AI-generated song created by someone known only as ‘ghostwriter’ on YouTube, which simulated the voices of Canadian singers Drake and The Weeknd. You can guess what Universal’s reaction was.


Have you read this? Governments vs ChatGPT: Investigations around the world

