Generative AI models – a fun game that can easily get out of hand?
Updated on 04 April 2024
Have all your friends replaced their social media profile pictures with AI-generated avatars, and you don't know where they got them? Or are you wondering what exactly people are talking about on Twitter when they post a screenshot of a conversation with a chatbot and use the hashtag #chatgpt?
You might have also seen this incredible statistic on LinkedIn – the time it took apps to reach 1 million users:
- Netflix – 3.5 years
- Airbnb – 2.5 years
- Facebook – 10 months
- Spotify – 5 months
- Instagram – 2.5 months
- iPhone – 74 days
- ChatGPT – 5 days
If you use Instagram or Twitter, the chances that you have not yet come across a dialogue with ChatGPT or an avatar generated by Lensa are slim. In this text, we will try to demystify both generative models and look at them from the technical and legal sides.
Let’s go!
First of all, what do they have in common? In the simplest form, both are based on generative artificial intelligence algorithms – a type of AI technology that is focused on generating new content rather than recognising or classifying existing content. This can include generating text, images, audio, or other forms of media. Generative AI uses advanced machine learning algorithms to learn the patterns and structures of a given type of content and then uses this knowledge to generate new, original content that is similar to the training data. This can be used for a variety of purposes, such as creating new artwork, generating realistic speech, or even generating entire news articles.
Example of a generative text-to-image algorithm: images generated by DALL-E 2 for the prompt 'Superman eating a lemon at a rock concert, realistic painting'
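To make this concrete, here is a minimal sketch of text generation with a publicly available model, using the Hugging Face transformers library and the small GPT-2 model. The model choice, prompt, and parameters are illustrative assumptions, not anything specific to the products discussed below.

```python
# A minimal sketch of generative text AI, using the Hugging Face
# `transformers` library and the publicly available GPT-2 model.
# (Model choice, prompt, and parameters are illustrative assumptions.)
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The model continues the prompt with new, statistically plausible text
# learned from the patterns in its training data.
result = generator(
    "Generative AI can be used to",
    max_new_tokens=40,       # limit the length of the continuation
    num_return_sequences=1,  # ask for a single completion
)
print(result[0]["generated_text"])
```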
But what are ChatGPT and Lensa?
OpenAI, a research institute in the field of artificial intelligence, has developed ChatGPT, a cutting-edge language model designed to help users generate human-like text. ChatGPT is a model from the GPT-3.5 series fine-tuned for use in a chatbot environment. Its developers trained the model on a large dataset of conversational text, such as chat logs and social media conversations, so that it could learn the patterns and structures of human conversation and produce more natural, conversational responses. The enhancements include the ability to handle interruptions and maintain context across multiple turns of a conversation, along with other features that improve the model's performance in a chatbot setting.
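As an illustration of the multi-turn behaviour described above, here is a minimal sketch of a two-turn exchange using OpenAI's public Python SDK. The model name and messages are illustrative assumptions; this is not ChatGPT's internal implementation, only the public chat API that exposes the same conversational interface.

```python
# A minimal sketch of a multi-turn conversation via OpenAI's public
# Python SDK (v1.x). Model name and messages are illustrative assumptions;
# an API key must be set in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

# The full message history is resent on every call; this is how the
# model "maintains context" across multiple turns.
messages = [
    {"role": "user", "content": "What is a generative AI model?"},
]
reply = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
messages.append({"role": "assistant", "content": reply.choices[0].message.content})

# Second turn: the pronoun "it" is resolved from the conversation history.
messages.append({"role": "user", "content": "Can it write poems?"})
reply = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
print(reply.choices[0].message.content)
```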
And what does ChatGPT say about itself?
Methodology behind ChatGPT
ChatGPT was trained using Reinforcement Learning from Human Feedback (RLHF). The model was first fine-tuned with the help of human AI trainers, who provided conversations in which they played both sides, the user and the AI assistant. To create a reward model for reinforcement learning, comparison data was collected, consisting of two or more model responses ranked by quality. Proximal Policy Optimisation was then used to fine-tune the model against this reward model, and several iterations of the process were performed. This new artificial intelligence model promises to provide more accurate and reliable results.
Training steps of ChatGPT
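The ranking step can be made concrete with a short sketch. In the standard pairwise formulation used in RLHF work, a reward model is trained so that the response humans ranked higher receives a higher score; the loss below is that pairwise objective, written in PyTorch. The tensors stand in for reward-model outputs and are illustrative assumptions, not OpenAI's actual code.

```python
# A sketch of the pairwise reward-model objective used in RLHF:
# the model should assign a higher scalar reward to the response that
# human labellers ranked higher. The scores here are dummy stand-ins
# for the outputs of a real reward model (an illustrative assumption).
import torch
import torch.nn.functional as F

def pairwise_reward_loss(chosen_scores: torch.Tensor,
                         rejected_scores: torch.Tensor) -> torch.Tensor:
    # -log sigmoid(r_chosen - r_rejected): minimised when the chosen
    # response scores higher than the rejected one.
    return -F.logsigmoid(chosen_scores - rejected_scores).mean()

# Example: reward scores for three (chosen, rejected) response pairs.
chosen = torch.tensor([1.2, 0.4, 2.0])
rejected = torch.tensor([0.3, 0.9, 1.1])
print(pairwise_reward_loss(chosen, rejected))  # scalar training loss
```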
Magic Avatars by Lensa AI
Lensa AI, developed by Prisma Labs, is an app that uses AI to create avatars from selfies. The app's latest feature, called Magic Avatars, uses the Stable Diffusion deep-learning model to generate dreamy selfies in various art styles. Stable Diffusion is a text-to-image generative model, trained on large collections of image-text pairs, that is commonly used in AI art generators.
Unlike typical photo editors that enhance photos immediately, the Lensa AI app works differently. You need to upload 10 to 20 selfies so the algorithm can learn the pattern of your face. When you upload your selfies to the app, they are sent to Amazon or Google cloud servers, where they are analysed by AI using the latent diffusion model and a collection of over 400 million images. This requires a lot of computational power, which is why it takes longer for the AI to generate images in different styles and variations.
After the training, you can send a detailed query to the application. In the blink of an eye you can become a character from the TV series Game of Thrones painted in the style of Van Gogh, or an impressionist alien, or find yourself in New York or Paris; in short, you can look however you want, and all you need is imagination.
Stable Diffusion Algorithm
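As a rough illustration of the underlying text-to-image step, here is a minimal sketch using the public Stable Diffusion model via Hugging Face's diffusers library. The model name and prompt are illustrative assumptions; note that Lensa's personalisation (learning your face from 10 to 20 selfies) would additionally require fine-tuning the model, which this sketch does not do.

```python
# A minimal sketch of text-to-image generation with the public
# Stable Diffusion model via the `diffusers` library. Model name and
# prompt are illustrative assumptions; this shows only the generation
# step, not Lensa's proprietary fine-tuning on a user's selfies.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # a publicly available checkpoint
    torch_dtype=torch.float16,
).to("cuda")

prompt = "portrait of a Game of Thrones character, painted in the style of Van Gogh"
image = pipe(prompt).images[0]  # denoise random latents guided by the prompt
image.save("magic_avatar.png")
```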
What do our law experts say?
Misuse of AI image-generation applications
AI vs Artists
Kim Leutwyler, a Sydney-based artist, accused the imaging app Lensa of profiting from stolen, uncredited, and uncompensated art. The artist said she found many of her original portraits in the database used to train popular AI art models, without being compensated or credited. Leutwyler therefore argued that AI copyright law needs to be strengthened, as current legislation has not kept pace with the speed of the technology.
While Lensa’s Terms of Use prohibit content that may infringe or violate any copyright or intellectual property right of any person, the training datasets for most AI image-generation applications contain billions of images scraped from the internet. Even though platforms claim they do not use copyright-protected work in their datasets, a number of artists have accused them of replicating their work. Generally speaking, there are two issues.
The first issue is the legality and ethics of using existing images to create AI-generated work. One argument holds that other images are used for training purposes only, so copyright infringement is unlikely to occur, much as every human artist inevitably draws inspiration from the works of others without infringing anyone's copyright. In the case of artificial intelligence, we may assert that the model likewise only draws inspiration, although on a far greater scale. The counter-argument is that this analogy does not, by itself, make it legal or ethical for AI platforms to train on someone else's work without permission.
The second issue concerns the instructions given to AI platforms for the creation of artwork. If an AI platform is instructed to generate something that imitates the style of a particular artist, this would most likely constitute an infringement of intellectual property. The main difficulty, however, is the absence of judicial practice: it is unclear how such a claim would be resolved in an actual court case.
A tool that can be used to check whether an artist's work has been used to train AI is the website 'haveibeentrained.com', also used by Leutwyler. Essentially, the website allows artists to search training databases and flag their work for removal if it has been used to train AI platforms.
EU Intellectual Property Rights (IPR) Law on AI
From the European Union (EU) law perspective, no legislation on AI and IPR has been adopted. At the same time, in October 2020, the European Parliament issued several resolutions and draft reports on AI, including a resolution on intellectual property rights for the development of artificial intelligence technologies. Paragraph 12 of the resolution establishes that patent protection can be granted to AI technologies provided that 'the invention is new and not self-evident and involves an inventive step'. Additionally, paragraph 15 states that creations generated by AI technology must be protected under intellectual property law to encourage investment and improve legal certainty for citizens, businesses, and users of AI. If we were to analyse the Leutwyler case from the perspective of the above-mentioned resolution, a court ruling could go in two directions: either to conclude that the work generated by Lensa is not entirely innovative because it was trained on the work of other artists, or to rule that the AI is merely drawing inspiration. What is certain is that Lensa could be held liable for the unauthorised use of Leutwyler's work.
ChatGPT and cybercrime risks
One possible risk of ChatGPT is that cybercriminals may easily use it to learn how to 'create' attacks. According to Suleyman Ozarslan, a cybersecurity expert and co-founder of the cyber resilience organisation Picus Security, ChatGPT can be used for malicious purposes despite its policy of not writing ransomware. Ozarslan explained that he described the tactics, techniques, and procedures of ransomware to the chatbot without directly labelling it as such, and was still able to get the results he wanted; the chatbot also wrote 'an effective virtualization/sandbox evasion code', which hackers could use to evade detection and response tools. In other words, ChatGPT would not write ransomware as such but would create its 'pieces'.
It is, therefore, evident that regulation of the use of ChatGPT needs to be strengthened, as the tool could facilitate and even amplify cybercrime. According to its Terms of Use, use is restricted when it infringes another person's rights or when the service is misused for commercial purposes. On the other hand, much is left to users' creativity: some words may be flagged, yet the same content could still be generated if phrased differently.
Follow Diplo AI&Data Lab on Medium!