Part 8: ‘Maths doesn’t hallucinate: Harnessing AI for governance and diplomacy’
This post is part of the AI Apprenticeship series:
- Part 1: AI Apprenticeship 2024 @ DiploFoundation
- Part 2: Getting introduced to the invisible apprentice – AI
- Part 2.5: AI reinforcement learning vs human governance
- Part 3: Crafting AI – Building chatbots
- Part 4: Demystifying AI
- Part 5: Is AI really that simple?
- Part 6: What string theory reveals about AI chat models
- Part 7: ‘Interpretability: From human language to DroidSpeak’
- Part 8: ‘Maths doesn’t hallucinate: Harnessing AI for governance and diplomacy’
By Dr Anita Lamprecht (supported by DiploAI and Gemini)
Maths doesn’t hallucinate. But does AI? This seemingly paradoxical statement is key to understanding the latest advancements in artificial intelligence. In this final blog of my AI Apprenticeship series, I’ll show you how Diplo harnesses AI for governance and diplomacy, exploring the fascinating reality behind AI ‘hallucinations’ and their implications for this crucial field.
Maths does not hallucinate
AI hallucinates. It’s a phrase that’s thrown around a lot these days, often with a sense of fear and wonder. Machine hallucinations have found their place in art and even in dictionaries. But what if I told you that AI doesn’t actually hallucinate? What if the reality is far more complex – and even more fascinating?
The current hype around (generative) AI can be traced back to a 2017 invention by Google: the transformer architecture. This architecture represents a fundamentally different approach to how AI systems work. Stripping the transformer of all the magic reveals that it is built upon a foundation of mathematical functions and statistical methods. These functions work together to process information, learn patterns, and generate outputs. The results have generated enormous excitement, but relying on maths comes with a trade-off.
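To make the point concrete, here is a minimal sketch of the transformer's core operation, scaled dot-product attention, written in plain Python. The vectors and numbers are toy values chosen for illustration; a real model does this with millions of learned parameters, but the arithmetic is the same: multiplications, a softmax, and a weighted average.

```python
import math

def softmax(scores):
    # Turn raw similarity scores into probabilities that sum to 1.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    # Scaled dot-product attention (the heart of the 2017 transformer):
    # nothing mystical, just dot products, a softmax, and an average.
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    out = [sum(w * v[i] for w, v in zip(weights, values))
           for i in range(len(values[0]))]
    return out, weights

# Toy example: one query attending over three key/value pairs.
query = [1.0, 0.0]
keys = [[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]]
values = [[10.0], [20.0], [30.0]]
out, weights = attention(query, keys, values)
print([round(w, 3) for w in weights])  # three probabilities summing to 1
```

The key with the highest similarity to the query receives the largest weight; everything the model "knows" is expressed through such weighted sums.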
Transforming numbers into meaning
Human societies are more than mathematical functions. I once read the statement: ‘Maths doesn’t care about culture or our history.’ Taken out of context, this might seem absurd. Why? Because it is so obvious that maths does not care about anything – it’s just maths. But stay with me. Let’s transform mathematical functions and numbers into language. What is the effect? Suddenly, we have meaning for everyone, regardless of one’s mathematical comprehension. However, the meaning of words can differ significantly. What words mean to us is a question of the value or the weight we assign to them. The word ‘hallucination’ itself is a great example.
Setting an example: Hallucinations
The word ‘hallucination’ traditionally refers to a human sensory experience that appears real but exists only in the mind. Think of seeing things that aren’t there or hearing music when no music is played. Depending on the context, hallucinations can be perceived as a sign of mental illness, a source of psychedelic fun, a spiritual revelation, or even a form of creative inspiration. When applied to AI, however, the word ‘hallucination’ takes on a different meaning, describing outputs that are factually incorrect or nonsensical. This shift in meaning can be confusing. After all, AI is fundamentally based on maths – algorithms and probabilities – and maths itself doesn’t hallucinate.
Take the following video as an example:
Watching the AI struggle to generate a realistic diving animation evokes a strong sense of unease. The distorted movements and unnatural poses trigger an instinctive reaction in us. But imagine if we were presented with the underlying mathematical calculations and probabilities instead of the video. Would we feel the same way? Likely not. The numbers themselves, while indicating unusual outputs, wouldn’t carry the same emotional weight or disturbing effect.
Opening the doors to the multiverse?
This highlights the fundamental difference between how humans and AI process information. We are driven by emotions, sensory experiences, and our understanding of the world. AI, on the other hand, navigates a landscape of probabilities, producing a ‘multiverse’ of potential outcomes with each calculation. Instead of showing us factual numbers, generative AI shows us probabilities.
This is where the ‘multiverse’ analogy comes into play. Transformers essentially open the doors to something comparable to the multiverse. Each time we hit enter, an AI system will choose one door from millions, and show us a hypothetical alternative world based on its probability calculations.
Showing alternative realities, in itself, is not necessarily a problem. The visionary narratives surrounding the metaverse and virtual worlds are based on this notion. The UN Virtual Worlds Day in June 2024 demonstrated how simulating different alternative perspectives (or hypothetical worlds) could support and accelerate the sustainable development goals (SDGs). We will dive deeper into this topic in my upcoming futures literacy blog series.
Watching the impossible
However, these plans are still more in the visionary realm, like the narrative of artificial general intelligence (AGI). Take the diving video as an example. Despite the potential showcased at events like the UN Virtual Worlds Day, this technology has yet to reach the maturity required for automated generation of realistic videos or lifelike simulations.
Our tolerance for processing unnatural movements, as seen in the diving video, is very low. Our minds cannot easily be tricked into believing in this part of the ‘multiverse’ because it shows something impossible. Impossible in the sense that we could not survive in a world that defies the laws of biological life.
With language, however, our tolerance for probability is much higher. Missing words and twisted sentences are not a huge problem for us. They seem possible to our minds. Why does this matter for our core topic – governance of and with AI?
Harnessing the possible
The appeal of generative AI is undeniable. Chatbots generate impressive output, fuelling our hopes for a technology capable of solving complex problems. Big tech companies continually release new or improved generative AI models, further raising our expectations, with some even predicting the arrival of artificial general intelligence (AGI). However, simply hoping for a solution doesn’t get us anywhere. What can we do? How can we regain agency over our here-and-now presence?
A major problem: The ‘multiverse’ effect
One issue we must address is the ‘multiverse’ effect, often referred to as ‘hallucinations’. This phenomenon arises because AI models don’t follow a fixed set of rules. Instead, they calculate the probability of different words and sentences fitting together, leading to variations in their responses – even when given the same input. This probabilistic nature can sometimes produce outputs that seem disconnected from reality.
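This probabilistic behaviour is easy to demonstrate. The sketch below uses invented, purely illustrative probabilities for the next word after a prompt; real models compute such distributions over tens of thousands of tokens, but the principle is identical: the same input can produce a different output on every run.

```python
import random
from collections import Counter

# Hypothetical next-word probabilities a model might assign after the
# prompt "The treaty was signed in ..." (illustrative numbers only).
next_word_probs = {"Geneva": 0.5, "Vienna": 0.3, "secret": 0.15, "1815": 0.05}

def sample_next_word(probs, rng):
    # Draw one word according to its probability: identical input,
    # potentially different output each time.
    words = list(probs)
    weights = [probs[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

rng = random.Random(42)
runs = Counter(sample_next_word(next_word_probs, rng) for _ in range(1000))
print(runs.most_common())  # frequencies roughly proportional to the probabilities
```

Nothing here is ‘hallucinating’: the model is doing exactly what it was built to do, sampling from a probability distribution. The low-probability continuations are simply other doors in the ‘multiverse’.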
Chatbots with agency
We now harness these effects by providing context – assigning weights to datasets and conducting elaborate prompting exercises. This was our practical task during the AI Apprenticeship online course. Step by step, we created chatbots designed to produce output based on our dataset. Our datasets, along with limitations and sources from the internet, serve as our defined ‘reality’ (our perspective or ‘world’).
Traditional chatbots typically follow pre-programmed rules and decision trees. Our Diplo chatbots are much more flexible and adaptable. They learn from interaction and can perform tasks beyond simple conversations. Calling them chatbots feels like an understatement. What we are building are agents, adapted to our knowledge and needs.
From the LLM’s multiverse to the world you represent
Agents act as a layer on top of large language models (LLMs). They are guided by the context we provide – particularly through system prompts, databases, weights, and structured prompting. These specialised agents act on our behalf, navigating the ‘multiverse’ to deliver outputs that align with our desired outcomes. This is how we actively adapt AI to fit our socio-technological systems.
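The agent pattern described above can be sketched in a few lines. This is a hedged illustration, not Diplo's implementation: `call_llm` is a hypothetical stand-in for any real model API, and the keyword-overlap retrieval is a deliberately naive placeholder for a proper retrieval step over a knowledge base.

```python
def call_llm(prompt: str) -> str:
    # Placeholder: a real implementation would call a hosted or local model.
    return f"[model answer grounded in {prompt.count('SOURCE')} source(s)]"

def build_agent(system_prompt: str, knowledge_base: list) -> callable:
    def agent(question: str) -> str:
        # Retrieve documents relevant to the question (naive keyword
        # overlap, standing in for real retrieval).
        terms = set(question.lower().split())
        relevant = [doc for doc in knowledge_base
                    if terms & set(doc.lower().split())]
        # The system prompt and retrieved sources form the 'world'
        # the model must answer from.
        context = "\n".join(f"SOURCE: {doc}" for doc in relevant)
        prompt = f"{system_prompt}\n{context}\nQUESTION: {question}"
        return call_llm(prompt)
    return agent

agent = build_agent(
    system_prompt="Answer only from the sources provided.",
    knowledge_base=["Diplo trains diplomats in AI literacy.",
                    "The Geneva AI Attaché supports multilateral processes."],
)
print(agent("What does the Geneva AI Attaché support?"))
```

The essential move is that the agent, not the foundational model, decides which ‘reality’ (system prompt plus curated sources) frames every answer.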
It is important to remember that hallucinations – or the multiverse effect – occur because generative AI models calculate the probability of different words and sentences fitting together. This leads to variations in their responses, which can sometimes seem disconnected from reality. Understanding this dynamic is essential for AI governance and diplomacy, as it reveals actionable areas where intervention is possible now.
Don’t wait for the future: Learn to use AI now
What about bias? Shouldn’t all countries aim to develop their own foundational AI models to address bias in data? Creating completely unbiased models might be an illusion. As our lecturer, Jovan Kurbalija, pointed out: bias is part of life. We are all biased; neither we nor our data are neutral. This doesn’t mean we should accept harmful biases. Instead, Jovan recommends shifting attention to protecting our knowledge. How can we protect our knowledge? By retaining it in our chatbots or agents. Our knowledge provides the context that LLMs need to generate accurate output, helping to avoid getting lost in the multiverse of possible results.
While building and running foundational AI models (LLMs) may be beyond the reach of many countries, the situation is different for agents. The AI Apprenticeship course underscored a crucial point: anyone can build and host independent AI agents. This includes countries, organisations, and even individuals with reasonable budgets, reducing reliance on big tech companies. Such independence is vital for diplomats representing their countries’ interests in international negotiations and for organisations that must maintain impartiality.
Geneva AI Attaché
To effectively utilise this technology now, we must identify which parts of a process it can support. Diplo has demonstrated how the new Geneva AI Attaché platform can empower small and developing countries in diplomacy by supporting preparation, negotiation, and follow-up in multilateral processes. As Jovan Kurbalija explains, ‘As we look to the future, the Geneva AI Attaché can be a beacon of innovation and empowerment in the world of diplomacy.’ Discover the Geneva AI Attaché and explore its potential to transform governance processes.
Closing thoughts
Dear reader, thank you for joining me on this journey through the AI Apprenticeship blog series. Being literate in this field is essential. The AI Apprenticeship online course is an outstanding opportunity to acquire AI literacy by building your specialised chatbot agent and understanding its implications for governance.
If you are still unsure about our socio-technological system’s direction, stay tuned for my next blog series, where I focus on futures literacy for AI governance. Together, we will explore tools to dismantle future narratives and learn to govern technology in a world where the future is both a fact and an uncertainty.
The AI Apprenticeship online course is part of the Diplo AI Campus programme.