
Understanding AI: Why does it matter?

Published on 17 October 2024

The 50th annual International Forum on Diplomatic Training (IFDT) meeting held in Budva, Montenegro, from 8 to 11 October 2024, gathered Deans and Directors of Diplomatic Academies and Institutes, from Tokyo and Beijing to Washington and Lima.

It was a wonderful opportunity not only to meet old friends and colleagues but also to exchange views and best practices. Diplo was kindly asked by the organisers to design and deliver the session entitled How to Use Artificial Intelligence in Diplomatic Training, and we found that the same question arose in various guises, namely: ‘We know AI is important, but we are not sure what to do with it’.

We’ve heard this question at many events over the last year or two, on subjects not limited to (diplomatic) training—as was the case at the IFDT—but ranging from policy regulation to internal organisational restructuring.

In this blog post, I will try to provide our view on how to navigate the current landscape of AI narratives.

AI narratives: A compelling, competing, and conflicting ragbag!

The confusion around the implementation of AI is predictable, given all the competing narratives that still surround this topic.

The confusing AI narratives landscape

On the one hand, we have big tech companies that have developed the technology and view it as a commercial product from which they aim to profit. The narrative they promote is often overwhelmingly positive, presenting AI as a solution to all our problems—and a solution that’s just one click away, as simple as that!

While such an approach is entirely legitimate from a business perspective, it leads to unrealistic expectations about how the tool can be used ‘out of the box’, instantly meeting all our wants and needs.

This is not only incorrect, but also produces a typical human dichotomy: while some take AI for granted and plan to use it without any additional effort to double-check its output, others see it as the end of humanity, evoking scary scenes from doomsday sci-fi movies and wanting to ban it altogether.

Next, we have governments and various government bodies who understand the potential of the technology and want to either curb it, control it, or regulate it in order to safeguard national or geostrategic interests. This narrative is, again, completely legitimate, but it reinforces the incorrect idea that AI is omnipotent and too dangerous to engage with.

Finally, there are various users, so-called ‘early adopters’, who are not afraid to engage with the technology and who have been using various AI-assisted tools for different purposes, with varying results. Many of them then promote their own approach as the only proper way to use the tool, creating the notion that there are correct and incorrect ways of using the technology. This view is untrue, and it only adds to the overall confusion.

The nature of AI: More artificial than intelligent

We at Diplo have been dealing with the interplay between technology and diplomacy since the emergence of the internet some 30 years ago, so when we first encountered the AI technology, we decided to address it the same way we’ve been addressing other technological advancements:

  1. Look under the hood to see what AI actually is
  2. Find a way to implement it in our daily work (in order to make our lives easier and our work more efficient)
  3. Convert the expertise and knowledge gained from this experience into comprehensive capacity development programmes that we can then offer both to our partner organisations, and to students attending our courses

In this process, we came to the conclusion that AI is still much more artificial than intelligent. Why? Because behind every AI algorithm, there is basically a large language model (LLM) designed to mimic natural human language, not just in terms of grammar or syntax but also on the level of semantics.

These models are pre-trained on an enormous dataset, learning patterns and probabilities from the text they ingest. When you ask a question or start a conversation, an LLM does not look up similar questions and return links to sources as a search engine does; it generates a response. It does so by predicting the most likely first word of the response, then selecting the most likely next word from several possible candidates, and repeating the same process for every subsequent word until the response is complete.
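The word-by-word process described above can be sketched in a few lines of code. The probability table below is entirely invented for illustration (a real LLM learns billions of conditional probabilities over tokens, not a small hand-written table), but the generation loop follows the same principle: each next word is drawn from a probability distribution conditioned on what came before.

```python
import random

# Toy next-word probability table: for each preceding word, the possible
# next words and their probabilities. The words and numbers here are
# invented purely for illustration; a real LLM learns such conditional
# probabilities from its training data, over far larger vocabularies.
NEXT_WORD_PROBS = {
    "<start>": [("diplomacy", 0.6), ("ai", 0.4)],
    "diplomacy": [("is", 0.9), ("matters", 0.1)],
    "ai": [("is", 0.8), ("helps", 0.2)],
    "is": [("changing", 0.5), ("important", 0.5)],
    "changing": [("<end>", 1.0)],
    "important": [("<end>", 1.0)],
    "matters": [("<end>", 1.0)],
    "helps": [("<end>", 1.0)],
}

def generate(rng: random.Random) -> list:
    """Generate a sentence one word at a time, sampling each next word
    from the conditional probabilities of the previous word."""
    word, sentence = "<start>", []
    while True:
        candidates, weights = zip(*NEXT_WORD_PROBS[word])
        word = rng.choices(candidates, weights=weights, k=1)[0]
        if word == "<end>":
            return sentence
        sentence.append(word)

# Different random draws can yield different, equally 'fluent' sentences:
# the same sampling step is why an LLM rarely answers twice alike.
print(" ".join(generate(random.Random(0))))
print(" ".join(generate(random.Random(1))))
```

Because each word is sampled rather than looked up, the same prompt can produce different outputs on different runs, which is exactly the variability in responses discussed below.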

This is also the cause of a few other problems, namely:

  • The difficulty in determining the validity and factual accuracy of sources. Since the answer is always generated rather than referenced, we often receive slightly different responses to the same question.
  • The answers provided, and conclusions reached, result from programming that does not follow the standard human cognitive process; i.e. we do not fully understand how it actually works. This is where ‘artificial’ becomes ‘alien’.
  • These built-in variations in response mimic natural language—i.e. how we humans typically speak—tricking us into believing we are interacting with an intelligent entity when, in fact, we are engaging with a large language algorithm.

AI is not intelligent because, first of all, it does not understand the context of your question. Instead, it treats your question as a prompt to generate a response based on probability rather than addressing your particular situation.

Secondly, it does not recognise the speaker’s need, i.e. the underlying driving concern behind your question. This is also why, thirdly, it cannot provide a response that is best suited to your specific situation. Instead, it just repeats some general phrases that may sound nice, worldly, or even elaborate, but in most cases, they are mere phraseology exercises, with little applicability.

AI as a tool: The human element

All the things mentioned above—context, understanding, and creative thinking, i.e. the intelligent aspects—are the domain of humankind. This is good news, as it means that in the interplay between technology and humans, technology is artificial, while humans remain the intelligent ones who still have the upper hand.

However, if we want to benefit fully from this technology, we can’t afford to be lazy and expect AI to do all the work for us. Instead, we should seize the opportunity that this unbelievably fast and powerful tool offers, actively engage with it, and use it to our advantage. This is precisely what we at Diplo have been doing over the past year, producing tools that not only assist us in our daily work but also allow us to have some fun along the way. A prime example is the Ask Sir Humphrey chatbot, designed by DiploAI to provide humorous responses and witty repartee laced with British humour.

If we don’t act now, while the technology is still far from reaching the point of singularity—i.e. the moment when algorithms will begin to mimic not just human language but human thinking as well—we may soon find ourselves living a cautionary tale, in which AI is a good servant but a bad master. Whether or not AI singularity presents a threat will be discussed in the next blog post.
