
Part 2 – Speaking of futures: Que será, será

Published on 29 May 2024
Updated on 20 June 2024
Diplo Wisdom Circle

Part 1 – Speaking of futures: Story-capsules | Part 3 – Speaking of futures: Presuppositions

When Doris Day sang Que será, será / Whatever will be, will be / The future’s not ours to see / Que será, será in the mid-1950s, she gave voice to a ‘cheerful fatalism’ that characterised the post-war decade. Yet, the plot of the film in which the song featured, Hitchcock’s thriller The Man Who Knew Too Much, suggests that it is only through supreme effort that a bad end can be averted once the dice are cast.

These two narratives – fatalism and agency – characterise the tension at the heart of our relationship with the future. Fatalism, whether cheerful or doomsaying, still colours many people’s views of the future as something we can’t control. We can either do our best to attend to the variables within our small remit and then embrace the unforeseeable with an accepting shrug and a wry smile. Or, in those cases where our fatalism assumes the worst, we can dig bunkers and stock them up in anticipation of Armageddon, congratulating ourselves on our superior foresight.

In contrast, narratives that emphasise human agency assure us that we can defy and deflect worst-case scenarios by deploying values such as vision, courage, determination, resourcefulness, and the thinking-outside-the-box nature of individualism. These values, like the magic potion in Asterix and Obelix, not only empower us to destroy tyrannical enemies bent on our extermination, but may also enable us to build idyllic and heroic societies ‘of indomitable Gauls who still hold out against the invaders’.

In my previous blog post, ‘Speaking of Futures (1): Story-capsules’, we looked at how story-capsules found in connotations and metaphors can subliminally influence the way we think about the future. This week, my focus is on the framing devices contained in larger narratives about AI, from science fiction and cautionary tales to logical fallacies.

The image shows the back of a person’s shaved head repeated three times, captioned in turn ‘style’, ‘soldier’, and ‘survivor’.
HSBC’s perspective-changing ads play on our tendency to tell a story.

I like to think of frames as the selection and promotion of certain elements to influence the interpretation and moral evaluation of an issue. I emphasise ‘moral’ because we often have an instantaneous or so-called ‘gut’ reaction which prompts us to make a judgment on a presentation, whatever that presentation may be: an image, an image with a caption, a story, film, or song, or even just a metaphor (see The Coming Wave by Mustafa Suleyman) or a single morpheme, as with the prefixes in killer-bot and chat-bot. To the extent that this judgment is subliminal, we are more likely to be influenced by it. And if we are readily influenced by somebody else’s framing devices, we may more readily subject ourselves to their values, arguments, and maybe even their demands, uncritically. Framing, in other words, is a form of storytelling, and all storytelling is selective in what it says and why.

Science fiction is a form of speculative storytelling or ‘fabulation’ which imagines possible worlds, usually though not necessarily set in the future, in which some technological advance creates a challenge that human protagonists have to negotiate. It has been defined as ‘a story built around human beings, with a human problem, and a human solution, which would not have happened at all without its scientific content’ (Theodore Sturgeon, 1952).

We find two recurring themes in science fiction: one to do with morality, the other with identity. The first frames AI as either good or bad, often with a complex dynamic between the two or a transition from helpful to harmful. This is most obviously the case in the Golem legend, Frankenstein, 2001: A Space Odyssey, and I, Robot, for example, but it may also occur in films that involve AI-human friendships or relationships (Her, Ex Machina), where friendly AIs are rightly or wrongly tainted by a shadow of doubt, and hyper-intelligent robots on a mission to destroy us come back as heroic friends (Terminator). In Star Trek, the good android is personified by Data, and the amoral one by his twin brother Lore. The identity theme plays on the idea of what it is to be human, and often involves androids who emulate humans (Star Trek) or humans who may themselves be replicants (Blade Runner). Morality and identity put defining human concerns at the centre of fiction about possible worlds. Science fiction often tests the nature of our humanity in what-if scenarios.

Although AI and intelligent aliens may be framed either positively or negatively, dystopian science fiction imprints itself more strongly on our imagination. This may be due in part to the nature of the genre, which Arthur C. Clarke captured in the following distinction: ‘Science fiction is something that could happen – but you usually wouldn’t want it to. Fantasy is something that couldn’t happen – though you often only wish that it could.’ Why wouldn’t we want the worlds of science fiction to come about? I believe it is because those worlds so often address our atavistic fears of embracing strangers who present as friends only to reveal themselves as foes. It is very likely, therefore, that these dystopian narratives and the so-called ‘Frankenstein complex’ (our fear of mechanical men turning against us) not only provide a frame for our perception of AI in the future, but are themselves inspired by an ancestral proto-narrative. Archetypal narratives seem to be embedded in our genes and resonate strongly with some of our deepest drives and fears. The drives include our desire for exploration, for fulfilment, and for finding a complementary other; the fears include losing control, being enslaved, losing our souls or becoming dehumanised in some other way, and, finally, being exterminated.

What solution or consolation do these narratives offer us, if any? More often than not, as in the case of our indomitable Gauls, a marginalised oddball proves to be the hero who saves the day through their refusal to follow the pack, their moral integrity, and their life-saving resourcefulness. These three attributes can be read as an affirmation of the human values most essential to our survival. It would seem, then, that in many of our narratives about possible AI-centred worlds, we appeal to our greatest fears of the outgroup ‘other’ and then rally around the greatest assets of the ‘us humans’ ingroup.

The image shows three book covers: Klara and the Sun by Kazuo Ishiguro, Machines Like Me by Ian McEwan, and The Three-Body Problem by Liu Cixin.
‘Science fiction is a literature of “what if?” What if we could travel in time? What if we were living on other planets? What if we made contact with alien races? The starting point is that the writer supposes things are different from how we know them to be.’ Christopher Evans, 1988.

Until I watched Liu Cixin’s 3 Body Problem on Netflix (released March 2024), my own preferred framing device in science fiction was the one in which the tables are turned and we humans are portrayed as the baddies, either through a lack of compassion in the way we treat our artificial friends (as in Klara and the Sun by Kazuo Ishiguro) or because we are riddled with moral flaws, vulnerabilities, and inconsistencies. In Machines Like Me by Ian McEwan, Adam, the humanoid android, exposes and attempts to correct these flaws in ways which not only seem cold-hearted, but which ultimately threaten one of our defining qualities: that we are, as a species, complex and full of contradictions. There is a secondary level of framing in both these novels, namely an allusion to earlier, possibly less corrupted, stages of our evolution.

The Three-Body Problem explores our flaws and contradictions at many levels: Cultural Revolution ‘othering’, our inertia in the face of climate change, our love-hate relationship with the possibility of extraterrestrial life, and our general tendency to think small, like bugs in a large, complex, interrelated universe. Both the book and the Netflix series adopt a narrative style based on virtual reality games and the convergence of multiverses. This style has two distinctive features: chronological time is dispensed with in favour of evolving iterations, and human agency operates across both temporal and virtual boundaries.

How do these three novels illustrate my claims about framing, and my contention that fatalism and agency form the driving tension in our relationship with the future? Whether we are dealing with a simple, linear narrative or with frames within frames and allusions within allusions, we tend to be very responsive to stories which address our drives and our fears. Each author’s selection and promotion of certain elements, often elements in conflict with each other, invites us to reflect on the moral challenges the story raises and the outcomes it proposes. It may also invite us to judge not just these elements but ourselves too, and to reconsider our prior beliefs and prejudices.

Any story told by humans, about a human problem with a human solution, is likely to be a cautionary tale at some level, not necessarily because it was intended as such, but because we are inclined to seek lessons in other people’s stories and experiences. Parables are both intended and understood as moralising stories which indicate, through the power of analogy, how we should act. The Bible is full of them, and so are films and literature.

Even books on AI appeal to cautionary tales. In the preface to his 2014 book Superintelligence: Paths, Dangers, Strategies, Nick Bostrom recounts the ‘unfinished fable of the sparrows’ in which a group of sparrows decide to adopt an owl chick and train it to catch their meals for them while protecting them from other predators. Only Scronkfinkle, a ‘one-eyed sparrow with a fretful temperament’ (our marginalised resourceful oddball), suggests thinking about the complicated question of how to tame the owl before introducing it into the community. Bostrom’s dedication of his book to Scronkfinkle suggests that he is uncertain as to whether any of the AI-control measures that he proposes can prevent the ‘owl in our midst’ from exterminating our species.

Logical fallacies are another highly effective framing device, drawing their power from the mini-narratives they encapsulate and from the fact that our neural networks seem primed to respond to them. Although the term ‘fallacy’ invites us to dismiss them as mistakes in reasoning, logical fallacies are very effective tools of persuasion.

The appeal to fear, which we have discussed above, is a logical fallacy. Similarly, the appeal to any emotion rather than to hard evidence is considered a logical fallacy. Yet, after millennia of evolution, we remain more immediately responsive to narratives that engage our passions than to those which bullet-point facts. At the neurological level, it has been shown that when our emotional buttons are pressed, whether positive or negative, we not only build many more neural circuits than when we appeal to reason alone, but also activate and thereby reinforce those circuits more often (Drew Westen, The Political Brain, 2012).

The following list is just a small sample of the many logical fallacies that may be relevant to our perception of AI:

  • Appeal to authority: Where even parables and proverbs may count as authority.
  • Anecdotal evidence: Where an argument is based on a friend’s experience or on a story one has been told (as in the case of fiction and films).
  • Slippery slope: If you take one step down that road, you will not be able to stop yourself and will end up as a broken heap at the bottom of the slope (this fallacy is appealed to by those who want to constrain research into AI ‘before it is too late’).
  • Hasty generalisations: Rushing to a conclusion without considering all the evidence or variables.
  • The fallacy of false choice: Where you are given a binary choice between two exclusive options (it’s either us or AI) when many other possibilities exist.
  • The fallacy of anthropomorphism: The attribution of human characteristics and intentions to non-human entities.
  • The if-by-whisky fallacy: Curious?! I’m saving the best till last!

The image is made up of four separate pictures. The first two show anthropomorphic robots sitting at desks and working on laptops, like office workers. The third shows a figure with a human torso, head, and face, but with robotic arms, posing as though for a portrait. The fourth shows a cartoon robot speaking with a cartoon child.
‘An “Image” is that which presents an intellectual and emotional complex in an instant of time.’ Ezra Pound, 1913.

These and other fallacies reflect deep-seated tendencies in our ways of thinking. According to Drew Westen, our neural networks are primed for certain kinds of emotional thinking, and fallacies simply capitalise on those predispositions. By providing ready-made templates, logical fallacies, like metaphors and connotations, conform to Ezra Pound’s definition of an image as an ‘intellectual and emotional complex in an instant of time’. Fiction and films project that image through time into a narrative, but they engage our emotional and intellectual resources in much the same way.

Let me finish with the if-by-whisky fallacy, since I live on an island which, though tiny, nevertheless boasts not one but TWO world-class whiskies (Scapa and Highland Park). This fallacy exemplifies the power of framing over our perception of, and reaction to, a subject. It also reminds us that there are at least two ways, and usually many more, of framing any one issue. It is a subcategory of the fallacy of equivocation in which we sit on the fence on a controversial issue, trying to please both sides, and it can readily be recast to speak about AI. I suggest you refill your glass and replace the key terms yourselves as you savour your ‘water of life’ (Gaelic uisge beatha, Latin aqua vitae):

My friends, I had not intended to discuss this controversial subject at this particular time. However, I want you to know that I do not shun controversy. On the contrary, I will take a stand on any issue at any time, regardless of how fraught with controversy it might be. You have asked me how I feel about whiskey. All right, this is how I feel about whiskey:

If when you say whiskey you mean the devil’s brew, the poison scourge, the bloody monster, that defiles innocence, dethrones reason, destroys the home, creates misery and poverty, yea, literally takes the bread from the mouths of little children; if you mean the evil drink that topples the Christian man and woman from the pinnacle of righteous, gracious living into the bottomless pit of degradation, and despair, and shame and helplessness, and hopelessness, then certainly I am against it.

But, if when you say whiskey you mean the oil of conversation, the philosophic wine, the ale that is consumed when good fellows get together, that puts a song in their hearts and laughter on their lips, and the warm glow of contentment in their eyes; if you mean Christmas cheer; if you mean the stimulating drink that puts the spring in the old gentleman’s step on a frosty, crispy morning; if you mean the drink which enables a man to magnify his joy, and his happiness, and to forget, if only for a little while, life’s great tragedies, and heartaches, and sorrows; if you mean that drink, the sale of which pours into our treasuries untold millions of dollars, which are used to provide tender care for our little crippled children, our blind, our deaf, our dumb, our pitiful aged and infirm; to build highways and hospitals and schools, then certainly I am for it.

This is my stand. I will not retreat from it. I will not compromise.

The whisky speech was delivered in 1952 by Noah S. Sweat, a Mississippi judge and state representative, and concerned the prohibition on alcohol in his state. ‘Whiskey’ is the Irish spelling, ‘whisky’ the Scottish.
