As we face unprecedented change and an uncertain future, it is the right moment to revisit the fundamentals upon which our society is built. A thorough examination of the values, ideas, and concepts that have served our society well is needed to determine what will continue to work as we grapple with accelerating modernity. With technologies such as artificial intelligence (AI), advanced genetic modification, and autonomous weapons quickly becoming a reality, our humanity will be challenged like never before. The search for solutions should start by going ‘back to fundamentals’, as was done during last week’s discussion in Zurich organised by the European Forum Alpbach and the Dezentrum and Foraus think tanks. The following questions echoed throughout:
Can we simply go back to the fundamentals and build a better society, or do we need to revisit and perhaps revise these very fundamentals? What are the fundamentals of society? How far back in history should we go to best determine the fundamentals of modern society?
The search for societal fundamentals takes us as far back as the ‘Axial Age’, when our ancestors began grasping their destiny through spiritual transcendence and rational agency. In the span of five centuries (500 BC – AD 1), Hinduism, Buddhism, and Jainism were born in India, while Taoism and Confucianism took hold in China. The philosophers Socrates, Plato, and Aristotle emerged in Greece, and Second Temple Judaism and Christianity came to be in the Middle East. With Islam, which emerged a few centuries later, humanity developed a ‘societal software’ that still operates today.
Fast forward to the Enlightenment, when Descartes, Hume, Rousseau, Voltaire, Kant, and other thinkers put humans and rationality at the centre of societal development. The two pillars of the Enlightenment – modernity and humanity – have shaped our world to this day.
Modernity has human rationality and progress at its core. It gave science and technology the room to truly grow. Industries flourished. Societies developed. Human life became longer, less dangerous, and more enjoyable. Modernity optimised the use of time and resources.
Humanity, the other pillar of the Enlightenment, put humans at the centre of society. Its key tenets are respect for human life and dignity, the realisation of human potential, and the individual right to make personal, economic, and political choices. Although they originated during the Enlightenment, humanity’s values were eventually codified in core international documents, such as the Universal Declaration of Human Rights and the UN Charter, by the mid-20th century.
Over the past few centuries, modernity and humanity reinforced each other in a virtuous cycle of sorts. Advances in science and technology helped emancipate millions worldwide. Freer and better-educated minds, in turn, breathed creativity and ingenuity into science and technology. The Enlightenment formula seemed to work.
However, in the last decade, tensions between modernity and humanity have started to emerge with the rapid growth of digital technology and, in particular, AI. This has raised questions about our future.
Will advanced technology reduce the space for human agency and, ultimately, our right to make personal, political, and economic choices?
From time immemorial, we have been making choices using our brains (logos), our hearts (ethos), and our gut (pathos). Those choices, good or bad, were ours, and ours alone. Suddenly, machines became capable of making better-informed, more optimal choices. They started gathering enormous amounts of data and more importantly, they started gathering data about us – what we like, what we search for, what we purchase, where we go, and how we get there. The algorithms behind the machines came to know us better than we know ourselves.
As machines start to gradually replace our human agency to choose – from helping us identify a lifelong partner to suggesting which item we should purchase next – we need to ask ourselves whether we will still be able to resist their advice if we want to. While it may be tempting to let AI choose for us, a blanket reliance on it could have far-reaching consequences for our society, economy, and politics.
To resolve this growing dilemma, we will need to revisit the interplay between modernity and humanity. Will modernity and humanity continue to reinforce each other, or will modernity, driven by science and technology, stifle humanity? Should we safeguard our right to human imperfection, especially in situations where our abilities are no match for AI and machines?
These and other questions will remain with us in the coming years as we discuss a new social contract that can capture our shared understanding of the future of humanity. At a minimum, we need to avoid the autoimmune trap in which modernity harms our core humanity. At best, we will find new ways for modernity and humanity to continue reinforcing each other.
Dear Jovan,
Your post is predicated on the Greek wisdom that we can “know ourselves.” What if this fundamental societal assumption were wrong?
In our past, the earth was the centre of the universe, and humans the supreme and unique species. Within the species, we viewed ourselves as individuals uniquely endowed as rational and ethical agents. We could emerge from Plato’s cave into the eternal light of truth.
“Modernity” has shown this lofty self-regard to be an illusion. Humanity may in fact be evolving biologically and culturally toward a “survival of the friendliest.” In this process, the survival of the social group, not the individual, is central.
As individuals, we seek meaning for inner stability. Reality, however, is chaotic – multicausal and unpredictable. We bridge this contradiction through illusions that preserve meaning while we are, in fact, unconsciously adapting. Adaptation by illusion rather than a rational process.
You write: “From time immemorial, we have been making choices using our brains (logos), our hearts (ethos), and our gut (pathos). Those choices, good or bad, were ours, and ours alone.” Good rhetoric, Jovan, but little else. Show me the underlying neural networks. Were we to apply current scientific knowledge about the inner workings of the self to these terms, they would dissolve like morning mist – terms best committed to the dust heap of intellectual history.
May I further remind you that hailed concepts like “liberty” are rather empty vessels? Locke, whom you place in the middle of your drawing, spouts these unalienable rights: Life, Liberty, and Property. Life is shorthand for economic choices; liberty for political choices; at this level of generality, property is entitlement – which contradicts both liberty and life. I could happily go on punching holes into this or any other system of classical values. Finally, remember Gödel. My conclusion? Rather than leaving Plato’s cave, we have to accept that we can’t separate ourselves from our shadows. They make us what we are. And it’s a mess.
Be that as it may, the classical system furthermore fails properly to address the core issue of “aggregation of individual choices.” For we are a hyper-social species, and we must somehow move from personal choices to common action. How do we move from micro-choice to macro- or social choice? Is the process one of numerical addition (neo-cons) or an emergent property of the social system (Greber)? We have no proper answer – democracy is a weak reed on which to hang our understanding of the aggregation process.
Two illusions thus underlie your system: the illusion of personal choice, and the illusion of rational collective choice. True, we have not fared badly so far. Yet remember: past performance is not indicative of future results.
You speak of the dangers of AI – “machines start to gradually replace our human agency to choose.” What else is new? Your Axial Age saw the emergence of the first social algorithms – religions. They have manipulated us for two thousand years. The only question to me is: which works better today?
Consequences, oh! Consequences!
Let’s go back to individual choice. There is always a good reason and a true reason in what we do. “Modernity” is beginning to explore and lay bare this useful contradiction by recording action rather than proclaimed intention. You state yourself: “They started gathering enormous amounts of data and more importantly, they started gathering data about us – what we like, what we search for, what we purchase, where we go, and how we get there. The algorithms behind the machines came to know us better than we know ourselves.” Here, I’d say, we will tend to fare better with AI than with philosophers pontificating from personal experience.
The next step is more complicated.
AI does not limit itself to description. It aspires to prescription, i.e. to move individuals, groups, or society as a whole in a revealed or preferred direction. At best, it influences and facilitates social consensus building. So far, we know little or nothing about the aggregation process it uses. It is a black box.
This situation has consequences.
The immediate downside, in my view, is the basic predictive error – overreach. AI’s knowledge about our choices may still be woefully inadequate (that is my sense, at the moment). The sorcerer’s apprentice story comes to mind. He knew how to trigger the process but not how to stop it; with AI, there is no wise old wizard handy to stop the impending, obvious error.
As an influencer, AI is not much different from the influencing systems of the Axial Age – the religious systems. It shares their universal ambition. As it advances over time, it may target smaller groups and foster tribalism. We know little of the scaling potential here.
Malevolence is the main downside. A bully may get hold of the AI process to further his own ends. Ever since the Paleolithic, there has been tension between the social group and the bully. So far, we have succeeded in keeping bullies at bay. This was the species’ way.
The Axial Age codified our collective intent to get rid of bullies by placing the influencer role in the transcendent realm. In the event, bullies took over the system. Religion came to underpin inherited authoritarianism – based mainly on fear all around.
With the ‘Enlightenment’, we introduced markets as a replacement for authority. Up to a point, it was driving out the devil with Beelzebub. The capitalists became the (hidden) kings. Today, we worship efficiency above all else. As a corollary, economic and political war was waged against the periphery (imperialism) and against future generations (nature exploited as a resource). Inequality accumulated to breaking point.
There were short-term advantages, indeed: we slowed down the process of social choice. As the calculation of interests replaced subjective emotions, religious wars faded, and we created a good life for many.
Today, AI can assist the bullies in setting up an authoritarianism of a different form altogether. No longer based on wholesale fear but on tailor-made inducements. We see its beginnings in China. A form of social eugenics. Humanity’s “goodness paradox” writ in code.
I’ll forgo judgment in favour of further observation and deeper understanding. So, I’ll borrow Zhou Enlai’s favourite answer to questions like yours: “too early to tell.”