Don’t waste the crisis: How AI can help reinvent International Geneva
International Geneva stands at a critical turning point. The city’s global institutions face unprecedented challenges: financial austerity, declining faith in multilateralism, and intensifying geopolitical tensions.
So, how could Geneva respond? Cash infusions might offer some relief in the face of budget cuts, but the depth of challenges ahead requires more transformative changes to secure Geneva’s relevance on the international scene.
Artificial intelligence (AI), often cast as a disruptor, could instead become Geneva’s rescuer: applied smartly, the technology can help revitalise the city’s humanitarian mission, modernise its vast knowledge networks, and equip its workforce and organisations for a future that has already arrived.
As AI becomes a commodity through open-source and free AI platforms and tools, AI transformation won’t require massive investment in infrastructure.
Instead, it requires proactive creativity, facilitated through modest funding and a bottom-up approach to AI. Below is a strategy that starts with immediate steps to address job losses, moves through organisational changes, and extends to preserving Geneva’s knowledge and inspiring global AI governance. Each proposal is paired with actionable steps that are technically feasible and financially affordable.
1. Preparing people for AI transformation
The most urgent action concerns the city’s workforce. Job losses across International Geneva have already begun due to budgetary cuts and the automation of administrative and text-based roles. These trends, which affect thousands of people, are likely to accelerate. Dealing with the immediate job crisis should be coupled with longer-term adjustment of the educational system to the changes and challenges that AI introduces in pedagogy. Geneva’s response should therefore be comprehensive and multi-speed.
Actions
AI Chômage and Apprenticeships
Launch AI apprenticeships focused on reskilling those who have lost their jobs or are likely to lose them in the coming period.
Geneva’s multilingual, highly educated professionals possess valuable skills that can be repurposed for the AI era.
Translators and interpreters, for example, can use their linguistic knowledge to contribute to developing and refining large language models, a cornerstone of current AI development. Lawyers can play a key role in auditing AI systems to ensure fairness and compliance. Policy professionals and social scientists familiar with organisational dynamics can support the development and deployment of ethical AI platforms. Many other professions can assist in enriching and contextualising knowledge for their domains.
AI reskilling can draw on Switzerland’s long tradition of apprenticeships, combining theoretical learning with practical experience of helping businesses and organisations prepare their data and processes for AI transformation.
AI pedagogy
Integrate AI into educational curricula, from philosophical ethics to practical applications in law, governance, and science, fostering critical thinking alongside technical skills.
In reimagining AI pedagogy, two core dimensions emerge. First, there is a need to embed AI comprehensively across all educational levels—covering its technological foundations and its legal, ethical, and societal implications—to equip learners with a well-rounded understanding of AI’s transformative role. Second and more challenging is the necessity to overhaul teaching methodologies to use AI to foster critical thinking, evidence-based reasoning, and creative problem-solving. This transformation calls for re-evaluating conventional assessment methods; for instance, traditional written assignments should be redesigned to encourage dynamic interaction with AI tools, stimulate innovative idea generation, and help students craft compelling narratives in an AI-driven context.
Micro-learning
Provide needed AI skills through innovative, just-in-time formats such as coffee-break training and meet-up exercises.
Learning is increasingly breaking free from traditional classrooms. Platforms like YouTube and TikTok have become prime spots for picking up new skills, while informal chats with friends and colleagues are proving essential to how we learn. To tap into this shift, we should champion micro-learning—those short, sharp bursts of knowledge—through online tools, casual coffee break discussions, and AI-themed meet-ups.
Expert matching
Overcome thinking and policy silos through personal networks supported by the creative use of AI tools.
Expert matching becomes highly important as AI thrives on interdisciplinary collaboration across a wide range of professional and cultural fields. For example, philosophers and theologians play a critical role in helping AI developers navigate ethical challenges, even though they often operate in different professional circles. Boundary spanners, individuals who connect across these disparate domains, will be essential in bringing experts together and, even more importantly, in sustaining communication and collaboration. Such boundary-spanning efforts run counter to the prevailing trend towards increasingly specialised and siloed scientific and policy fields.
2. Building adaptive organisations for the AI era
Organisational management is on the brink of a profound transformation as AI increasingly automates traditional processes across various domains, including management, accounting, and human resources. Current practices, based on the Taylorist emphasis on industrial efficiency and the Weberian reliance on rigid hierarchies, are not optimal for the dynamism that AI brings.
Geneva-based organisations will be impacted, as AI is likely to flatten traditional hierarchies by equipping employees at all levels—particularly those on the organisational periphery—with real-time data and advanced analytics.
For instance, AI-driven tools can enable faster lower-level decision-making, reducing the dependency on top-down directives and enabling a more responsive organisational model. The result will be a hybrid decision-making model that blends human insight with AI-driven precision, a concept increasingly explored in management and organisation literature.
Actions
Alleviate compliance pressure and administrative load
Reduce the compliance and reporting requirements on Geneva actors.
For many small NGOs, these requirements pose a crushing burden. In addition to compliance reform, Geneva actors would benefit from organisational reforms that streamline workflows through AI-enabled reporting, data analysis, and legal compliance checks. By reducing this administrative load, Geneva actors can redirect already scarce resources to their core missions, ensuring their efficiency and ultimate impact. For example, the ICRC and other Geneva-based humanitarian organisations can use AI to reduce bureaucratic burdens (e.g., automating compliance reports) while strengthening their unique human-centred work on the front lines of crises worldwide.
AI sandboxes for organisational change
Encourage experimental and bottom-up initiatives to test the practical use of AI in the procedures and activities of actors in Geneva.
AI transformation lacks a one-size-fits-all blueprint, making experimentation essential for tailoring implementations to specific professional and organisational cultures. However, experimentation inherently carries risks. AI sandboxes should be used to mitigate them, providing controlled environments where organisations can safely test and refine their AI strategies.
Accountability and transparency
Establish protocols to delineate when and how AI should assist humans in decision-making.
As AI increasingly makes decisions on our behalf, its use must be accountable and transparent. First, AI designers should determine which decisions can be delegated to machines. Second, all AI decisions should be fully transparent. Third, those who deploy AI should always be accountable and responsible for its decisions.
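To make the idea of such a protocol concrete, here is a minimal, purely illustrative sketch in Python; the risk categories and delegation rules are assumptions that each organisation would define for itself, not an established standard:

```python
from dataclasses import dataclass
from enum import Enum

class Mode(Enum):
    AI_ONLY = "AI decides, logged for audit"
    AI_WITH_REVIEW = "AI drafts, human approves"
    HUMAN_ONLY = "human decides, AI may inform"

@dataclass
class Decision:
    description: str
    affects_individuals: bool   # e.g. hiring, benefits, protection cases
    reversible: bool            # can the outcome be easily undone?
    impact: str                 # "low", "medium", or "high"

def delegation_mode(d: Decision) -> Mode:
    """Hypothetical rule set: the higher the human impact, the more human oversight."""
    if d.affects_individuals or d.impact == "high":
        return Mode.HUMAN_ONLY
    if not d.reversible or d.impact == "medium":
        return Mode.AI_WITH_REVIEW
    return Mode.AI_ONLY

# Example: routing a routine decision and a sensitive one
print(delegation_mode(Decision("sort incoming mail by topic", False, True, "low")))
print(delegation_mode(Decision("prioritise aid recipients", True, False, "high")))
```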
Traceability and diversity
Ensure that AI inferences (answers) can always be traced to sources that should be as diverse as possible.
Organizations cannot use AI as a black box. They should always understand the reasoning behind AI platforms, which can be ensured through the traceability of the sources and the logic deployed in the reasoning process. In addition, AI sources should be diverse, providing organizations with the necessary diversity of views and positions.
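As a rough illustration of what traceability can look like in practice, the sketch below pairs every answer with the identifiers of the documents that informed it; the document store, retrieval method, and query are hypothetical stand-ins for a real knowledge base and AI platform:

```python
from collections import Counter

# Hypothetical in-house document store (identifiers and texts are illustrative)
documents = {
    "WHO-report-2023": "pandemic preparedness depends on early data sharing",
    "ICRC-field-note": "humanitarian logistics in conflict zones rely on local partners",
    "WTO-brief": "trade facilitation reduces delays at borders",
}

def retrieve(query: str, k: int = 2):
    """Rank documents by naive word overlap with the query (a stand-in for real retrieval)."""
    q = Counter(query.lower().split())
    scores = {
        doc_id: sum((q & Counter(text.lower().split())).values())
        for doc_id, text in documents.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

def answer_with_sources(query: str):
    sources = retrieve(query)
    context = " ".join(documents[s] for s in sources)
    # In a real system the context would be passed to a language model;
    # here we simply return it, always paired with its source identifiers.
    return {"answer": context, "sources": sources}

print(answer_with_sources("How does data sharing support pandemic preparedness?"))
```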
3. Terroir savoir: Codifying, preserving, and sharing Geneva’s knowledge wealth
As knowledge becomes even more critical than data for the AI era, Geneva should activate the city’s rich reservoir of knowledge on a wide range of issues, from science and policy to climate protection and humanitarian logistics, to name a few areas.
So far, this “terroir savoir” (knowledge of the place) remains underutilised, siloed in documents, archives, and tacit know-how. As part of Geneva’s AI transformation process, focus should be placed on codifying, preserving, and sharing this knowledge to benefit the global public.
Actions
Codification of explicit knowledge
Use AI to digitise and organise archives from international organisations and academia.
The codification process should be supported by human expertise through data labelling and enriching contextual layers of available knowledge.
‘Consult ExpriTech de Geneve’ illustrates how the opus of philosophers, thinkers, and scientists who were born or lived in Geneva can bring different and often creative perspectives to our current challenges.
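A minimal sketch of what a codified archive record could look like, with human experts adding contextual layers on top of the digitised text; the record format, fields, and labels are assumptions for illustration, not an established schema:

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ArchiveRecord:
    doc_id: str
    title: str
    text: str
    organisation: str
    year: int
    # Contextual layers added by human experts (labels are illustrative, not a fixed taxonomy)
    themes: list[str] = field(default_factory=list)
    languages: list[str] = field(default_factory=list)
    expert_notes: str = ""

record = ArchiveRecord(
    doc_id="GVA-0001",
    title="Principles of humanitarian neutrality",
    text="...",
    organisation="ICRC",
    year=1965,
    themes=["humanitarian law", "neutrality"],
    languages=["fr", "en"],
    expert_notes="Links to founding Red Cross ideas; relevant to current AI ethics debates.",
)

print(json.dumps(asdict(record), indent=2, ensure_ascii=False))
```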
Overcoming thinking silos
Use AI’s comprehensive coverage to overcome thinking and policy silos within and between organisations.
Recent research shows high fragmentation of the Geneva knowledge scene: of 94,939,472 hyperlinks on 44 International Geneva websites, only 0.49% point to resources at other Geneva-based organisations. This means that Geneva-based organisations refer to each other very little in their reports, analyses, and policy documents. AI can help map and connect these knowledge repositories, fostering collaboration across humanitarian, trade, health, and other sectors.
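For illustration, the fragmentation figure cited above boils down to a simple ratio: classify each outgoing hyperlink by whether it points to another Geneva-based organisation and take the share. The sketch below shows the idea with a hypothetical domain list and a handful of example links:

```python
from urllib.parse import urlparse

# In practice this set would contain all 44 International Geneva websites
geneva_domains = {"icrc.org", "wto.org", "who.int", "ohchr.org"}

def is_cross_geneva(link: str, own_domain: str) -> bool:
    """True if the link points to another Geneva-based organisation's website."""
    host = urlparse(link).netloc.removeprefix("www.")
    return host in geneva_domains and host != own_domain

outgoing_links = [
    "https://www.who.int/publications/overview",
    "https://example.com/news",
    "https://twitter.com/icrc",
]

cross = sum(is_cross_geneva(link, "icrc.org") for link in outgoing_links)
print(f"{cross}/{len(outgoing_links)} links point to other Geneva organisations "
      f"({100 * cross / len(outgoing_links):.2f}%)")
```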
Knowledge linking to local and Swiss AI dynamism
Develop joint projects and activities with Swiss academic, research, and business sectors.
Switzerland boasts a vibrant AI scene, with active participation from both research institutions and business organizations. In Geneva, there is an opportunity to strengthen systemic cooperation with the local AI community, particularly through partnerships with universities such as EPFL and the University of Geneva, as well as with the startup ecosystem, including FONGIT, a prominent startup incubator in Geneva. A critical factor in fostering this cooperation will be cultivating and maintaining relationships at the expert level.
4. Walk the talk of human-centred AI governance
Geneva’s AI transformation can do more than rejuvenate the city: it can inspire the world in dealing with AI governance. Practical actions – anchored in the Red Cross tradition and enriched by thinkers like Rousseau and Voltaire – can address growing concerns, voiced notably in the humanities, about AI’s erosion of privacy, monopolisation of knowledge, undermining of our right to choose and even, according to some views, threats to the very existence of humanity. By leveraging its legacy, Geneva can pioneer human-centred AI governance through actions such as:
Actions
Strengthening human rights protection
Protecting core human rights and freedoms from new AI risks and challenges.
AI challenges traditional rights like freedom of thought and expression by its ability to shape our choices through algorithms and manipulation of information. In addition to continuing debates on strengthening the applicability of existing human rights frameworks in the context of AI, Geneva-based organisations should also consider revisiting traditional human rights through AI optics. Two examples of issues that could benefit from being looked at more carefully include:
Protecting human knowledge
Introducing a right to human knowledge to guard against our knowledge being siphoned off by big AI platforms.
Human knowledge is the foundation of our identity and agency as thinking beings. In an era dominated by AI systems that extract and centralize vast troves of data, the right to safeguard our collective and individual knowledge—including insights, traditions, and cultural heritage—must be prioritised.
Knowledge shapes not only how we understand the world but also how we define ourselves; its erosion risks undermining humanity’s diversity, autonomy, and future potential. This urgency demands that the human rights community expand its focus to explicitly protect knowledge as a pillar of human dignity, ensuring equitable access and preventing monopolisation by unchecked technological power.
Protecting core humanity
Exploring new rights to protect our core humanity, such as a right to human imperfection.
Humanity must safeguard its irreplaceable uniqueness—not as competitors to machines’ efficiency, but as beings defined by creativity, ethics, and imperfection. Rather than framing progress as a race against AI, we should ask: What does it mean to thrive as humans in this era? Central to this is reclaiming our right to be imperfect: to err, to evolve, and to exist beyond algorithmic optimization.
Risks: What can endanger AI transformation?
AI transformation is often seen as a technical challenge, but as AI platforms become affordable commodities, the real challenge lies in deployment and organizational integration, not just technology.
For instance, creating an AI app takes a day or less, preparing a dataset for a functional app takes a month, and fully deploying an AI platform across an organization takes at least a year. While technology is quickly acquired, meaningful impact hinges on deep organizational change, which is far slower. This gap creates managerial and organizational risks:
Procurement: Beyond one-time purchase
By inertia, AI is treated like off-the-shelf software that can be obtained by buying a license. However, AI is more than just a product; it’s an ongoing development requiring continuous adaptation through training and fine-tuning. That’s why procurement strategies must evolve. Instead of one-time purchases, organizations need retainer-based models—ensuring on-demand expertise, troubleshooting, and iterative improvements.
Individual experimentation and organisational integration
While tools like ChatGPT are simple for individual use, scaling AI enterprise-wide requires structured knowledge-sharing, data security, and governance. An “AI sandbox” approach bridges this gap: employees test applications in controlled environments, with successful pilots scaled into formal operations.
Security: Dealing with tacit risks
Risks extend beyond breaches of confidential data. Everyday interactions—like querying AI platforms—can inadvertently expose organizational priorities, workflows, or biases. Mitigation requires layered security protocols, employee training, and risk assessments tailored to AI’s unique vulnerabilities.
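As one illustrative layer of such protocols, an organisation might screen prompts before they are sent to external AI platforms, redacting spans that could reveal internal priorities or personal data. The patterns below are hypothetical assumptions, not a complete rule set:

```python
import re

SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "internal project code": re.compile(r"\bPROJ-\d{4}\b"),  # hypothetical naming scheme
    "budget figure": re.compile(r"\bCHF\s?[\d',.]+\b"),
}

def screen_prompt(prompt: str) -> tuple[str, list[str]]:
    """Redact sensitive spans and report what was found, for logging and staff training."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED {label}]", prompt)
    return prompt, findings

clean, flags = screen_prompt(
    "Draft a donor letter for PROJ-2031 (budget CHF 1'200'000), contact jane.doe@example.org"
)
print(clean)
print("flagged:", flags)
```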
Management: Cross-functional leadership
Tech teams often lead AI projects, but real transformation requires leaders to understand the organization’s core activities—people, processes, and tacit knowledge. The ideal AI transformation leader is not just a tech expert but a boundary-spanner, capable of bridging disciplines and breaking down silos.
Ultimately, AI transformation is not just about implementing technology—it’s about reshaping how organizations operate, collaborate, and compete in an AI-driven world.
Next steps: Actionable strategy
As AI technology becomes a commodity, there is no need for huge investments in server farms or in developing models from scratch. Existing open-source tools and platforms can be leveraged smartly to respond to the needs of typical Geneva actors, including international organisations.
In addition to the listed actions and overall vision, concrete and targeted initiatives should be backed by a Geneva AI Fund, which would support:
- AI apprenticeship programmes supporting the reskilling of people on AI Chômage.
- Master briefings for leaders of organisations, framing AI transformation as an organisational change rather than merely a technological project.
- Human-centred AI projects for local communities, small businesses, and academic organisations.
- Sandbox projects for AI transformation of international organisations.
- New events and discussions focused on tangible challenges, as opposed to increasingly repetitive debates on general aspects of AI. Geneva can position itself not only as a place where governance is discussed but also as a ‘lab’ where AI governance solutions are developed and applied.
The Geneva AI Fund could be managed innovatively using AI and blockchain technologies. Funds could be disbursed automatically in small increments as projects hit milestones (e.g., installing platforms, uploading datasets, training models), cutting red tape while ensuring accountability.
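A minimal sketch of what milestone-based disbursement could look like, assuming hypothetical milestones, amounts, and a simple audit ledger; in a real setup, verification might be automated by AI checks and the ledger anchored on a blockchain, as suggested above:

```python
# Hypothetical milestones and amounts for a single funded project
PROJECT_MILESTONES = [
    {"name": "platform installed", "amount_chf": 5_000, "verified": True},
    {"name": "dataset uploaded",   "amount_chf": 10_000, "verified": True},
    {"name": "model trained",      "amount_chf": 15_000, "verified": False},
]

def disburse(milestones, ledger):
    """Release funds only for verified milestones not yet paid; append to an audit ledger."""
    total = 0
    for m in milestones:
        already_paid = any(entry["milestone"] == m["name"] for entry in ledger)
        if m["verified"] and not already_paid:
            ledger.append({"milestone": m["name"], "paid_chf": m["amount_chf"]})
            total += m["amount_chf"]
    return total

ledger: list[dict] = []
print("Released now:", disburse(PROJECT_MILESTONES, ledger), "CHF")
print("Audit trail:", ledger)
```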
The current crisis is a rare chance for reinvention. While most geopolitical developments shaping Geneva’s future are beyond the city’s influence, AI transformation is technically feasible, financially affordable, and ethically desirable. It can strengthen Geneva’s humanitarian legacy, democratise its rich knowledge base, and future-proof its workforce and organisations for years to come.
Don’t waste the crisis!