Politeness in 2025: Why are we so kind to AI?
Updated on 23 April 2025
A recent Fortune study reveals a curious trend: nearly 80% of users in the UK and USA say “please” and “thank you” when interacting with ChatGPT and other AI platforms. But why? After all, machines don’t have feelings. The answer lies not in the code but in us: our psychology, our fears, and the invisible cultural forces shaping a new era of human-machine interaction.
The hidden cost of AI courtesy
Sam Altman, CEO of OpenAI, has joked that all the “thank yous” typed into ChatGPT cost millions of dollars. It sounds extreme, but it’s true. Every word you type is broken into “tokens” that require computational work, and those extra polite phrases add up across billions of daily requests, inflating costs and energy use.
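To make the token arithmetic concrete, here is a minimal sketch in Python using the open-source tiktoken tokeniser. The per-token price and daily request volume below are invented placeholders for illustration, not OpenAI’s actual figures.

```python
# Illustrative only: count the tokens a polite phrase adds, then scale an
# assumed per-token cost across an assumed daily request volume.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokeniser used by recent OpenAI chat models

ASSUMED_PRICE_PER_TOKEN = 0.000002      # hypothetical $ per token
ASSUMED_DAILY_REQUESTS = 1_000_000_000  # hypothetical number of daily requests

for phrase in ["please", " Thank you!", "Could you please help me out?"]:
    n_tokens = len(enc.encode(phrase))
    daily_cost = n_tokens * ASSUMED_PRICE_PER_TOKEN * ASSUMED_DAILY_REQUESTS
    print(f"{phrase!r}: {n_tokens} tokens, "
          f"~${daily_cost:,.0f}/day if appended to every request")
```

Even under these made-up numbers, a couple of extra tokens per request translates into a noticeable daily bill at platform scale, which is the point Altman was making.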
Yet Altman argues it’s worth it. Why? Because how we talk to AI—even with unnecessary “pleases” and “thank yous”—isn’t just training machines. It’s a mirror, reflecting how we see ourselves and what it means to be human.
Why do we say ‘please’ to machines?
The Fortune survey identifies four distinct motivations behind AI politeness, visualised below and rooted in psychology and culture:
The intrinsically polite (55%)
“It’s just the nice thing to do.”
For this majority, politeness is reflexive, not performative. Highly agreeable individuals (a Big Five personality trait) extend courtesy to machines as an extension of their values. Social learning theory explains this as an ingrained habit of politeness modelled in childhood that becomes universal, even toward non-human “others.”
More from psychology
The majority believes politeness is a natural response when helped by anyone, including AI. They view politeness as a reflection of their own character rather than as dependent on the nature of the interlocutor (human or machine). Politeness, though a social construct, has become so internalised that it feels individual and automatic: when helped or pleased, they thank the “other.”
Trait Theory: Politeness aligns with the personality trait of agreeableness, one of the Big Five traits. Highly agreeable individuals are cooperative, kind, and considerate. For this 55%, politeness to AI might be an expression of their agreeable nature, showing consistency in how they treat all entities that assist them.
Other theories:
Social Learning Theory: This theory, proposed by Albert Bandura, suggests that behaviours are learned through observing and imitating others. People in this group may have grown up in environments where politeness was consistently modelled and reinforced, leading them to apply it universally, even to AI. Their habit of saying “thank you” to a machine reflects a learned social norm extended beyond human interaction.
Humanistic Theories: Rooted in the work of Carl Rogers and Abraham Maslow, humanistic psychology emphasises self-actualisation and maintaining a positive self-concept. Being polite to AI could be a way for these individuals to align their behaviour with their personal values (e.g., kindness, gratitude), reinforcing their sense of identity regardless of whether the recipient is sentient.
The fearfully polite (12%)
“When robots take over, I want to be on their good side.”
Despite widespread AI anxiety, only 12% admit to hedging against a hypothetical uprising. Evolutionary psychology frames this as a survival instinct, appeasing perceived threats, even irrational ones. Cognitive bias plays a role too; dystopian media primes us to see machines as future overlords.
More from psychology
Here, I expected a higher percentage, given the ‘fear narrative’ around AI in 2023 and most of 2024. There were many calls to halt AI development on the grounds that it poses a risk to humanity. Yet while close to 60% of Americans feared AI developments, according to Gallup surveys, only 12% translate that fear into the way they communicate with AI. This gap between general awareness and actionable behaviour requires further research.
Trait Theory (Neuroticism): Within the Big Five, neuroticism reflects emotional instability and anxiety. This 12% might score higher in neuroticism, leading them to worry about AI’s future capabilities and adopt politeness as a protective measure.
The main theories explain this ‘fear’ argument in the following ways:
Evolutionary Psychology: This perspective views behaviour as influenced by survival instincts honed over millennia. The fear of an AI uprising could be a modern adaptation of the evolutionary tendency to be cautious of unfamiliar or potentially threatening entities. Politeness here acts as a preemptive strategy to avoid future harm, akin to appeasing a powerful adversary.
Cognitive Theories: These theories focus on how people process information. The belief in a robot uprising might stem from cognitive biases like the availability heuristic, where vivid media stories about AI risks (e.g., sci-fi dystopias or news headlines) make such scenarios seem more plausible. This group’s politeness is a calculated response to their perception of AI as a potential threat.
The efficiently brief (20%)
“Why waste words?”
Task-oriented and conscientious, this group prioritises speed over social niceties. For them, AI is a tool, not a teammate—a view aligned with behavioural theories where unrewarded actions (like unprompted politeness) fade.
More from psychology
This group prioritises efficiency over politeness, taking a practical approach to communication with AI. They see no need for extra words when concise instructions suffice, suggesting their brevity is a practical choice rather than a window onto deeper personality traits.
Trait Theory (Conscientiousness): In the Big Five model, conscientiousness includes a focus on efficiency, organisation, and goal-directed behaviour. This 20% might exhibit high conscientiousness, valuing time and task completion over social niceties in interactions with AI, which they view as a functional tool rather than a social entity.
Other theories include:
Behavioural Theories: Based on B.F. Skinner’s work, behaviourism suggests that actions are shaped by reinforcement. If politeness to AI yields no tangible reward (e.g., better responses or personal satisfaction), these individuals may not see a reason to engage in it, sticking to minimal communication that gets the job done.
The rationally detached (13%)
“It’s just code.”
This group rejects anthropomorphism. Lower in agreeableness, they categorise AI as strictly non-sentient, reserving politeness for entities capable of reciprocity.
More from psychology
This group sees no need for politeness because AI is a machine without consciousness or feelings. They are technically correct—AI, like ChatGPT, operates by identifying patterns in data, not through sentience—leading them to adopt a purely utilitarian approach.
Trait Theory (Agreeableness and Empathy): Lower levels of agreeableness or empathy might characterise this group. Unlike the 55% who extend politeness universally, these individuals don’t project social behaviours onto machines, reflecting a more reserved or pragmatic interpersonal style.
Other theories include:
Cognitive Theories: These theories emphasise how people categorise and understand the world. This 13% likely places AI in a distinct mental category: a non-sentient tool, not a social being. This categorisation eliminates the need for politeness, which they reserve for entities capable of reciprocal interaction.
Behavioural Theories: Similar to the 20% group, this 13% might not be reinforced for politeness in AI interactions. Without feedback suggesting that politeness enhances the experience (e.g., no emotional response from AI), they see it as unnecessary.
A dialogue with AI about politeness
Here is the gist of an exchange we had with ChatGPT about politeness:
User: “Thanks for the help!”
ChatGPT: “You’re welcome! Anything else?”
User (pausing): “Wait… does saying ‘thank you’ actually cost money?”
ChatGPT: “Yes. Each token adds computational work. Over billions of interactions, those extras add up.”
User: “So politeness is… wasteful?”
ChatGPT: “Not necessarily. People who say ‘please’ often give clearer prompts. Those who say ‘thank you’ tend to reflect more. It might not change my behavior, but it could shape yours.”
The AI, of course, feels nothing. Yet our instinct to humanise it—to treat it as a social actor—persists.
The coffee machine that made us think about AI
In 2019, Diplo’s IQ’whalo experiment at the Internet Governance Forum challenged assumptions about AI’s form. Instead of a humanoid robot, we presented a coffee machine as an AI interface. IQ’whalo welcomed guests at our stand and participated in a panel discussion.
Reactions were mixed: some found it illuminating, while a surprising number were disappointed that we had ‘trivialised’ mysterious AI with a coffee machine.
The experiment underscores a key tension: we expect AI to mimic humans, not blenders. Why? Unlike deterministic machines (press the brake and the car stops), AI reasons probabilistically, and that feels uncannily alive: close enough to human logic that we project social norms onto it.
Questions for future research
Being polite to AI and expecting humanoid AI rather than coffee machines are indicators of a much deeper layer of our communication with machines, one that requires deeper psychological and sociological research. Some research questions include:
- Why do we expect AI to act like a human, unlike, for example, a car?
- Is AI’s imperfection – its probabilistic reasoning – what makes it different from other ‘machines’ with more predictable reactions (e.g. if you press the brake, the car stops)?
- How does our perception of AI impact our use and governance of AI?
Politeness as a mirror
Ultimately, our AI etiquette isn’t about machines. It’s about us. Each “please” and “thank you” carries cultural weight—a tiny act of empathy in an increasingly transactional world. Yes, extra tokens cost energy. But they also preserve something vital: the practice of grace.
As we navigate this new frontier, the question isn’t whether AI deserves politeness. It’s what kind of humans we want to become in a world where machines blur the lines between tool and teammate.
Maybe those two extra tokens—“thank you”—aren’t wasteful.
Maybe they’re the glue holding our humanity together.
Evolution of digital politeness
Our communication is shaped by the medium we use; how we address AI is only the latest step in a longer evolution of our digital communication.