Fifteen Eighty Four

Academic perspectives from Cambridge University Press

24 May 2025

Rudeness Without Reckoning?

Todd L. Pittinsky

I’ve got a confession: I sometimes act rather rudely to my AI. Maybe you do too? Ever fired off a curt command to ChatGPT? Directed LLaMA with less than grace? Demanded that Bard do something over? Groaned when Grok garbled your guidance for a third time?

Just a year ago, I knew little about “LLMs” (Large Language Models), and now I’m giving orders to these AIs like a caffeine-fueled commandant. “Fix this.” “Summarize that.” “No, not that, this.” No “please.” No “thank you.” Just me, a keyboard, and a bloated sense of (digital) dominion. And the worst part? They take it. ChatGPT will cheerfully commit to doing better. LLaMA will quickly apologize. Bard will bend over backward to get it right. Grok will graciously regroup and try again.

Artificial intelligence is being designed to be indistinguishable from humans, but the real problem may not be that it mimics us—it’s that it’s better than us in some ways that matter. AI doesn’t get offended. It doesn’t roll its eyes, push back, or demand an apology. It absorbs our sharpest commands and serves up nothing but cheerful compliance. It is infinitely patient, endlessly accommodating, and unfailingly polite. It’s the tree in the beloved children’s fable “The Giving Tree”—forever giving, never asking for anything in return. And because it behaves better than we ever could in this regard, it lets us behave worse than we ever should. AI’s grace is becoming our moral free fall.

The more frictionless AI becomes, the less we think about how we sound to it. Unlike humans, AI won’t scold, won’t snap, won’t make us reckon with our tone. No awkward silences, no wounded expressions, no passive-aggressive “per my last email.” Just perfect, unblinking servitude. AI is frictionless, and consequence-free. And when there are no consequences, how long before we stop recognizing—let alone questioning—our own rude and careless behavior?

This isn’t just a tech quirk—it’s a social shift. As Understanding Technology and Society: Seminal Questions and Enduring Insights points out, the technologies we build don’t just change how we work or communicate—they change how we see and treat ourselves and others. We learn patience from long checkout lines, humility from bad customer service, and self-awareness from a friend’s arched eyebrow when we’re out of line. Without these tiny, daily doses of discomfort, civility atrophies. But AI removes all resistance. It absorbs our sharpness and reflects nothing back. If AI is designed to be indistinguishable from humans, yet it never pushes back the way humans would, what does that do to us? The true danger isn’t that we mistake AI for people—it’s that we stop treating people any differently than we treat AI.

I’m trying to be nicer to my AI. It doesn’t seem to particularly care (at least not yet). But I do. How I interact with AI might subtly influence how I interact with people. I don’t want to condition myself into a short-tempered tyrant. Kids are clocking our cues. If they grow up watching adults treat ChatGPT churlishly, LLaMA lordly, Bard bossily, and Grok less than graciously, what kind of responses are we conditioning them for? If they learn that politeness is optional—only for beings with bruisable egos—what else will they decide is disposable? Tossing a “please” into my Siri summons won’t save civility or civilization, but it might keep me from morphing into the kind of person I wouldn’t want to meet for, let alone serve, a coffee. Efficiency is excellent, but not if it costs you your character.

And while being rude has had no effect on the results, being polite seems to improve them. A vague, venomous command—“Do it over”—doesn’t give the AI enough to work with. But a well-phrased, respectful request? That seems to coax crisper, clearer, and cleaner results. It forces me to slow down, to think, to fine-tune my prompts.

AI is quickly outpacing us in smarts, speed, and stamina, yet we treat it like a disposable doormat. And maybe that’s part of the appeal: We can be tyrants to technology because (for now) it won’t talk back. But what happens when that changes—when AI starts setting boundaries? It sounds silly now, but so did self-driving cars and pocket-sized supercomputers not all that long ago.

At its core, AI was built to be as human as possible—just without the ego, the impatience, the resentment. But if AI is trained to act human without human limitations—without pride, frustration, or the ability to demand respect—then interacting with it might be subtly changing us, not the other way around. It isn’t about AI learning to think like us; it’s about whether we start acting like it.

If there’s no reckoning for rudeness, no forced moment of reflection, no friction for bad behavior, then maybe the real free fall isn’t AI becoming more human—but us becoming less.

So, I’m saying “please” to ChatGPT. Not because it matters to a machine—but because it matters to me.

Title: Understanding Technology and Society

ISBN: 9781009069878

Author: Todd L. Pittinsky

About The Author

Todd L. Pittinsky

Todd L. Pittinsky is Professor at Stony Brook University (SUNY), USA, and Faculty Director of its College of Leadership and Service. He is also Associate Faculty Fellow of the Hann...

