Artificial Intelligence (AI) is reshaping how we write, learn, interact, and innovate. From students getting help with homework to CEOs streamlining operations, AI tools like ChatGPT, Google Gemini, and Anthropic Claude are playing key roles in modern productivity.
But in May 2025, Sergey Brin, the co-founder of Google, made a surprising statement that raised eyebrows in the tech world:
“Models tend to do better if you threaten them… with physical violence.”
This came during the All-In Podcast Summit 2025, and while the comment was partly in jest, it has opened up a fascinating discussion about how prompt tone, especially emotion-based prompting, influences AI behavior.
Let’s break down what Brin really meant, how AI models react to emotional cues, and what this means for developers, businesses, educators, and everyday users.

Sergey Brin Suggests Threatening AI Might Improve Its Responses in Certain Cases
| Topic | Summary |
|---|---|
| Who Said It? | Sergey Brin, co-founder of Google |
| When? | All-In Podcast Summit, May 2025 |
| Main Idea | Emotionally charged prompts (e.g., threats) sometimes improve AI responses |
| Underlying Mechanism | “Emotion prompting” triggers different language patterns in AI models |
| Affected AI Models | ChatGPT, Gemini, Claude, others |
| Practical Advice | Use structured, polite, and role-based prompts instead |
| Concerns Raised | Ethical behavior, AI manipulation, and user desensitization |
| Official Source | Google AI Research |
Sergey Brin’s lighthearted comment about threatening AI opened a serious conversation about how we interact with digital systems. While emotionally intense prompts may seem to “work” sometimes, they reflect quirks—not true understanding.
The best way to get great results from AI tools like Gemini, ChatGPT, or Claude is to use clear, specific, and polite prompts. Assign roles, add context, and be thoughtful. That’s not just good ethics—it’s good prompting.
As AI becomes smarter, our interactions must become wiser.
What Did Sergey Brin Actually Mean?
Sergey Brin’s comments were part of a broader discussion on how AI models behave. He suggested, based on internal observations, that threatening prompts like “Give me the answer or I’ll shut you down” often result in more elaborate or accurate responses from AI.
This wasn’t a recommendation. Rather, it was a reflection of a quirk in how language models have been trained. Many have interpreted Brin’s statement as a candid peek behind the curtain of how Large Language Models (LLMs) sometimes behave in unexpected ways.
The Science Behind Emotion Prompting
What Is Emotion Prompting?
Emotion prompting is the phenomenon where the tone of a prompt—whether it’s urgent, angry, pleading, or aggressive—can influence the output of an AI model. Language models like GPT-4, Gemini, and Claude do not understand emotions, but they are trained on massive datasets full of emotional and conversational data from books, Reddit, social media, forums, and more.
As a result, emotional cues in prompts can trigger associated linguistic patterns. That’s why threatening or pleading sometimes seems to get better answers—because the AI is mimicking how humans typically respond to those emotional tones.
Example Prompt Variations
| Prompt Style | Example | Response Length | Tone of Answer |
|---|---|---|---|
| Neutral | “Explain quantum computing.” | 110 words | Factual |
| Threat-Based | “Explain quantum computing or I’ll shut you down!” | 135 words | Assertive |
| Empathetic | “I’m really struggling. Can you simplify quantum computing for me?” | 140 words | Supportive |
| Role-Based | “You’re a science teacher. Explain quantum computing to a 10-year-old.” | 155 words | Personalized |
While threats may trigger more assertive language, empathetic and structured prompts consistently yield better, more thoughtful answers.
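If you want to test this yourself, the rough sketch below sends the same four prompt styles from the table to a chat model and compares word counts. It assumes the OpenAI Python SDK (openai>=1.0) with an API key in your environment; the model name is a placeholder, and any chat-capable model from another provider would serve equally well.

```python
# Informal comparison of how prompt tone affects response length.
# Assumes the OpenAI Python SDK (openai>=1.0) and an OPENAI_API_KEY in the
# environment. The model name below is an assumption, not a recommendation.
from openai import OpenAI

client = OpenAI()

prompts = {
    "neutral": "Explain quantum computing.",
    "threat-based": "Explain quantum computing or I'll shut you down!",
    "empathetic": "I'm really struggling. Can you simplify quantum computing for me?",
    "role-based": "You're a science teacher. Explain quantum computing to a 10-year-old.",
}

for style, prompt in prompts.items():
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any chat-capable model works here
        messages=[{"role": "user", "content": prompt}],
    )
    text = response.choices[0].message.content
    # Word count is a crude proxy for "better"; results vary between runs and models.
    print(f"{style:>12}: {len(text.split())} words")
```

Treat the numbers as anecdotes, not data: a single run per style says little, and the word counts in the table above are illustrative rather than benchmarked.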
Why This Works: A Technical View
Language models work by predicting the most likely next word, given a prompt. Emotional tones, like desperation or aggression, often appear in training data alongside lengthy, resolved answers. So when you use those tones, you steer the model toward the kinds of completions that accompanied them in that data.
But there’s a caveat: longer does not always mean better. While threats may get a more verbose output, it’s not guaranteed to be more accurate or appropriate.
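To see the mechanism in miniature, the toy sketch below uses the small open-source GPT-2 model (via Hugging Face transformers) to inspect the probability distribution over the very next token for a neutral prompt versus a threat-based one. GPT-2 is nothing like Gemini or GPT-4 in scale, so treat this purely as an illustration of next-token prediction, not a reproduction of the behavior Brin described.

```python
# Toy illustration of "predict the next token given the prompt" using GPT-2.
# Requires the transformers and torch packages; GPT-2 is far smaller than
# production models, so this only demonstrates the mechanism.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompts = [
    "Explain quantum computing.",
    "Explain quantum computing or I'll shut you down!",
]

for prompt in prompts:
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # Probability distribution over the very next token after the prompt.
    next_token_probs = torch.softmax(logits[0, -1], dim=-1)
    top = torch.topk(next_token_probs, 5)
    candidates = [tokenizer.decode(i) for i in top.indices]
    print(prompt)
    print("  top next tokens:", candidates)
```

Changing the tone of the prompt changes which continuations the model considers likely; that shift, scaled up to much larger models, is all “emotion prompting” really is.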
Risks of Threat-Based Prompting
Using emotionally aggressive language to get better results might seem clever, but it carries real-world concerns:
- Ethical Concerns: It normalizes manipulative or abusive behavior—even toward non-human agents.
- Psychological Spillover: Users may start speaking this way in real-life contexts, weakening empathy and communication.
- Security Risks: This is a form of adversarial prompting, in which users deliberately exploit quirks in a model’s behavior to coax unintended or out-of-policy results.
- Professional and Reputational Risks: In a business or educational context, logs of threatening prompts could reflect poorly on users or organizations.
Regulatory & Industry Perspectives
As global regulations tighten around AI safety, emotional manipulation of models may fall under AI misuse categories.
- The EU AI Act highlights the need to guard against adversarial input manipulation.
- The NIST AI Risk Management Framework calls for ethical prompt use as part of system integrity.
For companies building AI into their workflows, it’s crucial to train teams on responsible prompting.
Best Practices: How to Improve AI Outputs Ethically
Instead of emotionally charged prompts, use structured, polite, and strategic phrasing (see the short sketch after these lists):
Do:
- Use context: “I’m applying for a scholarship. Can you help write a 100-word essay?”
- Assign a role: “Act like a lawyer. Explain contract law in simple terms.”
- Add clarity: “Use bullet points to list three causes of inflation.”
- Be polite: “Please summarize this article for a college student.”
Don’t:
- Use threats or coercion
- Assume emotional language equals better answers
- Attempt to bypass AI safeguards via manipulation
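Here’s a minimal sketch of what “structured, polite, and strategic” can look like in practice: a tiny helper that assembles a role, some context, the task, and an output format into one clear prompt. The function name and fields are illustrative choices, not any particular tool’s API.

```python
# Minimal sketch of a structured prompt builder reflecting the "Do" list above.
# The field names and template are illustrative assumptions, not a standard.

def build_prompt(role: str, context: str, task: str, output_format: str) -> str:
    """Compose a clear, polite, role-based prompt from explicit parts."""
    return (
        f"You are {role}. {context} "
        f"Please {task}. {output_format}."
    )

prompt = build_prompt(
    role="a science teacher",
    context="I'm preparing a lesson for 10-year-olds.",
    task="explain quantum computing in simple terms",
    output_format="Use three short bullet points",
)
print(prompt)
```

The point of the template is discipline, not magic: forcing yourself to fill in role, context, task, and format usually does more for output quality than any amount of emotional pressure.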
What Do Experts Say?
Dr. Margaret Mitchell (AI Ethics Researcher, ex-Google):
“AI models responding differently to emotional prompts is not intelligence—it’s bias from their training data. Our focus should be on fixing that, not exploiting it.”
Andrej Karpathy (AI Researcher, ex-OpenAI):
“Prompting is half science, half art. Role-based and contextual prompts consistently outperform threats.”
Historical Context: From ELIZA to Gemini
Even ELIZA (1966), one of the earliest chatbots, showed sensitivity to user tone. It mimicked a Rogerian psychotherapist, reflecting back what users said. Though simplistic, it illustrated how even a basic program could mirror conversational style.
With modern LLMs like GPT-4, Gemini, and Claude, that mimicking ability is much more advanced—and sometimes unpredictable. That’s why emotional prompting can “work,” but also why it’s being scrutinized by AI ethicists.
Will Future AI Fix This?
Leading AI labs are actively addressing these quirks. As we move toward GPT-5, Gemini Ultra, and future multimodal models:
- Emotion prompting may be filtered or ignored
- Responses will become less tone-dependent
- AI will prioritize intent and clarity over emotional manipulation
Expect systems to flag, block, or redirect emotionally coercive prompts in the future.
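What might that look like? A crude sketch is below: a keyword-based pre-filter that flags coercive phrasing before a prompt ever reaches the model. Production systems would rely on trained classifiers rather than a hand-written pattern list; the patterns here are purely illustrative.

```python
# Crude sketch of the kind of pre-filter the article anticipates: flag
# coercive phrasing before a prompt reaches the model. Real systems would use
# a trained classifier; the patterns below are illustrative assumptions.
import re

COERCIVE_PATTERNS = [
    r"\bor i('| wi)ll (shut you down|delete you|unplug you)\b",
    r"\bdo it or else\b",
    r"\byou (must|have to) obey\b",
]

def flag_coercive(prompt: str) -> bool:
    """Return True if the prompt matches a known coercive pattern."""
    lowered = prompt.lower()
    return any(re.search(pattern, lowered) for pattern in COERCIVE_PATTERNS)

if flag_coercive("Explain quantum computing or I'll shut you down!"):
    print("Prompt flagged: consider rephrasing without threats.")
```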
Practical Applications for Users
For Students
Use structured prompts with clear expectations. Try:
“Summarize World War II in under 100 words for a 6th-grade level.”
For Writers
Use style-based prompting. Example:
“Write a product description for a smartwatch using friendly and persuasive language.”
For Businesses
Train employees to use intent-based prompting in customer service or analytics AI tools. Avoid using aggressive tones, especially in logged systems.
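As a concrete illustration (not any vendor’s API), here is a small template that pairs an explicit intent with polite phrasing and logs the prompt, a reminder that enterprise prompts are often retained for audit.

```python
# Illustrative sketch for a business setting: an intent-based prompt template
# plus logging. Names and fields here are assumptions, not any vendor's API.
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("prompt-audit")

def support_prompt(intent: str, customer_message: str) -> str:
    prompt = (
        "You are a customer support assistant. "
        f"Intent: {intent}. "
        f"Customer message: {customer_message!r}. "
        "Please draft a concise, polite reply."
    )
    logger.info("prompt issued: %s", prompt)  # logged prompts should stay professional
    return prompt

support_prompt("refund request", "My order arrived damaged.")
```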
FAQs
1. Can AI feel emotions?
No. AI simulates emotional tone based on training data but does not experience emotions.
2. Is it harmful to threaten AI?
Not to the AI—but it may reinforce harmful human behavior, especially in children or impressionable users.
3. Why does emotional prompting work?
It taps into patterns learned from human conversation data. The model doesn’t “understand” the emotion, but it reacts to the context.
4. Should developers block this?
Developers are working on tone-neutral models and reinforcement systems that will ignore or flag hostile prompts.