Stop Using "Please" in Your Prompts (It's Wasting Your Tokens) - Digital Compliance Academy
You are polite. That's nice. But explaining social niceties to a machine is inefficient. Here is why direct instructions get better results in Claude and Gemini.
I see it in every workshop. Someone types a prompt into Claude or ChatGPT:
“Could you please be so kind as to write a summary of this document, if it’s not too much trouble?”
It’s charming. It’s British. And it is a complete waste of your time (and tokens).
The AI does not have feelings. It does not get offended if you are blunt. It does not work harder if you are nice.
In fact, excessive politeness can confuse the model. LLMs decide what to do by attending to the most salient tokens in your prompt. If 40% of your prompt is "fluff" words (please, kindly, appreciate, hope you can), you are diluting the density of your actual instructions.
The Cost of Politeness (Why "Fluff" Hurts Performance)
- Dilution: The model uses attention heads to scan your input and decide which tokens matter. If you fill the prompt with conversational filler, you force the model to process noise. With smaller local models (such as LLaMA) or constrained context windows, this matters even more.
- Ambiguity: “If you could possibly…” tells the AI there is an option not to do it. “Write this” tells it to do it. Ambiguity leads to “lazy” responses where the AI summarises the task rather than doing it.
- Tone Leakage: This is the most damaging. LLMs are mimics. If you speak to them like a Victorian butler, they will often reply like a Victorian butler. If you want a punchy, direct executive summary, but you ask for it with flowery language, the AI will mirror your verbosity.
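To make the dilution point concrete, here is a minimal sketch that measures what fraction of a prompt is filler. It uses whitespace word counts as a crude proxy for tokens (real tokenizers split text differently), and the filler word list is illustrative, not exhaustive:

```python
# Rough sketch: what fraction of a prompt is politeness filler?
# Word counts are a crude proxy for tokens; real tokenizers differ.

FILLER = {"please", "kindly", "kind", "hi", "hello", "thanks", "thank",
          "wondering", "possibly", "appreciate", "hope", "trouble"}

def fluff_ratio(prompt: str) -> float:
    """Return the share of words in the prompt that are filler."""
    words = [w.strip(",.?!").lower() for w in prompt.split()]
    if not words:
        return 0.0
    filler = sum(1 for w in words if w in FILLER)
    return filler / len(words)

polite = ("Could you please be so kind as to write a summary "
          "of this document, if it's not too much trouble?")
direct = "Write a summary of this document."

print(f"polite: {fluff_ratio(polite):.0%} filler, {len(polite.split())} words")
print(f"direct: {fluff_ratio(direct):.0%} filler, {len(direct.split())} words")
```

The polite version spends three times the words to carry the same single instruction, and a measurable share of them are pure noise.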
The “Surgical” Approach
You don’t need to be abusive. You just need to be Surgical.
Think of the prompt not as a conversation, but as code written in English.
- Polite (Bad): “Hi there, I was wondering if you could list some ideas for a blog post about AI safety, thanks!”
- Direct (Good): “List 10 blog post ideas about AI safety. Focus on corporate risk.”
The latter is clearer, uses fewer tokens, and leaves less room for misinterpretation.
But what about “Emotional Bribes”?
You might have read research suggesting that telling the AI "It is important for my career" results in better answers. This is sometimes true (framing the task as high-stakes can nudge the model to take it more seriously), but it is different from being polite.
- Useful: “This is critical for a board meeting. Be accurate.” (Sets stakes).
- Useless: “Please do a good job, pretty please.” (Adds noise).
Does this apply to all models?
- ChatGPT (OpenAI): Highly sensitive to tone. Politeness often triggers its “Customer Service” persona, which is verbose and apologetic. Being direct suppresses this.
- Claude (Anthropic): Claude is naturally more formal. Being overly polite makes it even more formal, turning it into a stiff bureaucrat. Direct commands (“Do X”) work best to cut through its safety/politeness filters.
- Gemini (Google): Gemini is a bit more chaotic. Directness helps anchor it.
The 7-Day Challenge
For the next week, I want you to strip every “Please”, “Thank you”, and “Hi” from your prompts.
It will feel rude. You will feel like a bad person. But watch the quality of your outputs go up.
Save your empathy for your human colleagues. They actually care. The robot just wants instructions.
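If stripping the politeness by hand feels tedious during the challenge, you can automate it. Below is a minimal sketch of a hypothetical `surgical()` helper that removes common filler phrases before a prompt is sent to a model; the pattern list is illustrative and you would extend it for your own habits:

```python
import re

# Hypothetical helper: strip common politeness filler from a prompt.
# The phrase list is illustrative, not exhaustive.
FLUFF_PATTERNS = [
    r"\b(hi|hello|hey)( there)?\b[,!]?\s*",
    r"\bplease\b\s*",
    r"\bcould you (possibly |kindly )?",
    r"\bthank(s| you)\b[.!]?\s*",
]

def surgical(prompt: str) -> str:
    """Remove filler phrases, tidy spacing, and recapitalise."""
    for pat in FLUFF_PATTERNS:
        prompt = re.sub(pat, "", prompt, flags=re.IGNORECASE)
    prompt = re.sub(r"\s{2,}", " ", prompt).strip(" ,")
    return prompt[:1].upper() + prompt[1:] if prompt else prompt

print(surgical("Hi there, could you please list some ideas "
               "for a blog post about AI safety, thanks!"))
```

Run it on the "Polite (Bad)" example from earlier and you get back something very close to the "Direct (Good)" version, minus the fluff.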