The Difference Between 'Meh' and 'Wow'
The gap between a mediocre AI response and an excellent one is almost always the prompt, not the model. A five-part framework (Role, Context, Task, Format, Constraints) turns any vague request into something specific enough to get genuinely useful output on the first or second try.
The single biggest factor in AI output quality is not which model you use. It's what you type into the box. A well-structured prompt on a free-tier model will outperform a lazy prompt on the most expensive model almost every time.
This isn't theory. In our ZeroShot Studio workshops, we ran the same task with two prompts side by side. The bad prompt got generic filler. The good prompt got output people could paste straight into a slide deck. Same model. Same task. Different words in, wildly different quality out.
What does a bad prompt actually look like?
Let's start with a real example. Here's what most people type:
Bad prompt: "Write me a marketing email"
And here's what actually works:
Good prompt: "You're a senior email marketer for a B2B SaaS company that sells project management tools to agencies with 10-50 employees. Write a 150-word email announcing our new time-tracking feature. Use a friendly, direct tone. Include one clear CTA button. Don't use the words 'excited' or 'thrilled'."
The difference? The bad prompt gives the model nothing to work with. It has to guess your audience, your tone, your product, and your goal. The good prompt constrains the prediction space so the model's output lands close to what you actually need.
When I ran this comparison live with a room of 25 startup founders, every single person preferred the output from the good prompt. Not because AI got "smarter" between the two attempts, but because we told it what good looked like.
Specificity is the single highest-impact change you can make to your AI usage. Every detail you add to a prompt removes a guess the model would otherwise make.
What is the ROLE-CONTEXT-TASK-FORMAT-CONSTRAINTS framework?
This is the framework we teach in every workshop. Five parts, easy to remember, works for any task.
You don't need all five every time, but thinking through each one before you hit enter will improve your results dramatically.
- Role -- Tell the AI who it is. "You are a senior financial analyst" produces different output than "You are a social media intern." The role shapes vocabulary, depth, and assumptions.
- Context -- Give background. Who's the audience? What's the situation? What do they already know? Context is where most prompts fail, because most people forget that the AI knows nothing about their specific situation.
- Task -- State exactly what you want. Not "help me with marketing" but "write three subject lines for a product launch email." Precision here directly correlates with output quality.
- Format -- Specify the shape of the output. Bullet points? Table? 200 words? Email format? If you don't specify, you get whatever the model's default is, which is usually a wall of text.
- Constraints -- Set boundaries. Word count limits, words to avoid, tone requirements, things to exclude. Constraints prevent the model from drifting into generic territory.
Here's the framework applied to a real business task:
You are a business analyst with 10 years of experience in the Australian retail sector. I run a 12-person online clothing store doing $2M annual revenue. We're considering expanding into homewares. List the top 5 risks of this expansion and suggest one mitigation strategy for each. Format: numbered list, each item has a bold risk name, one-sentence description, one-sentence mitigation. Keep it under 300 words. Focus on risks specific to small businesses, not enterprise-level concerns.
Try this structure with your own task. The output quality will noticeably improve.
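If you find yourself writing RCTFC prompts often, the assembly step is easy to automate. This is an illustrative sketch only: the `build_prompt` helper and its parameter names are invented for this example, not part of any real library.

```python
def build_prompt(role, context, task, output_format, constraints):
    """Assemble a prompt from the five RCTFC parts, skipping any left blank."""
    parts = [
        f"You are {role}." if role else "",
        context,
        task,
        f"Format: {output_format}" if output_format else "",
        f"Constraints: {constraints}" if constraints else "",
    ]
    # Join only the non-empty parts so optional sections drop out cleanly.
    return " ".join(p for p in parts if p)

prompt = build_prompt(
    role="a business analyst with 10 years of experience in the Australian retail sector",
    context="I run a 12-person online clothing store doing $2M annual revenue. "
            "We're considering expanding into homewares.",
    task="List the top 5 risks of this expansion and suggest one mitigation strategy for each.",
    output_format="numbered list; each item has a bold risk name, a one-sentence "
                  "description, and a one-sentence mitigation, all under 300 words",
    constraints="Focus on risks specific to small businesses, not enterprise-level concerns.",
)
print(prompt)
```

Leaving a part blank simply drops it from the prompt, which matches the advice above: you don't need all five every time, but the structure makes you notice when one is missing.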
[IMAGE: The five RCTFC framework components shown as building blocks]
- Type: diagram
- Filename: rctfc-framework-blocks.png
- Alt text: Five coloured building blocks labelled Role, Context, Task, Format, and Constraints showing the prompt engineering framework
- Caption: Five blocks. Every prompt gets better when you think through each one.
What are the most common prompting mistakes?
After running workshops with over 100 founders and executives, the same five mistakes come up repeatedly:
- Being too vague. "Help me with my business" gives the model nothing. Be specific about what, for whom, and in what format.
- Skipping context. The model doesn't know your industry, your team size, or your budget. If you don't say it, it'll guess, and it'll guess wrong.
- Asking for too many things at once. "Write my business plan, marketing strategy, and financial projections" in one prompt overwhelms the model. Break complex tasks into steps.
- Not specifying format. If you want a table, say "format as a table." If you want bullet points, say so. Default output is rarely the format you actually want.
- Accepting the first output. AI is iterative. The first response is a draft. Treat it like one and refine from there.
The best prompt engineers aren't wizards. They're just specific about what they want and willing to iterate when the first result isn't perfect.
What are three power moves that level up any prompt?
Once you've got the basics, these three techniques separate the people who get useful output from the people who don't. They work because they give the model more to work with.
Power move 1: Give examples. Instead of describing what you want, show it. "Here's an example of the tone I'm after: [paste a paragraph]. Now write the product description in this same style." Models are remarkably good at pattern-matching from examples. Anthropic's prompt engineering guide recommends this as a top technique.
Power move 2: Ask for step-by-step reasoning. Add "Think through this step by step" or "Show your reasoning before giving the final answer." This forces the model to work through the logic rather than jumping to a conclusion, which significantly reduces errors on complex reasoning tasks (OpenAI prompt engineering guide).
Power move 3: Break big tasks into smaller ones. Instead of "Write a 2,000-word blog post about AI trends," try: "First, outline 5 key AI trends for small businesses in 2026. Then I'll pick 3, and you'll write 400 words on each." Multi-step conversations consistently produce better output than single-shot prompts.
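The three power moves are just text patterns, so they can be sketched as plain prompt strings. The product copy, questions, and trend topics below are invented purely to illustrate the shape of each move.

```python
# Power move 1: show an example of the tone instead of describing it.
few_shot = (
    "Here's an example of the tone I'm after:\n"
    "'Ship faster. Our boards keep every task, file, and deadline in one place.'\n"
    "Now write the product description for our time-tracking feature in this same style."
)

# Power move 2: ask for step-by-step reasoning before the final answer.
step_by_step = (
    "Should we expand into homewares? "
    "Think through this step by step and show your reasoning before giving the final answer."
)

# Power move 3: break a big task into smaller conversational turns,
# reviewing the output between each one.
decomposed = [
    "First, outline 5 key AI trends for small businesses in 2026.",
    "Good. Now write 400 words on trends 1, 3, and 4.",  # sent after reviewing the outline
]
```

Notice that the third move is a list, not a single string: decomposition only works if you actually pause between turns to review and steer.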
How do you iterate when the first result isn't right?
The best prompt engineers expect to iterate. Treat AI conversations like a back-and-forth with a contractor, not a vending machine. Here's a practical iteration loop:
- Send your prompt using the RCTFC framework.
- Review the output. What's good? What missed the mark?
- Give targeted feedback. Not "try again" but "the tone is too formal, make it more conversational" or "good structure, but point 3 is wrong, replace it with information about X."
- Refine and resend. Each round gets closer to what you need. Most tasks take 2-3 rounds.
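In a chat interface the loop above is just a growing conversation. The sketch below uses the role/content message shape common to major chat tools and APIs; the draft and feedback strings are placeholders invented for illustration.

```python
# The conversation starts with the full RCTFC prompt.
messages = [
    {"role": "user", "content": "You're a senior email marketer... (full RCTFC prompt)"},
]

# Round 1: the model replies with a draft; we respond with targeted feedback.
messages.append({"role": "assistant", "content": "(first draft of the email)"})
messages.append({"role": "user",
                 "content": "The tone is too formal, make it more conversational."})

# Round 2: refine again. Each round keeps the full history,
# so earlier context and corrections carry forward.
messages.append({"role": "assistant", "content": "(revised, more conversational draft)"})
messages.append({"role": "user",
                 "content": "Good structure, but shorten the CTA to five words."})
```

The point of keeping the history is that feedback compounds: by round two the model already knows your audience, your product, and your first correction.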
A workshop participant once told me she'd been using AI for six months and never once sent a follow-up message. She'd just accept whatever came out first. After learning to iterate, she said her output quality "roughly tripled." Her words, not mine, but I believe her.
[IMAGE: Circular iteration loop showing prompt, review, feedback, refine cycle]
- Type: diagram
- Filename: prompt-iteration-loop.png
- Alt text: A circular diagram showing the four steps of prompt iteration: send, review, feedback, refine
- Caption: Expect 2-3 rounds. The first output is a draft, not a final answer.
FAQ
Do I need to use the full framework every time?
No. Just remember that specificity wins. If you can only remember one thing: tell the AI who it is, what you need, and what format you want it in. The full framework is there for when you want to be thorough.
Does this work across different AI models?
Yes. The framework is model-agnostic. The principles of specificity, context, and format apply to every major LLM. You might notice slight differences in personality or default style between models, but the prompting techniques transfer directly.
How long should a prompt be?
As long as it needs to be. A simple question needs a short prompt. A complex analysis benefits from a detailed one. We've seen excellent results from prompts ranging from 30 words to 500 words. The key is that every word adds useful signal, not filler.
Should I say please and thank you to the AI?
It doesn't meaningfully affect output quality. If it makes you feel better, go for it. The model doesn't have feelings, but the human using it does, and a comfortable user writes better prompts.
How do I stop repeating the same context in every conversation?
Great question, and we cover exactly this in the next post. Custom instructions let you set persistent context so you don't have to repeat yourself every conversation. Post 3: Personalisation & Tone walks through it step by step.
Next up: Making AI Sound Like Your Brand, Not a Robot -- extract your brand voice and teach AI to use it consistently.
This is Post 2 of 7 in the AI for Business free course. Previous: How AI Actually Works