Introduction to AI
Lesson 2 of 7

The Difference Between 'Meh' and 'Wow'

The gap between a useless AI response and a brilliant one is almost always the prompt. Here's a repeatable framework that works across models and tasks.

Last updated: 2026-03-27 · Tested against ChatGPT, Claude, and Gemini (March 2026)

What does a bad prompt actually look like?

Let's start with a real example. Here's what most people type:

Bad prompt: "Write me a marketing email"

And here's what actually works:

Good prompt: "You're a senior email marketer for a B2B SaaS company that sells project management tools to agencies with 10-50 employees. Write a 150-word email announcing our new time-tracking feature. Use a friendly, direct tone. Include one clear CTA button. Don't use the words 'excited' or 'thrilled'."

The difference? The bad version gives the model nothing to work with. It has to guess your audience, your tone, your product, and your goal. The good one constrains the prediction space so the output lands close to what you actually need. (If you want a refresher on how that prediction process works under the hood, Post 1: How AI Actually Works covers it.)

When I ran this comparison live with a room of 25 startup founders, every single person preferred the output from the good prompt. Not because AI got "smarter" between the two attempts, but because we told it what good looked like.

What is the ROLE-CONTEXT-TASK-FORMAT-CONSTRAINTS framework?

This is the framework we teach in every workshop and cover in depth across the AI for Business course. Five parts, easy to remember, works for any task.

Think through each part before you hit enter, and your results will improve dramatically.

  1. Role -- Tell the AI who it is. "You are a senior financial analyst" produces different output than "You are a social media intern." The role shapes vocabulary, depth, and assumptions.
  2. Context -- Give background. Who's the audience? What's the situation? What do they already know? Context is where most prompts fail, because most people forget that the AI knows nothing about their specific situation.
  3. Task -- State exactly what you want. Not "help me with marketing" but "write three subject lines for a product launch email." Precision here directly correlates with output quality.
  4. Format -- Specify the shape of the output. Bullet points? Table? 200 words? Email format? If you don't specify, you get whatever the model's default is, which is usually a wall of text.
  5. Constraints -- Set boundaries. Word count limits, words to avoid, tone requirements, things to exclude. Constraints prevent the model from drifting into generic territory.
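If you work in code, the five parts map naturally to a small template. Here's a minimal Python sketch; the field names follow the framework, but the quarterly-review task at the bottom is made up for illustration:

```python
from dataclasses import dataclass


@dataclass
class Prompt:
    """The five RCTFC parts as labelled fields."""
    role: str         # who the AI should be
    context: str      # background the model can't guess
    task: str         # exactly what you want
    format: str       # shape of the output
    constraints: str  # boundaries, exclusions, tone rules

    def build(self) -> str:
        # Join the parts into one prompt, skipping any left empty.
        parts = [
            f"You are {self.role}.",
            self.context,
            f"Task: {self.task}",
            f"Format: {self.format}",
            f"Constraints: {self.constraints}",
        ]
        return "\n".join(p for p in parts if p.strip())


# An illustrative prompt built from the template:
quarterly = Prompt(
    role="a senior financial analyst",
    context="The audience is a non-technical founder reviewing Q3 numbers.",
    task="Summarise the three biggest changes in spending since Q2.",
    format="Three bullet points, one sentence each.",
    constraints="No jargon; flag anything over 10% month-on-month growth.",
)
print(quarterly.build())
```

Writing the five parts as named fields makes it obvious when one is missing, which is exactly the discipline the framework is trying to build.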

Here's the framework applied to a real business task -- the good prompt from earlier, broken into its five parts:

  • Role: "You're a senior email marketer for a B2B SaaS company..."
  • Context: "...that sells project management tools to agencies with 10-50 employees."
  • Task: "Write a 150-word email announcing our new time-tracking feature."
  • Format: "Use a friendly, direct tone. Include one clear CTA button."
  • Constraints: "Don't use the words 'excited' or 'thrilled'."

Try this structure with your own task. The output quality will noticeably improve.

[IMAGE: The RCTFC framework components shown as building blocks]

  • Type: diagram
  • Filename: rctfc-framework-blocks.png
  • Alt text: Coloured building blocks labelled Role, Context, Task, Format, and Constraints showing the prompt engineering framework
  • Caption: Every prompt gets better when you think through each one.

What are the most common prompting mistakes?

After running workshops with over 100 founders and executives, the same five mistakes come up repeatedly:

  1. Being too vague. "Help me with my business" gives the model nothing. Be specific about what, for whom, and in what format.

  2. Skipping context. The AI doesn't know your industry, your team size, or your budget. If you don't say it, it'll guess, and it'll guess wrong.

  3. Asking for too many things at once. "Write my business plan, marketing strategy, and financial projections" in one prompt overwhelms the model. Break complex tasks into steps.

  4. Not specifying format. If you want a table, say so. If you want bullet points, say that. Default output is rarely the shape you actually need.

  5. Accepting the first response. AI is iterative. The first result is a draft. Treat it like one and refine from there.

What are three power moves that level up any prompt?

Once you've got the basics, these three techniques separate the people who get useful output from the people who don't.

Power move 1: Give examples. Instead of describing what you want, show it. "Here's an example of the tone I'm after: [paste a paragraph]. Now write the product description in this same style." Models are remarkably good at pattern-matching from examples. Anthropic's prompt engineering guide recommends this as a top technique.
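In code, "show, don't tell" is just few-shot prompting: paste one or two samples ahead of the request. A minimal sketch; the helper name and the sample text are illustrative, not a real library API:

```python
def few_shot_prompt(examples: list[str], request: str) -> str:
    """Prepend worked examples so the model can pattern-match the style."""
    shots = "\n\n".join(
        f"Example {i}:\n{text}" for i, text in enumerate(examples, start=1)
    )
    return (
        "Here are examples of the tone I'm after:\n\n"
        f"{shots}\n\n"
        f"Now, in the same style: {request}"
    )


prompt = few_shot_prompt(
    examples=["Short. Punchy. No jargon, no fluff."],
    request="write the product description for our time-tracking feature.",
)
print(prompt)
```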

Power move 2: Ask for step-by-step reasoning. Add "Think through this step by step" or "Show your working before giving the final answer." This forces the model to walk through the logic rather than jumping to a conclusion, which reduces errors on complex tasks (OpenAI prompt engineering guide).

Power move 3: Break big tasks into smaller ones. Instead of "Write a 2,000-word blog post about AI trends," try: "First, outline 5 key AI trends for small businesses in 2026. Then I'll pick 3, and you'll write 400 words on each." Multi-step conversations consistently produce better output than single-shot prompts.
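Power move 3 is easy to sketch as data: one big job becomes a sequence of staged prompts, where later stages build on the model's earlier replies. The stage wording below is illustrative:

```python
# One big task ("2,000-word post on AI trends") split into staged prompts.
# Each stage is sent as its own message, in order, in one conversation.
stages = [
    "Outline 5 key AI trends for small businesses in 2026, one line each.",
    "I'll take trends 1, 3, and 4. Write 400 words on trend 1.",
    "Now 400 words on trend 3, keeping the same structure.",
    "Finally, 400 words on trend 4.",
]

for step, prompt in enumerate(stages, start=1):
    print(f"Step {step}: {prompt}")
```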

How do you iterate when the first result isn't right?

The best prompt engineers expect to iterate. Treat AI conversations like a back-and-forth with a contractor, not a vending machine. Here's a practical iteration loop:

  1. Send your prompt using the RCTFC framework.
  2. Review the output. What's good? What missed the mark?
  3. Give targeted feedback. Not "try again" but "too formal, make it more conversational" or "good structure, but point 3 is wrong, replace it with information about X." Post 3: Personalisation & Tone goes deeper on dialling in voice.
  4. Refine and resend. Each round gets closer to what you need. Most tasks take 2-3 rounds.
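The loop above can be sketched in the chat-style message shape most LLM APIs share. No real API call here; the first-draft reply is a stand-in so the structure is visible:

```python
# The iteration loop as a running message history, in the common
# {"role": ..., "content": ...} chat format.
messages = [
    # 1. Send the RCTFC prompt.
    {"role": "user", "content": "You are a senior email marketer. "
                                "Write a 150-word launch email for our "
                                "new time-tracking feature."},
]

# 2. Review the output (a made-up stand-in for the model's reply).
first_draft = "Dear valued customer, we are pleased to announce..."
messages.append({"role": "assistant", "content": first_draft})

# 3. Give targeted feedback, not "try again".
messages.append({"role": "user",
                 "content": "Too formal. Make it more conversational, "
                            "and cut the opening pleasantries."})

# 4. Refine and resend: the WHOLE history goes back each round,
#    so the model keeps the context from round 1.
for m in messages:
    print(f"{m['role']}: {m['content'][:60]}")
```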

A workshop participant told me she'd been using AI for six months and never once sent a follow-up message. Just accepted whatever came out first. After learning to iterate, she said her results "roughly tripled." Her words, not mine, but I believe her.

[IMAGE: Circular iteration loop showing prompt, review, feedback, refine cycle]

  • Type: diagram
  • Filename: prompt-iteration-loop.png
  • Alt text: A circular diagram showing the four steps of prompt iteration: send, review, feedback, refine
  • Caption: Expect 2-3 rounds. The first output is a draft, not a final answer.

Frequently asked questions

Do I need to memorise the RCTFC framework?

No. Just remember that specificity wins. If you can only remember one thing: tell the AI who it is, what you need, and what format you want it in. The full framework is there for when you want to be thorough.

Does this work the same way on ChatGPT, Claude, and Gemini?

Yes. The framework is model-agnostic. The principles of specificity, context, and format apply to every major LLM. You might notice slight differences in personality or default style between models, but the prompting techniques transfer directly.

How long should a prompt be?

As long as it needs to be. A simple question needs a short prompt. A complex analysis benefits from a detailed one. We've seen excellent results from prompts ranging from 30 words to 500 words. The key is that every word adds useful signal, not filler.

Should I use "please" and "thank you" in my prompts?

It doesn't meaningfully affect output quality. If it makes you feel better, go for it. The model doesn't have feelings, but the human using it does, and a comfortable user writes better prompts.

What about "system prompts" or "custom instructions"?

Great question, and we cover exactly this in the next post. Custom instructions let you set persistent context so you don't have to repeat yourself every conversation. Post 3: Personalisation & Tone walks through it step by step.


Next up: Making AI Sound Like Your Brand, Not a Robot -- extract your brand voice and teach AI to use it consistently.

This is Post 2 of 7 in the AI for Business free course. Previous: How AI Actually Works
