
AI Is a Prediction Engine, Not a Brain

AI has read everything on the internet but experienced nothing firsthand. Your job is to be a good manager. Here's how AI actually works, explained without the jargon.

AI is a prediction engine, not a thinking machine. It generates the most statistically likely next word based on patterns in its training data. Understanding this one concept helps you write better prompts, spot hallucinations, and pick the right model for each job.

AI does not think. It predicts. Every time you type a prompt and hit enter, you're asking a very sophisticated autocomplete system to guess what words should come next. That single insight will make you better at using AI than most people who use it daily.

In our workshops, once people stopped treating AI like a genius colleague and started treating it like a very capable autocomplete, their prompts improved immediately.

How does a large language model actually work?

Here's the 60-second version. A large language model (LLM) is trained on enormous amounts of text, roughly the equivalent of millions of books. During training, it learns patterns: which words tend to follow which other words, in what contexts, with what tone. When you give it a prompt, it uses those patterns to predict what should come next, one token at a time.

Think of it like this: if someone says "The capital of France is...", you'd predict "Paris" without needing to reason about geography. You've seen that pattern thousands of times. LLMs work the same way, just across billions of patterns instead of thousands.

Here's what actually happens each time you hit enter:

  1. You write a prompt -- your question or instruction in plain language.
  2. The model breaks it into tokens -- small chunks of text (roughly 3/4 of a word each).
  3. It processes the tokens -- running them through layers of pattern-matching (neural network layers, technically).
  4. It predicts the next token -- picks the most probable next piece of text.
  5. It repeats step 4 -- generating one token at a time until it predicts a stop token and the response ends.
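The five steps above can be sketched in a few lines of Python. This is a toy stand-in, not a real model: a tiny lookup table of "most likely next token" replaces the billions of learned parameters, but the predict-one-token-and-repeat shape of the loop is the same.

```python
# Toy illustration of next-token generation.
# A real LLM scores every possible token with a neural network;
# here a lookup table of "most likely next token" stands in.
NEXT_TOKEN = {
    "The": "capital",
    "capital": "of",
    "of": "France",
    "France": "is",
    "is": "Paris",
    "Paris": "<end>",  # the model predicts a stop token when it's done
}

def generate(prompt_tokens):
    tokens = list(prompt_tokens)
    while True:
        nxt = NEXT_TOKEN.get(tokens[-1], "<end>")  # step 4: predict next token
        if nxt == "<end>":
            break
        tokens.append(nxt)  # step 5: repeat until done
    return " ".join(tokens)

print(generate(["The", "capital"]))  # The capital of France is Paris
```

Notice that nothing in the loop "knows" geography. "Paris" comes out because it is the most likely continuation, which is exactly why the same machinery can also produce confident-sounding wrong answers.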

[IMAGE: Five-step LLM flow diagram showing prompt to tokens to processing to prediction to output]

  • Type: diagram
  • Filename: llm-five-step-flow.png
  • Alt text: Diagram showing the five steps of how an LLM processes a prompt and generates a response
  • Caption: Every AI response follows this same five-step loop

Key takeaway

AI doesn't "understand" your question. It predicts what a good answer looks like based on patterns. This is why phrasing matters so much.

What are tokens and why should you care?

Tokens are the atomic unit of AI. Models don't read words the way you do. They chop text into tokens, which are roughly three-quarters of a word on average. The word "unbelievable" becomes multiple tokens. The word "cat" is one.

Why does this matter for business use? Three reasons:

  • Cost. You pay per token on most platforms. GPT-4o costs about $2.50 per million input tokens and $10 per million output tokens as of early 2026 (OpenAI). A 1,000-word document is roughly 1,300 tokens.
  • Context window. Every model has a maximum number of tokens it can handle in one conversation. GPT-4o handles 128,000 tokens. Claude 3.5 Sonnet handles 200,000. Go over the limit and the earliest parts of your conversation get dropped or the request fails.
  • Precision. Knowing how tokenisation works helps you understand why AI sometimes stumbles on unusual words, code, or non-English text.
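The cost arithmetic is worth making concrete. A minimal back-of-the-envelope calculator, using the GPT-4o prices quoted above and the rough rule that one word is about 1.3 tokens (both figures from this article, so treat them as estimates, not live pricing):

```python
# Rough API cost estimate. Prices are the GPT-4o figures quoted
# above (early 2026) and will drift; ~1.3 tokens per word is the
# article's rule of thumb, not an exact tokeniser count.
INPUT_PRICE_PER_M = 2.50    # USD per million input tokens
OUTPUT_PRICE_PER_M = 10.00  # USD per million output tokens

def estimate_cost(input_words, output_words, tokens_per_word=1.3):
    input_tokens = input_words * tokens_per_word
    output_tokens = output_words * tokens_per_word
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# A 1,000-word document in, a 500-word summary out:
print(f"${estimate_cost(1000, 500):.4f}")
```

A 1,000-word document summarised into 500 words costs under a cent, which is why the per-request price rarely matters until you are running thousands of requests a day.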

You can see exactly how text gets tokenised using OpenAI's free tokenizer tool. Paste in a paragraph from your website and watch it split into coloured chunks. It's surprisingly satisfying.

Interactive: Token Visualizer

What can AI do well, and where does it fall flat?

This is the part most AI guides skip, and it's the part that saves you the most time. AI is genuinely brilliant at some tasks and genuinely terrible at others. Knowing which is which stops you from trying to force a square peg into a round hole.

AI is strong at:

  • Drafting and editing text (emails, reports, social posts)
  • Summarising long documents
  • Brainstorming and ideation
  • Translating between languages
  • Reformatting data (CSV to JSON, messy notes to clean tables)
  • Explaining complex topics simply

AI is weak at:

  • Maths (it guesses rather than calculates, though tool-use is closing this gap)
  • Knowing what happened after its training cutoff
  • Citing real sources accurately
  • Making subjective business decisions
  • Anything requiring real-world sensory experience
  • Consistently following very long, complex instructions

When I ran this live with a group of 20 founders, roughly 60% were trying to use AI for tasks in the "weak" column. One bloke was asking ChatGPT to calculate his quarterly tax obligations. Please don't do that. Use a spreadsheet.

Why does AI make things up?

Hallucination is a fancy word for "the model confidently generated text that is factually wrong." It happens because the model is predicting plausible-sounding text, not looking things up in a database.

A 2024 study by Vectara found that even top-tier models hallucinate between 3% and 27% of the time depending on the task (Vectara Hallucination Index). That's improved since, but it hasn't hit zero and probably won't for years.

Three rules for managing hallucinations:

  1. Never trust a specific claim without checking. If AI gives you a statistic, a date, or a name, verify it.
  2. Ask for sources. Models will sometimes cite real papers and sometimes invent fake ones. Check the URLs.
  3. Use AI for structure, you for facts. Let AI draft the framework of a competitor analysis or report, then fill in verified data yourself.

Key takeaway

Hallucinations aren't bugs that will be fixed next quarter. They're a fundamental property of prediction engines. Build your workflow around verification.

Which model should you use for which job?

Not all models are equal, and the most expensive one isn't always the best choice. Here's a practical comparison as of early 2026:

Model | Best for | Context window | Relative cost | Speed
GPT-4o (OpenAI) | General business tasks, writing, analysis | 128K tokens | Medium | Fast
Claude 3.5 Sonnet (Anthropic) | Long documents, nuanced writing, code | 200K tokens | Medium | Fast
Gemini 1.5 Pro (Google DeepMind) | Multimodal (text + images), large context | 1M tokens | Medium | Medium
GPT-4o Mini | Quick drafts, simple Q&A, high volume | 128K tokens | Low | Very fast
Claude 3 Haiku | Fast classification, simple summaries | 200K tokens | Low | Very fast
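If you ever script against these models, one practical use of the table is a pre-flight check that your prompt actually fits the model's context window. The sketch below mirrors the names and context sizes in the table above; it is illustrative data, not a live API lookup.

```python
# Context-window lookup mirroring the comparison table above.
# Illustrative figures from this article, not queried from any API.
CONTEXT_WINDOW = {
    "gpt-4o": 128_000,
    "claude-3.5-sonnet": 200_000,
    "gemini-1.5-pro": 1_000_000,
    "gpt-4o-mini": 128_000,
    "claude-3-haiku": 200_000,
}

def fits_context(model, prompt_tokens):
    """Return True if the prompt fits within the model's context window."""
    return prompt_tokens <= CONTEXT_WINDOW[model]

print(fits_context("gpt-4o", 150_000))            # False: over the 128K limit
print(fits_context("claude-3.5-sonnet", 150_000)) # True
```

The same "start cheap" rule applies here: check the small model first, and only reach for the million-token context of Gemini 1.5 Pro when your document genuinely needs it.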

The rule of thumb: Start with a cheaper, faster model. Move up only when the output quality isn't good enough. In our workshops, 70% of common business tasks worked perfectly well with the smaller models, saving roughly 80% on costs.

Interactive: Model Selection Quiz

[IMAGE: Decision tree for choosing an AI model based on task type]

  • Type: diagram
  • Filename: model-decision-tree.png
  • Alt text: A decision tree diagram helping users choose between GPT-4o, Claude Sonnet, Gemini Pro, and smaller models based on task requirements
  • Caption: Start cheap and fast, move up only when you need to

FAQ

Do I need to learn to code to use AI effectively?

No. Every technique in this course works with the chat interfaces of ChatGPT, Claude, or Gemini. No code, no APIs, no terminal. If you want to go deeper later, the skills transfer, but you absolutely do not need to start there.

Is my data safe when I use these tools?

It depends on your plan. Free tiers of most AI tools may use your conversations for training. Paid plans (ChatGPT Plus, Claude Pro) typically don't. We cover this in detail in Post 6: Security & GDPR.

How quickly is this stuff changing?

Fast. Models that were state-of-the-art 12 months ago are now outperformed by cheaper alternatives. But the principles in this course (understanding prediction, writing good prompts, managing hallucinations) stay stable even as models improve.

Can AI replace my team?

Probably not, and that's not really the right question. A better framing: AI can handle roughly 30-40% of knowledge work tasks, freeing your team to focus on the work that actually requires human judgment, relationships, and creativity (McKinsey Global Institute, 2024).

What's the minimum I need to get started?

A free ChatGPT account or a free Claude account. That's it. Open a browser, sign up, and work through Post 2.


Next up: The Difference Between 'Meh' and 'Wow' -- learn the five-part prompting framework that turns vague requests into sharp results.

This is Post 1 of 7 in the AI for Business free course.
