The Decision Tax: Why AI Coding Drains You Faster Than Writing Code Yourself

AI coding flips your role from creator to evaluator. Every output triggers a micro-decision cascade that compounds across a session in ways traditional coding doesn't. Here's the mechanism, and what to do about it.

A full day of AI coding left me more depleted than eight hours of writing code myself. The git diff was thin, the commits were sparse, and I couldn't figure out where the energy went. Felt like I'd run a marathon but my Strava showed a nap.

That gap is what this post is about. The tiredness is real. The mechanism is just not what most developers think it is.

What's the difference between writing code and using AI?

When you write code yourself, you're in creation mode. You're building a model of the problem, choosing abstractions, making design calls that compound forward. It has a natural flow rhythm. The decisions feel owned.

When you use AI, the mode flips. You're no longer the generator: you're the evaluator. Every output requires a verification pass: is this correct? Does it match what I actually intended? Accept, reject, or edit? You're not building from scratch; you're assessing something another system built.

These are cognitively different tasks, and the gap between them is the root of AI coding fatigue. Creation has natural closure: you finish a function, a component, a test. Evaluation doesn't, because every accepted output might still be subtly wrong. That's where the cost hides.

What is the decision tax?

The decision tax is the sum of micro-decisions triggered by each AI output in a session. Each one costs almost nothing. Across a full day, they compound.

A typical AI-heavy session runs through this loop continuously:

```mermaid
flowchart TD
  A["Write prompt"] --> B["AI generates output"]
  B --> C{"Evaluate output"}
  C -->|"Accept"| D["Integrate and continue"]
  C -->|"Reject"| E["Refine prompt"]
  C -->|"Edit"| F["Manual correction"]
  D --> A
  E --> A
  F --> A
  D --> G["Session end: cognitive debt accumulated"]
```

Each cycle through that loop is cheap. Fifty cycles is not. You're holding your original intent in one hand while parsing what the model produced in the other. That dual-grip is the tax. Pure coding doesn't load your working memory the same way.

Roy Baumeister's ego depletion research proposed that your capacity for good decisions drains through the day, regardless of how trivial each one feels (Baumeister et al., 1998). Subsequent replication studies have produced mixed results, but it's the mechanism that is contested, not the experience. The AI coding loop is a decision machine running continuously.

| Mode | Primary cognitive task | Flow-compatible? | Decision load per hour |
| --- | --- | --- | --- |
| Writing code yourself | Generation and design | High | Low to medium |
| AI-assisted coding | Evaluation and triage | Low | High |
| Code review (static) | Evaluation | Medium | Medium |

The middle row is the one developers underestimate. Code review feels like work. AI-assisted coding feels like productivity. The cognitive load profile is similar.

Why is evaluation mode resistant to flow?

Flow needs clear goals, immediate feedback, and a challenge that matches your skill level. Creation hits all three when the work is pitched right. Evaluation doesn't: "review this output" is open-ended in a way that "build this function" never is.

Gloria Mark's decade of attention research at UC Irvine documented a consistent finding: getting back into focused work after an interruption takes substantially longer than the interruption itself. (Mark, Attention Span) Every accept/reject call is a micro-interruption: you flip from builder to reviewer and back. In an active AI session, that flip happens every 30 to 60 seconds.

In our ZeroShot Studio setup, we started logging session types after noticing a pattern: AI-heavy days correlated with worse judgment calls in the late afternoon, not just lower energy. The fatigue came from the mode, not the volume.
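The logging itself doesn't need tooling. A minimal sketch of what "logging session types" can look like, assuming a local CSV file (the `session_log.csv` path and column names are hypothetical, not what we actually run):

```python
import csv
from datetime import datetime, timezone
from pathlib import Path

LOG = Path("session_log.csv")  # hypothetical log location

def log_block(mode: str, minutes: int, note: str = "") -> None:
    """Append one work block; mode is 'creation' or 'evaluation'."""
    new = not LOG.exists()
    with LOG.open("a", newline="") as f:
        w = csv.writer(f)
        if new:
            w.writerow(["timestamp", "mode", "minutes", "note"])
        w.writerow([datetime.now(timezone.utc).isoformat(), mode, minutes, note])

log_block("evaluation", 45, "copilot-heavy refactor")
log_block("creation", 30, "greenfield parser")
```

A week of entries like this is enough to see whether bad late-afternoon calls line up with evaluation-heavy mornings.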

How does ADHD change the picture?

For developers with ADHD, evaluation mode is hostile territory. ADHD brains are wired for novelty and creation: the dopamine hit from building something new is neurology, not a workaround. Sustained review without clear closure triggers boredom and avoidance loops that look like procrastination from the outside.

The AI coding loop is short enough to feel stimulating (fast outputs, rapid iteration) but the evaluation grind drains the focus reserves that keep ADHD developers productive. It holds your attention while quietly emptying the tank.

If you're a developer with ADHD who finds AI coding sessions more exhausting than flow-state coding, this is probably why.

How do you reduce the decision tax?

The fix isn't avoiding AI tools. It's treating your decision capacity as a finite daily resource and spending it on purpose.

Three things that have worked for us:

  1. Timebox evaluation sessions. Cap continuous AI-assisted work at 90 minutes. After that, switch to creation work (greenfield code, architecture planning, writing) to let the evaluation queue drain before it backs up.

  2. Batch your accept/reject decisions. Instead of evaluating each AI suggestion in real time, generate several outputs and review them together in a dedicated block. Batching converts continuous overhead into discrete review windows with clear closure.

  3. Protect creation-only windows. Two-hour blocks where the AI tools are off. Not a productivity ritual: a calibration tool. Hands on the keyboard, no copilot. You need to remember what building feels like so you notice when you've been reviewing for too long.
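The batching idea in particular is easy to make concrete. A sketch, assuming you can stash suggestions as plain strings during a work block (the `SuggestionBatch` class and its toy `judge` predicate are illustrative, not a real tool):

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class SuggestionBatch:
    """Collect AI outputs during a work block; review them in one sitting."""
    pending: list[str] = field(default_factory=list)

    def stash(self, suggestion: str) -> None:
        # Defer judgment: no accept/reject decision happens here.
        self.pending.append(suggestion)

    def review(self, judge: Callable[[str], bool]) -> tuple[list[str], list[str]]:
        """One dedicated review window with clear closure: (accepted, rejected)."""
        accepted = [s for s in self.pending if judge(s)]
        rejected = [s for s in self.pending if not judge(s)]
        self.pending.clear()
        return accepted, rejected

batch = SuggestionBatch()
batch.stash("def add(a, b): return a + b")
batch.stash("def sub(a, b): return a - b  # TODO verify")
accepted, rejected = batch.review(lambda s: "TODO" not in s)
```

The point isn't the code, it's the shape: stash during creation, judge in a bounded window, end with an empty queue.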

The goal is awareness. Once you can name the tax, you can budget it. For specifics on structuring AI-assisted sessions, the AI Workflows zone covers prompt engineering and session design in more depth.

Frequently asked questions

Is AI coding fatigue the same as regular burnout?

Related but distinct. Burnout builds over months from sustained pressure and needs structural change: different work, different conditions, real time off. AI coding fatigue is session-level. It accumulates in hours and recovers with a mode change. You don't need a week off; you need two hours of writing code without a copilot. That changes how you respond to it.

How do I know if the decision tax is affecting me?

Run a rough audit at the end of an AI-heavy session. How much time did you spend writing net-new code versus evaluating AI output? If more than two-thirds was evaluation, the tax was running all day. Second signal: if your judgment calls at 4pm are worse than at 10am and your energy doesn't explain the gap, the pool is empty.
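The two-thirds check is a one-liner if you've been tracking blocks of time. A sketch, assuming blocks logged as `(mode, minutes)` pairs (the function name and sample day are made up for illustration):

```python
def evaluation_ratio(blocks: list[tuple[str, int]]) -> float:
    """Fraction of logged time spent in evaluation mode."""
    total = sum(minutes for _, minutes in blocks)
    if total == 0:
        return 0.0
    evaluating = sum(minutes for mode, minutes in blocks if mode == "evaluation")
    return evaluating / total

# A hypothetical AI-heavy day: 90 of 120 logged minutes evaluating.
day = [("evaluation", 50), ("creation", 20), ("evaluation", 40), ("creation", 10)]
ratio = evaluation_ratio(day)        # 0.75
tax_heavy = ratio > 2 / 3            # past the two-thirds threshold
```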

Does getting better at prompting reduce the tax?

Yes. Building a library of prompt templates for familiar tasks (refactoring, test scaffolding, boilerplate) raises the first-pass acceptance rate and compounds over time. The cost spikes on novel problems where you haven't built that muscle yet. Track where you're doing the most editing: those are the prompts worth refining first.
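Tracking where the editing happens doesn't need more than a tally. A minimal sketch, assuming you record one boolean per AI output (the `PromptStats` class and task names are hypothetical):

```python
from collections import defaultdict

class PromptStats:
    """Tally first-pass acceptance per task type to find prompts worth refining."""
    def __init__(self) -> None:
        self.counts: dict[str, dict[str, int]] = defaultdict(
            lambda: {"accepted": 0, "edited": 0}
        )

    def record(self, task: str, accepted_first_pass: bool) -> None:
        key = "accepted" if accepted_first_pass else "edited"
        self.counts[task][key] += 1

    def worst_first(self) -> list[str]:
        """Task types sorted by edit rate, highest first: refine these prompts."""
        def edit_rate(item: tuple[str, dict[str, int]]) -> float:
            c = item[1]
            total = c["accepted"] + c["edited"]
            return c["edited"] / total if total else 0.0
        return [task for task, _ in sorted(self.counts.items(), key=edit_rate, reverse=True)]

stats = PromptStats()
stats.record("refactoring", True)
stats.record("refactoring", True)
stats.record("test scaffolding", False)
stats.record("test scaffolding", True)
worst = stats.worst_first()  # "test scaffolding" tops the list
```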


The decision tax is one piece of a bigger picture. If you're thinking about sustainable output as a developer, the Maintenance Mode pillar is where this fits.

Tested with Claude Code v2.1.89
