
The Decision Tax: Why AI Coding Drains You Faster Than Writing Code Yourself

AI tools don't eliminate cognitive work. They shift it from production to evaluation, and evaluation is more expensive. Here's the mechanism and what to do about it.

What is the decision tax in AI coding?

Last Tuesday I shipped more code than any day this year. Three features, two bug fixes, a refactoring that had been sitting in the backlog for weeks. Claude handled most of the heavy lifting. I barely typed a line of implementation myself.

By 6pm I was cooked. Not the satisfying tired you get after a hard build session where you wrestled with a problem and won. A different kind of tired. Foggy. Irritable. Unable to make even small decisions like what to eat for dinner. I'd produced more output than ever and felt worse than ever.

That gap between output and exhaustion has a name. I call it the decision tax, and if you're coding with AI tools daily, you're paying it whether you've noticed or not.

AI tools don't eliminate your cognitive work. They shift it. Instead of spending mental energy on production (writing the code, solving the implementation puzzle), you spend it on evaluation: reviewing generated output, deciding if it's correct, steering the next prompt, judging architectural choices you didn't make.

And evaluation is more expensive than production. Not in time. In cognitive load.

Why does reviewing AI code drain you faster than writing it yourself?

Writing code puts you in a production state. You're building, creating, solving. The feedback loop is tight: you write something, it works or it doesn't, you adjust. When it flows, you can stay in that state for hours. Your brain is active but it isn't depleted.

Reviewing AI output is a different cognitive task entirely. Every prompt-response cycle forces a sequence of judgments: Is this code correct? Is it the approach I'd have chosen? Does it handle edge cases? Is it subtly wrong in ways I can't immediately see? Should I accept, modify, or regenerate?

That's not one decision. That's five or six, and they happen every few minutes across a full working day.

The trickier part is what one developer called "reviewing the what without the why". When a teammate writes code, you can ask them what they were thinking. You can trace their reasoning. AI-generated code gives you the output with none of the rationale, which makes the review itself harder and more draining.

In my own ZeroShot Studio workflow, I've noticed the shift clearly. I'll spend an entire morning accepting, rejecting, and redirecting Claude's suggestions. By lunchtime I've made hundreds of micro-decisions and I haven't written ten lines of code myself. The work got done. But the cognitive cost landed entirely on the evaluation side of the ledger.

What does the research say about AI coding fatigue?

The data backs up what a lot of us are feeling. A 2026 BCG study published in Harvard Business Review surveyed 1,488 workers and found that 14% experienced what they called "AI brain fry," defined as mental fatigue from excessive oversight of AI tools.

Workers experiencing brain fry reported 33% more decision fatigue, 39% higher major error rates, and 19% greater information overload. The productivity curve tells the story: going from one AI tool to two increased output, a third showed diminishing returns, and beyond three, productivity declined.

The split that matters: using AI for repetitive, production-style tasks correlated with 15% lower burnout scores. Using it for tasks requiring constant oversight correlated with the opposite. Same tools, different cognitive load, completely different outcomes.

Then the perception gap. A METR study cited by Rachel Thomas at fast.ai found developers expected to be 24% faster with AI assistance. Actual measurements showed they worked 19% slower. Even after the study, developers still believed AI had made them 20% faster. Thomas calls this "dark flow," a sense of momentum that mimics genuine productivity while your cognitive reserves drain.

After any interruption, it takes roughly 23 minutes to fully regain focus, according to Gloria Mark's widely cited attention research. With AI coding, you're switching between roles: writer, reviewer, architect, debugger, reviewer again. Each prompt-response cycle resets the focus clock.
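The 23-minute figure makes the arithmetic stark. Here's a deliberately pessimistic toy model (my assumption, not Mark's: every prompt-response cycle counts as a full interruption) just to show how fast the refocus cost saturates a workday:

```python
def focus_overhead(cycles_per_hour, refocus_minutes=23, hours=8):
    """Toy upper bound on minutes lost to refocusing, assuming every
    prompt-response cycle costs a full refocus. Illustrative, not a measurement."""
    interruptions = cycles_per_hour * hours
    lost = interruptions * refocus_minutes
    return min(lost, hours * 60)  # can't lose more time than the day contains

print(focus_overhead(cycles_per_hour=1))  # 184 minutes: over 3 hours gone
print(focus_overhead(cycles_per_hour=4))  # 480: the model saturates the whole day
```

Even at one cycle per hour, the pessimistic bound eats more than a third of an eight-hour day. The point isn't the exact number; it's that refocus cost scales with cycle count, and cycle count with AI tools is high.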

How can developers manage the decision tax?

The decision tax is a resource problem. You already treat your production systems as finite resources: you monitor CPU and memory, set alerts before they hit critical thresholds, schedule maintenance windows. Your brain deserves the same respect.

Batch your evaluation work. After building our content pipeline at ZeroShot Studio, I learned this the hard way. Instead of scattering review across the day, group AI-assisted sessions into focused blocks with recovery time between them. Two 90-minute sessions with a genuine break between them will produce better results than five hours of continuous prompt-evaluate-repeat.
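The batching idea fits in a few lines. This is a sketch of the two-blocks-with-a-break pattern, not any real tool; the block and break lengths are the ones from my own routine and are easily swapped:

```python
from datetime import datetime, timedelta

def plan_review_blocks(start, block_minutes=90, break_minutes=30, blocks=2):
    """Lay out focused AI-review blocks separated by genuine recovery breaks."""
    schedule = []
    cursor = start
    for i in range(blocks):
        end = cursor + timedelta(minutes=block_minutes)
        schedule.append(("review", cursor, end))
        cursor = end
        if i < blocks - 1:  # no break needed after the final block
            cursor = end + timedelta(minutes=break_minutes)
            schedule.append(("break", end, cursor))
    return schedule

for kind, s, e in plan_review_blocks(datetime(2025, 1, 6, 9, 0)):
    print(f"{kind:6} {s:%H:%M}-{e:%H:%M}")
# review 09:00-10:30
# break  10:30-11:00
# review 11:00-12:30
```

The structure is the point: evaluation work gets contiguous protected time, and the break is scheduled, not improvised when you're already depleted.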

Set a cutoff for high-stakes calls. I don't make architectural choices after 4pm anymore. By that point in the day, the tax has eaten through my reserves and I know the quality of my judgment drops. (I wrote more about this in The No-Decisions-After-6pm Rule.)

Use AI for production, protect yourself during evaluation. The BCG data backs this up: AI tools reduce burnout on repetitive, production-oriented tasks. The fatigue comes from oversight and judgment. So let AI handle the boilerplate, the scaffolding, the mechanical stuff. But when you're reviewing critical logic or making architectural calls, pace yourself.

Monitor yourself like a system. You're making worse calls at 3pm than at 10am. That's not laziness, that's resource depletion. Build your day around it. Hard review in the morning. Mechanical work in the afternoon. I wrote about this in Maintenance Mode: treating yourself like a production system, a framework for scheduling your cognitive work the same way you'd schedule server uptime.
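If you like the production-system framing literally, it looks something like this. The budget and thresholds are invented for illustration; nobody has measured a real daily decision quota:

```python
class DecisionBudget:
    """Track micro-decisions the way you'd track memory: against a soft
    alert threshold. All numbers here are illustrative, not empirical."""

    def __init__(self, daily_budget=200, alert_ratio=0.8):
        self.daily_budget = daily_budget
        self.alert_at = int(daily_budget * alert_ratio)
        self.spent = 0

    def record(self, decisions=1):
        """Log decisions made and return the current status."""
        self.spent += decisions
        return self.status()

    def status(self):
        if self.spent >= self.daily_budget:
            return "critical"  # defer architectural calls to tomorrow
        if self.spent >= self.alert_at:
            return "warning"   # switch to mechanical work
        return "ok"

budget = DecisionBudget()
budget.record(150)        # a morning of accept/reject/redirect cycles
print(budget.record(20))  # "warning": time for boilerplate, not architecture
```

You'd never let a server hit 100% memory before reacting; the warning threshold exists so you downgrade the stakes of your decisions before quality collapses.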

Frequently asked questions

What is AI brain fry?

A term from the 2026 BCG/Harvard Business Review study. Symptoms include a "buzzing" sensation, mental fog, difficulty concentrating, and headaches that appear specifically during AI-heavy work sessions. The study found prevalence varied sharply by role: 26% in marketing but only 6% in legal, suggesting that the amount of creative judgment required determines vulnerability more than raw hours of AI use.

Does the decision tax affect code quality?

Yes, directly. The BCG study found 39% higher major error rates among workers experiencing AI brain fry. Decision fatigue research generally shows that depleted decision-makers default to easier options, avoid hard choices, or make impulsive ones. All three patterns are dangerous during code review.

Is vibe coding burnout real?

Multiple developers and researchers have described the same pattern independently. The METR study found developers believed they were faster while actually working 19% slower. Rachel Thomas calls this "dark flow," a state that mimics genuine flow but depletes cognitive resources faster. The burnout is real. It just looks different because the output metrics stay high while the cognitive cost accumulates underneath.


The decision tax compounds. Like technical debt, you don't notice it until something breaks. A bad architectural call at 4pm. A subtle bug you approved because you were sixty reviews deep. The fix isn't to stop using AI tools. It's to treat your decision-making capacity as the finite resource it is.


Building your own system for managing cognitive load? Start with the framework that treats you like a production system.

Read Maintenance Mode | Subscribe to the newsletter

Tested with Claude Code v2.1.87
