
Steve Yegge’s article about programmer burnout (“The AI Vampire”) along with Margaret Storey’s article about Cognitive Debt started an ongoing conversation about programmer fatigue and software quality—two topics that should be linked, but often aren’t. Steve argues that programming constantly with the help of agentic AI leads to burnout; it’s fast, it’s fun, but keeping up with your agents causes mental strain. He recommends programming with agents no more than 4 or 5 hours per day. I could cynically say that most software developers spend at most 20% of their time writing code, which leaves about an hour and a half for wrestling with agents—but that’s beside the point. Yegge’s point about burnout is important, and is in line with what friends have told me. At some point, you have to put the laptop down.
Storey makes a different point. Agentic engineering is great at creating software that works, but that you don’t quite understand. Like humans, agents can generate a lot of spaghetti code. They can “design” convoluted and inappropriate software structures—I hesitate to call them “architectures”; they’re what happens in the absence of architecture. Agents are very capable of creating technical debt—and not the kind of meaningful technical debt that lets you release a product on time with the knowledge that you need to pay it back with interest. If nobody is looking hard at the code, the debt can grow without bounds, sort of like not checking your credit card balance. What’s worse—and this is Storey’s contribution—while that technical debt is growing, developers are losing track of the design, the structure, the architecture. She calls that “cognitive debt.” You don’t just have problems in the code; those problems are harder to find and fix than they should be because you’re unclear on the structure of the code you’re working with.
Other voices have made similar points. The Sonarsource blog writes about how AI is reshaping technical debt and creating new burdens, new kinds of toil. In “The Mythical Agent Month,” Wes McKinney links the problem of burnout to the introduction of “accidental complexity” and “agent scope creep,” while Tim O’Brien writes that while scope creep isn’t new, AI supersized its growth. And Addy Osmani writes about finding your parallel agent limit, coming to grips with what you’re capable of accomplishing without compromising your work or your life.
Cognitive debt and burnout aren’t new, alas. With or without AI, we’ve all stayed up until 4 AM working on a bug that won’t go away or pursuing an interesting idea to its end. Sometimes that’s heroic, but AI threatens to turn it into a lifestyle. AI fatigue is real, as Siddhant Khare writes, and it’s something we need to talk about. When fatigued, it’s tempting to say “this works, it looks good, and it passes our tests” without considering how the code fits into the overall plan. With 10x code generation, you also get 10x the debt load, and that’s being optimistic. When the debt curve goes exponential, strategies for managing that debt are stressed past the breaking point.
The problem with cognitive debt is that it eventually makes new features and bug fixes difficult or impossible. The code has become so convoluted that it can’t be changed. I’ve certainly done that with hand-written code: added a feature without thinking enough about how the new code fit in, added some more code later, and then—when I needed to add a third feature—discovered that I’d created a problem that wouldn’t be simple to fix. The right pieces were there, but in the wrong places, because I wasn’t thinking about the overall structure.
That’s a common enough problem with handwritten code; it’s almost always a problem with legacy code where the original developers and maintainers are no longer around. We need to realize that it’s also a problem with AI-generated code, which has been characterized as legacy code from the day it’s written. Somebody or something has to pay down the debt. As Storey writes, “velocity without understanding is not sustainable”: not for humans, not for machines. If you understand the structure of what you’re building, you can steer the AI away from creating a problem in the first place, or you can use it to author a fix. If you don’t understand the structure or can’t describe it to the AI, you’re lost.
Cognitive debt accumulates much more quickly when you’re burned out. Burnout has always been a problem for programmers, especially for those who really love programming: you stay up all night to solve a problem. And, while some programmers resist using AI to write code, those who use AI frequently find that it exacts the same toll: it’s hard to stop. It is its own kind of toil: toil that gives you a sense of accomplishment and fulfillment, but still leaves you empty.
Agents may not be subject to burnout, but the humans who control them are. Agents are quickly becoming more capable, but they still can’t maintain a sense of the shape and structure of a project over the long term. That’s our job. They can pay down technical debt, but only if properly guided; that’s also our job. And we won’t be able to do either if we’re burned out.