Let the Agents Write Bad Code: The Future of Engineering Is Reconvergence
Thoughts on Steve Yegge’s “Six New Tips for Better Coding with Agents”
Steve Yegge gets it. If you are curious about the present and future of software engineering, he just wrote an excellent article, “Six New Tips for Better Coding with Agents,” that is worth a read.
His article is brilliant—not because it predicts a future we might get to someday soon (if not already for those with an adventurous spirit), but because it describes a reality that anyone building with AI agents already lives inside. And it exposes the big misunderstanding most engineers still have about what “coding with AI” really is.
Here’s the truth: Agents don’t make software engineering easier. They change the economics so dramatically that the entire discipline becomes something different.
And if you don’t understand that shift, you will fight the tools instead of wielding them.
1. The Myth of “AI writes the code for you”
Most commentary today still frames AI coding agents as junior engineers who write flawed code you then have to fix. This is the wrong mental model. Completely wrong.
The point isn’t that agents produce perfect code. The point is that refactoring, rewriting, and re-architecting have become cheap.
Historically, engineering teams invested heavily upfront in architecture and planning because changing things later was expensive. Now? An agent can rewrite 40 files in 45 seconds while you grab coffee.
You don’t prevent mess in advance. You expect the mess—and then use swarms of agents to reconverge the system toward something better with each iteration.
This is the mental unlock most engineers who “don’t get AI” are missing.
They complain:
“The agent generated bad code.”
But the cost of producing and fixing bad code is near-zero. The new bottleneck isn’t correctness—it’s clarity of intent, dependency structure, and the ability to keep a living system evolving coherently.
Great engineers of the next decade won’t be the ones who avoid entropy. They’ll be the ones who manage entropy at scale.
2. Architecture is no longer fixed — it’s continuously renegotiated
Steve makes a point I think is massively underrated: half of your engineering time should be review and restructuring. That sounds insane to anyone who grew up optimizing for minimal churn.
But once you experience agent-native workflows, you realize how right he is.
I routinely have an agent swarm refactor large swaths of a project overnight.
Then I review the output through multiple perspectives:
How does this fit the emerging architecture?
What implicit decisions were made?
Where should the architecture bend or be simplified?
What should the next iteration of the system look like?
It’s not “fixing.” It’s shepherding a living codebase toward elegance.
Steve pointed out Jeffrey Emanuel’s “Rule of Five” for reviews, which I hadn’t heard of. Effectively, during the planning phase of your agent-driven development cycle, you have multiple agents review the plan from different angles. I’m not sure what prompts he uses, but it sounds like some of them intentionally throw curveballs into the review process. This is something I’m going to try. I usually review the proposed plan manually with my own eyeballs, alter it three or four times, and then spin up epics and tasks for implementation. Occasionally I’ll have a subagent review the plan if it is very complex, but usually I just let the agents rip at that point and rely on subsequent refactorings later.
3. Agent UX: The next frontier of developer experience
One of Steve’s best insights is that we need to stop thinking only about Developer Experience (DX) and start designing Agent Experience (AX) for many of our tools.
Agents navigate codebases differently than humans. They need:
Clean, predictable structure
Clear entry points
Documentation that encodes intent
Smaller, more modular files
Fewer ambiguous abstractions
Prompt scaffolding as part of the repo itself
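As a concrete sketch of that last point, prompt scaffolding can live in the repo itself as a top-level instructions file. The file below is illustrative, not something the article prescribes; the specific sections and conventions are my own assumptions:

```markdown
# AGENTS.md — read this before touching the code

## Entry points
- `src/main.py` boots the service; `src/routes/` maps URLs to handlers.

## Conventions
- One module per domain concept; keep files small and single-purpose.
- Public functions document *intent* (why this exists), not just behavior.

## Proposing changes
- Run the test suite before and after your change.
- Never hand-edit files under `gen/`; they are generated.
```

The point is that intent and conventions become machine-readable context the agent loads on every run, instead of tribal knowledge it has to infer.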
This is not optional. If your repo is organized for humans instead of agents, you’re handicapping your own tools.
This shift mirrors early web development: when browsers got more capable, we had to rethink how sites were structured. Same thing here—except instead of browsers, it’s autonomous collaborators with infinite patience and infinite willingness to refactor.
I’m not sure of the right balance here, because I still feel the need to keep the structure human-optimized so that I can understand how my team’s codebases are organized.
4. Swarms are the real revolution
Single-agent workflows are fine for toy projects. Seriously. If you are still coding linearly, you are an order of magnitude behind in modern development effectiveness.
Steve points out that we are going to need much better development tools to manage multi-agent orchestration:
Planner agent
Implementer agent
Reviewer agent
Security agent
Architecture agent
Performance agent
Each one with a different cognitive lens.
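A minimal sketch of what those cognitive lenses look like in practice, fanning one diff out to several reviewer roles. The `run_agent` function is a hypothetical stub standing in for whatever model API you actually use, so the example runs as-is:

```python
# Fan a proposed diff out to several reviewer "lenses" and collect
# their findings for a human pass. Each lens gets a different role
# prompt; that prompt is the whole difference between the agents.

LENSES = {
    "security":     "Review this diff for injection, authz, and secrets issues.",
    "architecture": "Check whether this diff fits the module boundaries.",
    "performance":  "Flag quadratic hot paths, chatty I/O, and needless copies.",
}

def run_agent(role_prompt: str, diff: str) -> list[str]:
    # Stub: a real implementation would send role_prompt + diff to an LLM
    # and parse its findings. Here we just return a placeholder note.
    return [f"no blocking issues found ({role_prompt.split()[0].lower()} pass)"]

def review_swarm(diff: str) -> dict[str, list[str]]:
    # Each lens reviews independently; findings are merged per lens.
    return {lens: run_agent(prompt, diff) for lens, prompt in LENSES.items()}

findings = review_swarm("diff --git a/billing.py b/billing.py ...")
for lens, notes in findings.items():
    print(lens, "->", notes)
```

The design choice worth noting: the lenses run independently, so disagreement between them is a feature, surfacing tradeoffs a single reviewer would silently resolve.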
I manually invoke these periodically, but not as a regular part of my process (beyond planning, implementation experts in different languages, and reviews). This swarm model is the closest thing I’ve seen to how high-performing engineering teams actually think and operate. It just happens at 50× speed.
5. Why this matters for the future of engineering teams
This part gets uncomfortable for some people: the engineers who thrive in this new paradigm are not the ones who were historically celebrated. They are often breadth-oriented engineers who can think ahead and understand where the business is going.
The high-leverage engineer of the AI era is the one who:
understands business requirements
makes fast, high-quality tradeoffs
guides architectural evolution
designs coherent agent workflows
zooms out and sees long-term structure
lets go of ego around “my beautiful code”
moves from “producer” to “steward”
Engineering becomes less about “writing software” and more about directing the continuous emergence of a system. It’s a different skillset. And some engineers will adapt brilliantly. Others won’t.
6. Let the agents write the bad code — that’s their job
This is my main takeaway from Steve’s article, and it’s the thing I wish more teams understood:
Perfectionism is a liability in agent-native engineering.
Iteration is the new perfection.
When rewriting and rethinking are cheap, the optimal workflow is:
Let the agents build something rough.
Review from multiple perspectives.
Restructure the system.
Let the agents implement the next iteration.
Repeat.
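The loop above, sketched as code. Every function here is a hypothetical stub standing in for real agent invocations; only the shape of the loop is the point:

```python
# Sketch of the reconvergence loop: rough build -> review -> restructure,
# repeated until reviewers have nothing left to flag.

def build_rough(spec: str) -> str:
    # Stub for "let the agents build something rough."
    return f"rough implementation of: {spec}"

def review(system: str) -> list[str]:
    # Stub for the multi-perspective review; a real version would fan
    # out to several reviewer agents and merge their notes.
    return ["tighten module boundaries"] if "rough" in system else []

def restructure(system: str, notes: list[str]) -> str:
    # Stub for "restructure the system" based on review notes.
    return system.replace("rough", "restructured") if notes else system

def reconverge(spec: str, iterations: int = 3) -> str:
    system = build_rough(spec)
    for _ in range(iterations):
        notes = review(system)
        if not notes:  # converged: nothing left to flag
            break
        system = restructure(system, notes)
    return system

print(reconverge("billing service"))
# -> restructured implementation of: billing service
```

Note that the loop is bounded: `iterations` caps how long you let the system churn before a human decides whether the architecture has actually converged.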
The end result? Systems that—paradoxically—end up better architected than slow, careful, traditional engineering processes.
Because the system evolves quickly…and continually reconverges. You can now afford what would have been massive mistakes, knowing they are just part of exploring your way to a better system. You get to use many architectural perspectives instead of one by utilizing domain-expert subagents. And, most importantly, speed lets you run more experiments than deliberation ever could.
Final thought
The biggest shift Steve highlights is that AI agents don’t just accelerate engineering—they destabilize the old economics. And once the economics change, the optimal strategies change.
I wrote about this in my last article, perhaps not as eloquently as Steve, but here are some of the economics of this new world:
Refactoring becomes cheap.
Exploration becomes cheap.
Structure becomes fluid.
Architecture becomes iterative.
Swarms become essential.
Thus, the role of the human becomes more strategic, more architectural, more about shaping taste and coherency than typing, reviewing, and approving individual blocks of code.
If your teams haven’t internalized this yet…now is the time. The ones that do will outpace everyone else by an order of magnitude.


I want to take exception to, and put emphasized quote marks around, the word “bad.” I have recently managed to have Windsurf write 99% of my code. Some code got pushed back; I was expecting that, since I was pushing the boundary. What surprised me, however, was that the code that got pushback was not bad code, but code that stepped on existing, sometimes unspoken, “conventions.” In a legacy system, many conventions were established for good reasons but outlived their goodness and got stuck. We need to have x% of the actively worked code base replaced every year just to unstick ourselves, agent or not.