What AI Won't Fix

Everyone's obsessing over what AI changes. The more interesting question: what doesn't it change at all?

Slop isn't new. The scapegoat is.

Before Cursor, Claude Code, and the rest, developers shipped sloppy code. After these tools arrived, developers still ship sloppy code. The only difference is who gets blamed.

"AI writes bad code, so don't use AI" sounds reasonable until you think about it for three seconds. It's a false dichotomy. The argument assumes code quality is binary and inherent to the tool—ignoring the human wielding it.

I've worked in growth-stage startups where time-to-ship is hours, not weeks. No code is perfect in that environment. You make trade-offs. You accumulate debt. You ship. This was true in 2019. It's true in 2025.

Slop is slop whether it's hand-stitched or AI-generated. The origin doesn't determine quality. The intention does.

Learn to use the tools well, and you write better code faster. Refuse to learn, blame the tool, and you write the same mediocre code you always did—just slower.

Guardrails still matter. Maybe more.

Agentic tools can send you down rabbit holes. So can a blank terminal and a curious mind.

We've all burned an afternoon chasing an elegant solution to a problem that didn't need solving. Side-quests aren't new—fuzzy definitions of done have always been the real enemy.

AI doesn't fix this. If anything, it amplifies it. More capability means more surface area for distraction.

The fix is the same as it's always been: clear expectations, hard deadlines, and the discipline to stop when something is good enough.

Constraints create better results. They always have. AI doesn't change that equation—it just raises the stakes.

Learning didn't become optional.

AI is an intelligence multiplier. Used well, it's the best tutor you've ever had. "Explain how DNS works." "My company deploys on ECS with Fargate—help me build a mental model of how requests flow." These prompts make you smarter.

But slapping a stack trace into a chat window and accepting the fix? That's not learning. The problem is "solved," but you never internalized what went wrong.

This isn't new either. Pre-AI, you could pair with a senior engineer who found the bug in thirty seconds. If you didn't ask why, you walked away with working code and zero insight. The senior engineer moved on. You stayed stuck at the same level.

AI makes it easier to skip the learning. It also makes it easier to do the learning. The choice was always yours. It still is.

The constant is us.

Tools change. Human nature doesn't.

We still need intention to write good code. We still need boundaries to finish projects. We still reach for something to blame when the work is hard.

AI isn't magic. It's a lever. And levers don't care about direction—they amplify whatever force you apply.

The question was never "Will AI write good code?" It's always been "Will you?"