The Promise and the Problem
Every few months, a new AI coding assistant promises to "democratize development." Junior engineers can ship like seniors. Non-technical founders can build their own MVPs. The barrier to entry has never been lower.
That last part is true. The barrier to entry has collapsed. But here's what nobody talks about: the barrier to quality hasn't moved at all.
Anyone who has watched code move through review, deployment, and incident post-mortems can see something different happening now. We're producing more code than ever, and understanding less of it than ever. AI hasn't closed the skill gap—it's widened it while making it invisible.
The Illusion of Competence
The traditional skill gap was obvious. A junior engineer would write code that didn't work. They'd get stuck. They'd ask questions. The learning happened in that friction.
AI removes the friction without removing the gap. A junior engineer prompts Copilot, gets working code, ships it, and moves on. But "working" isn't the same as "correct." It's definitely not the same as "maintainable."
Consider a pull request where someone implements a caching layer. The code runs. Tests pass. But the implementation has no eviction policy, no size limits, and will silently consume memory until the service crashes under load. The engineer who wrote it can't explain why they chose that approach. Because they didn't choose it—they accepted it.
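The failure mode above fits in a few lines. This is an illustrative sketch, not code from any real PR; the class names and size limit are invented. The point is how small the visible difference is between a cache that merely works and one that survives production load:

```python
from collections import OrderedDict

class UnboundedCache:
    """The kind of cache that often gets accepted as-is: it works, and tests pass."""
    def __init__(self):
        self._store = {}

    def get(self, key):
        return self._store.get(key)

    def put(self, key, value):
        # No eviction, no size limit: memory grows until the service falls over.
        self._store[key] = value

class LRUCache:
    """The same interface with a deliberate eviction policy and a size bound."""
    def __init__(self, max_size=1024):
        self._store = OrderedDict()
        self._max_size = max_size

    def get(self, key):
        if key not in self._store:
            return None
        self._store.move_to_end(key)  # mark as recently used
        return self._store[key]

    def put(self, key, value):
        self._store[key] = value
        self._store.move_to_end(key)
        if len(self._store) > self._max_size:
            self._store.popitem(last=False)  # evict the least-recently-used entry
```

Both versions pass a unit test that puts a value and gets it back. Only one of them has an answer to "what happens under sustained load?", and spotting the difference requires knowing the question exists.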
This is the new skill gap: the distance between what AI can generate and what humans can evaluate.
Code Quality Isn't Just Syntax
When we talk about code quality, we're not talking about whether it compiles. We're talking about:
- Does it handle edge cases the AI never considered?
- Will it scale under production load?
- Can the next engineer understand why it was written this way?
- Does it follow the patterns already established in this codebase?
- What happens when it fails?
AI assistants optimize for the first thing you asked for. They don't optimize for the five things you forgot to ask about. Security considerations, resource management, error handling at system boundaries, observability hooks—these require understanding context that exists outside the prompt window.
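As a concrete illustration, compare what a prompt like "fetch a JSON config from a URL" typically yields against a version that covers the questions nobody asked. This is a hypothetical sketch; the function names are invented for the example:

```python
import json
import urllib.error
import urllib.request

def fetch_config_naive(url):
    """What the prompt asked for, and nothing more."""
    with urllib.request.urlopen(url) as resp:  # no timeout: can hang indefinitely
        return json.load(resp)               # malformed JSON crashes the caller

def fetch_config(url, timeout=5.0, default=None):
    """The unprompted concerns: a timeout, explicit failure handling,
    and a fallback that was chosen rather than accepted."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return json.load(resp)
    except (urllib.error.URLError, json.JSONDecodeError, TimeoutError):
        return default  # a deliberate failure mode at the system boundary
```

Nothing in the prompt distinguishes these two functions. The difference comes entirely from the engineer knowing which failures happen at system boundaries and deciding, in advance, what should happen when they do.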
The engineer who understands these concerns will use AI to accelerate their work. The engineer who doesn't will use AI to ship their blind spots faster.
The Multiplication Effect
Here's the uncomfortable truth: AI doesn't add to your skill level. It multiplies it.
A senior engineer using AI produces senior-quality code faster. They know what to ask for, how to evaluate the output, and when to override the suggestion. They're using AI as a force multiplier on existing expertise.
A junior engineer using AI produces junior-quality code faster. They accept suggestions they don't understand, miss architectural implications, and ship technical debt at unprecedented velocity. They're multiplying zero.
The gap between these two outcomes is growing. Organizations that don't recognize this are accumulating fragility in their systems at a rate that would have been impossible three years ago.
The Documentation Problem
One underappreciated casualty is documentation—not written documentation, but the documentation that lives in engineers' heads.
Before AI, if you wanted to implement OAuth, you spent hours reading RFCs, studying implementations, debugging token flows. By the time you shipped, you understood OAuth. That knowledge became part of your toolkit.
Now you prompt AI for OAuth implementation, copy the result, and ship. When something breaks at 2 AM, you're debugging code you don't understand, written to solve a problem you never fully grasped. The incident takes three times longer to resolve because you're learning during the crisis instead of before it.
We're building systems we can't maintain with engineers who can't explain them. That's not a productivity gain. That's a deferred cost with compounding interest.
Bridging the Gap
This isn't an anti-AI argument. Many engineers use AI tools daily. But knowing when to trust a suggestion and when to question it requires judgment, and that judgment comes from years of making mistakes, debugging failures, and understanding systems at their foundations.
The path forward requires acknowledging what AI actually does: it shifts the skill requirement from "can you write this code?" to "can you evaluate this code?" The second question is harder, not easier.
Organizations need to invest in fundamentals more aggressively, not less. Understanding networking, systems design, failure modes, and security principles matters more when you're evaluating AI output than when you're writing code by hand. The engineer who understands why something works can spot when the AI gets it wrong.
Mentorship becomes more important, not less. Someone needs to teach junior engineers what questions to ask, what to look for in generated code, and how to recognize the gaps AI leaves behind. At The SRE Project, that's a core part of what we do—building the judgment that AI can't provide.
Code review practices need to evolve. "Does it work?" was never the right question, but now it's actively dangerous. Reviews need to probe understanding: Why this approach? What are the failure modes? What happens at scale? If the author can't answer, the code isn't ready.
The Real Skill Gap
The skill gap in software engineering has always been about judgment, not syntax. AI has made syntax trivial while making judgment essential.
Engineers who develop strong fundamentals will thrive in this environment. They'll ship faster and with higher quality because they'll use AI as a tool rather than a crutch. Engineers who skip the fundamentals will produce more code with more problems, and they won't understand why.
The question isn't whether AI will change software engineering. It already has. The question is whether we'll adapt our approach to developing engineers to match—or whether we'll keep pretending that faster code generation means better outcomes.
It doesn't. It never did. And organizations that figure this out first will have a significant advantage over those still celebrating their increased commit velocity while their production incidents multiply.
- Kier Fretenborough, Co-Founder, The SRE Project