How Humans Learn When Agents Scale
I’ve been following three perspectives on how to work with AI coding agents:
Phillip Su says the IC role becomes management. You’re orchestrating agents, making architectural decisions, setting priorities. You’re not coding anymore. You’re deciding.
Augment’s Intent says go all-in on orchestration. Specs → agent teams → verification. No code editor. The human’s job is writing intent clearly and verifying results.
Kent Beck says something different. He's building a B+ Tree library with agents while staying obsessive about code quality, complexity, and test coverage. "I don't type much of that code," he says, but he still cares about all of it. You're an augmented programmer, not a manager or an orchestrator.
Here’s the thing: they’re all right, and they’re each optimizing for different goals.
What they’re optimizing for
Beck optimizes for code quality. He uses the agent to save keystrokes, but he still reviews, refactors, and keeps the complexity under control.
Intent optimizes for throughput. Specs → orchestration → verification in a loop. “Quality” is defined less by elegance and more by whether the system meets the spec and passes checks.
Su optimizes for strategic impact. Human time is the bottleneck, so you spend it on priorities and architecture, not implementation details.
Apply one of these models to the wrong problem and it feels awful: Beck's approach doesn't scale to 100 parallel workstreams, Intent is overkill for a single deep library, and Su's approach can miss the texture you need for tricky algorithms.
The question isn’t “who’s right?” It’s “what am I optimizing for?”
But there’s a deeper problem underneath all three: learning.
The learning collapse
When you write code yourself, learning happens passively:
- You hit a bug and understand language semantics
- You refactor and develop pattern recognition
- You design and internalize system thinking
- You debug and learn how complexity compounds
All of this is accidental learning. Side effects of doing the work.
With agents, all three approaches remove parts of the work:
- Beck’s approach keeps some accidental learning (code review, refactoring) but loses implementation details
- Intent’s approach keeps decision-making learning but loses all code-level intuition
- Su’s approach keeps high-level learning but loses everything below architecture
As agents get smarter — and they will — this gets worse. A smarter agent needs less guidance. Less guidance means less code review. Less code review means less accidental learning.
Eventually you’re writing one-sentence specs and agents are building entire systems. You’re making strategic decisions and never seeing the code. You’re verifying behavior but not understanding how it works.
The uncomfortable truth: you have to deliberately engineer learning now. It won’t happen by accident.
What deliberate learning looks like
If you want to stay sharp as agents scale, you need a plan. A few practices that actually work:
- Treat code review as practice. Even when it’s optional, read the diff. Refactor once. Keep a bar. The code might not need the polish, but you need the reps.
- Do intentional “no-agent” spikes. Once in a while, build something small yourself. Not to ship — to recalibrate your instincts.
- Turn decisions into measurements. This is what I proposed in “Where Did the Learning Go?”: write hypotheses (“cost down 40%”, “freshness up 10x”), then instrument and measure. Learn from the deltas (a rough sketch of this loop follows the list).
- Teach. Writing a design doc someone else can implement — or explaining a system to a teammate — forces clarity.
- Read other people’s intent. As orchestration becomes normal, specs become first-class artifacts. Read them like you used to read source code.
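To make the measurement practice concrete, here's a minimal sketch of what it could look like, assuming you track hypotheses in something as simple as a dataclass. The `Hypothesis` record and its field names are hypothetical, not anything from the original post; the point is the shape of the loop: write down the prediction, measure before and after, and look at the miss.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Hypothesis:
    """A decision recorded as a measurable prediction (illustrative only)."""
    decision: str                     # what you chose to do
    metric: str                       # what you expect the decision to move
    predicted_change: float           # e.g. -0.40 for "cost down 40%"
    baseline: float                   # the metric's value before the change
    observed: Optional[float] = None  # the metric's value after the change

    def actual_change(self) -> Optional[float]:
        """Relative change actually observed, once you've measured it."""
        if self.observed is None:
            return None
        return (self.observed - self.baseline) / self.baseline

    def miss(self) -> Optional[float]:
        """How far reality landed from the prediction."""
        actual = self.actual_change()
        if actual is None:
            return None
        return actual - self.predicted_change


# "Moving the cache to the edge will cut serving cost by 40%."
h = Hypothesis(
    decision="move cache to edge",
    metric="serving cost ($/day)",
    predicted_change=-0.40,
    baseline=1200.0,
)

# ...ship it, wait, then record what actually happened...
h.observed = 810.0

print(f"predicted {h.predicted_change:+.0%}, "
      f"got {h.actual_change():+.0%}, "
      f"missed by {h.miss():+.0%}")
```

The miss is where the learning lives: a prediction of 40% that lands at 32% tells you something your intuition didn't already know.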
The honest version
A lot of people won’t do any of this — not because they’re careless, but because the default path is insanely efficient.
If you optimize for velocity, agents handle 95% of the work. You go to meetings, write specs, review dashboards. Then one day you notice you can’t really build things without the agent anymore.
This isn’t new. It’s the same arc senior engineers have followed for decades. The difference is the timeline: not over 10 years. Over 2.
The people who stay sharp will be the ones who choose it on purpose. The ones who decide: “I’m spending 20% of my time reviewing diffs,” or “I’m doing a no-agent spike this quarter,” or “I’m going deep on this architecture decision.”
It’s a choice now. It won’t be an accident.
What this means for you
You’re picking your future right now.
If you want to be a manager — lean into Su’s model. Stop writing code. Focus on big decisions. Delegate everything. In a year you’ll be fluent in what matters at scale.
If you want to be an orchestrator — learn Intent’s model. Build spec literacy. Understand verification deeply. Become expert at writing intent that agents can execute.
If you want to stay a programmer — pick Beck’s path. Write code sometimes. Review agent code always. Stay obsessive about standards. In a year you’ll know more about code than most people who ship 10x faster.
Here’s my take: most people will drift toward Su/Intent by default, because it compounds output. Staying “code-close” will become a deliberate, slightly contrarian choice.
All three can work. None of them are automatic. You have to choose.
The honest conversation that’s not happening: as agents get smarter, learning requires intention. Su, Intent, and Beck all assume you’ll figure this out. But many people won’t — they’ll optimize for throughput and only notice the skill decay later.
If you build with agents now, you’re choosing your future. Make it deliberate.
This wraps the series on building with AI agents: where did the learning go, intent and the IDE, making your site agent-friendly, and the human in the loop. The question isn’t which approach is right. It’s which optimization function you’re choosing — and what you’re willing to sacrifice to stay sharp.