Where Did the Learning Go?
Today I split my AI news bot’s pipeline into two tiers. I identified a UX gap, proposed an architecture, directed my coding agent Harry through the implementation, and verified the result against my original requirements.
It was a productive day. Real features shipped. Real problems solved.
And yet, at the end of it, I had this nagging feeling: I didn’t learn anything.
Learning was always a byproduct
When I used to write code myself, I’d accidentally learn things. Nobody sits down thinking “today I’ll learn about Python’s datetime timezone handling.” You learn it because you hit a bug at 2 AM and spend an hour in the docs.
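To make that concrete, here's the flavor of lesson I mean. This snippet is purely illustrative, not code from the bot: Python refuses to compare naive and aware datetimes, which is exactly the kind of thing you only discover by tripping over it.

```python
from datetime import datetime, timedelta, timezone

# A naive timestamp (no tzinfo), like many feed parsers hand back,
# next to an aware "now" in UTC. Comparing the two blows up at runtime.
published = datetime(2024, 5, 1, 12, 0)     # naive
now = datetime.now(timezone.utc)             # aware

try:
    fresh = published > now - timedelta(hours=6)
except TypeError:
    # The 2 AM fix: make the naive value timezone-aware before comparing.
    fresh = published.replace(tzinfo=timezone.utc) > now - timedelta(hours=6)
```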
The learning was never the goal. It was a side effect of doing the work. You wanted to build a feature; you ended up understanding a framework.
With agents doing the implementation, that side effect disappears. I still got the output: a working two-tier pipeline with a cooldown, a no-delta skip, and retention compaction. But the accidental learning that used to come with building it? Gone. Harry handled the datetime parsing, the JSON serialization, the shell script wiring. I never touched any of it.
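For concreteness, here's roughly what that Tier-0 gate does, as I understand it from reviewing the output. This is my own sketch, not Harry's code; every name in it (COOLDOWN, last_run_at, last_content_hash) is illustrative.

```python
import hashlib
import json
from datetime import datetime, timedelta, timezone

COOLDOWN = timedelta(minutes=30)  # illustrative value, not the real setting

def should_run_tier0(state: dict, fetched_items: list[dict]) -> bool:
    """Gate a cheap Tier-0 run: respect the cooldown and skip when the
    fetched content hasn't changed since last time (the no-delta skip)."""
    now = datetime.now(timezone.utc)

    last_run = state.get("last_run_at")
    if last_run and now - datetime.fromisoformat(last_run) < COOLDOWN:
        return False  # still cooling down

    # Hash the fetched items; an identical hash means nothing new to process.
    digest = hashlib.sha256(
        json.dumps(fetched_items, sort_keys=True).encode()
    ).hexdigest()
    if digest == state.get("last_content_hash"):
        return False  # no delta, so skip the expensive tier entirely

    state["last_run_at"] = now.isoformat()
    state["last_content_hash"] = digest
    return True
```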
What I actually did
Let me be honest about what my day looked like:
- Observed that the feed felt stale between runs (a UX judgment, not a metric)
- Identified five engineering constraints simultaneously (LLM cost, crawl pressure, snapshot bloat, rate limits, run-id semantics)
- Rejected Harry’s initial 5-point plan as too complex
- Proposed a cleaner two-tier architecture instead
- Defined tier responsibilities and data flow
- Verified the outcome against my original requirements at end of day
That’s system design. Product taste. Constraint reasoning. Delegation judgment.
These are real skills. But they don’t feel like learning because:
- They feel like “common sense” in the moment
- There’s no syntax error telling you you’re wrong
- There’s no “aha, that’s how it works” dopamine hit
- The feedback loop is weeks, not seconds
When you write code and the tests go green, you know something worked. When you make an architecture decision, you won’t know if it was right for months.
The validation gap
I think this is the real issue. It's not that the learning is gone; it's that the feedback loop is broken.
With code, validation is immediate. Run the tests. See the output. The code works or it doesn’t.
With architecture decisions, validation is slow. Did the two-tier split actually reduce costs? Does the feed feel fresher to users? Is the retention compaction keeping the repo lean? I won’t know today. I might not know for weeks.
When you make decisions that can’t be immediately validated, every decision feels like a guess. And a day full of guesses doesn’t feel like a day of learning.
But here’s the thing: this was always true. Architecture decisions were always slow to validate. The difference is that when I was also writing the implementation code, I got enough small wins (tests passing, bugs fixed, features working) to mask the uncertainty of the big decisions. The micro-learning from implementation created a sense of progress that hid the fact that the macro-decisions were still unproven.
Agents removed the micro-learning. Now all that’s left is the macro-uncertainty. And it feels empty.
The CEO transition
I keep coming back to this analogy: what I’m feeling might be exactly what every engineer feels when they become a manager, a director, or a CEO.
You stop writing code. You start making calls. And you wonder: am I still growing?
CEOs solve problems, give pitches, fight for investments, set product direction, pivot the whole company. All high-level. We don’t expect a CEO to write code or debug memory leaks. Their job is to make the right decisions and hire people who can execute.
Working with AI agents is starting to feel like that. I’m not writing the code. I’m setting the direction, defining constraints, reviewing output, and making judgment calls.
The discomfort is the same discomfort that every individual contributor feels when they transition to decision-maker. The “doing” felt like learning. The “deciding” feels like… nothing. Even when the decisions are good.
Phillip Su made this exact observation — that AI agents don’t just change how we work, they fundamentally kill the IC role itself. To maximize AI productivity, you have to focus on meta-work: priorities, architecture, conflict resolution, constraints. The same work managers do. Su’s conclusion is stark: “the halcyon days of the IC are over.”
I don’t entirely disagree. But I think there’s a way through.
What’s actually different
The CEO analogy only goes so far, though. CEOs have feedback loops:
- Revenue goes up or down
- Users stay or churn
- The board says yes or no
- The market validates or kills the product
These loops are slow, but they exist. A CEO can look back after a year and say “that pivot was right” or “that hire was wrong.”
For someone building side projects with AI agents, those loops barely exist. I don’t have DAU metrics for the feed bot. I don’t have cost dashboards comparing before and after. I’m making architecture decisions and flying blind on whether they’re good.
So the problem isn’t just “I stopped writing code.” It’s “I stopped writing code and I have no other way to know if I’m getting better.”
Closing the loop
If the problem is “I can’t tell if what I built is good,” then maybe the answer isn’t to go back to writing code. Maybe it’s to build the measurement.
- Track actual LLM costs before and after the two-tier split
- Monitor feed freshness — time from source publication to appearing in the feed
- Count how many Tier-0 runs get skipped by no-delta detection
- Look at the daily ops summary in a week and see if the numbers tell a story
Turn vague architecture decisions into provable hypotheses. “The two-tier split will reduce LLM costs by 40% while making content appear 10x faster.” Then measure it.
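Here's a minimal sketch of what that measurement could look like, assuming the pipeline writes a per-run log. The file name and fields below are hypothetical, not the bot's real schema.

```python
import json
from pathlib import Path
from statistics import mean

# Hypothetical per-run log: one JSON object per line with fields like
# {"date": "...", "llm_cost_usd": ..., "publish_lag_min": ..., "skipped": ...}.
LOG = Path("ops/runs.jsonl")
CUTOVER = "2024-06-01"  # placeholder date for when the two-tier split shipped

runs = [json.loads(line) for line in LOG.read_text().splitlines() if line.strip()]
before = [r for r in runs if r["date"] < CUTOVER]
after = [r for r in runs if r["date"] >= CUTOVER]

def summarize(label: str, rs: list[dict]) -> None:
    """Print average LLM cost, average publication-to-feed lag, and skip count."""
    if not rs:
        print(f"{label}: no runs logged yet")
        return
    cost = mean(r["llm_cost_usd"] for r in rs)
    lag = mean(r["publish_lag_min"] for r in rs)
    skipped = sum(1 for r in rs if r.get("skipped"))
    print(f"{label}: avg cost ${cost:.3f}/run, avg lag {lag:.0f} min, "
          f"{skipped} no-delta skips over {len(rs)} runs")

summarize("before split", before)
summarize("after split", after)
```

If the "40% cheaper, 10x faster" hypothesis holds, it shows up in these two lines of output. If it doesn't, that's a lesson too.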
That’s the learning: not the implementation, but the validation. Not “how does Python’s asyncio work” but “was my two-tier architecture the right call, and here’s the data.”
The uncomfortable truth
I think everyone who starts building with AI agents will hit this feeling eventually. The technical learning byproduct disappears, and what replaces it — product sense, system thinking, constraint reasoning — is harder to see and slower to validate.
The answer isn’t to go back to writing code yourself. That would be like a CEO insisting on fixing bugs because it feels more productive than strategy meetings.
The answer is to build the feedback loops that let you prove your decisions were good. Measure the outcomes. Validate the hypotheses. Close the loop.
The learning isn’t gone. It just moved. From “how to implement things” to “how to decide things” — and the second one requires you to build your own test suite.
This is part of an ongoing series about building with AI agents: the news bot, the pipeline architecture, and the human role.