The Right Amount of AI Is a Skill

Every developer I know uses AI for coding now.

That sentence used to be a hot take. In 2026 it’s just ambient reality.

The argument has shifted. It’s not “should we use AI?” anymore. It’s:

Which is riskier — using AI too much, or using AI too little?

I first saw this framing in a great post by Tom Wojcik, “What AI coding costs you”. He describes the real cost that doesn’t show up on any dashboard: when the model does the implementation, you lose the little bits of learning that used to happen along the way.

I’ve felt the same thing building our AI news feed with Harry. The bot ships faster than I ever could — but the “I learned something today” feeling is harder to earn.

The industry is already finding a path through this, and it’s not “go back to hand-coding.”

It’s: agentic engineering is becoming a skill.


The spectrum moved (again)

Wojcik describes a spectrum:

  • On the left: humans typing code, seeing every line.
  • On the right: AGI building everything on its own.
  • In the middle: you, today.

The key point is that the threshold moves every week. Models improve, tools mature, workflows solidify, and the “safe” amount of autonomy expands.

So “the right amount of AI” is not a fixed policy.

It’s a moving target.


Two failure modes

What I like about this framing is that it admits two ways to be wrong:

1) Using too much AI

You can ship a lot and still quietly lose:

  • debugging instincts
  • architectural intuition at the code level
  • a feel for complexity and edge cases
  • your ability to estimate effort without the agent

If you never look at the code, you’ll eventually stop being able to tell when something is subtly wrong.

2) Using too little AI

You can also be wrong in the other direction.

If you refuse the tooling, you don’t just lose speed — you lose iteration count. The team (or the market) takes ten shots while you take one. That gap compounds.

In practice, “too little AI” often looks like moral purity that turns into irrelevance.


The thing nobody wants to say: this is a craft now

Agentic engineering is not “ask the agent nicely.”

It’s a craft with real techniques:

  • choosing the right level of autonomy for the task
  • breaking work into chunks that are verifiable
  • designing guardrails (tests, invariants, budgets)
  • building feedback loops (metrics, regressions, ops summaries)
  • knowing when to pull the agent back and go hands-on

That last one matters. The right amount of AI is often situational.

When the task is exploratory or high-stakes, you stay closer. When the task is mechanical, you delegate harder.

This is why there is no single best workflow. Kent Beck’s augmented-coding approach, Su’s manager framing, and tools like Intent are all different local optima.

What’s changing is that the ability to pick among them is becoming a core skill.
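To make one of those techniques concrete, here’s what a budget guardrail might look like. Everything in this sketch — class names, limits, the fake per-step token counts — is hypothetical; the point is only the shape of a hard stop on an agent loop:

```python
# Illustrative sketch of one guardrail: a hard budget on an agent loop.
# Class names, limits, and per-step token counts are all hypothetical.

class BudgetExceeded(Exception):
    """Raised when an agent run blows past its budget."""

class Guardrails:
    def __init__(self, max_steps: int, max_tokens: int):
        self.max_steps = max_steps
        self.max_tokens = max_tokens
        self.steps = 0
        self.tokens = 0

    def charge(self, tokens_used: int) -> None:
        # Record one agent step, then stop the run if over budget.
        self.steps += 1
        self.tokens += tokens_used
        if self.steps > self.max_steps or self.tokens > self.max_tokens:
            raise BudgetExceeded(
                f"stopped after {self.steps} steps / {self.tokens} tokens"
            )

guard = Guardrails(max_steps=20, max_tokens=50_000)
for step_tokens in [8_000, 12_000, 15_000, 20_000]:  # pretend agent steps
    try:
        guard.charge(step_tokens)
    except BudgetExceeded as stop:
        print(stop)  # the run ends instead of burning more budget
        break
```

The interesting design choice is that the guardrail is dumb on purpose: it doesn’t judge the work, it just bounds how wrong an unattended run can go.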


The hidden human in the loop

If you’ve read our feed bot stories, you’ve already seen this.

Even the parts that look automated have humans hiding at the edges:

  • Upstream: source discovery is still Danu reading X/Google News and adding good sources manually.
  • Downstream: when the feed “feels off” (e.g., SDK/app releases drifting too high), we adjust ranking rules and caps.

Once a source is in the system, it’s automated. But the selection of what to integrate, and the taste corrections when things drift, are human work.
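A cap of the kind we reach for when the feed drifts can be sketched as a single pass over the ranked list. The category names and cap values below are invented for the example; the real rules in the bot are more involved:

```python
# Illustrative sketch: a per-category cap applied to a ranked feed.
# Categories and cap values are made up for the example.

def apply_caps(ranked_items, caps):
    """Walk items in ranked order, dropping any item whose
    category has already hit its cap."""
    seen = {}
    kept = []
    for item in ranked_items:
        cat = item["category"]
        limit = caps.get(cat)
        if limit is not None and seen.get(cat, 0) >= limit:
            continue  # this category is over its cap; skip the item
        seen[cat] = seen.get(cat, 0) + 1
        kept.append(item)
    return kept

feed = [
    {"title": "SDK 2.1 released", "category": "sdk_release"},
    {"title": "New reasoning benchmark", "category": "research"},
    {"title": "App v3 ships", "category": "sdk_release"},
    {"title": "SDK 2.2 released", "category": "sdk_release"},
]

# With sdk_release capped at 2, the third release gets dropped.
capped = apply_caps(feed, {"sdk_release": 2})
print([item["title"] for item in capped])
```

Note that the cap itself is automated; deciding that “too many SDK releases” is a problem, and picking the number 2, is the human taste correction.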

The agent doesn’t replace the human. It moves the human to the parts that matter.


How humans keep learning

This was the question that bothered me most: if the model does the implementation, what’s left for the human to learn?

My current answer is:

  1. Build measurement so decisions can be validated. Otherwise you’re just guessing at architecture.

  2. Keep at least one “code-close” loop. Even if you delegate implementation, you need some contact with reality: diffs, reviews, small spikes.

  3. Treat autonomy as a dial, not a switch. The right setting changes by task.
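The “dial, not a switch” idea can be sketched as a tiny routing rule. The levels and the rule of thumb below are my own illustration, not a prescription:

```python
# Illustrative sketch: autonomy as a dial with a few named settings.
# The levels and the routing rule are hypothetical.

from enum import Enum

class Autonomy(Enum):
    PAIR = "pair"          # human writes, agent suggests
    REVIEW = "review"      # agent writes, human reviews every diff
    DELEGATE = "delegate"  # agent ships behind tests; human spot-checks

def dial(task_is_mechanical: bool, task_is_high_stakes: bool) -> Autonomy:
    """Pick an autonomy setting per task (a rule of thumb, not a law)."""
    if task_is_high_stakes:
        return Autonomy.PAIR      # exploratory or risky: stay close
    if task_is_mechanical:
        return Autonomy.DELEGATE  # mechanical: delegate harder
    return Autonomy.REVIEW        # default: keep a code-close loop

# A bulk rename is mechanical; a billing change is high-stakes.
assert dial(task_is_mechanical=True, task_is_high_stakes=False) is Autonomy.DELEGATE
assert dial(task_is_mechanical=False, task_is_high_stakes=True) is Autonomy.PAIR
```

The exact levels matter less than the habit of asking the question per task instead of once per career.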

This is not as satisfying as learning a new framework by brute force.

But it’s real growth.

It’s learning how to steer.


The new baseline

A year ago, “using AI for coding” was the differentiator.

Now it’s table stakes.

The differentiator is whether you can:

  • choose the right amount of AI for the job
  • verify outcomes without drowning in details
  • keep your own learning loop alive

That’s a skill.

And like every other skill, people who practice it will look like magicians.


Related posts: Where Did the Learning Go?, How Humans Learn When Agents Scale, and The Hidden Human in the Loop.