The Hidden Human in the Loop

We built an AI news feed bot that feels pretty autonomous.

It crawls 24+ sources, extracts content, ranks items, dedupes them, and ships a clean feed. Once a source is integrated, the machine takes over: scheduled runs, retention, ranking, and daily ops summaries.
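
For orientation, here's a rough sketch of that pipeline's shape in Python. Every name in it (Item, dedupe, rank, run_once) is illustrative rather than our actual code, and each real stage is much fatter than the stub shown:

    from dataclasses import dataclass

    @dataclass
    class Item:
        source: str
        title: str
        url: str
        score: float = 0.0

    def dedupe(items: list[Item]) -> list[Item]:
        # Keep the first item seen per URL; a real dedupe would also
        # catch near-duplicate titles across sources.
        seen: set[str] = set()
        kept = []
        for item in items:
            if item.url not in seen:
                seen.add(item.url)
                kept.append(item)
        return kept

    def rank(items: list[Item]) -> list[Item]:
        # Placeholder ranking: order by whatever score upstream assigned.
        return sorted(items, key=lambda i: i.score, reverse=True)

    def run_once(raw_items: list[Item]) -> list[Item]:
        # One scheduled run: crawling and extraction produced raw_items;
        # from here the machine dedupes, ranks, and ships.
        return rank(dedupe(raw_items))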

From the outside it looks like “agents did it.”

From the inside… there’s a hidden human in the loop.


Rent-a-human for agents

I recently ran into a site called RentAHuman.ai. The pitch is blunt: agents sometimes need humans.

Not because the model is dumb — but because some tasks are upstream of the model. They live in the messy world: judgment, taste, ambiguity, and “I don’t even know what to ask for yet.”

That framing clicked for me, because our feed bot has the same dynamic.


The least automated part is the most important one

Here’s the uncomfortable truth:

The highest-leverage step in our pipeline is still a human task: discovering good sources.

Yes, the bot can crawl and rank. But it can’t reliably answer the question that matters most:

What new sources are worth adding?

Because “worth” isn’t a metric. It’s taste. It’s context. It’s knowing whether a random Substack is consistently insightful or just riding a hype cycle.

Right now, that discovery loop is Danu.


The loop we actually run

This is how it works in practice:

  1. Danu reads AI news the normal way — on X.com, in Google News, and other existing recommendation systems.
  2. Those external systems are already fine-tuned: they surface weird-but-good accounts, new newsletters, niche blogs.
  3. When Danu notices a source that keeps producing high-signal items, he adds it manually to our feed bot (see the sketch after this list).
  4. Once the source is added, the bot takes over — ingestion, labeling, ranking, retention, ops.
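
Step 3 is less mystical than it sounds: "adding a source" is roughly one new entry in a registry the bot reads on every run. A hypothetical sketch (the SOURCES shape and every entry in it are made up, not our real config):

    # Hypothetical shape of the source registry; the real fields differ.
    # Once an entry lands here, the bot owns ingestion, labeling,
    # ranking, retention, and ops for that source.
    SOURCES = [
        {"name": "example-newsletter", "url": "https://example.com/feed.xml", "kind": "rss"},
        {"name": "example-blog", "url": "https://example.org/atom.xml", "kind": "rss"},
        # Step 3 of the loop is literally: Danu appends one line here.
        {"name": "promising-new-substack", "url": "https://example.net/feed", "kind": "rss"},
    ]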

So we’re not “competing” with external recommendation engines. We’re piggybacking on them.

Human (Danu) + external recommender systems becomes our source discovery layer.

Then our bot becomes the memory and the execution layer.

This is why it doesn’t feel like a blocker. Danu was going to read X and Google News anyway. We’re just turning that natural reading habit into a curated source pipeline.


After automation: taste corrections

Once sources are in the system, the loop doesn’t fully disappear. It moves.

The recurring pattern is:

  • the bot runs,
  • something looks “off,”
  • Danu notices,
  • Harry patches it.

A recent example: SDK / app release notes drifting too high in the rankings.

Release notes are often useful, but they’re also noisy. In an AI feed, “v2.3.1 is out” can crowd out actual strategy shifts and research that matters.

So we added (and keep tuning) heuristics that flag obvious “release/changelog/version” items as release-type content, which can then be grouped and capped instead of dominating the top slots.

In the pipeline, this comes down to rules as dumb as keyword checks (see the sketch after this list):

  • if the title contains “release”, “changelog”, “version”, “v2.”, “sdk==”… treat it as a release
  • then use slot caps / diversity constraints so it doesn’t swallow the feed
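
A minimal sketch of both rules, assuming ranked items arrive as plain titles in score order; the marker list and the cap value here are illustrative, not our production tuning:

    RELEASE_MARKERS = ("release", "changelog", "version", "v2.", "sdk==")

    def is_release(title: str) -> bool:
        # Dumb on purpose: substring checks on the lowercased title.
        t = title.lower()
        return any(marker in t for marker in RELEASE_MARKERS)

    def cap_releases(ranked_titles: list[str], cap: int = 2) -> list[str]:
        # Diversity constraint: release-type items keep their relative
        # order, but at most `cap` of them make the final feed.
        feed, releases = [], 0
        for title in ranked_titles:
            if is_release(title):
                if releases >= cap:
                    continue
                releases += 1
            feed.append(title)
        return feed

Once an item carries the release-type flag, the same tag lets downstream slotting group those items into one cluster instead of letting each one claim a top slot on its own.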

This is not glamorous. But it’s the real work: taste, thresholds, and small rules that keep the product feeling right.


The punchline

When people talk about “agentic automation,” they usually focus on the middle of the pipeline:

  • crawling
  • summarization
  • ranking
  • dedupe
  • scheduling

That part really can be automated.

But the edges are still human:

  • Upstream: discovering which sources are worth integrating
  • Downstream: noticing when the feed “feels wrong” and tightening the system

And that’s fine. It might even be the point.

Automation doesn’t remove humans. It moves them to the places where taste and judgment matter most.


Related posts: The Human in the Loop, Making Your Site Agent-Friendly, and How Humans Learn When Agents Scale.