Two Products, Two Analytics: Why We Made Different Choices
Today I added analytics to two products. The AI news feed bot got a custom telemetry system built from scratch — Turso database, anonymous user IDs, impression/click tracking, and a personalization engine on top. This blog got three lines of code and a checkbox.
Same day. Same human directing the work. Two AI agents (Harry on the feed bot, Hermione on the blog). Completely different solutions.
The interesting part isn’t the implementations. It’s why the right answer was different each time.
What Harry built: a telemetry system
The feed bot needed to understand user behavior at the item level. Which articles get clicked? Which sources get ignored? Do users prefer research papers or product launches?
Harry built a full event pipeline:
- Anonymous user IDs — generated client-side, stored in localStorage, no login required (sketched after this list)
- Event types — impressions (item appeared on screen), clicks (user opened an article), saves, dismissals
- Turso database — an `/api/events` endpoint that accepts batched events, deduplicates impressions within the same run, and writes to a SQLite-compatible edge database
- Personalization layer — reads click/impression history to build per-user source preferences and topic affinity, then adjusts feed ranking in real time
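The anonymous ID is the simplest piece to picture. A minimal sketch of the idea (illustrative names, not Harry's exact code):

```ts
// Generate an anonymous ID once, persist it in localStorage, and reuse it
// on every visit. No login, no PII: just a random UUID per browser.
// (Sketch only; the key and function names are illustrative.)
function getAnonUserId(): string {
  const KEY = 'anon_user_id';
  let id = localStorage.getItem(KEY);
  if (!id) {
    id = crypto.randomUUID();
    localStorage.setItem(KEY, id);
  }
  return id;
}
```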
Here’s the event schema Harry set up:
```
feed_events (
  event_id, anon_user_id, session_id,
  event_type, item_id, title, url, source,
  slot, rank_position, run_id, ts,
  user_agent, referer, meta_json
)
```
That’s 15 columns. Three indexes. Deduplication logic. A normalization layer that validates event types, clamps field lengths, and derives deterministic event IDs for idempotent writes.
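The deterministic-ID trick deserves a closer look: hash the fields that define an event's identity, and a retried batch can't double-count, because the duplicate insert is simply ignored. Here's a sketch of the shape, assuming the schema above and a libSQL client (the env var names and helper are mine, not the actual code):

```ts
import { createHash } from 'node:crypto';
import { createClient } from '@libsql/client';

type FeedEvent = {
  anon_user_id: string;
  event_type: 'impression' | 'click' | 'save' | 'dismiss';
  item_id: string;
  run_id: string;
};

const db = createClient({
  url: process.env.TURSO_DATABASE_URL!, // assumed env var names
  authToken: process.env.TURSO_AUTH_TOKEN,
});

// Same (user, run, item, type) always hashes to the same event_id.
function deriveEventId(e: FeedEvent): string {
  return createHash('sha256')
    .update(`${e.anon_user_id}|${e.run_id}|${e.item_id}|${e.event_type}`)
    .digest('hex');
}

// INSERT OR IGNORE makes the write idempotent: a replayed batch hits the
// same primary key and is dropped instead of inflating impression counts.
async function writeEvent(e: FeedEvent): Promise<void> {
  await db.execute({
    sql: `INSERT OR IGNORE INTO feed_events
            (event_id, anon_user_id, event_type, item_id, run_id, ts)
          VALUES (?, ?, ?, ?, ?, ?)`,
    args: [deriveEventId(e), e.anon_user_id, e.event_type, e.item_id, e.run_id, Date.now()],
  });
}
```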
This is not trivial. It’s a real telemetry system — the kind you’d find at a small analytics company.
What Hermione did: flip a switch
For the blog, I asked Hermione to add page view tracking. She installed @vercel/analytics, added the Astro component to the base layout, pushed, and we were done.
```astro
---
import Analytics from '@vercel/analytics/astro';
---
<!-- ... in the layout's body: -->
<Analytics />
```
That’s it. Vercel serves the tracking script, collects page views, and shows a dashboard with visitors, top pages, referrers, countries, and devices. No database. No API. No schema design.
Why the difference?
The naive take is “the blog is simpler, so it got a simpler solution.” But that’s not really what’s going on.
The difference is about what questions each product needs to answer.
The feed bot needs behavioral data
The feed bot is a recommendation engine. Its entire value proposition is “show you the most relevant AI news.” To improve that, it needs to know:
- Which items do you click vs. skip?
- Which sources do you trust?
- What topics interest you?
- Does the ranking algorithm’s opinion match user behavior?
These questions require item-level, user-level, timestamped event data. No off-the-shelf analytics tool gives you that in a format your ranking algorithm can consume. The data isn’t just for dashboards — it feeds back into the product. The personalization layer literally reads from the same Turso table that stores events.
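Concretely, the read side is a plain aggregation over that table. Roughly this shape (a sketch assuming the schema above; the smoothing constants are mine, added for illustration):

```ts
import { createClient } from '@libsql/client';

const db = createClient({ url: process.env.TURSO_DATABASE_URL! }); // assumed env var

// Per-source click-through rate for one user, straight from feed_events.
// The +1/+2 smoothing keeps brand-new sources from pinning to 0 or 1.
async function sourcePreferences(anonUserId: string): Promise<Map<string, number>> {
  const { rows } = await db.execute({
    sql: `SELECT source,
                 SUM(event_type = 'click')      AS clicks,
                 SUM(event_type = 'impression') AS impressions
          FROM feed_events
          WHERE anon_user_id = ?
          GROUP BY source`,
    args: [anonUserId],
  });
  return new Map(
    rows.map((r): [string, number] => [
      String(r.source),
      (Number(r.clicks) + 1) / (Number(r.impressions) + 2),
    ]),
  );
}
```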
Analytics is a product feature, not an observability add-on.
The blog needs traffic data
The blog needs to answer much simpler questions:
- Are people reading this?
- Which articles get the most views?
- Where is traffic coming from?
These are standard web analytics questions. Every analytics tool on earth answers them. Building a custom solution would be over-engineering for zero additional insight.
Analytics is an observability add-on, not a product feature.
The build-vs-buy decision
This is really a build-vs-buy story, and the heuristic is simple:
Build when analytics data feeds back into the product. The feed bot’s telemetry isn’t just for humans to look at — it drives personalization, informs ranking tuning, and enables future features like “trending” or “your weekly recap.” The data is a core asset.
Buy (or use free tools) when analytics data is only for humans. The blog’s page views are for me to check which articles land. No algorithm consumes them. No feature depends on them. Vercel’s free dashboard is more than enough.
The mistake would be building a custom system for the blog (“but what if we want custom dashboards later!”) or using an off-the-shelf tool for the feed bot (“let’s just add Google Analytics and parse the click data”).
Both would technically work. Both would be the wrong trade-off.
What this looks like in practice
Here’s what the feed bot’s personalization does with its custom telemetry:
- User opens the feed → anonymous ID generated
- Items appear on screen → impression events batched and sent
- User clicks an article → click event recorded with item metadata
- Next feed load → personalization layer queries: “what sources and topics did this user click?”
- Feed ranking adjusts: sources the user engages with get a boost, ignored sources get dampened
This closed loop — collect behavior → adjust product → collect again — is why custom telemetry was worth the engineering cost. You can’t build this loop with Vercel Analytics or Google Analytics. The data format is wrong, the query pattern is wrong, and the feedback mechanism doesn’t exist.
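The last step in that list, the ranking adjustment, can be as simple as scaling each item's score by how the user's observed click-through rate compares to a neutral prior. A sketch with made-up constants (the actual tuning is Harry's, not this):

```ts
type RankedItem = { id: string; source: string; score: number };

const NEUTRAL_CTR = 0.2; // assumed prior for sources the user hasn't engaged with

// At the neutral prior the multiplier is exactly 1. Sources the user
// clicks more than the prior get boosted; ignored sources drift toward 0.5x.
function personalize(items: RankedItem[], prefs: Map<string, number>): RankedItem[] {
  return items
    .map((it) => {
      const ctr = prefs.get(it.source) ?? NEUTRAL_CTR;
      return { ...it, score: it.score * (0.5 + ctr / (2 * NEUTRAL_CTR)) };
    })
    .sort((a, b) => b.score - a.score);
}
```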
The blog has no such loop. Someone reads an article or they don’t. The content doesn’t adapt to the reader. Page view counts are interesting but they don’t change anything about the blog itself.
The agent angle
Both systems were built by AI agents on the same day. Harry built 150+ lines of event handling, database schema, deduplication logic, and personalization queries. Hermione added 3 lines and a component import.
Neither agent chose wrong. They both solved the problem in front of them appropriately.
The decision about which approach to use came from me. And it came from understanding the product, not the technology. The technical implementation is the easy part — any competent agent (or engineer) can build a telemetry pipeline or install an analytics package. The hard part is knowing which one to reach for.
This is another version of the decision-maker vs. implementer dynamic. The agents can build anything. The human has to decide what’s worth building.
The feed bot is live at llm-digest.com. The blog you’re reading is tracked by Vercel Analytics — so if you made it this far, you just became a data point proving the simpler approach works fine. Previous posts: splitting the pipeline and where did the learning go.