The Claude Brain

How I built an AI-powered product intelligence system from scratch, used it to make faster and better-evidenced decisions, and what I learned about the limits of a tool only one person can use.

TL;DR

  • As the only PM across the entire product line, I had customer feedback coming in from five sources that didn’t talk to each other — synthesizing it was eating hours I didn’t have
  • Built an Obsidian/Claude Code knowledge base with automated Reddit and review scraping, a Mixpanel MCP integration, and slash commands for the most common workflows
  • Used it to run a data-backed roadmap pivot in an afternoon that would have taken days manually
  • The system’s biggest limitation: only I can use it, which means every insight still flows through me

The Situation

As the only PM across the entire PLACE product line, I had a bandwidth problem that wasn’t going away.

Customer feedback was coming in from everywhere: Home Depot reviews, Reddit discussions, in-app surveys, support tickets, user interviews, Mixpanel telemetry. Each source lived in a different place. None of them talked to each other. Synthesizing across all of them to answer a single question (“what do users actually think about the nightlight feature?”) meant manually checking five different places, holding the relevant pieces in my head, and writing up a summary that nobody else could verify or build on.

Do that ten times a week across a twenty-person cross-functional team and you understand the problem. I needed a system.

What I Did

The origin story

I didn’t set out to build something sophisticated. After launch, feedback started trickling in and I needed a way to organize it. I started with an Excel spreadsheet. That became unmanageable within a few days.

I looked at off-the-shelf tools. None of them fit. They were either too generic, too expensive for a single PM, or couldn’t be tailored to the specific way I needed to work. It became clear that building something custom was the only real path.

Around that time, Claude Code was gaining traction. I saw a LinkedIn post from Teresa Torres about pairing Claude Code with Obsidian as a knowledge management system. I barely got past the headline before I started building.

The architecture

The system is built on an Obsidian vault synced via OneDrive, with Claude Code running on top as the AI layer. All data is stored as markdown files with YAML frontmatter. No database, no backend, no deployment pipeline. Everything runs locally.
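
Because everything is markdown with YAML frontmatter, the "database" is just files that any script can read. A minimal sketch of what a scraped note might look like and how a script could split it apart — the field names (`source`, `subreddit`, `confidence`) are illustrative, not the actual schema:

```python
def parse_note(text: str) -> tuple[dict, str]:
    """Split a markdown note into (frontmatter dict, body).

    Handles flat key: value frontmatter; a full YAML parser
    isn't needed for simple string fields like these.
    """
    if not text.startswith("---\n"):
        return {}, text
    header, _, body = text[4:].partition("\n---\n")
    meta = {}
    for line in header.splitlines():
        key, _, value = line.partition(":")
        if key.strip():
            meta[key.strip()] = value.strip()
    return meta, body.lstrip("\n")

note = """---
source: reddit
subreddit: homeautomation
confidence: high
---
Users keep asking why the nightlight can't be scheduled.
"""

meta, body = parse_note(note)
```

This is what "no database" buys: every tool in the pipeline, including Claude Code itself, reads the same plain files.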

The AI layer isn’t a traditional RAG system with vector embeddings and semantic search. Claude Code reads files directly from disk. The intelligence comes from a project-level configuration file that defines an evidence hierarchy (direct user feedback outweighs Reddit discussions, which outweigh competitor analysis, which outweigh general market research), tells the AI where each data source lives, and provides product context so every query is grounded in how PLACE actually works.
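
The evidence hierarchy is defined in prose for the AI, but the same ordering can be sketched as code. A toy version with hypothetical numeric weights (the real configuration is a text file, not a script):

```python
# Hypothetical weights mirroring the hierarchy described above:
# direct user feedback > Reddit > competitor analysis > market research.
EVIDENCE_WEIGHT = {
    "user_feedback": 4,
    "reddit": 3,
    "competitor_analysis": 2,
    "market_research": 1,
}

def rank_insights(insights: list[dict]) -> list[dict]:
    """Order insights so the strongest class of evidence surfaces first."""
    return sorted(
        insights,
        key=lambda i: EVIDENCE_WEIGHT.get(i["source_type"], 0),
        reverse=True,
    )

ranked = rank_insights([
    {"id": "a", "source_type": "market_research"},
    {"id": "b", "source_type": "user_feedback"},
    {"id": "c", "source_type": "reddit"},
])
```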

It’s deliberately simple. Simple meant I could build it, maintain it, and extend it myself.

What I built on top of it

Reddit and review monitoring. A Python script runs on a schedule, searches twelve subreddits for relevant discussions, applies keyword and confidence scoring, and saves qualifying posts as structured markdown files with full metadata. A separate script does the same for Home Depot reviews across all four PLACE models. Both feed automatically into the knowledge base. Over 1,000 Reddit threads scraped so far, with roughly 300 of the highest-priority threads processed into structured insight files following Teresa Torres’s opportunity solution tree framework.
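
The keyword-and-confidence scoring can be sketched roughly like this — the keywords, weights, and threshold are invented for illustration, and the real script is more involved:

```python
def score_post(title: str, body: str, keywords: dict[str, int]) -> int:
    """Toy relevance score: weighted keyword hits,
    with title matches counting double."""
    title_lc = title.lower()
    body_lc = body.lower()
    score = 0
    for kw, weight in keywords.items():
        score += 2 * weight * title_lc.count(kw)
        score += weight * body_lc.count(kw)
    return score

KEYWORDS = {"nightlight": 3, "smoke alarm": 5, "false alarm": 4}
THRESHOLD = 5  # posts scoring below this are discarded

post_score = score_post(
    "Nightlight won't turn off",
    "Anyone else seeing false alarm events at 3am?",
    KEYWORDS,
)
keep = post_score >= THRESHOLD
```

Posts that clear the threshold get written out as the markdown-with-frontmatter files the rest of the system consumes.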

Mixpanel integration. A custom MCP server wraps Mixpanel’s API and connects it directly to Claude Code. Eight tools in total, including event segmentation, retention analysis, funnel queries, event comparison, and property exploration. The server caches the full Mixpanel event schema at startup and uses fuzzy matching so when I ask about “smoke alarm” it finds “Smoke Alarm Triggered” without needing exact syntax. The practical effect: I can ask analytics questions in plain language without switching context to the Mixpanel UI.

Slash commands for the most common workflows. Three in regular use:

  • /generate-insights — finds unprocessed Reddit threads, spawns sub-agents to analyze each one, generates structured insight files with exact user quotes linked back to the source. Runs 2-3 threads in parallel to avoid context window issues.
  • /prd — takes a Discovery Workshop output and generates a structured PRD following our internal format, calibrated to IoT and hardware constraints.
  • /deck — full slide deck authoring built on Marp, with a library of reusable slides, two themes, and export automation to PowerPoint and PDF. The GTM strategy, quarterly business reviews, and internal business cases have all been built with this.
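
The batching step inside /generate-insights — find threads with no insight file yet, then hand them to sub-agents a few at a time — can be sketched as follows. Thread ids and the batch size are illustrative:

```python
def batch_unprocessed(
    raw: list[str], processed: set[str], batch_size: int = 3
) -> list[list[str]]:
    """Group thread ids that have no insight file yet into small
    batches, so each sub-agent's context stays manageable."""
    pending = [t for t in raw if t not in processed]
    return [
        pending[i : i + batch_size]
        for i in range(0, len(pending), batch_size)
    ]

batches = batch_unprocessed(
    ["t1", "t2", "t3", "t4", "t5"], processed={"t2"}, batch_size=3
)
```

Keeping batches to two or three threads is what prevents any single sub-agent from blowing past its context window.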

The roadmap pivot that made the investment obvious

The clearest demonstration of what this system does was a roadmap reprioritization I ran after an in-app survey.

The survey asked users to rank roughly ten candidate features by interest. The results were surprising: features that had been near the top of our internal priority list came in at the bottom. What scored high were features that the market-leading predecessor had and we didn’t.

With the knowledge base, I was able to cross-reference those results against Reddit discussions in minutes, pulling exact user quotes about the missing features. I ran another analysis of thousands of competitor reviews to quantify how well-received those features were elsewhere. The result was a multi-source evidence case for a roadmap pivot, built in an afternoon.

Without the system, that analysis would have taken days and stayed largely qualitative. With it, I had specific evidence from verified users, Reddit power users, and competitor reviews, all weighted by the evidence hierarchy, all pointing in the same direction. I changed the roadmap.

What Changed

  • Customer feedback from disparate sources is centralized and queryable rather than siloed across platforms that each require manual checking
  • Roadmap decisions are grounded in multi-source evidence rather than internal assumptions
  • Analytics questions get answered without context-switching to the Mixpanel UI
  • PRD generation from workshop output takes minutes instead of days
  • Competitive intelligence runs continuously rather than periodically
  • 286 structured insight files generated from over 1,000 scraped Reddit threads, representing a body of customer knowledge that would have taken months to build manually

What I’d Do Differently

The system grew organically, and that shows in its biggest limitation: only I can use it.

All of the customer knowledge, the evidence, the product context lives in my local Obsidian vault (backed up on OneDrive) and runs through my Claude Code instance. I am the only access point. Every insight has to flow through me. Every strategic recommendation I make is backed by evidence that everyone else would have to dig up manually, piece by piece.

There’s a real tension here. Moving fast with AI tools means blazing a trail. The downside is you can leave everyone else in the dust. When Claude is your thought partner and the full context lives in your personal AI brain, you develop a working model of the product that’s increasingly hard to share with people who don’t have that context. Buy-in gets harder when your reasoning is grounded in a system nobody else can interrogate. I think this is a common early failure mode for AI-assisted work.

I’m starting to think the fix is simpler than rebuilding anything: move the raw data and core context files into a shared repository, document the evidence hierarchy clearly enough that the reasoning is legible without me as the interpreter, and think of the knowledge base as an organizational asset rather than a personal one. The goal isn’t to make everyone use Claude Code. It’s to make the underlying knowledge accessible whether or not they do.