The meta case study: how I used an AI-powered workflow to structure 2.5 years of PM work into a portfolio, and what it reveals about how I work with AI tools.
This portfolio was built using the same AI-powered workflow described in Case Study 06. If you’re reading these case studies and thinking “these are well-structured and specific,” here’s how they got that way.
I had 2.5 years of PM work to showcase, spanning 11 distinct roles, but almost none of it was written down in portfolio-ready form. I had internal documents, a performance review, two separate letters from my manager advocating for an additional bonus for me (approved by the CTO), and a lot of stories in my head. Starting from a blank page felt impossible.
I used Claude Code (Anthropic’s CLI tool) as an interview partner, writing collaborator, and project manager for the entire portfolio build. The process looked like this:
The first conversations established the portfolio’s framing. Claude helped me articulate something I’d been struggling to express: that having filled 11 different PM roles in 2.5 years wasn’t a lack of focus. It was prioritization skill. The reframe:
“I figured out which mattered most at any given moment, did what was needed to unblock progress, and moved to the next bottleneck.”
We mapped the 11 roles to 7 case study groupings organized by TYPE OF PM WORK, not by project. This was a deliberate choice to show breadth as a feature, not a bug.
I can’t write from a blank page. I can talk through stories all day. So we treated it like an interview.
Claude asked me targeted questions. I talked through the answers in my natural voice. Claude saved my raw words as blockquoted transcripts, with structural notes underneath. This preserved my authentic voice while building the raw material the case studies needed.
Example of the interview dynamic:
When I gave vague answers (“all of the customer feedback helped shape the roadmap”), Claude pushed back: “Give me one specific example where a Reddit thread changed what you built.” That’s good interview technique, and it forced me to surface the roadmap pivot story that became the centerpiece of Case Study 06.
Here’s where it gets recursive. My product knowledge base (Case Study 06) is also built on Claude Code. So I had one Claude Code instance building this portfolio and another Claude Code instance sitting on top of two years of proprietary product data — survey results, internal strategy decks, telemetry, competitive analysis, customer interview transcripts.
Those two systems could talk to each other directly. They didn’t, because I chose to be the man in the middle. The knowledge base contains nine months of proprietary product data that doesn’t belong in a public portfolio. But the portfolio needed the strategic thinking that the knowledge base was built on.
The solution was a human airlock: me.
When I needed technical details about the knowledge base’s architecture, I asked the knowledge base to describe itself and passed the output to the portfolio Claude as raw material. When I needed to surface the GTM strategy without exposing proprietary specifics, I wrote a structured prompt to the knowledge base Claude with explicit generalization rules:
“Describe strategic patterns, not confidential details. ‘Establish direct consumer channel for full-funnel visibility’ yes. Specific platform or vendor names, no. Revenue figures, unit targets, internal cost data: no.”
The knowledge base Claude generated a summary. I reviewed it for anything that crossed the line. What passed the review went to the portfolio Claude as source material. What didn’t stayed on the other side of the airlock.
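The airlock flow above can be sketched in code, purely as an illustration: the actual gate was me, not a script, and the `BLOCKED_TERMS` list and keyword check below are hypothetical stand-ins for the generalization rules and the human review step.

```python
from typing import Optional

# Illustrative sketch of the "human airlock" pattern: material from the
# knowledge-base side crosses to the public portfolio side only after a
# review gate applies explicit generalization rules.
BLOCKED_TERMS = ["revenue", "unit target", "cost data", "vendor"]

def review_flags(summary: str) -> bool:
    """Return True if the summary crosses the proprietary line.
    (A keyword check standing in for human judgment.)"""
    lowered = summary.lower()
    return any(term in lowered for term in BLOCKED_TERMS)

def airlock(summary: str) -> Optional[str]:
    """Pass material through only if review finds nothing proprietary."""
    if review_flags(summary):
        return None   # stays on the knowledge-base side
    return summary    # cleared as source material for the portfolio

# Strategic patterns pass; proprietary specifics do not.
print(airlock("Establish direct consumer channel for full-funnel visibility"))
print(airlock("Q3 revenue against unit targets"))
```

The design point is the asymmetry: the default is "stays behind," and only explicitly cleared material moves, which is the opposite of letting two systems share context freely.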
The first pass at the knowledge base's self-description missed half the system (the slash commands, the THD scraper, the presentation system). So I asked for a supplemental pass. The portfolio Claude caught the gap because it knew what questions still needed answering, and prompted me to go back for more.
Two AI systems, one body of proprietary knowledge, one public artifact. The human in the middle decides what moves between them. That’s not a limitation of the workflow. It’s the point.
Early on, Claude was paraphrasing my interview answers into structured bullet points. I caught it:
“Are you saving my raw transcript as well? I want the case study to be in my own words as much as possible, so if you’re just taking notes we might lose that.”
From that point forward, every answer was saved verbatim as a blockquote first, with notes and interpretation separate. This feedback loop is itself an example of how I work with AI tools: I don’t just accept the first output. I shape the process until it produces what I actually need.
When Claude generated a prompt to have the knowledge base pull specific survey data, feature rankings, and internal metrics for the case study, I flagged it:
“That level of specifics, at least for the Refiner survey, should not be included. This is proprietary info. Is the answer I just gave a good level of detail in itself?”
The answer was yes. The case study needed the story (engineering’s top priority was users’ bottom priority), not the data (specific feature rankings). Knowing where to draw that line is part of the PM skill set, and it applies to working with AI tools the same way it applies to working with any collaborator.
What follows is a chronological record of the portfolio build process, preserved from the conversation context.
Work completed:
Tools and process:
Steven’s raw words (on creating this document):
“I also want you to start documenting this whole build process as a meta case study — a case study on how I used Claude to build my case study portfolio. Perhaps you should start that documentation right now, since we’re at the start and you can see the brunt of what we’ve done in your current context. This message itself should be saved off as evidence. Hello future employers!”
This isn’t a story about AI replacing the PM. Every decision point in this process required human judgment:
What Claude did was eliminate the mechanical barriers: organizing notes, maintaining context across sessions, pushing for specificity when I got vague, and keeping the project moving forward without a project manager.
That’s the same thing my product knowledge base does for PM work. It’s the same thing the /deck command does for presentations. The pattern is: identify a workflow bottleneck, build just enough tooling to remove it, and get back to the actual work.
Work completed:
Process notes:
Work completed:
Process notes: