How I built a go-to-market strategy for a connected safety device from scratch and used customer intelligence data to resequence where the organization invested its energy.
Several months after launch, PLACE wasn’t selling the way leadership had expected. The frustration was real. The default answer was to spend more on marketing. The problem was that nobody could agree on how much to spend, on what, or why it would work when the initial spend hadn’t.
Leadership was at once frustrated by low sales and unconvinced that more marketing investment was the right lever. That tension created a vacuum in which nobody did anything, and the product drifted.
Meanwhile, PLACE’s aggregate ratings across retail reviews and CSAT surveys sat around 3.4 stars. Not catastrophic, but well below the threshold where marketing spend compounds positively. In a product category where consumers rely heavily on star ratings and reviews, every dollar spent on awareness at 3.4 stars is partially wasted because the product experience isn’t backing up the promise. Worse, it’s a leaky bucket: pouring more water in without patching the holes doesn’t just fail to help. It accelerates negative word of mouth and makes the ratings problem harder to fix.
The market had also just handed us a real opening. Google discontinued the Nest Protect, the default smart alarm recommendation for years. Millions of installed units were expiring with no obvious successor. PLACE had many of the same features and several the Nest didn’t have. But a market opportunity doesn’t help if the organization is stuck arguing about whether to spend more money on a channel that was already returning roughly dollar-for-dollar with no insight into why.
Before I could argue for a strategy, I needed data. Over the preceding year I had assembled a customer intelligence system: automated monitoring across forums and community discussions, systematic analysis of competitor reviews at scale, in-app surveys of verified users, and product telemetry that let me cross-reference actual device behavior against customer complaints.
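To make that last piece concrete, here is a minimal sketch of the telemetry cross-referencing in Python. The file names, columns, and seven-day window are illustrative assumptions, not the actual pipeline:

```python
# Minimal sketch: corroborate customer complaints against device telemetry.
# File names, columns, and the 7-day window are illustrative assumptions.
import pandas as pd

telemetry = pd.read_csv("telemetry.csv", parse_dates=["timestamp"])
# columns: device_id, timestamp, event_type (e.g. "false_alarm", "wifi_drop")
complaints = pd.read_csv("complaints.csv", parse_dates=["submitted_at"])
# columns: device_id, submitted_at, category (e.g. "nuisance alarms")

# Pair each complaint with telemetry from the same device in the seven days
# before the complaint was filed.
merged = complaints.merge(telemetry, on="device_id")
in_window = (
    merged["timestamp"] >= merged["submitted_at"] - pd.Timedelta(days=7)
) & (merged["timestamp"] <= merged["submitted_at"])

# How often does each complaint category coincide with each device event?
print(merged[in_window].groupby(["category", "event_type"]).size().unstack(fill_value=0))
```

The point of a join like this is to separate "the device is misbehaving" complaints from "the device behaved as designed but surprised the customer" complaints, which call for very different fixes.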
The competitive analysis was where a lot of the insight came from. When the Nest Protect was discontinued, I modeled the replacement wave: how many units were installed, when they would expire, what the annual addressable replacement market looked like. The number was real but bounded. The Nest Protect discontinuation was a window, not a long-term strategy. Being the de facto Nest Protect replacement would keep us afloat in the short term, but we would eventually need to cross the chasm.
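A sketch of that replacement-wave math, using placeholder numbers rather than the ones from the actual analysis:

```python
# Back-of-envelope sizing of the replacement wave. Every number here is a
# placeholder, not a figure from the real analysis.
SENSOR_LIFESPAN_YEARS = 10  # smoke/CO alarms expire ~10 years after manufacture
CAPTURE_RATE = 0.05         # share of expiring units we might plausibly win

# Hypothetical units sold per year while the incumbent was on the market.
units_sold = {
    2015: 800_000, 2016: 1_200_000, 2017: 1_500_000, 2018: 1_500_000,
    2019: 1_400_000, 2020: 1_300_000, 2021: 1_200_000, 2022: 1_000_000,
}

# Units sold in year Y expire around year Y + lifespan, so the replacement
# wave is just the sales curve shifted forward: real, but bounded.
for year, units in sorted(units_sold.items()):
    print(f"{year + SENSOR_LIFESPAN_YEARS}: {units:>9,} units expiring, "
          f"~{units * CAPTURE_RATE:>7,.0f} addressable at {CAPTURE_RATE:.0%} capture")
```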
Then I went deep on what those displaced customers actually valued. I analyzed thousands of reviews, looking at what drove satisfaction versus what drove negative reactions across competitors’ products. Then I compared that against the feedback we were receiving. The pattern was clear: PLACE wasn’t being evaluated on its own merits. It was being scored against the Nest Protect’s feature set. Every feature we were missing was a point deducted, not a neutral absence. Some of those features seemed incremental from an engineering perspective. From the customer’s frame of reference, they were table stakes.
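One way to run that kind of driver analysis is simple counting. A sketch, with illustrative feature keywords rather than the real taxonomy:

```python
# Sketch of a review-driver analysis: which features show up in low-star
# vs. high-star reviews. Feature keywords here are illustrative.
from collections import Counter

FEATURES = {
    "pathlight": ["pathlight", "night light"],
    "voice alerts": ["voice alert", "speaks"],
    "app reliability": ["app crash", "disconnect", "offline"],
}

def feature_mentions(texts):
    """Count how many reviews mention each feature (once per review)."""
    counts = Counter()
    for text in texts:
        lowered = text.lower()
        for feature, keywords in FEATURES.items():
            if any(kw in lowered for kw in keywords):
                counts[feature] += 1
    return counts

def driver_report(reviews):
    """reviews: iterable of (star_rating, text) pairs scraped from listings."""
    reviews = list(reviews)
    neg = feature_mentions(t for stars, t in reviews if stars <= 2)
    pos = feature_mentions(t for stars, t in reviews if stars >= 4)
    for feature in FEATURES:
        # A feature dominating negative reviews of one product while praised
        # in a competitor's positive reviews is table stakes, not a nice-to-have.
        print(f"{feature}: {neg[feature]} negative / {pos[feature]} positive mentions")
```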
Without this analysis, the strategy would have defaulted to intuition and whoever was loudest in the room.
The argument I needed to land was this: if someone asks leadership for $5M in marketing budget with no mechanism to prove returns, they get laughed out of the room. But if you can show up with receipts — “we spent $100k on this channel, with this target customer, with this messaging, and saw X revenue and Y profit” — you have something to stand on. The problem was we didn’t have receipts, and we didn’t yet have a product that could generate them.
Sales are a function of marketing spend, messaging quality, and product quality together. Scaling spend before the other two are right doesn’t just waste money; it actively damages the brand. I framed it as a flywheel versus a death spiral. Above a certain product quality threshold, every marketing dollar multiplies. Below it, every dollar spent on awareness digs the ratings hole deeper.
At 3.4 stars, we were below that threshold. The leaky bucket metaphor became useful shorthand: you don’t scale water volume until you’ve patched the bucket.
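The threshold claim is easier to feel with a toy model. Every coefficient below is invented for illustration; the point is the direction of compounding, not the numbers:

```python
# Toy model of the flywheel vs. death spiral. All coefficients are invented.
def cumulative_sales(spend_per_period, rating, periods=8):
    """Simulate sales when word of mouth scales with the star rating."""
    THRESHOLD = 4.0                    # rating where word of mouth flips positive
    wom = 0.25 * (rating - THRESHOLD)  # per-period word-of-mouth multiplier
    direct = spend_per_period * 1.0    # roughly dollar-for-dollar direct return
    sales, total = direct, 0.0
    for _ in range(periods):
        total += sales
        sales = direct + sales * wom   # word of mouth compounds, or erodes
    return total

for rating in (3.4, 4.0, 4.5):
    print(f"{rating} stars -> cumulative sales ${cumulative_sales(100_000, rating):,.0f}")
```

At these invented coefficients, the 3.4-star product returns less than the flat dollar-for-dollar baseline and the 4.5-star product returns more, which is the whole argument for gating spend behind the ratings fix.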
This led to a gated strategy with explicit criteria for moving from one stage to the next:
Gate 1: Fix the product. Get the aggregate rating from 3.4 to 4.0 stars before anything else moves.
Gate 2: Build full-funnel visibility. Establish a DTC channel — without it, there’s nothing upstream to learn from.
Gate 3: Run small bets. Run parallel experiments across four distinct audience segments to generate proven ROI signal before committing budget; a sketch of what those receipts look like follows this list.
Gate 4: Scale what works. Once we have receipts, go back to leadership and ask for significantly more.
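For Gate 3, the receipts are just a table. Here is the shape of the comparison; segment names, budgets, and returns are all hypothetical:

```python
# Sketch of the Gate 3 "receipts": identical small budgets across segments,
# ranked by what actually came back. All names and numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class Experiment:
    segment: str
    spend: float
    revenue: float
    gross_margin: float  # product margin on that revenue

    @property
    def profit(self) -> float:
        return self.revenue * self.gross_margin - self.spend

    @property
    def roas(self) -> float:
        return self.revenue / self.spend

bets = [
    Experiment("expiring Nest Protect owners", 100_000, 260_000, 0.45),
    Experiment("new-home buyers", 100_000, 140_000, 0.45),
    Experiment("landlords / property managers", 100_000, 180_000, 0.45),
    Experiment("smart home enthusiasts", 100_000, 120_000, 0.45),
]

# Gate 4 asks for real budget only behind segments that clear the bar.
for b in sorted(bets, key=lambda e: e.profit, reverse=True):
    print(f"{b.segment}: ROAS {b.roas:.1f}x, profit {b.profit:+,.0f}")
```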
I also defined what we were not. The smart home enthusiast community was our loudest early adopter segment and they wanted deep platform integrations. They were vocal, they wrote reviews, they had influence in the communities where potential customers asked for recommendations. But they were a niche. I reframed them as a low-cost hedge: a modest development investment to retain their advocacy while making clear that the main bet was on safety-motivated households who would never configure an automation in their lives.
We are currently in Gates 1 and 2.
Build the GTM strategy in tandem with the product, not eight months after launch.
At a company with an established PM function, product strategy and go-to-market strategy grow up together. They inform each other. Who you’re selling to and how you plan to reach them shape what features you actually build, because those decisions tell you what users will find valuable before you’ve committed to building anything. The feedback loop runs both directions.
At Gentex, the ship had sailed on features long before any actionable feedback existed, and before I’d joined the company. By the time I was building the GTM strategy, the product was already in market. I wasn’t shaping the product with the strategy. I was retrofitting a strategy onto a product that was already defined. Those are very different problems.
There is a layer of serendipity here that complicates the lesson. Nobody could have realistically predicted that the market leader would be discontinued two months before our launch. That timing created an opportunity that no amount of upfront strategic planning would have generated. Capitalizing on it still required the strategy work, and that work still mattered. But there’s a meaningful difference between reemphasizing aspects of a strategy you already have and building an entirely new one from scratch after the product is already in market. The former is nimble. The latter is catching up.
The honest version of this case study is: we did important strategic work under real constraints, it produced real results, and the constraints themselves were a failure mode worth naming.