Start with data + insights
This part of the book reads like a pure sell for a product like Segment (though it doesn't mention us, what a shame!), and calls out the idea of starting with data. The team should be collecting data on all the important parts of the funnel, then turning that data into insights. Each week, if questions come up, it's worth digging into the data to understand how users are behaving and what's changed.
Once you have the data... chances are it will spark a lot of different ideas from people on the team. The trouble is that the best ideas can come from anywhere... teammates, customers, and even investors. You need some way of collecting those ideas, whether it's a spreadsheet, a Google Form, or even a physical suggestion box.
Prioritize with ICE
Once the ideas start flowing in, it's time to prioritize them. The book discusses a bunch of different frameworks, some from Brian Balfour, some from other growth leaders. The one they recommend is ICE: Impact, Confidence, Ease. New ideas are scored from 1-10 on:
Impact: how much impact will this have on the business?
Confidence: how sure are we that the idea will have the desired impact?
Ease: how easy is it to make the change? Is it a 20-minute change or a 3-month change?
An example scoring is the new homepage. Impact is high since the homepage is the first touch for tens of thousands of visitors per month (8). Confidence is medium: industry benchmarks suggest there's plenty of room for improvement, but previous homepage changes haven't moved us closer to those benchmarks (5). Ease is low since it's a big project with a lot of iterations (2). ICE score: (8 + 5 + 2) / 3 = 5.
The general idea is to balance lots of lead bullets (little wins with high confidence and ease scores) against a few bigger bets (high impact, but lower confidence and ease), sorting the backlog by ICE score.
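The scoring-and-sorting step above is simple enough to sketch in a few lines of Python. The idea names and scores here are hypothetical, just to show the mechanics: average the three 1-10 scores, then sort the backlog highest-first.

```python
# Minimal ICE prioritization sketch. Idea names and scores are made up
# for illustration; only the homepage example comes from the text.

def ice_score(impact, confidence, ease):
    """ICE score is the average of the three 1-10 scores."""
    return (impact + confidence + ease) / 3

ideas = [
    {"name": "New homepage", "impact": 8, "confidence": 5, "ease": 2},
    {"name": "Fix signup copy", "impact": 3, "confidence": 8, "ease": 9},
    {"name": "Onboarding email", "impact": 5, "confidence": 6, "ease": 7},
]

# Score every idea, then sort the backlog by ICE score, highest first.
for idea in ideas:
    idea["ice"] = ice_score(idea["impact"], idea["confidence"], idea["ease"])

backlog = sorted(ideas, key=lambda i: i["ice"], reverse=True)
for idea in backlog:
    print(f'{idea["name"]}: {idea["ice"]:.1f}')
```

Notice how the sort naturally mixes the two kinds of work: the quick, high-confidence wins float to the top, while the big homepage bet (8/5/2, scoring 5.0) lands further down the list.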
Then, we run our experiments and watch the metrics!
Ship/Kill and Reflect
After running an experiment, we can examine the results on a weekly basis to understand its effect. The only ground rule is that the 'control' should always win if there's no meaningful change. We can also look back at the original ICE score to see how well-calibrated we were when scoring in the first place. Ideally, our confidence scores for similar experiments should go up as they prove to have an impact over time.
As an added superpower (unique to Segment), we can share the results of these experiments as first-class examples of using Segment data to grow your business. I have a feeling these will be like catnip for growth teams, and answer an important question in customers' minds: how do I actually use data to grow my business?