Tip
Bangaly Kaba talks about the identify-justify-execute anti-pattern: “We’ve all had a moment where we worked on something with a team, super excited, finally it launches, we celebrate. We go back the next day and look at the metrics and the metrics are flat. Why did this happen? You built something that you thought was going to be a good idea, but you really didn’t understand key components of what people really needed. Someone says, ‘Hey, this would be great to build.’ And you identify that, then you go pull data to go justify why that would be great to build. Call that identify, justify, execute. First you have to really understand from first principles what is actually going on. So understand, identify, execute.”
Turns out AI product development works the same way.
Your team wants to ship an AI-powered search feature. The PM says: “Users complain search is slow—let’s add AI to make it faster!” Everyone loves it. Engineering builds a beautiful vector search implementation. You ship it. Metrics are flat. Turns out users weren’t complaining that search was slow—they were complaining they couldn’t find what they needed because the taxonomy was broken.
Younger product leaders start with a solution, then find data to justify it. They see a complaint, jump to a fix, and ship fast. They haven’t shipped enough products to learn that most “obvious solutions” solve the wrong problem. The velocity feels great until you look at the results.
You’ve seen this movie before. In 2016, your team shipped a recommendation engine because users said they wanted personalization. Flat metrics. The real problem was the catalog was too small. In 2019, you shipped faster load times because users complained about speed. Flat metrics. The real problem was confusing navigation.
You know the pattern: when metrics are flat after launch, you solved the wrong problem. So before you let the team build “AI-powered search,” you insist on the “understand work” first. What exactly are users trying to do when they search? Where do they give up? What alternatives do they try? You spend two weeks just watching session recordings, running user interviews, and analyzing search queries.
The answer emerges: users search for “Q3 financial model” but your taxonomy files it under “Finance/2024/Models/Q3” so search doesn’t surface it. They don’t need AI search—they need better tagging and synonym matching. You ship that instead in one week. Massive metrics lift.
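The taxonomy fix above boils down to two small mechanisms: expand query terms with known synonyms, then match them against the tags implied by each document’s taxonomy path. A minimal sketch of that idea, with hypothetical names (`SYNONYMS`, `DOCS`, `search`) and a hand-curated synonym map standing in for whatever a real system would use:

```python
# Hypothetical synonym map: each query word expands to a set of equivalents.
SYNONYMS = {
    "q3": {"q3", "third quarter"},
    "financial": {"financial", "finance"},
    "model": {"model", "models"},
}

# Documents indexed by the lowercase segments of their taxonomy path,
# e.g. "Finance/2024/Models/Q3" becomes {"finance", "2024", "models", "q3"}.
DOCS = {
    "Q3 budget forecast": {"finance", "2024", "models", "q3"},
    "Hiring plan": {"people", "2024", "planning"},
}

def expand(query: str) -> set[str]:
    """Expand each query word with its synonyms (identity if unknown)."""
    terms: set[str] = set()
    for word in query.lower().split():
        terms |= SYNONYMS.get(word, {word})
    return terms

def search(query: str) -> list[str]:
    """Rank documents by how many expanded query terms match their tags."""
    terms = expand(query)
    scored = [(len(terms & tags), title) for title, tags in DOCS.items()]
    return [title for score, title in sorted(scored, reverse=True) if score > 0]
```

With this, a query like “Q3 financial model” now overlaps the `finance/models/q3` tags even though no document title contains those exact words, which is the whole point: the fix is in the matching layer, not in an AI model.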
This judgment—knowing to spend time understanding the problem before identifying solutions—comes from watching enough “obvious solutions” fail. Junior product leaders optimize for shipping speed. You’ve learned that shipping the wrong thing fast is worse than shipping the right thing slow. That wisdom comes only from seeing both patterns play out repeatedly over decades.
Context
Bangaly Kaba was an early growth PM at Facebook (friends/people recommendations), Head of Growth at Instagram (which grew from 440M to 1B+ users during his tenure), VP of Product at Instacart, and is now Director of Product at YouTube. His “understand work” framework came from watching teams ship features that looked great but had flat metrics.
The pattern: teams that start with “understand what’s really happening” ship fewer things but have dramatically higher win rates (Instagram had 60-70% of experiments ship positive). For experienced executives managing AI product development, this pattern recognition is critical—you’ve shipped enough products to know the cost of solving wrong problems fast.
That comes from seeing the full cycle repeatedly.