TBM 9/53: Some MVP and Experiment Tips
Note: This week’s newsletter presented a dilemma. Yesterday, I wrote some tips for a teammate and got a bit carried away. I figured the tips might be more broadly helpful, so I published on cutle.fish. And then I realized I should have waited to send this out with the newsletter. In the spirit of pacing myself, I am going to cross-post instead of writing two posts. Apologies if you’ve already seen this.
I sometimes find myself emailing/sharing advice lists. Here is one related to experiments (a term I use somewhat interchangeably with MVP) that I sent out today. It is not about experiment design. Rather, I focus on a situation where a team is spinning up a lot of experiments (for various reasons) and is encouraged to experiment, but may be struggling to make it all work.
Learn early and often. We should not be afraid to try small experiments to learn, and we should not be afraid to release things early and often to gather feedback and iterate. A good rule of thumb is that you should release before you are comfortable, and make sure you are prepared to learn. Our ideas may seem precious, but it is critical (and humbling) to get things into the world. Challenge big batches of work like crazy. Can we achieve 90% of the outcomes with 10% of the work? Can we learn 90% of what we need to learn with 10% of the work? Or with nothing “shipped” at all? Do we have the requisite safety to embrace “failed” experiments?
Going “faster”. There are only two real ways to go faster…reduce the size of “batches” and/or do less at once. Adding people tends to make things slower in the near-term (and sometimes the long-term). Busy-ness does not equal flow. For that reason, focus your experimentation efforts. Limit your experiments in progress. Our goal is high cadence, not high velocity (there is a difference...imagine a cyclist going up a hill in an “easy” gear vs. a “hard” gear).
Partner. Having partners in your experimentation efforts is critical. They help you de-bias and challenge your assumptions, and you can hold each other accountable to working small and learning quickly.
Take experiments seriously. Be diligent about framing your MVPs and experiments. How will you measure this effort? How will you reflect on progress? What are your pivot and proceed points?
Consider blast radius. We should be open to the idea that we cannot control everything, and that on a daily basis there’s a ton going on that we will not know about. Someone might do something that impacts your world, and that is OK. Assuming positive intent is critical. That said, if you are the person running the experiment, it is vital to be sensitive to the blast radius of your work (and to perceptions of your work). Communicate. Give people some notice. And commit to the next four points below.
Kill Your MVPs. A good rule of thumb is that you should be able to kill your MVP. It should not create promises or commitments. It should not create dependencies. It is largely a throwaway vehicle for learning. The risk is a million MVPs that create cognitive overhead, are costly to maintain, and distract the team. Consider this: if there’s not a 50% chance of your MVP “failing”, there’s a good chance you aren’t taking enough risk to learn new things.
No side-channels. Be cautious about creating a “side-channel” of unofficial work: the stuff you really want to be doing while battling business as usual. Why? 1) You’re keeping your teammates in the dark, and 2) you will burn out! There are only so many hours in the day. How does this relate to experiments? Try to elevate your experiments to first-class, visible work. Ask teammates to hold you accountable.
Leave room to incorporate learning. Pushback against MVPs is often rooted in pragmatic fear. People never feel they are able to go back and refine what they release (integrate learnings, polish, etc.), so they increase scope as a form of craft-preservation. By creating a large batch of work, we assure ourselves some ability to get it right. What to do? The lesson here is that MVPs should be considered an integral part of a larger stream of value creation. They are not an excuse to cut corners and bail on the effort. The goal is to generate learning and INCORPORATE that learning...not to ship and move on, leaving a ton of unrefined work. Key insight: advocate for the larger value stream and learning objective first. After that, running experiments is easier.
Not a workaround. When we feel thwarted (perhaps other people are super busy and can’t assist), it is tempting to spin up individual work that we can completely control. This is natural; we are wired for forward momentum. But without a structured approach to learning, you run the risk of just making yourself busier, adding even more work in progress, and potentially working in opposition to your teammates. Instead, is there a way to help unblock your teammates?
Scaling Up and Out. Scaling successful experiments up/out requires collaborating with others, thinking about impact across teams, and “formalizing” the bet. Often, people complete an experiment at a small scale and jump immediately to scaling it up/out in isolation. The preferred approach is to take what you learned and then attempt to frame an integrated program around it. Not all successful MVPs are good candidates for scaling up/out.
Tidbits:
Video from my MTP Engage talk (titled The Beautiful Mess) is up!
I’ve recommended Team Topologies a bunch of times this week. Buy or borrow it.
I really respect how Marty Cagan talks about OKRs in this post.