Over the years, I have encountered many process experiments. Sometimes I have been the one doing the experimenting, but most of the time I (and my team) have been the test subject. Someone (often more senior, but not always) has an idea about how to improve something. And that something involves my team.
Being in the thick of that dynamic can be difficult. It can be hard to tell what is really happening. At times, it can feel like you are being experimented ON.
In my coaching calls at Amplitude, I observe this dynamic with the benefit of some emotional distance. Someone is planning an experiment, or is in the midst of one. Someone is grappling with a coworker's experiment. Or both. Most of the time, it isn't called an experiment. Rather, it's the "new process" or the "new way we're doing things."
In these calls, I've noticed a common pattern. First, the Why is missing, unclear, or lacks focus. And second, the people involved are not invited as co-experimenters.
There's a huge difference between something like this:
OK. So here is the new OKR process. OKRs are a best practice, and management thinks they'll be a good idea.
Leadership has decided on the new success metric. Here it is.
...and something like this:

In yesterday's workshop, we decided to try [specific experiment] to address [some longer term opportunity, observation, or problem].
We described the positive signals that would indicate progress. They include [positive signals]. We also described some signals to watch out for, and agreed that if anyone observes [leading indicators of something harmful or ineffective], they should raise it immediately.
We agreed to try this experiment first over [other options] because [reasons for not picking those options]. Those were good options, and we may revisit them in the future.
[Names] offered to be practice advisors. They've tried this before, so use them as a resource. With your permission, I'm asking [Name] to hold us accountable to giving this a real shot. They aren't directly involved with the team, so they're unbiased.
We noted that this is a leap of faith. It isn’t a sure thing. We may very well experience [challenges] in the short term. Let's make sure we support each other by [tactics to support each other].
In a quarter, we'll decide whether to pivot or proceed. If we proceed, we'll work on operationalizing this, but that is not a given. As we try this, consider opportunities for future improvements.
Does this sound right to everyone?
The difference is stark. Yes, the second approach takes longer (at first, and maybe not even then; see below). Yes, it is more involved and messy. But let's face it: no one likes being the subject of random experiments. Even CEOs.
The second option is powerful and resilient. The first options are fragile.
There's another benefit here. I mentioned that the second option takes longer. Even that is debatable. People are more likely to give an experiment a real shot when the Why is clear and they are involved. They don't necessarily need to design the experiment themselves (though I think that can help). The experiment has a beginning, middle, and end. What's the worst that can happen? You run it for a quarter and recalibrate.
So I would argue that the second option, especially if you build up a track record of keeping your promises, can actually be faster and make it easier to get buy-in.
That’s it for this morning. I hope these short posts are helpful. Good lesson for me: putting toddler to sleep often means dad falls asleep. Cutting this post close!
A couple links:
Always humbled when I read things by people working in government. I love the term Boring Magic, coined by Steve Messer.
I'm going to keep plugging the North Star Playbook because...well, I'm proud of it! I finished something longer than a post! I've been doing AMAs with teams who have read the book, and that has been super rewarding. See here.
Design’s Unsexy Middle Bits by Christina Wodtke.
This Miro board of change experiments is super interesting (to me). I did an activity with 80+ companies at an Amplitude event. It was wild seeing the variety of things that “worked” and “didn’t work”.