Are you measuring to learn, or measuring to incentivize, justify, and manage? Both needs are valid in context, but teams (and frameworks and processes) often confuse the two.
Take a quarterly goal. Once the goal is set, consider what happens to the team's perspective. A week in, do they challenge the goal's validity? Do they pivot? Now consider a team spending weeks or months converging on the perfect success metric. Great, you've defined success, but in reality that metric is a hypothesis. It encapsulates dozens of beliefs. Understanding success is a journey, not an endpoint, and manufacturing a definition of success can set you back.
That doesn’t mean goals (and measuring to check and understand progress toward them) aren’t effective. But it is important to be realistic about what we are hiring goals to do (to overburden the jobs-to-be-done metaphor).
I try to remind teams that if you're 100% certain about something, there's a risk you are in a commodity business. But what about A/B tests? "We need proof!" "We should apply scientific principles!" "Facts, not hunches!" A/B and multivariate testing is appropriate in some contexts, but by no means all of them. Truth be told, some companies known for their A/B testing acumen (and I'm sure they are printing money) offer crappy experiences and chase local optima.
I say this as someone very passionate about product analytics, measurement, and data literacy. At Amplitude, our most effective customers use an array of approaches. The key: use the right approaches for the task at hand.
The same inertia pops up when a team needs 100% confidence before pursuing a strategy. Someone wants PROOF. Data becomes a trust proxy. When you dig in and ask about risk-taking in general, you find a classic tension: the org empowers some people to take risks ("big, bold risks!") while requiring other people to provide PROOF ("so what is the value of design, really?"). There's a veneer of rational decision making, when genuinely rational decision making would incorporate uncertainty, acknowledge priors, and encourage a portfolio of bet types.
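To make "incorporate uncertainty and acknowledge priors" concrete, here is a minimal sketch, assuming a hypothetical feature rollout with made-up numbers (my illustration, not anything from Amplitude). It shows a Beta-Binomial update: start from a prior belief about a conversion rate, fold in noisy evidence, and estimate the probability the bet clears a decision bar, rather than demanding proof. The prior, the trial counts, and the 6% bar are all invented for illustration.

```python
import random

# Hypothetical prior belief: conversion is somewhere around 5%.
# Beta(5, 95) encodes roughly that belief, with the weight of
# about 100 imagined observations.
PRIOR_ALPHA, PRIOR_BETA = 5, 95

# Hypothetical evidence from an early, noisy rollout.
trials, conversions = 200, 16

# Posterior after folding the evidence into the prior.
post_alpha = PRIOR_ALPHA + conversions
post_beta = PRIOR_BETA + (trials - conversions)

# Instead of demanding certainty, estimate the probability that the
# true rate clears a decision bar (say, 6%) by sampling the posterior.
BAR = 0.06
samples = [random.betavariate(post_alpha, post_beta) for _ in range(100_000)]
p_clears_bar = sum(s > BAR for s in samples) / len(samples)

print(f"Posterior mean: {post_alpha / (post_alpha + post_beta):.3f}")
print(f"P(rate > {BAR:.0%}): {p_clears_bar:.2f}")
# A 0.7 probability might justify a bold bet; a compliance change
# might need 0.95. Neither is "proof"; both are decisions made
# under acknowledged uncertainty.
```

The useful part is the last comment: different bet types can carry different decision bars, which is what a portfolio of bets actually means in practice.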
Being data-informed and data literate (with both qualitative and quantitative data) is itself a learning journey. It is iterative. You ask questions and refine those questions. You figure out you are measuring the wrong thing. You refine. You nudge uncertainty down, and then hit a false peak. "Oh no, that turns our mental model on its head!"
The action item: chat with your team about the difference between measuring to learn and measuring to incentivize, justify, and manage. Are you making the distinction clear?
I am running a version of my lab hours for APAC. The first session was yesterday, and the second session will be on May 5th. Learn more here.
Are you an instructional designer? Know one? I’m hiring someone for a project.