TBM 45/52: Taming Model Malpractice
Join me October 13th for a talk on Drivers, Constraints, and Floats.
Customer profiles, ICPs, segments, personas, jobs-to-be-done maps, customer health models, product health models, competitive ecosystem maps, 2x2s, 3x3s, maturity models, retention models, prioritization models, Kano Model, Porter's Five Forces Framework, Moore's technology adoption lifecycle curve. Models, models, models!
I'm back home after a team offsite, and my head is swimming in models.
It got me thinking about the models we use and how we use them.
Obligatory George Box quote:
All models are wrong, but some are useful.
George E. P. Box
Let's start expanding on that statement.
All models are wrong, but some are useful; and
Usefulness is relative to the person using the model
We have to consider the "job" of the model.
A model designed to help a marketer allocate their budget optimally will probably do a worse job of helping a designer shape a feature (for example). And a model designed to help the designer will probably do a worse job of painting a picture of the competitive landscape. The job matters.
The rightness and wrongness of a model are in service to the "job" of the model. A highly simplified and "wrong" model might be perfect for making high-level (and recoverable) product investment decisions in a hurry. Making that model "righter" (more accurate, more closely resembling the real world) would have a negative effect: it would slow things down by presenting too much information and increasing cognitive load. But the same simplified model will fall flat when facing non-recoverable "one-way door" decisions with many dependencies.
In that sense, there are better and worse models—with varying degrees of rightness.
There are two opposing antipatterns I've noticed in how companies use models:
Using too many models
Using too few models (for too many jobs)
With the first antipattern, we hear things like "no one speaks the same language" and "I have to spend all of my time translating between different models!" I have observed companies with a product health score (Product), a customer health score (CS), AND a churn prediction model (Finance). They all did their respective jobs well, but switching context between them was hard.
With the second, we hear things like "we try to use the same words for everything," "that doesn't help me," and "well, that's the official way, but we kind of ignore it." The flipside example is a neat-and-tidy set of global customer personas that no one used or cared about.
Too many models ensure the models are doing specific jobs at the expense of broad understanding. Too few models ensure people use the same language at the expense of doing jobs well. It's a balancing act. Job-fit models vs. collaboration-fit models—both "useful" in context.
Using too few models (for too many jobs) involves an interesting twist. The local models still crop up. Why? To do the local job, you'll need local models AND a mapping to the global models. So when someone in your company says, "we need consistency between X," they are actually saying, "at the global level, we need consistency between X, but please handle the complexity and adapt locally!" Take the global customer personas mentioned above. Each team had a shadow set of personas they used locally. It was the only way to get any work done.
Put another way: "let's keep it simple" only creates the appearance of simplicity at the global level. People will constantly adapt to their local job (if they are allowed to and can get away with it).
I would add a third antipattern: not being specific about the model's job, even if the goal is general applicability.
But there are two reliable ways to counteract these antipatterns:
The first is a hierarchy of complementary models. For example, a company might decide on a global ICP (ideal customer profile). Each department then expands on that model to account for local concerns. Product adds more in-depth product usage characteristics. Professional services dig into the implementation journey.
The benefit here is that we acknowledge the local models but seek coherence.
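The hierarchy of complementary models can be sketched as plain type inheritance. This is only an illustration, assuming hypothetical field names (company size, usage metrics, implementation stage) that are not from the original post: a single global ICP that each department extends with its local concerns.

```python
from dataclasses import dataclass

@dataclass
class GlobalICP:
    # The shared, company-wide ideal customer profile.
    company_size: str    # e.g. "200-1000 employees"
    industry: str
    core_problem: str    # the problem the product solves for them

@dataclass
class ProductICP(GlobalICP):
    # Product layers more in-depth usage characteristics on top.
    weekly_active_teams: int = 0
    key_feature_adoption: float = 0.0  # fraction of accounts, 0.0-1.0

@dataclass
class ServicesICP(GlobalICP):
    # Professional services digs into the implementation journey.
    implementation_stage: str = "kickoff"
    integrations_remaining: int = 0

# Both local models extend (rather than replace) the global profile,
# so the shared fields stay coherent across departments.
p = ProductICP("200-1000 employees", "fintech", "audit overhead", 42, 0.6)
s = ServicesICP("200-1000 employees", "fintech", "audit overhead")
assert p.core_problem == s.core_problem
```

The design choice worth noticing: each department can add fields freely, but the global fields are defined exactly once, which is what makes the local models complementary rather than competing.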
The second is what I would call the platform approach. Most models are applications of research and synthesis. You start to get something that looks like a stack: a foundation of shared research, with model "applications" built on top.
Some parts of the "stack" stay application-agnostic. Other parts differ between applications. The benefit here is that you centralize the research—which imparts some consistency and congruence—and "build applications" on top of that research. This approach addresses one of the biggest challenges when maintaining multiple models: consistency where it matters.
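A minimal sketch of the platform approach, assuming a hypothetical shared research layer (the findings and the two "applications" below are illustrative, not from the post). Two different models draw on the same centralized research, so they can disagree in form while staying consistent in substance.

```python
# The application-agnostic layer: centralized research and synthesis.
shared_research = {
    "interview_themes": ["onboarding friction", "pricing confusion"],
    "churn_drivers": {"low_seat_usage": 0.4, "no_integration": 0.3},
}

def marketing_persona(research: dict) -> dict:
    # One "application": a persona built on the top interview theme.
    return {"persona_pain": research["interview_themes"][0]}

def health_score_weights(research: dict) -> dict:
    # Another "application": health-score weights derived from churn drivers.
    return dict(research["churn_drivers"])

# The models differ, but both trace back to the same research layer,
# which is what keeps them congruent where it matters.
persona = marketing_persona(shared_research)
weights = health_score_weights(shared_research)
assert persona["persona_pain"] in shared_research["interview_themes"]
```

The point is not the code itself but the dependency direction: applications read from the research layer, never the other way around, so updating the research propagates to every model built on it.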
Both approaches look for a win/win instead of a tradeoff. Use product (and platform) thinking for the models you use in your company. Focus on the job-to-be-done of each model, and layer models effectively.
Your models work for you, not the other way around.