TBM 410: Dancing With Problems
Read this quote:
We’ve got to make the app easier to use!
If you’ve been doing product for a while, you probably cringed. It’s not specific. It has no diagnostic value whatsoever. It could apply to almost any product in the world. There’s no hint at what outcomes or impacts might be possible as a result of making the app easier to use.
Another quote:
Users take too long to create an initiative.
Better? Not really. Is speed actually the goal? If you make initiative creation shorter, what might you have to sacrifice? Sure, you'd reduce creation time, but you might lose data breadth and depth. Is creation time the problem, or does the user just enter initiatives to check a box and gain nothing from the task? What if entering an initiative thoughtfully were worth the effort?
Users take too long to create an initiative because they’re unsure what information is required, much of the information isn’t readily available, and there’s confusion about what an initiative should represent.
Ah, that gives me some clues, provided there’s evidence to back that up. We have more diagnostic depth in the statement. Someone has done some research! But there’s a gap: I still don’t know why this matters, how much it matters, or what fixing this makes possible.
Someone adds:
This delays planning discussions and forces leaders to spend time clarifying basic context.
Now we’re getting somewhere. But I’m still curious about the human side of this. How do people feel about the process? Do they see it as valuable work, or as bureaucratic overhead? Do they approach initiative creation thoughtfully, or do they rush through it just to get it done? Why are they rushing?
Is this a moment where teams feel like they’re clarifying strategy and alignment? Or is it a moment where they feel like they’re filling out a form to satisfy a system? Who runs the system? What agency does the person feel in this environment?
When creating an initiative is fast and low-friction, teams capture ideas earlier and with richer context. In many tools, information only appears once something becomes official. But if people capture ideas while they’re still forming, the system accumulates valuable context that improves planning and prioritization discussions. As more of these rituals rely on that shared context, the product becomes embedded in how the organization coordinates work, and much harder to replace.
Notice here how we’re now dealing with many layered hypotheses and statements. Someone could have come up with this on a lark, or someone could have put a lot of heart and soul into increasing confidence around every assumption and link. Note how “big” some of the conclusions are. In theory that’s the flywheel, but so much has to come together to make that real, and it could take years for that loop to materialize.
Notice how your logical brain perks up. You start tracking the chain: friction, idea capture, context, rituals, embeddedness, switching costs. You begin asking whether each step really leads to the next. Does lower friction really lead to earlier capture? Does earlier capture really improve planning? Do those rituals really start to depend on the system?
But then your brain snaps back to the present moment.
What is actually happening today? What are people really doing when they create initiatives? What conversations are happening around those rituals right now? What evidence do we have that any of this chain is even starting to form? The big causal framing was great, but not enough.
So someone steps in and says:
Initiative creation is too heavy because the form asks for too many fields.
That feels good for a moment. Clear root cause. Clear solution. Remove some fields, streamline the form, and initiative creation becomes faster. And then all that good, persuasive narrative materializes, and everyone gets a promotion.
(For a discussion of “root cause”, see…)
Your brain relaxes. The ambiguity disappears. The problem feels tractable.
But then the gears start turning again.
Which fields? Why were those fields added in the first place? Were they arbitrary, or did they emerge from real planning conversations that needed more context?
Who relies on that information later? Does removing those fields make planning easier, or does it simply shift the burden downstream?
And if initiative creation becomes lighter, what actually changes? Do teams capture ideas earlier? Do planning discussions improve? Or do we simply end up with more initiatives that lack the context needed to make good decisions?
For a few seconds, the root cause felt obvious. Then it didn’t.
The Point
Here’s the point of this post. There is no perfect articulation of a problem. You’re always swimming among a whole set of questions you need to answer:
Exploratory
What questions should we even be asking about initiative creation, and what are we really talking about?
Definitional
What exactly do we mean by an “initiative,” and how is it defined in this system?
Contextual
What is the surrounding environment in which initiatives are created?
Descriptive
What is actually happening today when someone tries to create an initiative?
Explanatory
Why does initiative creation feel heavy for some users?
Strategic
Why does this matter, and what would improving initiative creation make possible?
Generative
What alternatives or better futures for initiative creation might exist?
Evaluative
Is initiative creation actually working well today, and how can we tell?
(These question types are from…)
You’re constantly moving up and down layers of the stack.
Sometimes you start with the customer’s words. Sometimes you zoom out to see how other actors experience the issue. Sometimes you dig into incentives, habits, or power dynamics. Sometimes you jump forward and ask whether your product can meaningfully influence the situation at all.
(A discussion of problem layers…)
The trick is to dance between layers.
A statement that feels solid at one layer can fall apart at another. A root cause at the behavioral layer might not hold up when you look at the surrounding ecosystem. A strategic opportunity might collapse when you ask what you can realistically influence today.
And then your audience and their experience level come into play. Even if you’ve thought carefully about the problem across these layers, you still have to communicate it to other people. And the level you choose often depends on the experience, interests, and responsibilities of the person you’re talking to.
Some people want a very concrete instruction: Build exactly this.
Others want to understand the behavior: Build something that does this.
Others care about the task the user is trying to complete.
Others want to talk about the broader customer problem.
Others are focused on metrics and business outcomes.
And still others are thinking in terms of long-term strategic impact.
(These mandate levels are from…)
All of those frames are legitimate. In fact, effort is happening at all of these levels simultaneously inside most organizations. The trick is recognizing which level someone is operating at, and knowing when you need to move the conversation up or down the stack.
This is part of why Richard Rumelt emphasizes how difficult it is to arrive at a real diagnosis.
In Good Strategy/Bad Strategy, he argues that good strategy begins with a clear diagnosis of the situation (the pivotal challenge he would later call the “crux”). But when you start looking closely at the layers and dimensions we’ve just explored, you begin to see why that’s so hard.
Different actors see different problems. The time horizon shifts. The level of abstraction changes. Some statements are about behavior. Others are about outcomes. Some are about tasks. Others are about systems.
Somewhere in that space lies the mysterious and elusive crux.
I remember an incredible moment when someone dropped a concise insight on a group of leaders.
Look, if we don’t beat our competitors in a new market and gain a sufficient lead, then we’ll be forever fighting an uphill battle because of high switching costs. So the contract value doesn’t matter. We’ll have plenty of time to sell them new products if we land them as a customer. We need to make it easy to get started and to adopt our core offering to create a beachhead.
People shifted nervously in their seats. There was a cool simplicity to the statement. The team had been debating the mix of new products ad nauseam, and this reframing shifted the whole conversation. Bingo! AND STILL, the immediate next discussion was about the problems beneath the problem.
Or another moment in a workshop mixing front-line ICs and the CEO of a big bank in South America. People were debating and debating, and a junior UX researcher calmly shared an insight about the different banking habits of the bankers in the country they wanted to expand into. These weren’t the same bankers they had success with in their home country.
A collective “aha!” And then the same… one point of shared understanding unlocked the next cycle of divergence and convergence.
You’ll often hear the advice that product leaders should define the problem teams should solve.
There’s truth in that. Teams do need clarity about what they are trying to address.
But I’d reframe it slightly. The work of product leadership isn’t simply defining the problem and handing it to the team. It’s creating the conditions where people can engage with the situation from multiple elevations and perspectives.
Sometimes you’re looking at the customer’s stated problem.
Sometimes you’re examining the surrounding ecosystem.
Sometimes you’re digging into incentives and behavioral dynamics.
Sometimes you’re asking what your product can realistically influence today.
Sometimes you’re zooming out to long-term business outcomes.
Each view reveals something different.
Over time, the goal is to expand the web of shared understanding so that people across the organization can move fluidly between these perspectives, test explanations, challenge assumptions, and converge on the few things that actually matter.
Defining “the problem” isn’t a single moment. It is a space to navigate together.
Comments
This resonates. One thing I'd add: it's not just continuous across layers — it's continuous across time. Your users are evolving. The competitive landscape is shifting. The context you diagnosed six months ago may not hold today. This is one reason continuous discovery and even high-velocity experimentation, done well, aren't just about shipping faster. They're about staying calibrated to a problem space that's always moving.
💯. There is no perfect approach to any given problem, at any point in time. Exploration is key because — hopefully — it presumes an authentic sense of curiosity.
I like the “dancing” part since I was once a younger version of myself and spent time break dancing in high school and college. Although this particular flavor of dance was very much a solo exercise, you did have to coordinate with the audience and your team to win. And to crush your opponent you had to literally dance with (against) them to uncover the holes and gaps in their strategy and react real-time to exploit them.
Dancing with the problem. Love that.