If anyone is interested in OKRs, the North Star Framework, and/or connecting everyday work to the big picture… I'm doing a free presentation and Q&A on using OKRs with the North Star Framework on Aug 26th.
We'll share the video afterwards with everyone who registers. This is for my day job.
What traps do teams encounter when they try to visualize work to better manage work in progress?
1. We break something down into smaller pieces and forget the bigger thing.
Example: We break a meaningful initiative down into stories. The team is chugging along, but we lose sight of the bigger picture/bet. In the case of more than one team tackling parts of an effort, we forget that it is one initiative, not 2+ initiatives. Related, we don’t show dependencies.
2. We break team efforts down into individual work, and forget the bigger thing.
Example: To assign work to individuals (many tools don’t support joint owners), we split the effort into individual pieces, and then forget that those pieces are NOT independent.
3. We only look at the bigger thing, and miss if we’re working small.
Example: A team insists on tracking the big bet and only the big bet. Meanwhile it is hard to tell whether they are integrating frequently. Working small matters too!
Aside: hopefully you see how these things are related. It is not enough to visualize work at one resolution. We need multiple resolutions.
4. We finish the same thing multiple times. We count rework twice.
Example: A team “finishes” a story, and gets valuable customer feedback. They want “credit” (ugh, I hate that word). They incorporate the feedback into a new story. In reality, it is the same goal. But there’s always someone who wants “credit” for the first thing, and wants the changes somehow called out as new.
5. We don’t visualize the full value stream.
Example: A team does not visualize work upstream and downstream of code writing. There might be a high PIP -- planning in progress -- or lots of work marked as Done, but not in the hands of customers. There’s a fear that tracking stuff “too early” will “muddy up the system”. There’s a fear that tracking stuff “after finishing” will make it seem like the team is too slow.
6. We don’t visualize queues/lists.
Example: Work is “done”, and moved to a Testing column. But no testing can happen at the moment because the testers are overwhelmed. Instead, we should probably add a “to test” column to our board.
7. We don’t track/visualize where our energy goes.
Example: A team spends about 30% of its time interviewing new team members. Or answering “quick questions”. Or context switching between different work streams. The context switching alone can eat up huge swaths of time.
8. We treat all time as equal.
Example: An hour at the end of a long day is not the same as a fresh hour first thing after your morning coffee. The big issue here is fatigue. We’re not keeping track of when we’re losing steam.
9. We use generic board columns for different “shapes” of work.
Example: A team uses the same board columns for larger chunks of work, and fixing issues in production. The team tackles this work in very different ways. Related, because they don’t model different work streams, it is hard to understand the impact of unplanned work.
10. We fail to model back and forth collaboration. Items go “backwards”.
Example: A developer and tester are going back and forth on an effort. There’s a development column, and a test column. Items go back and forth between columns, when instead we want to consider a Develop & Test column to highlight that collaboration.
11. We make blocked work disappear.
Example: A team has blocked items and is waiting on another team to address an issue. Instead of making that explicit, the team removes them from their “view”. Or uses a Blocked column instead of adding tokens/indicators to the work and keeping it in progress (albeit blocked).
12. We don’t add qualitative observations.
Example: A team tracks all of the “kanban metrics”, but fails to add color to the day-to-day. What happened? What did we observe? Often, retrospectives are separated from the actual things that happen. We vaguely know that X happened, but forget the context.
Any of this sound familiar?
Hello there, thanks for this!
However, I have mixed feelings about point 4. I guess it depends on what you're counting, and depending on the company context, it may be useless or very important.
For example:
By doing X, the customer should achieve Y.
Y is the problem being solved; X is the solution to that problem.
If I integrate feedback and rework, then the new hypothesis is:
We know that by doing X, the customer didn't achieve Y, but we believe
that by doing X'', the customer should achieve Y.
Yes, the problem being solved is the same. However, the solution to get there is not.
X and X'' are similar, but different. Why should we count them as the same thing?