I have been thinking a lot lately about how teams buy, adopt, and customize road mapping, goal setting/tracking, and strategy deployment tools. I have a love/hate relationship with the tool discussion. On one hand, I hope that with some care and thoughtfulness, tools can be a force multiplier. On the other hand, I see so many teams slip into zombie process, nothing-reflects-reality mode.
Here are some quick tool tips I shared with a friend. I am sharing them here in case they are useful.
Make tools work for you instead of the other way around. Tool and process skepticism is warranted because most teams don't treat these things as internal products/services. Teams don't do research, prototyping, and continuous iteration. Teams do big-bang rollouts and don't adapt. They don't take feedback. They don't monitor usage. They don't check in with their internal customers. Someone (typically removed from the work) thinks it would be a good idea to see X, X gets implemented, and then a year later, you find out no one has made a single decision with X.
Engage different perspectives. If you only involve product managers and front-line product developers in the decision, you will likely miss the perspective of roles like finance, marketing, and operations. On the other hand, if you only involve people with no direct product development experience (e.g., only experience in program management, finance, and general business functions), you will likely miss the front-line perspective. Seek both perspectives, just as you would any product.
Low-fi first, then automate and scale. If you can't make something work low-fi (flat docs, slides, spreadsheets), it is unlikely to work in a tool. "But the whole idea is to make things easier and automate the shit," you say. Aha. Great distinction. Scaling what works is different from figuring out what works. Start by answering this question: "Are you scaling something that works, or trying to figure out something that works?"
Question calls for consistency. Scrutinize (and be highly skeptical of) calls for consistency. Every misguided attempt at consistency creates 1) teams working around the rules to get their work done, 2) teams sacrificing results to check a box, or 3) a lot of busy/admin work that no one gets any value out of. Always shoot for the smallest number of things to hold consistent.
Avoid oversimplified models. Cascades and formal work nesting structures rarely reflect reality. They look good on paper but break easily. One team's Epic is seldom another team's Epic, and sometimes you're working with a simple atomic task—it is not connected or nested under anything. Be especially skeptical of anything that attempts to map to the organizational structure and hierarchy (e.g., department goals subdivided into team goals). I've rarely seen teams only do things that relate to their department goal. You are constantly running into many-to-many relationships, not a strictly directed graph.
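The many-to-many point can be made concrete with a tiny sketch (all item and goal names below are hypothetical, purely for illustration). A cascade assumes every work item rolls up to exactly one parent goal; real portfolios rarely satisfy that assumption:

```python
# Hypothetical illustration: work items often map to goals many-to-many,
# so a strict tree (each item has exactly one parent) cannot represent them.
from collections import defaultdict

# Each work item can contribute to several goals -- or to none at all.
contributes_to = {
    "migrate-billing-service": ["reduce-churn", "cut-infra-cost"],
    "new-onboarding-flow": ["reduce-churn", "grow-signups"],
    "fix-login-typo": [],  # a simple atomic task, nested under nothing
}

# Invert the mapping: which work items serve each goal?
work_for_goal = defaultdict(list)
for item, goals in contributes_to.items():
    for goal in goals:
        work_for_goal[goal].append(item)

# A cascade requires exactly one parent goal per item; check that here.
is_strict_tree = all(len(goals) == 1 for goals in contributes_to.values())
print(is_strict_tree)  # False: this portfolio is a graph, not a tree
```

The check fails for both reasons the post names: some items serve multiple goals, and some serve none, so forcing the data into a department-goal hierarchy means either dropping edges or inventing them.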
Integrate with rituals, habits, etc. Tools "work" because of rituals, habits, incentives, and behaviors. A bad tool will defeat your best intentions, and even a good tool is worthless if it isn't integrated with what matters, what is expected, and what is happening.
Be open to mixing and matching tools. It is extremely difficult to find a "do it all" tool; any experienced product manager knows this. A product optimized for X will not do Y well. A tool built to supercharge front-line collaboration will likely be crap at high-level strategic deployment, while a tool meant for finance to think high level about capex/opex will almost by definition feel like a checkbox data-entry task for the front-line. Consider opting for best-in-class tools that are fit for purpose. When in doubt, bias to the needs of the front lines, and don't prioritize theoretical, org-spanning reporting requirements above front-line pain points.
Document first as a test. Imagine you were to write a manual for "How we work here at Acme." You might have a quick overview deck and then something more detailed. Documenting first and testing that documentation is a good first step before charging ahead with a tool and imagining things will work out. You might find, when you present/share, that your plan for keeping this thing up to date is wildly unreasonable.
Plan to change! I've never seen the real world of work at a company stay the same for an extended period of time. You want that change, even if it drives your team crazy. Realize that your categorization schemes, fields, structures, hierarchies, rituals, etc. are going to change. Ambiguity can be your friend—you don't need to lock everything down at first.
Be willing to start over. Too many teams go into zombie process mode. On paper, things work one way, but in reality, they work in a different way. Be willing to claim tool bankruptcy and wipe the slate clean. Bonus: You get to do a big clean-up and tame the cruft.
I will be running the next iteration of How to Run an Effective Prioritization Activity on Thursday, December 12th, at 8AM PST (US), 11AM EST (US), and 4PM (16:00) GMT. Registration is now open. https://maven.com/s/course/272d87a145
Good list, 4, 6 and 8 in particular resonate.
Good list. (3) resonates particularly for me. I worked at an Engineering org where Dev wanted to use on-demand cloud-based environments for stress-testing - standup, stress-test, tear-down. The DevOps Director basically said it had to be fully automated before being usable, and brought in new staff to do it all in Terraform (1) for consistency with other efforts (4). Eng didn't get a proof of concept for a year and the idea went nowhere. Sigh!
Elon Musk recently tweeted about optimizing Engineering processes:
1. Make better requirements
2. Delete unnecessary processes
3. Simplify or optimize
4. Accelerate cycle time
5. Automate
Note the importance of this order - don't do it in reverse! Automating or optimizing first risks "the most common mistake of a smart engineer": optimizing a thing that should not exist.