TBM 390: Governance by Principle, Not by Template
I received a question recently:
“My company has a highly detailed process involving project codes and program codes, tracking time, highly detailed (and premature) estimates, and lots of very precise reporting with dollar figures down to the dollar. Someone told me this is because of accounting and governance requirements. Can that be true?”
I started writing a reply, and it turned into this post. I can think of very few topics more fundamental to successfully operating a technology business, especially one trying to become more product and platform centric.
There are so many misunderstandings and “but we’ve always done it that way”s.
I’ve had people swear up and down that governance framework X explicitly requires projects and programs, or that you can’t “do governance” using durable objects like capabilities, products, value streams, and so on.
It just ain’t so.
Project and Program Coding
Project and program codes exist because companies need governance, not because GAAP or IFRS require them. Financial accounting standards focus only on revenue, expenses, assets, and liabilities. They do not care about internal constructs like “Next-Gen Platform Revamp” or your strategy pillars.
Project and program codes, allocations, etc., are governance mechanisms, not accounting categories.
When teams see overly precise cost figures in reports (for example, “Initiative A: $356,000”), they often assume this level of precision is required for accounting. It isn’t. Outside of capitalization (which is important and does appear in the financial statements), these numbers never show up in the books.
They’re governance artifacts and mechanisms. They are internal models we invented to help us make sense of complex work, not requirements imposed by accounting standards.
Governance Mechanisms
Governance mechanisms fall into two categories: those we self-impose internally and those required externally. But even the “set in stone” external ones are far more flexible than people think. Basel III tells banks what to control, not how to structure portfolios. Government procurement rules set principles such as fairness and auditability, but leave most of the internal machinery to the agency.
The spirit of most governance frameworks is that the company is acting intentionally and responsibly. The intention, design, repeatability, and auditability of the process typically take precedence over the actual details. This is the critical point: intention and defensibility matter more than mechanical adherence to a process.
Procedural vs. Substantive Legitimacy
Imagine a company with a rigorous “business case” ritual required before any work begins. It forces premature convergence on solutions, encourages inflated estimates, and produces a dozen teams all claiming credit for the same metric (which is impossible to move by 3,000 percent). The work cuts across 15 teams, triggering massive context switching and draining capacity from more focused initiatives. On paper, the company has followed every rule in its combined self-imposed and externally imposed governance framework. In reality, it has done a disservice to investors, employees, customers, and taxpayers because the process is defensible on the surface but fails any reasonable principled analysis.
The difference is often referred to as the gap between procedural legitimacy and substantive legitimacy. Procedural legitimacy is performative box-ticking. Substantive legitimacy is the degree to which actions genuinely advance the intended policy goals.
I meet a lot of product builders and makers who have a deep discomfort with how their companies govern technology investments. They cannot always articulate it, but they sense they are participating in an exercise in procedural legitimacy that is hurting customers, team members, and investors. “If only investors understood what is happening here, they would flee en masse,” a VP of Product told me recently. The waste is off the charts, yet on paper, everything is being done by the book.
That is procedural vs. substantive legitimacy.
What you are feeling is real.
Nothing Says You Have To Use Projects/Programs
Pick any governance guidance you like: OECD, CIPFA/IFAC, ERM, NIST, ISO 38500, ISO 9001. None of them require projects, programs, stage gates, or portfolios organized around initiatives. But they DO require accountability, evidence, risk management, a documented process, checkpoints, clearly defined decision rights, and traceability.
In theory, nothing prevents a company from adopting a governance model that aligns with how products, platforms, and tech-enabled businesses actually work. Projects are just one possible unit. Capabilities, domains, journeys, product lines, problem spaces, and platforms are often far more durable and better support coherent (and responsible) investment.
Each of these can be:
Clearly defined with explicit boundaries and ownership
Measured with KPIs, OKRs, service levels, risk signals, etc.
“Governed” through repeatable checkpoints and pivot, proceed, pause, sunset decisions
Audited through clear decision trails
Aligned to strategy, investment horizons, and business constraints
Allocated capacity and funding in a transparent, consistent way
Reviewed and adjusted over time as evidence, priorities, and context change
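To make the list above concrete, here is a minimal sketch of how a durable object could carry the same governance affordances: explicit ownership and boundaries, KPIs, capacity, and an auditable trail of pivot/proceed/pause/sunset decisions. All names (DurableUnit, the Payments example) are invented for illustration, not taken from any framework.

```python
from dataclasses import dataclass, field
from enum import Enum

class Decision(Enum):
    PROCEED = "proceed"
    PIVOT = "pivot"
    PAUSE = "pause"
    SUNSET = "sunset"

@dataclass
class Checkpoint:
    date: str
    decision: Decision
    rationale: str  # the auditable decision trail

@dataclass
class DurableUnit:
    name: str            # a capability, domain, product line, or platform
    owner: str           # explicit ownership
    boundary: str        # what is in and out of scope
    kpis: dict = field(default_factory=dict)
    allocated_capacity_fte: float = 0.0
    checkpoints: list = field(default_factory=list)

    def record_checkpoint(self, date: str, decision: Decision, rationale: str) -> None:
        """Every governance decision is recorded with its reasoning."""
        self.checkpoints.append(Checkpoint(date, decision, rationale))

# Usage: governing a capability instead of a project
payments = DurableUnit(name="Payments", owner="payments-team",
                       boundary="Checkout, billing, refunds")
payments.record_checkpoint("2025-Q1", Decision.PROCEED, "Churn-risk KPI trending down")
```

The point is not the code itself but that nothing here requires a project or program construct; the same auditability lives on a durable object.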
So what stops companies from doing this?
Legacy frameworks from when they outsourced most of their software development or treated IT as a centralized cost center, plus pure inertia
Governance models borrowed from other parts of the business and shaped by constraints that have nothing to do with product or platform work (for example, procurement policies designed for construction or manufacturing projects)
Fear and uncertainty about how to hold teams accountable, along with inexperience around meaningful measurement
A tendency to cling to the false precision of allocated hours to projects, which feels accounting-like even though it is not
Program and portfolio teams that only know how to use projects and programs as the target of investment
Not Just Stodgy Enterprises
Switching gears for a second. This is not just a legacy-enterprise issue. A lot of the hype around the “product operating model” has very little coherence with how many tech companies actually think about investment and allocation.
You still find spreadsheets full of initiatives, capacity, and time-tracking. Finance keeps asking teams to slice the data in new ways: customer segments, horizons, products, value drivers. One day, it is BAU vs. strategic. The next day, it is the Kano model. The day after that, there are five new allocation categories. The CTO wants five; the CEO wants 20. And sometimes all of it is required at once.
Teams near the “edge” with customers might have an easier time, but all bets are off for platform teams deep in the value chain. They have the same complaints about perverse incentives and the gap between procedural legitimacy and substantive legitimacy.
You also find perverse models where companies pretend there are separate “business units” (with VPs and GMs) for the sake of simplified P&L tracking and governance, even though the underlying work is far more integrated and platform-like than anyone wants to admit.
I mention all of this to assure you that the issues are pervasive.
Principles
OK. So it is hard for enterprises trying to modernize. It is hard for fast-growing digital product companies. Is this just something companies have to live with? Yes. But they don’t have to keep shooting themselves in the foot.
Just because it is hard doesn’t mean they can’t do a better job.
We should adapt our risk management and investment approaches to our context. Governance is fundamentally context-specific risk management. When the context changes, such as technology, architecture, the structure of teams, or the level of uncertainty, the governance model eventually has to change as well. A governance model built for a slow, predictable environment will not fit a fast, interdependent one.
Software development sits near the dynamic end of the spectrum between static and highly dynamic optimization problems. Everything changes as work unfolds. Markets evolve, customer behavior shifts, dependencies move, architectures adapt, regulations update, and team skills develop. Even the shape of the portfolio itself changes. Treating products and platforms as if they were static projects to optimize is a category error. The nature of the work requires a governance model suited more to dynamic rather than static optimization.
Below, I have listed fourteen ways you might need to adapt governance approaches for more product and platform-centric technology investments.
Acknowledge the durable and long-evolving nature of the investment
Reflect the explore → expand → extract lifecycle
Acknowledge the tension between talent fungibility and sustainable teams
Incorporate the mix of inputs, outputs, and leading and lagging indicators
Make causal theories explicit and evolve them over time
Include provisions for platforms and other deep capabilities
Recognize the varied shapes of product work
Support work that truly is project-shaped
Value research and structured risk reduction
Account for shifting moats and evolving differentiation
Treat investment as a portfolio of risks, not isolated bets
Respect the realities of team capacity, cognitive load, and scaling limits
Recognize different team types while accepting that these are models, not permanent installations
Flex to both high-dependency and low-dependency work without forcing a single model
Acknowledge the durable and long-evolving nature of the investment
A product, platform, capability, or domain is not a “project.” It unfolds over years, with compounding effects, shifting constraints, and evolving customer needs. A governance model has to recognize that you are stewarding something persistent, something that will outlive any single initiative, and design oversight accordingly. Governance must reward compounding effects, not just short-term outputs.
Example Mechanism: Annual product or platform “health and trajectory” review that evaluates long-term progress, compounding value, and future posture.
Reflect the explore → expand → extract lifecycle
Every durable product, platform, or capability moves through phases of discovery, expansion, and extraction. Governance has to account for the investment unit’s current position in that lifecycle. What makes sense in “explore”, like loose bets, rapid iteration, and wide uncertainty, is very different from what makes sense in “extract,” where stability, optimization, and predictable performance matter. Oversight must shift with the phase.
Example Mechanism: Lifecycle-based approval paths where each phase has different evidence requirements, metrics, and decision criteria.
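A lifecycle-based approval path can be as simple as a lookup from phase to evidence requirements and decision criteria. The sketch below is illustrative: the phase names come from the lifecycle above, but every requirement and criterion listed is an invented example, not a recommendation.

```python
# Hypothetical mapping of lifecycle phase to gate definition.
# All evidence items, metrics, and criteria are illustrative.
LIFECYCLE_GATES = {
    "explore": {
        "evidence": ["problem framing", "qualitative signals"],
        "metrics": ["learning velocity"],
        "decision_criteria": "cheap, reversible bets; wide error bars tolerated",
    },
    "expand": {
        "evidence": ["validated demand", "unit economics trend"],
        "metrics": ["adoption", "retention"],
        "decision_criteria": "scale what is working; fund capacity deliberately",
    },
    "extract": {
        "evidence": ["stability record", "cost-to-serve"],
        "metrics": ["reliability", "margin"],
        "decision_criteria": "optimize and protect; question new feature spend",
    },
}

def approval_path(phase: str) -> dict:
    """Return the gate definition for a unit's current lifecycle phase."""
    if phase not in LIFECYCLE_GATES:
        raise ValueError(f"Unknown lifecycle phase: {phase}")
    return LIFECYCLE_GATES[phase]
```

The design choice worth noticing: the phase, not the work item, selects the oversight, so a unit's governance changes as it moves through the lifecycle.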
Acknowledge the tension between talent fungibility and sustainable teams
You cannot endlessly hot-swap people into and out of a product, platform, or capability without degrading its long-term health. Some continuity is essential for maintaining context, quality, and momentum. At the same time, organizational mobility is real and should be encouraged. A governance model has to balance protecting the stability required for sustained outcomes while allowing individuals to grow, rotate, and pursue new opportunities.
Example Mechanism: Minimum team continuity thresholds that limit rotation and protect core context while still allowing normal mobility.
Incorporate the mix of inputs, outputs, and leading and lagging indicators
Products and platforms deliver value at uneven rates. The outcomes you see today were set in motion months or years ago. Effective governance has to work with this reality. It must incorporate both leading signals (inputs, early indicators, capability maturation) and lagging results (outcomes, impact, financial performance) without pretending they move in lockstep. A good model tracks the entire chain, not just the final outcomes or the starting point (delivery).
Example Mechanism: A balanced metrics packet for every unit that reports early signals, delivery metrics, and outcome/impact trends together.
Make causal theories explicit and evolve them over time
Every investment rests on the belief that actions today will create results tomorrow. Early on, that theory may be rough or low-confidence, but it should still exist. Through discovery, learning, and evidence, it becomes sharper and more reliable. A governance framework needs to surface these causal assumptions, test them, revise them, and build confidence over time.
Example Mechanism: A living investment brief that states each unit's causal theory (actions → expected outcomes) and is revised at every checkpoint as evidence accumulates.
Include provisions for platforms and other capabilities deep in the value chain
A governance framework cannot be biased toward short-term outcomes or teams closest to customers. Platforms, shared services, and deep technical capabilities generate value on longer, less linear timelines. Their impact is often indirect, compounding, and mediated through many other teams. Governance has to recognize this pattern and ensure these units are not starved simply because their value is less visible or less immediate.
Example Mechanism: A dedicated platform investment lane funded based on leverage, dependency reduction, and long-term operational efficiency.
Recognize the varied shapes of product work
Much of product work is iterative, experiment-driven, and risk-bearing — but not all of it. Some efforts require exploration and optionality, while others are straightforward builds, migrations, or optimizations. A governance framework has to understand these different shapes. It cannot treat all product work as experimentation, nor can it demand linearity where iteration is required. It needs to differentiate and govern accordingly.
Example Mechanism: A work-type classifier at intake that routes exploratory work, migrations, enabling work, optimizations, and compliance differently so each receives governance appropriate to its shape.
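An intake classifier of this kind can start as a simple routing table. In this sketch, the work types come from the paragraph above, but every lane description is an invented placeholder; a real version would point at actual processes.

```python
# Illustrative intake routing: each work type gets governance
# appropriate to its shape. Lane descriptions are assumptions.
ROUTES = {
    "exploratory": "hypothesis brief and lightweight review",
    "migration": "project lane: charter, finite scope, end date",
    "enabling": "platform lane: leverage-based review",
    "optimization": "metrics-driven review",
    "compliance": "mandatory lane: deadline and audit evidence",
}

def route(work_type: str) -> str:
    """Map a work type to the governance treatment it should receive."""
    try:
        return ROUTES[work_type]
    except KeyError:
        # Unclassified work gets a conversation, not a default process.
        return "triage: classify before governing"

lane = route("migration")
```

The useful property is the fallback: work that does not fit a known shape triggers classification rather than being forced through whichever process is the default.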
Support work that truly is project-shaped
Not everything fits the shape of a durable product or platform. Some efforts really do have a clear start and finish, a defined outcome, and a temporary assembly of people. A governance framework cannot force project language onto products and platforms, but it also cannot force product language onto inherently project-shaped work. It needs a range that supports both.
Example Mechanism: A dedicated project lane with lightweight charters, finite scopes, and end-of-life criteria for work with a clear beginning and end, such as migrations.
Value research and structured risk reduction
Discovery is not waste. Research, validation, and risk reduction accumulate into real assets: clearer problem definition, tighter bets, de-risked approaches, and higher-quality decisions. A governance framework has to recognize this compounding value. It should reward structured discovery rather than treating it as a hurdle to “get through” before the real work begins.
Example Mechanism: A structured discovery brief that explicitly documents areas of uncertainty, the potential value of reducing each uncertainty, the learning questions being pursued, the research methods selected, and a review of how each research cycle changed the team’s confidence, options, or direction.
Account for shifting moats and the evolving landscape of differentiation
Not every capability needs to be world-class. Some are strategic differentiators, others are table stakes, and many shift categories over time. Governance has to reflect this strategic reality. It should help leaders understand where excellence matters, where “good enough” is fine, and how moats evolve as markets, competitors, and technologies change. A durable governance model must adapt as the basis of differentiation moves.
Example Mechanism: Twice-yearly capability categorization (differentiating, parity, hygiene, retiring) that directly guides investment levels.
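The categorization can be made mechanical by mapping each category to an investment posture. The four categories come from the mechanism above; the posture labels and the example capabilities are invented for illustration.

```python
# Illustrative mapping from capability category to investment posture.
INVESTMENT_POSTURE = {
    "differentiating": "invest to lead",
    "parity": "invest to match",
    "hygiene": "minimize; buy or standardize",
    "retiring": "fund the exit, nothing else",
}

def review(capabilities: dict) -> dict:
    """Given capability -> category, return capability -> posture."""
    return {name: INVESTMENT_POSTURE[cat] for name, cat in capabilities.items()}

# A twice-yearly review re-runs this as categories shift over time.
postures = review({"search": "differentiating", "invoicing": "hygiene"})
```

Because moats move, the interesting output of the review is usually the capabilities that changed category since last time, not the mapping itself.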
Treat investment as a portfolio of risks, not isolated bets
Every product, platform, and capability has a different risk profile, including uncertainty around time-to-value, technical risk, market risk, dependency risk, and operational exposure. Governance cannot evaluate these units in isolation. It has to see the whole portfolio: the mix of bold plays, safer bets, long-term enablers, short-term wins, and foundational capabilities. Good governance balances the risk landscape, not just individual decisions.
Example Mechanism: A portfolio risk heatmap showing each unit’s technical, market, dependency, and delivery risk profile to guide allocation.
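A risk heatmap can be as unsophisticated as scoring each unit on a few dimensions and ranking by total exposure. Everything below, including the unit names, the dimensions chosen, and the 0-3 scale, is an illustrative assumption.

```python
# Sketch of a portfolio risk heatmap: score each unit on a few risk
# dimensions (0 = low, 3 = high). All numbers are made up.
PORTFOLIO = {
    "checkout":       {"technical": 1, "market": 2, "dependency": 1, "delivery": 1},
    "data-platform":  {"technical": 3, "market": 0, "dependency": 3, "delivery": 2},
    "new-ml-feature": {"technical": 2, "market": 3, "dependency": 2, "delivery": 3},
}

def heatmap(portfolio: dict) -> list:
    """Rank units by total risk so the portfolio mix is visible at a glance."""
    totals = ((name, sum(scores.values())) for name, scores in portfolio.items())
    return sorted(totals, key=lambda pair: pair[1], reverse=True)

ranked = heatmap(PORTFOLio := PORTFOLIO)
```

The value is in the view across units: a portfolio where every row scores high is over-exposed even if each individual bet looked reasonable in isolation.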
Respect the realities of team capacity, cognitive load, and scaling limits
You cannot simply add people to a team and expect it to go faster. Teams have optimal sizes, natural limits, and real cognitive load constraints. Dependencies multiply as headcount grows. Coordination costs rise. Architecture and team design shape the true throughput. A governance framework has to understand these dynamics. It cannot treat “adding people” as a free lever without considering team topology, inter-team relationships, and the structural limits of how work actually flows.
More coordination is not inherently “bad.” Some of it creates real shared value; some of it is pure waste. Governance frameworks must differentiate the two rather than indiscriminately pushing for “less coordination” or “more collaboration.”
Example Mechanism: Dependency value scoring that flags coordination worth supporting and coordination that should be eliminated or redesigned.
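Dependency value scoring can be sketched as a comparison of shared value against coordination cost. The three verdicts mirror the mechanism above; the 1-5 scales, thresholds, and example dependencies are all invented for illustration.

```python
# Illustrative dependency value scoring: keep coordination that creates
# shared value, flag low-value coordination for elimination or redesign.
def score(dependency: dict) -> str:
    """dependency has 'shared_value' and 'coordination_cost' on a 1-5 scale."""
    value, cost = dependency["shared_value"], dependency["coordination_cost"]
    if value >= cost:
        return "support"    # the overhead is buying something real
    if cost - value >= 2:
        return "eliminate"  # a pure tax on both teams
    return "redesign"       # e.g. replace meetings with a contract or API

deps = {
    "shared design system": {"shared_value": 4, "coordination_cost": 2},
    "weekly sync with 15 teams": {"shared_value": 1, "coordination_cost": 5},
}
verdicts = {name: score(d) for name, d in deps.items()}
```

Note the middle verdict: some coordination should not be killed or kept as-is but restructured into a cheaper interface.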
Recognize different team types while accepting that these are models, not permanent installations
Teams have different "shapes." In Team Topologies parlance, they might be stream-aligned, platform, enabling, complicated-subsystem, and so on. Models like Team Topologies are useful, but they are still just models. In practice, these patterns evolve as products grow, architectures shift, and organizational needs change. A governance framework has to acknowledge this fluidity. Team types emerge through the work; they are not static boxes to install and forget.
Example Mechanism: Annual team topology review that checks whether current team shapes still match the architecture and flow of work.
Flex to both high-dependency and low-dependency work without forcing a single model
Some efforts thrive within tightly coupled, multi-team collaboration. Others do best when a small team can move independently with minimal coordination. A governance framework has to support both patterns. It cannot impose a one-size-fits-all structure that penalizes independence or ignores genuine interdependence. Good governance bends to the work’s dependency profile rather than forcing the work to bend to the framework.
Example Mechanism: Two planning modes, one optimized for autonomous teams and one for coordinated, multi-team work, chosen based on dependencies.
Conclusion
I hope this exploration has been interesting. Reflecting on it, I think the biggest issue is intention and operating from first principles. Many practices in technology operations are the byproduct of decades of muscle memory. No one remembers why we do them or what the real rules are. People check boxes instead of focusing on the intent behind the guidelines.
While these issues persist in most companies, even the “best” product companies, the difference is that those companies at least challenge norms and question the assumption that “we’ve always done it that way.” They may slide back into bad habits, but they continue to challenge the status quo.
Governance can be treated like a product built from first principles, focused on goals like risk management and sound investment decision making. And while most companies will end up with a mix of contextually appropriate approaches, there are very few good reasons to fall back on old language and old structures from a bygone era.