Somewhere in your organization, you have a completely well-meaning, qualified, empathetic executive—maybe the CEO, perhaps the CFO, someone—screaming (quietly) to themselves, "What the hell is going on?"
It's obvious something is wrong—to everyone—but no one can seem to give them a straight answer on what it is (though everyone has a theory). And when that executive asks for someone, anyone, to take some sort of accountability for the state of things, they get bombarded with self-serving explanations and hand-waving.
Here is what is likely happening:
The more dysfunction in the environment, the harder it is to understand what's going on. And the more dysfunctional the environment gets, and the more pressure there is for answers, the more likely it is that the approaches to finding those answers will be inadequate and flawed, and may even make the situation worse. It is a wicked loop.
Note: By "dysfunction" here, I literally mean the proportion of things not functioning or inhibited from functioning. Sometimes, the dysfunction is overt—low psychological safety and toxic people—but more often than not, it is the accumulation of many smaller things clogging up feedback channels, observability, motivation, etc.
"I'm done with nuance! Just give me a simple frickin' metric!"
For example, I recently met with an engineering team that had opted to use pull request count as their key "objective" performance metric. Senior leadership had gone with this metric despite pushback from many people on the team. The counter-proposal was a more nuanced approach combining impact, various inputs, assessments from team leads, etc. However, senior leaders were wary: the relationship between design and product management was poor, the pressure for a responsible, objective metric was high, and trust in the engineering org was low. So the best thing they could come up with, something they had at least marginal control over and that checked all the boxes (while in reality checking none), was pull request count.
Unsurprisingly, engineers on the team were openly discussing the optics around this metric and how to "game it." Importantly, the senior engineering leaders weren't stupid, but they went with the best-worst option. I can guarantee you that the next discussion, in a quarter or two, will be, "Why is that metric so high, yet we aren't getting anything done?"
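To make that future conversation concrete, here's a toy simulation in Python (all numbers are invented; this has nothing to do with the actual team) showing how a PR-count target can climb while real output stays flat:

```python
# Toy sketch of Goodhart's Law in action (all numbers are invented):
# once PR count becomes the target, the same amount of real work gets
# sliced into ever-smaller pull requests, so the metric climbs while
# delivered value stays flat.

REAL_WORK_UNITS = 20  # hypothetical units of actual value shipped per quarter

for quarter in range(1, 5):
    # Assume engineers learn to split work more thinly each quarter.
    slices_per_unit = 1.0 + 0.5 * (quarter - 1)
    pr_count = round(REAL_WORK_UNITS * slices_per_unit)
    print(f"Q{quarter}: {pr_count} PRs merged, {REAL_WORK_UNITS} units of real value")

# Output: Q1: 20 PRs ... Q4: 50 PRs, with real value flat at 20.
# The metric is "up and to the right" while actual output hasn't changed.
```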
I've seen this play out with "keeping promises," product reviews, status checks, revenue per engineer, PRs, throughput, flow metrics, "progress to plan," and a whole bevy of metrics and practices.
One of the biggest myths is that when trust is low, you can somehow engineer a process that will "surface reality." I've fallen for this trap over the years. I imagined that if we "just" visualized work, kept tabs on progress, and "faced reality," then somehow, magically, reality would emerge.
It never emerged.
I'm embarrassed to admit it, but I have also been involved in schemes to essentially "smoke out" teams that weren't doing great work. Leaders kept dreaming up ways to definitively prove, once and for all, that Team X wasn't pulling its weight. This typically meant holding a lot of variables constant, setting up a "game" everyone agreed was playable, building mutual agreement that the game was fair, and then playing it.
It never worked.
So what can you do?
Act sooner rather than later (to reset trust) before the shit hits the fan.
Accept that efforts to understand the truth will be unsuccessful unless paired with efforts to regain trust and confidence.
Accept that there is likely no one root cause for your situation. You can still improve things, just not by fixing that one "bug."
Don't shoot the messenger(s). Despite the mess, you have squeaky wheels doing their best to surface the issue. They might not be the most eloquent or diplomatic, but you're losing one of your only paths to reality if you shoot these messengers.
Avoid making the situation worse with trust-proxy metrics and efforts to appease other departments with oversimplified versions of reality.
Objectivity does not cure low trust. In fact, in most cases, the pursuit of objectivity is a byproduct of lacking the trust to discuss nuance and context.
Trust and psychological safety unlock your use of data.
The Cutler Corollary to Goodhart's Law:
In environments with high psychological safety, trust, and an appreciation for complex sociotechnical systems, when a measure becomes a target, it can remain a good measure because missing the target is treated as a valuable signal for continuous improvement rather than failure.
Besides leading to the Cutler Corollary (which I'm going to start quoting!), your observations also underline the doubly seductive nature of metrics, which (falsely) reassure you that you're on a path out of the chaos and (falsely) tell you that there is a "reality" to measure in the first place.
But you don't say how to build the trust and psychological safety required to benefit from the Cutler Corollary. I know what I think about this topic (Jeffrey Fredrick and I wrote a whole book about our opinions on it!), but I wonder: what have you seen work?
So, what's this trust thing and where can I buy some?
(I was going with "which metric for trust helps me win?" but that seemed over the top.)
Knowing how to earn, cultivate, and encourage trust, and how to recognize it when you come across it, is a cool bundle of skills. Doing that quickly, efficiently, and together with others isn't obvious. It can be a highly political skill, but there are many pools of knowledge on the subject to draw from; Nonviolent Communication is one place to start.