TBM 414: Legibility and Legitimacy
There is a big difference between legibility and legitimacy.
The terms sound related, and they do share a lineage, but they diverge in meaning. Legibility derives from legere, meaning “to read” or “to gather,” referring to what can be seen, interpreted, and made understandable. Legitimacy, by contrast, derives from lex (legis), meaning “law,” and refers to what is considered lawful, justified, and worthy of acceptance.
Legibility is not neutral, despite the surface-level appeal of transparency or simplicity.
As James C. Scott argues, legibility and power are intertwined. Making systems legible requires selective filtering and simplification—boiling people, activities, and relationships down into forms that can be acted upon. In that sense, legibility enables control.
Legitimacy, by contrast, determines whether that control is accepted as justified.
When someone steeped in technology starts talking about “world models,” treats middle management as an information-processing appendage, and advocates for organizational flattening while remaining largely silent on questions of power and control, pay attention. Especially worrisome is the familiar pattern: appeals to decentralization and freedom while significant control is ceded to technology and to those who control it. That’s when you really need to pay attention.
And when people take the bait, hook, line, and sinker, and accept it all as new or innovative—seeing it not just as progress, but as a form of personal freedom—that’s when you really, really, really need to pay attention. Because the rhetorical trick is working.
And it shows up clearly in Jack Dorsey’s recent post, “Hierarchy vs. Intelligence.” The post:
argues that organizational hierarchy is a byproduct of information transmission problems
claims that AI enables greater legibility
assumes legitimacy
avoids discussing who holds power once the system is (more) legible
At a high level, Dorsey’s thesis is that organizational hierarchy exists primarily to route information, and that AI can now perform that function more effectively. If a system can maintain a real-time “world model” of the company and coordinate work directly, then many layers of management become unnecessary, and decisions can move to the edge.
In the context of legibility and legitimacy, my take is that the post:
repackages control as freedom
relies on legibility
skips legitimacy
Dorsey rehashes a familiar set of ideas:
the company as an intelligence system
persistent frustrations with layers and information distortion
aspirations to flatten organizations and push decisions to the edge
the dream of running the company as a system
None of that is new. See: learning organizations, cybernetics, systems thinking, platform thinking, and industrial and managerial traditions. What’s new isn’t the concept, but the claim of feasibility: that AI can finally operationalize it.
Maybe AI can help make a dent—but by how much, and toward what end? Human flourishing? Or humans as edge “meat” operators in a system designed for ultimate legibility and control? For a discussion on the actual feasibility of a company “world model,” see here.
The underlying philosophy matters. The article references Haier, but treats it as just another structural attempt at coordination that fell short because it lacked the technology. But Haier isn’t just a structure. It’s grounded in a clear philosophy about human value and autonomy. You can agree with it or not, but it defines what “better” means and for whom:
Better = more autonomy at the edge
Better = closer to the customer
Better = humans as accountable units of value creation
Dorsey’s post, by contrast, implies “better” in terms of:
faster decisions
clearer visibility
tighter coordination
less distortion through layers
But those aren’t philosophies. They’re optimizations in service of something. They describe how a system might perform, not what it is ultimately for or who it is serving.
Haier defines what counts as value. Dorsey defines how efficiently value flows.
The promise of AI-driven “intelligence systems” reframes increased legibility as empowerment, while shifting control to those who own the model and sidestepping the question of whether that control is legitimate.
In the classic Silicon Valley ethos, technological progress is often treated as self-legitimizing. The playbook: keep quiet on values, or fall back on vague appeals to “freedom” and “progress.”
Will we play along?
This is really about whether we accept AI-powered legibility as a legitimate basis for controlling organizations, or whether we ask harder questions.
One of the defining traits of AI is that it becomes a catalyst for whatever people already believe. If you believe in “re-wilding,” you can imagine how AI will help. If you care about efficiency, autonomy, or decentralization, AI promises a step change. Which is why this is as much about questioning the why as it is about any specific proposal. Not just what becomes possible. But what we choose to accept, and for what purpose.

The idea of middle management as mere transmitters of status is a depressing one. If that’s all your managers are doing, then sure, try to eliminate and automate. But good managers should be coaching, guiding, strategizing, interpreting, leading up, and more. If I ever become a mere status bot, send me right back to IC work.
On the point of legibility... whether consciously or not, the way that Dorsey renders middle management's "primary job [as] information routing" does serve the purpose of rationalizing his argument here. I think for the average CEO, the legitimacy of their own power is taken as given, and therefore invisible to them.