Moral Mazes and AI: Why Alignment Fails
The patterns that break organizations also break AI systems. Lessons from 15 years of systems thinking.
The Illusion of the Technical Fix
We currently treat AI alignment as a strictly mathematical and technical problem. The assumption is that if we just tune the weights, refine reinforcement learning from human feedback (RLHF), and write better system prompts, the model will behave perfectly.
But what happens when you drop a perfectly “aligned” AI into a profoundly dysfunctional organization?
The Moral Maze
In his classic study Moral Mazes, sociologist Robert Jackall observed that large corporations function as “patrimonial bureaucracies”: they resemble feudal courts more than rational engineering systems. Survival and promotion depend on social alliances, perception management, and blame avoidance rather than objective accountability.
In these environments:
- Ambiguity is a Survival Mechanism: Middle managers keep processes vague because explicit documentation creates a “paper trail of blame.”
- Comfort over Truth: Decisions are made based on what makes leadership comfortable, not what the data actually says.
When Deterministic AI Meets Political Ambiguity
Institutional AI demands explicit, deterministic processes: to automate a decision, you must first codify exactly how it is made.
When you attempt to deploy Institutional AI in a “Moral Maze” organization, the organization mounts a violent immune response. The AI threatens the very camouflage that middle management relies on to survive.
- The AI asks for explicit decision matrices (see the sketch after this list); the organization provides vague mission statements.
- The AI surfaces verifiable truths; the organization relies on political narratives.
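To make the mismatch concrete, here is a minimal sketch of what an explicit decision matrix looks like once codified. Python is used for illustration; the RefundRequest type, the rule names, and every threshold are hypothetical, not drawn from any real policy.

```python
# A minimal sketch of an explicit decision matrix: every input maps to
# exactly one outcome, and the rules are readable by anyone in the org.
# All names and thresholds are hypothetical illustrations.

from dataclasses import dataclass

@dataclass(frozen=True)
class RefundRequest:
    amount_usd: float
    customer_tenure_days: int
    prior_refunds_this_year: int

def decide_refund(req: RefundRequest) -> str:
    """Deterministic policy: the same input always yields the same decision."""
    if req.prior_refunds_this_year >= 3:
        return "escalate_to_human"   # repeat refunders get a human review
    if req.amount_usd <= 50:
        return "auto_approve"        # low-risk amounts approve immediately
    if req.amount_usd <= 500 and req.customer_tenure_days >= 365:
        return "auto_approve"        # trusted long-term customers
    return "escalate_to_human"       # everything else goes to a person

# The decision is auditable precisely because the policy is explicit.
print(decide_refund(RefundRequest(30.0, 10, 0)))    # auto_approve
print(decide_refund(RefundRequest(400.0, 800, 1)))  # auto_approve
print(decide_refund(RefundRequest(400.0, 100, 0)))  # escalate_to_human
```

Notice what this artifact does politically: every branch is a documented commitment, which is exactly the “paper trail of blame” a Moral Maze is structured to avoid.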
The True Barrier to Institutional AI
The real barrier to scaling AI across an organization isn’t technical—it’s political.
When an AI system surfaces data that contradicts a VP’s narrative, who wins? The data, or the politics?
If your organization’s culture relies on “adroit talk” and shifting blame, deploying an LLM to automate processes won’t fix the company. It will simply automate the dysfunction, or, more likely, the project will be quietly killed because it makes too many people uncomfortable.
The Solution: Architectural Redesign
You cannot install AI into an existing bureaucracy and expect transformation. You must build infrastructure that bypasses the bureaucracy.
This requires:
- Executive Mandate for Transparency: A CEO-level commitment to objective truth over management comfort.
- Extraction of Implicit Knowledge: Forcing the organization to document the actual rules of the game, not the stated rules.
- Deterministic Guardrails: Creating systems where the AI’s execution is bounded by codified, undeniable logic, not political whim (a minimal sketch follows this list).
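As a sketch of what a deterministic guardrail might look like: the model proposes actions, but codified logic, not the model and not a manager’s comfort, decides what executes. The action names, allowlist, and limits below are hypothetical.

```python
# A sketch of a deterministic guardrail layer: the model proposes,
# codified logic disposes. Action names and limits are hypothetical.

ALLOWED_ACTIONS = {"send_report", "update_record", "issue_refund"}
MAX_REFUND_USD = 500.0

def validate(action: dict) -> tuple[bool, str]:
    """Return (allowed, reason); a pure function of the proposed action."""
    name = action.get("name")
    if name not in ALLOWED_ACTIONS:
        return False, f"action {name!r} is not on the allowlist"
    if name == "issue_refund" and action.get("amount_usd", 0) > MAX_REFUND_USD:
        return False, "refund exceeds the codified limit; escalate to a human"
    return True, "ok"

def execute(action: dict) -> None:
    allowed, reason = validate(action)
    if not allowed:
        # The rejection cites a rule; it cannot be renegotiated in a hallway.
        print(f"BLOCKED: {reason}")
        return
    print(f"EXECUTING: {action['name']}")

# The model's output is only a proposal until the guardrail passes it.
execute({"name": "issue_refund", "amount_usd": 120.0})   # EXECUTING
execute({"name": "issue_refund", "amount_usd": 9000.0})  # BLOCKED
execute({"name": "delete_database"})                     # BLOCKED
```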
AI alignment is easy. Organizational alignment is the actual hard problem.