Why We Love Risk Matrices
There’s a reason risk management has been canonized across federal programs. It works – at least in domains that behave predictably. Whether you’re building a satellite or migrating legacy data, the discipline of risk management is indispensable. You can scope it. Score it. Mitigate it.
But that same structure can become a liability when the system you’re managing isn’t just dynamic—it’s adaptive.
AI doesn’t operate within the boundaries that risk matrices were built to describe. And the more we lean on those matrices for assurance, the more we risk governing the unknown with tools built for the known.
Real-World Example: When Risk Scores Miss the Story
In one federal use case, an AI-enabled triage tool was deployed to route citizen benefit requests. The program team executed a textbook governance process: every functional risk was logged, scored, and labeled “low” or “moderate.” Every compliance requirement was met. Testing passed.
And yet, within two weeks of production, certain demographic groups experienced a 27% longer resolution time.
Not because of a failure in risk tracking.
Because of a pattern no one had considered – a historical correlation in the training data that led the model to reroute claims in ways that looked logical to the algorithm but were invisible to human reviewers until the damage was already done.
The system didn’t break its rules. It followed them – into a blind spot.
This is what happens when we treat adaptive behavior as if it’s just a variation of known risk. It’s not. It’s a different species of problem altogether.
When Familiar Tools Become False Comfort
In high-compliance environments, it’s tempting to equate structure with security. The logic goes: “We have a governance framework. We’ve quantified our risks. We’ve reduced uncertainty.”
But what’s really been reduced is our sensitivity to surprise.
We saw this play out years ago in the private sector, where a well-known algorithmic hiring tool penalized resumes based on inferred gender patterns, even though gender was never an explicit input. Or where a credit scoring model assigned drastically different credit limits to spouses with nearly identical financial profiles.
Neither of these failures triggered alarms. Why? Because from a risk standpoint, everything looked stable. But these weren’t failures of data. They were failures of assumption.
And the traditional risk model didn’t see them coming – because it was never designed to.
Enter Cynefin and Complexity Theory
This is where complexity theory becomes essential.
Unlike complicated systems, which can be broken down into predictable parts, complex systems involve interactions that can’t be reduced without losing meaning. These systems have emergent behavior. They evolve. They surprise.
The Cynefin Framework, developed by Dave Snowden, offers a lens for understanding this. It separates problems into five domains:
- Clear (sense → categorize → respond; best practices apply)
- Complicated (sense → analyze → respond; expert analysis needed)
- Complex (probe → sense → respond; patterns emerge only after intervention)
- Chaotic (act → sense → respond; stabilize first)
- Confused/Disorder (no clear categorization yet)
Most federal governance frameworks operate confidently in the clear and complicated domains. AI, by contrast, lives squarely in the complex domain. That means you can’t rely on root-cause analysis alone. You need sensemaking. Probes. Safe-to-fail experiments. Feedback loops.
You need governance practices that aren’t just prescriptive – they’re adaptive.
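To make that concrete, here is a minimal sketch of what a safe-to-fail probe might look like in code. It assumes a hypothetical routing service where a small slice of traffic can be diverted to a candidate policy and switched off automatically when a guardrail metric moves; every name and threshold is illustrative, not a description of any actual agency system.

```python
import random
from collections import defaultdict

# Minimal sketch of a safe-to-fail probe: expose a small, reversible slice of
# traffic to a candidate routing policy, watch a guardrail metric, and fall
# back automatically if it trips. All names and thresholds are illustrative.

PROBE_FRACTION = 0.05          # only 5% of requests see the new behavior
GUARDRAIL_GAP_HOURS = 4.0      # tolerated resolution-time gap between groups

probe_outcomes = defaultdict(list)   # group label -> resolution times (probe arm only)
probe_enabled = True


def route(request, baseline_policy, candidate_policy):
    """Route most traffic normally; divert a small slice to the experiment."""
    if probe_enabled and random.random() < PROBE_FRACTION:
        return candidate_policy(request), "probe"
    return baseline_policy(request), "baseline"


def record_outcome(arm, group, hours_to_resolve):
    """Feedback loop: sense outcomes per group, respond if the guardrail trips."""
    global probe_enabled
    if arm != "probe":
        return
    probe_outcomes[group].append(hours_to_resolve)
    averages = {g: sum(v) / len(v) for g, v in probe_outcomes.items() if v}
    if len(averages) >= 2 and max(averages.values()) - min(averages.values()) > GUARDRAIL_GAP_HOURS:
        probe_enabled = False   # the experiment carries its own stop condition
        print("Guardrail tripped; probe disabled pending human review:", averages)
```

The specific metric matters less than the shape: the experiment is bounded, observed, and able to stop itself before a reviewer ever opens a dashboard.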
From Measurement to Monitoring
This shift requires more than new tools. It requires a new posture:
Risk is about prevention. Uncertainty demands observation.
You still need your risk registers and thresholds. But you also need real-time telemetry on what your models are doing. You need explanation layers. You need to simulate edge cases, track user impact, and monitor for behaviors that haven’t occurred yet – but will.
Most importantly, you need to stop asking, “What’s the likelihood this will fail?” And start asking, “How will we know if it’s failing in ways we didn’t expect?”
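One way to keep answering that question after go-live is a standing check on outcome disparity rather than a one-time pre-deployment score. The sketch below assumes only that resolved cases are logged with a resolution time and a privacy-reviewed group label; the function name and the 15% threshold are placeholders, not a prescribed standard.

```python
from statistics import mean

# Minimal sketch of a recurring outcome monitor. Thresholds and labels are
# illustrative; the point is that the check runs against live outcomes.

RELATIVE_GAP_LIMIT = 0.15   # flag any group running >15% slower than the overall mean


def disparity_check(resolved_cases):
    """resolved_cases: iterable of (group_label, hours_to_resolve) tuples."""
    by_group = {}
    for group, hours in resolved_cases:
        by_group.setdefault(group, []).append(hours)

    overall = mean(h for hours in by_group.values() for h in hours)
    flags = {}
    for group, hours in by_group.items():
        gap = (mean(hours) - overall) / overall
        if gap > RELATIVE_GAP_LIMIT:
            flags[group] = round(gap, 3)
    return flags   # an empty dict means nothing unexpected surfaced this window


# Run on a sliding window (e.g., daily), not once at deployment.
window = [("A", 30), ("A", 34), ("B", 41), ("B", 45), ("C", 31)]
print(disparity_check(window))
```

The value is in the cadence: run continuously, a check like this turns a gap such as the 27% one described above into a flag within days rather than a finding in a later audit.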
Up Next: Part III – Governing in the Fog
In Part III, we’ll move from principles to practice – exploring how to build systems that sense, learn, and adapt under real-time conditions. Because governing AI isn’t about locking it down. It’s about keeping up with what it becomes.