March 24, 2026
Governance Isn't the Brakes. It's the Steering.
Ryan McCarty

There's a version of the AI governance conversation that goes like this: the business wants to move fast, the risk and compliance teams want to slow down, and the job of leadership is to find the right balance between them. Speed on one side, safety on the other. Somewhere in the middle, a responsible decision.
It's a reasonable-sounding model. It's also the wrong mental picture, and the organisations that internalise it tend to build governance functions that are exactly as useful as it suggests. Which is to say: occasionally they stop bad things from happening, and reliably they frustrate everyone in the process.
The problem isn't the people. It's the metaphor.
Think about how brakes actually work. They're reactive. They respond to a situation that has already developed. They operate after the direction of travel is established. And their only function is to reduce velocity. A car with excellent brakes is still going wherever the wheel is pointed. The brakes don't influence that. They just determine how fast you get there, or whether you stop before you arrive.
Applied to AI governance, the brakes model produces a function that sits downstream of every real decision. The model is chosen. The use case is scoped. The data is selected. The system is built. And then governance reviews it and either clears it or flags it. At that point, the cost of changing direction is high, the pressure to proceed is intense, and the governance function is arguing against momentum with nothing but concerns and checklists.
This is not a powerful position. It is also not a safe one. It creates the illusion of oversight while concentrating all of the actual risk-shaping decisions in the period before governance was involved.
Steering works differently. It doesn't slow anything down. It determines where the vehicle goes. And it operates continuously, from the moment of departure, not as a checkpoint at the end of the road.
An AI governance function built on this model looks nothing like the brakes version. It's not waiting to review completed work. It's shaping the questions being asked in the first place. Which use cases are being pursued, and why? What data should this system be allowed to touch, and under what conditions? What does responsible success look like for this deployment, and how will we know if we're achieving it?
These are not compliance questions. They are strategic ones. And the organisations that treat them as compliance questions end up with governance that is technically present and operationally irrelevant.
There's a practical dimension to this worth making concrete. When governance enters a project after the foundational choices are made, it faces a specific and predictable problem: the cost of its concerns is visible, and the cost of ignoring them is not. The engineer who has to re-architect a data pipeline to meet a governance requirement can tell you exactly what that will cost in time and resources. The future liability from a poorly governed AI decision is harder to quantify and easier to discount.
This asymmetry doesn't reflect actual risk. It reflects the timing of involvement. And it consistently produces the same outcome: governance requirements get scoped down, deferred, or designed around, not because anyone made a bad decision, but because the structural incentives all point one direction and governance arrived too late to point another.
Early involvement changes the asymmetry. When governance shapes the architecture before the architecture is built, the cost of doing it right is the cost of building it right the first time. That's almost always lower than the cost of rebuilding it later, and far lower than the cost of a failure that becomes visible after deployment.
None of this happens without a specific and deliberate choice by senior leadership. Not a policy. Not a framework document. A choice about where governance sits in the actual decision-making sequence, and what authority it has when it's there.
The organisations that get this right don't have more risk tolerance or less ambition than others. They have a different mental model of what governance is for. They've stopped asking "how do we make sure governance doesn't slow us down?" and started asking "how do we make sure governance is involved early enough to be useful?"
That's a different question. It produces different structures, different roles, different conversations. It produces a governance function that the business actually wants in the room, because it helps navigate rather than brake.
None of this is an argument against moving fast. Velocity in AI deployment is genuinely valuable. The organisations that learn faster, iterate faster, and deploy faster are accumulating real advantages. The goal isn't to slow that down.
The goal is to make sure the speed is going somewhere worth going. A car with no steering doesn't go faster than one with steering. It just goes wherever the road takes it, until the road runs out.
Governance, built right, doesn't touch the accelerator. It holds the wheel. And in a vehicle moving at this speed, in a landscape changing this fast, that's not a constraint. That's the job.