Most organisations are approaching the threshold where automated systems outrun human decision cycles. The ones that survive will have installed the right architecture before that moment arrives.
Former RAF Tornado pilot. Programme Director at Visa, PwC, Kyndryl. Thirty years operating in high-consequence environments where governance failures are not recoverable.
The Signal
Trust in fully autonomous AI agents fell from 43% to 27% in twelve months whilst investment increased. This is not a paradox. Organisations are getting close enough to see how these systems actually behave at speed. The governance infrastructure has not kept pace with the deployment velocity.
Knight Capital Group lost $440 million in 45 minutes in August 2012. An incorrectly deployed trading algorithm executed 4 million trades before any human could intervene. The firm was effectively destroyed before the end of the trading day. No governance process operates faster than a machine-speed cascade.
Governance Escape Velocity is the threshold at which automated system dynamics outrun human decision cycles. Below it, governance works. Above it, only pre-installed architecture matters. Both are required. Most frameworks provide only one.
— Governing AI at Speed: A Constitutional Framework for the Artificialocene
Klarna cut 700 customer service roles. Quality dropped, institutional knowledge was lost, rehiring began. Forrester's 2026 data confirms the pattern: 55% of employers who made AI-driven layoffs already regret the decision. The root cause in every case is the same — AI deployed in human-shaped roles rather than AI-shaped holes.
Most organisations are already near the GEV threshold for their highest-velocity AI systems. The question is not whether a GEV event will occur. It is whether the circuit breakers will be in place when it does.
The Framework
Every governance mechanism requires time to operate. The GEV threshold is where that time runs out. The distinction is not between good governance and bad governance — it is between governance and architecture.
Two-Challenge Rule. Escalation protocols. Risk committees. OODA cycles complete before the outcome is determined. Deliberate human response is possible.
The threshold where automated dynamics outrun human decision cycles.
No governance mechanism operates faster than a machine-speed cascade. The automation is not failing. It is executing the design perfectly. That is the problem.
Three Layers
Install circuit breakers on independent infrastructure. Define Response Horizons. Certify architecture against worst-case scenarios. The Strap Protocol: if the circuit breakers cannot pass the stress test, the system does not deploy.
Physics-as-Code constraint tiers operate deterministically — they cannot be overridden by policy or by humans. This layer is the only governance that works above GEV.
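As an illustration only — the framework does not prescribe an implementation — a Physics-as-Code constraint can be sketched as a deterministic check that runs inside the execution path, with no policy lookup and no override route. All names here (`HardConstraint`, `execute_order`, the notional limit) are hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: the constraint cannot be mutated after deployment
class HardConstraint:
    """A Physics-as-Code tier limit: evaluated in-line, with no override path."""
    max_notional_per_second: float

    def permits(self, notional_this_second: float) -> bool:
        # Deterministic comparison only -- no policy lookup, no human-in-the-loop,
        # so the check completes inside the machine-speed execution cycle.
        return notional_this_second <= self.max_notional_per_second

def execute_order(constraint: HardConstraint, notional_this_second: float) -> str:
    if not constraint.permits(notional_this_second):
        return "HALTED_BY_CIRCUIT_BREAKER"  # fail closed above the limit
    return "EXECUTED"

limit = HardConstraint(max_notional_per_second=1_000_000.0)
print(execute_order(limit, 250_000.0))    # within limit
print(execute_order(limit, 5_000_000.0))  # breach: the breaker trips
```

The design point is that the limit is data compiled into the execution path, not a policy document consulted after the fact — which is what allows it to operate above GEV.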
Leadership by Intent replaces SAFe's permission hierarchy. Eleven constitutional articles. Named Authorising Officers discharge regulatory accountability at the pre-deployment gate — not at every subsequent decision.
Senior Managers are accountable for the quality of the authorisation. Legally defensible, regulatorily compliant, and operationally workable.
Fail Operational, not Fail Safe. Isolate the failed component. Degrade gracefully to Minimum Operating Configuration. RAIM+1 redundancy. Cat 3 autoland equivalent for AI systems.
Air France 447 was recoverable. The crew could not recover it because their manual skills had atrophied. The organisation continues. No halts. Hold the controls.
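A minimal sketch of the Fail Operational pattern described above, under assumed names (`MOC`, `degrade`, and the capability labels are all hypothetical): on component failure the system isolates the failure and keeps operating on a reduced capability set, halting only if it falls below the Minimum Operating Configuration.

```python
# Capabilities that must always run: the Minimum Operating Configuration (assumed set).
MOC = {"risk_checks", "audit_log"}

def degrade(healthy: set[str], failed_component: str) -> set[str]:
    """Isolate the failed component; continue operating on what remains."""
    remaining = healthy - {failed_component}
    if not MOC <= remaining:
        # Only below the MOC does the system stop -- Fail Safe as a last resort,
        # not the first response.
        raise RuntimeError("Below MOC: controlled shutdown required")
    return remaining

capabilities = {"pricing_model", "risk_checks", "audit_log", "auto_hedging"}
capabilities = degrade(capabilities, "auto_hedging")  # isolate, keep operating
print(sorted(capabilities))  # ['audit_log', 'pricing_model', 'risk_checks']
```

The contrast with a Fail Safe design is the order of operations: degrade first, halt only when the MOC itself is breached.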
The Ask
The framework translates into three board-level decisions. Each is specific, testable, and sequenced. These determine whether your organisation is on the right side of the GEV threshold when the first incident occurs.
Adopt the rule: no high-risk AI system deploys without a circuit breaker that operates above GEV on independent infrastructure. If the circuit breaker cannot pass a worst-case stress test, the system does not deploy. The problems are manageable before departure and unmanageable after rotation.
Run a pilot: AI Credit Underwriting is the recommended candidate. Test the pre-deployment gate, the deployment certificate, the Two-Challenge Rule, and the Fail Operational simulation against a live Red Zone system. Resolve the Senior Manager accountability question before it becomes a crisis.
Stop asking "what can we automate?" Start asking "what capabilities are we missing because they require machine speed that no human team could provide at the required scale?" That inventory is the correct starting point for AI deployment — and the answer to the workforce engagement problem.
Advisory Services
Constitutional framework design for AI-enabled organisations. Circuit breaker installation. Physics-as-Code constraint tiers. Strap Protocol certification. Built before the system runs — not after the first incident.
Authorising Officer model translated for regulated industries. Senior Manager accountability mapped against the EU AI Act and FCA model risk guidance. Evidence that stands up to regulatory scrutiny — not just policy documents that do not.
AI transformation programmes that have lost alignment, stakeholder confidence, or delivery rhythm. Diagnostic, replan, restabilise. Track record includes recovering a failing global Salesforce programme and leading a £3bn divestment technology separation.
Authority gradient management across multi-model AI pipelines. Trust boundary definition at handoff points between autonomous components — where most implementations fall over. Immutable audit infrastructure at machine speed.
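One way to make "immutable audit infrastructure at machine speed" concrete — purely a sketch, not the engagement's prescribed design — is an append-only, hash-chained log where each record commits to the previous record's hash, so any after-the-fact edit breaks every subsequent link. Function names and record fields here are illustrative:

```python
import hashlib
import json

def append_record(chain: list, event: dict) -> None:
    """Append an audit record that commits to the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    chain.append({"event": event, "prev": prev_hash,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(chain: list) -> bool:
    """Re-derive every hash; any tampered record breaks the chain."""
    prev_hash = "0" * 64
    for rec in chain:
        body = json.dumps({"event": rec["event"], "prev": prev_hash}, sort_keys=True)
        if rec["prev"] != prev_hash or rec["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev_hash = rec["hash"]
    return True

log: list = []
append_record(log, {"action": "model_handoff", "from": "classifier", "to": "executor"})
append_record(log, {"action": "decision", "id": 42})
print(verify(log))                    # True: chain intact
log[0]["event"]["action"] = "edited"  # tamper with the first record
print(verify(log))                    # False: tampering detected
```

Because appending and verifying are single hash operations per record, the trail keeps pace with machine-speed handoffs while remaining evidentially tamper-evident.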
Annex III high-risk system classification. Article owner appointment. Pre-deployment gate design. The August 2026 deadline is fixed. Regulatory-as-Code deployment for financial services, healthcare, and critical infrastructure organisations.
Replacing permission hierarchies with doctrine-based autonomy. Marquet's Inversion applied to AI-enabled organisations. For leadership teams ready to move at speed without losing governance integrity.
Track Record
The Framework
Non-negotiable. Recitable from memory. Designed to hold in conditions where there is no time to consult a policy document.
Background
Thirty years operating in environments where governance failures are not recoverable — first in the Royal Air Force, then across Financial Services, Telco, and Aerospace and Defence. The GEV framework is not theory. It is pattern recognition from high-consequence delivery at scale.
Credentials
Resources
Listen to the framework, request the CEO Memorandum for board distribution, or download the full document.
Audio walkthrough of Governing AI at Speed — the complete constitutional framework for AI governance in organisations operating at machine speed.
Listen on Speechify →
Three decisions required for board-level AI governance. The GEV framework distilled to an executive brief — formatted for board distribution and regulatory evidence files.
Request the memo →
Governing AI at Speed: A Constitutional Framework for the Artificialocene. The complete framework including all five laws, implementation phases, and Fail Operational architecture.
Request the framework →
Available for board briefings, conference keynotes, and executive workshops. The GEV framework translates directly to C-suite and regulated industry audiences.
Enquire about speaking →
Engage
Available for advisory engagements, board-level briefings, programme recovery, and keynote speaking. Based in Ripon, North Yorkshire. Operating across the UK and Europe.
If your organisation is deploying AI at pace and the governance architecture is not confirmed against the GEV threshold — that is the conversation to have now, not after the first incident.
Technology Partner
The governance framework does not stop at the whiteboard. Advisory engagements requiring the technology layer — multi-model pipeline governance, immutable audit infrastructure, EU AI Act compliance tooling, sovereign data handling — are delivered through MissionOpsAI Foundry.
Where the framework identifies the design requirement, Foundry is the working implementation. That distinction — between a governance document and a governing system — is what closes the gap at machine speed.
MissionOpsAI Foundry →