AI Governance Advisory

Governing AI
at Speed

Most organisations are approaching the threshold where automated systems outrun human decision cycles. The ones that survive will have installed the right architecture before that moment arrives.

Former RAF Tornado pilot. Programme Director at Visa, PwC, Kyndryl. Thirty years operating in high-consequence environments where governance failures are not recoverable.

£3bn
Divestment led at PwC
15yrs
Royal Air Force
$440M
Lost in 45 mins — Knight Capital
Author
Governing AI at Speed — A Constitutional Framework for the Artificialocene
Sectors
Financial Services · Aerospace & Defence · Telco · Critical Infrastructure
Clients
Visa · PwC · Kyndryl · BT Group · Leonardo · BAE Systems
Origin
Royal Air Force · University of Glasgow · First Class Honours

The Signal

Why This
Matters Now

Trust in fully autonomous AI agents fell from 43% to 27% in twelve months whilst investment increased. This is not a paradox. Organisations are getting close enough to see how these systems actually behave at speed. The governance infrastructure has not kept pace with the deployment velocity.

Knight Capital Group lost $440 million in 45 minutes in August 2012. An incorrectly deployed trading algorithm executed 4 million trades before any human could intervene. The firm was effectively destroyed before the end of the trading day. No governance process operates faster than machine-speed cascades.

Governance Escape Velocity is the threshold at which automated system dynamics outrun human decision cycles. Below it, governance works. Above it, only pre-installed architecture matters. Both are required. Most frameworks provide only one.

— Governing AI at Speed: A Constitutional Framework for the Artificialocene
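
To make the timescale gap concrete, here is a minimal sketch using the Knight Capital figures above; the ten-minute human decision cycle is an assumed value for illustration, not a measured one.

```python
# Illustrative timescale comparison using the Knight Capital figures quoted
# above; the human decision-cycle duration is an assumed value.
CASCADE_DURATION_S = 45 * 60          # 45 minutes from first order to shutdown
ORDERS_EXECUTED = 4_000_000           # approximately 4 million trades
HUMAN_DECISION_CYCLE_S = 10 * 60      # assumed: detect, escalate, decide, act

orders_per_second = ORDERS_EXECUTED / CASCADE_DURATION_S
orders_before_response = orders_per_second * HUMAN_DECISION_CYCLE_S

print(f"{orders_per_second:,.0f} orders per second")            # ~1,481
print(f"{orders_before_response:,.0f} orders executed before "
      "one deliberate human decision cycle completes")          # ~888,889
```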

Klarna cut 700 customer service roles. Quality dropped, institutional knowledge was lost, rehiring began. Forrester's 2026 data confirms the pattern: 55% of employers who made AI-driven layoffs already regret the decision. The root cause in every case is the same — AI deployed in human-shaped roles rather than AI-shaped holes.

Most organisations are already near the GEV threshold for their highest-velocity AI systems. The question is not whether a GEV event will occur. It is whether the circuit breakers will be in place when it does.

43%→27%
AI agent trust in 12 months — while investment increased
55%
Employers who made AI-driven layoffs already regret it — Forrester 2026
45 min
Knight Capital: $440M lost before any human could intervene
9 sec
Bagram cargo aircraft: time from rotation to point of no return, 2013

The Framework

Governance Escape
Velocity

Every governance mechanism requires time to operate. The GEV threshold is where that time runs out. The distinction is not between good governance and bad governance — it is between governance and architecture.

Below Threshold

Human Governance Works

Two-Challenge Rule. Escalation protocols. Risk committees. OODA cycles complete before the outcome is determined. Deliberate human response is possible.

Policy + Process

Governance
Escape
Velocity

The threshold where automated dynamics outrun human decision cycles.

Above Threshold

Only Architecture Matters

No governance mechanism operates faster than a machine-speed cascade. The automation is not failing. It is executing the design perfectly. That is the problem.

Circuit Breakers

Three Layers

The Architecture

LAYER 1 — DESIGN

Before the System Runs

Governs above GEV

Install circuit breakers on independent infrastructure. Define Response Horizons. Certify architecture against worst-case scenarios. The Strap Protocol: if the circuit breakers cannot pass the stress test, the system does not deploy.

Physics-as-Code constraint tiers operate deterministically — they cannot be overridden by policy or by humans. This layer is the only governance that works above GEV.

Physics-as-Code · Independent Infrastructure
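
A minimal sketch of what a deterministic constraint tier can look like in practice. The class name, tier, and limit values are illustrative assumptions, not the framework's published interface.

```python
# A sketch of a deterministic Physics-as-Code constraint tier. The class name,
# tier, and limit values are illustrative assumptions, not a published API.
from dataclasses import dataclass

@dataclass(frozen=True)   # frozen: limits cannot be mutated once installed
class HardConstraint:
    max_notional_per_order: float
    max_orders_per_second: int

    def permits(self, notional: float, observed_rate: int) -> bool:
        # Deterministic check with no override parameter by design: neither
        # policy nor a human operator can widen the limit while the system runs.
        return (notional <= self.max_notional_per_order
                and observed_rate <= self.max_orders_per_second)

TIER_0 = HardConstraint(max_notional_per_order=250_000, max_orders_per_second=50)

if not TIER_0.permits(notional=1_200_000, observed_rate=40):
    raise SystemExit("Circuit breaker: order rejected, pipeline halted")
```
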
LAYER 2 — GOVERNANCE

While Operating

Governs below GEV

Leadership by Intent replaces SAFe's permission hierarchy. Eleven constitutional articles. Named Authorising Officers discharge regulatory accountability at the pre-deployment gate — not at every subsequent decision.

Senior Managers are accountable for the quality of the authorisation. Legally defensible, regulatorily compliant, and operationally workable.

Leadership by Intent · AO Model
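
A minimal sketch of what a deployment certificate could capture at the pre-deployment gate. Every field name and value here is an illustrative assumption, not the AO model's prescribed schema.

```python
# A sketch of a deployment certificate discharged by a named Authorising
# Officer at the pre-deployment gate. Field names and values are illustrative.
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class DeploymentCertificate:
    system_id: str
    risk_tier: str                 # e.g. "Red Zone"
    authorising_officer: str       # the named, accountable individual
    strap_protocol_passed: bool    # worst-case stress test outcome
    review_horizon: date           # when the authorisation must be revisited

cert = DeploymentCertificate(
    system_id="credit-underwriting-v2",
    risk_tier="Red Zone",
    authorising_officer="named Senior Manager",
    strap_protocol_passed=True,
    review_horizon=date(2026, 6, 30),
)
# Accountability attaches to the quality of this authorisation at the gate,
# not to every subsequent machine-speed decision the system makes.
```
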
LAYER 3 — RECOVERY

When Something Fails

Transition management

Fail Operational, not Fail Safe. Isolate the failed component. Degrade gracefully to Minimum Operating Configuration. RAIM+1 redundancy. Cat 3 autoland equivalent for AI systems.

Air France 447 was recoverable. The crew could not recover it because their manual skills had atrophied. The organisation continues. No halts. Hold the controls.

Fail Operational · MOC
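
A minimal sketch of Fail Operational degradation to a Minimum Operating Configuration. The component names and the MOC set are illustrative assumptions, not a prescribed configuration.

```python
# A sketch of Fail Operational degradation to a Minimum Operating Configuration.
# Component names and the MOC set are illustrative assumptions.
ACTIVE = {"pricing_model", "fraud_model", "recommendation_model"}
MINIMUM_OPERATING_CONFIGURATION = {"pricing_model", "fraud_model"}

def isolate(failed: str, active: set[str]) -> set[str]:
    """Isolate the failed component and continue on the remaining set."""
    remaining = active - {failed}
    if not MINIMUM_OPERATING_CONFIGURATION <= remaining:
        # Only dropping below the MOC justifies a halt; anything above it
        # degrades gracefully and keeps operating.
        raise RuntimeError("Below Minimum Operating Configuration: halt and recover")
    return remaining

ACTIVE = isolate("recommendation_model", ACTIVE)   # degraded, still operational
```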

The Ask

Three Decisions
Required

The framework translates into three board-level decisions. Each is specific, testable, and sequenced. These determine whether your organisation is on the right side of the GEV threshold when the first incident occurs.

DECISION 01 — THE STRAP PROTOCOL

No Red Zone system goes live without a certified circuit breaker

Adopt the rule: no high-risk AI system deploys without a circuit breaker that operates above GEV on independent infrastructure. If the circuit breaker cannot pass a worst-case stress test, the system does not deploy. The problems are manageable before departure and unmanageable after rotation.
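
A minimal sketch of the Strap Protocol expressed as a deployment gate. The scenario names, fields, and trip-time budget are illustrative assumptions, not certified tooling.

```python
# A sketch of the Strap Protocol as a deployment gate. Scenario names, fields
# and the trip-time budget are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class StressTestResult:
    scenario: str
    breaker_tripped: bool
    time_to_trip_ms: float

def may_deploy(red_zone: bool, results: list[StressTestResult],
               trip_budget_ms: float = 50.0) -> bool:
    """A Red Zone system deploys only if every worst-case scenario trips the
    circuit breaker within budget on independent infrastructure."""
    if not red_zone:
        return True
    return bool(results) and all(
        r.breaker_tripped and r.time_to_trip_ms <= trip_budget_ms for r in results)

results = [StressTestResult("runaway order loop", True, 12.0),
           StressTestResult("stale market data feed", False, 0.0)]
assert may_deploy(red_zone=True, results=results) is False   # it does not deploy
```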

DECISION 02 — THE AO MODEL PILOT

Pilot the Authorising Officer model on one high-impact use case

AI Credit Underwriting is recommended. Test the pre-deployment gate, the deployment certificate, the Two-Challenge Rule, and the Fail Operational simulation against a live Red Zone system. Resolve the Senior Manager accountability question before it becomes a crisis.

DECISION 03 — THE SHAPED HOLE AUDIT

Commission an AI-Shaped Hole inventory

Stop asking what can be automated. Start asking which capabilities are missing because they require machine speed that no human team could provide at the required scale. That inventory is the correct starting point for AI deployment — and the answer to the workforce engagement problem.

Advisory Services

Where I Work

01 — DESIGN LAYER

AI Governance Architecture

Constitutional framework design for AI-enabled organisations. Circuit breaker installation. Physics-as-Code constraint tiers. Strap Protocol certification. Built before the system runs — not after the first incident.

02 — ACCOUNTABILITY

Board & Regulatory Advisory

Authorising Officer model translated for regulated industries. Senior Manager accountability mapped against the EU AI Act and FCA model risk guidance. Evidence that stands up to regulatory scrutiny — not just policy documents that do not.

03 — RECOVERY

Programme Recovery

AI transformation programmes that have lost alignment, stakeholder confidence, or delivery rhythm. Diagnostic, replan, restabilise. Track record includes recovering a failing global Salesforce programme and leading a £3bn divestment technology separation.

04 — PIPELINE

Multi-Model Governance

Authority gradient management across multi-model AI pipelines. Trust boundary definition at handoff points between autonomous components — where most implementations fall over. Immutable audit infrastructure at machine speed.
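
A minimal sketch of an immutable, hash-chained handoff record at a trust boundary. The component and field names are illustrative assumptions, not a production implementation.

```python
# A sketch of an append-only, hash-chained audit record at the handoff point
# between two autonomous components. Field and component names are illustrative.
import hashlib, json, time

def append_handoff(chain: list[dict], producer: str, consumer: str,
                   payload_digest: str) -> list[dict]:
    """Each record commits to the previous one, so the trail cannot be
    rewritten after the fact without breaking the chain."""
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    record = {
        "ts": time.time(),
        "producer": producer,
        "consumer": consumer,
        "payload_digest": payload_digest,
        "prev_hash": prev_hash,
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    return chain + [record]

chain: list[dict] = []
chain = append_handoff(chain, "retrieval_model", "underwriting_model",
                       "sha256:placeholder")   # digest value is a placeholder
```

Any later edit to an earlier record changes its hash and invalidates every record after it, which is what makes the trail defensible at machine speed.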

05 — COMPLIANCE

EU AI Act Readiness

Annex III high-risk system classification. Article owner appointment. Pre-deployment gate design. The August 2026 deadline is fixed. Regulatory-as-Code deployment for financial services, healthcare, and critical infrastructure organisations.

06 — LEADERSHIP

Leadership by Intent

Replacing permission hierarchies with doctrine-based autonomy. Marquet's Inversion applied to AI-enabled organisations. For leadership teams ready to move at speed without losing governance integrity.

Track Record

Delivered at Scale

£3bn
Global divestment technology separation led at PwC — multi-vendor, multi-jurisdiction, under pressure
15yrs
Royal Air Force — Tornado F3 operational aircrew, Qualified Flying Instructor, senior leadership roles
£37m
NPV on strategic data-centre integration for BT/EE critical applications — delivered on plan

The Framework

Five Laws of
AI Governance

Non-negotiable. Recitable from memory. Designed to hold in conditions where there is no time to consult a policy document.

I
Humans hold intent. AI may optimise execution but may never redefine purpose. The moment an AI system begins selecting its own objectives, governance has already failed.
II
Fluency is not accuracy. Treat AI confidence as a warning signal, not a quality signal. An AI that is wrong sounds identical to one that is right. This is the operational risk most organisations are not managing.
III
Governance has a physical limit. Below GEV, deliberate human response is possible. Above it, only pre-installed architecture matters. Design for the threshold — not for the average case.
IV
Authority gradients kill. Make them visible. Name the accountable person. Train the challenge. Tenerife killed 583 people in 1977. It was not mechanical failure. It was a gradient. The same failure mode exists in every AI pipeline with unclear escalation paths.
V
He Tangata. The people, the people, the people. When every institution has access to the same AI tools, the differentiator is the people. Protect their judgment. Maintain their skills. The organisation that wins will be the one whose people retained the ability to step in when the automation failed — and knew what to do. AI amplifies good people. It does not replace them.

Background

The Person Behind
the Framework

Thirty years operating in environments where governance failures are not recoverable — first in the Royal Air Force, then across Financial Services, Telco, and Aerospace and Defence. The GEV framework is not theory. It is pattern recognition from high-consequence delivery at scale.

2025–2026
Programme Director — Visa
End-to-end owner of the Spend Clarity cloud migration. Technical integrity, programme governance, architectural alignment, operational readiness. Seven TPMs across interdependent workstreams.
2024–2025
Director, Consult Partner — Kyndryl
Senior transformation lead. Cloud and infrastructure modernisation for Financial Services clients at enterprise scale.
2024
Head of Strategic Campaigns — Leonardo
Strategic campaigns and high-value bid cycles. Solution architecture in Aerospace and Defence.
2019–2023
Director — PwC
Global data platforms: £2m ARR, £5m+ upsell. £3bn divestment technology separation. Programme recovery. Multi-vendor integration.
2013–2019
Technology Leadership — BT Group
£3.7m annual benefits. EE data-centre integration: £37.3m NPV. Operating model redesign. Process and tooling improvement.
1995–2010
Commissioned Officer — Royal Air Force
Tornado F3 operational aircrew. Qualified Flying Instructor. 15 years across front-line, instructional and senior leadership roles. Foundation of everything that followed.

Credentials

B.Eng Aeronautical Engineering
First Class Honours — University of Glasgow, 1994
Qualified Flying Instructor
Royal Air Force — Pilatus PC-9 and Tornado F3
AWS & Azure
Cloud migration and platform delivery at enterprise scale
Financial Services
Visa, PwC, Kyndryl — regulated environment delivery, enterprise governance
Aerospace & Defence
Leonardo, BAE Systems, Royal Air Force
AI Governance Author
Governing AI at Speed: A Constitutional Framework for the Artificialocene (2026)

Resources

The Framework
in Detail

Listen to the framework, request the CEO Memorandum for board distribution, or download the full document.

Engage

Start the
Conversation

Available for advisory engagements, board-level briefings, programme recovery, and keynote speaking. Based in Ripon, North Yorkshire. Operating across the UK and Europe.

If your organisation is deploying AI at pace and the governance architecture is not confirmed against the GEV threshold — that is the conversation to have now, not after the first incident.

Location
Ripon, North Yorkshire, UK

Technology Partner

Foundry Platform

The governance framework does not stop at the whiteboard. Advisory engagements requiring the technology layer — multi-model pipeline governance, immutable audit infrastructure, EU AI Act compliance tooling, sovereign data handling — are delivered through MissionOpsAI Foundry.

Where the framework identifies the design requirement, Foundry is the working implementation. That distinction — between a governance document and a governing system — is what closes the gap at machine speed.

MissionOpsAI Foundry →