Most boards think they have an AI strategy. What they have is an AI conversation that has not yet become a governance structure.
The distinction matters. A strategy is a plan for what the executive team intends to do. A governance structure is the mechanism the board uses to evaluate whether the strategy is credible, track whether it is being executed responsibly, and intervene when it is not. Most AI governance frameworks that boards currently hold — the frameworks that arrived via legal teams, consulting advisors, and industry bodies — are compliance frameworks. They tell the board what to document. They do not tell the board how to govern.
I have spent 30 years on the commercial side of technology companies, co-founded a post-quantum cryptography advisory organisation, and built a 24-agent AI system that operates commercially. The governance problems that actually matter are not covered in any compliance framework I have read. They emerge when the system is running under real pressure, with real data, and real consequences. This is a post about where AI governance actually fails — not where the frameworks say it fails.
The presenting problem is never the real problem
In root cause analysis — the methodology I worked with across three companies acquired by IBM and EMC — there is a foundational principle: the failure you can see is almost never the failure that caused the problem. The server crashed. Why? The load balancer failed. Why? The monitoring system did not escalate the alert. Why? The monitoring system had not been updated to reflect the new infrastructure architecture deployed six months earlier. Why was it not updated? Because the change management process did not include monitoring updates in its scope.
The board sees the crash. The causal chain starts four steps earlier.
AI governance failures follow the same pattern. A board approves an AI deployment. Eighteen months later, the system is producing biased outputs, or the regulatory audit has found documentation gaps, or a key customer has raised concerns about how their data is being processed. The board asks the CEO what went wrong. The CEO describes a technical failure or a process gap. The actual failure is typically at the governance layer: the board approved a deployment without having the oversight structure in place to catch those problems before they became visible.
Why compliance frameworks do not solve this
Compliance frameworks — the EU AI Act, NIST AI Risk Management Framework, the OECD AI Principles — are designed to tell you what is required. They are useful documents. They are not governance documents.
The NIST AI RMF tells you to govern, map, measure, and manage AI risk. The EU AI Act tells you to implement a risk management system, ensure transparency, and maintain human oversight. The Institute of Directors’ guidance on AI governance tells you to understand your AI systems and ensure accountability.
All correct. None of them tells you how a board — specifically a board of non-technical directors in a mid-sized company without a Chief AI Officer — actually exercises oversight when the CTO is presenting a deployment proposal.
The gap is the board’s capacity to evaluate, not the board’s awareness that evaluation is required.
Where governance actually fails
I have identified four failure patterns that appear repeatedly. They are not in the frameworks.
Pattern 1: The board evaluates the team, not the proposal. The CTO is credible. The track record is good. The board approves the AI investment because it trusts the executive team, not because it has evaluated the proposal’s assumptions, risk architecture, or governance requirements. This is not laziness. It is a rational response to an information asymmetry that the board does not have the tools to close.
Pattern 2: Human oversight is a policy, not a mechanism. The board approves an AI deployment on the condition that “there will be human oversight.” What that means in practice — who reviews what, at what cadence, with what authority to intervene — is left to the executive team. Six months into the deployment, “human oversight” is a sentence in the risk register, not a functioning process.
Pattern 3: The board governance question arrives after the investment. The deployment is approved, the infrastructure is built, the contracts are signed. Then the board receives the governance framework — as an after-the-fact document that describes what has been built, not as a decision-making tool that shaped how it was built. Governance frameworks applied retrospectively are documentation exercises, not governance.
Pattern 4: Risk escalation has no board-level definition. The executive team manages AI risk. What constitutes a risk that must be escalated to the board? At what threshold? Defined by whom? Most AI governance structures I have reviewed leave this implicit. The result: the board receives no escalation until the problem is already visible in financial results or press coverage.
What governance looks like in practice
A board cannot make the technical decisions about AI deployment. It should not try. What it can do is approve the governance structure that ensures those decisions are made correctly — and then hold the executive team to that structure.
That means three things.
First: a defined scope for what AI decisions require board-level approval or visibility. Major new deployments? Systems that touch customer data in new ways? Deployments that may fall under EU AI Act high-risk classification? The board should define this explicitly, not leave it to the CTO’s judgement.
Second: a risk escalation threshold. What constitutes a board-level AI governance event? Define it in advance. “Significant harm,” “material regulatory risk,” “customer impact above a defined threshold” — whatever the criteria, they should be written down and approved by the board before the deployment is live.
Third: a review cadence. Not a one-time approval. A quarterly or twice-yearly board-level review of AI deployments against the governance criteria the board approved. Not a technical briefing — a governance briefing. Are the risk management mechanisms functioning? Is the human oversight policy being followed? Have there been events that met or approached the escalation threshold?
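To make the second and third mechanisms concrete: escalation criteria only work if they are written down unambiguously enough to be checked. The sketch below shows one illustrative way an executive team might encode a board-approved escalation policy as structured data. Everything in it is an assumption for illustration (the field names, triggers, thresholds, and deadlines are hypothetical placeholders), not drawn from any framework cited in this post.

```python
# Illustrative sketch only. One way an executive team might write down a
# board-approved AI escalation policy as structured data, so that "has an
# escalation event occurred?" becomes a checkable question rather than a
# judgement call. All triggers, thresholds, and deadlines below are
# hypothetical placeholders, not values from any framework or regulation.

from dataclasses import dataclass

@dataclass(frozen=True)
class EscalationRule:
    trigger: str        # the class of event the board has said it wants to see
    threshold: str      # the board-approved level at which escalation is mandatory
    escalate_to: str    # who must be notified
    deadline_days: int  # maximum days from detection to board notification

BOARD_ESCALATION_POLICY = [
    EscalationRule(
        trigger="regulatory finding",
        threshold="any finding counsel classifies as material",
        escalate_to="full board",
        deadline_days=5,
    ),
    EscalationRule(
        trigger="customer-facing harm",
        threshold="affects more customers than the board-set limit",
        escalate_to="risk committee chair",
        deadline_days=2,
    ),
    EscalationRule(
        trigger="human-oversight breach",
        threshold="any deployment operating outside its approved review cadence",
        escalate_to="risk committee",
        deadline_days=10,
    ),
]
```

A board does not need to read or approve code like this; the plain-language criteria are what the board approves. The value of writing the policy down this precisely is that every rule answers, in advance, the three questions the board should insist on: what triggers escalation, who is told, and by when — and the executive team can report against it mechanically at each review.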
None of this requires the board to understand how a large language model works. It requires the board to behave like a board.
The Board AI Governance Framework is a decision-making structure for boards overseeing AI adoption: what questions the board should be asking, what oversight mechanisms to implement, what risk thresholds to set, and how to evaluate whether the executive team’s AI strategy is credible or performative. It is built for the person in the room who needs the governance structure — not the person who needs to build the AI system.
GBP 350. PDF download. Includes a free 30-minute consultation.