The Five Signs a Board Is Not Ready to Govern AI — and What to Do About Each One

As an advisor, I have sat in a number of board meetings where an AI-related item was presented and approved in a way that left me concerned. Not because the directors were negligent; they were not. Not because the AI deployment was dangerous; in most cases it was not, at least not obviously.

What concerned me was the gap between what the board thought it was doing and what it was actually doing. The board thought it was governing an AI deployment. What it was actually doing was ratifying a management presentation — approving based on the quality of the presentation and the confidence it had in the CTO, not based on an evaluation of the governance quality of the proposal.

These are different things. Boards that cannot tell the difference are not ready to govern AI. Here are the five signs.



Sign 1: The board has not asked about EU AI Act classification

The EU AI Act’s Annex III high-risk AI system categories are specific and consequential. An AI system that falls under Annex III has compliance obligations — risk management, transparency, human oversight, documentation — that attach not just to the technical team but to the board’s oversight function.

A board that has approved AI deployments without asking “does this fall under Annex III, and if so, what are our compliance obligations?” has approved deployments without understanding the regulatory dimension of what it approved.

The sign is specific: if no board paper on an AI initiative in the last twelve months has included a written position on EU AI Act classification, the board has not been asking the question.

What to do: Add EU AI Act classification to the standard template for AI initiative proposals. Before any AI deployment reaches the board for approval, require a written position from legal counsel on whether the deployment falls under Annex III. “We do not believe this deployment is high-risk under Annex III because [specific reasons]” is an acceptable board paper. “We are monitoring the regulatory situation” is not.
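As a sketch of what “a written position, not a monitoring statement” can look like inside a proposal template, the fragment below encodes the classification as structured data and rejects the vague answer. The field names and the acceptance rule are illustrative assumptions, not legal criteria; the written position itself still has to come from counsel.

```python
# Hypothetical proposal-template field, for illustration only; the field names are
# assumed, and nothing here substitutes for a written position from legal counsel.
from dataclasses import dataclass

@dataclass
class EUAIActPosition:
    annex_iii_high_risk: bool   # counsel's conclusion on Annex III classification
    reasoning: str              # the specific reasons supporting that conclusion
    counsel_sign_off: str       # who provided the written position

def is_board_ready(position: EUAIActPosition) -> bool:
    """Acceptable only if it states a conclusion with specific, non-vague reasoning."""
    vague_answers = {"", "we are monitoring the regulatory situation"}
    return (
        position.reasoning.strip().lower() not in vague_answers
        and bool(position.counsel_sign_off.strip())
    )

position = EUAIActPosition(
    annex_iii_high_risk=False,
    reasoning="Internal demand forecasting; no Annex III use case applies",
    counsel_sign_off="External counsel",
)
print(is_board_ready(position))  # True: a stated position, not a monitoring statement
```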


Sign 2: The AI governance function is one person

A single individual — typically the CTO — who is personally across all AI deployments and who provides informal oversight through their direct involvement is better than nothing. But it is not a governance structure.

The test is simple: if that person left the company tomorrow, what would remain of the AI governance? If the answer is “we would need to reassess” or “the new CTO would need to get up to speed,” the governance is personal oversight masquerading as board-delegated governance. When the person goes, the governance goes with them.

The sign is specific: if the board’s AI governance assurance depends on its confidence in a specific individual rather than on its confidence in a documented, tested governance process, the governance structure is inadequate.

What to do: Document the AI governance structure such that it is independent of the individual currently executing it. Who is accountable for each governance function? Against what criteria? With what escalation path when the criteria are not met? The documentation test: could a new CTO, appointed next Monday, read the governance framework and understand what they are responsible for without a handover?
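One way to make the documentation test concrete is to treat the governance structure itself as a record that can be checked for completeness. The functions, roles and criteria below are hypothetical placeholders; the point is that each entry names a role, a criterion and an escalation path rather than a person.

```python
# Hypothetical governance functions; the roles and criteria are placeholders, not a template.
governance_functions = {
    "model risk review": {
        "accountable_role": "Head of Data Science",  # a role, not a named individual
        "criteria": "all production models reviewed quarterly against the model risk register",
        "escalation": "Audit & Risk Committee when a review is missed or criteria are not met",
    },
    "incident reporting": {
        "accountable_role": "Executive AI governance owner",
        "criteria": "incidents triaged against the board-approved materiality threshold",
        "escalation": "Board, next reporting cycle",
    },
}

def documentation_gaps(functions: dict) -> list[str]:
    """Flag any governance function missing an accountable role, criteria, or escalation path."""
    required = ("accountable_role", "criteria", "escalation")
    return [
        f"{name}: missing {field}"
        for name, spec in functions.items()
        for field in required
        if not spec.get(field)
    ]

print(documentation_gaps(governance_functions))  # an empty list is the handover-free test passing
```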


Sign 3: The board has never received an AI incident report

This is the most revealing sign, and it requires a moment’s explanation.

Every AI system in production produces unexpected outputs at some frequency. The outputs may be minor — a recommendation that does not match expected behaviour, a classification that triggers a manual review, an anomaly in the data quality log. Or they may be significant — a failure that has customer impact, a bias incident, a regulatory compliance gap.

If the board has never received an AI incident report, one of three things is true: the organisation has no AI in production (possible but unlikely), the AI systems in production have never produced any unexpected outputs (extremely unlikely), or the escalation mechanism between the AI operations team and the board does not exist.

The third option is the most common. And the absence of an escalation mechanism is not evidence that the AI systems are functioning perfectly. It is evidence that the board would not be told if they were not.

What to do: Establish a standing AI incident reporting mechanism with a defined materiality threshold. Below the threshold: logged and reviewed by the executive AI governance owner. At or above the threshold: escalated to the board in the next reporting cycle. The threshold should include at least: any incident with customer impact, any incident with regulatory relevance, and any incident revealing a model failure mode not previously identified.
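A minimal sketch of the routing logic this implies, assuming the three threshold conditions above; the field names and the example incidents are illustrative only, and the real threshold is whatever the board defines.

```python
# Illustrative sketch only: field names and threshold logic are assumptions,
# not a prescribed standard; the materiality threshold is set by the board.
from dataclasses import dataclass

@dataclass
class AIIncident:
    description: str
    customer_impact: bool = False
    regulatory_relevance: bool = False
    novel_failure_mode: bool = False  # a failure mode not previously identified

def requires_board_escalation(incident: AIIncident) -> bool:
    """True if the incident meets the materiality threshold described above."""
    return (
        incident.customer_impact
        or incident.regulatory_relevance
        or incident.novel_failure_mode
    )

incidents = [
    AIIncident("Recommendation mismatch caught by manual review"),
    AIIncident("Model declined applications from one postcode", regulatory_relevance=True),
]

for incident in incidents:
    route = (
        "escalate to board next reporting cycle"
        if requires_board_escalation(incident)
        else "log for review by the executive AI governance owner"
    )
    print(f"{incident.description}: {route}")
```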


Sign 4: AI literacy varies so much that governance depends on who is in the room

A board with one expert director and seven non-expert directors does not have a collective governance capability. It has one person doing the governance work in meetings where they are present and an approval function in meetings where they are not.

This is the AI literacy distribution problem. The board’s governance quality should not depend on which directors attend which meetings. If it does, the governance capability is concentrated in too few directors and needs to be redistributed across the board.

The sign: if the AI expert director was absent from the last two AI-relevant board discussions, what governance actually happened? If the honest answer is “we probably approved things we should have scrutinised more,” the literacy gap is a governance gap.

What to do: Structured AI governance training for all board members, not just the technically inclined ones. The goal is not uniform technical expertise — it is a minimum threshold of literacy across the whole board sufficient for each director to ask the governance questions: “Is there an Annex III assessment? Is the human oversight mechanism specified? Who is accountable and how do we know the mechanism is working?” These are not technical questions. They are governance questions that every director should be able to ask.


Sign 5: The board thinks AI governance is handled because a framework has been adopted

This is the most insidious sign because it involves something that looks like good governance: the board has adopted an AI governance framework.

The problem is that adopting a framework and implementing it are different things. A board that has adopted the NACD AI Governance Guidance, or the OECD AI Principles, or a consultancy-produced AI governance policy, and then filed it in the board paper archive, has done something: it has created a compliance artefact. It has not implemented governance.

Governance is the functioning of the oversight mechanisms the framework describes. The framework says: maintain a risk management system for AI. Governance means: the risk management system exists, has a named owner, operates on a defined schedule, produces outputs that are reviewed against defined criteria, and escalates to the board when those criteria are not met.

What to do: For each governance commitment in the adopted framework, confirm that it has a functioning implementation. Not “we have a policy on this” — “we have a specific mechanism, a named owner, a schedule, and a board assurance report.” If the mapping from policy to implementation has gaps, those gaps are the governance failures that will show up as incidents.
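A rough illustration of that policy-to-implementation mapping follows, with hypothetical commitments and fields; it is not the wording of the NACD or OECD material, only a way of making a gap visible as a missing entry rather than a vague sense of incompleteness.

```python
# Hypothetical commitments and implementation records; not the text of any framework.
framework_mapping = {
    "maintain a risk management system for AI": {
        "mechanism": "AI model risk register reviewed at quarterly risk committee",
        "owner": "Chief Risk Officer",
        "schedule": "quarterly",
        "assurance_report": "risk committee summary to the board",
    },
    "ensure human oversight of high-risk systems": {
        "mechanism": "",  # no functioning mechanism yet: this is a gap
        "owner": "",
        "schedule": "",
        "assurance_report": "",
    },
}

def implementation_gaps(mapping: dict) -> list[str]:
    """List commitments lacking a named mechanism, owner, schedule, or assurance report."""
    required = ("mechanism", "owner", "schedule", "assurance_report")
    return [
        commitment
        for commitment, implementation in mapping.items()
        if any(not implementation.get(field) for field in required)
    ]

print(implementation_gaps(framework_mapping))
# ['ensure human oversight of high-risk systems']: the policy exists, the mechanism does not
```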


The pattern behind all five signs

The pattern is the same across all five signs: the board is managing the appearance of AI governance — frameworks, approvals, briefings, a technically knowledgeable director — rather than the substance of it.

Substance means: specific, functional oversight mechanisms that would catch AI governance failures before they become incidents. Mechanisms that work when the AI expert is on holiday. Mechanisms that produce evidence they are functioning, not just evidence they exist.

The boards that govern AI well have the same frameworks and the same briefings as the boards that govern AI badly. The difference is that they have also built the mechanisms that make the frameworks real.


The AI Readiness Assessment gives boards a structured diagnostic for each of these five gaps — with specific diagnostic questions, a gap assessment, and a prioritised action list. The Board AI Governance Framework provides the governance structures that address each gap.

For independent advisory support on AI governance, contact Steven directly.

Steven Vaile

Board technology advisor and QSECDEF co-founder. Writes on AI governance, quantum security, and commercial strategy for boards and deep tech founders.