AI readiness for a board of directors is not the same thing as AI readiness for a company. The company’s AI readiness is about whether the organisation has the data infrastructure, technical capability, and process architecture to deploy AI effectively. The board’s AI readiness is about whether the governance body has the literacy, structures, and oversight mechanisms to govern AI deployments on the company’s behalf.
These are related but distinct questions, and most AI readiness frameworks I have seen conflate them. They assess the company’s readiness to use AI rather than the board’s readiness to govern it.
This is a guide to the second question: how to assess whether your board is actually ready to govern the AI decisions it is being asked to approve.
What board AI readiness is made of
Board AI readiness has three components, all of which need to be present before the board can function as an AI governance body rather than an AI approval mechanism.
Component 1: Collective AI literacy at an appropriate level.
“Appropriate level” is the important qualifier. The board does not need to understand how transformers work or how to fine-tune a model. It needs to understand:
- what categories of AI system create what categories of governance risk,
- what the EU AI Act’s key obligations are for boards,
- what meaningful human oversight looks like versus notional human oversight, and
- what questions distinguish a credible AI risk management plan from a performative one.
Collective AI literacy means the board as a body has this understanding, not that every director is individually fluent. A board with one director who has deep AI expertise and eight who have none has a collective literacy problem: the one expert cannot be in every meeting, cannot ask every question, and cannot represent the board’s governance function alone.
Component 2: A governance structure that works when the experts are not in the room.
This is the formal governance test: if the most AI-literate director on the board were absent from a meeting where an AI deployment proposal was presented, would the board still be able to apply appropriate governance to the decision?
If not — if the governance quality depends on individual knowledge rather than board process — the governance structure is not adequate. It is personal oversight masquerading as board governance. Personal oversight is better than nothing. It is not a board governance structure.
Component 3: Information flows that make AI governance visible to the board.
The board governs what it knows about. If the executive team’s AI deployments are not routinely reported to the board in a format that enables oversight — deployment inventory, incident log, regulatory compliance status, data quality summary — the board is not governing AI. It is receiving updates about AI.
The distinction matters. Oversight requires the ability to evaluate the information received against defined criteria and to take action when the information reveals a gap. “Receiving updates” is passive. Oversight is active.
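To make that distinction concrete, here is a minimal sketch of what one entry in a board-level AI deployment inventory could carry, expressed as a small data structure. The field names are illustrative assumptions on my part, not a schema the EU AI Act or any framework prescribes.

```python
from dataclasses import dataclass
from datetime import date

# A minimal sketch of one row in a board-level AI deployment inventory.
# Field names are illustrative assumptions, not a prescribed schema.
@dataclass
class AIDeploymentRecord:
    system_name: str                 # e.g. "CV screening assistant"
    executive_owner: str             # named executive accountable to the board
    annex_iii_category: str | None   # EU AI Act Annex III category, if any
    human_oversight_spec: str        # how a human can intervene, concretely
    open_incidents: int              # unexpected outputs logged this period
    compliance_status: str           # e.g. "compliant", "gap identified"
    data_quality_reviewed: date     # date of the last data quality summary
    last_board_review: date | None   # None means never reviewed by the board

def oversight_gaps(inventory: list[AIDeploymentRecord]) -> list[str]:
    """Flag high-risk systems the board has never reviewed."""
    return [
        r.system_name
        for r in inventory
        if r.annex_iii_category is not None and r.last_board_review is None
    ]
```

Whatever format a board actually adopts, the point is the same: every field maps to a question the board can evaluate against defined criteria, which is what turns a passive update into active oversight.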
The five diagnostic questions
These five questions, answered honestly, give a board a picture of where it actually stands on AI readiness. Not where it aspires to stand — where it stands.
Question 1: If the executive team presents an AI deployment proposal at the next board meeting, what criteria will the board use to evaluate it?
Not in principle — specifically. If the board cannot identify the specific criteria it would apply — which Annex III categories it would check against, what human oversight specification it would require, what escalation structure it would expect to see in the proposal — the board will approve on confidence rather than governance.
Question 2: What is the board’s current AI deployment inventory?
Can a director list the AI systems the company currently operates in production? If not, oversight is incomplete: the board cannot oversee deployments it does not know about.
Question 3: What happened the last time an AI system produced an unexpected output?
Either the board was informed and applied governance, or it was not informed because there was no escalation mechanism, or the unexpected output was not recognised as significant because there was no monitoring for unexpected outputs. The answer to this question reveals the actual state of the oversight mechanism more reliably than any self-assessment.
Question 4: What EU AI Act training have board members received, and when?
The answer “we received a legal briefing in Q3 2025” is a starting point, not a current governance position. The EU AI Act’s enforcement window opens August 2, 2026. Directors with personal liability under NIS2 for cybersecurity governance failures need AI Act literacy because the two frameworks interact. Training received twelve months ago needs to be refreshed against current guidance.
Question 5: Who in the executive team is personally accountable to the board for AI governance, and what is the reporting cadence?
“The CTO” is a delegation, not a governance structure. The answer needs to specify what the executive is accountable for (which aspects of AI governance, specifically), how it is reported, against what criteria, and on what schedule.
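The same specificity test can be made mechanical. As a hypothetical sketch only, the delegation could be captured in a structure the board can inspect, so that each of the four elements above is either filled in or visibly missing:

```python
from dataclasses import dataclass

# A hypothetical accountability spec for the executive owner of AI governance.
# Any proposed delegation can be tested against these four elements.
@dataclass
class AIGovernanceAccountability:
    owner: str                     # named executive, e.g. "CTO"
    accountable_for: list[str]     # which aspects of AI governance, specifically
    reporting_criteria: list[str]  # defined criteria reports are judged against
    cadence: str                   # e.g. "quarterly, as a standing board item"

delegation = AIGovernanceAccountability(
    owner="CTO",
    accountable_for=["deployment inventory", "incident escalation"],
    reporting_criteria=["Annex III classification status", "open incident count"],
    cadence="quarterly, as a standing board item",
)
```

A delegation that cannot populate all four fields is the “the CTO” answer in structured form: a name without a governance structure around it.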
Reading the results
A board that can answer all five questions with specificity is at a solid governance starting point. It may still have gaps in literacy, structure, or information flows — but it knows where it stands and what it is managing.
A board that struggles with two or more questions has an AI readiness gap that is material in the current regulatory environment. Not catastrophic — these are addressable gaps — but material. The EU AI Act’s August 2026 enforcement window means boards in scope for high-risk AI system obligations need to close these gaps before the deadline, not after it.
The most common profile I see in mid-sized company boards: strong on individual literacy (at least one director who is genuinely knowledgeable about AI), weak on collective governance (the knowledge lives in one person, not in board processes), and fragmented on information flows (AI is reported via the CTO’s general technology update rather than as a standing governance item).
This profile is not a failing board. It is a board that has not yet converted individual knowledge into collective governance architecture. The conversion is not complicated — it requires a governance decision about structures and reporting, not a technology implementation. The barrier is usually that nobody has framed it as a board governance decision rather than a management responsibility.
The next step
Once the diagnostic is complete, the path from AI readiness assessment to AI governance structure is a governance decision sequence:
- Confirm whether any current AI deployments fall under EU AI Act Annex III high-risk classification.
- Establish the board’s collective AI literacy baseline through structured director training.
- Adopt a board-level AI governance framework that defines approval criteria, oversight mechanisms, and escalation structures.
- Establish standing board reporting on AI governance, separate from the general technology update.
- Assign a named executive owner for AI governance reporting to the board, with a defined scope and quarterly reporting cadence.
This sequence is achievable before August 2026 for any board that decides to prioritise it now.
The AI Readiness Assessment is a structured self-assessment tool for boards: a session-length diagnostic that identifies governance gaps across the three readiness components and five diagnostic questions, and produces a prioritised action list the board can hand to the executive team. It is designed to be completed in a single board session, without external facilitation, and to produce a concrete output rather than a general assessment.
The Board AI Governance Framework provides the governance structures that the assessment will typically identify as needed — the decision criteria, oversight templates, and standing reporting structures that convert AI readiness into AI governance.
For independent advisory support on board AI readiness, contact Steven directly.