Imagine this: the CTO walks into the board meeting with a deck. The investment case is clear — the AI system will reduce processing time by 40%, cut operational costs, and position the company ahead of two named competitors who have already deployed similar tooling. The pilot data looks compelling. The team is experienced. The ask is a GBP 2 million capital allocation and board approval to move to production.
The board chair asks two questions: “What is the risk?” and “How confident is the team?” The CTO answers both. The board approves.
Eighteen months later, the system's outputs have generated three customer complaints and one regulatory enquiry, and an internal review has found that the monitoring process was not functioning as described in the original proposal.
I have seen this pattern across multiple sectors. The board did not fail by being ignorant. It failed by asking the wrong questions — and not knowing what the right ones were.
I write for board-level readers. If you are a Chief AI Officer looking for a technical evaluation framework, this is not that. This is for the non-executive director who sits in the room when the CTO presents the AI deployment proposal and needs to know which questions change the decision.
The questions that get asked
Most boards ask three categories of questions when approving AI investments:
Investment and return. What is the projected cost reduction or revenue uplift? What is the payback period? How does this compare to the alternatives?
Surface-level risk. What are the risks? What happens if it does not work? What is the fallback?
Team confidence. How sure is the team? Have they done this before? Who else has deployed this kind of system?
These are reasonable questions. They are also insufficient. They evaluate the financial proposition and the team’s confidence. They do not evaluate the governance architecture — and the governance architecture is where deployments fail.
The questions that never get asked
On oversight: “What constitutes a human reviewing this decision, and how long do they actually spend on each review?”
Most AI governance proposals describe “human in the loop” as an oversight mechanism. What this means in practice varies enormously. A human reviewer who checks AI-generated outputs for 45 seconds before approving them is not providing meaningful oversight of a system making consequential decisions about customers. The board should ask for the specific oversight process, the average time per decision, the training the reviewer receives, and what the reviewer is authorised to do when they disagree with the AI’s output.
On failure modes: “What happens to the customer when this system fails, and who decides when we stop the system?”
Every AI system will produce wrong outputs. The question is not whether it will fail — it is who is watching for failures, what the threshold for escalation is, and who has the authority to halt the system. The board should approve these escalation criteria before the deployment is live, not discover them after the first significant failure.
On data: “What data was the system trained on, and does any of it expose us to legal risk we have not yet discovered?”
Training data provenance is a live legal question. Copyright claims over AI training data and AI-generated outputs are already in the courts. If the system was trained on data the organisation does not have clear rights to, or on data that includes protected characteristics in a way that could produce discriminatory outputs, the legal exposure is the board's exposure.
On the EU AI Act: “Does this deployment fall under high-risk classification under Annex III, and if so, what are the specific compliance obligations the board needs to approve?”
This is not optional for companies operating in the EU or supplying into EU markets. If the deployment touches employment decisions, credit assessment, essential services, or biometric identification, the Annex III classification question has a specific answer that the board should have in writing before approving the investment.
On third-party systems: “If we are using a third-party AI model rather than building our own, what are the compliance obligations of the model provider, and how do we verify they are being met?”
General Purpose AI models carry their own obligations under the EU AI Act. If the company is building an application on top of a third-party model, the downstream compliance question matters for board-level liability: what does the model provider guarantee, and what obligations remain the company's own?
The one question that changes the most decisions
Of all the questions a board can ask about an AI deployment proposal, this is the one that reveals the most about the governance maturity of the proposal:
“If this system produces an output that causes material harm to a customer, what is the process — step by step, role by role — that responds to that harm, and who is responsible at each step?”
Most proposals cannot answer this question at the level of specificity it requires. Not because the team is incompetent, but because the incident response process for AI systems is genuinely hard to design well, and most organisations have not done it. If the CTO cannot walk the board through a specific, role-assigned incident response process in the room, the deployment is not ready for board approval.
This is not an argument against deploying AI. It is an argument for the board asking the question before the deployment is live rather than discovering the absence of an answer after the first incident.
What the board’s role actually is
The board is not responsible for designing the AI system. It is responsible for ensuring that the people who design and operate the system work within a governance structure that the board has approved, can audit, and can hold them to.
That requires the board to have defined: which AI decisions require board approval, what oversight mechanisms the executive team must implement, what the escalation threshold is, and what the review cadence is. None of these require technical expertise. All of them require the board to have asked, specifically, before the investment was approved.
The deployment proposal is the moment those questions need answers. Not the audit.
The Board AI Governance Framework provides a complete decision-making structure for boards overseeing AI adoption — the questions to ask, the oversight mechanisms to require, the risk thresholds to set, and the criteria for evaluating whether the executive team’s AI strategy is credible or performative.
GBP 350. PDF download. Includes a free 30-minute consultation.