The National Association of Corporate Directors (NACD) published its AI Governance guidance with a clear purpose: give boards a framework for overseeing AI adoption without requiring them to become AI engineers. It is a well-constructed document. It covers the right ground at the right level of abstraction for a board audience.
It also has a specific limitation that is not a criticism of the NACD but a structural reality: it was written for a broad audience, which means it has to be generic enough to apply to a FTSE 100 financial institution and a 200-person professional services firm simultaneously. The result is guidance that is accurate and insufficient — accurate in what it covers, insufficient for the specific decisions that the board of a mid-sized company actually needs to make.
I am not dismissing the NACD guidance. I have read it, I reference it, and I recommend it as a starting point. What I want to describe here is where the starting point ends — and what boards of mid-sized companies need to figure out for themselves once the framework has been read and filed.
What the NACD gets right
The NACD’s AI governance guidance correctly identifies three things that matter at board level.
First: AI governance is a board responsibility, not just an executive one. The NACD is explicit that boards cannot delegate AI governance entirely to the management team. The board must understand the AI strategy, oversee the risk management approach, and satisfy itself that appropriate controls are in place. This is the right framing, and it is frequently absent from how boards actually approach AI. Many boards treat AI briefings as information items rather than as matters requiring governance decisions. The NACD correctly positions AI oversight as a board accountability.
Second: AI literacy is a board composition question. The guidance recommends that boards assess their collective AI competence and address gaps through director education, advisory resources, or board composition changes. This is the right question to ask. Most mid-sized company boards have strong financial literacy, adequate legal literacy, and — if they are in the right sector — reasonable operational literacy. AI literacy is typically the gap. The NACD correctly identifies this as something boards should address structurally, not just through briefings.
Third: risk oversight for AI is different from risk oversight for traditional technology. The NACD acknowledges that AI introduces risk categories that are qualitatively different from traditional IT risk: model drift, data quality risk, explainability gaps, and regulatory exposure under emerging AI legislation. Boards need an oversight framework that addresses these categories, not just an extension of the existing IT risk framework.
On all three of these, the NACD is correct.
Where mid-sized company boards need to go further
The NACD guidance describes what good AI governance looks like at a level that a Fortune 500 company with a Chief AI Officer, a dedicated AI ethics committee, and a team of compliance specialists can operationalise without too much difficulty. For the board of a GBP 15m-80m company, the NACD guidance needs translation.
Translation 1: Which oversight mechanism is proportionate?
The NACD recommends board-level AI oversight but does not specify the governance architecture. A large company might establish an AI risk committee of the board. A mid-sized company probably cannot staff one. The governance question for smaller boards is: who, specifically, owns AI oversight at the board level, and what are the reporting cadence and escalation thresholds?
The answer is not the same for every company. A professional services firm deploying AI in client-facing work has different oversight requirements from a manufacturing company using AI for process optimisation. The NACD guidance does not help boards answer this question for their specific situation. They have to figure it out.
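To make that concrete, here is a minimal sketch, in illustrative Python, of what a proportionate oversight definition might record: a named board owner, a management counterpart, a reporting cadence, and explicit escalation triggers. Every name and value in it is hypothetical; this is one possible shape, not a structure the NACD prescribes.

```python
# A minimal sketch of a board-level AI oversight charter for a mid-sized
# company. All field names and values are hypothetical illustrations.
from dataclasses import dataclass, field

@dataclass
class AIOversightCharter:
    board_owner: str                  # the named director accountable for AI oversight
    management_owner: str             # the executive who reports to that director
    reporting_cadence: str            # how often AI oversight appears on the agenda
    escalation_triggers: list[str] = field(default_factory=list)

# A professional services firm with client-facing AI might look like this;
# a manufacturer using AI only for process optimisation could justify a
# lighter cadence and fewer triggers.
charter = AIOversightCharter(
    board_owner="Chair of the Audit & Risk Committee",
    management_owner="Chief Operating Officer",
    reporting_cadence="quarterly",
    escalation_triggers=[
        "any new AI deployment that influences client deliverables",
        "any AI incident affecting a customer, employee, or partner",
        "any deployment that may fall under EU AI Act Annex III",
    ],
)
```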
Translation 2: What does AI risk inventory look like at this scale?
The NACD recommends that boards understand the company’s AI risk landscape. At a large company, this is the output of a formal AI inventory process with dedicated resources. At a mid-sized company, the realistic starting point is: does the executive team have a list of the AI tools being used, where, by whom, and for what decisions?
I have asked this question in board contexts at companies that had a solid sense of their major AI deployments and no idea at all about the AI tools being used by individual business units — often purchased on credit cards rather than through formal procurement. The shadow AI problem is real and it is under-addressed in governance frameworks written for large organisations with formal procurement controls.
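A minimal sketch of what such a register might look like follows, again in illustrative Python with hypothetical field names and example entries. The value is not the code; it is the fields each entry forces the executive team to complete, including the procurement route, which is what surfaces shadow AI.

```python
# A minimal sketch of an AI register entry for a mid-sized company.
# Field names and example entries are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class AIRegisterEntry:
    tool: str                  # what the system is
    business_unit: str         # where it is used
    owner: str                 # who is accountable for it
    decisions_influenced: str  # decisions it makes or materially influences
    procurement_route: str     # "formal" or "credit card" (surfaces shadow AI)
    oversight: str             # "structural" (documented process) or "informal"

register = [
    AIRegisterEntry("CV screening assistant", "HR", "Head of People",
                    "shortlisting of job applicants", "credit card", "informal"),
    AIRegisterEntry("Demand forecasting model", "Operations", "COO",
                    "stock ordering levels", "formal", "structural"),
]

# The board-level question is whether this list exists and is complete;
# flagging informal or credit-card entries is a natural first review pass.
flagged = [e for e in register
           if e.oversight == "informal" or e.procurement_route == "credit card"]
for e in flagged:
    print(f"Review: {e.tool} ({e.business_unit}): {e.procurement_route}, {e.oversight}")
```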
Translation 3: How do you assess AI maturity without a dedicated AI team?
The NACD guidance recommends assessing the organisation’s AI maturity and identifying governance gaps. For a company with a Chief AI Officer and an AI team, this is a formal process with established methodologies. For a company where the “AI team” is three software developers and a data analyst, the maturity assessment has to be proportionate.
The specific maturity questions for mid-sized companies are simpler and more practical: Can we explain to a regulator how each of our AI deployments makes its decisions? Do we have a process for catching AI errors before they become consequential? Do we know where our AI systems’ training data came from and whether it has quality issues?
These are not questions the NACD guidance answers for you. They are the questions you need to take to your own executive team.
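As a sketch, those three questions can be held as a simple per-deployment checklist. The wording follows the paragraph above; the checklist structure itself is hypothetical and is not a formal maturity methodology.

```python
# The three maturity questions as a per-deployment checklist.
# Question wording follows the text above; the structure is illustrative.
MATURITY_QUESTIONS = [
    "Can we explain to a regulator how this deployment makes its decisions?",
    "Do we have a process for catching its errors before they become consequential?",
    "Do we know where its training data came from and whether it has quality issues?",
]

def maturity_gaps(deployment: str, answers: list[bool]) -> list[str]:
    """Return the questions this deployment cannot yet answer 'yes' to."""
    return [q for q, ok in zip(MATURITY_QUESTIONS, answers) if not ok]

# Example: a deployment that passes on explainability but fails the other two.
for gap in maturity_gaps("CV screening assistant", [True, False, False]):
    print("Gap:", gap)
```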
The EU AI Act dimension the NACD guidance cannot address
The NACD guidance is a US-originated document. It does not address the EU AI Act’s specific governance requirements, which apply to companies with EU operations regardless of where they are headquartered.
For mid-sized companies with EU operations, EU employees, or EU customers, the AI Act’s Annex III high-risk classification and the governance obligations attached to it are material board-level concerns that the NACD framework does not cover. The board’s governance structure needs to be designed to satisfy both the NACD’s general principles and the EU AI Act’s specific obligations.
This is not a criticism of the NACD. It is a recognition that no single governance framework can address all jurisdictional requirements simultaneously. But it is the specific gap most relevant to the boards of mid-sized UK and European companies reading the NACD guidance as their primary AI governance reference.
The practical starting point
For a board that has read the NACD guidance and wants to move from framework to action, the right sequence is:
First, commission a written answer from the executive team to three questions: What AI systems are we operating? Which of those AI deployments make or materially influence decisions that affect customers, employees, or partners? What oversight mechanism do we have for each of those deployments?
Second, review the answers as a board and identify where the oversight mechanisms are informal (dependent on one individual's attention) versus structural (a documented process with defined escalation paths). Informal oversight is a starting point, not a governance structure.
Third, for any deployment that falls into an Annex III high-risk category under the EU AI Act, commission a specific compliance assessment before the August 2026 application date (a first-pass screening sketch follows this list).
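As a sketch of that screening step: the labels below paraphrase the Annex III area headings, and the deployments are hypothetical. A keyword match is not a legal classification; its only job is to identify which register entries warrant a proper compliance assessment.

```python
# Illustrative first-pass screen against EU AI Act Annex III. The labels
# paraphrase Annex III area headings; actual classification is a legal
# assessment, not a lookup. This only flags entries that warrant one.
ANNEX_III_AREAS = {
    "biometrics",
    "critical infrastructure",
    "education and vocational training",
    "employment and worker management",
    "access to essential private and public services",
    "law enforcement",
    "migration, asylum and border control",
    "administration of justice and democratic processes",
}

deployments = [
    {"tool": "CV screening assistant", "area": "employment and worker management"},
    {"tool": "Demand forecasting model", "area": "internal operations"},
]

needs_assessment = [d for d in deployments if d["area"] in ANNEX_III_AREAS]
for d in needs_assessment:
    print(f"Commission compliance assessment: {d['tool']} ({d['area']})")
```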
The NACD framework is the right place to start. What you do after reading it is the question the framework cannot answer for your specific situation.
The Board AI Governance Framework is designed to take boards of mid-sized companies from framework reading to governance action — with the specific questions, oversight structures, and assessment criteria that translate general principles into decisions your board can actually make. It addresses both the NACD’s principles and the EU AI Act’s specific requirements.
For boards seeking independent advisory support on AI governance design, contact Steven directly.