If you are a founder presenting an AI strategy to your board for the first time, there is a high probability that your instinct about what to include in the presentation is wrong in a specific, correctable way.
The instinct is to present the technology: what the AI system does, how it works, what the technical architecture is, why it is better than the alternatives. The board receives this presentation and responds with a mixture of polite interest and vague approval, or asks questions that reveal they did not understand the key points, or defers the decision pending “further information” that nobody has clearly specified.
The presentation failed not because the technology is wrong but because the board was given the answer to the question “what did you build” rather than the answer to the question “what are you asking us to approve.”
These are different questions, and confusing them is the single most common error founders make in AI governance presentations to boards.
What a board is actually constituted to do
A board is a governance body. Its specific functions are to set strategic direction, provide oversight of management, manage risk on behalf of shareholders or stakeholders, ensure regulatory compliance, and approve decisions that are material enough to require board accountability rather than management delegation.
When a founder presents an AI strategy to the board, the board’s governance functions map onto specific questions — and these are the questions the presentation needs to answer, whether it was designed to or not.
Strategic direction question: Does this AI initiative align with the strategy we have approved? Does it change the strategic direction in ways the board needs to formally approve?
Oversight question: What oversight mechanism does the board have for this AI initiative once it is approved? Who reports to the board on its progress and health, how often, against what criteria?
Risk question: What are the material risks associated with this AI initiative? How have they been identified and assessed? What is the risk management structure?
Regulatory question: Does this AI initiative create any regulatory obligations? Specifically: does it touch the EU AI Act’s high-risk categories? Does it create new data processing obligations? Does it change the organisation’s NIS2 or DORA exposure?
Approval question: What specifically is the board being asked to approve — the concept, the budget, the deployment, the governance structure, all of the above? What does board approval authorise the management team to do?
A founder who answers all five of these questions in a 30-minute board presentation has given the board everything it needs to make a governance decision. A founder who explains the technical architecture, the competitive landscape, and the projected commercial benefits has given the board a briefing, not a governance document.
The oversight gap that boards notice even when they cannot name it
There is a specific gap in most AI strategy presentations that creates discomfort in board members who cannot quite articulate why they are not ready to approve.
The gap is the oversight architecture. The presentation describes what the AI system will do. It does not describe what the board's mechanism for overseeing it will be once it is deployed.
Boards have a fiduciary obligation that does not end with approval. When they approve a capital investment, they expect quarterly reporting against milestones. When they approve a new hire for a C-suite role, they establish a performance review mechanism. When they approve a significant commercial contract, they receive a summary at each board meeting.
AI deployments are at least as material as any of these decisions, and they carry additional risks — model drift, regulatory exposure, data quality issues — that traditional investments do not. But most AI strategy presentations do not propose an oversight architecture. They describe the deployment and assume the board will ask for ongoing reports if it wants them.
Boards do not want to have to ask. They want to see the oversight architecture as part of the approval request.
The fix is simple: include in your presentation a proposed board-level reporting structure. “Once deployed, the AI system will be reported to the board quarterly via a one-page update covering: deployment health, incident log, data quality summary, regulatory compliance status, and any decisions required from the board.” This takes sixty seconds to add to the presentation and closes the most common source of board hesitation.
The regulatory dimension most founders get wrong
The EU AI Act’s high-risk AI system categories under Annex III are not esoteric. They include AI systems used in employment and workforce management, credit scoring, essential public services, critical infrastructure, and biometric identification.
Many founders believe their AI system does not fall under these categories. Many are wrong — not because they are being dishonest, but because the category definitions are broader than the colloquial interpretation.
“Employment and workforce management” includes AI systems used in hiring, performance evaluation, promotion, and termination decisions. If your AI system helps assess candidate CVs, ranks sales team performance, or recommends resource allocation across teams, it may be in scope.
“Credit scoring and access to financial services” includes AI systems that assess creditworthiness or financial eligibility, not just traditional credit bureaux. If your AI system affects access to financial products or services, it may be in scope.
The governance implication for founders: before presenting an AI strategy to the board, get a written position from legal counsel on whether any proposed deployment falls under Annex III. Present that position paper alongside the AI strategy. The board cannot make an informed governance decision without this information, and the founder who provides it proactively demonstrates governance maturity. The founder who does not provide it will be asked for it — or, worse, will be given approval based on incomplete information that becomes a liability once the deployment is live.
The three slides that matter most
If a founder could only keep three slides from a board AI strategy presentation, they should keep:
Slide 1: What the board is being asked to approve. Not what the AI system does. What specific authorisation the board’s approval provides: the budget, the timeline, the deployment scope, and what is explicitly not included in this approval.
Slide 2: The risk and regulatory position. The top three material risks, how each is being managed, and the regulatory classification (EU AI Act Annex III assessment, data protection impact assessment if required, NIS2 implications if applicable).
Slide 3: The oversight architecture. Who reports to the board on this initiative, how often, in what format, and under what conditions a board decision is required. What the escalation path looks like if a problem emerges.
Everything else — the technical architecture, the competitive analysis, the commercial projections — is context that helps the board evaluate the three core slides. It belongs in an appendix that the board can read before the meeting, not in the presentation itself.
Why the board is not being obstructive
I want to address a frustration I see frequently in founders who have encountered board hesitation on AI strategy.
The board that asks for more information, defers the decision, or requests a revised presentation is not being obstructive. It is performing its governance function. The governance function exists to ensure that material decisions are made with adequate information and appropriate oversight structures in place. A board that approves everything the founder presents without appropriate scrutiny is not a supportive board. It is an inadequate one.
The founder’s goal is not to get the AI strategy approved. It is to get the AI strategy approved correctly — with the board’s governance accountability fully engaged, the oversight structures in place, and the regulatory position confirmed. That foundation makes the deployment stronger, not weaker. The board that asks good governance questions is the board the founder wants in the room when an AI incident happens.
For founders preparing AI strategy presentations for boards, and for boards seeking to establish the governance structures that allow them to evaluate those presentations responsibly, the Board AI Governance Framework provides the decision criteria and oversight templates that both sides of the table need.
For commercial strategy and pitch narrative development for deep tech and AI founders, contact Steven directly.