August 2, 2026 is not a soft deadline.
That is the date on which the EU AI Act’s requirements for high-risk AI systems (the systems listed in Annex III of the regulation) become enforceable. Boards of companies with AI deployments in scope need to have answered the governance question before that date, not on it. The gap between “we are aware of this” and “we can demonstrate compliance” is typically measured in months, not weeks.
I am writing this for board-level readers: non-executive directors, board chairs, and CEOs who are receiving AI Act briefings from their legal and technical teams and are trying to understand what the board’s specific obligation is, as distinct from the executive team’s. This is not a legal analysis. It is a governance one.
What the board actually needs to answer
Most AI Act compliance discussions start with the technical question: does this AI system fall under high-risk classification? That is the right starting point for the legal team. It is not the right starting point for the board.
The board’s question is different. It is: do we have a governance structure that can evaluate and oversee AI deployments — and can we demonstrate that to a regulator?
Those are two separate things, and most mid-sized company boards have neither.
The Act’s Article 9 requires a risk management system for high-risk AI. Article 13 requires transparency and documentation sufficient for human oversight. Article 14 requires human oversight measures that are built into the deployment, not bolted on after the fact. These are not software requirements. They are governance requirements. The board does not implement them — but the board is responsible for ensuring they are implemented.
If the executive team presents the board with a compliance plan that is entirely operational — documentation, testing, technical controls — without a board-level governance structure to oversee it, the plan is incomplete.
Which AI systems fall under Annex III?
The high-risk categories under Annex III are specific. They include:
- AI systems used in employment and workforce management (CV screening, performance evaluation, promotion or termination decisions)
- AI systems used in credit scoring and access to financial services
- AI systems used in education and vocational training, or in access to essential public services (healthcare, social benefits)
- AI systems used in critical infrastructure
- AI systems used in biometric identification or categorisation
- AI systems that make or assist consequential decisions in law enforcement or migration contexts
If your organisation operates in financial services, professional services that touch employment decisions, healthcare-adjacent services, or any regulated sector with significant data handling, the question “are any of our AI systems high-risk under Annex III?” is not rhetorical. It has a specific answer, and the board should know what it is.
There is also the General Purpose AI (GPAI) model layer. The GPAI provisions have been in force since August 2, 2025. If the company is using a third-party AI model in any customer-facing or consequential internal application, the question of whether that model carries its own compliance obligations matters for board-level liability.
The three things a board needs to have decided
First: classification. The board should have received a written briefing from the legal team on which, if any, of the organisation’s AI deployments fall under Annex III. “We are reviewing this” is not an acceptable board response by April 2026 if enforcement begins in August. The board chair should ask for a written position paper, not a verbal update, and should confirm it has been reviewed by external counsel with EU AI Act expertise.
Second: governance architecture. Has the board approved a governance structure for AI oversight? This does not need to be elaborate. At minimum: who is responsible for AI risk management, what is the escalation path to the board, what is the review cadence, and how does the board receive assurance that the risk management system is functioning? The answer “the CTO is responsible” is not a governance structure.
Third: documentation readiness. Can the organisation demonstrate to a regulator that it has the risk management, transparency, and human oversight mechanisms the Act requires? This is an audit question. If the organisation cannot produce that evidence for a competent authority within a reasonable timeframe, the governance structure is not fit for purpose.
What “good enough” looks like for a mid-sized company
The Act is explicitly tiered. A startup deploying a low-risk AI chatbot on its website has different obligations from a financial services firm using AI in credit underwriting. The governance structure for a mid-sized company does not need to replicate what a FTSE 100 company with a dedicated AI governance team implements.
What it does need is proportionality. The key phrase from the Act is “taking into account the generally acknowledged state of the art, including as reflected in relevant technical specifications and standards.” For a mid-sized company, this means: a documented risk management approach appropriate to the company’s AI maturity and exposure, reviewed by the board, with a clear owner in the executive team.
That is achievable before August. The question is whether the board has asked for it.
The governance failure pattern I see most often
It is not that boards are unaware of the EU AI Act. Most boards in the relevant sectors have received at least one briefing. The failure pattern is more specific: the board receives a compliance update from the legal or technical team, treats it as information rather than a governance question, and does not convert it into a decision about oversight structures.
The Act does not require the board to understand how the AI model works. It requires the board to have approved a governance structure that ensures someone with accountability to the board is overseeing how the AI model works, against documented criteria, on a regular schedule. That is a board governance decision, not a technical one.
August 2 is approximately 128 days away at the time of writing. The distance between “we know about this” and “we have a defensible governance structure” is real, and it does not close by itself.
What to do this week
Ask the following question at your next board meeting, and require a written response before the meeting after that:
“Which of our AI deployments fall under Annex III high-risk classification under the EU AI Act? For each one: what is the risk management structure, who owns it, and what does the board need to approve or confirm before August 2, 2026?”
If the executive team cannot answer that question with specificity within two weeks, the board’s first governance action is to commission the assessment that makes it answerable.
The EU AI Act Compliance Guide for Company Directors covers the high-risk classification criteria, the board’s oversight obligations under Articles 9, 13, and 14, what the August 2026 enforcement timeline means for budget and resource decisions, and what a proportionate compliance structure looks like for a mid-sized company. It is a director’s guide, not a legal commentary — written for the person in the room who needs to ask the right questions, not the person who needs to write the technical documentation.
For boards that need independent advisory support through the compliance process, contact Steven directly.