Two of the most significant technology investment programmes running inside mid-sized and large organisations right now are proceeding entirely independently of each other.
On one side: AI infrastructure investment. Organisations are building or procuring AI systems, deploying large language models in customer-facing and internal applications, constructing the data pipelines that feed AI decision engines, and investing in the cloud infrastructure to run them at scale.
On the other side: PQC migration planning. The NIST standards were finalised in August 2024. The NCSC has published migration guidance. NIS2 personal liability for cybersecurity governance failures is now in force. Boards are beginning to commission cryptographic inventories and migration timelines.
These two programmes are being run by different teams, with different governance structures, against different timelines, and without coordinated planning. The intersection between them is where the governance gap lives — and it is a gap that will become visible, and costly, if it is not addressed before both programmes are materially advanced.
Why AI infrastructure creates specific PQC considerations
AI systems at scale are data-intensive. They consume data at rates that are qualitatively different from traditional enterprise applications — training data, inference data, model state, feedback loops, output logs. All of this data flows across network infrastructure, is stored in cloud and on-premise data systems, and is protected by the same cryptographic standards that the PQC migration is planning to change.
But AI infrastructure has two additional specific characteristics that make the PQC intersection particularly important.
The first: AI model confidentiality. A trained AI model is intellectual property. The training process is typically expensive, time-consuming, and competitively sensitive. The trained weights, the fine-tuning data, and the evaluation results are all information that an adversary who can decrypt them in three to five years would still find valuable. If the AI model was trained in 2025 on proprietary data, encrypted with RSA or ECC, and that encryption is retrospectively breakable by a quantum adversary in 2030, the model’s confidentiality is compromised not in 2030 but now: under the Harvest Now, Decrypt Later (HNDL) threat, the ciphertext can be captured today and stored until it becomes decryptable.
For companies making significant AI infrastructure investments today, the PQC migration question is not just a data protection question. It is an IP protection question. The value of the AI system being built depends in part on whether its underlying model and data remain confidential for long enough to generate a return on the investment.
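The timing logic behind that judgment is often expressed as Mosca's inequality: if the data's required confidentiality lifetime plus the migration time exceeds the expected time to a cryptographically relevant quantum computer (CRQC), the exposure already exists today. A minimal sketch, where the year figures are placeholders for illustration, not estimates:

```python
def hndl_exposed(confidentiality_years: float,
                 migration_years: float,
                 years_to_crqc: float) -> bool:
    """Mosca's inequality: X + Y > Z means ciphertext harvested today
    outlives the encryption protecting it."""
    return confidentiality_years + migration_years > years_to_crqc

# Placeholder figures: model IP must stay confidential for 10 years,
# migration takes 4 years, a CRQC is assumed possible within 12 years.
print(hndl_exposed(10, 4, 12))  # True: the exposure window already exists
```

The point of the calculation is that the decision to start migrating is driven by the confidentiality lifetime of the asset, not by the arrival date of the quantum computer.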
The second: AI systems as cryptographic infrastructure. AI systems increasingly perform cryptographic functions: authentication, key management, data signing. An AI system used for identity verification is a cryptographic system. An AI system used to manage access control is embedded in the organisation’s key management architecture. These systems will need to be included in the cryptographic inventory.
The governance failure I see in most organisations is that the PQC migration programme focuses on the “traditional” cryptographic infrastructure — email, file encryption, VPN, PKI — and does not account for the AI systems being deployed in parallel, some of which have significant cryptographic components. When the PQC migration reaches those AI systems, it will discover that they were not designed with crypto-agility in mind.
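One way to make that scope gap visible is to record AI systems in the same cryptographic inventory as the traditional estate and filter for the combination that hurts: quantum-vulnerable algorithm, no crypto-agility. A minimal sketch in Python; the field names, asset names, and algorithm labels are illustrative assumptions, not a standard schema:

```python
from dataclasses import dataclass

# Algorithms presumed breakable by a large quantum computer (illustrative list).
QUANTUM_VULNERABLE = {"RSA-2048", "ECDSA-P256", "ECDH-P256"}

@dataclass
class CryptoInventoryEntry:
    system: str          # asset name
    category: str        # "traditional" or "ai"
    algorithm: str       # algorithm protecting the asset
    crypto_agile: bool   # can the algorithm be swapped via configuration?

inventory = [
    CryptoInventoryEntry("corporate-vpn", "traditional", "RSA-2048", True),
    CryptoInventoryEntry("llm-training-store", "ai", "ECDH-P256", False),
    CryptoInventoryEntry("identity-verification-model", "ai", "ECDSA-P256", False),
    CryptoInventoryEntry("internal-pki", "traditional", "ECDSA-P256", True),
]

def pqc_migration_backlog(entries):
    """AI systems on quantum-vulnerable algorithms with no agility path."""
    return [e.system for e in entries
            if e.category == "ai"
            and e.algorithm in QUANTUM_VULNERABLE
            and not e.crypto_agile]

print(pqc_migration_backlog(inventory))
```

The value is not in the code but in the discipline: if the AI systems never appear in the inventory, the filter can never flag them.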
The AI Act and quantum risk intersection
The EU AI Act and the EU’s cybersecurity regulatory framework (NIS2, the Cybersecurity Act) are designed to be read together. The Act’s Article 9 risk management requirements for high-risk AI systems and NIS2’s cybersecurity risk management obligations for essential entities overlap in scope for companies that are both deploying high-risk AI and operating essential services.
For the board, this creates a specific governance question: do our AI governance structures address quantum risk? Specifically:
- Is the cryptographic protection of AI model training data and inference data included in the PQC migration scope?
- Are the AI systems that perform authentication or access control functions included in the cryptographic inventory?
- Is the PQC migration timeline of the cloud providers hosting the AI infrastructure included in the third-party dependency assessment?
Most boards cannot answer yes to all three. This is not because the risks are being ignored — it is because the two programmes are being run by teams that do not have a joint governance structure.
The board-level governance fix
The fix is not complicated. It requires one board-level decision and one executive accountability change.
The board-level decision: confirm that the AI infrastructure investment programme and the PQC migration programme have a joint governance review at least twice per year, at which both programme leads present to the board simultaneously and address the intersection points explicitly.
The executive accountability change: assign one person — ideally the CISO, but in smaller organisations potentially the CTO — specific accountability for the intersection between AI security and quantum risk. This person’s reporting to the board covers both programmes and their interaction, rather than two separate programme updates that the board has to mentally integrate.
Neither of these requires a new governance structure built from scratch. Both are adjustments to structures that probably already exist: the AI governance reporting and the cybersecurity reporting. The change is making the intersection a standing agenda item rather than something that happens only when someone in one team notices the other team’s work.
The investment decision implication
For boards approving AI infrastructure investments, the quantum risk dimension adds one governance check to the standard investment approval: does this AI system’s cryptographic architecture include a PQC migration pathway?
This is not a veto on AI investment. It is a design requirement. AI systems designed with crypto-agility — with algorithm configuration that is changeable without a full system rebuild — are more expensive to build and less expensive to migrate. AI systems designed without this consideration are cheaper to build and more expensive to migrate, at a point in time when the migration will be less discretionary.
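Crypto-agility in this sense is largely a software architecture property: callers depend on an indirection layer, and the concrete algorithm is resolved from configuration, so swapping a classical scheme for a PQC one is a configuration change rather than a rebuild. A minimal sketch using stdlib HMAC algorithms as stand-ins; the registry names and the `AgileSigner` interface are illustrative assumptions, and a real system would register actual classical and PQC primitives as libraries mature:

```python
import hashlib
import hmac

# Registry of MAC algorithms. In a real system these entries would be
# encryption or signature schemes, including PQC ones such as ML-DSA.
ALGORITHMS = {
    "hmac-sha256": lambda key, msg: hmac.new(key, msg, hashlib.sha256).digest(),
    "hmac-sha3-256": lambda key, msg: hmac.new(key, msg, hashlib.sha3_256).digest(),
}

class AgileSigner:
    """Callers depend on this interface, never on a specific algorithm."""

    def __init__(self, config: dict):
        self.algorithm = config["mac_algorithm"]  # resolved from configuration

    def tag(self, key: bytes, message: bytes) -> bytes:
        return ALGORITHMS[self.algorithm](key, message)

# Migrating the algorithm is a configuration change, not a system rebuild:
old = AgileSigner({"mac_algorithm": "hmac-sha256"})
new = AgileSigner({"mac_algorithm": "hmac-sha3-256"})
key, msg = b"k" * 32, b"model weights manifest"
assert old.tag(key, msg) != new.tag(key, msg)  # different primitive, same caller code
```

The design choice the board is paying for is the indirection layer: it costs a little at build time and removes the need to find and rewrite every call site at migration time.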
The board’s governance role in AI investment approval includes confirming that the cryptographic architecture of proposed AI systems is PQC-ready. Most boards do not currently ask this question in investment approval processes. Adding it costs almost nothing and addresses a material risk.
The Quantum Risk: What Directors Need to Know covers the intersection of AI infrastructure risk and post-quantum cryptography in detail, including the HNDL threat to AI IP, the crypto-agility requirements for AI systems, and the combined governance structure for boards managing both risks. The Board AI Governance Framework provides the AI oversight structures that integrate with the quantum risk governance picture.
For independent advisory support on the combined AI governance and quantum risk picture, visit Quantum Security Defence or contact Steven directly.