Panic Is Optional: Governance Lessons From the Navy, Startups, and Quantum Computing

I joined the Royal Navy at sixteen. Shoes too tight, heart in my mouth, set to secure a qualification in boot polishing and extreme ironing. What I actually got was an education in how functioning institutions govern themselves under pressure — which turned out to be considerably more useful than the boot polishing.

The lesson was not explicit. Nobody sat me down and said “here is the philosophy of institutional governance.” It was delivered through repetition, through drill, and through the occasional incident that reminded you, unambiguously, what happens when the governance structure fails.

The principle, as I have come to understand it after 30 years of applying it in technology companies, a 21-property hospitality business, and a post-quantum cryptography advisory firm, is simple enough to fit on a Post-it note:

Panic is optional. Response is mandatory.

What that means in practice is rather harder to learn than the sentence suggests.


What the Navy taught me about failure modes

I was not a combat engineer. I was the poor bastard running behind the commandos with a toolkit, spare radio batteries, and a rifle I was not entirely sure I wanted to use. The engineering in the Navy context was infrastructure engineering: communication systems, power systems, the technical architecture that keeps a vessel functioning when something goes wrong.

What becomes clear very quickly in that environment is that ships do not sink because one thing breaks. They sink because one thing breaks, the system designed to catch that failure does not function as designed, and the people operating that system do not understand it well enough to compensate manually when it fails.

Every catastrophic failure is a causal chain. The presenting failure is almost never the root cause. The root cause is typically in the governance layer — the oversight mechanism, the maintenance schedule, the training requirement, the escalation process — that was supposed to prevent the presenting failure from becoming consequential.
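To make the shape of that claim concrete, here is a minimal sketch (illustrative only, not any real incident system): model each failure as a record pointing at its cause, and walk the chain back from the visible symptom.

```python
# A minimal sketch of causal-chain traversal. The failures and layers
# below are invented for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Failure:
    description: str
    layer: str                       # e.g. "equipment", "process", "governance"
    caused_by: Optional["Failure"] = None

def root_cause(presenting: Failure) -> Failure:
    """Follow caused_by links until the chain ends."""
    node = presenting
    while node.caused_by is not None:
        node = node.caused_by
    return node

# Illustrative chain: the visible failure sits two layers above its cause.
missed_audit = Failure("oversight review never scheduled", "governance")
stale_schedule = Failure("maintenance schedule not updated", "process", missed_audit)
pump_failure = Failure("bilge pump seized", "equipment", stale_schedule)

print(root_cause(pump_failure).description)
# -> oversight review never scheduled
```

The point of the sketch is the last line: the traversal terminates in the governance layer, not at the seized pump.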

I spent subsequent years at companies that built causal analysis and root cause analysis software precisely because this principle is universal across complex systems. RiverSoft, then SMARTS, then Voyence — all in the root cause analysis space, all acquired by IBM or EMC, all built on the same insight: the failure you can see is not the failure that caused the problem.

When I now advise boards on AI governance, I am applying the same analytical framework. Not because AI and telecommunications networks are technically similar, but because complex systems fail in the same ways regardless of domain. The governance structure that prevents catastrophic failure in a complex network is structurally similar to the governance structure that prevents catastrophic failure in an AI deployment. The principles transfer.


What Mad Monkey taught me about scale and governance

In 2011, I invested my last $25,000 as a deposit on a backpacker hostel in Phnom Penh, Cambodia. By the time I stepped back from the business, it had grown to 21 properties, 21 restaurants, 20 bars, and approximately 1,000 employees across 7 countries. We were the market leader in Southeast Asian backpacker accommodation.

The governance lesson from that experience is one that most business school curricula do not teach, because it is uncomfortable.

Governance does not scale by itself. The governance structure that works for 2 properties fails at 10. The structure that works at 10 fails at 20. At each scale transition, you are not improving the existing structure — you are redesigning it from scratch, for a complexity level the previous structure was never built to handle.

The specific failure mode I saw most often — and experienced personally at Mad Monkey during several of those transitions — is what I now call the founder’s governance illusion: the belief that because you understand every part of the business at the current scale, your informal oversight is sufficient governance. It is not. Informal oversight is a personal skill. Governance is a system. When the founder is on holiday, or sick, or distracted by a new property opening, informal oversight disappears. Governance does not.

The transition from informal oversight to formal governance is one of the hardest decisions a founder makes, because it involves giving up something that feels like control and replacing it with something that feels like bureaucracy. What you are actually giving up is the illusion of control. What you are getting is the reality of it.

I apply this directly to AI governance advice. Many of the boards I work with have informal AI oversight — a CTO who is personally across all the company’s AI deployments, who attends every relevant meeting, who would catch a problem if it emerged. That is informal oversight. It is better than nothing. It is not governance. The CTO will eventually move on, or take a holiday, or have too many priorities to be across everything. The governance structure is what continues to function when the informal expert is not in the room.


What quantum computing taught me about pace

When I first encountered the quantum security space seriously, around 2021, my initial reaction was the one that most technically adjacent non-specialists have: “this is important for people in national laboratories, not for boards of mid-sized companies.”

Where I had been highly sceptical at the outset, a few months into the subject I had, and I use the phrase advisedly, quaffed the Quantum Kool-Aid. Not because I became a quantum physicist; I am not, and I write for business readers, with the relevant technical papers linked for those who want the science. I was converted because I understood what the governance-relevant facts actually were.

The governance-relevant fact about quantum computing is not the technical timeline for when cryptographically relevant quantum computers will exist. It is the HNDL (“harvest now, decrypt later”) threat: nation-state actors collecting encrypted data today, right now, with the intention of decrypting it once the capability exists. The relevant threat horizon is not when quantum computers arrive; it is how long your most sensitive data needs to remain confidential.
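One way to make that horizon concrete is the back-of-the-envelope arithmetic often associated with Michele Mosca: if the shelf life of the data plus the time a migration takes exceeds the time until a capable quantum computer exists, data encrypted today is already exposed. A minimal sketch, with illustrative numbers rather than predictions:

```python
# Threat-horizon arithmetic (Mosca's inequality). All numbers below
# are illustrative assumptions, not forecasts.
data_shelf_life_years = 15   # how long this data must stay confidential
migration_years = 5          # how long the cryptographic migration takes
years_until_crqc = 12        # assumed arrival of a cryptographically
                             # relevant quantum computer

# If shelf life + migration time exceeds the time until a capable
# machine exists, data harvested today will still be sensitive when
# it becomes decryptable.
if data_shelf_life_years + migration_years > years_until_crqc:
    print("Exposed: the migration is already late.")
else:
    print("Inside the window, for now.")
```

Notice that with these numbers the organisation is exposed even though the quantum computer is assumed to be more than a decade away. That is the whole governance point.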

The lesson about pace: the quantum transition is not an event. It is a migration. The organisations that treat it as an event — “we will address this when quantum computers become capable” — will be late. The organisations that treat it as a migration — “we have a rolling programme to assess and upgrade our cryptographic infrastructure” — will be prepared.
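What a “rolling programme” looks like at ground level is mostly inventory work. Here is a minimal sketch of one such step, assuming Python, the widely used `cryptography` package, and an illustrative certificate directory:

```python
# A minimal sketch of one step in a rolling cryptographic inventory:
# flag certificates whose public-key algorithms are quantum-vulnerable.
# The ./certs path is illustrative.
from pathlib import Path
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import rsa, ec

# RSA and elliptic-curve keys are the ones broken by Shor's algorithm.
QUANTUM_VULNERABLE = (rsa.RSAPublicKey, ec.EllipticCurvePublicKey)

def flag_certificates(cert_dir: str) -> list[str]:
    """Return the names of certificates in cert_dir that rely on RSA or ECC."""
    flagged = []
    for pem in sorted(Path(cert_dir).glob("*.pem")):
        cert = x509.load_pem_x509_certificate(pem.read_bytes())
        if isinstance(cert.public_key(), QUANTUM_VULNERABLE):
            flagged.append(pem.name)
    return flagged

if __name__ == "__main__":
    for name in flag_certificates("./certs"):
        print(f"Quantum-vulnerable key: {name}")
```

The real work is everything around a loop like this: knowing where the certificates live, who owns each system, and what the upgrade path is once something is flagged. That is what makes it a programme rather than a script.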

I have seen this pattern before. In the early 2000s, the internet was described as a “future” phenomenon by organisations that were already using it. In 2010, mobile was described as a “future” channel by companies that were already running their businesses on smartphones. Every major technology transition contains a period during which the transition is visibly happening and is still being described as “future”. That period is the governance window. The organisations that use it well are ready when the transition completes. The ones still describing it as “future” are not.

The board’s governance question on quantum is not “When will this affect us?” It is “What are we doing now to ensure we are prepared before it affects us?”


The thread connecting all three

Navy, Mad Monkey, quantum. Three environments with different risk profiles, different timescales, different technical contexts. The thread connecting them, and the thing I find myself returning to most often when advising boards, is this:

Governance is not what you put in place when something goes wrong. Governance is what prevents things from going wrong in the first place. And when something does go wrong despite the governance (which it will), governance is what ensures the failure is contained, understood causally, and corrected at the right level rather than patched at the symptom level.

The boards that govern technology well are not the ones with the most elaborate frameworks. They are the ones where the directors have internalised the difference between a problem and the governance failure that enabled the problem. Where the question “why did our governance not catch this” is as important as the question “what happened.” Where the response to an AI incident is not just a remediation plan but a causal analysis of the oversight structure.

Panic is optional. Understanding the causal chain is not.


This post is an adapted version of an article published on LinkedIn in 2024. The governance principles described here — causal analysis applied to AI governance, formal oversight vs informal oversight, the pace problem in technology transitions — are the subject of several of the posts on this site. Start with Why Most AI Governance Frameworks Are Compliance Exercises, Not Governance Ones.

For boards seeking independent advisory support on AI governance and technology transitions, contact Steven directly.

Steven Vaile

Board technology advisor and QSECDEF co-founder. Writes on AI governance, quantum security, and commercial strategy for boards and deep tech founders.