Forward Projection vs Regression: How Causal Reasoning Changes Board Decisions

There are two directions from which you can apply causal reasoning. You can start from an outcome and work backwards to identify what caused it. Or you can start from a current condition and project forward to identify what it will likely cause.

The first is regression. The second is forward projection. Both are legitimate modes of causal analysis. They answer different questions and they require different inputs. Understanding the distinction makes the difference between a board that governs events and a board that governs conditions.

Most governance frameworks are designed primarily for regression. A failure occurs. The board investigates. The investigation works backwards through the causal chain to identify the root cause. The remediation addresses the root cause. This is useful governance. It is not complete governance. It operates after the failure has occurred.

Forward projection asks the harder question: given the conditions that currently exist — the systems we have deployed, the governance structures we have approved, the decisions we are about to make — what will likely occur? If we deploy this AI system, what failure modes are probable given its design, its training context, and its deployment environment? If we have not done this type of analysis before approving an AI deployment, we are governing retrospectively by design.


What regression analysis actually does

When a board receives an incident report, the investigation it triggers is, whether explicitly framed this way or not, a regression analysis. The observed outcome is the incident. The analysis works backwards.

Why did the AI system produce the discriminatory output? Because it was trained on data that contained historical bias. Why was the biased data used? Because the data governance process did not include a bias review at the data selection stage. Why was there no bias review at data selection? Because the governance framework approved for this deployment was written before bias review was a standard step in the deployment checklist.

That is a regression chain. Each step answers “why did the previous step occur?” The chain terminates at the structural condition — the governance framework design — that was the necessary antecedent of everything that followed.
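The shape of that chain is simple enough to write down. The sketch below is illustrative only, with condition labels paraphrased from the example above; it shows regression as a backward walk over linked conditions, terminating at the structural root.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CausalStep:
    """One link in a regression chain: a condition and its antecedent."""
    condition: str
    antecedent: Optional["CausalStep"] = None  # None marks the structural root

def regress(incident: CausalStep) -> list[str]:
    """Walk backwards from the observed incident until the chain terminates."""
    chain, step = [], incident
    while step is not None:
        chain.append(step.condition)
        step = step.antecedent
    return chain

# Labels paraphrased from the credit-bias example in the text
root = CausalStep("governance framework predates standard bias review")
no_review = CausalStep("no bias review at data selection", antecedent=root)
biased_data = CausalStep("training data contained historical bias", antecedent=no_review)
incident = CausalStep("discriminatory output observed", antecedent=biased_data)

print(" <- because: ".join(regress(incident)))
```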

Regression analysis is powerful for understanding specific incidents. Three structural limits reduce its value as a complete governance approach. First, it is inherently retrospective — it only applies after the failure has occurred. Second, the chain terminates when the analysis stops, not necessarily when the root cause is reached; many governance investigations stop two steps short of the structural condition. Third, regression analysis is specific to the incident analysed — it does not automatically generalise to identify other failure modes from the same structural condition.


What forward projection does differently

Forward projection starts not from an incident but from a proposed decision or a current condition, and traces forward through the causal chain to the likely outcomes.

The questions it asks are: if we approve this AI deployment as proposed, what failure modes are embedded in its design? If those failure modes materialise, what is the sequence of conditions that will be required to catch them before they become consequential? Are those conditions currently in place? If not, what is missing?

This is a fundamentally different governance posture. It does not wait for the failure to occur in order to understand the causal chain. It attempts to identify the causal chain before the failure occurs.
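Written the same way, forward projection runs in the opposite direction and, unlike regression, it branches: a single proposed condition can open several probable failure paths. A minimal sketch with invented labels:

```python
from dataclasses import dataclass, field

@dataclass
class Condition:
    """A deployment condition and the failure modes it makes probable."""
    name: str
    effects: list["Condition"] = field(default_factory=list)

def project(start: Condition, path: tuple[str, ...] = ()) -> list[tuple[str, ...]]:
    """Enumerate every causal path from a proposed condition to a terminal outcome."""
    path = path + (start.name,)
    if not start.effects:
        return [path]
    paths: list[tuple[str, ...]] = []
    for effect in start.effects:
        paths.extend(project(effect, path))
    return paths

# Invented conditions, purely for illustration
aggregate_bias = Condition("systematic bias visible only in aggregate")
missed_review = Condition("per-case review never triggers", [aggregate_bias])
deploy = Condition("deploy with outlier-calibrated review",
                   [missed_review, Condition("data drift degrades accuracy")])

for p in project(deploy):
    print(" -> ".join(p))
```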

I encountered this distinction early in my career in telecoms and military network environments — systems where failures were expensive and, in some cases, irreversible. The value proposition of root cause analysis software was not only that it helped you understand what had broken. It was that it helped you model what would break under different conditions before you changed the configuration. Forward-looking fault analysis. Predictive causal reasoning applied to network architecture decisions.

The transfer to AI governance is direct. When a board is asked to approve an AI deployment, the governance posture shifts from “do we trust the CTO’s assessment of the risk?” to “here is the failure chain we have modelled for this deployment — here is the point at which human review catches the failure before it becomes consequential, here is the condition under which the review mechanism fails, and here is what we need to have confirmed before approving deployment.”


Applying forward projection to an AI deployment decision

Let me make this specific rather than abstract.

A board is considering approving an AI system for use in credit assessment. The proposal includes a risk management section that identifies bias, data quality, and regulatory compliance as the key risk areas, with mitigations for each.

A regression-based governance posture reviews the proposal, approves it, and waits to see whether the risk mitigations function as described.

A forward projection posture asks: given this system’s design, its training data, and the deployment context, what is the most probable failure path? Not the most dramatic possible failure — the most probable one given the actual conditions of this deployment.

A forward projection analysis might conclude: the most probable failure path is that the model performs accurately for the majority of applications but produces systematically different outcomes for applications from demographic segments underrepresented in the training data. The current review mechanism triggers on individual cases flagged by an outlier detection algorithm. The algorithm is calibrated against the majority population. Applications from underrepresented segments may fall within the algorithm’s normal range and never trigger individual review, whilst exhibiting a statistical pattern that only becomes visible in aggregate.
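The mechanism is easy to demonstrate in a few lines. The simulation below is entirely hypothetical (the score distributions, the three-sigma outlier bounds, and the approval threshold are invented for illustration), but it shows how a per-case check calibrated on the majority population can flag almost nothing whilst the aggregate approval rates diverge.

```python
import random

random.seed(0)

# Hypothetical score distributions: the underrepresented segment is shifted
# down slightly, but every individual score stays well inside the majority's range.
majority = [random.gauss(650, 50) for _ in range(10_000)]
minority = [random.gauss(630, 50) for _ in range(500)]

LOWER, UPPER = 650 - 3 * 50, 650 + 3 * 50   # outlier bounds calibrated on the majority
APPROVE_AT = 640                             # illustrative approval threshold

def flagged_for_review(score: float) -> bool:
    """Per-case trigger: only scores outside the majority's normal range."""
    return not (LOWER <= score <= UPPER)

def approval_rate(scores: list[float]) -> float:
    return sum(s >= APPROVE_AT for s in scores) / len(scores)

print(f"minority cases flagged individually: {sum(map(flagged_for_review, minority))}/500")
print(f"approval rate, majority: {approval_rate(majority):.1%}")
print(f"approval rate, minority: {approval_rate(minority):.1%}")
```

Running this flags a handful of minority cases for individual review while the aggregate approval rates sit roughly fifteen percentage points apart: exactly the failure path described above.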

That is a specific, testable causal chain. The board can now ask: do we have aggregate monitoring, not just individual case monitoring? What is the minimum period before the aggregate pattern would become statistically visible? Who is responsible for reviewing the aggregate data, at what frequency, and what is the escalation path if an anomalous pattern is found?
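The second of those questions, the minimum period before the aggregate pattern becomes statistically visible, has a standard answer: a power calculation. A sketch using the normal approximation for a two-proportion test, with assumed approval rates and monthly volume:

```python
from math import sqrt
from statistics import NormalDist

def n_per_group(p1: float, p2: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Applications needed per group to detect a gap between two approval
    rates (two-sided two-proportion z-test, normal approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return int(numerator / (p1 - p2) ** 2) + 1

# Assumed figures: 58% vs 50% approval, 40 applications per month from the segment.
n = n_per_group(0.58, 0.50)
print(f"{n} applications per group -> roughly {n / 40:.0f} months of exposure "
      f"before the gap is statistically visible")
```

On these assumed figures the answer is over a year. If the answer comes back in years rather than weeks, that figure is itself a board-level finding.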

These questions are answerable before deployment. If the answers are unsatisfactory, the deployment approval can require specific conditions to be met rather than waiting for the failure to occur in production.


The governance value of counterfactual forward projection

There is a third mode that combines elements of both regression and forward projection: counterfactual forward projection. It asks: if we deploy under the conditions we are proposing, and the failure occurs that we have identified as probable, would our governance structure catch it before it becomes consequential?

This is the governance test question. It is asked forward in time, against a hypothetical failure, with the current governance structure as the variable.

The answer reveals whether the governance structure is adequate for the failure modes that have been identified as probable. If the answer is yes — the governance structure would catch this failure in time — the deployment can proceed with monitored confidence. If the answer is no — the failure would reach a consequential stage before the governance structure catches it — the governance structure needs to change before deployment proceeds.
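Once the failure path and the controls are written down, the test itself is mechanical. A minimal sketch, with invented review cadences and an assumed time-to-consequence:

```python
from dataclasses import dataclass

@dataclass
class Control:
    """A governance control: its review cadence and what it can see."""
    name: str
    review_interval_days: int
    detects_aggregate: bool

def catches_in_time(controls: list[Control], aggregate_only: bool,
                    days_to_consequence: int) -> bool:
    """Counterfactual test: does any control that can see this failure
    fire before the failure becomes consequential?"""
    applicable = [c for c in controls if c.detects_aggregate or not aggregate_only]
    return any(c.review_interval_days <= days_to_consequence for c in applicable)

controls = [
    Control("per-case outlier review", review_interval_days=1, detects_aggregate=False),
    Control("quarterly aggregate fairness report", review_interval_days=90, detects_aggregate=True),
]

# The probable failure from the credit example is visible only in aggregate and,
# on these assumed figures, becomes consequential after 60 days.
print(catches_in_time(controls, aggregate_only=True, days_to_consequence=60))  # False
```

Here the answer is no: the only control that can see the failure reviews too slowly to catch it, which is precisely the finding that should change the governance structure before deployment.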

This mode of analysis is not common in board governance. It requires both the forward projection (what failure is probable) and the counterfactual (does our governance structure catch it). Both are non-trivial. Combined, they constitute the only governance posture that can claim to be proactive rather than reactive.


Where this applies beyond AI

The forward projection principle is not specific to AI governance. It applies to any complex system where failure modes can be modelled in advance.

Post-quantum cryptography migration is an example I use frequently. The regression analysis approach waits for a quantum computing capability to materialise and then works backwards to understand why the organisation was unprepared. The forward projection approach asks now: given the current state of quantum computing development, the organisation’s cryptographic infrastructure, and the data it needs to protect, what is the probable failure timeline if no migration begins? What conditions are necessary for the migration to complete before the relevant capability threshold is reached?
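That projection is often compressed into a single inequality, commonly attributed to Michele Mosca: if migration time plus the period the data must remain confidential exceeds the time until a cryptanalytically relevant quantum capability, the organisation is already exposed. The figures below are placeholders, not estimates:

```python
def migration_at_risk(migration_years: float, shelf_life_years: float,
                      years_to_quantum_capability: float) -> bool:
    """Mosca-style check: data is exposed if migration time plus the period
    the data must stay confidential exceeds the time until a relevant
    quantum capability arrives."""
    return migration_years + shelf_life_years > years_to_quantum_capability

# Placeholder figures: 5-year migration, records confidential for 10 years,
# capability assumed plausible within 12 years.
print(migration_at_risk(5, 10, 12))  # True -> deferral is already a decision to be exposed
```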

This is a governance question, not a technical one. It does not require the board to understand quantum physics. It requires the board to understand the causal chain from a current decision (migrate or defer) to a probable future outcome (prepared or unprepared), and to evaluate the governance structure against that chain.

This is what I mean when I describe causal analysis as the differentiating skill in governance. Every board has access to regression analysis — post-incident review is standard. Boards that also apply forward projection, and use counterfactual testing to verify their governance structures against probable failure modes, govern a different category of risk.


For boards seeking a structured framework that incorporates forward projection and counterfactual testing into AI deployment governance, the Board AI Governance Framework provides the decision structure and review questions.

The AI Readiness Assessment gives boards a structured starting point for evaluating current governance capability against the requirements of forward-looking AI risk management.

For independent advisory support, contact Steven directly.

Steven Vaile

Board technology advisor and QSECDEF co-founder. Writes on AI governance, quantum security, and commercial strategy for boards and deep tech founders.