Using counterfactuals in causal inference

20 Mar, 2023 at 14:01 | Posted in Economics | 8 Comments

I have argued that there are four major problems in the way of using the counterfactual account for causal inference. Of the four, I argued that the fourth — the problem of indeterminacy — is likely to be the most damaging: To the extent that some of the causal principles that connect counterfactual antecedent and consequent are genuinely indeterministic, the counterfactual will be of the “might have been” and not the “would have been” kind …

The causal principles describing a situation of interest must be weak enough — that is, contain genuinely indeterministic relations so that the counterfactual antecedent can be implemented … At the same time, the principles must be strong enough — that is, contain enough deterministic relations so that the consequent follows from the antecedent together with the principles … What is required is enough indeterministic causal relations so that the antecedent can be implemented and enough deterministic relations so that the consequent (or its negation) follows.

Evidently, this is a tall order: Why would deterministic and indeterministic causal principles be distributed in just this way? Wouldn’t it seem likely that to the extent we are willing to believe that the antecedent event was contingent, we are also willing to believe that the outcome remained contingent given the antecedent event? …

A final argument in favor of counterfactuals even in the context of establishing causation is that there are no alternatives that are unequivocally superior. The main alternative to the counterfactual account is process tracking. But process tracking is itself not without problems … For all its difficulties, counterfactual speculation may sometimes be the only way to make causal inferences about singular events.

Julian Reiss


  1. The U.S. Environmental Protection Agency has a very practical approach:
    Causal Analysis/Diagnosis Decision Information System (CADDIS)
    In particular, the following may be of general interest:
    About Causal Assessment: A Conceptual and Historical Explanation of Our Causal Approach

    • Thanks, Kingsley. Interesting reading!

    • May I observe that, based on personal experience with my state’s Adaptive Management Program (one of the phrases in Kingsley’s link), any science is overruled by the politics involved in the various boards and commissions that implement the program and vote on funding priorities, etc., based on individual, emotional, selfish, non-scientific personality whims? Does not arbitrary psychology thus end up determining science outcomes in the end, because implementation of science is a social process subject to all the manipulative tricks in “How to Win Friends and Influence People”? When you operationalize science, don’t you run into wide error margins that allow for many interpretations anyway, so that prioritization of one theory becomes purely a political decision?

      • @rsm
        In principle the proper role of science is to inform policy makers about the available options.
        In contrast choices are properly made by democratically elected politicians or their appointees.
        This seems to be the intent in the Washington state Manual (linked in your link).
        If you are dissatisfied you can:
        – Discuss with your elected representatives
        – Join and influence (or subvert) the ruling political party (Antifa?)
        – Redouble your energies and financial contributions to opposition parties
        – Stand for election yourself.

        • “In principle” – but in practice, why did I observe during today’s Cooperative Monitoring Evaluation and Research committee public meeting a field scientist commenting that variability and real-world uncertainty undermine science’s ability to inform at all?
          What if I can lobby my elected officials to fire all the scientists and invest the science budget in financial markets instead, so you could pay loggers not to log?

  2. The passage reads as if Reiss (like most discussants of causal models I see) had never heard of the stochastic counterfactual, as discussed for example in VanderWeele and Robins, “Stochastic counterfactuals and stochastic sufficient causes”, Statistica Sinica 22 (2012), 379-392. The concept arises quite naturally and precisely in quantum theories, e.g., in a 2-slit experiment we cannot predict beyond a distribution where a given emitted photon will fall. When we observe its final location, we can say that closing a slit would have changed the distribution, but according to the standard Copenhagen interpretation we cannot say that the spot where the photon fell was changed or affected by the slit arrangement — after all, it could very well have landed in the same spot under the counterfactual arrangement.

    In the messy complex world of health and social science phenomena, stochastic potential outcomes are in my view a more accurate way to model causal processes than the usual deterministic models, even though here the models are nothing more than heuristic devices to organize and coordinate observations with explanations.
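The stochastic-counterfactual idea can be illustrated with a toy simulation (a hypothetical sketch for illustration, not VanderWeele and Robins’s actual model). Each unit has outcome *probabilities* under each arm rather than fixed potential outcomes, so the population-level distribution shifts with treatment even though any individual unit’s counterfactual outcome remains a “might have been”:

```python
import random

random.seed(42)

def stochastic_potential_outcomes(p_treated, p_control, n=100_000):
    """Simulate stochastic potential outcomes for n units.

    Each unit's outcome under treatment (y1) and under control (y0)
    is a Bernoulli draw -- a distribution, not a fixed value.
    Returns the mean outcome under each arm and the fraction of units
    whose realized outcome happens to be the same under both arms.
    """
    same = 0
    y1_total = y0_total = 0
    for _ in range(n):
        y1 = random.random() < p_treated   # outcome had the unit been treated
        y0 = random.random() < p_control   # outcome had it been untreated
        y1_total += y1
        y0_total += y0
        same += (y1 == y0)
    return y1_total / n, y0_total / n, same / n

mean_y1, mean_y0, frac_same = stochastic_potential_outcomes(0.6, 0.3)
# The distributions clearly differ (average causal effect about 0.3),
# yet a large fraction of individual units realize the same outcome
# under both arms -- their counterfactual is a "might have been".
```

The point of the sketch: the distribution-level counterfactual claim (“treatment would have raised the outcome rate”) is well defined, while the unit-level claim (“this unit’s outcome would have differed”) often is not — just as closing a slit changes the photon distribution without licensing a claim about any individual photon.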

    • As always, context is decisive. To be fair to Reiss, it has to be acknowledged that most of his discussion relates to issues in counterfactual history analysis (à la Fogel’s work on the American railroads, ‘alternate history’, etc.) rather than experimental settings or theoretical physics.

      • My point is precisely that, in such highly uncertain settings as (say) alternate histories, a stochastic counterfactual model is more realistic than a deterministic one, because it makes at least some allowance for the enormous range of possible worlds after the defining event at which the alternative histories branch apart. This makes stochastic models far less vivid, if not useless, for enthralling writing, but it acknowledges that we can have no certainty about specific counterfactual outcomes — just as in quantum mechanics!

        An example I find vivid: How would history have played out if Hitler had been killed instead of wounded by the shell burst that hospitalized him in 1916? Any single alternative history from then on (as in a novel or deterministic counterfactual) is sheer speculative fiction. History could have turned out far differently in any direction depending on exactly who would have then become the Nazi leader in the 1920s. A more intelligent, historically savvy, militarily competent leader might have been able to more fully exploit Soviet blunders and not declare war on the U.S. when his offensive bogged down at Moscow, thus prolonging WWII enough to develop nuclear warheads for the V2, avoiding unconditional surrender and perhaps even achieving victory in Europe. On the other hand, a less charismatic and less politically talented leader (with respect to German society at the time) might never even have achieved a government takeover by the Nazi party.

        There are endless combinations of possibilities across innumerable dimensions, so at best we can only begin to imagine a distribution of outcomes that could ensue. That recognition does not make for the most entertaining of stories. But then, it may be that an advantage of the stochastic view is to put a brake on tales which, through their emotional appeal and evidence filtering, seem so compelling to some that they inspire tenacious belief (as common in religion, politics, philosophy, statistics, and medicine), in defiance of certain scientific ideals – ideals which admittedly in science as in life are more honored in word than in deed.

