Why statistics does not give us causality

24 Jun, 2019 at 12:28 | Posted in Statistics & Econometrics | 4 Comments

If contributions made by statisticians to the understanding of causation are to be taken over with advantage in any specific field of inquiry, then what is crucial is that the right relationship should exist between statistical and subject-matter concerns …

Where the ultimate aim of research is not prediction per se but rather causal explanation, an idea of causation that is expressed in terms of predictive power — as, for example, ‘Granger’ causation — is likely to be found wanting. Causal explanations cannot be arrived at through statistical methodology alone: a subject-matter input is also required in the form of background knowledge and, crucially, theory …

Likewise, the idea of causation as consequential manipulation is apt to research that can be undertaken primarily through experimental methods and, especially to ‘practical science’ where the central concern is indeed with ‘the consequences of performing particular acts’. The development of this idea in the context of medical and agricultural research is as understandable as the development of that of causation as robust dependence within applied econometrics. However, the extension of the manipulative approach into sociology would not appear promising, other than in rather special circumstances … The more fundamental difficulty is that, under the — highly anthropocentric — principle of ‘no causation without manipulation’, the recognition that can be given to the action of individuals as having causal force is in fact peculiarly limited.

John H. Goldthorpe

Causality in social sciences — and economics — can never solely be a question of statistical inference. Causality entails more than predictability, and really explaining social phenomena in depth requires theory. Analysis of variation — the foundation of all econometrics — can never in itself reveal how these variations are brought about. Only when we are able to tie actions, processes or structures to the statistical relations detected can we say that we are getting at relevant explanations of causation.
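To make the point about predictability concrete, here is a minimal sketch (my own construction, not taken from Goldthorpe or from the chapter referred to below; all variable names and coefficients are invented) in which one series ‘Granger-causes’ another purely because both track a common, unobserved driver:

```python
# A minimal sketch (illustration only): x 'Granger-causes' y in the purely
# predictive sense, although x has no causal effect on y at all -- both
# series merely track a common driver z, which x reflects earlier than y.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(42)
n = 500

# Common, unobserved driver: a slowly moving AR(1) process
z = np.zeros(n)
for t in range(1, n):
    z[t] = 0.9 * z[t - 1] + rng.normal()

x = z + rng.normal(size=n)                 # x reflects z immediately
y = np.empty(n)
y[:2] = rng.normal(size=2)
y[2:] = z[:-2] + rng.normal(size=n - 2)    # y reflects z two periods later

# Does x help predict y beyond y's own past? (second column -> first column)
results = grangercausalitytests(np.column_stack([y, x]), maxlag=2)
p_value = results[2][0]["ssr_ftest"][1]
print(f"Granger test p-value at lag 2: {p_value:.4g}")
# The p-value is typically tiny, so x 'Granger-causes' y -- yet intervening
# on x would change nothing about y. Predictive power is not causal force.
```

The test does exactly what it promises — it detects incremental predictive power — but only subject-matter knowledge about z can tell us that no causal arrow runs from x to y.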

Most facts have many different possible explanations, but we want to find the best of all contrastive explanations (since all real explanation takes place relative to a set of alternatives). So which is the best explanation? Many scientists, influenced by statistical reasoning, think that the likeliest explanation is the best explanation. But the likelihood of x is not in itself a strong argument for thinking it explains y. I would rather argue that what makes one explanation better than another are things like aiming for and finding powerful, deep, causal features and mechanisms that we have warranted and justified reasons to believe in. Statistical reasoning — especially the variety based on a Bayesian epistemology — generally has no room for these kinds of explanatory considerations. The only thing that matters is the probabilistic relation between evidence and hypothesis. That is also one of the main reasons I find abduction — inference to the best explanation — a better description and account of what constitutes actual scientific reasoning and inference.
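To spell out what the ‘probabilistic relation between evidence and hypothesis’ amounts to, a Bayesian ranks hypotheses H against evidence E by nothing more than prior and likelihood (standard notation, not anything from the sources quoted here):

$$P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}$$

Explanatory depth, mechanism, and contrastive relevance appear nowhere in this formula; two hypotheses with the same prior and the same likelihood are, on this account, indistinguishable, however differently they explain.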

For more on these issues — see the chapter “Capturing causality in economics and the limits of statistical inference” in my On the use and misuse of theories and models in economics.

In the social sciences … regression is used to discover relationships or to disentangle cause and effect. However, investigators have only vague ideas as to the relevant variables and their causal order; functional forms are chosen on the basis of convenience or familiarity; serious problems of measurement are often encountered.

Regression may offer useful ways of summarizing the data and making predictions. Investigators may be able to use summaries and predictions to draw substantive conclusions. However, I see no cases in which regression equations, let alone the more complex methods, have succeeded as engines for discovering causal relationships.

David Freedman

Some statisticians and data scientists think that algorithmic formalisms somehow give them access to causality. That is, however, simply not true. Assuming ‘convenient’ things like faithfulness or stability is not to give proofs. It’s to assume what has to be proven. Deductive-axiomatic methods used in statistics do not produce evidence for causal inferences. The real causality we are searching for is the one existing in the real world around us. If there is no warranted connection between axiomatically derived theorems and the real world, well, then we haven’t really obtained the causation we are looking for.
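To illustrate why assumptions like faithfulness do not by themselves deliver causal conclusions, here is a small sketch (my own construction, with invented variable names and coefficients) in which two different causal structures leave the same conditional-independence pattern in the data:

```python
# A minimal sketch (illustration only): a chain X -> Z -> Y and a common
# cause X <- Z -> Y both make X independent of Y given Z, so algorithms
# that read causal arrows off independence patterns cannot tell them apart.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def partial_corr_xy_given_z(x, y, z):
    """Correlation of X and Y after regressing each on Z."""
    rx = x - np.polyval(np.polyfit(z, x, 1), z)
    ry = y - np.polyval(np.polyfit(z, y, 1), z)
    return np.corrcoef(rx, ry)[0, 1]

# Structure A: chain  X -> Z -> Y
x_a = rng.normal(size=n)
z_a = 0.8 * x_a + rng.normal(size=n)
y_a = 0.8 * z_a + rng.normal(size=n)

# Structure B: common cause  X <- Z -> Y
z_b = rng.normal(size=n)
x_b = 0.8 * z_b + rng.normal(size=n)
y_b = 0.8 * z_b + rng.normal(size=n)

print("chain:        corr(X, Y | Z) ~", round(partial_corr_xy_given_z(x_a, y_a, z_a), 3))
print("common cause: corr(X, Y | Z) ~", round(partial_corr_xy_given_z(x_b, y_b, z_b), 3))
# Both partial correlations are ~0: identical independence fingerprints,
# different causal stories. Choosing between them needs background knowledge.
```

At best such algorithms recover an equivalence class of structures; picking out the true one requires exactly the kind of subject-matter input and theory argued for above.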

4 Comments

  1. Tangentially related, though I thought you’d appreciate statistics’ role in killing all life on the planet.

    https://www.mdpi.com/2076-3263/9/6/251/htm
    “One more issue regards the level of data representativeness and the statistics applied to the analyzed data sets to enable conclusions to be drawn. Very often scientists use statistics of the most common probability distribution, a normal (Gaussian) data distribution. Sometimes they even apply these statistics as filters while collecting data without investigating the nature of the raw data first. As a result, they set a data range by removing outliers; sometimes that can be like throwing the child out with the bath water. Indeed, when one sets 1 SD (or 1 σ) as a data filter, this means that all outliers are removed and only 68% of the data is taken into consideration. Many years ago, when we started our investigation, we collected a limited data set; dissolved CH4 was not detected in most samples due to low instrument precision, and in only one sample did we measure a very high concentration of dissolved CH4 (20 µM). Following the mainstream, we removed this sample from our data set and, as a result, we lost at least five years of expertise, because, as we learned later, that single sample was from a hot spot, which we identified in that location five years later. Other authors have made the same mistake as we did, and removed outliers from the analysis [80,100]. We suggest that no statistical filters be set while collecting the raw data; this allows researchers to consider every data point when investigating the nature of the raw data. Before applying any statistics to a raw data set, it is reasonable to test the data using variable statistical tools and available software to understand which distribution fits best and what statistics is appropriate to apply. When measured values vary many fold or even by orders of magnitude, instead of removing the outlying values, it would be appropriate to divide the data into sub-populations and apply other than normal distribution statistics to the data [19].”

    • Or as Zvi Griliches had it: “The cost of computing has dropped exponentially, but the cost of thinking is what it always was. That is why we see so many articles with so many regressions and so little thought.” 🙂

  2. Well, some think in this way! 🙂 https://www.tylervigen.com/spurious-correlations

    • Says it all 🙂

