Do RCTs really carry special epistemic weight?

30 Jun, 2021 at 17:09 | Posted in Theory of Science & Methodology | Comments Off on Do RCTs really carry special epistemic weight?

Mike Clarke, the Director of the Cochrane Centre in the UK, for example, states on the Centre’s Web site: ‘In a randomized trial, the only difference between the two groups being compared is that of most interest: the intervention under investigation’.

This seems clearly to constitute a categorical assertion that by randomizing, all other factors — both known and unknown — are equalized between the experimental and control groups; hence the only remaining difference is exactly that one group has been given the treatment under test, while the other has been given either a placebo or conventional therapy; and hence any observed difference in outcome between the two groups in a randomized trial (but only in a randomized trial) must be the effect of the treatment under test.

Clarke’s claim is repeated many times elsewhere and is widely believed. It is admirably clear and sharp, but it is clearly unsustainable … Clearly the claim taken literally is quite trivially false: the experimental group contains Mrs Brown and not Mr Smith, whereas the control group contains Mr Smith and not Mrs Brown, etc. Some restriction on the range of differences being considered is obviously implicit here; and presumably the real claim is something like that the two groups have the same means and distributions of all the [causally?] relevant factors. Although this sounds like a meaningful claim, I am not sure whether it would remain so under analysis … And certainly, even with respect to a given (finite) list of potentially relevant factors, no one can really believe that it automatically holds in the case of any particular randomized division of the subjects involved in the study. Although many commentators often seem to make the claim … no one seriously thinking about the issues can hold that randomization is a sufficient condition for there to be no difference between the two groups that may turn out to be relevant …

In sum, despite what is often said and written, no one can seriously believe that having randomized is a sufficient condition for a trial result to be reasonably supposed to reflect the true effect of some treatment. Is randomizing a necessary condition for this? That is, is it true that we cannot have real evidence that a treatment is genuinely effective unless it has been validated in a properly randomized trial? Again, some people in medicine sometimes talk as if this were the case, but again no one can seriously believe it. Indeed, as pointed out earlier, modern medicine would be in a terrible state if it were true. As already noted, the overwhelming majority of all treatments regarded as unambiguously effective by modern medicine today — from aspirin for mild headache through diuretics in heart failure and on to many surgical procedures — were never (and now, let us hope, never will be) ‘validated’ in an RCT.

John Worrall

Taking the con out of RCTs

30 Jun, 2021 at 11:20 | Posted in Economics | Comments Off on Taking the con out of RCTs

Development actions and interventions (policies/programs/projects/practices) should be based on the evidence. This truism now comes with a radical proposal about the meaning of “the evidence.” In development practice, where there are hundreds of complex, sometimes rapidly changing contexts, seemingly innocuous phrases like “rely on the rigorous evidence” are taken to mean: “Ignore evidence from your context and rely in your context on evidence that was ‘rigorous’ for another place, another time, another implementing organization, another set of interacting policies, another set of local social norms, another program design, and do this without any underlying model or theory that guides your understanding of the relevant phenomena.” …

The advocates of RCTs and of the use and importance of rigorous evidence, who are mostly full-time academics based in universities, have often taken a condescending, if not outright ad hominem, stance towards development practitioners. They have often treated arguments against exclusive reliance on RCT evidence — that the world is complex, that getting things done in the real world is a difficult craft, that RCTs don’t address key issues, that results cannot be transplanted across contexts — not as legitimate arguments but as the self-interested pleadings of “bureaucrats” who don’t care about “the evidence” or development outcomes. It is therefore striking that it is the practitioner objections about external validity that are technically right about the unreliability of RCTs for making context-specific predictions, and the academics who are wrong, and this in the technical domain that is supposedly the academics’ comparative advantage.

Lant Pritchett

Just like econometrics, the use of randomization often promises more than it can deliver, basically because it requires assumptions that in practice are not possible to maintain. And just like econometrics, randomization is basically a deductive method: given the assumptions, it delivers deductive inferences. The problem, of course, is that we will never completely know when the assumptions are right. And although randomization may contribute to controlling for confounding, it does not guarantee it, since genuine randomness presupposes infinite experimentation and we know all real experimentation is finite. And even if randomization may help to establish average causal effects, it says nothing about individual effects unless homogeneity is added to the list of assumptions. Causal evidence generated by randomization procedures may be valid in ‘closed’ models, but what we usually are interested in is causal evidence in the real target system we happen to live in.

As Pritchett shows, a conclusion established in population X holds for target population Y only under very restrictive conditions. ‘Ideally controlled experiments’ tell us with certainty what causes what effects — but only given the right ‘closures.’ Making appropriate extrapolations from (ideal, accidental, natural or quasi) experiments to different settings, populations or target systems is not easy. “It works there” is no evidence for “it will work here”. Causes deduced in an experimental setting still have to show that they come with an export warrant to the target population/system. The causal background assumptions made have to be justified, and without licenses to export, the value of ‘rigorous’ and ‘precise’ methods — and ‘on-average-knowledge’ — is despairingly small.

Bradford Hill — how to find causality in correlations

30 Jun, 2021 at 09:53 | Posted in Statistics & Econometrics | Comments Off on Bradford Hill — how to find causality in correlations


How to achieve ‘external validity’

29 Jun, 2021 at 11:54 | Posted in Statistics & Econometrics | 2 Comments

There is a lot of discussion in the literature on beginning with experiments and then going on to check “external validity”. But to imagine that there is a scientific way to achieve external validity is, for the most part, a delusion … RCTs do not in themselves tell us anything about the traits of populations in other places and at other times. Hence, no matter how large the population from which we draw our random samples is, because it is impossible to draw samples from tomorrow’s population and all policies we craft today are for use tomorrow, there is no “scientific” way to go from RCTs to policy. En route from evidence and experience to policy, we have to rely on intuition, common sense and judgement. It is evidence coupled with intuition and judgement that gives us knowledge. To deny any role to intuition is to fall into total nihilism.

Kaushik Basu

Orbán’s Hungary: the Europeans’ anger comes too late

29 Jun, 2021 at 10:20 | Posted in Politics & Society | 1 Comment

At last! All those who for years have argued that the EU must show the “little dictator” Viktor Orban the red card got their wish on Thursday. It was like the schoolyard, where after years of looking the other way the majority finally confronts the resident bully and takes him apart by every rule of the art …

But where were they before? Mark Rutte, for instance, with his tough line that such a country no longer belongs in the EU? The Belgian prime minister, who coined the fine phrase that you cannot choose your homosexuality, but there is no excuse for homophobia. And finally the German Chancellor, who likewise clearly condemned Orban’s law.

It is precisely she and her party who for years held a protective hand over Orban, covered his anti-democratic course under the protection of the powerful EPP group in the European Parliament, shielded him from criticism and ultimately made his seizure of power in Budapest possible. So the tears now being shed over the anti-gay and anti-lesbian law are in no small part crocodile tears.

Orban’s rise comes straight out of the totalitarian handbook: first the media are shut down or brought into line, the independence of the judiciary is destroyed, and the institutions are infiltrated by corrupt accomplices. Then civil society is drained and the freedom of science and culture hollowed out. The other European heads of government have watched all this for years and shaken Orban’s hand amiably at every summit …

Hungary’s state-directed media, by the way, have now spread the story that George Soros, the Jewish financier and one-time patron of Orban, instigated the row over the LGBTQ law. That is antisemitic smear propaganda in the National Socialist style; at their next meeting, the EU heads of government really ought to take Hungary to task again, this time on suspicion of fascism. They have looked the other way over such filth for far too long.

Barbara Wesel / DW

Stefan Löfven should be ashamed of himself

29 Jun, 2021 at 09:21 | Posted in Politics & Society | Comments Off on Stefan Löfven should be ashamed of himself

At yesterday’s press conference, where our prime minister announced his resignation, Stefan Löfven repeated several times that it was the Left Party’s fault that he had been forced into this, even though the Social Democrats had, according to Löfven, met the Left Party halfway (oh sure, and Santa arrives on Christmas Eve) in the conflict over market rents.

And Löfven thinks he can fool us with this drivel, this ‘splashing of ducks and plopping of frogs.’

You can only shake your head. How stupid does the man think people are?

The parliamentary democracy we have had in this country for more than a century rests on a government being tolerated by a majority of the Riksdag. Löfven has completely ignored this and, as usual, simply taken for granted that the Left Party would back down. When the Left Party now rightly refuses to break with its clearly declared, decades-old rejection of market rents, Löfven chooses to blame the Left Party for the situation that has arisen.

I say with Fabian Månsson: Shame on you! Sevenfold shame on you!

Randomizations creating illusions of knowledge

28 Jun, 2021 at 21:46 | Posted in Statistics & Econometrics | 1 Comment

The advantage of randomised experiments in describing populations creates an illusion of knowledge … This happens because of the propensity of scientific journals to value so-called causal findings and not to value findings where no (so-called) causality is found. In brief, it is arguable that we know less than we think we do.

To see this, suppose—as is indeed the case in reality—that thousands of researchers in thousands of places are conducting experiments to reveal some causal link. Let us in particular suppose that there are numerous researchers in numerous villages carrying out randomised experiments to see whether M causes P. Words being more transparent than symbols, let us assume they want to see whether medicine (M) improves the school participation (P) of school-going children. In each village, 10 randomly selected children are administered M and the school participation rates of those children and also children who were not given M are monitored. Suppose children without M go to school half the time and are out of school the other half. The question is: is there a systematic difference of behaviour among children given M?

I shall now deliberately construct an underlying model whereby there will be no causal link between M and P. Suppose Nature does the following. For each child, whether or not the child has had M, Nature tosses a coin. If it comes out tails the child does not go to school and if it comes out heads, the child goes to school regularly.

Consider a village and an RCT researcher in the village. What is the probability, p, that she will find that all 10 children given M will go to school regularly? The answer is clearly

p = (1/2)^10

because we have to get heads for each of the 10 tosses for the 10 children.

Now consider n researchers in n villages. What is the probability that in none of these villages will a researcher find that all the 10 children given M go to school regularly? Clearly, the answer is (1 − p)^n.

Hence, if w(n) is used to denote the probability that among the n villages where the experiment is done, there is at least one village where all 10 tosses come out heads, we have:

w(n) = 1 − (1 − p)^n.

It is easy to check that the following are true:

w(100) = 0.0931,
w(1000) = 0.6236,
w(10 000) = 0.9999.

Therein lies the catch … If there are 1000 experimenters in 1000 villages doing this, the probability that there will exist one village where it will be found that all 10 children administered M will participate regularly in school is 0.6236. That is, it is more likely that such a village will exist than not. If the experiment is done in 10 000 villages, the probability of there being one village where M always leads to P is a virtual certainty (0.9999).

This is, of course, a specific example. But that this problem will invariably arise follows from the fact that

lim_{n → ∞} w(n) = lim_{n → ∞} [1 − (1 − p)^n] = 1.

Given that those who find such a compelling link between M and P will be able to publish their paper and others will not, we will get the impression that a true causal link has been found, though in this case (since we know the underlying process) we know that that is not the case. With 10 000 experiments, it is close to certainty that someone will find a firm link between M and P. Hence, the finding of such a link shows nothing but the laws of probability being intact. Yet, thanks to the propensity of journals to publish the presence rather than the absence of “causal” links, we get an illusion of knowledge and discovery where there are none.

Kaushik Basu
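
Basu’s numbers are easy to check. The following minimal Python sketch (my illustration, not Basu’s) computes w(n) exactly and verifies the 1000-village case by direct simulation; the coin-toss model and the figures 10 and 1000 are taken from the example above.

```python
import random

p = 0.5 ** 10          # P(all 10 treated children in a village attend) = 1/1024

def w(n: int) -> float:
    """Probability that at least one of n villages shows a 'perfect' result."""
    return 1.0 - (1.0 - p) ** n

for n in (100, 1_000, 10_000):
    print(f"w({n}) = {w(n):.4f}")      # 0.0931, 0.6236, 0.9999

# Monte Carlo check of the 1000-village case: toss a fair coin for each of
# the 10 treated children in each of 1000 villages, and count how often at
# least one village comes up all heads.
random.seed(1)
REPS = 2_000
hits = sum(
    any(all(random.random() < 0.5 for _ in range(10)) for _ in range(1_000))
    for _ in range(REPS)
)
print(f"simulated w(1000) ≈ {hits / REPS:.3f}")   # close to the exact 0.6236
```

With a couple of thousand replications the simulated frequency lands within roughly ±0.01 of the analytic value, which is all the point requires: publish only the ‘hits’, and a spurious causal link looks like a robust finding.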

A giant has passed away

28 Jun, 2021 at 09:02 | Posted in Varia | 1 Comment


Peps Persson, 1946–2021.

South Africa — the most unequal country in the world

27 Jun, 2021 at 15:23 | Posted in Politics & Society | Comments Off on South Africa — the most unequal country in the world


Testing causal claims

27 Jun, 2021 at 14:03 | Posted in Theory of Science & Methodology | Comments Off on Testing causal claims


What does randomisation guarantee? Nothing!

26 Jun, 2021 at 17:43 | Posted in Theory of Science & Methodology | 3 Comments

Does not randomization somehow or other guarantee (or perhaps, much more plausibly, provide the nearest thing that we can have to a guarantee) that any possible links to … outcome, aside from the link to treatment …, are broken?

Although he does not explicitly make this claim, and although there are issues about how well it sits with his own technical programme, this seems to me the only way in which Pearl could, in the end, ground his argument for randomizing. Notice, first, however, that even if the claim works then it would provide a justification, on the basis of his account of cause, only for randomizing after we have deliberately matched for known possible confounders … Once it is accepted that for any real randomized allocation known factors might be unbalanced — and more sensible defenders of randomization do accept this (though curiously, as we saw earlier, they recommend rerandomizing until the known factors are balanced rather than deliberately balancing them!) — then it seems difficult to deny that a properly matched experimental and control group is better, so far as preventing known confounders from producing a misleading outcome, than leaving it to the happenstance of the tosses …

The random allocation may ‘sever the link’ with this unknown factor or it may not (since we are talking about an unknown factor, then, by definition, we will not and cannot know which). Pearl’s claim that Fisher’s method ‘guarantees’ that the link with the possible confounders is broken is then, in practical terms, pure bluster. 

John Worrall

The point of making a randomized experiment is often said to be that it ‘ensures’ that any correlation between a supposed cause and effect indicates a causal relation. This is believed to hold since randomization (allegedly) ensures that a supposed causal variable does not correlate with other variables that may influence the effect.

The problem with that (rather simplistic) view of randomization is that the claims made are both exaggerated and, strictly speaking, false:

• Even if you manage to do the assignment to treatment and control groups ideally randomly, the sample selection certainly is — except in extremely rare cases — not random. Even if we make a proper randomized assignment, if we apply the results to a biased sample, there is always the risk that the experimental findings will not apply. What works ‘there’ does not work ‘here.’ Randomization hence does not ‘guarantee’ or ‘ensure’ that we make the right causal claim. Although randomization may help us rule out certain possible causal claims, randomization per se does not guarantee anything!

• Even if both sampling and assignment are made in an ideal random way, performing standard randomized experiments only gives you averages. The problem here is that although we may get an estimate of the ‘true’ average causal effect, this may ‘mask’ important heterogeneous effects of a causal nature. Although we get the right answer that the average causal effect is 0, those who are ‘treated’ may have causal effects equal to -100 and those ‘not treated’ may have causal effects equal to 100 (see the sketch after this list). Contemplating being treated or not, most people would probably be interested in knowing about this underlying heterogeneity and would not consider the average effect particularly enlightening.

• There is almost always a trade-off between bias and precision. In real-world settings, a little bias is often outweighed by a gain in precision. And — most importantly — in case we have a population with sizeable heterogeneity, the average treatment effect of the sample may differ substantially from the average treatment effect in the population. If so, the value of any extrapolating inferences made from trial samples to other populations is highly questionable.

• Since most real-world experiments and trials build on a single randomization, what would happen if you kept on randomizing forever does not help you to ‘ensure’ or ‘guarantee’ that you do not draw false causal conclusions in the one particular randomized experiment you actually do perform. It is indeed difficult to see why thinking about what you know you will never do would make you happy about what you actually do.
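
To make the heterogeneity point concrete, here is a toy potential-outcomes sketch in Python (my illustration; the ±100 figures are the hypothetical ones used in the second bullet above):

```python
import statistics

# Potential outcomes (outcome_if_treated, outcome_if_untreated) for six
# hypothetical subjects: three are harmed by treatment (individual effect
# -100), three are helped (individual effect +100).
potential_outcomes = [(0, 100)] * 3 + [(100, 0)] * 3

effects = [y1 - y0 for y1, y0 in potential_outcomes]
print(effects)                    # [-100, -100, -100, 100, 100, 100]
print(statistics.mean(effects))   # 0, the 'true' average causal effect
```

A perfectly randomized trial would correctly recover the average effect of 0, yet someone deciding whether to take the treatment cares entirely about which subgroup they belong to, information the average alone cannot supply.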

The problem many ‘randomistas’ end up with when underestimating heterogeneity and interaction is not only an external validity problem when trying to ‘export’ regression results to different times or different target populations. It is often also an internal validity problem for the millions of regression estimates that economists produce every year.

‘Ideally controlled experiments’ tell us with certainty what causes what effects — but only given the right ‘closures.’ Making appropriate extrapolations from (ideal, accidental, natural, or quasi) experiments to different settings, populations, or target systems is not easy. And since trials usually are not repeated, unbiasedness and balance on average over repeated trials say nothing about any one trial. ‘It works there’ is no evidence for ‘it will work here.’ Causes deduced in an experimental setting still have to show that they come with an export warrant to the target population/system. The causal background assumptions made have to be justified, and without licenses to export, the value of ‘rigorous’ and ‘precise’ methods — and ‘on-average-knowledge’ — is despairingly small.
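
That single-trial point is easy to demonstrate. The simulation below (my sketch, with made-up numbers: twenty subjects, one prognostic covariate) shows that a single random split routinely leaves a known covariate clearly imbalanced, even though the imbalance vanishes on average over many hypothetical re-randomizations.

```python
import random
import statistics

random.seed(7)
ages = [random.gauss(50, 10) for _ in range(20)]   # one prognostic covariate

def age_imbalance() -> float:
    """Treated-minus-control difference in mean age for one random split."""
    treated = set(random.sample(range(20), 10))
    mean_t = statistics.mean(ages[i] for i in treated)
    mean_c = statistics.mean(ages[i] for i in range(20) if i not in treated)
    return mean_t - mean_c

print(f"this one trial: {age_imbalance():+.1f} years")

splits = [age_imbalance() for _ in range(10_000)]
print(f"mean over 10,000 re-randomizations: {statistics.mean(splits):+.2f} years")
print(f"splits off by more than 5 years: "
      f"{sum(abs(d) > 5 for d in splits) / len(splits):.0%}")
```

With these (made-up) numbers, roughly a quarter of all possible splits differ by more than five years in mean age: unbiased over the ensemble, but far from balanced in the one draw you actually get.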

RCTs have very little reach beyond giving descriptions of what has happened in the past. From the perspective of the future and for policy purposes they are as a rule of limited value since they cannot tell us what background factors were held constant when the trial intervention was being made.

RCTs usually do not provide evidence that the results are exportable to other target systems. RCTs cannot be taken for granted to give generalisable results. That something works somewhere for someone is no warrant for believing that it will work for us here, or even that it works generally.

Randomisation may often — in the right contexts — help us to draw causal conclusions. But it certainly is not necessary to secure scientific validity or establish causality. Randomisation guarantees nothing. Just as observational studies may be subject to different biases, so are randomised studies and trials.

‘New Keynesian’ macroeconomics — worse than useless

24 Jun, 2021 at 17:26 | Posted in Economics | 4 Comments

Mainstream macroeconomics can only progress if it gets rid of the DSGE albatross around its neck. It is better to do it now than to wait for another 20 years because the question is not whether but when DSGE modeling will be discarded. DSGE modeling is a story of a death foretold …

Getting rid of DSGE models is critical because the hegemonic DSGE program is crowding out alternative macro methodologies that do work … DSGE practitioners, who with a mixture of bluff and bluster act as gatekeeper, judge, jury, and executioner in all macroeconomic matters, are a block on the road to progress. The roadblock has to be removed. The failed and failing DSGE models have to go if mainstream macroeconomics wants to become a force for the common good again.

Servaas Storm

Servaas’ article is a marvellous final takedown of the ridiculous aspirations of New-Classical-New-Keynesian macroeconomic modelling. DSGE models are worse than useless — and still, mainstream economists seem to be überimpressed by the ‘rigour’ brought to macroeconomics by New-Classical-New-Keynesian DSGE models and their rational expectations and microfoundations!

It is difficult to see why.

‘Rigorous’ and ‘precise’ DSGE models cannot be considered anything other than unsubstantiated conjectures as long as they aren’t supported by evidence from outside the theory or model. To my knowledge, no decisive empirical evidence has been presented. DSGE models are nothing but a joke.


Proving things ‘rigorously’ in DSGE models is at most a starting-point for doing an interesting and relevant economic analysis. Forgetting to supply export warrants to the real world makes the analysis an empty exercise in formalism without real scientific value.

Mainstream economists think there is a gain from the DSGE style of modelling in its capacity to offer the one and only structure around which to organise discussions. To me, that sounds more like a religious theoretical-methodological dogma, where one paradigm rules in divine hegemony. That’s not progress. That’s the death of economics as a science.

As David Hand tells us — building models based on questionable ontological or epistemological assumptions may be fine in mathematics and religion, but it certainly is not science:

If you want absolute truth then you must look to pure mathematics or religion, but certainly not to science … Science is all about possibilities. We propose theories, conjectures, hypotheses and explanations. We collect evidence and data, and we test the theories against this new evidence … It’s the very essence of science that its conclusions can change, that is, that its truths are not absolute. The intrinsic good sense of this is contained within the remark reportedly made by the eminent economist John Maynard Keynes, responding to the criticism that he had changed his position on monetary policy during the 1930s Depression: “When the facts change, I change my mind. What do you do, sir?”

DSGE models belong — as Servaas suggests — in the Museum of Implausible Economic Models. Making adjustments to Ptolemaic epicycles or trying to amend and elaborate on flat earth theories and models is both stupid and tragic. It’s a waste of time and effort. As we all know there are better alternatives.

Economics — a science in need of a paradigm shift

24 Jun, 2021 at 00:38 | Posted in Economics | 6 Comments

The methodology and ideology of modern economics are built into the frameworks of educational methods, and absorbed by students without any explicit discussion. In particular, the logical positivist philosophy is a deadly poison which I ingested during my Ph.D. training at the Economics Department in Stanford in the late 1970s. It took me years and years to undo these effects …

Modern economics is much like this. It starts by making assumptions which are dramatically in conflict with everything we know about human behavior (and firm behavior) and applies mathematical reasoning to situations where it cannot be applied, quantifying the unquantifiable and coming to completely absurd and ridiculous conclusions. Nonetheless, speaking from personal experience, the brainwashing is powerful and effective. It is a slow and painful process to undo …

Unlike the older generation, for younger and more flexible minds, it is possible to take off glasses manufactured in the Euclidean factory, and put on non-Euclidean glasses. Nonetheless, it is still a disconcerting and uncomfortable experience, which will not be undertaken unless there is some expectation of a great reward for this struggle and sacrifice. The costs of paradigm shift must be paid upfront – one loses the ability to talk to the mainstream when one describes the world using an alien framework. The rewards are in the future, and highly speculative and uncertain. Nonetheless, for reasons explained elsewhere, it seems essential to make the effort – the survival of humanity is at stake.

Asad Zaman

Orban’s Hungary — a shame for all of Europe

23 Jun, 2021 at 16:22 | Posted in Politics & Society | Comments Off on Orban’s Hungary — a shame for all of Europe


Why the idea of causation cannot be a purely statistical one

23 Jun, 2021 at 15:32 | Posted in Statistics & Econometrics | 6 Comments

If contributions made by statisticians to the understanding of causation are to be taken over with advantage in any specific field of inquiry, then what is crucial is that the right relationship should exist between statistical and subject-matter concerns …

Where the ultimate aim of research is not prediction per se but rather causal explanation, an idea of causation that is expressed in terms of predictive power — as, for example, ‘Granger’ causation — is likely to be found wanting. Causal explanations cannot be arrived at through statistical methodology alone: a subject-matter input is also required in the form of background knowledge and, crucially, theory …

Likewise, the idea of causation as consequential manipulation is apt to research that can be undertaken primarily through experimental methods and, especially to ‘practical science’ where the central concern is indeed with ‘the consequences of performing particular acts’. The development of this idea in the context of medical and agricultural research is as understandable as the development of that of causation as robust dependence within applied econometrics. However, the extension of the manipulative approach into sociology would not appear promising, other than in rather special circumstances … The more fundamental difficulty is that, under the — highly anthropocentric — principle of ‘no causation without manipulation’, the recognition that can be given to the action of individuals as having causal force is in fact peculiarly limited.

John H. Goldthorpe

Causality in the social sciences — and economics — can never be solely a question of statistical inference. Statistics and data often serve to suggest causal accounts, but causality entails more than predictability, and really explaining social phenomena in depth requires theory. Analysis of variation — the foundation of all econometrics — can never in itself reveal how these variations are brought about. Only when we are able to tie actions, processes or structures to the statistical relations detected can we say that we are getting at relevant explanations of causation.

Most facts have many different, possible, alternative explanations, but we want to find the best of all contrastive explanations (since all real explanation takes place relative to a set of alternatives). So which is the best explanation? Many scientists, influenced by statistical reasoning, think that the likeliest explanation is the best explanation. But the likelihood of x is not in itself a strong argument for thinking it explains y. I would rather argue that what makes one explanation better than another are things like aiming for and finding powerful, deep, causal features and mechanisms that we have warranted and justified reasons to believe in. Statistical reasoning — especially the variety based on a Bayesian epistemology — generally has no room for these kinds of explanatory considerations. The only thing that matters is the probabilistic relation between evidence and hypothesis. That is also one of the main reasons I find abduction — inference to the best explanation — a better description and account of what constitutes actual scientific reasoning and inference.

In the social sciences … regression is used to discover relationships or to disentangle cause and effect. However, investigators have only vague ideas as to the relevant variables and their causal order; functional forms are chosen on the basis of convenience or familiarity; serious problems of measurement are often encountered.

Regression may offer useful ways of summarizing the data and making predictions. Investigators may be able to use summaries and predictions to draw substantive conclusions. However, I see no cases in which regression equations, let alone the more complex methods, have succeeded as engines for discovering causal relationships.

David Freedman

Some statisticians and data scientists think that algorithmic formalisms somehow give them access to causality. That is, however, simply not true. Assuming ‘convenient’ things like faithfulness or stability is not to give proofs. It is to assume what has to be proven. Deductive-axiomatic methods used in statistics do not produce evidence for causal inferences. The real causality we are searching for is the one existing in the real world around us. If there is no warranted connection between axiomatically derived theorems and the real world, well, then we haven’t really obtained the causation we are looking for.
