Instrumental variables — in search for identification

21 Jul, 2021 at 17:52 | Posted in Statistics & Econometrics | Leave a comment

We need relevance and validity. How realistic is validity, anyway? We ideally want our instrument to behave just like randomization in an experiment. But in the real world, how likely is that to actually happen? Or, if it’s an IV that requires control variables to be valid, how confident can we be that the controls really do everything we need them to?

In the long-ago times, researchers were happy to use instruments without thinking too hard about validity. If you go back to the 1970s or 1980s you can find people using things like parental education as an instrument for your own (surely your parents’ education can’t possibly affect your outcomes except through your own education!). It was the wild west out there…

But these days, go to any seminar where an instrumental variables paper is presented and you’ll hear no end of worries and arguments about whether the instrument is valid. And as time goes on, it seems like people have gotten more and more difficult to convince when it comes to validity. This focus on validity is good, but sometimes comes at the expense of thinking about other IV considerations, like monotonicity (we’ll get there) or even basic stuff like how good the data is.

There’s good reason to be concerned! Not only is it hard to justify that there exists a variable strongly related to treatment that somehow isn’t at all related to all the sources of hard-to-control-for back doors that the treatment had in the first place, we also have plenty of history of instruments that we thought sounded pretty good that turned out not to work so well.
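Huntington-Klein’s worry is easy to make concrete. Here is a minimal simulation sketch (the data-generating process and all variable names are invented for illustration) of what happens when the ‘instrument’ is itself connected to a back-door path:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Invented DGP with an unobserved confounder U opening a back door
# between treatment X and outcome Y. The true effect of X on Y is 1.0.
u = rng.normal(size=n)                    # unobserved confounder
z_bad = rng.normal(size=n) + 0.3 * u      # invalid instrument: correlated with U
x = 0.8 * z_bad + u + rng.normal(size=n)  # treatment
y = 1.0 * x + u + rng.normal(size=n)      # outcome

# IV (Wald/2SLS) estimate: cov(Z, Y) / cov(Z, X)
beta_bad = np.cov(z_bad, y)[0, 1] / np.cov(z_bad, x)[0, 1]

# Same exercise with a valid instrument, independent of U.
z_ok = rng.normal(size=n)
x2 = 0.8 * z_ok + u + rng.normal(size=n)
y2 = 1.0 * x2 + u + rng.normal(size=n)
beta_ok = np.cov(z_ok, y2)[0, 1] / np.cov(z_ok, x2)[0, 1]

print(f"invalid instrument: {beta_bad:.3f}, valid instrument: {beta_ok:.3f}")
# The invalid instrument is strongly 'relevant' yet gives a biased
# estimate (well above 1.0); relevance is testable, validity is not.
```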

Nick Huntington-Klein’s book is superbly accessible.

Highly recommended reading for anyone interested in causal inference in economics and social science!

Causal discovery and the faithfulness assumption (student stuff)

21 Jul, 2021 at 12:35 | Posted in Statistics & Econometrics | Leave a comment


For more on the (questionable) faithfulness assumption, cf. chapter six of Nancy Cartwright’s Hunting causes and using them.

Conditional probabilities (student stuff)

21 Jul, 2021 at 12:21 | Posted in Statistics & Econometrics | Leave a comment


Which causal inference books to read

19 Jul, 2021 at 11:57 | Posted in Statistics & Econometrics | Leave a comment

Causal Inference Books Flowchart

Source

All suggestions are highly readable, but for the general reader, yours truly would also like to recommend The book of why by Pearl & Mackenzie.

RCT — a questionable claim of establishing causality

16 Jul, 2021 at 11:33 | Posted in Statistics & Econometrics | 2 Comments

The ideal RCT is the special case in which the trial’s treatment status is also assigned randomly (in addition to drawing random samples from the two populations, one treated and one not) and the only error is due to sampling variability … In this special case, as the number of trials increases, the mean of the trial estimates tends to get closer to the true mean impact. This is the sense in which an ideal RCT is said to be unbiased, namely that the sampling error is driven to zero in expectation …

Prominent randomistas have sometimes left out the “in expectation” qualifier, or ignored its implications for the existence of experimental errors. These advocates of RCTs attribute any difference in mean outcomes between the treatment and control samples to the intervention … Many people in the development community now think that any measured difference between the treatment and control groups in an RCT is attributable to the treatment. It is not; even the ideal RCT has some unknown error.

A rare but instructive case is when there is no treatment. Absent any other effects of assignment (such as from monitoring), the impact is zero. Yet the random error in one trial can still yield a non-zero mean impact from an RCT. An example is an RCT in Denmark in which 860 elderly people were randomly and unknowingly divided into treatment and control groups prior to an 18-month period without any actual intervention (Vass, 2010). A statistically significant (prob. = 0.003) difference in mortality rates emerged at the end of the period.

Martin Ravallion
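Ravallion’s point is easy to illustrate with a minimal simulation (the numbers below are invented, not taken from the Danish study): even with no treatment at all, random assignment alone produces a ‘statistically significant’ difference in roughly five per cent of trials at the conventional 5% level.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Invented null 'trial': 860 people split at random into two groups,
# no intervention whatsoever, outcome an arbitrary binary event.
n_people, p_event, n_trials = 860, 0.2, 2000
significant = 0
for _ in range(n_trials):
    outcome = (rng.random(n_people) < p_event).astype(float)
    treated = rng.permutation(n_people) < n_people // 2
    _, p_value = stats.ttest_ind(outcome[treated], outcome[~treated])
    if p_value < 0.05:
        significant += 1

print(f"Share of null trials with a 'significant' effect: {significant / n_trials:.1%}")
# Roughly 5% -- pure sampling error dressed up as impact.
```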

The role of RCTs in development

15 Jul, 2021 at 14:25 | Posted in Statistics & Econometrics | Leave a comment


Econometrics — science based on unwarranted assumptions

15 Jul, 2021 at 11:52 | Posted in Statistics & Econometrics | 1 Comment

There is first of all the central question of methodology — the logic of applying the method of multiple correlation to unanalysed economic material, which we know to be non-homogeneous through time. If we are dealing with the action of numerically measurable, independent forces, adequately analysed so that we were dealing with independent atomic factors and between them completely comprehensive, acting with fluctuating relative strength on material constant and homogeneous through time, we might be able to use the method of multiple correlation with some confidence for disentangling the laws of their action … In fact we know that every one of these conditions is far from being satisfied by the economic material under investigation.

Letter from John Maynard Keynes to Royall Tyler (1938)

Mainstream economists often hold the view that criticisms of econometrics are the conclusions of sadly misinformed and misguided people who dislike and do not understand much of it. This is a gross misapprehension. To be careful and cautious is not equivalent to dislike.


The ordinary deductivist ‘textbook approach’ to econometrics views the modelling process as foremost an estimation problem, since one (at least implicitly) assumes that the model provided by economic theory is a well-specified and ‘true’ model. The more empiricist, general-to-specific methodology (often identified as the ‘LSE approach’), on the other hand, views models as theoretically and empirically adequate representations (approximations) of a data generating process (DGP). Diagnostic tests (mostly some variant of the F-test) are used to ensure that the models are ‘true’ – or at least ‘congruent’ – representations of the DGP. The modelling process is here seen more as a specification problem, where poor diagnostic results may indicate a possible misspecification requiring re-specification of the model. The standard objective is to identify models that are structurally stable and valid across a large time-space horizon, and considerable effort is put into testing the extent to which the models are stable and generalizable over space and time. The DGP is not seen as something we already know, but rather as something we discover in the process of modelling it.
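For readers who want to see what such a diagnostic test looks like in practice, here is a minimal sketch (with invented data and variable names) of the kind of nested-model F-test used when simplifying from a general to a specific model:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500

# Invented data: y truly depends on x1 alone.
x1, x2, x3 = rng.normal(size=(3, n))
y = 2.0 * x1 + rng.normal(size=n)

# The general model includes all candidate regressors ...
general = sm.OLS(y, sm.add_constant(np.column_stack([x1, x2, x3]))).fit()
# ... the specific model imposes the zero restrictions on x2 and x3.
specific = sm.OLS(y, sm.add_constant(x1)).fit()

# F-test of the specific model against the general one.
f_stat, p_value, df_diff = general.compare_f_test(specific)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
# A large p-value means the data do not reject the simplification --
# which, as argued above, is evidence of 'congruence', not of truth.
```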

Although yours truly has some sympathy for this approach in general, there are still some unsolved ‘problematics’ with its epistemological and ontological presuppositions. There is, e.g., an implicit assumption that the DGP fundamentally has an invariant property and that models that are structurally unstable just have not been able to get hold of that invariance. But one cannot just presuppose or take for granted that kind of invariance. It has to be argued and justified. Grounds have to be given for viewing reality as satisfying conditions of model-closure. It is as if the lack of closure that shows up in the form of structurally unstable models somehow could be solved by searching for more autonomous and invariable ‘atomic uniformity.’ But whether reality is ‘congruent’ with this analytical prerequisite has to be argued for, not simply taken for granted.

A great many models are compatible with what we know in economics — that is to say, do not violate any matters on which economists are agreed. Attractive as this view is, it fails to draw a necessary distinction between what is assumed and what is merely proposed as hypothesis. This distinction is forced upon us by an obvious but neglected fact of statistical theory: the matters ‘assumed’ are put wholly beyond test, and the entire edifice of conclusions (e.g., about identifiability, optimum properties of the estimates, their sampling distributions, etc.) depends absolutely on the validity of these assumptions. The great merit of modern statistical inference is that it makes exact and efficient use of what we know about reality to forge new tools of discovery, but it teaches us painfully little about the efficacy of these tools when their basis of assumptions is not satisfied. 

Millard Hastay

Even granted that closures come in degrees, we should not compromise on ontology. Some methods simply introduce improper closures, closures that make the disjuncture between models and real-world target systems inappropriately large. ‘Garbage in, garbage out.’

Underlying the search for these immutable ‘fundamentals’ is the implicit view of the world as consisting of entities with their own separate and invariable effects. These entities are thought of as being able to be treated as separate and addible causes, thereby making it possible to infer complex interaction from a knowledge of individual constituents with limited independent variety. But, again, whether this is a justified analytical procedure cannot be answered without confronting it with the nature of the objects the models are supposed to describe, explain or predict. Keynes thought it generally inappropriate to apply the ‘atomic hypothesis’ to such an open and ‘organic entity’ as the real world. As far as I can see these are still appropriate strictures all econometric approaches have to face. Grounds for believing otherwise have to be provided by the econometricians.

Trygve Haavelmo, the father of modern probabilistic econometrics, wrote (in ‘Statistical testing of business-cycle theories’, The Review of Economics and Statistics, 1943) that he and other econometricians could not build a complete bridge between our models and reality by logical operations alone, but finally had to make “a non-logical jump” [1943:15]. Part of that jump was that econometricians “like to believe … that the various a priori possible sequences would somehow cluster around some typical time shapes, which if we knew them, could be used for prediction” [1943:16]. But since we do not know the true distribution, one has to look for the mechanisms (processes) that “might rule the data” and that hopefully persist so that predictions may be made. Of possible hypotheses on different time sequences (“samples” in Haavelmo’s somewhat idiosyncratic vocabulary) most had to be ruled out a priori “by economic theory”, although “one shall always remain in doubt as to the possibility of some … outside hypothesis being the true one” [1943:18].


Causal inference in social sciences (student stuff)

8 Jul, 2021 at 11:32 | Posted in Statistics & Econometrics | Leave a comment


The main ideas behind bootstrapping (student stuff)

6 Jul, 2021 at 10:50 | Posted in Statistics & Econometrics | Leave a comment


Propensity score matching vs. regression (student stuff)

5 Jul, 2021 at 11:37 | Posted in Statistics & Econometrics | Leave a comment


Questionable research practices

2 Jul, 2021 at 17:14 | Posted in Statistics & Econometrics | Leave a comment


Bradford Hill — how to find causality in correlations

30 Jun, 2021 at 09:53 | Posted in Statistics & Econometrics | Leave a comment


How to achieve ‘external validity’

29 Jun, 2021 at 11:54 | Posted in Statistics & Econometrics | 2 Comments

There is a lot of discussion in the literature on beginning with experiments and then going on to check “external validity”. But to imagine that there is a scientific way to achieve external validity is, for the most part, a delusion … RCTs do not in themselves tell us anything about the traits of populations in other places and at other times. Hence, no matter how large the population from which we draw our random samples is, because it is impossible to draw samples from tomorrow’s population and all policies we craft today are for use tomorrow, there is no “scientific” way to go from RCTs to policy. En route from evidence and experience to policy, we have to rely on intuition, common sense and judgement. It is evidence coupled with intuition and judgement that gives us knowledge. To deny any role to intuition is to fall into total nihilism.

Kaushik Basu

Randomizations creating illusions of knowledge

28 Jun, 2021 at 21:46 | Posted in Statistics & Econometrics | 1 Comment

The advantage of randomised experiments in describing populations creates an illusion of knowledge … This happens because of the propensity of scientific journals to value so-called causal findings and not to value findings where no (so-called) causality is found. In brief, it is arguable that we know less than we think we do.

To see this, suppose—as is indeed the case in reality—that thousands of researchers in thousands of places are conducting experiments to reveal some causal link. Let us in particular suppose that there are numerous researchers in numerous villages carrying out randomised experiments to see whether M causes P. Words being more transparent than symbols, let us assume they want to see whether medicine (M) improves the school participation (P) of school-going children. In each village, 10 randomly selected children are administered M and the school participation rates of those children and also children who were not given M are monitored. Suppose children without M go to school half the time and are out of school the other half. The question is: is there a systematic difference of behaviour among children given M?

I shall now deliberately construct an underlying model whereby there will be no causal link between M and P. Suppose Nature does the following. For each child, whether or not the child has had M, Nature tosses a coin. If it comes out tails the child does not go to school and if it comes out heads, the child goes to school regularly.

Consider a village and an RCT researcher in the village. What is the probability, p, that she will find that all 10 children given M will go to school regularly? The answer is clearly

p = (1/2)^10

because we have to get heads for each of the 10 tosses for the 10 children.

Now consider n researchers in n villages. What is the probability that in none of these villages will a researcher find that all the 10 children given M go to school regularly? Clearly, the answer is (1–p)^n.

Hence, if w(n) is used to denote the probability that among the n villages where the experiment is done, there is at least one village where all 10 tosses come out heads, we have:

w(n) = 1 – (1 – p)^n.

It is easy to check the following are true:

w(100) = 0.0931,
w(1000) = 0.6236,
w(10 000) = 0.9999.

Therein lies the catch … If there are 1000 experimenters in 1000 villages doing this, the probability that there will exist one village where it will be found that all 10 children administered M will participate regularly in school is 0.6236. That is, it is more likely that such a village will exist than not. If the experiment is done in 10 000 villages, the probability of there being one village where M always leads to P is a virtual certainty (0.9999).

This is, of course, a specific example. But that this problem will invariably arise follows from the fact that

lim(n → ∞) w(n) = lim(n → ∞) [1 – (1 – p)^n] = 1.

Given that those who find such a compelling link between M and P will be able to publish their paper and others will not, we will get the impression that a true causal link has been found, though in this case (since we know the underlying process) we know that that is not the case. With 10 000 experiments, it is close to certainty that someone will find a firm link between M and P. Hence, the finding of such a link shows nothing but the laws of probability being intact. Yet, thanks to the propensity of journals to publish the presence rather than the absence of “causal” links, we get an illusion of knowledge and discovery where there are none.

Kaushik Basu
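Basu’s arithmetic is easy to check; a minimal sketch of the calculation (nothing here beyond the formulas in the quote):

```python
# One village: probability that all 10 coin flips come up heads.
p = 0.5 ** 10

def w(n):
    # Probability that at least one of n villages sees all 10
    # treated children attend school, under the pure-chance model.
    return 1 - (1 - p) ** n

for n in (100, 1_000, 10_000):
    print(f"w({n}) = {w(n):.4f}")
# w(100) = 0.0931, w(1000) = 0.6236, w(10000) = 0.9999
```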

Why the idea of causation cannot be a purely statistical one

23 Jun, 2021 at 15:32 | Posted in Statistics & Econometrics | 6 Comments

If contributions made by statisticians to the understanding of causation are to be taken over with advantage in any specific field of inquiry, then what is crucial is that the right relationship should exist between statistical and subject-matter concerns …

introduction-to-statistical-inferenceWhere the ultimate aim of research is not prediction per se but rather causal explanation, an idea of causation that is expressed in terms of predictive power — as, for example, ‘Granger’ causation — is likely to be found wanting. Causal explanations cannot be arrived at through statistical methodology alone: a subject-matter input is also required in the form of background knowledge and, crucially, theory …

Likewise, the idea of causation as consequential manipulation is apt to research that can be undertaken primarily through experimental methods and, especially to ‘practical science’ where the central concern is indeed with ‘the consequences of performing particular acts’. The development of this idea in the context of medical and agricultural research is as understandable as the development of that of causation as robust dependence within applied econometrics. However, the extension of the manipulative approach into sociology would not appear promising, other than in rather special circumstances … The more fundamental difficulty is that, under the — highly anthropocentric — principle of ‘no causation without manipulation’, the recognition that can be given to the action of individuals as having causal force is in fact peculiarly limited.

John H. Goldthorpe

Causality in social sciences — and economics — can never solely be a question of statistical inference. Statistics and data often serve to suggest causal accounts, but causality entails more than predictability, and really explaining social phenomena in depth requires theory. Analysis of variation — the foundation of all econometrics — can never in itself reveal how these variations are brought about. Only when we are able to tie actions, processes or structures to the statistical relations detected can we say that we are getting at relevant explanations of causation.

Most facts have many different possible explanations, but we want to find the best of all contrastive explanations (since all real explanation takes place relative to a set of alternatives). So which is the best explanation? Many scientists, influenced by statistical reasoning, think that the likeliest explanation is the best explanation. But the likelihood of x is not in itself a strong argument for thinking it explains y. I would rather argue that what makes one explanation better than another are things like aiming for and finding powerful, deep causal features and mechanisms that we have warranted and justified reasons to believe in. Statistical reasoning — especially the variety based on a Bayesian epistemology — generally has no room for these kinds of explanatory considerations. The only thing that matters is the probabilistic relation between evidence and hypothesis. That is also one of the main reasons I find abduction — inference to the best explanation — a better description and account of what constitutes actual scientific reasoning and inference.

In the social sciences … regression is used to discover relationships or to disentangle cause and effect. However, investigators have only vague ideas as to the relevant variables and their causal order; functional forms are chosen on the basis of convenience or familiarity; serious problems of measurement are often encountered.

Regression may offer useful ways of summarizing the data and making predictions. Investigators may be able to use summaries and predictions to draw substantive conclusions. However, I see no cases in which regression equations, let alone the more complex methods, have succeeded as engines for discovering causal relationships.

David Freedman

Some statisticians and data scientists think that algorithmic formalisms somehow give them access to causality. That is, however, simply not true. Assuming ‘convenient’ things like faithfulness or stability is not to give proofs. It’s to assume what has to be proven. Deductive-axiomatic methods used in statistics do not produce evidence for causal inferences. The real causality we are searching for is the one existing in the real world around us. If there is no warranted connection between axiomatically derived theorems and the real world, then we haven’t really obtained the causation we are looking for.
