Econometric toolbox developers get this year’s ‘Nobel prize’ in economics

11 Oct, 2021 at 17:46 | Posted in Statistics & Econometrics | 2 Comments

Many of the big questions in the social sciences deal with cause and effect. How does immigration affect pay and employment levels? How does a longer education affect someone’s future income? …

This year’s Laureates have shown that it is possible to answer these and similar questions using natural experiments. The key is to use situations in which chance events or policy changes result in groups of people being treated differently, in a way that resembles clinical trials in medicine.

Using natural experiments, David Card has analysed the labour market effects of minimum wages, immigration and education …

Data from a natural experiment are difficult to interpret, however … In the mid-1990s, Joshua Angrist and Guido Imbens solved this methodological problem, demonstrating how precise conclusions about cause and effect can be drawn from natural experiments.

Press release: The Prize in Economic Sciences 2021

For economists interested in research methodology in general and natural experiments in particular, these three economists are well-known. A central part of their work is based on the idea that random or as-if random assignment in natural experiments obviates the need for controlling potential confounders, and hence this kind of ‘simple and transparent’ design-based research method is preferable to more traditional multivariate regression analysis, where the controlling only comes in ex post via statistical modelling.

But — there is always a but …

The point of making a randomized experiment is often said to be that it ‘ensures’ that any correlation between a supposed cause and effect indicates a causal relation. This is believed to hold since randomization (allegedly) ensures that a supposed causal variable does not correlate with other variables that may influence the effect.

The problem with that simplistic view of randomization is that the claims made are exaggerated and sometimes even false:

• Even if you manage to make the assignment to treatment and control groups ideally random, the sample selection certainly is — except in extremely rare cases — not random. Even if we make a proper randomized assignment, if we apply the results to a biased sample, there is always the risk that the experimental findings will not apply. What works ‘there’ does not work ‘here.’ Randomization hence does not ‘guarantee’ or ‘ensure’ making the right causal claim. Although randomization may help us rule out certain possible causal claims, randomization per se does not guarantee anything!
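A deliberately stylized simulation makes the external-validity point concrete (all numbers here, the age cut-off and the effect sizes, are invented purely for illustration): a perfectly randomized trial run on a biased sample recovers the sample’s average effect, not the population’s.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical population in which the causal effect of the treatment
# varies with age (all numbers invented for illustration).
pop_n = 100_000
age = rng.uniform(20, 70, pop_n)
true_effect = 2.0 * (age - 45.0) / 10.0   # population-average effect is about 0

# Biased sample selection: only the young end up in the trial.
in_sample = age < 35
n_s = in_sample.sum()

# Ideal randomized assignment *within* the biased sample.
treated = rng.random(n_s) < 0.5
noise = rng.normal(0.0, 1.0, n_s)
outcome = np.where(treated, true_effect[in_sample], 0.0) + noise

sample_ate = outcome[treated].mean() - outcome[~treated].mean()
print(f"population ATE:   {true_effect.mean():+.2f}")  # about 0
print(f"trial-sample ATE: {sample_ate:+.2f}")          # about -3.5
```

The randomization inside the trial is flawless, yet the estimate says nothing reliable about the population the sample was (non-randomly) drawn from.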

• Even if both sampling and assignment are made in an ideal random way, performing standard randomized experiments only gives you averages. The problem here is that although we may get an estimate of the ‘true’ average causal effect, this may ‘mask’ important heterogeneous effects of a causal nature. Although we get the right answer that the average causal effect is 0, those who are ‘treated’ may have causal effects equal to -100 and those ‘not treated’ may have causal effects equal to 100. Contemplating being treated or not, most people would probably be interested in knowing about this underlying heterogeneity and would not consider the average effect particularly enlightening.
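A minimal sketch, using the post’s own illustrative ±100 effects assigned to two hypothetical halves of a population, shows how a near-zero average masks the heterogeneity:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical individual-level causal effects: half the population is
# helped (+100), the other half harmed (-100). Purely illustrative numbers.
helped = rng.random(n) < 0.5
effect = np.where(helped, 100.0, -100.0)

print(f"average causal effect:   {effect.mean():+.1f}")          # close to 0
print(f"effect among the helped: {effect[helped].mean():+.1f}")  # +100.0
print(f"effect among the harmed: {effect[~helped].mean():+.1f}") # -100.0
```

The average is ‘right’, and also almost perfectly uninformative for any individual deciding whether to be treated.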

• There is almost always a trade-off between bias and precision. In real-world settings, a little bias is often a price worth paying for greater precision. And — most importantly — in case we have a population with sizeable heterogeneity, the average treatment effect of the sample may differ substantially from the average treatment effect in the population. If so, the value of any extrapolating inferences made from trial samples to other populations is highly questionable.

• Since most real-world experiments and trials build on performing a single randomization, knowing what would happen if you kept on randomizing forever does not help you to ‘ensure’ or ‘guarantee’ that you avoid false causal conclusions in the one particular randomized experiment you actually do perform. It is indeed difficult to see why thinking about what you know you will never do would make you happy about what you actually do.
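The gap between the one randomization we perform and the hypothetical ‘long run’ can be sketched as follows (a hypothetical trial of 50 subjects and an invented unobserved trait): repeated randomization balances the trait on average, but any single randomization can still leave it imbalanced between the arms.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50                                   # a small hypothetical trial

# An unobserved trait that also influences the outcome (invented).
confounder = rng.normal(0.0, 1.0, n)

def imbalance():
    """Difference in the confounder's mean between the two arms
    under one fresh randomization into equal-sized groups."""
    treated = rng.permutation(n) < n // 2
    return confounder[treated].mean() - confounder[~treated].mean()

# The one randomization we actually get to perform:
print(f"confounder imbalance in a single trial: {imbalance():+.2f}")

# What 'randomizing forever' would deliver, on average:
avg = np.mean([imbalance() for _ in range(10_000)])
print(f"average imbalance over 10,000 hypothetical trials: {avg:+.3f}")
```

The long-run average imbalance is essentially zero, but that is a property of the never-performed repetitions, not of the single trial in hand, where the imbalance is whatever chance delivered.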

• And then there is also the problem that ‘Nature’ may not always supply us with the random experiments we are most interested in. If we are interested in X, why should we study Y only because design dictates that? Method should never be prioritized over substance!

Randomization is not a panacea. It is not the best method for all questions and circumstances. Proponents of randomization make claims about its ability to deliver causal knowledge that are simply wrong. There are good reasons to be sceptical of this nowadays popular — and ill-informed — view that randomization is the only valid and the best method on the market. It is not.

Trygve Haavelmo — the father of modern probabilistic econometrics — once wrote that he and other econometricians could not build a complete bridge between our models and reality by logical operations alone, but finally had to make “a non-logical jump.”  To Haavelmo and his modern followers, econometrics is not really in the truth business. The explanations we can give of economic relations and structures based on econometric models are “not hidden truths to be discovered” but rather our own “artificial inventions”.

Rigour and elegance in the analysis do not make up for the gap between reality and model. A crucial ingredient to any economic theory that wants to use probabilistic models should be a convincing argument for the view that it is harmless to consider economic variables as stochastic variables. In most cases, no such arguments are given.

A rigorous application of econometric methods in economics really presupposes that the phenomena of our real-world economies are ruled by stable causal relations between variables. To warrant this assumption, however, one has to convincingly establish that the targeted acting causes are stable and invariant so that they maintain their parametric status after the bridging. The endemic lack of predictive success of the econometric project indicates that this hope of finding fixed parameters is a hope for which there really is no other ground than hope itself.

Evidence-based theories and policies are highly valued nowadays. Randomization is supposed to control for bias from unknown confounders. The received opinion is that evidence based on randomized experiments therefore is the best.

More and more economists have also lately come to advocate randomization as the principal method for ensuring valid causal inferences.

I would however rather argue that randomization — just as econometrics — promises more than it can deliver, basically because it requires assumptions that in practice are not possible to maintain.

Given the assumptions (such as manipulability, transitivity, separability, additivity, linearity, etc.), econometric methods deliver deductive inferences. The problem, of course, is that we will never completely know when the assumptions are right. And although randomization may contribute to controlling for confounding, it does not guarantee it, since genuine randomness presupposes infinite experimentation and we know all real experimentation is finite.

The prize committee says that econometrics and natural experiments “help answer important questions for society.” Maybe so, but it is far from evident to what extent they do. As a rule, the econometric practitioners of natural experiments entertain far too inflated hopes about their explanatory potential and value.

2 Comments »

  1. Not all bad news though. David Card is by economist standards quite humble and open-minded, and is very much aware of the problems in the economics profession. He has generally been sympathetic to critics from inside and outside the profession. I will try and find the quote when I can, but I remember him saying that one thing he does know is that economists know very little. He is careful, pointing out that it is very difficult to make generalisations about the consequences of large-scale immigration: the consequences vary from time to time and place to place. He is very much in favour of multidisciplinary work and of working with researchers in other social sciences.

