Noahpinion and the empirical ‘revolution’ in economics

20 March, 2016 at 13:32 | Posted in Economics | 2 Comments

But I think that more important than any of these theoretical changes … is the empirical revolution in econ. Ten million cool theories are of little use beyond the “gee whiz” factor if you can’t pick between them. Until recently, econ was fairly bad about agreeing on rigorous ways to test theories against reality, so paradigms came and went like fashions and fads. Now that’s changing. To me, that seems like a much bigger deal than any new theory fad, because it offers us a chance to find enduringly reliable theories that won’t simply disappear when people get bored or political ideologies change.

So the shift to empiricism away from philosophy supersedes all other real and potential shifts in economic theory. Would-be econ revolutionaries absolutely need to get on board with the new empiricism, or else risk being left behind.

Noahpinion

Noah Smith maintains that new imaginative empirical methods — such as natural experiments, field experiments, lab experiments, RCTs — help us to answer questions concerning the validity of economic models.

Yours truly begs to differ. When looked at carefully, there are in fact few real reasons to share the optimism about this so-called ’empirical revolution’ in economics.

Field studies and experiments face the same basic problem as theoretical models — they are built on rather artificial conditions and have difficulties with the ‘trade-off’ between internal and external validity. The more artificial the conditions, the greater the internal validity, but also the less the external validity. The more we rig experiments/field studies/models to avoid ‘confounding factors,’ the less the conditions are reminiscent of the real ‘target system.’ You could of course discuss field studies vs. experiments vs. theoretical models in terms of realism — but the nodal issue is not that; it is basically about how economists using different isolation strategies in different ‘nomological machines’ attempt to learn about causal relationships. I have strong doubts about the generalizability of all three research strategies, because the probability is high that causal mechanisms are different in different contexts, and that lack of homogeneity and invariance doesn’t give us warranted ‘export licenses’ to the ‘real’ societies or economies.

If we see experiments or field studies as theory tests or models that ultimately aspire to say something about the real ‘target system,’ then the problem of external validity is central (and was for a long time also a key reason why behavioural economists had trouble getting their research results published).

Assume that you have examined how the work performance of Chinese workers A is affected by B (‘treatment’). How can we extrapolate/generalize to new samples outside the original population (e.g. to the US)? How do we know that any replication attempt ‘succeeds’? How do we know when these replicated experimental results can be said to justify inferences made in samples from the original population? If, for example, P(A|B) is the conditional density function for the original sample, and we are interested in making an extrapolative prediction of E[P(A|B)], how can we know that the new sample’s density function is identical with the original’s? Unless we can give some really good argument for this being the case, inferences built on P(A|B) do not really say anything about the target system’s P'(A|B).
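
To make this concrete, here is a minimal simulation sketch (all numbers and names are purely illustrative assumptions, not data from any actual study) of how an effect estimated in the original sample can be a poor guide to a target population whose conditional distribution differs:

```python
# Illustrative sketch: an effect estimated where P(A|B) holds tells us
# nothing, by itself, about a target system where P'(A|B) differs.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

def performance(treated, effect):
    # Outcome = baseline + population-specific treatment effect + noise.
    return 1.0 + effect * treated + rng.normal(0.0, 1.0, treated.shape)

treated = rng.integers(0, 2, n)  # random 'treatment' B

# Original population: B raises work performance by 0.5 on average.
y_original = performance(treated, effect=0.5)

# Target population: the causal mechanism differs; the effect is -0.2.
y_target = performance(treated, effect=-0.2)

def estimated_effect(y, t):
    return y[t == 1].mean() - y[t == 0].mean()

print(f"effect in the original sample:   {estimated_effect(y_original, treated):+.2f}")
print(f"effect in the target population: {estimated_effect(y_target, treated):+.2f}")
# Nothing in the original data warns us that the export fails; the
# 'export license' rests entirely on the untestable assumption P = P'.
```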

As I see it, this is the heart of the matter. External validity and generalization are founded on the assumption that we can make inferences based on P(A|B) that are exportable to other populations for which P'(A|B) applies. Sure, if one can convincingly show that P and P' are similar enough, the problems are perhaps surmountable. But arbitrarily introducing functional specification restrictions such as invariance and homogeneity is, at least for an epistemological realist, far from satisfactory. And often it is, unfortunately, exactly this that I see when I examine mainstream neoclassical economists’ models/experiments/field studies.

By this I do not mean to say that empirical methods per se are so problematic that they can never be used. On the contrary, I am basically — though not without reservations — in favour of the increased use of experiments and field studies within economics, not least as an alternative to completely barren, ‘bridge-less’ axiomatic-deductive theory models. My criticism is more about aspiration levels and what we believe we can achieve with our mediational epistemological tools and methods in the social sciences.

Many ‘experimentalists’ claim that it is easy to replicate experiments under different conditions and therefore, a fortiori, easy to test the robustness of experimental results. But is it really that easy? If, in the example given above, we run a test and find that our predictions were not correct, what can we conclude? That B ‘works’ in China but not in the US? That B ‘works’ in a backward agrarian society, but not in a post-modern service society? That B ‘worked’ in the field study conducted in 2008 but not in 2016? Population selection is almost never simple. Had the problem of external validity only been one of inference from sample to population, this would be no critical problem. But the really interesting inferences are those we try to make from specific labs/experiments/fields to the specific real-world situations/institutions/structures that we are interested in understanding or (causally) explaining. And then the population problem is more difficult to tackle.

The increasing use of natural and quasi-natural experiments in economics during the last couple of decades has led not only Noah Smith but several other prominent economists to triumphantly declare it a major step on a path toward empirics, where instead of being a deductive philosophy, economics is now increasingly becoming an inductive science.

In randomized trials the researchers try to find out the causal effects that different variables of interest may have by changing circumstances randomly — a procedure somewhat (‘on average’) equivalent to the usual ceteris paribus assumption.

Besides the fact that ‘on average’ is not always ‘good enough,’ it amounts to nothing but hand waving to simpliciter assume, without argumentation, that it is tenable to treat social agents and relations as homogeneous and interchangeable entities.

Randomization is used to basically allow the econometrician to treat the population as consisting of interchangeable and homogeneous groups (‘treatment’ and ‘control’). The regression models one arrives at by using randomized trials tell us the average effect that variations in variable X have on the outcome variable Y, without having to explicitly control for the effects of other explanatory variables R, S, T, etc. Everything is assumed to be essentially equal except the values taken by variable X.
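
A small sketch may show what this amounts to (again with assumed, illustrative numbers): with randomized assignment, a simple difference in means recovers the average effect of X on Y without controlling for a confounder R, but the balance in R holds only in expectation, never exactly in any finite sample:

```python
# Illustrative sketch of randomization 'controlling' for a confounder
# on average: R is balanced across groups only in expectation.
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

ability = rng.normal(0.0, 1.0, n)   # confounder R (unobserved)
x = rng.integers(0, 2, n)           # randomized treatment X

# Y depends on both the treatment and the confounder.
y = 2.0 * x + 1.5 * ability + rng.normal(0.0, 1.0, n)

ate_hat = y[x == 1].mean() - y[x == 0].mean()
print(f"estimated average effect: {ate_hat:.2f} (true value: 2.00)")

# Any single finite randomization leaves residual imbalance in R,
# and the estimate is correspondingly off.
imbalance = ability[x == 1].mean() - ability[x == 0].mean()
print(f"residual imbalance in the confounder: {imbalance:+.3f}")
```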

Limiting model assumptions in economic science always have to be closely examined. If the mechanisms or causes that we isolate and handle in our models are to be stable, in the sense that they do not change when we ‘export’ them to our ‘target systems,’ we have to be able to show that they do not hold only under ceteris paribus conditions, since mechanisms that hold only under such conditions are, a fortiori, of limited value for our understanding, explanation or prediction of real economic systems.

Real-world social systems are not governed by stable causal mechanisms or capacities. The kinds of ‘laws’ and relations that econometrics has established are laws and relations about entities in models, which presuppose that causal mechanisms are atomistic and additive. When causal mechanisms operate in real-world social target systems, they do so only in ever-changing and unstable combinations, where the whole is more than a mechanical sum of parts. If economic regularities obtain, they do so (as a rule) only because we engineered them for that purpose. Outside man-made ‘nomological machines’ they are rare, or even non-existent.

I also think that most ‘randomistas’ seriously underestimate the heterogeneity problem. It does not just turn up as an external validity problem when trying to ‘export’ regression results to different times or different target populations. It is also often an internal problem in the millions of regression estimates that economists produce every year.

Just like econometrics, randomization promises more than it can deliver, basically because it requires assumptions that in practice are not possible to maintain.

Like econometrics, randomization is basically a deductive method. Given the assumptions (such as manipulability, transitivity, separability, additivity, linearity, etc.), these methods deliver deductive inferences. The problem, of course, is that we will never completely know when the assumptions are right. And although randomization may contribute to controlling for confounding, it does not guarantee it, since genuine randomness presupposes infinite experimentation and we know all real experimentation is finite. And even if randomization may help to establish average causal effects, it says nothing of individual effects unless homogeneity is added to the list of assumptions. Real target systems are seldom epistemically isomorphic to our axiomatic-deductive models/systems, and even if they were, we would still have to argue for the external validity of the conclusions reached from within these epistemically convenient models/systems. Causal evidence generated by randomization procedures may be valid in ‘closed’ models, but what we usually are interested in is causal evidence about the real target system we happen to live in.
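
To see how much weight the homogeneity assumption carries, consider one last sketch (assumed numbers, not from any real trial) in which the average causal effect is essentially zero while every individual effect is large:

```python
# Illustrative sketch: a zero average effect can mask large, opposing
# individual effects; 'on-average knowledge' identifies neither.
import numpy as np

rng = np.random.default_rng(7)
n = 10_000

subgroup = rng.integers(0, 2, n)    # unobserved heterogeneity
individual_effect = np.where(subgroup == 1, 1.0, -1.0)

treated = rng.integers(0, 2, n)     # randomized assignment
y = individual_effect * treated + rng.normal(0.0, 0.5, n)

ate_hat = y[treated == 1].mean() - y[treated == 0].mean()
print(f"estimated average effect: {ate_hat:+.3f}")   # close to zero
print(f"share helped by treatment: {subgroup.mean():.2f}")
# Half the population gains +1, half loses -1; the average is 'correct'
# and still says nothing about whom the treatment helps or harms.
```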

When does a conclusion established in population X hold for target population Y? Only under very restrictive conditions!

‘Ideally controlled experiments’ tell us with certainty what causes what effects — but only given the right ‘closures.’ Making appropriate extrapolations from (ideal, accidental, natural or quasi) experiments to different settings, populations or target systems is not easy. ‘It works there’ is no evidence for ‘it will work here.’ Causes deduced in an experimental setting still have to be shown to come with an export-warrant to the target population/system. The causal background assumptions made have to be justified, and without licenses to export, the value of ‘rigorous’ and ‘precise’ methods — and ‘on-average knowledge’ — is despairingly small.

So, no, I find it hard to share Noah Smith’s and others’ enthusiasm and optimism about the value of (quasi)natural experiments and all the statistical-econometric machinery that comes with them. Guess I’m still waiting for the export-warrant …

I would, contrary to Noah Smith’s optimism, argue that although different ’empirical’ approaches have been — more or less — integrated into mainstream economics, there is still a long way to go before economics has become a truly empirical science.

2 Comments »


  1. “As I see it, this is the heart of the matter. External validity and generalization are founded on the assumption that we can make inferences based on P(A|B) that are exportable to other populations for which P'(A|B) applies.”

    As I see it, this is the heart of the matter. External validity and generalization are not founded on assumptions; they are propositions to be tested. The empirical path to credibility is to identify those assumptions, and then test them via their implications as extensively, thoroughly, and rigorously as we can. Researchers in the natural sciences are thrilled when they obtain a new instrument or identify a new method of analysis that allows them to test their assumptions in heretofore inaccessible regimes. Biologists didn’t throw out their empirical method when thermophilic bacteria were discovered in the hot springs of Yellowstone National Park, even though DNA degrades at temperatures above 60 degrees C, which should make it impossible for life to exist at the boiling point of water. They dove into the details and discovered fascinating ways that the molecules are stabilized under those conditions. Physicists are granted billions of dollars of funding to build dozens of giant instruments that may discover “new physics” that invalidates the assumptions of their Standard Model of quantum field theory.

    Many economists, it seems, would prefer to follow Chico Marx in Duck Soup in saying “Who you gonna believe, me or your own eyes?”

  2. Vain hopes in the ruins of economics
    Comment on Lars Syll on ‘Noahpinion and the empirical ‘revolution’ in economics’

    Economics is a failed science and the ultimate cause is the proven multi-generation scientific incompetence of economists. Since Adam Smith economists have not grasped what science is all about — despite the fact that it is unambiguously defined: “Research is in fact a continuous discussion of the consistency of theories: formal consistency insofar as the discussion relates to the logical cohesion of what is asserted in joint theories; material consistency insofar as the agreement of observations with theories is concerned.” (Klant, 1994, p. 31)

    It is always BOTH, logical AND empirical consistency and NOT either/or. This is the critical hazard: instead of keeping the balance on the high methodological tightrope the incompetent researcher tumbles down either on the side of vacuous deductivism or on the side of blind empiricism. What the history of economic thought clearly shows is a pointless flip-flop between fact-free model bricolage and theory-free application of statistical tools or, worse, commonsensical stylized-facts storytelling.

    So it comes as no surprise that, after the proven failure of maximization-and-equilibrium economics, it is again the turn of the ‘empirical revolution’. Needless to emphasize that this is just another instance of scientific incompetence, because methodologically the theoretical revolution must precede any empirical revolution: “The moral of the story is simply this: it takes a new theory, and not just the destructive exposure of assumptions or the collection of new facts, to beat an old theory.” (Blaug, 1998, p. 703)

    Walrasianism, Keynesianism, Marxianism, and Austrianism are logically inconsistent or empirically inconsistent or both (2015).

    Always when economics is in open crisis four reactions are to be observed: (i) self-delusional denial, (ii) back pedaling and relativization, e.g. ‘economics is not a Science with a capital S’ (Solow), (iii) admission of the most noticeable flaws with the reassurance that our best brains are already working on them, (iv) clueless actionism and innovation showbiz.

    The two main pseudo-innovations consist of borrowing from either evolution theory or from the latest vintages of physics (complexity, networks, chaos, non-linearity, thermodynamics, disequilibrium, information, emergence, etc.). The current confused state of these misdirected approaches may be gleaned from The Journal of Evolutionary Economics and from the EconoPhysics blog.

    Mindless copying/borrowing is characteristic of what Feynman famously described as cargo cult science. Neither evolution theory nor EconoPhysics is the way forward. Economics has to redefine itself in a genuine paradigm shift. In very general terms, the methodological revolution consists in the switch from behavior-centered bottom-up, i.e. subjective microfoundation, to structure-centered top-down, i.e. objective macrofoundation (2014).

    First of all, the orthodox set of axioms (Weintraub, 1985, p. 109) has to be fully replaced.* Nothing short of a theoretical revolution, a.k.a. paradigm shift, will do. Needless to stress that the superior paradigm has to be logically AND empirically consistent. After more than 200 years of failure and Noah Smith’s latest methodological wind egg, economics needs the true theory — fast.

    Egmont Kakarot-Handtke

    References
    Blaug, M. (1998). Economic Theory in Retrospect. 5th edition. Cambridge: Cambridge University Press.
    Kakarot-Handtke, E. (2014). Objective Principles of Economics. SSRN Working Paper Series, 2418851: 1–19. URL http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2418851.
    Kakarot-Handtke, E. (2015). Major Defects of the Market Economy. SSRN Working Paper Series, 2624350: 1–40. URL http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2624350.
    Klant, J. J. (1994). The Nature of Economic Thought. Aldershot, Brookfield, VT: Edward Elgar.
    Weintraub, E. R. (1985). General Equilibrium Analysis. Cambridge: Cambridge University Press.

    * For details see the post ‘From Orthodoxy, to Heterodoxy, to Sysdoxy’
    http://axecorg.blogspot.de/2016/03/from-orthodoxy-to-heterodoxy-to-sysdoxy.html

