Leontief’s devastating critique of econom(etr)ics

18 Jan, 2021 at 17:57 | Posted in Economics | 2 Comments

Much of current academic teaching and research has been criticized for its lack of relevance, that is, of immediate practical impact … I submit that the consistently indifferent performance in practical applications is in fact a symptom of a fundamental imbalance in the present state of our discipline. The weak and all too slowly growing empirical foundation clearly cannot support the proliferating superstructure of pure, or should I say, speculative economic theory …

Uncritical enthusiasm for mathematical formulation tends often to conceal the ephemeral substantive content of the argument behind the formidable front of algebraic signs … In the presentation of a new model, attention nowadays is usually centered on a step-by-step derivation of its formal properties. But if the author — or at least the referee who recommended the manuscript for publication — is technically competent, such mathematical manipulations, however long and intricate, can even without further checking be accepted as correct. Nevertheless, they are usually spelled out at great length. By the time it comes to interpretation of the substantive conclusions, the assumptions on which the model has been based are easily forgotten. But it is precisely the empirical validity of these assumptions on which the usefulness of the entire exercise depends.

What is really needed, in most cases, is a very difficult and seldom very neat assessment and verification of these assumptions in terms of observed facts. Here mathematics cannot help and because of this, the interest and enthusiasm of the model builder suddenly begins to flag: “If you do not like my set of assumptions, give me another and I will gladly make you another model; have your pick.” …

But shouldn’t this harsh judgment be suspended in the face of the impressive volume of econometric work? The answer is decidedly no. This work can be in general characterized as an attempt to compensate for the glaring weakness of the data base available to us by the widest possible use of more and more sophisticated statistical techniques. Alongside the mounting pile of elaborate theoretical models we see a fast-growing stock of equally intricate statistical tools. These are intended to stretch to the limit the meager supply of facts … Like the economic models they are supposed to implement, the validity of these statistical tools depends itself on the acceptance of certain convenient assumptions pertaining to stochastic properties of the phenomena which the particular models are intended to explain; assumptions that can be seldom verified.

Wassily Leontief

A salient feature of modern mainstream economics is the idea of science advancing through the use of “successive approximations” whereby ‘small-world’ models become more and more relevant and applicable to the ‘large world’ in which we live. Is this really a feasible methodology? Yours truly thinks not.

Most models in science are representations of something else. Models “stand for” or “depict” specific parts of a “target system” (usually the real world). And all empirical sciences use simplifying or unrealistic assumptions in their modelling activities. That is not the issue — as long as the assumptions made are not unrealistic in the wrong way or for the wrong reasons.

Theories are difficult to confront directly with reality. Economists therefore build models of their theories. Those models are representations that can be directly examined and manipulated in order to say something, indirectly, about the target systems.

But models do not only face theory. They also have to look to the world. Being able to model a “credible world,” a world that somehow could be considered real or similar to the real world, is not the same as investigating the real world. Even though all theories are false, since they simplify, they may still serve our pursuit of truth. But then they cannot be unrealistic or false in just any way: the falsehood or unrealisticness has to be qualified.

If we cannot show that the mechanisms or causes we isolate and handle in our models are stable, in the sense that they do not change from one situation to another when we export them from our models to our target systems, then they only hold under ceteris paribus conditions and are a fortiori of limited value for our understanding, explanation and prediction of our real-world target system. No matter how many convoluted refinements of concepts are made in the model, if the “successive approximations” do not result in models similar to reality in the appropriate respects (such as structure, isomorphism, etc.), the surrogate system becomes a substitute system that does not bridge to the world but rather misses its target.

So I have to conclude that constructing “minimal economic models” — or using microfounded macroeconomic models as “stylized facts” or “stylized pictures” somehow “successively approximating” macroeconomic reality — is a rather unimpressive attempt at legitimizing the use of ‘small-world’ models and fictitious idealizations, for reasons that have more to do with mathematical tractability than with a genuine interest in understanding and explaining features of real economies.

As noted by Leontief, there is no reason to suspend this harsh judgment when facing econometrics. When it comes to econometric modelling one could, of course, choose to treat observational or experimental data as random samples from real populations. I have no problem with that (although it has to be noted that most ‘natural experiments’ are not based on random sampling from some underlying population, which, of course, means that the effect estimators, strictly speaking, are only unbiased for the specific samples studied). But econometrics does not content itself with that kind of population. Instead, it creates imaginary populations of ‘parallel universes’ and assumes that our data are random samples from such ‘infinite superpopulations.’ This is actually nothing but hand-waving! And it is inadequate for real science. As David Freedman writes:

With this approach, the investigator does not explicitly define a population that could in principle be studied, with unlimited resources of time and money. The investigator merely assumes that such a population exists in some ill-defined sense. And there is a further assumption, that the data set being analyzed can be treated as if it were based on a random sample from the assumed population. These are convenient fictions … Nevertheless, reliance on imaginary populations is widespread. Indeed regression models are commonly used to analyze convenience samples … The rhetoric of imaginary populations is seductive because it seems to free the investigator from the necessity of understanding how data were generated.
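
To make the contrast concrete, here is a minimal simulation sketch (in Python, with made-up numbers and purely illustrative assumptions) of the two views of what our data are supposed to be samples of. In the first case the repetitions refer to draws from an actually existing, finite population; in the second they refer to draws from an imaginary ‘superpopulation’ that no amount of time and money would let us observe.

```python
import numpy as np

rng = np.random.default_rng(0)

# A concrete, finite "real" population (made-up numbers, purely illustrative).
population = rng.normal(loc=10.0, scale=3.0, size=5_000)
true_mean = population.mean()   # a fact about an actually existing population

n = 100  # sample size

# Finite-population view: the only randomness is which of the 5,000 units we
# happened to draw; the repetitions could in principle be carried out with
# enough time and money.
finite_estimates = [rng.choice(population, size=n, replace=False).mean()
                    for _ in range(2_000)]

# "Superpopulation" view: the same data are treated as i.i.d. draws from an
# infinite, imaginary distribution; the repetitions live in hypothetical
# parallel worlds that could never be observed.
super_estimates = [rng.normal(loc=10.0, scale=3.0, size=n).mean()
                   for _ in range(2_000)]

print(f"finite-population truth:      {true_mean:.3f}")
print(f"finite-population estimates:  mean {np.mean(finite_estimates):.3f}, "
      f"sd {np.std(finite_estimates):.3f}")
print(f"superpopulation estimates:    mean {np.mean(super_estimates):.3f}, "
      f"sd {np.std(super_estimates):.3f}")
```

The toy estimates hardly differ, but that is beside the point: only the first set of repetitions refers to something that exists and could, in principle, be sampled again. The second rests entirely on the convenient fiction Freedman describes.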

2 Comments

  1. Prof. Syll’s Spurious Super-Populations
    ——————————————————
    Here and in many previous posts Prof. Syll gives a perverse account of the methodology of econometricians and other scientists. He alleges that, “because there is no source of further cases”, econometricians treat data as samples drawn from “infinite super-populations” in “imaginary parallel universes”. A “fiction” is thus “propagated”.
    .
    However, the assumption or invention of data is the antithesis of science, even when available data is sparse.
    Unlike critical realist philosophers like Prof. Syll, scientists don’t seek a “deep reality” beyond the reach of science.
    Instead they recognize that we are largely ignorant about our universe and humbly describe it in probabilistic models. This enables the formulation and testing of hypotheses concerning patterns which may exist in the available data.
    This methodology reflects the innate learning processes found in humans and other animals.
    .
    1) Evolution
    There is overwhelming empirical evidence that our ancestors were very good at this kind of statistical thinking – this is how they survived and we evolved. They succeeded at hunting, fishing, scavenging, foraging, marauding, philandering etc by making judgements based on limited and imperfect information in an uncertain and dangerous world.
    “We are pattern seekers, believers in a coherent world, in which regularities … appear not by accident but as a result of mechanical causality or of someone’s intention … You can see why assuming causality could have had evolutionary advantages. It is part of the general vigilance that we have inherited from ancestors. We are automatically on the lookout for the possibility that the environment has changed.”
    Daniel Kahneman 2011: “Thinking, Fast and Slow”, Part 2
    https://b-ok.asia/dl/1915459/0d6102

    2) Statistical concepts are innate in humans.
    In ordinary life all of us make judgments on the basis of imagined possibilities which have never actually occurred. For example, from past experience and knowledge of the experiences of others, when crossing a busy road we assess risks and “see” the consequences of stepping in front of a fast moving vehicle.
    This is just a combination of ordinary common sense and intelligent imagination. Notions of super-populations and parallel universes are not required or helpful.
    A study of indigenous Maya people found that “humans have an innate grasp of probability” and that “probabilistic reasoning does not depend on formal education”.
    https://www.nature.com/news/humans-have-innate-grasp-of-probability-1.16271
    .
    Businessmen and others have to make decisions based on estimates and guesstimates despite imperfect data.
    “The ultimate logic, or psychology, of these deliberations is obscure, a part of the scientifically unfathomable mystery of life and mind. We must simply fall back upon a “capacity” in the intelligent animal to form more or less correct judgments about things, an intuitive sense of values. We are so built that what seems to us reasonable is likely to be confirmed by experience, or we could not live in the world at all.”
    Frank Knight: “Risk, Uncertainty and Profit”, 1921.
    https://www.econlib.org/library/Knight/knRUP.html?chapter_num=9#book-reader
    .
    3) Children
    The learning processes of children parallel econometrics. For example, toddlers rapidly learn from experience that bumps are correlated with pain depending on speed and direction of movement, hardness of objects, part of body, etc.
    Children begin to develop cause-and-effect thinking skills as early as eight months of age. They assume that regular causal mechanisms operate in the real world. Evidence suggests that the development of causal reasoning in young children is consistent with Causal Graphical Models (CGMs). These are “representations of a joint probability distribution — a list of all possible combinations of events and the probability that each combination occurs.” (A toy sketch of such a list follows after the references below.)
    Sobel & Kirkham – Developmental Psychology 2006: “The Development of Causal Reasoning in Toddlers and Infants”
    http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.408.6089&rep=rep1&type=pdf
    Sobel & Legare – Cognition Science 2014: “Causal learning in children”
    https://sci-hub.se/10.1002/wcs.1291
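    As a rough illustration of the quoted definition (a minimal sketch with made-up probabilities, not taken from the cited papers), such a model over two binary events can be written out as the complete list of event combinations and their probabilities:
    ```python
    from itertools import product

    # Toy causal graphical model over two binary events (made-up probabilities):
    # Bump -> Pain. The joint distribution is the full list of combinations
    # of events and the probability that each combination occurs.
    p_bump = 0.3                        # P(Bump = True)
    p_pain_given_bump = {True: 0.8,     # P(Pain = True | Bump = True)
                         False: 0.05}   # P(Pain = True | Bump = False)

    joint = {}
    for bump, pain in product([True, False], repeat=2):
        p = p_bump if bump else 1 - p_bump
        p *= p_pain_given_bump[bump] if pain else 1 - p_pain_given_bump[bump]
        joint[(bump, pain)] = p

    for (bump, pain), p in joint.items():
        print(f"Bump={bump!s:<5}  Pain={pain!s:<5}  P={p:.3f}")

    print("total:", sum(joint.values()))  # the four probabilities sum to 1
    ```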
    .
    4) Even chimps and many other animals naturally think probabilistically.
    From a series of 7 experiments with Bonobos, Chimpanzees, Gorillas and Orangutans it was concluded that:
    “a basic form of drawing inferences from populations to samples is not uniquely human, but evolutionarily more ancient:
    It is shared by our closest living primate relatives, the great apes, and perhaps by other species in the primate lineage and beyond and it thus clearly antedates language and formal mathematical thinking both phylogenetically and ontogenetically.”
    Rakoczy et al. (2014) – Apes are intuitive statisticians. Cognition, 131(1):60-8

    Click to access Rakoczy_Apes_Cognition_2014_1920316.pdf

    • The only important uncertainty for businessmen is psychological, not physical. Can they sell this asset? What lies of omission or half-truths will help? Causation is psychological. A dominant personality can cause financing to emerge.
      .
      Statistics only work if you don’t throw out any data no matter how inconvenient to your story. In economics, accurate data requires open books which violates the sanctity of private property. And even then, current statistical theory woefully neglects fat tails. The probability curve is more like a wave; the normal curve that statisticians focus on is just a section.

