Mainstream economics — a case of explanatory disaster

17 Jun, 2019 at 18:19 | Posted in Economics | 2 Comments

To achieve explanatory success, a theory should, minimally, satisfy two criteria: it should have determinate implications for behavior, and the implied behavior should be what we actually observe. These are necessary conditions, not sufficient ones. Rational-choice theory often fails on both counts. The theory may be indeterminate, and people may be irrational. In what was perhaps the first sustained criticism of the theory, Keynes emphasized indeterminacy, notably because of the pervasive presence of uncertainty. His criticism applied especially to cases where agents have to form expectations about the behavior of other agents or about the development of the economy in the long run. In the wake of the current economic crisis, this objection has returned to the forefront. Before the crisis, going back to the 1970s, the main objections to the theory were based on pervasive irrational behavior. Experimental psychology and behavioral economics have uncovered many mechanisms that cause people to deviate from the behavior that rational-choice theory prescribes.

Disregarding some more technical sources of indeterminacy, the most basic one is embarrassingly simple: how can one impute to the social agents the capacity to make the calculations that occupy many pages of mathematical appendixes in the leading journals of economics and political science and that can be acquired only through years of professional training?

I believe that much work in economics and political science that is inspired by rational-choice theory is devoid of any explanatory, aesthetic or mathematical interest, which means that it has no value at all. I cannot make a quantitative assessment of the proportion of work in leading journals that falls in this category, but I am confident that it represents waste on a staggering scale.

Jon Elster

Most mainstream economists want to explain social phenomena, structures and patterns, based on the assumption that the agents are acting in an optimizing (rational) way to satisfy given, stable and well-defined goals.

The procedure is analytical. The whole is broken down into its constituent parts so as to be able to explain (reduce) the aggregate (macro) as the result of the interaction of its parts (micro). In building their economic models, modern mainstream economists ground them on a set of core assumptions describing the agents as ‘rational’ actors and a set of auxiliary assumptions. Together these assumptions make up the base model of all mainstream economic models. Based on these two sets of assumptions, they try to explain and predict both individual and social phenomena.

The core assumptions typically consist of completeness, transitivity, non-satiation, expected utility maximization, and consistent efficiency equilibria.
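To see what one of these axioms amounts to in practice, transitivity can be written as a simple check on pairwise preferences. The sketch below is a toy illustration in Python; the goods and the preference pairs are invented for the example and do not come from any actual economic model:

```python
# Toy sketch of the transitivity axiom. The goods and preference pairs
# below are illustrative assumptions, not data from any model.

def is_transitive(prefs):
    """prefs: set of (a, b) pairs meaning 'a is strictly preferred to b'.
    Transitivity requires that (a, b) and (b, c) together imply (a, c)."""
    return all((a, c) in prefs
               for (a, b) in prefs
               for (b2, c) in prefs
               if b == b2)

# A ranking coffee > tea > water, with the implied pair spelled out:
consistent = {("coffee", "tea"), ("tea", "water"), ("coffee", "water")}
# A preference cycle, which no utility function can represent:
cyclic = {("coffee", "tea"), ("tea", "water"), ("water", "coffee")}

print(is_transitive(consistent))  # True
print(is_transitive(cyclic))      # False
```

The cyclic case is exactly the kind of behavior the axioms rule out by fiat: an agent whose preferences cycle cannot be assigned a consistent utility scale at all.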

When describing the actors as rational in these models, the concept of rationality used is instrumental rationality – consistently choosing the preferred alternative, the one judged to have the best consequences for the actor given his interests and goals, which the model takes as exogenously given. How these preferences, interests, and goals are formed is not considered to be within the realm of rationality, and a fortiori not part of economics proper.

The picture given by this set of core assumptions – ‘rational choice’ – is a rational agent with strong cognitive capacity that knows what alternatives she is facing, evaluates them carefully, calculates the consequences and chooses the one – given her preferences – that she believes has the best consequences according to her. Weighing the different alternatives against each other, the actor makes a consistent optimizing choice and acts accordingly.
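The choice procedure described above can be sketched as expected-utility maximization over a handful of alternatives. Everything in the sketch (the alternatives, probabilities, and utility numbers) is an invented illustration of the textbook idea, not a claim about any particular model:

```python
# Toy sketch of instrumental rationality as expected-utility maximization.
# The alternatives, probabilities, and utilities are illustrative assumptions.

def expected_utility(lottery):
    """Expected utility of a lottery given as (probability, utility) pairs."""
    return sum(p * u for p, u in lottery)

def rational_choice(alternatives):
    """Pick the alternative whose lottery maximizes expected utility."""
    return max(alternatives, key=lambda name: expected_utility(alternatives[name]))

# Hypothetical choice situation: each alternative is a lottery over outcomes.
alternatives = {
    "safe":  [(1.0, 10)],            # certain payoff of 10
    "risky": [(0.5, 30), (0.5, 0)],  # 50/50 between 30 and nothing
}

print(rational_choice(alternatives))  # 'risky': EU of 15 beats the certain 10
```

Note how much the sketch presupposes: a complete list of alternatives, known probabilities, and a single utility scale. It is precisely these presuppositions that the indeterminacy and irrationality objections target.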

Besides the core assumptions the model also typically has a set of auxiliary assumptions that spatio-temporally specify the kind of social interaction between ‘rational’ actors that take place in the model. These assumptions can be seen as giving answers to questions such as: who are the actors and where and when do they act; which specific goals do they have; what are their interests; what kind of expectations do they have; what are their feasible actions; what kind of agreements (contracts) can they enter into; how much and what kind of information do they possess; and how do the actions of the different individuals interact with each other.

So, the base model basically consists of a general specification of what (axiomatically) constitutes optimizing rational agents and a more specific description of the kind of situations in which these rational actors act (making the auxiliary assumptions serve as a kind of restriction of the intended domain of application for the core assumptions and the deductively derived theorems). The list of assumptions can never be complete since there will always be unspecified background assumptions and some (often) silent omissions (usually based on some negligibility and applicability considerations). The hope, however, is that the ‘thin’ list of assumptions shall be sufficient to explain and predict ‘thick’ phenomena in the real, complex, world.

These models are not primarily constructed for being able to analyze individuals and their aspirations, motivations, interests, etc., but typically for analyzing social phenomena as a kind of equilibrium that emerges through the interaction between individuals.

Now, of course, no one takes the base model (and the models that build on it) as a good (or, even less, a true) representation of reality. Such a representation would demand a high degree of conformity with the essential characteristics of the real phenomena, and even when pragmatic aspects such as ‘purpose’ and ‘adequacy’ are weighed in, it is hard to see how this ‘thin’ model could deliver that. The model is typically seen as a kind of thought-experimental ‘as if’ benchmark device for enabling a rigorous, mathematically tractable illustration of social interaction in an ideal-type model world, and for comparing that ‘ideal’ with reality. The ‘interpreted’ model is supposed to supply analytical and explanatory power, enabling us to detect and understand mechanisms and tendencies in what happens around us in real economies.

Based on the model – and on interpreting it as something more than a deductive-axiomatic system – predictions and explanations can be made and confronted with empirical data and what we think we know. The base model and its more or less tightly knit axiomatic core assumptions are used to set up further ‘as if’ models from which consistent and precise inferences are made. If the axiomatic premises are true, the conclusions necessarily follow. But if the models are to be relevant, we also have to argue that their precision and rigour still holds when they are applied to real-world situations. They often do not. When addressing real economies, the idealizations and abstractions necessary for the deductivist machinery to work simply do not hold.

If the real world is fuzzy, vague and indeterminate, then why should our models build upon a desire to describe it as precise and predictable? The logic of idealization, that permeates the base model, is a marvellous tool in mathematics and axiomatic-deductivist systems, but a poor guide for action in real-world systems, where concepts and entities are without clear boundaries and continually interact and overlap.

Being told that the model is rigorous and amenable to ‘successive approximations’ to reality is of little avail, especially when the law-like (nomological) core assumptions are highly questionable and extremely difficult to test. Being able to construct ‘thought experiments’ depicting logical possibilities does not take us very far. An obvious problem with the mainstream base model is that it is formulated in such a way that it is, in practice, extremely difficult to test empirically and decisively ‘corroborate’ or ‘falsify.’

As Elster writes, such models have — from an explanatory point of view — indeed “no value at all.” The ‘thinness’ is bought at too high a price, unless you decide to leave the intended area of application unspecified or immunize your model by interpreting it as nothing more than two sets of assumptions making up a content-less theoretical system with no connection whatsoever to reality.

2 Comments

  1. “If the real world is fuzzy, vague and indeterminate, then why should our models build upon a desire to describe it as precise and predictable?”

    My next question: if prices are arbitrary, why make output, measured in arbitrary prices, a goal of public policy?

  2. “completeness, transitivity, non-satiation, expected utility maximization, and consistent efficiency equilibria”

    Attacking instrumental rationality is kind of fatuous. Instrumental rationality is entirely appropriate to the study of political institutions that have as their purpose the instrumental satisfaction of material needs. That is the one thing conventional economics gets right, even if their idea of rationality is a crock.

    Where things go terribly wrong in conventional mainstream theorizing is the preoccupation with proving out essentially qualitative propositions in what ought to be a quantitative discipline. The array of assumptions that go into creating homo economicus is aimed at enabling theory to prove welfare theorems, essentially qualitative assertions that we live in the best of all possible worlds.

    The argument being constructed in conventional economic theorizing is not trying to give a logically complete account of “rational behavior” so much as it is trying to preempt judgment about the outcomes created by the economic system. The economist abstracts calculation from preferences, not because such an analytic separation makes any ontological sense, but because it serves an ideological purpose in deflecting critical judgment from the system as it exists and fails.

