Do models make economics a science?

14 Sep, 2019 at 14:41 | Posted in Economics | 6 Comments

Well, if we are to believe most mainstream economists, models are what make economics a science.

In a Journal of Economic Literature review of Dani Rodrik’s Economics Rules, renowned game theorist Ariel Rubinstein discusses Rodrik’s justifications for the view that “models make economics a science.” Although Rubinstein has some doubts about those justifications — models are not indispensable for telling good stories or clarifying things in general; logical consistency does not determine whether economic models are right or wrong; and being able to expand our set of ‘plausible explanations’ doesn’t make economics more of a science than good fiction does — he still largely subscribes to the scientific image of economics as a result of using formal models that help us achieve ‘clarity and consistency’.

There’s much in the review I like — Rubinstein shows a commendable scepticism about the prevailing excessive mathematization of economics, and he is much more in favour of a pluralist teaching of economics than most other mainstream economists — but on the core question, “the model is the message,” I beg to differ with the view put forward by both Rodrik and Rubinstein.

Economics is, more than any other social science, model-oriented. There are many reasons for this: the history of the discipline, ideals imported from the natural sciences (especially physics), the search for universality (explaining as much as possible with as little as possible), rigour, precision, etc.

Mainstream economists want to explain social phenomena, structures and patterns, based on the assumption that the agents are acting in an optimizing (rational) way to satisfy given, stable and well-defined goals.

The procedure is analytical. The whole is broken down into its constituent parts so as to be able to explain (reduce) the aggregate (macro) as the result of interaction of its parts (micro).

Modern mainstream (neoclassical) economists ground their models on a set of core assumptions (CA) — basically describing the agents as ‘rational’ actors — and a set of auxiliary assumptions (AA). Together CA and AA make up what might be called the ‘ur-model’ (M) of all mainstream neoclassical economic models. Based on these two sets of assumptions, they try to explain and predict both individual (micro) and — most importantly — social phenomena (macro).

The core assumptions typically consist of:

CA1 Completeness — the rational actor is able to compare different alternatives and decide which one(s) he prefers.

CA2 Transitivity — if the actor prefers A to B, and B to C, he must also prefer A to C.

CA3 Non-satiation — more is preferred to less.

CA4 Maximizing expected utility — in choice situations under risk (calculable uncertainty) the actor maximizes expected utility.

CA5 Consistent efficiency equilibria — the actions of different individuals are consistent, and the interaction between them results in an equilibrium.
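For concreteness, the first two of these axioms can be put in code. The sketch below is purely illustrative (the alternatives, the utility numbers and the function names are all invented) and just shows what CA1 and CA2 demand of a preference relation, and why a utility representation satisfies them automatically:

```python
# Illustrative sketch only: checking CA1 (completeness) and CA2 (transitivity)
# for a preference relation over a finite set of alternatives. All names and
# numbers are invented for the example.
from itertools import product

def is_complete(alternatives, prefers):
    # CA1: every pair of alternatives can be ranked (at least one direction holds)
    return all(prefers(a, b) or prefers(b, a)
               for a, b in product(alternatives, repeat=2))

def is_transitive(alternatives, prefers):
    # CA2: if a is weakly preferred to b, and b to c, then a to c
    return all(not (prefers(a, b) and prefers(b, c)) or prefers(a, c)
               for a, b, c in product(alternatives, repeat=3))

# Preferences represented by a utility function satisfy both axioms:
utility = {'apple': 3, 'banana': 2, 'cherry': 1}
prefers = lambda a, b: utility[a] >= utility[b]
alts = list(utility)
print(is_complete(alts, prefers))    # True
print(is_transitive(alts, prefers))  # True

# A cyclic preference (a over b, b over c, c over a) violates CA2:
cycle = {('a', 'b'), ('b', 'c'), ('c', 'a')}
prefers_cyclic = lambda x, y: x == y or (x, y) in cycle
print(is_transitive(['a', 'b', 'c'], prefers_cyclic))  # False
```

The cyclic case is the classic reason transitivity is treated as an axiom: without it, no utility function can represent the preferences at all.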

When describing the actors as rational in these models, the concept of rationality used is instrumental rationality – consistently choosing the preferred alternative, the one judged to have the best consequences for the actor given his wishes/interests/goals, which are exogenously given in the model. How these preferences/wishes/interests/goals are formed is typically not considered to be within the realm of rationality, and a fortiori not constituting part of economics proper.

The picture given by this set of core assumptions (rational choice) is of a rational agent with strong cognitive capacity who knows what alternatives he is facing, evaluates them carefully, calculates the consequences, and chooses the one that he believes has the best consequences given his preferences.

Weighing the different alternatives against each other, the actor makes a consistent optimizing choice (typically described as maximizing some kind of utility function) and acts accordingly.
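CA4’s expected-utility maximization can likewise be sketched in a few lines. Everything here (the lotteries, the square-root utility) is a made-up toy example, not a claim about any particular model:

```python
# Toy sketch of CA4: under risk, the actor picks the lottery with the highest
# expected utility. Lotteries and the utility function are invented examples.
import math

def expected_utility(lottery, u):
    # A lottery is a list of (outcome, probability) pairs
    return sum(p * u(x) for x, p in lottery)

def choose(lotteries, u):
    # The 'rational actor': maximize expected utility over the named lotteries
    return max(lotteries, key=lambda name: expected_utility(lotteries[name], u))

u = lambda x: math.sqrt(x)  # concave utility, i.e. a risk-averse actor

lotteries = {
    'safe':   [(100, 1.0)],            # 100 for certain: EU = sqrt(100) = 10
    'gamble': [(0, 0.5), (225, 0.5)],  # EU = 0.5 * sqrt(225) = 7.5
}

print(choose(lotteries, u))  # 'safe'
```

With a concave utility function the certain 100 beats the fair-looking gamble, which is the textbook risk-aversion result the axiom is built to deliver.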

Besides the core assumptions (CA), the model also typically has a set of auxiliary assumptions (AA) spatio-temporally specifying the kind of social interaction between ‘rational actors’ that takes place in the model. These assumptions can be seen as giving answers to questions such as

AA1 who are the actors and where and when do they act

AA2 which specific goals do they have

AA3 what are their interests

AA4 what kind of expectations do they have

AA5 what are their feasible actions

AA6 what kind of agreements (contracts) can they enter into

AA7 how much and what kind of information do they possess

AA8 how do the actions of the different individuals/agents interact with each other

So, the ur-model of all economic models basically consists of a general specification of what (axiomatically) constitutes optimizing rational agents and a more specific description of the kind of situations in which these rational actors act (AA serves as a kind of specification/restriction of the intended domain of application for CA and its deductively derived theorems). The list of assumptions can never be complete, since there will always be unspecified background assumptions and some (often silent) omissions (like closure, transaction costs, etc., regularly based on negligibility and applicability considerations). The hope, however, is that the ‘thin’ list of assumptions will be sufficient to explain and predict ‘thick’ phenomena in the real, complex world.
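One way to make the CA/AA division concrete in code terms: CA supplies the generic optimizing-agent machinery, AA the model-specific configuration that pins down who acts, what they can do and what they know. The sketch below is entirely illustrative; every class name, field and number is invented:

```python
# Illustrative sketch: CA as generic optimizing machinery, AA as the
# spatio-temporal configuration of the model. All names are invented.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class AuxiliaryAssumptions:
    actors: List[str]              # AA1: who acts
    feasible_actions: List[float]  # AA5: what they can do
    information: str = "perfect"   # AA7: what they know

@dataclass
class Model:
    utility: Callable[[float], float]  # CA: preferences as a utility function
    aa: AuxiliaryAssumptions           # AA: restrictions on the domain

    def predict(self) -> Dict[str, float]:
        # CA in action: each actor picks the feasible action maximizing utility
        best = max(self.aa.feasible_actions, key=self.utility)
        return {actor: best for actor in self.aa.actors}

m = Model(utility=lambda x: -(x - 3) ** 2,  # single-peaked preference, peak at 3
          aa=AuxiliaryAssumptions(actors=['a', 'b'],
                                  feasible_actions=[1.0, 2.0, 5.0]))
print(m.predict())  # {'a': 2.0, 'b': 2.0}: the feasible action closest to the peak
```

Swapping in a different AuxiliaryAssumptions object changes the predictions without touching the ‘core’, which is precisely why the AA layer is where the empirical commitments live.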

In some (textbook) model depictions, we are essentially given the following structure,

A1, A2, … An
———————-
Theorem,

where a set of undifferentiated assumptions is used to infer a theorem.

This is, however, too vague and imprecise to be helpful, and does not give a true picture of the usual mainstream modelling strategy, where there’s a differentiation between a set of law-like hypotheses (CA) and a set of auxiliary assumptions (AA), giving the more adequate structure

CA1, CA2, … CAn & AA1, AA2, … AAn
———————————————–
Theorem

or,

CA1, CA2, … CAn
———————-
(AA1, AA2, … AAn) → Theorem,

more clearly underlining the function of AA as a set of (empirical, spatio-temporal) restrictions on the applicability of the deduced theorems.

In the extreme cases we get

CA1, CA2, … CAn
———————
Theorem,

where the deduced theorems are analytical entities with universal and totally unrestricted applicability, or

AA1, AA2, … AAn
———————-
Theorem,

where the deduced theorem is transformed into an untestable tautological thought-experiment without any empirical commitment whatsoever beyond telling a coherent fictitious as-if story.

Not clearly differentiating between CA and AA means that we can’t make this all-important interpretative distinction. It also opens the door to unwarrantedly ‘saving’ or ‘immunizing’ models from almost any kind of critique by simply equivocating between two readings of a model: as an empirically empty, purely deductive-axiomatic analytical system, or as a model with explicit empirical aspirations. Flexibility is usually deemed a virtue, but in this methodological context it is more troublesome than a sign of real strength. Models that are compatible with everything, or that come with unspecified domains of application, are worthless from a scientific point of view.
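The ‘compatible with everything’ point can be given a toy Popper-style illustration: treat a ‘theory’ as a predicate on possible observations, and measure its informational content by the share of observations it forbids. The grid of observations and both ‘theories’ below are invented for the example:

```python
# Toy illustration: a theory that forbids nothing has no informational content.
# The observation grid and both 'theories' are invented for this example.

def informational_content(theory, possible_observations):
    # Popper-style: content = fraction of possible observations ruled out
    forbidden = [obs for obs in possible_observations if not theory(obs)]
    return len(forbidden) / len(possible_observations)

# A small grid of hypothetical (price, quantity) observations
observations = [(p, q) for p in range(1, 6) for q in range(1, 6)]

forbids_some = lambda obs: obs[0] * obs[1] <= 10  # rules some observations out
says_anything = lambda obs: True                  # compatible with everything

print(informational_content(forbids_some, observations))   # 0.32: testable
print(informational_content(says_anything, observations))  # 0.0: empirically empty
```

A theory with content zero runs no risk of being contradicted by any observation, which is exactly the sense in which it is scientifically worthless.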

Economics — in contradistinction to logic and mathematics — ought to be an empirical science, and empirical testing of ‘axioms’ ought to be self-evidently relevant for such a discipline. For although the mainstream economist himself (implicitly) claims that his axioms are universally accepted as true and in no need of proof, that is in no way a justified reason for the rest of us to simpliciter accept the claim.

When applying deductivist thinking to economics, mainstream (neoclassical) economists usually set up ‘as if’ models based on the logic of idealization and a set of tight axiomatic assumptions from which consistent and precise inferences are made. The beauty of this procedure is, of course, that if the axiomatic premises are true, the conclusions necessarily follow. But — although the procedure is a marvellous tool in mathematics and axiomatic-deductivist systems, it is a poor guide for real-world systems. As Hans Albert has it on the neoclassical style of thought:

Science progresses through the gradual elimination of errors from a large offering of rivalling ideas, the truth of which no one can know from the outset. The question of which of the many theoretical schemes will finally prove to be especially productive and will be maintained after empirical investigation cannot be decided a priori. Yet to be useful at all, it is necessary that they are initially formulated so as to be subject to the risk of being revealed as errors. Thus one cannot attempt to preserve them from failure at every price. A theory is scientifically relevant first of all because of its possible explanatory power, its performance, which is coupled with its informational content …

Clearly, it is possible to interpret the ‘presuppositions’ of a theoretical system … not as hypotheses, but simply as limitations to the area of application of the system in question. Since a relationship to reality is usually ensured by the language used in economic statements, in this case the impression is generated that a content-laden statement about reality is being made, although the system is fully immunized and thus without content. In my view that is often a source of self-deception in pure economic thought …

Most mainstream economic models are abstract and unrealistic, and present mostly non-testable hypotheses. How then are they supposed to tell us anything about the world we live in?

Confronted with the massive empirical failures of their models and theories, mainstream economists often retreat into looking upon their models and theories as some kind of ‘conceptual exploration,’ and give up any hopes whatsoever of relating their theories and models to the real world. Instead of trying to bridge the gap between models and the world, one decides to look the other way.

This kind of scientific defeatism is equivalent to surrendering our search for understanding the world we live in. It can’t be enough to prove or deduce things in a model world. If theories and models do not directly or indirectly tell us anything of the world we live in – then why should we waste any of our precious time on them?

The way axioms and theorems are formulated in mainstream (neoclassical) economics standardly leaves their specification almost without any restrictions whatsoever, safely making every imaginable piece of evidence compatible with the all-embracing ‘theory’ — and a theory without informational content never risks being empirically tested and found falsified. Used in mainstream economics’ ‘thought experimental’ activities, it may, of course, be very ‘handy’, but it is totally void of any empirical value.

Mainstream economic models are nothing but broken-pieces models. Models of that kind can never make economics a science.

6 Comments

  1. To qualify as a science (or even as a “science”), economics as an academic enterprise would have to be interested in and invested in investigating and objectively measuring the phenomena of the actual, institutionalized political economy. Such activity would need models, but those models would be operational models, very different creatures from the analytic models so treasured by economic theorists.
    .
    Instead economists apply econometrics, a discipline seemingly dedicated to filtering all the information out of the rich variety and complexity of social life in order to “stylize” a few meaningless “facts” of no fixed importance.
    .
    Economists could investigate the social mechanisms — known as institutions — of the actual economy to determine how they work. How do money and banking work? How do hierarchy and employment for wages work? How does business strategy work in the creation of industry? How does ownership of abstract, socially constructed entities like corporations work? What is “property”? What is “technology”? What are “crime” and “corruption”?

    Economists do some things of this kind, within their chosen disciplinary “specialties”, but the results feed back on the shared knowledge space of economists not at all — that space is occupied by the impervious neoclassical theory, in its a priori analytic splendour.

  2. The language of “core assumptions” and “auxiliary assumptions” recalls to my mind Popper’s attempt to reconcile his discovery that operational models were at the center of physicists’ empiricism with his notion that knowledge was embodied in the nomological machines of analytic theory, neatly deductive from clear axioms, imagined and observed.
    .
    You are trying very hard to build a fixed and well-founded bridge across a chasm that can only be crossed by leaps of imagination and insight. I don’t think analysis qua analysis is ever descriptive. I think testing axioms for factual realism misunderstands the fundamental nature, and limitations, of the activity known as analytic theory. One might as well try to practice geometry by searching for equilateral triangles in your flower garden — not likely to be a rewarding or productive activity.
    .
    One can certainly fault economists for neglecting study of the actual economy for study of economic theory, but if we are inquiring into the particulars of the practice of theoretical analysis in neoclassical economics, it is not the absence of factual realism, per se, from an activity that, by itself, neither uses nor produces facts that should trouble us. Neoclassical economics is surely doing something wrong to keep itself sterile and incurious, but we should take a care that we are not demanding something from the methods of analysis that it can never deliver.
    .
    Analysis, by its own terms, seeks not facts but logical relations. Facts are constructed by other processes and methods, making use of the logical relations worked out in analytic theory to be sure, but analysis takes place in an imaginative realm that deliberately abstracts from the particular of fact. Analysis, in seeking to work out logical relations composing a functional system, works to determine the necessary and sufficient elements of the system under consideration, a system that analysis is constructing in imagination rather than observing.

    “Observing” in practice takes as a pre-requisite logical relations and applies them in constructing facts. The abstract concept of “1” in a sense precedes even the possibility of a count of 1, and though a good pragmatist like myself would insist there is a circular interaction that elevates the importance or primacy of the concept by “proving” the usefulness of its application in a “count” of 1, I would never confuse the concept with any particular, or even a general and summary statement of particulars. The pragmatist is never falling into Popper’s error: he is never testing an analytic theory for a factual accuracy it cannot have (lacking factual content); a pragmatist is always testing the world, to see how the mechanisms of the world work. For this purpose, what Popper called operational models (which do have factually grounded parameters) are used. These are always “wrong” — the only question of measure is, quantitatively how wrong?
    .
    But, I digress.
    .
    The immediate problem in application is, how to critique neoclassical economic theory? I think neoclassical economics is doing analysis wrong. Not because they fail to construct a bridge of auxiliary assumptions to reality, per se, but because they avoid the main task of analysis on its own terms: to work out the identification of the minimal set of necessary and sufficient elements composing a logically functional system of relations.
    .
    In the crooked shell game that neoclassical economics quickly became, theorists learned to hide the pea not with “auxiliary assumptions” per se, but with what I would call, “parsing assumptions”. You have identified some of them, but not their function, which is to parse. A key parsing move in neoclassical economics was defining economic efficiency as purely allocational efficiency. Another, which you treat in some detail from other angles, is entangled with the use of utilitarianism to isolate preferences as exogenous. The Lucas critique and its demand for microfoundations is another example of the use of parsing assumptions to disable analysis.
    .
    The consequence of parsing the problems posed to and by neoclassical theory is to disable analytic theory in its principal task: the identification of the necessary and sufficient elements of the system analyzed. By insisting on a focus exclusively on the efficient allocation of resources in production, neoclassical economics hid from itself the logical necessity of applying technical knowledge in the control of production processes as well as the implications of the necessary use of energy in production (e.g. energy applied to production implies entropy aka pollution). In a somewhat esoteric way, the Cambridge Capital Controversy exposed the fraudulent claim of rigor behind the hand-waving assertions that capital as a factor of production accumulates smoothly, or can be thought of in that dubious way.
    .
    The claim that economists’ use of analytic models is scientific founders on the use of parsing assumptions to avoid doing the analysis properly, and on the hand-waving in place of systematic observation of fact that follows from the stunted use of analysis.

  3. I’m reminded of an interview with Emanuel Derman.
    .
    https://www.coursera.org/lecture/financial-engineering-1/interview-with-emmanuel-derman-0qS93
    .
    Some quotations:
    .
    “You know your model is not strictly correct, but you want it to reproduce the price of liquid instruments that are underliers, and you calibrate the model everyday if you have to.”
    .
    “I came from a physics background, and actually I came to Wall Street as I said in late 1985. And I got very excited about it. Getting a shot in the arm about applying physics and math techniques to new area that I have not done before. And, I think I like a lot of people have this illusion that you could sort of build a grand unified theory of finance, in which you would model all fixed income rates with stochastic process. And consistently price every instrument in the world and look for arbitrage opportunities, and BDT was an arbitrage-free model in its, in its own limited one factor way. And Fisher was actually much more pragmatic about all of this. He was quite happy to live with an imperfect financial market and have different models that weren’t consistent with each other in different areas. And didn’t have this overarching desire to unify everything and I only really got to that point sort of six or seven years later.”
    .
    “He says, my job I believe is to persuade others that my conclusions are sound. I will use an array of devices to do this: theory, stylized facts, time series data, surveys and appeals to introspection. I particularly like the appeals to introspection because he’s making clear that finance isn’t just a science, it’s a science of the way people behave, and an art.”
    .
    “He says, in the world of real research, conventional tests of statistical significance seem almost worthless. I particularly like that, because when people new to it, either students or even people who are economists, come to Wall Street, in my experience, they always have great expectations for models and think they’re going to be used in a way that explains the truth. And they think you should test them very carefully to calibrate them, find the best model and then use that. And the truth is, the financial world goes through regimes of change, and the same models don’t work in the same period. And if you try calibrating one model to 30 years, it doesn’t work because you really have to use different models at different times, and I think he kind of understood that.”
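Derman’s calibration point in this last quotation can be sketched with synthetic data: one model calibrated across two regimes fits worse than separate per-regime calibrations. Here the ‘model’ is nothing more than a fitted mean, and all the numbers are made up, purely to show the mechanism:

```python
# Synthetic illustration of regime-dependent calibration: a single calibration
# over a sample spanning two regimes fits worse than per-regime calibrations.
# All numbers are invented; the 'model' is just a fitted mean.

def fit_mean(xs):
    return sum(xs) / len(xs)

def sse(xs, mu):
    # Sum of squared calibration errors around the fitted parameter
    return sum((x - mu) ** 2 for x in xs)

regime_a = [1.0, 1.2, 0.9, 1.1]  # e.g. a low-level regime
regime_b = [5.0, 5.3, 4.8, 5.1]  # after a structural shift

pooled = regime_a + regime_b
pooled_error = sse(pooled, fit_mean(pooled))
per_regime_error = sse(regime_a, fit_mean(regime_a)) + sse(regime_b, fit_mean(regime_b))

print(pooled_error > per_regime_error)  # True: one model across regimes fits worse
```

Which is the quoted point in miniature: calibrating one model to thirty years of data fails when the data-generating regime has shifted underneath it.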

  4. I wonder if the systems approach, general systems theory, has fallen into the same trap, when one states that systems should be looked on as hierarchies of systems: subsystems consisting of subsystems, and ultimately of components with attributes isomorphically inherited from the systems above…

    Social systems, such as society, are webs of interconnected systems, organized with boundaries and having missions, functions, people and technology. But individuals are not wholly part of the systems; they contribute with part of who they are (being systems in their own right). These contributions play a part in the system functions, and the functions, which process information or matter with input and output, make use of several people’s contributions; people whose beliefs, thoughts and communication centre around these functions.

    In order to understand a social system one should look at the functions.

    The question of the origin of the individual’s utility, or the system’s utility, would be found in the interrelations of systems, of individuals and systems.

  5. Even if general systems theory is a meta-theory, a model of social systems must conform to reality in order for the model to be “exportable to the target environment”, as Syll proposes. A social system is a construction, a social construction created by intelligent people, driven to build a system by social motivation. If one experiments and divides a group in two, the halves will after a while display the attributes of a system, with borders, organisation, tasks and functions.

  6. A good example is people crowding and waiting for a bus. It’s a simple system functionally, but it’s not just a summation of components, who may hardly be aware of which part, a small one, they play. Functionally, the top of the hierarchy will as usual be to maintain the system, keep it in existence, and below the top function one may recognise:

    1) Maintain a queue
    2) Keep a reasonable distance
    3) Order the queue
    4) Wait for the bus

    It’s obvious that people actually form a system. They don’t stray too far away, but not too near either. If anyone is not taking up the presumed place, e.g. an alcoholic, the people create a boundary; it’s an annoyance. It is also a phenomenon that people connect and actually feel some belonging, being part of a natural group with common interests.

    Mathematically it may be described using a Poisson distribution for a queueing system, and possibly also using mathematical group theory, but the psychology can’t be modelled mathematically. What keeps people in position, and the causal relations, are double: they go in both directions. Still, the queue may be optimized mathematically, but this doesn’t explain why people actually optimize without the use of an algorithm.
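The Poisson point in the comment above can be sketched in a few lines, with an invented arrival rate: if inter-arrival times at the bus stop are exponential, the number of people waiting after a fixed interval is Poisson-distributed with mean rate × time:

```python
# Sketch of the queueing remark: Poisson arrivals at a bus stop, simulated via
# exponential inter-arrival times. The rate and interval are invented.
import random

def queue_length(rate_per_min, minutes, rng):
    # Count arrivals in [0, minutes] by summing exponential gaps
    t, count = 0.0, 0
    while True:
        t += rng.expovariate(rate_per_min)
        if t > minutes:
            return count
        count += 1

rng = random.Random(42)
samples = [queue_length(rate_per_min=0.5, minutes=10, rng=rng) for _ in range(10000)]
mean = sum(samples) / len(samples)
print(mean)  # close to rate * time = 5, as the Poisson model predicts
```

As the commenter says, this describes the queue’s statistics, not why the people at the stop behave as they do.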

