Go your own way

19 Oct, 2019 at 22:39 | Posted in Varia | 1 Comment

 

I just loved this song when I first heard it forty years ago.
I still do.

Why economists have so much influence (Varför ekonomer har så stort inflytande)

19 Oct, 2019 at 17:28 | Posted in Varia | 1 Comment

 

Unpacking the ‘Nobel prize’ in economics

19 Oct, 2019 at 12:33 | Posted in Economics | Leave a comment

In a 2017 speech, Duflo famously likened economists to plumbers. In her view the role of an economist is to solve real world problems in specific situations. This is a dangerous assertion, as it suggests that the “plumbing” the randomistas are doing is purely technical, and not guided by theory or values. However, the randomistas’ approach to economics is not objective, value-neutral, nor pragmatic, but rather rooted in a particular theoretical framework and world view – neoclassical microeconomic theory and methodological individualism.

The experiments’ grounding has implications for how experiments are designed and the underlying assumptions about individual and collective behavior that are made. Perhaps the most obvious example of this is that the laureates often argue that specific aspects of poverty can be solved by correcting cognitive biases. Unsurprisingly, there is much overlap between the work of randomistas and the mainstream behavioral economists, including a focus on nudges that may facilitate better choices on the part of people living in poverty.

Another example is Duflo’s analysis of women empowerment. Naila Kabeer argues that it employs an understanding of human behavior “uncritically informed by neoclassical microeconomic theory.” Since all behavior can allegedly be explained as manifestations of individual maximizing behavior, alternative explanations are dispensed with. Because of this, Duflo fails to understand a series of other important factors related to women’s empowerment, such as the role of sustained struggle by women’s organizations for rights or the need to address unfair distribution of unpaid work that limits women’s ability to participate in the community.

Ingrid Harvold Kvangraven

Nowadays many mainstream economists maintain that ‘imaginative empirical methods’ — such as natural experiments, field experiments, lab experiments, RCTs — can help us to answer questions concerning the external validity of economic models. In their view, they are more or less tests of ‘an underlying economic model’ and enable economists to make the right selection from the ever-expanding ‘collection of potentially applicable models.’

When looked at carefully, however, there are in fact few real reasons to share this optimism about the alleged ‘empirical turn’ in economics.

If we see experiments or field studies as theory tests or models that ultimately aspire to say something about the real ‘target system,’ then the problem of external validity is central.

Assume that you have examined how the performance of a group of people (A) is affected by a specific ‘treatment’ (B). How can we extrapolate/generalize to new samples outside the original population? How do we know that any replication attempt ‘succeeds’? How do we know when these replicated experimental results can be said to justify inferences made in samples from the original population? If, for example, P(A|B) is the conditional density function for the original sample, and we are interested in making an extrapolative prediction of E[P'(A|B)], how can we know that the new sample’s density function is identical with the original? Unless we can give some really good argument for this being the case, inferences built on P(A|B) do not really say anything about the target system’s P'(A|B).

External validity/extrapolation/generalization is founded on the assumption that we can make inferences based on P(A|B) that are exportable to other populations for which P'(A|B) applies. Sure, if one can convincingly show that P and P' are similar enough, the problems are perhaps surmountable. But just arbitrarily introducing functional specification restrictions of the type invariance/stability/homogeneity is far from satisfactory. And often it is – unfortunately – exactly this that I see when I study mainstream economists’ RCTs and ‘experiments.’
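To see what is at stake, here is a minimal simulation sketch (my own toy illustration, with an invented data-generating process and made-up parameter values) of how a treatment effect estimated in one population can badly mislead us about a target population whose conditional density P'(A|B) differs from the original P(A|B):

```python
# Toy illustration (not from the post): a treatment effect estimated in one
# population is exported to a target population with a different P'(A|B).
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def outcome(treated, moderator):
    # Invented data-generating process: the treatment effect depends on an
    # unobserved moderator, so the conditional density of A given B differs
    # between populations with different moderator distributions.
    return treated * moderator + rng.normal(0.0, 1.0, size=treated.shape)

treat = rng.integers(0, 2, size=n)

# Original population: the moderator is mostly 'high'.
mod_orig = rng.choice([0.2, 1.0], size=n, p=[0.2, 0.8])
a_orig = outcome(treat, mod_orig)
effect_orig = a_orig[treat == 1].mean() - a_orig[treat == 0].mean()

# Target population: the same treatment, but the moderator is mostly 'low'.
mod_target = rng.choice([0.2, 1.0], size=n, p=[0.8, 0.2])
a_target = outcome(treat, mod_target)
effect_target = a_target[treat == 1].mean() - a_target[treat == 0].mean()

print(f"estimated effect in the original population: {effect_orig:.2f}")
print(f"actual effect in the target population:      {effect_target:.2f}")
# Under these invented numbers the exported estimate (about 0.84) greatly
# overstates the effect that actually holds in the target (about 0.36).
```

Nothing in the original experimental estimate itself tells us which of the two numbers applies to the population we actually care about.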

Many ‘experimentalists’ claim that it is easy to replicate experiments under different conditions and therefore a fortiori easy to test the robustness of experimental results. But is it really that easy? Population selection is almost never simple. Had the problem of external validity only been about inference from sample to population, this would be no critical problem. But the really interesting inferences are those we try to make from specific labs/experiments/fields to specific real-world situations/institutions/structures that we are interested in understanding or (causally) explaining. And then the population problem is more difficult to tackle.

Duflo sees development as the implementation and replication of expert-led fixes to provide basic goods for the poor who are often blinded by their exacting situation. It is a technical quest for certainty and optimal measures in a fairly static framework.

In Duflo’s science-based ‘benevolent paternalism’, the experimental technique works as an ‘anti-politics machine’ … social goals being predefined and RCT outcomes settling ideally ambiguities and conflicts. Real-world politics – disregarding or instrumentalising RCTs – and institutions – resulting from social compromises instead of evidence – are thus often perceived as external disturbances and constraints to economic science and evidence-based policy.

Agnès Labrousse

Economists do not understand the economy

19 Oct, 2019 at 10:01 | Posted in Economics | Leave a comment

 

Accumulate, accumulate! That is Moses and the prophets!

18 Oct, 2019 at 18:38 | Posted in Economics | 2 Comments

 

In the postwar period, it has become increasingly clear that economic growth has not only brought greater prosperity. The other side of growth, in the form of pollution, contamination, wastage of resources, and climate change, has emerged as perhaps the greatest challenge of our time.

Against the mainstream theory’s view of the economy as a balanced and harmonious system, where growth and the environment go hand in hand, ecological economists object that it is better characterized as an unstable system that consumes energy and matter at an accelerating pace, and thereby poses a threat to the very basis of its own survival.

The Romanian-American economist Nicholas Georgescu-Roegen (1906-1994) argued in The Entropy Law and the Economic Process (1971) that the economy was actually a giant thermodynamic system in which entropy increases inexorably and our material basis disappears. If we choose to continue to produce with the techniques we have developed, then our society and earth will disappear faster than if we introduce small-scale production, resource-saving technologies and limited consumption.

Following Georgescu-Roegen, ecological economists have argued that industrial society inevitably leads to increased environmental pollution, energy crises and unsustainable growth.

Today we really need to reconsider the way we look upon how our economy influences the environment and climate change. And we need to do it fast. Nicholas Georgescu-Roegen gives us a good starting point for doing so!

A truly distinguished economist who knows what he is talking about

18 Oct, 2019 at 16:08 | Posted in Economics | Comments Off on A truly distinguished economist who knows what he is talking about

 

‘Nobel prize’ winners Duflo and Banerjee do not tackle the real root causes of poverty

17 Oct, 2019 at 17:54 | Posted in Economics | 2 Comments

Some go so far as to insist that development interventions should be subjected to the same kind of randomised control trials used in medicine, with “treatment” groups assessed against control groups. Such trials are being rolled out to evaluate the impact of a wide variety of projects – everything from water purification tablets to microcredit schemes, financial literacy classes to teachers’ performance bonuses …

The real problem with the “aid effectiveness” craze is that it narrows our focus down to micro-interventions at a local level that yield results that can be observed in the short term. At first glance this approach might seem reasonable and even beguiling. But it tends to ignore the broader macroeconomic, political and institutional drivers of impoverishment and underdevelopment. Aid projects might yield satisfying micro-results, but they generally do little to change the systems that produce the problems in the first place. What we need instead is to tackle the real root causes of poverty, inequality and climate change …

If we are concerned about effectiveness, then instead of assessing the short-term impacts of micro-projects, we should evaluate whole public policies … In the face of the sheer scale of the overlapping crises we face, we need systems-level thinking …

Fighting against poverty, inequality, biodiversity loss and climate change requires changing the rules of the international economic system to make it more ecological and fairer for the world’s majority. It’s time that we devise interventions – and accountability tools – appropriate to this new frontier.

Angus Deaton, James Heckman, Judea Pearl, Joseph Stiglitz et al.

Most ‘randomistas’ — not only Duflo and Banerjee — underestimate the heterogeneity problem. It does not just turn up as an external validity problem when trying to ‘export’ regression results to different times or different target populations. It is also often a problem internal to the millions of regression estimates that are produced every year.

Just like econometrics, randomization promises more than it can deliver, basically because it requires assumptions that in practice are not possible to maintain. And just like econometrics, randomization is basically a deductive method. Given the assumptions, these methods deliver deductive inferences. The problem, of course, is that we will never completely know when the assumptions are right. And although randomization may contribute to controlling for confounding, it does not guarantee it, since genuine randomness presupposes infinite experimentation and we know all real experimentation is finite. And even if randomization may help to establish average causal effects, it says nothing of individual effects unless homogeneity is added to the list of assumptions. Causal evidence generated by randomization procedures may be valid in ‘closed’ models, but what we usually are interested in is causal evidence in the real-world target system we happen to live in.
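A toy sketch (again my own, with invented numbers) of that last point, that randomization at best gives us average causal effects and is silent on individual effects unless homogeneity is assumed:

```python
# Toy sketch (my own invented numbers): a randomized experiment recovers the
# average treatment effect, but that 'on-average' number says little about
# individual effects when the population is heterogeneous.
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Hypothetical individual-level effects: half the population gains 2.0,
# the other half loses 1.8, so the average effect is a modest +0.1.
individual_effect = np.where(rng.random(n) < 0.5, 2.0, -1.8)
treated = rng.integers(0, 2, size=n)
baseline = rng.normal(0.0, 1.0, size=n)
y = baseline + treated * individual_effect

ate = y[treated == 1].mean() - y[treated == 0].mean()
print(f"estimated average treatment effect: {ate:+.2f}")
print(f"share of individuals actually harmed: {(individual_effect < 0).mean():.0%}")
# The experiment 'works on average' while harming roughly half of those treated.
```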

‘Ideally controlled experiments’ tell us with certainty what causes what effects — but only given the right ‘closures.’ Making appropriate extrapolations from (ideal, accidental, natural or quasi-) experiments to different settings, populations or target systems is not easy. “It works there” is no evidence for “it will work here”. Causes deduced in an experimental setting still have to show that they come with an export-warrant to the target population/system. The causal background assumptions made have to be justified, and without licenses to export, the value of ‘rigorous’ and ‘precise’ methods — and ‘on-average-knowledge’ — is despairingly small.

Apart from these methodological problems, I do think there is also a rather disturbing kind of scientific naïveté in the Duflo-Banerjee approach to combatting poverty. The way they present their whole endeavour smacks of not so little ‘scientism’ where fighting poverty becomes a question of applying ‘objective’ quantitative ‘techniques.’ But that can’t be the right way to fight poverty! Fighting poverty and inequality is basically a question of changing the structure and institutions of our economies and societies.

Econometrics — junk science with no relevance whatsoever to real-world economics

17 Oct, 2019 at 09:47 | Posted in Statistics & Econometrics | 3 Comments

Do you believe that 10 to 20% of the decline in crime in the 1990s was caused by an increase in abortions in the 1970s? Or that the murder rate would have increased by 250% since 1974 if the United States had not built so many new prisons? Did you believe predictions that the welfare reform of the 1990s would force 1,100,000 children into poverty?

If you were misled by any of these studies, you may have fallen for a pernicious form of junk science: the use of mathematical modeling to evaluate the impact of social policies. These studies are superficially impressive. Produced by reputable social scientists from prestigious institutions, they are often published in peer reviewed scientific journals. They are filled with statistical calculations too complex for anyone but another specialist to untangle. They give precise numerical “facts” that are often quoted in policy debates. But these “facts” turn out to be will o’ the wisps …

These predictions are based on a statistical technique called multiple regression that uses correlational analysis to make causal arguments … The problem with this, as anyone who has studied statistics knows, is that correlation is not causation. A correlation between two variables may be “spurious” if it is caused by some third variable. Multiple regression researchers try to overcome the spuriousness problem by including all the variables in analysis. The data available for this purpose simply is not up to this task, however, and the studies have consistently failed.

Ted Goertzel
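Goertzel’s point about spurious correlation is easy to illustrate with a small simulation (mine, not his; the variables and coefficients are of course made up): a regression of y on x delivers a precise-looking ‘effect’ even though x has no causal influence on y whatsoever, because both are driven by an omitted third variable z.

```python
# Illustrative simulation (not Goertzel's own): y is regressed on x and a
# sizeable 'effect' appears, although x has no causal influence on y at all.
# Both variables are driven by an omitted third variable z.
import numpy as np

rng = np.random.default_rng(2)
n = 20_000
z = rng.normal(0.0, 1.0, n)              # the confounder never measured
x = 0.9 * z + rng.normal(0.0, 0.5, n)
y = 1.5 * z + rng.normal(0.0, 0.5, n)    # note: y does not depend on x

# Naive regression of y on x (with an intercept).
X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
print(f"estimated 'effect' of x on y: {beta[1]:.2f} (true causal effect: 0)")

# Including z removes the spurious relation, but only if z is in the data,
# which is exactly what we usually cannot count on.
Xz = np.column_stack([np.ones(n), x, z])
beta_z = np.linalg.lstsq(Xz, y, rcond=None)[0]
print(f"coefficient on x once z is controlled for: {beta_z[1]:.2f}")
```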

Mainstream economists often hold the view that if you are critical of econometrics it can only be because you are a sadly misinformed and misguided person who dislikes and does not understand much of it.

As Goertzel’s eminent article shows, this is, however, nothing but a gross misapprehension.

And just like Goertzel, Keynes certainly did not misunderstand the crucial issues at stake in his critique of econometrics. Quite the contrary. He knew them all too well — and was not satisfied with the validity and philosophical underpinnings of the assumptions made for applying its methods.

Keynes’ critique is still valid and unanswered in the sense that the problems he pointed at are still with us today and ‘unsolved.’ Ignoring them — the most common practice among applied econometricians — is not to solve them.

To apply statistical and mathematical methods to the real-world economy, the econometrician has to make some quite strong assumptions. In a review of Tinbergen’s econometric work — published in The Economic Journal in 1939 — Keynes gave a comprehensive critique of Tinbergen’s work, focusing on the limiting and unreal character of the assumptions that econometric analyses build on:

Completeness: Where Tinbergen attempts to specify and quantify which different factors influence the business cycle, Keynes maintains there has to be a complete list of all the relevant factors to avoid misspecification and spurious causal claims. Usually, this problem is ‘solved’ by econometricians assuming that they somehow have a ‘correct’ model specification. Keynes is, to put it mildly, unconvinced:

It will be remembered that the seventy translators of the Septuagint were shut up in seventy separate rooms with the Hebrew text and brought out with them, when they emerged, seventy identical translations. Would the same miracle be vouchsafed if seventy multiple correlators were shut up with the same statistical material? And anyhow, I suppose, if each had a different economist perched on his a priori, that would make a difference to the outcome.

J M Keynes

Homogeneity: To make inductive inferences possible — and to be able to apply econometrics — the system we try to analyse has to have a large degree of ‘homogeneity.’ According to Keynes most social and economic systems — especially from the perspective of real historical time — lack that ‘homogeneity.’ As he had already argued in Treatise on Probability, it wasn’t always possible to take repeated samples from a fixed population when we were analysing real-world economies. In many cases, there simply are no reasons at all to assume the samples to be homogeneous. Lack of ‘homogeneity’ makes the principle of ‘limited independent variety’ non-applicable, and hence makes inductive inference, strictly seen, impossible, since one of its fundamental logical premises is not satisfied. Without “much repetition and uniformity in our experience” there is no justification for placing “great confidence” in our inductions.

And then, of course, there is also the ‘reverse’ variability problem of non-excitation: factors that do not change significantly during the period analysed can still very well be extremely important causal factors.

Stability: Tinbergen assumes there is a stable spatio-temporal relationship between the variables his econometric models analyze. But as Keynes had argued already in his Treatise on Probability it was not really possible to make inductive generalisations based on correlations in one sample. As later studies of ‘regime shifts’ and ‘structural breaks’ have shown us, it is exceedingly difficult to find and establish the existence of stable econometric parameters for anything but rather short time series.
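A small simulated example (my own, with a made-up data-generating process) of what a ‘regime shift’ does to the hoped-for stability of econometric parameters:

```python
# Invented example of a structural break: the 'parameter' linking x and y
# changes sign halfway through the sample, so the coefficient estimated on
# the pooled data describes neither regime.
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(200)
x = rng.normal(0.0, 1.0, 200)
slope = np.where(t < 100, 2.0, -1.0)     # regime shift at t = 100
y = slope * x + rng.normal(0.0, 0.5, 200)

def ols_slope(xs, ys):
    X = np.column_stack([np.ones(len(xs)), xs])
    return np.linalg.lstsq(X, ys, rcond=None)[0][1]

print(f"slope estimated on the first regime:  {ols_slope(x[:100], y[:100]):+.2f}")
print(f"slope estimated on the second regime: {ols_slope(x[100:], y[100:]):+.2f}")
print(f"slope estimated on the pooled sample: {ols_slope(x, y):+.2f}")
```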

Measurability: Tinbergen’s model assumes that all relevant factors are measurable. Keynes questions if it is possible to adequately quantify and measure things like expectations and political and psychological factors. And more than anything, he questioned — both on epistemological and ontological grounds — that it was always and everywhere possible to measure real-world uncertainty with the help of probabilistic risk measures. Thinking otherwise can, as Keynes wrote, “only lead to error and delusion.”

Independence: Tinbergen assumes that the variables he treats are independent (still a standard assumption in econometrics). Keynes argues that in such a complex, organic and evolutionary system as an economy, independence is a deeply unrealistic assumption to make. Building econometric models on that kind of simplistic and unrealistic assumptions risks producing nothing but spurious correlations and causalities. Real-world economies are organic systems for which the statistical methods used in econometrics are ill-suited, or even, strictly seen, inapplicable. Mechanical probabilistic models have little leverage when applied to non-atomic evolving organic systems — such as economies.

It is a great fault of symbolic pseudo-mathematical methods of formalising a system of economic analysis … that they expressly assume strict independence between the factors involved and lose all their cogency and authority if this hypothesis is disallowed; whereas, in ordinary discourse, where we are not blindly manipulating but know all the time what we are doing and what the words mean, we can keep “at the back of our heads” the necessary reserves and qualifications and the adjustments which we shall have to make later on, in a way in which we cannot keep complicated partial differentials “at the back” of several pages of algebra which assume that they all vanish.

Building econometric models can’t be a goal in itself. Good econometric models are means that make it possible for us to infer things about the real-world systems they ‘represent.’ If we can’t show that the mechanisms or causes that we isolate and handle in our econometric models are ‘exportable’ to the real world, they are of limited value to our understanding, explanations or predictions of real-world economic systems.

The kind of fundamental assumption about the character of material laws, on which scientists appear commonly to act, seems to me to be much less simple than the bare principle of uniformity. They appear to assume something much more like what mathematicians call the principle of the superposition of small effects, or, as I prefer to call it, in this connection, the atomic character of natural law. The system of the material universe must consist, if this kind of assumption is warranted, of bodies which we may term (without any implication as to their size being conveyed thereby) legal atoms, such that each of them exercises its own separate, independent, and invariable effect, a change of the total state being compounded of a number of separate changes each of which is solely due to a separate portion of the preceding state …

Yet if different wholes were subject to laws qua wholes and not simply on account of and in proportion to the differences of their parts, knowledge of a part could not lead, it would seem, even to presumptive or probable knowledge as to its association with other parts.

Linearity: To make his models tractable, Tinbergen assumes the relationships between the variables he studies to be linear. This is still standard procedure today, but as Keynes writes:

It is a very drastic and usually improbable postulate to suppose that all economic forces are of this character, producing independent changes in the phenomenon under investigation which are directly proportional to the changes in themselves; indeed, it is ridiculous.

To Keynes, it was a ‘fallacy of reification’ to assume that all quantities are additive (an assumption closely linked to independence and linearity).

The unpopularity of the principle of organic unities shows very clearly how great is the danger of the assumption of unproved additive formulas. The fallacy, of which ignorance of organic unity is a particular instance, may perhaps be mathematically represented thus: suppose f(x) is the goodness of x and f(y) is the goodness of y. It is then assumed that the goodness of x and y together is f(x) + f(y) when it is clearly f(x + y) and only in special cases will it be true that f(x + y) = f(x) + f(y). It is plain that it is never legitimate to assume this property in the case of any given function without proof.

J. M. Keynes “Ethics in Relation to Conduct” (1903)

And as even one of the founding fathers of modern econometrics — Trygve Haavelmo — wrote:

What is the use of testing, say, the significance of regression coefficients, when maybe, the whole assumption of the linear regression equation is wrong?
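Haavelmo’s worry is easily illustrated with a toy simulation (my own construction, with an invented non-linear data-generating process): the regression coefficient comes out highly ‘significant’, yet the linear specification it presupposes is simply wrong.

```python
# Toy illustration of Haavelmo's point (invented data-generating process):
# a straight line is fitted to data that are in fact non-linear. The slope
# is 'precisely estimated', but the specification is wrong.
import numpy as np

rng = np.random.default_rng(4)
n = 10_000
x = rng.uniform(0.0, 4.0, n)
y = np.sin(2.0 * x) + 0.3 * x**2 + rng.normal(0.0, 0.2, n)

X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta
sigma2 = resid @ resid / (n - 2)
se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])

print(f"linear slope estimate: {beta[1]:.3f}, t-statistic about {beta[1] / se:.0f}")
# A huge t-statistic, yet the fitted line misses the curvature entirely:
# testing the significance of the coefficient answers the wrong question.
```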

Real-world social systems are usually not governed by stable causal mechanisms or capacities. The kinds of ‘laws’ and relations that econometrics has established are laws and relations about entities in models that presuppose causal mechanisms and variables — and the relationship between them — being linear, additive, homogeneous, stable, invariant and atomistic. But when causal mechanisms operate in the real world, they only do so in ever-changing and unstable combinations where the whole is more than a mechanical sum of parts. Since statisticians and econometricians — as far as I can see — haven’t been able to convincingly warrant their assumptions of homogeneity, stability, invariance, independence and additivity as being ontologically isomorphic to real-world economic systems, Keynes’ critique is still valid. As long as — as Keynes writes in a letter to Frisch in 1935 — “nothing emerges at the end which has not been introduced expressively or tacitly at the beginning,” I remain doubtful of the scientific aspirations of econometrics.

In his critique of Tinbergen, Keynes points us to the fundamental logical, epistemological and ontological problems of applying statistical methods to a basically unpredictable, uncertain, complex, unstable, interdependent, and ever-changing social reality. Methods designed to analyse repeated sampling in controlled experiments under fixed conditions are not easily extended to an organic and non-atomistic world where time and history play decisive roles.

Econometric modelling should never be a substitute for thinking. From that perspective, it is really depressing to see how much of Keynes’ critique of the pioneering econometrics in the 1930s-1940s is still relevant today. And that is also a reason why we — as Goertzel does — have to keep on criticizing it.

The general line you take is interesting and useful. It is, of course, not exactly comparable with mine. I was raising the logical difficulties. You say in effect that, if one was to take these seriously, one would give up the ghost in the first lap, but that the method, used judiciously as an aid to more theoretical enquiries and as a means of suggesting possibilities and probabilities rather than anything else, taken with enough grains of salt and applied with superlative common sense, won’t do much harm. I should quite agree with that. That is how the method ought to be used.

Keynes, letter to E.J. Broster, December 19, 1939

Seyla Benhabib and Rainer Forst on Habermas

16 Oct, 2019 at 19:11 | Posted in Politics & Society | Leave a comment

 

The experimental approach to global poverty

15 Oct, 2019 at 19:21 | Posted in Economics | 4 Comments

 

Wicked game

15 Oct, 2019 at 13:39 | Posted in Varia | Leave a comment

 

L’un part, l’autre reste

14 Oct, 2019 at 23:07 | Posted in Varia | Leave a comment

 

‘Nobel prize’ winner Esther Duflo on how to fight poverty

14 Oct, 2019 at 15:35 | Posted in Economics | 2 Comments

 

Today The Royal Swedish Academy of Sciences announced that it has decided to award The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel for 2019 to Esther Duflo, Abhijit Banerjee and Michael Kremer.

Great choice!

In one single stroke the academy doubled the number of women having received the ‘Nobel prize’ in economics. Compared with most other recipients over the last thirty years this is an excellent choice (although yours truly does have some quarrels with Duflo’s preferred randomization methodology).

On the limited applicability of game theory

14 Oct, 2019 at 10:34 | Posted in Economics | 1 Comment

Many mainstream economists — still — think that game theory is useful and can be applied to real life and give important and interesting results. That, however, is a rather unsubstantiated view. What game theory does is, strictly seen, nothing more than investigate the logic of behaviour among non-existent robot imitations of humans. Knowing how those ‘rational fools’ play games does not help us to decide and act when interacting with real people. Knowing some game theory may actually make us behave in a way that hurts both ourselves and others. Decision-making and social interaction are always embedded in socio-cultural contexts. Not taking account of that, game theory will remain an analytical cul-de-sac that will never be able to come up with useful and relevant explanations.

Over-emphasizing the reach of instrumental rationality and abstracting away from the influence of many factors known to be important reduces the analysis to a pure thought experiment without any substantial connection to reality. Limiting theoretical economic analysis in this way — not incorporating both motivational and institutional factors when trying to explain human behaviour — makes economics insensitive to social facts.

Game theorists extensively exploit ‘rational choice’ assumptions in their explanations. That is probably also the reason why game theory has not been able to accommodate known anomalies into the core claims of the theory. That should hardly come as a surprise to anyone. Game theory, with its axiomatic view of individuals’ tastes, beliefs, and preferences, cannot accommodate very much of real-life behaviour. It is hard to find really compelling arguments in favour of us continuing down its barren paths, since individuals obviously neither comply with nor are guided by game theory. Apart from (perhaps) a few notable exceptions — like Schelling on segregation (1978) and Akerlof on ‘lemons’ (1970) — it is difficult to find really successful applications of game theory. Why? To a large extent simply because the boundary conditions of game-theoretical models are false and baseless from a real-world perspective. And, perhaps even more importantly, since they are not even close to being good approximations of real life, game theory lacks predictive power. This should come as no surprise. As long as game theory sticks to its ‘rational choice’ foundations, there is not much to be hoped for.

Game theorists can, of course, marginally modify their toolbox and fiddle with the auxiliary assumptions to get whatever outcome they want. But as long as the ‘rational choice’ core assumptions are left intact, it seems a pointless exercise in tinkering with an already excessive deductive-axiomatic formalism. If you do believe in the real-world relevance of game-theoretical ‘science fiction’ assumptions such as expected utility, ‘common knowledge,’ ‘backward induction,’ correct and consistent beliefs, etc., then adding things like ‘framing,’ ‘cognitive bias,’ and different kinds of heuristics does not ‘solve’ any problem. If we want to construct a theory that can provide us with explanations of individual cognition, decisions, and social interaction, we have to look for something else.

In real life, people — acting in a world where the assumption of an unchanging future does not hold — do not always know what kind of game they are playing. And if they do, they often do not take it as given, but rather try to change it in different ways. And the way they play — the strategies they choose to follow — depends not only on the expected utilities but also on the specific things over which those utilities are calculated. What these specifics are — food, water, luxury cars, money, etc. — influences the extent to which we let justice, fairness and equality shape our choices. ‘Welfarism’ — the consequentialist view that all that really matters to people is the utility of the outcomes — is a highly questionable shortcoming built into game theory, and it certainly detracts from its usefulness in understanding real-life choices made outside the model world of game theory.

Games people play in societies are usually not like games of chess. In the confined context of parlour games — like the auction negotiations nowadays so often appealed to in ‘defence’ of the usefulness of game theory — the rather thin rationality concept on which game theory is founded may be adequate. But far from being congratulatory, this ought to warn us of the really bleak applicability of game theory. With its highly questionable assumptions on ‘rationality’, equilibrium solutions, information, and knowledge, game theory is simply useless as an instrument for explaining real-world phenomena.

Applications of game theory have on the whole resulted in massive predictive failures. People simply do not act according to the theory. They do not know or possess the assumed probabilities, utilities, beliefs or information needed to calculate the different (‘subgame perfect,’ ‘trembling-hand perfect,’ or whatever Nash) equilibria. They may be reasonable and make use of their given cognitive faculties as well as they can, but they are obviously not the perfect and costless hyper-rational expected-utility-maximizing calculators game theory posits. And fortunately so. Being ‘reasonable’ makes them avoid all those made-up ‘rationality’ traps that game theory would have put them in if they had tried to act as consistent players in a game-theoretical sense.
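A simple way to see the kind of ‘rationality trap’ alluded to here is to let a computer do the backward induction in a centipede-style game (the payoff numbers below are hypothetical, chosen only for illustration): the ‘rational’ prediction is that the game ends at the very first node, even though both players would be far better off playing on.

```python
# Backward-induction sketch for a small centipede-style game (hypothetical
# payoffs, for illustration only). payoffs[i] gives (player 0's payoff,
# player 1's payoff) if the game is stopped at node i; player i % 2 moves
# at node i; final_payoffs is what the players get if nobody ever stops.
from typing import List, Optional, Tuple

def solve(payoffs: List[Tuple[float, float]],
          final_payoffs: Tuple[float, float]) -> Tuple[Tuple[float, float], Optional[int]]:
    """Return the outcome of play by two payoff-maximizing players and the
    node at which the game is stopped (None if it is never stopped)."""
    continuation = final_payoffs   # what happens if the current mover passes
    first_stop: Optional[int] = None
    for i in reversed(range(len(payoffs))):
        mover = i % 2
        if payoffs[i][mover] >= continuation[mover]:
            continuation = payoffs[i]      # the mover prefers to stop here
            first_stop = i
        # otherwise the mover passes and the continuation value is unchanged
    return continuation, first_stop

# The pot grows with every pass, yet backward induction predicts stopping
# at the very first node with payoffs (2, 1) instead of reaching (16, 8).
payoffs = [(2, 1), (1, 4), (4, 2), (2, 8), (8, 4), (4, 16)]
outcome, node = solve(payoffs, final_payoffs=(16, 8))
print(f"predicted stop at node {node} with payoffs {outcome}")
```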

The lack of successful empirical applications of game theory shows that there certainly are definite limits to how far instrumental rationality can take us in trying to explain and understand individual behaviour in social contexts. The kind of preferences, knowledge, information and beliefs — and the lack of contextual ‘thickness’ — that are assumed to be at hand in the axiomatic game-theoretical set-up do not give much space for delivering real and relevant insights into the kind of decision-making and action we encounter in our everyday lives.

Instead of making formal logical argumentation based on deductive-axiomatic models the message, we are arguably better served by social scientists who more than anything else try to contribute to solving real problems – and in that endeavour, other inference schemes may be much more relevant than formal logic.

Game-theoretical models build on a theory that is abstract, unrealistic, and presents mostly non-testable hypotheses. One important rationale behind this kind of model building is the quest for rigour, and more precisely, logical rigour. Instead of trying to establish a connection between empirical data and assumptions, ‘truth’ has come to be reduced to a question of fulfilling internal consistency demands between premises and conclusions, rather than of showing a ‘congruence’ between model assumptions and reality. This has, of course, severely restricted the applicability of game theory and its models.

Game theory builds on ‘rational choice’ theory and so shares its shortcomings. The lack of a bridge between theory and real-world phenomena is especially problematic, since it makes game-theoretical theory testing and explanation impossible.

The world in which we live is inherently uncertain and quantifiable probabilities are the exception rather than the rule. To every statement about it is attached a ‘weight of argument’ that makes it impossible to reduce our beliefs and expectations to a one-dimensional stochastic probability distribution. If “God does not play dice” as Einstein maintained, I would add “nor do people.” The world as we know it has limited scope for certainty and perfect knowledge. Its intrinsic and almost unlimited complexity and the interrelatedness of its organic parts prevent the possibility of treating it as constituted by ‘legal atoms’ with discretely distinct, separable and stable causal relations. Our knowledge accordingly has to be of a rather fallible kind.

To search for precision and rigour in such a world is self-defeating, at least if precision and rigour are supposed to assure external validity. The only way to defend such an endeavour is to turn a blind eye to ontology and restrict oneself to proving things in closed model-worlds. Why we should care about these and not ask questions of relevance is hard to see. We have to at least justify our disregard for the gap between the nature of the real world and the theories and models of it.

If the real world is fuzzy, vague and indeterminate, then why should our models build upon a desire to describe it as precise and predictable? Even if there always has to be a trade-off between theory-internal validity and external validity, we have to ask ourselves if our models are relevant.

‘Human logic’ has to supplant the classical, formal, logic of deductivism if we want to have anything of interest to say of the real world we inhabit. Logic is a marvellous tool in mathematics and axiomatic-deductivist systems, but a poor guide for action in real-world systems, in which concepts and entities are without clear boundaries and continually interact and overlap. In this world, I would say we are better served with a methodology that takes into account that the more we know, the more we know we do not know.

The limits of extrapolation in economics

13 Oct, 2019 at 19:15 | Posted in Theory of Science & Methodology | 2 Comments

There are two basic challenges that confront any account of extrapolation that seeks to resolve the shortcomings of simple induction. One challenge, which I call extrapolator’s circle, arises from the fact that extrapolation is worthwhile only when there are important limitations on what one can learn about the target by studying it directly. The challenge, then, is to explain how the suitability of the model as a basis for extrapolation can be established given only limited, partial information about the target … The second challenge is a direct consequence of the heterogeneity of populations studied in biology and social sciences. Because of this heterogeneity, it is inevitable there will be causally relevant differences between the model and the target population.

In economics — as a rule — we can’t experiment on the real-world target directly. To experiment, economists therefore standardly construct ‘surrogate’ models and perform ‘experiments’ on them. To be of interest to us, these surrogate models have to be shown to be relevantly ‘similar’ to the real-world target, so that knowledge from the model can be exported to the real-world target. The fundamental problem highlighted by Steel is that this ‘bridging’ is deeply problematic — to show that what is true of the model is also true of the real-world target, we have to know what is true of the target, but to know what is true of the target we have to know that we have a good model …

Most models in science are representations of something else. Models “stand for” or “depict” specific parts of a “target system” (usually the real world). A model that has neither surface nor deep resemblance to important characteristics of real economies ought to be treated with prima facie suspicion. How could we possibly learn about the real world if there are no parts or aspects of the model that have relevant and important counterparts in the real-world target system? The burden of proof lies on theoretical economists who think they have contributed something of scientific relevance without even hinting at any bridge enabling us to traverse from model to reality. All theories and models have to use sign vehicles to convey some kind of content that may be used for saying something about the target system. But purpose-built tractability assumptions — like, e.g., invariance, additivity, faithfulness, modularity, common knowledge, etc. — made solely to secure a way of reaching deductively validated results in mathematical models, are of little value if they cannot be validated outside of the model.

All empirical sciences use simplifying or unrealistic assumptions in their modeling activities. That is (no longer) the issue – as long as the assumptions made are not unrealistic in the wrong way or for the wrong reasons.

Theories are difficult to directly confront with reality. Economists therefore build models of their theories. Those models are representations that are directly examined and manipulated to indirectly say something about the target systems.

There are economic methodologists and philosophers who argue for a less demanding view on modeling and theorizing in economics. And some theoretical economists deem it quite enough to consider economics a mere “conceptual activity” where the model is not so much seen as an abstraction from reality, but rather as a kind of “parallel reality”. By considering models as such constructions, the economist distances the model from the intended target, only demanding that the models be credible, thereby enabling him to make inductive inferences to the target systems.

But what gives license to this leap of faith, this “inductive inference”? Within-model inferences in formal-axiomatic models are usually deductive, but that does not come with a warrant of reliability for inferring conclusions about specific target systems. Since all models in a strict sense are false (necessarily building in part on false assumptions) deductive validity cannot guarantee epistemic truth about the target system. To argue otherwise would surely be an untenable overestimation of the epistemic reach of surrogate models.

Models do not only face theory. They also have to look to the world. But being able to model a credible world, a world that somehow could be considered real or similar to the real world, is not the same as investigating the real world. Even though all theories are false, since they simplify, they may still possibly serve our pursuit of truth. But then they cannot be unrealistic or false in just any way. The falsehood or unrealism has to be qualified (in terms of resemblance, relevance, etc.). At the very least, the minimalist demand on models in terms of credibility has to give way to a stronger epistemic demand of appropriate similarity and plausibility. One could of course also ask for a sensitivity or robustness analysis, but the credible world, even after having been tested for sensitivity and robustness, can still be a long way from reality – and unfortunately often in ways we know are important. Robustness of claims in a model does not per se give a warrant for exporting the claims to real-world target systems.
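A stylized sketch (my own, not drawn from the modelling literature) of that last point: a claim can be perfectly robust across a whole family of surrogate models and still be false of a target system whose structure the family never contemplated.

```python
# Stylized sketch (my own): a claim that is robust across a family of
# surrogate models can still fail in a target system with a different
# structure. The 'claim' here is that increasing x always increases y.
import numpy as np

x = np.linspace(0.0, 1.0, 201)

# Family of surrogate models y = a * x: the claim holds for every member,
# however we perturb the parameter a, so it looks perfectly 'robust'.
robust_in_models = all(np.all(np.diff(a * x) > 0) for a in (0.5, 1.0, 2.0, 5.0))

# Hypothetical target system with a hump-shaped relation that none of the
# surrogate models can even express.
y_target = x * (1.0 - x)
holds_in_target = bool(np.all(np.diff(y_target) > 0))

print(f"claim robust across the surrogate models: {robust_in_models}")   # True
print(f"claim true of the target system:          {holds_in_target}")    # False
```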

Questions of external validity — the claims the extrapolation inference is supposed to deliver — are important. It can never be enough that models somehow are regarded as internally consistent. One always also has to pose questions of consistency with the data. Internal consistency without external validity is worth nothing.

