Unpacking the ‘Nobel prize’ in economics

19 Oct, 2019 at 12:33 | Posted in Economics | Leave a comment

In a 2017 speech, Duflo famously likened economists to plumbers. In her view, the role of an economist is to solve real-world problems in specific situations. This is a dangerous assertion, as it suggests that the “plumbing” the randomistas are doing is purely technical, guided neither by theory nor by values. The randomistas’ approach to economics is, however, neither objective, value-neutral, nor pragmatic, but rooted in a particular theoretical framework and world view – neoclassical microeconomic theory and methodological individualism.

The experiments’ grounding has implications for how experiments are designed and the underlying assumptions about individual and collective behavior that are made. Perhaps the most obvious example of this is that the laureates often argue that specific aspects of poverty can be solved by correcting cognitive biases. Unsurprisingly, there is much overlap between the work of randomistas and the mainstream behavioral economists, including a focus on nudges that may facilitate better choices on the part of people living in poverty.

Another example is Duflo’s analysis of women’s empowerment. Naila Kabeer argues that it employs an understanding of human behavior “uncritically informed by neoclassical microeconomic theory.” Since all behavior can allegedly be explained as a manifestation of individual maximizing behavior, alternative explanations are dispensed with. Because of this, Duflo fails to take account of a series of other important factors in women’s empowerment, such as the role of sustained struggle by women’s organizations for rights, or the need to address the unfair distribution of unpaid work that limits women’s ability to participate in the community.

Ingrid Harvold Kvangraven

Nowadays many mainstream economists maintain that ‘imaginative empirical methods’ — such as natural experiments, field experiments, lab experiments and RCTs — can help us answer questions about the external validity of economic models. In their view, such experiments are more or less tests of ‘an underlying economic model’ and enable economists to make the right selection from the ever-expanding ‘collection of potentially applicable models.’

On closer inspection, however, there are in fact few real reasons to share this optimism about the alleged ‘empirical turn’ in economics.

If we see experiments or field studies as theory tests or models that ultimately aspire to say something about the real ‘target system,’ then the problem of external validity is central.

Assume that you have examined how the performance of a group of people (A) is affected by a specific ‘treatment’ (B). How can we extrapolate/generalize to new samples outside the original population? How do we know that a replication attempt ‘succeeds’? How do we know when replicated experimental results can be said to justify inferences made about samples from the original population? If, for example, P(A|B) is the conditional density function for the original sample, and we are interested in making an extrapolative prediction of E[P′(A|B)], how can we know that the new sample’s density function is identical to the original’s? Unless we can give a really good argument for this being the case, inferences built on P(A|B) say nothing about the target system’s P′(A|B).

External validity/extrapolation/generalization is founded on the assumption that inferences based on P(A|B) are exportable to other populations for which P′(A|B) applies. Sure, if one can convincingly show that P and P′ are similar enough, the problems are perhaps surmountable. But arbitrarily introducing functional specification restrictions of the type invariance/stability/homogeneity is far from satisfactory. And often it is – unfortunately – exactly this that I see when I study mainstream economists’ RCTs and ‘experiments.’
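The extrapolation problem can be illustrated with a small simulation (all numbers here are made up for illustration): if the treatment effect depends on a background characteristic x whose distribution differs between the original population P and the target population P′, the effect estimated in P can even have the wrong sign for P′.

```python
import random

random.seed(0)

def treatment_effect(x):
    # Hypothetical structural effect: the treatment helps only where x is low.
    return 1.5 if x < 0.5 else -0.5

def average_effect(x_draws):
    return sum(treatment_effect(x) for x in x_draws) / len(x_draws)

# Original sample P: x mostly low; target population P': x mostly high.
sample_P  = [random.betavariate(2, 5) for _ in range(100_000)]
target_Pp = [random.betavariate(5, 2) for _ in range(100_000)]

print(f"average effect in P:  {average_effect(sample_P):+.2f}")
print(f"average effect in P': {average_effect(target_Pp):+.2f}")
# The sign flips: an inference 'exported' from P misleads about P'.
```

Nothing in the data gathered on P warns us of the flip; that knowledge sits in the (here assumed, in practice unknown) dependence of the effect on x and in how P and P′ differ.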

Many ‘experimentalists’ claim that it is easy to replicate experiments under different conditions and therefore a fortiori easy to test the robustness of experimental results. But is it really that easy? Population selection is almost never simple. Had the problem of external validity only been about inference from sample to population, this would be no critical problem. But the really interesting inferences are those we try to make from specific labs/experiments/fields to the specific real-world situations/institutions/structures that we are interested in understanding or (causally) explaining. And then the population problem is more difficult to tackle.

Duflo sees development as the implementation and replication of expert-led fixes to provide basic goods for the poor who are often blinded by their exacting situation. It is a technical quest for certainty and optimal measures in a fairly static framework.

In Duflo’s science-based ‘benevolent paternalism’, the experimental technique works as an ‘anti-politics machine’ … social goals are predefined and RCT outcomes ideally settle ambiguities and conflicts. Real-world politics – disregarding or instrumentalising RCTs – and institutions – resulting from social compromises instead of evidence – are thus often perceived as external disturbances and constraints on economic science and evidence-based policy.

Agnès Labrousse

Economists do not understand the economy

19 Oct, 2019 at 10:01 | Posted in Economics | Leave a comment

 

Accumulate, accumulate! That is Moses and the prophets!

18 Oct, 2019 at 18:38 | Posted in Economics | 2 Comments

 

In the postwar period, it has become increasingly clear that economic growth has brought not only greater prosperity. The other side of growth – pollution, contamination, wastage of resources, and climate change – has emerged as perhaps the greatest challenge of our time.

Against mainstream theory’s view of the economy as a balanced and harmonious system, in which growth and the environment go hand in hand, ecological economists object that it is better characterized as an unstable system that consumes energy and matter at an accelerating pace, thereby posing a threat to the very basis of its own survival.

The Romanian-American economist Nicholas Georgescu-Roegen (1906-1994) argued in The Entropy Law and the Economic Process (1971) that the economy is in fact a giant thermodynamic system in which entropy increases inexorably and our material basis disappears. If we choose to continue producing with the techniques we have developed, our society and the earth will disappear faster than if we introduce small-scale production, resource-saving technologies and limited consumption.

Following Georgescu-Roegen, ecological economists have argued that industrial society inevitably leads to increased environmental pollution, energy crises and unsustainable growth.

Today we really need to reconsider how our economy influences the environment and climate change. And we need to do it fast. Nicholas Georgescu-Roegen gives us a good starting point for doing so!

A truly distinguished economist who knows what he is talking about

18 Oct, 2019 at 16:08 | Posted in Economics | Comments Off on A truly distinguished economist who knows what he is talking about

 

‘Nobel prize’ winners Duflo and Banerjee do not tackle the real root causes of poverty

17 Oct, 2019 at 17:54 | Posted in Economics | 2 Comments

Some go so far as to insist that development interventions should be subjected to the same kind of randomised control trials used in medicine, with “treatment” groups assessed against control groups. Such trials are being rolled out to evaluate the impact of a wide variety of projects – everything from water purification tablets to microcredit schemes, financial literacy classes to teachers’ performance bonuses …

The real problem with the “aid effectiveness” craze is that it narrows our focus down to micro-interventions at a local level that yield results that can be observed in the short term. At first glance this approach might seem reasonable and even beguiling. But it tends to ignore the broader macroeconomic, political and institutional drivers of impoverishment and underdevelopment. Aid projects might yield satisfying micro-results, but they generally do little to change the systems that produce the problems in the first place. What we need instead is to tackle the real root causes of poverty, inequality and climate change …

If we are concerned about effectiveness, then instead of assessing the short-term impacts of micro-projects, we should evaluate whole public policies … In the face of the sheer scale of the overlapping crises we face, we need systems-level thinking …

Fighting against poverty, inequality, biodiversity loss and climate change requires changing the rules of the international economic system to make it more ecological and fairer for the world’s majority. It’s time that we devise interventions – and accountability tools – appropriate to this new frontier.

Angus Deaton, James Heckman, Judea Pearl, Joseph Stiglitz et al.

Most ‘randomistas’ — not only Duflo and Banerjee — underestimate the heterogeneity problem. It does not turn up only as an external-validity problem when trying to ‘export’ regression results to different times or different target populations. It is often also a problem internal to the millions of regression estimates produced every year.

Like econometrics, randomization promises more than it can deliver, basically because it requires assumptions that cannot be maintained in practice. And like econometrics, randomization is basically a deductive method: given the assumptions, it delivers deductive inferences. The problem, of course, is that we can never fully know when the assumptions are right. And although randomization may contribute to controlling for confounding, it does not guarantee it, since genuine randomness presupposes infinite experimentation and all real experimentation is finite. And even if randomization may help to establish average causal effects, it says nothing about individual effects unless homogeneity is added to the list of assumptions. Causal evidence generated by randomization procedures may be valid in ‘closed’ models, but what we are usually interested in is causal evidence about the real-world target system we happen to live in.

‘Ideally controlled experiments’ tell us with certainty what causes what effects — but only given the right ‘closures.’ Making appropriate extrapolations from (ideal, accidental, natural or quasi) experiments to different settings, populations or target systems is not easy. “It works there” is no evidence that “it will work here”. Causes deduced in an experimental setting still have to show that they come with an export warrant to the target population/system. The causal background assumptions made have to be justified, and without licenses to export, the value of ‘rigorous’ and ‘precise’ methods — and ‘on-average knowledge’ — is despairingly small.
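The point about average versus individual causal effects can be sketched in a few lines of simulation (the effect sizes are invented): a randomized design recovers the average treatment effect well, yet under heterogeneity a positive average can coexist with a sizeable harmed subgroup, which the RCT estimate alone cannot reveal.

```python
import random

random.seed(1)

n = 10_000
# Potential outcomes: 70% of units gain +2 from treatment, 30% lose 1.
units = []
for _ in range(n):
    y0 = random.gauss(0, 1)                      # untreated outcome
    effect = 2.0 if random.random() < 0.7 else -1.0
    units.append((y0, y0 + effect, effect))      # (y0, y1, individual effect)

# Randomize treatment and estimate the ATE by difference in means.
treated = [random.random() < 0.5 for _ in range(n)]
mean_t = sum(u[1] for u, t in zip(units, treated) if t) / sum(treated)
mean_c = sum(u[0] for u, t in zip(units, treated) if not t) / (n - sum(treated))
ate_hat = mean_t - mean_c

true_ate = sum(u[2] for u in units) / n          # ≈ 0.7*2 − 0.3*1 = 1.1
harmed = sum(u[2] < 0 for u in units) / n        # ≈ 0.30

print(f"estimated ATE: {ate_hat:.2f} (true ≈ {true_ate:.2f})")
print(f"share of units harmed despite a positive ATE: {harmed:.0%}")
```

Only the homogeneity assumption, added from outside the experiment, would license reading the average as an individual effect.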

Apart from these methodological problems, I do think there is also a rather disturbing kind of scientific naïveté in the Duflo-Banerjee approach to combating poverty. The way they present their whole endeavour smacks of not a little ‘scientism,’ where fighting poverty becomes a question of applying ‘objective’ quantitative ‘techniques.’ But that can’t be the right way to fight poverty! Fighting poverty and inequality is basically a question of changing the structure and institutions of our economies and societies.

The experimental approach to global poverty

15 Oct, 2019 at 19:21 | Posted in Economics | 4 Comments

 

‘Nobel prize’ winner Esther Duflo on how to fight poverty

14 Oct, 2019 at 15:35 | Posted in Economics | 2 Comments

 

Today The Royal Swedish Academy of Sciences announced that it has decided to award The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel for 2019 to Esther Duflo, Abhijit Banerjee and Michael Kremer.

Great choice!

In one single stroke the academy doubled the number of women having received the ‘Nobel prize’ in economics. Compared with most other recipients of the last thirty years, this is an excellent choice (although yours truly does have some quarrels with Duflo’s preferred randomization methodology).

On the limited applicability of game theory

14 Oct, 2019 at 10:34 | Posted in Economics | 1 Comment

Many mainstream economists – still — think that game theory is useful, can be applied to real life, and gives important and interesting results. That, however, is a rather unsubstantiated view. What game theory does, strictly speaking, is nothing more than investigate the logic of behaviour among non-existent robot imitations of humans. Knowing how those ‘rational fools’ play games does not help us decide and act when interacting with real people. Knowing some game theory may actually make us behave in ways that hurt both ourselves and others. Decision-making and social interaction are always embedded in socio-cultural contexts. Failing to take account of that, game theory will remain an analytical cul-de-sac that will never be able to come up with useful and relevant explanations.

Over-emphasizing the reach of instrumental rationality and abstracting away from the influence of many factors known to be important reduces the analysis to a pure thought experiment without any substantial connection to reality. Limiting theoretical economic analysis in this way — not incorporating both motivational and institutional factors when trying to explain human behaviour — makes economics insensitive to social facts.

Game theorists extensively exploit ‘rational choice’ assumptions in their explanations. That is probably also the reason why game theory has not been able to accommodate known anomalies within the core claims of the theory. This should hardly come as a surprise. Game theory, with its axiomatic view of individuals’ tastes, beliefs, and preferences, cannot accommodate very much of real-life behaviour. It is hard to find really compelling arguments for continuing down its barren paths, since individuals obviously do not comply with, or act guided by, game theory. Apart from (perhaps) a few notable exceptions — like Schelling on segregation (1978) and Akerlof on ‘lemons’ (1970) — it is difficult to find really successful applications of game theory. Why? To a large extent simply because the boundary conditions of game-theoretical models are false and baseless from a real-world perspective. And, perhaps even more importantly, since they are not even close to being good approximations of real life, game theory lacks predictive power. This should come as no surprise. As long as game theory sticks to its ‘rational choice’ foundations, there is not much to hope for.

Game theorists can, of course, marginally modify their toolbox and fiddle with the auxiliary assumptions to get whatever outcome they want. But as long as the ‘rational choice’ core assumptions are left intact, this is pointless tinkering with an already excessive deductive-axiomatic formalism. If you believe in the real-world relevance of game-theoretical ‘science fiction’ assumptions such as expected utility, ‘common knowledge,’ ‘backward induction,’ and correct and consistent beliefs, then adding things like ‘framing,’ ‘cognitive bias,’ and different kinds of heuristics does not ‘solve’ any problem. If we want a theory that can provide us with explanations of individual cognition, decisions, and social interaction, we have to look for something else.

In real life, people – acting in a world where the assumption of an unchanging future does not hold — do not always know what kind of game they are playing. And if they do, they often do not take it as given, but rather try to change it in different ways. And how they play – the strategies they choose to follow — depends not only on their expected utilities but on what those utilities are calculated over. What these specifics are — food, water, luxury cars, money, etc. – influences the extent to which we let justice, fairness and equality shape our choices. ‘Welfarism’ – the consequentialist view that all that really matters to people is the utility of outcomes — is a highly questionable shortcoming built into game theory, and it certainly detracts from the theory’s usefulness in understanding real-life choices made outside its model world.

Games people play in societies are usually not like games of chess. In the confined context of parlour games – like the auction negotiations nowadays so often appealed to in ‘defence’ of the usefulness of game theory — the rather thin rationality concept on which game theory is founded may be adequate. But far from being a commendation, this ought to warn us of the really bleak applicability of game theory. Its highly questionable assumptions about ‘rationality’, equilibrium solutions, information, and knowledge simply make game theory useless as an instrument for explaining real-world phenomena.

Applications of game theory have on the whole resulted in massive predictive failures. People simply do not act according to the theory. They do not know or possess the assumed probabilities, utilities, beliefs or information needed to calculate the different (‘subgame-perfect,’ ‘trembling-hand perfect,’ or whatever Nash) equilibria. They may be reasonable and use their given cognitive faculties as well as they can, but they are obviously not the perfect and costless hyper-rational expected-utility-maximizing calculators game theory posits. And fortunately so. Being ‘reasonable’ makes them avoid all those made-up ‘rationality’ traps that game theory would have put them in had they tried to act as consistent players in a game-theoretical sense.
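The ‘backward induction’ trap can be made concrete with a small sketch (the payoffs are invented). In a centipede game the pot grows every time a player passes, yet backward induction predicts that the first mover stops immediately; in laboratory play (e.g. McKelvey and Palfrey 1992), subjects typically pass for several rounds instead.

```python
def backward_induction(take_payoffs, pass_payoffs):
    """Solve a two-player centipede game by backward induction.

    take_payoffs: list of (p1, p2) payoff pairs if the mover stops ("takes")
                  at node i (player 1 moves at even i, player 2 at odd i).
    pass_payoffs: (p1, p2) if every node is passed.
    Returns (stop_node, outcome): the first node at which the mover stops
    (None if all pass) and the resulting payoff pair.
    """
    outcome = pass_payoffs            # value of the subgame after the last node
    stop_node = None
    # Walk backwards: each mover takes iff taking beats the continuation value.
    for i in range(len(take_payoffs) - 1, -1, -1):
        mover = i % 2                 # 0 → player 1, 1 → player 2
        if take_payoffs[i][mover] > outcome[mover]:
            outcome = take_payoffs[i]
            stop_node = i
    return stop_node, outcome

# A four-node centipede with a growing pot: the joint payoff rises with every
# pass (1, 2, 4, 6, 9), yet the theory predicts an immediate stop.
take = [(1, 0), (0, 2), (3, 1), (2, 4)]
final = (6, 3)
print(backward_induction(take, final))   # → (0, (1, 0))
```

The unravelling in the loop is exactly the ‘consistent player’ assumption at work: each mover is assumed to know that every later mover will also defect, which is precisely what real subjects do not assume.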

The lack of successful empirical applications of game theory shows that there are definite limits to how far instrumental rationality can take us in trying to explain and understand individual behaviour in social contexts. The kind of preferences, knowledge, information and beliefs — and the lack of contextual ‘thickness’ — assumed in the axiomatic game-theoretical set-up leave little room for delivering real and relevant insights into the kind of decision-making and action we encounter in our everyday lives.

Instead of making formal logical argumentation based on deductive-axiomatic models the message, we are arguably better served by social scientists who more than anything else try to contribute to solving real problems – and in that endeavour, other inference schemes may be much more relevant than formal logic.

Game-theoretical models build on a theory that is abstract, unrealistic and presents mostly non-testable hypotheses. One important rationale behind this kind of model building is the quest for rigour — more precisely, logical rigour. Instead of trying to establish a connection between assumptions and empirical data, ‘truth’ has been reduced to a question of internal consistency between premises and conclusions, rather than ‘congruence’ between model assumptions and reality. This has, of course, severely restricted the applicability of game theory and its models.

Game theory builds on ‘rational choice’ theory and so shares its shortcomings. Especially the lack of a bridge between theory and real-world phenomena is deeply problematic, since it makes game-theoretical testing and explanation impossible.

The world in which we live is inherently uncertain and quantifiable probabilities are the exception rather than the rule. To every statement about it is attached a ‘weight of argument’ that makes it impossible to reduce our beliefs and expectations to a one-dimensional stochastic probability distribution. If “God does not play dice” as Einstein maintained, I would add “nor do people.” The world as we know it has limited scope for certainty and perfect knowledge. Its intrinsic and almost unlimited complexity and the interrelatedness of its organic parts prevent the possibility of treating it as constituted by ‘legal atoms’ with discretely distinct, separable and stable causal relations. Our knowledge accordingly has to be of a rather fallible kind.

To search for precision and rigour in such a world is self-defeating, at least if precision and rigour are supposed to assure external validity. The only way to defend such an endeavour is to turn a blind eye to ontology and restrict oneself to proving things in closed model-worlds. Why we should care about these, rather than asking questions of relevance, is hard to see. We at least have to justify our disregard for the gap between the nature of the real world and our theories and models of it.

If the real world is fuzzy, vague and indeterminate, then why should our models build upon a desire to describe it as precise and predictable? Even if there always has to be a trade-off between theory-internal validity and external validity, we have to ask ourselves if our models are relevant.

‘Human logic’ has to supplant the classical, formal, logic of deductivism if we want to have anything of interest to say of the real world we inhabit. Logic is a marvellous tool in mathematics and axiomatic-deductivist systems, but a poor guide for action in real-world systems, in which concepts and entities are without clear boundaries and continually interact and overlap. In this world, I would say we are better served with a methodology that takes into account that the more we know, the more we know we do not know.

Teflon economics

10 Oct, 2019 at 09:16 | Posted in Economics | 2 Comments

At least since the time of Keynes’s famous critique of Tinbergen’s econometric methods, those of us in the social science community who have been impolite enough to dare to question the preferred methods and models applied in quantitative research in general and economics more specifically, are as a rule met with disapproval. Although people seem to get very agitated and upset by the critique — just read the commentaries on this blog if you don’t believe me — defenders of received theory always say that the critique is ‘nothing new’, that they have always been ‘well aware’ of the problems, and so on, and so on.

So, for the benefit of all mindless practitioners of economics — who don’t want to be disturbed in their doings — eminent mathematical statistician David Freedman has put together a very practical list of vacuous responses to criticism that can be freely used to save your peace of mind:

We know all that. Nothing is perfect … The assumptions are reasonable. The assumptions don’t matter. The assumptions are conservative. You can’t prove the assumptions are wrong. The biases will cancel. We can model the biases. We’re only doing what everybody else does. Now we use more sophisticated techniques. If we don’t do it, someone else will. What would you do? The decision-maker has to be better off with us than without us … The models aren’t totally useless. You have to do the best you can with the data. You have to make assumptions in order to make progress. You have to give the models the benefit of the doubt. Where’s the harm?

Knut Wicksell’s principle of just taxation

7 Oct, 2019 at 18:19 | Posted in Economics | 21 Comments

My claim is that the very concept of taxation presupposes that all incomes are equally justified. This applies no matter which principle of taxation one wishes to put into practice. A tax that only reduces by a certain amount what a person unjustly possesses cannot be regarded as an appropriate compensation on his part for the efforts the state has made on his behalf, nor can it be seen as a sacrifice comparable to the sacrifice another person has to bear out of the income he or she has earned in legitimate ways.

The fall in interest rates

7 Oct, 2019 at 10:59 | Posted in Economics | 1 Comment

Monetary policy in the OECD countries has become very expansionary since the 2008-2009 crisis. Central-bank interest rates have fallen sharply (2.25% in the United States today, 0% in the euro zone and in Japan) …

Yet this extraordinarily expansionary monetary policy has produced few clearly positive effects. Growth is slowing; the corporate investment rate relative to value added is lower in 2019 (11.5%) than in 2008 (11.7%) or in 2000 (12.3%), despite near-zero interest rates …

How is this to be understood? Here is one lead: risk-free interest rates have indeed fallen considerably, but this fall has been entirely offset by the rise in risk premia …

Two examples illustrate the mechanism. First, one can compare the long-term (10-year) interest rate on OECD countries’ public debt with the return, in those same countries, on firms’ physical capital, the ROACE (“return on average capital employed”) …

In 2019, the return on corporate capital is 5.3%, while the risk-free 10-year interest rate is 0.8%.

This means that an enormous risk premium on investment in corporate capital has appeared: in 2019, investors demand 4.5 points more to invest in firms than in public debt. This explains why low interest rates have not stimulated investment in the OECD countries: despite the fall in interest rates, the required return on capital, and hence on corporate investment, has remained the same. A fall in the required return would have made additional, admittedly less profitable, investments viable, but this has not happened.

Patrick Artus / Le Monde
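Artus’s mechanism is plain hurdle-rate arithmetic, sketched here with the figures quoted in the column (the one-for-one offset between risk-free rate and premium is the column’s claim, not an established fact):

```python
# Illustrative numbers from the quoted column: a firm invests when a project's
# expected return clears the required return on capital, i.e. the risk-free
# rate plus the risk premium.
risk_free_2019 = 0.008      # 10-year OECD government yield, 2019
roace_2019 = 0.053          # return on average capital employed, 2019

risk_premium = roace_2019 - risk_free_2019
print(f"implied risk premium: {risk_premium:.3f}")   # → 0.045, i.e. 4.5 points

# If the premium rises one-for-one as the risk-free rate falls, the hurdle
# rate for corporate investment is unchanged, so no extra projects clear it.
hurdle = risk_free_2019 + risk_premium
assert abs(hurdle - roace_2019) < 1e-12
```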

The primary problem with mainstream economics

3 Oct, 2019 at 09:31 | Posted in Economics | 10 Comments

Jamie Morgan: To a member of the public it must seem weird that it is possible to state, as you do, such fundamental criticism of an entire field of study. The perplexing issue from a third party point of view is how do we reconcile good intention (or at least legitimate sense of self as a scholar), and power and influence in the world with error, failure and falsity in some primary sense; given that the primary problem is methodological, the issues seem to extend in different ways from Milton Friedman to Robert Lucas Jr, from Paul Krugman to Joseph Stiglitz. Do such observations give you pause? My question (invitation) I suppose, is how does one reconcile (explain or account for) the direction of travel of mainstream economics: the degree of commonality identified in relation to its otherwise diverse parts, the glaring problems of that commonality – as identified and stated by you and many other critics?

Lars P. Syll: When politically “radical” economists like Krugman, Wren-Lewis or Stiglitz confront the critique of mainstream economics from people like me, they usually have the attitude that if the critique isn’t formulated in a well-specified mathematical model it isn’t worth taking seriously. To me that only shows that, despite all their radical rhetoric, these economists – just like Milton Friedman, Robert Lucas Jr or Greg Mankiw – are nothing but die-hard defenders of mainstream economics. The only economic analysis acceptable to these people is the one that takes place within the analytic-formalistic modelling strategy that makes up the core of mainstream economics. Models and theories that do not live up to the precepts of the mainstream methodological canon are considered “cheap talk”. If you do not follow this particular mathematical-deductive analytical formalism you’re not even considered to be doing economics …

The kind of “diversity” you asked about is perhaps best put in perspective by considering someone like Dani Rodrik, who a couple of years ago wrote a book on economics and its modelling strategies – Economics Rules (2015) – that attracted much attention among academic economists. Just like Krugman and the other politically “radical” mainstream economists, Rodrik shares the view that there is nothing basically wrong with standard theory. As long as policymakers and economists stick to standard economic analysis, everything is fine. Economics is just a method that makes us “think straight” and “reach correct answers”. Like Krugman, Rodrik likes to present himself as a kind of pluralist anti-establishment iconoclast, but when it really counts, he shows what he is – a mainstream economist fanatically defending the relevance of standard economic modelling strategies. In other words – no heterodoxy where it would really count. In my view, this isn’t pluralism. It’s a methodological reductionist strait-jacket.

Real-World Economics Review

The schwarze Null is complete economic nonsense

2 Oct, 2019 at 18:04 | Posted in Economics | Leave a comment

Jürgen Zurheide: That immediately raises the question of how you want to finance this. We carry the schwarze Null [the balanced-budget “black zero”] before us like a monstrance – do you want to sacrifice it?

Sebastian Dullien: The schwarze Null is complete economic nonsense. It serves no point and no purpose whatsoever. At present, Germany pays no interest on its public debt when it takes on new borrowing.

Yesterday, for the first time, even yields on thirty-year government bonds went negative. That means that even if the state borrows for 30 years, it only has to repay the principal and not a cent of interest. Not to invest in such a situation is frankly negligent.

We have the investment backlog, and many of the things we would do would also benefit later generations. If we decarbonise now, our children and grandchildren will no longer have to import oil and will thereby save money. There is no reason at all why all of this would have to be paid for out of today’s budget.

So I would say: yes, we need a debate that finally calls the schwarze Null into question and, at bottom, the Schuldenbremse (debt brake) as well. For even if the schwarze Null were set aside, the debt brake would be the next obstacle to launching a sensible investment programme.

Deutschlandfunk

Behavioural economics

1 Oct, 2019 at 21:34 | Posted in Economics | Leave a comment

Our dealings with strangers today are shaped by the institutions of the market and of money, and by the abstract “language of price”. These institutions, however, can be projected back into human history only to a limited extent. Our economic form is the result of massive social transformations within just a few centuries. Capitalism arises in the mind: not as instinct, but as idea, developed among others by the scholars who also created modern economics …

When behavioural economics nevertheless leaps back into human prehistory to explain this development, the result is that one of several possible cultural-historical developments appears quasi-biologically necessary, and the existing social order is presented as without alternative, precisely because it supposedly corresponds to our nature.

The Nobel prize rewards outstanding research that serves the “benefit of mankind”. What matters here is not only the actual content of the research, but also the cognitive interest behind it and the policy recommendations derived from it. My criticism is aimed less at the recognition of scientific achievement than at the authorisation of political messages, which can take on a life of their own. Biologically inspired behavioural economists such as Bowles, Fehr and Gintis search for the evolutionary roots of human sociality and draw conclusions from them for the development and design of modern social institutions and systems: how the economy, politics and even the law can do justice to our social nature and at the same time bring out the best in us. For the benefit of mankind. Really?

Sabine Frerichs

What’s wrong with Krugman’s economics?

1 Oct, 2019 at 10:30 | Posted in Economics | 9 Comments

Krugman writes: “So how do you do useful economics? In general, what we really do is combine maximization-and-equilibrium as a first cut with a variety of ad hoc modifications reflecting what seem to be empirical regularities about how both individual behavior and markets depart from this idealized case.”

But if you ask the New Classical economists, they'll say, this is exactly what we do — combine maximizing-and-equilibrium with empirical regularities. And they'd go on to say that it's because Krugman's Keynesian models don't do this, or don't do enough of it, that they are not "useful" for prediction or explanation …

The trouble is that the macroeconomic evidence can’t tell us when and where maximization-and-equilibrium goes wrong, and there seems no immediate prospect for improving the assumptions of perfect rationality and perfect markets from behavioral economics, neuroeconomics, experimental economics, evolutionary economics, game theory, etc.

But these concessions are all the New Classical economists need to defend themselves against Krugman. After all, he seems to admit there is no alternative to maximization and equilibrium …

One thing that’s missing from Krugman’s treatment of economics is the explicit recognition of what Keynes and before him Frank Knight, emphasized: the persistent presence of enormous uncertainty in the economy …

Alexander Rosenberg

As Rosenberg notes, Krugman works with a very simple modelling dichotomy — models are either complex or simple. For years now, self-proclaimed "proud neoclassicist" Paul Krugman has harped on the same old IS-LM string, telling us about the splendour of the Hicksian invention — so, of course, to Krugman simpler models are always to be preferred.

Krugman has argued that 'Keynesian' macroeconomics more than anything else "made economics the model-oriented field it has become." In Krugman's eyes, Keynes was a "pretty klutzy modeler," and it was only thanks to Samuelson's famous 45-degree diagram and Hicks's IS-LM that things fell into place. Although admitting that economists have a tendency to use "excessive math" and "equate hard math with quality," he still vehemently defends — and always has — the mathematization of economics:

I’ve seen quite a lot of what economics without math and models looks like — and it’s not good.

But if the math-is-the-message modellers cannot show that the mechanisms or causes they isolate and handle in their mathematically formalized macromodels are stable — in the sense that they do not change when we 'export' them to our 'target systems' — then those mathematical models hold only under ceteris paribus conditions and are consequently of limited value for our understanding, explanation and prediction of real economic systems.

For years now, Krugman has criticized mainstream economics for using too much (bad) mathematics and axiomatics in its model-building endeavours. But when it comes to defending his own positions on various issues, he usually falls back on the same kind of models. In his End This Depression Now — just to take one example — Krugman maintains that although he doesn't buy "the assumptions about rationality and markets that are embodied in many modern theoretical models, my own included," he still finds them useful "as a way of thinking through some issues carefully."

When it comes to methodology and assumptions, Krugman obviously has a lot in common with the kind of model-building he otherwise criticizes.

If macroeconomic models — of whatever ilk — rest on assumptions that we know real people and markets cannot be expected to obey, then the warrant for supposing that their conclusions or hypotheses about causally relevant mechanisms or regularities carry over to the real world is obviously lacking.

So let me — respectfully — summarize: a gadget is just a gadget, and brilliantly silly simple models — IS-LM included — do not help us work on the fundamental issues of modern economies any more than brilliantly silly complicated models — calibrated DSGE and RBC models included. And as Rosenberg rightly notes:

When he accepts maximizing and equilibrium as the (only?) way useful economics is done Krugman makes a concession so great it threatens to undercut the rest of his arguments against New Classical economics.
