Balanced budget — an old fashioned religion

30 December, 2015 at 13:12 | Posted in Economics | 2 Comments

I think there is an element of truth in the view that the superstition that the budget must be balanced at all times [is necessary]. Once it is debunked, [it] takes away one of the bulwarks that every society must have against expenditure out of control. There must be discipline in the allocation of resources or you will have anarchistic chaos and inefficiency. And one of the functions of old fashioned religion was to scare people by sometimes what might be regarded as myths into behaving in a way that the long-run civilized life requires. We have taken away a belief in the intrinsic necessity of balancing the budget if not in every year, [and then] in every short period of time. If Prime Minister Gladstone came back to life he would say “oh, oh what you have done” and James Buchanan argues in those terms. I have to say that I see merit in that view.

Paul Samuelson


Think you’re rational? Better think twice!

28 December, 2015 at 16:23 | Posted in Economics | 10 Comments

My friend Ben says that on the first day he got the following sequence of Heads and Tails when tossing a coin:
H H H H H H H H H H

And on the second day he says that he got the following sequence:
H T T H H T T H T H

Which day-report makes you suspicious?

Most people I ask this question say the first day-report looks suspicious.

But actually both days are equally probable! Every time you toss a (fair) coin there is the same probability (50%) of getting H or T. On both days Ben makes the same number of tosses, and every particular sequence is equally probable — as the little sketch below confirms!
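For the sceptical reader, here is a minimal sketch (my own, not part of the original post) that computes the probability of each reported sequence under the fair-coin assumption. They come out identical:

```python
# Every specific sequence of ten independent fair-coin tosses has the
# same probability: (1/2)**10 = 1/1024.
from fractions import Fraction

p = Fraction(1, 2)

day1 = "HHHHHHHHHH"
day2 = "HTTHHTTHTH"

for day, seq in [("Day 1", day1), ("Day 2", day2)]:
    prob = p ** len(seq)      # independent tosses: probabilities multiply
    print(day, seq, prob)     # both print 1/1024
```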

And in mainstream economics one of the basic assumptions is typically — still — that people make rational choices …

Dani Rodrik on behavioural challenges (VIII)

26 December, 2015 at 18:16 | Posted in Economics | 4 Comments

How would you react if a renowned physicist, say Richard Feynman, told you that sometimes force is proportional to acceleration and at other times it is proportional to acceleration squared?

I guess you would be unimpressed. But actually, what Dani Rodrik does in Economics Rules amounts to the same strange thing when it comes to theory development and model modification.

In mainstream neoclassical theory preferences are standardly expressed in the form of a utility function. But although the expected utility theory has been known for a long time to be both theoretically and descriptively inadequate, neoclassical economists all over the world gladly continue to use it, as though its deficiencies were unknown or unheard of.

What Dani Rodrik and most other mainstream economists try to do in the face of the obvious theoretical and behavioural inadequacies of expected utility theory is to marginally mend it. But that cannot be the right attitude when facing scientific anomalies. When models are plainly wrong, you’d better replace them!

As Matthew Rabin and Richard Thaler have it in Risk Aversion:

It is time for economists to recognize that expected utility is an ex-hypothesis, so that we can concentrate our energies on the important task of developing better descriptive models of choice under uncertainty.

In a similar vein, Daniel Kahneman maintains — e.g. in Thinking, Fast and Slow — that expected utility theory is seriously flawed since it doesn’t take into consideration the basic fact that people’s choices are influenced by changes in their wealth. Where standard microeconomic theory assumes that preferences are stable over time, Kahneman and other behavioural economists have forcefully shown, again and again, that preferences aren’t fixed but vary with different reference points. How can a theory that doesn’t allow for people having different reference points from which they consider their options have a (typically unquestioned) axiomatic status within economic theory?
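To make the reference-point idea concrete, here is a stylized sketch, my construction rather than Kahneman’s, with illustrative parameters of the kind used in the prospect-theory literature. It contrasts a utility function defined over final wealth with a value function defined over changes relative to a reference point:

```python
import math

def eu_final_wealth(wealth):
    """Expected-utility style: value depends only on final wealth."""
    return math.log(wealth)

def value_reference_dependent(wealth, reference, loss_aversion=2.25):
    """Prospect-theory style: value depends on the change from a reference
    point, with losses weighted more heavily than gains (illustrative
    parameters, not estimates)."""
    change = wealth - reference
    if change >= 0:
        return change ** 0.88
    return -loss_aversion * (-change) ** 0.88

final_wealth = 1000
for reference in (900, 1000, 1100):   # three agents, same final wealth
    print(reference, round(value_reference_dependent(final_wealth, reference), 2))
# eu_final_wealth(1000) is identical for all three agents; the
# reference-dependent value is positive, zero, or negative depending
# only on where each agent starts.
```

Three agents holding the same final wealth value it as a gain, as neutral, or as a loss, depending solely on their reference point: precisely what a fixed utility-of-wealth function cannot express.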

Much of what experimental and behavioural economics come up with is really bad news for mainstream economic theory, and just concluding, as Rodrik does, that these

insights from social psychology were subsequently applied to many areas of decision making, such as saving behavior, choice of medical insurance, and fertilizer use by poor farmers

sounds, to say the least, somewhat lame, when the works of people like Rabin, Thaler and Kahneman in reality show that expected utility theory is nothing but transmogrifying truth.

To Rodrik, mainstream economics is nothing but a smorgasbord of ‘thought experimental’ models. For every purpose you may have, there is always an appropriate model to pick.

But, really, there have to be some limits to the flexibility of a theory!

If you can freely substitute any part of the core and auxiliary sets of assumptions and still consider that you are dealing with the same — mainstream, neoclassical or what have you — theory, well, then it’s not a theory, but a chameleon.

The big problem with Rodrik’s cherry-picking view of models is of course that the theories and models presented get totally immunized against all critique. A sure way to get rid of all kinds of ‘anomalies,’ yes, but at a far too high price. So people do not optimize? No problem, we have models that assume satisficing! So people do not maximize expected utility? No problem, we have models that assume … etc., etc.

A theory that accommodates any observed phenomenon whatsoever by creating a new special model for the occasion, and a fortiori has no chance of being severely tested and found wanting, is of little real value.

 

Jump

23 December, 2015 at 21:41 | Posted in Varia | Comments Off on Jump

 

O Holy Night

23 December, 2015 at 21:07 | Posted in Economics, Varia | 1 Comment

 

Jussi — still no. 1!

Dani Rodrik and the ’empirical turn’ in economics (VII)

23 December, 2015 at 17:17 | Posted in Economics | 9 Comments

In Economics Rules, Dani Rodrik maintains that ‘imaginative empirical methods’ — such as game theoretical applications, natural experiments, field experiments, lab experiments, RCTs — can help us answer questions concerning the external validity of economic models. In Rodrik’s view they are more or less tests of ‘an underlying economic model’ and enable economists to make the right selection from the ever expanding ‘collection of potentially applicable models.’ Writes Rodrik:

Another way we can observe the transformation of the discipline is by looking at the new areas of research that have flourished in recent decades. Three of these are particularly noteworthy: behavioral economics, randomized controlled trials (RCTs), and institutions … They suggest that the view of economics as an insular, inbred discipline closed to the outside influences is more caricature than reality.

I beg to differ. Looked at carefully, there are in fact few real reasons to share Rodrik’s optimism about this ’empirical turn’ in economics.

Field studies and experiments face the same basic problem as theoretical models — they are built on rather artificial conditions and have difficulties with the ‘trade-off’ between internal and external validity. The more artificial the conditions, the higher the internal validity — but also the lower the external validity. The more we rig experiments/field studies/models to avoid ‘confounding factors,’ the less the conditions are reminiscent of the real ‘target system.’ One could of course discuss field studies vs. experiments vs. theoretical models in terms of realism — but the nodal issue is not that; it is basically about how economists using different isolation strategies in different ‘nomological machines’ attempt to learn about causal relationships. I have strong doubts about the generalizability of all three research strategies, because the probability is high that causal mechanisms are different in different contexts, and lack of homogeneity/stability/invariance doesn’t give us warranted export licenses to the ‘real’ societies or economies.

If we see experiments or field studies as theory tests or models that ultimately aspire to say something about the real ‘target system,’ then the problem of external validity is central (and was for a long time also a key reason why behavioural economists had trouble getting their research results published).

Assume that you have examined how the work performance of Chinese workers (A) is affected by a certain ‘treatment’ (B). How can we extrapolate/generalize to new samples outside the original population (e.g. to the US)? How do we know that any replication attempt ‘succeeds’? How do we know when replicated experimental results can be said to justify inferences made about samples from the original population? If, for example, P(A|B) is the conditional density function for the original sample, and we are interested in making an extrapolative prediction of E[P(A|B)], how can we know that the new sample’s density function is identical with the original one? Unless we can give some really good argument for this being the case, inferences built on P(A|B) say nothing about the target system’s P'(A|B).

As I see it, this is the heart of the matter. External validity/extrapolation/generalization is founded on the assumption that we can make inferences based on P(A|B) that are exportable to other populations for which P'(A|B) applies. Sure, if one can convincingly show that P and P' are similar enough, the problems are perhaps surmountable. But arbitrarily just introducing functional specification restrictions of the type invariance/stability/homogeneity is, at least for an epistemological realist, far from satisfactory. And unfortunately it is exactly this that I see when I look at mainstream neoclassical economists’ models/experiments/field studies.
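A toy simulation may make the point vivid. Everything below is invented for illustration: the ‘treatment’ effect is simply assumed to differ between the original and the target population, which is exactly the possibility that mechanical extrapolation assumes away:

```python
import random

random.seed(1)

def sample_outcome(treated, effect):
    """Outcome = baseline noise + treatment effect; the effect is
    population-specific, which extrapolation must assume it is not."""
    return random.gauss(0, 1) + (effect if treated else 0)

def estimated_effect(effect, n=10_000):
    treated = [sample_outcome(True, effect) for _ in range(n)]
    control = [sample_outcome(False, effect) for _ in range(n)]
    return sum(treated) / n - sum(control) / n

print(round(estimated_effect(effect=+0.5), 2))  # original population: about +0.5
print(round(estimated_effect(effect=-0.5), 2))  # target population:  about -0.5
```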

By this I do not mean to say that empirical methods per se are so problematic that they can never be used. On the contrary, I am basically — though not without reservations — in favour of the increased use of experiments and field studies within economics, not least as an alternative to completely barren ‘bridge-less’ axiomatic-deductive theory models. My criticism is more about aspiration levels, and about what we believe we can achieve with our mediational epistemological tools and methods in the social sciences.

Many ‘experimentalists’ claim that it is easy to replicate experiments under different conditions, and therefore a fortiori easy to test the robustness of experimental results. But is it really that easy? If, in the example given above, we run a test and find that our predictions were not correct — what can we conclude? That B ‘works’ in China but not in the US? That B ‘works’ in a backward agrarian society, but not in a post-modern service society? That B ‘worked’ in the field study conducted in 2008 but not in 2014? Population selection is almost never simple. Had the problem of external validity only been about inference from sample to population, this would be no critical problem. But the really interesting inferences are those we try to make from specific labs/experiments/fields to the specific real world situations/institutions/structures that we are interested in understanding or (causally) explaining. And then the population problem is more difficult to tackle.

The increasing use of natural and quasi-natural experiments in economics during the last couple of decades has led not only Rodrik but several other prominent economists to triumphantly declare it a major step on a recent path toward empirics, where instead of being a deductive philosophy, economics is now increasingly becoming an inductive science.

In randomized trials the researchers try to find out the causal effects that different variables of interest may have by changing circumstances randomly — a procedure somewhat (‘on average’) equivalent to the usual ceteris paribus assumption.

Besides the fact that ‘on average’ is not always ‘good enough,’ it amounts to nothing but hand waving to simpliciter assume, without argumentation, that it is tenable to treat social agents and relations as homogeneous and interchangeable entities.

Randomization is basically used to allow the econometrician to treat the population as consisting of interchangeable and homogeneous groups (‘treatment’ and ‘control’). The regression models one arrives at by using randomized trials tell us the average effect that variations in variable X have on the outcome variable Y, without having to explicitly control for the effects of other explanatory variables R, S, T, etc. Everything is assumed to be essentially equal except the values taken by variable X.

In the usual regression context one would apply an ordinary least squares (OLS) estimator in trying to get an unbiased and consistent estimate:

Y = α + βX + ε,

where α is a constant intercept, β a constant ‘structural’ causal effect and ε an error term.

The problem here is that although we may get an estimate of the ‘true’ average causal effect, this may ‘mask’ important heterogeneous effects of a causal nature. Although we get the right answer — the average causal effect is 0 — those who are ‘treated’ (X = 1) may have causal effects equal to −100, and those ‘not treated’ (X = 0) may have causal effects equal to 100. Contemplating being treated or not, most people would probably be interested in knowing about this underlying heterogeneity and would not consider the OLS average effect particularly enlightening.
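The arithmetic of the example is easy to reproduce. In this sketch (made-up data, with individual effects of plus or minus 100 as above), a difference in means, which is what OLS with a single binary regressor amounts to, dutifully reports an average effect of roughly zero while the underlying individual effects are anything but zero:

```python
import random

random.seed(42)

n = 100_000
rows = []
for _ in range(n):
    effect = random.choice([-100, 100])   # heterogeneous individual causal effect
    x = random.random() < 0.5             # randomized binary 'treatment'
    baseline = random.gauss(50, 5)
    y = baseline + (effect if x else 0)
    rows.append((x, y, effect))

# OLS with a single binary regressor reduces to a difference in means:
treated = [y for x, y, _ in rows if x]
control = [y for x, y, _ in rows if not x]
beta_hat = sum(treated) / len(treated) - sum(control) / len(control)

print(round(beta_hat, 2))       # about 0: the estimated 'average causal effect'
print({e for _, _, e in rows})  # {-100, 100}: what the average masks
```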

Limiting model assumptions in economic science always have to be closely examined. If we are going to show that the mechanisms or causes we isolate and handle in our models are stable — in the sense that they do not change when we ‘export’ them to our ‘target systems’ — we have to show that they hold not only under ceteris paribus conditions; otherwise they are, a fortiori, of only limited value for our understanding, explanation or prediction of real economic systems.

Real world social systems are not governed by stable causal mechanisms or capacities. The kinds of ‘laws’ and relations that econometrics has established are laws and relations about entities in models that presuppose causal mechanisms being atomistic and additive. When causal mechanisms operate in real world social target systems, they do so only in ever-changing and unstable combinations where the whole is more than a mechanical sum of parts. If economic regularities obtain, they do so (as a rule) only because we engineered them for that purpose. Outside man-made ‘nomological machines’ they are rare, or even non-existent.

I also think that most ‘randomistas’ really underestimate the heterogeneity problem. It does not just turn up as an external validity problem when trying to ‘export’ regression results to different times or different target populations. It is also often an internal problem in the millions of regression estimates that economists produce every year.

Just like econometrics, randomization promises more than it can deliver, basically because it requires assumptions that in practice are not possible to maintain.

Like econometrics, randomization is basically a deductive method. Given the assumptions (such as manipulability, transitivity, separability, additivity, linearity, etc.), these methods deliver deductive inferences. The problem, of course, is that we will never completely know when the assumptions are right. And although randomization may contribute to controlling for confounding, it does not guarantee it, since genuine randomness presupposes infinite experimentation and we know all real experimentation is finite. And even if randomization may help to establish average causal effects, it says nothing about individual effects unless homogeneity is added to the list of assumptions. Real target systems are seldom epistemically isomorphic to our axiomatic-deductive models/systems, and even if they were, we would still have to argue for the external validity of the conclusions reached from within these epistemically convenient models/systems. Causal evidence generated by randomization procedures may be valid in ‘closed’ models, but what we usually are interested in is causal evidence in the real target system we happen to live in.

When does a conclusion established in population X hold for target population Y? Only under very restrictive conditions!

‘Ideally controlled experiments’ tell us with certainty what causes what effects — but only given the right ‘closures.’ Making appropriate extrapolations from (ideal, accidental, natural or quasi) experiments to different settings, populations or target systems, is not easy. “It works there” is no evidence for “it will work here”. Causes deduced in an experimental setting still have to show that they come with an export-warrant to the target population/system. The causal background assumptions made have to be justified, and without licenses to export, the value of ‘rigorous’ and ‘precise’ methods — and ‘on-average-knowledge’ — is despairingly small.

So, no, I find it hard to share Rodrik’s and others’ enthusiasm and optimism about the value of (quasi)natural experiments and all the statistical-econometric machinery that comes with them. Guess I’m still waiting for the export-warrant …

 

Taking assumptions like utility maximization or market equilibrium as a matter of course leads to the ‘standing presumption in economics that, if an empirical statement is deduced from standard assumptions then that statement is reliable’ …

The ongoing importance of these assumptions is especially evident in those areas of economic research, where empirical results are challenging standard views on economic behaviour like experimental economics or behavioural finance … From the perspective of Model-Platonism, these research-areas are still framed by the ‘superior insights’ associated with early 20th century concepts, essentially because almost all of their results are framed in terms of rational individuals, who engage in optimizing behaviour and, thereby, attain equilibrium. For instance, the attitude to explain cooperation or fair behaviour in experiments by assuming an ‘inequality aversion’ integrated in (a fraction of) the subjects’ preferences is strictly in accordance with the assumption of rational individuals, a feature which the authors are keen to report …

So, while the mere emergence of research areas like experimental economics is sometimes deemed a clear sign for the advent of a new era … a closer look at these fields allows us to illustrate the enduring relevance of the Model-Platonism-topos and, thereby, shows the pervasion of these fields with a traditional neoclassical style of thought.

Jakob Kapeller

Re game theory, yours truly remembers how back in 1991 — earning my first Ph.D. with a dissertation on decision making and rationality in social choice theory and game theory — I concluded that

repeatedly it seems as though mathematical tractability and elegance — rather than realism and relevance — have been the most applied guidelines for the behavioural assumptions being made. On a political and social level it is doubtful if the methodological individualism, ahistoricity and formalism they are advocating are especially valid.

This, of course, was like swearing in church. My mainstream neoclassical colleagues were — to say the least — not exactly überjoyed. Listening to what one of the world’s most renowned game theorists — Ariel Rubinstein — has to say on the rather limited applicability of game theory in this interview (emphasis added), I basically think he confirms my doubts about how well-founded Rodrik’s ‘optimism’ is:

Is game theory useful in a concrete sense or not? … I believe that game theory is very interesting. I’ve spent a lot of my life thinking about it, but I don’t respect the claims that it has direct applications.

The analogy I sometimes give is from logic. Logic is a very interesting field in philosophy, or in mathematics. But I don’t think anybody has the illusion that logic helps people to be better performers in life. A good judge does not need to know logic. It may turn out to be useful – logic was useful in the development of the computer sciences, for example – but it’s not directly practical in the sense of helping you figure out how best to behave tomorrow, say in a debate with friends, or when analysing data that you get as a judge or a citizen or as a scientist …

Game theory is about a collection of fables. Are fables useful or not? In some sense, you can say that they are useful, because good fables can give you some new insight into the world and allow you to think about a situation differently. But fables are not useful in the sense of giving you advice about what to do tomorrow, or how to reach an agreement between the West and Iran. The same is true about game theory …

In general, I would say there were too many claims made by game theoreticians about its relevance. Every book of game theory starts with “Game theory is very relevant to everything that you can imagine, and probably many things that you can’t imagine.” In my opinion that’s just a marketing device …

So — contrary to Rodrik’s optimism — I would argue that although different ’empirical’ approaches have been — more or less — integrated into mainstream economics, there is still a long way to go before economics has become a true empirical science.

Between debt and the devil

22 December, 2015 at 12:29 | Posted in Economics | Comments Off on Between debt and the devil

 

Dani Rodrik on math and models (VI)

21 December, 2015 at 11:28 | Posted in Economics | 5 Comments

According to Dani Rodrik — as argued in Economics Rules — an economic model basically consists of ‘clearly stated assumptions and behavioral mechanisms’ that easily lend themselves to mathematical treatment. Furthermore, Rodrik thinks that the usual critique against the use of mathematics in economics is wrong-headed. Math only plays an instrumental role in economic models:

First, math ensures that the elements of a model … are stated clearly and are transparent …

The second virtue of mathematics is that it ensures the internal consistency of a model — simply put, that the conclusions follow from the assumptions.

What is lacking in this overly simplistic view on using mathematical modeling in economics is an ontological reflection on the conditions that have to be fulfilled for the methods of mathematical modeling to be appropriately applied.

Using formal mathematical modeling, mainstream economists like Rodrik sure can guarantee that the conclusions hold given the assumptions. However, there is no warrant that the validity we get in abstract model worlds automatically transfers to real world economies. Validity and consistency may be good, but they aren’t enough. From a realist perspective, both relevance and soundness are sine qua non.

In their search for validity, rigour and precision, mainstream macro modellers of various ilks construct microfounded DSGE models that standardly assume rational expectations, Walrasian market clearing, unique equilibria, time invariance, linear separability and homogeneity of both inputs/outputs and technology, infinitely lived intertemporally optimizing representative household/consumer/producer agents with homothetic and identical preferences, etc., etc. At the same time, the models standardly ignore complexity, diversity, uncertainty, coordination problems, non-market clearing prices, real aggregation problems, emergence, expectations formation, etc., etc.

Behavioural and experimental economics — not to speak of psychology — show beyond any doubt that ‘deep parameters’ — people’s preferences, choices and forecasts — are regularly influenced by those of other participants in the economy. And how about the homogeneity assumption? If all actors are the same — why and with whom do they transact? And why does economics have to be exclusively teleological (concerned with intentional states of individuals)? Where are the arguments for that ontological reductionism? And what about collective intentionality and constitutive background rules?

These are all justified questions – so, in what way can one maintain that these models give workable microfoundations for macroeconomics? Science philosopher Nancy Cartwright gives a good hint at how to answer that question:

Our assessment of the probability of effectiveness is only as secure as the weakest link in our reasoning to arrive at that probability. We may have to ignore some issues to make heroic assumptions about them. But that should dramatically weaken our degree of confidence in our final assessment. Rigor isn’t contagious from link to link. If you want a relatively secure conclusion coming out, you’d better be careful that each premise is secure going on.
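Cartwright’s ‘weakest link’ point can be put in numbers (my illustration, not hers). If a conclusion needs all of its premises, and each premise is, optimistically and independently, 90% secure, the conclusion is far less secure than any single premise:

```python
# Hypothetical numbers: ten independent premises, each 90% secure.
premises = [0.9] * 10

confidence = 1.0
for p in premises:
    confidence *= p   # the chain is only as strong as the product of its links

print(round(confidence, 2))   # about 0.35, far below the 0.9 of any one premise
```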

In all those economic models that Rodrik praises — where the conclusions follow deductively from the assumptions — mathematics is the preferred means of assuring that we establish what we want with deductive rigour and precision. The problem, however, is that what guarantees this deductivity is, as a rule, the same thing that makes the external validity of the models wanting. The core assumptions (CA), as we have shown in previous posts, are as a rule not very many, and so, if the modellers want to establish ‘interesting’ facts about the economy, they have to make sure the set of auxiliary assumptions (AA) is large enough to enable the derivations. But then — how do we validate that large set of assumptions, which gives Rodrik his ‘clarity’ and ‘consistency,’ outside the model itself? How do we evaluate those assumptions that are clearly used for no other purpose than to guarantee an analytical-formalistic use of mathematics? And how do we know that our model results ‘travel’ to the real world?

On a deeper level one could argue that the one-eyed focus on validity and consistency makes mainstream economics irrelevant, since its insistence on deductive-axiomatic foundations doesn’t earnestly consider the fact that its formal logical reasoning, inferences and arguments show an amazingly weak relationship to their everyday real world equivalents. Although the formal logic focus may deepen our insights into the notion of validity, the rigour and precision come with a devastatingly important trade-off: the higher the level of rigour and precision, the smaller the range of real world application. So the more mainstream economists insist on formal logic validity, the less they have to say about the real world. The time is due, and overdue, for getting the priorities right …

Bootstrapping and The Münchhausen Trilemma

20 December, 2015 at 17:17 | Posted in Economics, Varia | 1 Comment

The Münchhausen Trilemma is a term used in epistemology to stress the impossibility of proving any truth, even in the fields of logic and mathematics. The name Münchhausen Trilemma was coined by the German philosopher Hans Albert in 1968 in reference to a trilemma of “dogmatism vs. infinite regress vs. psychologism” used by Karl Popper; it is a reference to the problem of “bootstrapping”, after the story of Baron Münchhausen, pulling himself and the horse on which he was sitting out of a mire by his own hair. (Wikipedia)

Dani Rodrik and not so transparent user’s guides to models (V)

19 December, 2015 at 18:55 | Posted in Economics | Comments Off on Dani Rodrik and not so transparent user’s guides to models (V)


In Dani Rodrik’s Economics Rules it is argued that ‘the multiplicity of models is economics’ strength,’ and that a science that has a different model for everything is non-problematic, since

economic models are cases that come with explicit user’s guides — teaching notes on how to apply them. That’s because they are transparent about their critical assumptions and behavioral mechanisms.

Hmm …

That really is at odds with yours truly’s experience from four decades of studying mainstream economic models.

When, e.g., criticizing the basic (DSGE) workhorse macroeconomic model for its inability to explain involuntary unemployment, its defenders maintain that later ‘successive approximations’ and elaborations — especially newer search models — manage to do just that. However, one of the more conspicuous problems with those ‘solutions’ is that they — as e.g. Pissarides’ ‘Loss of Skill during Unemployment and the Persistence of Unemployment Shocks’ (QJE 1992) — are as a rule constructed without seriously trying to warrant that the model-immanent assumptions and results are applicable in the real world. External validity is more or less a non-existent problematique, sacrificed on the altar of model derivations. This is not by chance. These theories and models do not come at all with the transparent and ‘explicit user’s guides’ that Rodrik maintains they do. And there’s a very obvious reason for that. For how could one even imagine empirically testing assumptions such as Pissarides’ ‘model 1’ assumptions — that reality is adequately represented by ‘two overlapping generations of fixed size,’ ‘wages determined by Nash bargaining,’ ‘actors maximizing expected utility,’ ‘endogenous job openings,’ ‘job matching describable by a probability distribution’ — without coming to the conclusion that this is, in terms of realism and relevance, far from ‘good enough’ or ‘close enough’ to real world situations?

Suck on that — and tell me if those typical mainstream neoclassical modeling assumptions in any possibly relevant way — with or without due pragmatic considerations — can be considered anything else but imagined model-world assumptions that have nothing at all to do with the real world we happen to live in!

There is no real transparency here as to the deeper significance and role of the chosen set of axiomatic assumptions.

And there is no explicit user’s guide or indication of how we should be able to, as Rodrik puts it, ‘discriminate’ between the ‘bewildering array of possibilities’ that flow out of such outlandish and known-to-be-false assumptions.

Theoretical models built on piles of known-to-be-false assumptions are nowhere near being scientific explanations. On the contrary. They are untestable, and a fortiori totally worthless from the point of view of scientific relevance.

Dani Rodrik disses rethink economics students (IV)

19 December, 2015 at 11:35 | Posted in Economics | 8 Comments

Economics students today are complaining more and more about the way economics is taught. The lack of fundamental diversity — not just path-dependent elaborations of the mainstream canon — and the narrowing of the curriculum dissatisfy econ students all over the world. The frustrating lack of real world relevance has led many of them to demand that the discipline start developing a more open and pluralistic theoretical and methodological attitude.

Dani Rodrik has little understanding of these views, finding it hard to ‘understand these complaints in the light of the patent multiplicity of models within economics.’ Rodrik shares the view of his colleagues Paul Krugman, Greg Mankiw and Simon Wren-Lewis — all of whom he approvingly cites — that there is nothing basically wrong with ‘standard theory’ and ‘economics textbooks.’ As long as policy makers and economists stick to ‘standard economic analysis,’ everything is fine. Economics is just a method that makes us ‘think straight’ and ‘reach correct answers.’

Writes Rodrik in Economics Rules:

Pluralism with respect to conclusions is one thing; pluralism with respect to methods is something else … An aspiring economist has to formulate clear models … These models can incorporate a wide range of assumptions … but not all assumptions are equally acceptable. In economics, this means that the greater the departure from benchmark assumptions, the greater the burden of justifying and motivating why those departures are needed …

Some methods are better than others … For some these constraints represent a kind of methodological straitjacket that crowds out new thinking. But it is easy to exaggerate the rigidity of the rules within which the profession operates.

Although Rodrik often likes to present himself as a kind of anti-establishment economics iconoclast, when it really counts he shows what he is — a mainstream neoclassical economist fanatically defending the relevance of standard economic modeling strategies.

So, all you young economics students who want to see real change in economics and the way it’s taught — now you know where Rodrik, Mankiw, Krugman & Co. stand. If you really want something other than the same old mainstream neoclassical catechism, if you really don’t want to be force-fed with mainstream neoclassical deductive-axiomatic analytical formalism, you have to look elsewhere.

Heterodox economics, in the first instance, is a rejection of a very specific form of methodological reductionism. It is a rejection of the view that formalistic methods are everywhere and always appropriate.

Tony Lawson

Radical? Yes! Neoclassical? No!

18 December, 2015 at 20:37 | Posted in Economics | Comments Off on Radical? Yes! Neoclassical? No!


Most sciences engage in theoretical simplifications of some kind. I don’t think anyone disputes that. The question is what kind of theoretical simplifications one works with. And it is precisely here that the contrast between economic theory and, say, physics is striking. Where physics can carry out both material and ideal ‘Galilean experiments’ (on Galilean experiments, see philosopher of science Nancy Cartwright’s Hunting Causes and Using Them: Approaches in Philosophy and Economics (CUP 2007)) and arrive at genuine truths about the empirical world, neoclassical economists — whether they regard themselves as ‘radical’ or not — lack that possibility with their theories. In Robert Lucas’s ‘analogue economies’ or Robert Sugden’s ‘surrogate models’ we do achieve verificational deductivity (by, among other things, ascribing to these economies characteristics that can be represented mathematically) — but only at the price of deficient external validity. Where physics has ‘high leverage’ with its ‘lean’ laws and principles, the economist gets nowhere at all without a mass of extra assumptions. The crux — for the economist, but not for the physicist — is that the validity of the conclusions depends almost exclusively on all those special assumptions that have to be made to secure the deduction.

And it is precisely this that is the problem for those who regard themselves as in some sense ‘radical’ and yet choose to make use of neoclassical economics. Certainly they can show everything they want with this theory. But they can also show the exact opposite! When they apply their theory empirically, whether they can say anything empirically relevant depends to a very small extent on the theory as such, and almost exclusively on the special assumptions added on top of it. From this it also follows that the whole case for a ‘radical’ economics built on this theoretical foundation is a non sequitur. The problem that I and other methodologically oriented researchers have tried to get neoclassical economists to grasp is that the ‘rigorous’ and ‘exact’ conclusions of the economic models seem hopeless to validate outside the models. And what joy or use, then, do we have of them?

My main criticism of all who try to make use of the deductivist neoclassical paradigm and its various instantiations — whether one is ‘radical’ or not — is that they are gravely problematic from a science-theoretic and methodological point of view. What neoclassical economists — ‘radical’ or not — do instead is ‘look the other way’ and make it appear as if neoclassical theory stood above all criticism as long as one cannot present an alternative that would be ‘better.’ I think most people would nevertheless agree that one can raise criticism against theories/models/paradigms without necessarily first having shown that something ‘better’ may exist. If something is not good, it should be open to criticism, whether there are alternatives or not. To claim anything else is to make things far too easy for oneself. When someone points out that the emperor has no clothes, we should at least try to answer that claim, and not just wave it away with the argument that no one else wears the robes any better.

If it is real, concrete, empirical, relevant economic matters of fact we want to analyse and understand, I do not see why we should take the detour via an unrealistic, irrelevant, axiomatic-deductive theory that builds on — to speak with Friedrich Hayek — ‘quasi-omniscient individuals,’ and may possibly be regarded as true a priori, but which, when applied, ‘lacks the justification that arises when our hypothetical world resembles the situation in the real world.’

Econometrics — playing tennis with the net down

18 December, 2015 at 11:19 | Posted in Statistics & Econometrics | Comments Off on Econometrics — playing tennis with the net down

Suppose you test a highly confirmed hypothesis, for example, that the price elasticity of demand is negative. What would you do if the computer were to spew out a positive coefficient? Surely you would not claim to have overthrown the law of demand … Instead, you would rerun many variants of your regression until the recalcitrant computer finally acknowledged the sovereignty of your theory …

Only the naive are shocked by such soft and gentle testing … Easy it is. But also wrong, when the purpose of the exercise is not to use a hypothesis, but to determine its validity …

Econometric tests are far from useless. They are worth doing, and their results do tell something … But many economists insist that economics can deliver more, much more, than merely, more or less, plausible knowledge, that it can reach its results with compelling demonstrations. By such a standard how should one describe our usual way of testing hypotheses? One possibility is to interpret it as Blaug [The Methodology of Economics, 1980, p. 256] does, as ‘playing tennis with the net down’ …

Perhaps my charge that econometric testing lacks seriousness of purpose is wrong … But regardless of the cause, it should be clear that most econometric testing is not rigorous. Combining such tests with formalized theoretical analysis or elaborate techniques is another instance of the principle of the strongest link. The car is sleek and elegant; too bad the wheels keep falling off.

A room with a view (personal)

17 December, 2015 at 20:42 | Posted in Varia | Comments Off on A room with a view (personal)

After almost forty years in Lund, yours truly last year returned to the town where he was born and bred — Malmö. Living on the top floor of this grandiose building, next to The Magistrate’s Park, and with The Opera and The Municipal Art Gallery just across the street, always convinces me returning was a good decision …

Dani Rodrik’s pseudo-pluralism (III)

17 December, 2015 at 15:20 | Posted in Economics | Comments Off on Dani Rodrik’s pseudo-pluralism (III)

In Dani Rodrik’s Economics Rules the proliferation of economic models during the last twenty to thirty years is presented as a sign of great diversity and an abundance of new ideas:

Rather than a single, specific model, economics encompasses a collection of models … Economics is in fact, a collection of diverse models that do not have a particular ideological bent or lead to a unique conclusion …

The possibilities of social life are too diverse to be squeezed into unique frameworks. But each economic model is like a partial map that illuminates a fragment of the terrain …

Different contexts … require different models … When models are selected judiciously, they are a source of illumination …

The correct answer to almost any question in economics is: It depends. Different models, each equally respectable, provide different answers.

But, again, it’s not, really, that simple.

Just as Colander, Holt, and Rosser argued a decade ago in “The Changing Face of Mainstream Economics” (Review of Political Economy, vol. 16, 2004), Rodrik also wants to purvey the view that mainstream economics is an open and pluralistic “let a thousand flowers bloom” science.

But in reality it is rather “plus ça change, plus c’est la même chose.”

Why? Because almost all the change and diversity that Rodrik applauds only take place within the analytic-formalistic modeling strategy that makes up the core of mainstream economics. All the flowers that do not live up to the precepts of the mainstream methodological canon are pruned. You’re free to take your analytical formalist models and apply them to whatever you want — as long as you do it (Colander et al., p. 492) “with a careful understanding of the strengths of the recent orthodox approach and with a modeling methodology acceptable to the mainstream.” If you do not follow this particular mathematical-deductive analytical formalism, you’re not even considered to be doing economics. “If it isn’t modeled, it isn’t economics.” This isn’t pluralism. It’s a methodological reductionist straitjacket.

So, even though we have seen a proliferation of models, it has almost exclusively taken place as a kind of axiomatic variation — where the core assumptions (CA) are usually untouched — within the standard ur-model, which is always (following an unwritten but impregnable rule) used as a self-evident benchmark. Seen from the perspective presented here, that is actually just another variant of theory immunization. When the preferred axiomatic specification fails (we obviously don’t have a case of perfect competition (auxiliary assumption AAi)) — just switch from AAi to AAj (e.g. monopolistic competition).
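The switch is easy to caricature in code. In this deliberately crude sketch (all numbers invented), the same core assumption of profit maximization yields very different predicted prices depending on which market-structure AA is bolted on, so almost any observed price can be ‘accounted for’ by picking the convenient AA after the fact:

```python
# Same CA (profit maximization), different AA (market structure).
# Linear demand p = a - b*q, constant marginal cost c; all numbers invented.
a, b, c = 100.0, 1.0, 20.0

def price_perfect_competition():
    # AA_i: price-taking firms drive the price down to marginal cost.
    return c

def price_monopoly():
    # AA_j: a single profit maximizer sets MR = MC, so q = (a - c) / (2b).
    q = (a - c) / (2 * b)
    return a - b * q

print(price_perfect_competition())  # 20.0
print(price_monopoly())             # 60.0
# Whatever price we observe between 20 and 60, some intermediate AA
# (e.g. monopolistic competition) can be chosen to 'fit' it.
```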

In Rodrik’s world (p. 71) “newer generations of models do not render the older generations wrong or less relevant,” but “simply expand the range of the discipline’s insights.” I don’t want to sound derisory or patronizing, but although it’s easy to say what Rodrik says, we cannot have our cake and eat it. Analytical formalism doesn’t save us from either specifying the intended areas of application of the models, or having to accept them as rival models facing the risk of being put to the test and found falsified.

The insistence on using analytical formalism and mathematical methods comes at a high cost — it often makes the analysis irrelevant from an empirical-realist point of view.

Applying closed analytical-formalist-mathematical-deductivist-axiomatic models, built on atomistic-reductionist assumptions, to a world assumed to consist of atomistic-isolated entities is a sure recipe for failure when the real world is known to be an open system where complex and relational structures and agents interact. Validly deducing things in models of that kind doesn’t much help us understand or explain what is taking place in the real world we happen to live in. Validly deducing things from patently unreal assumptions — that we all know are purely fictional — makes most of the modeling exercises pursued by mainstream economists rather pointless. It’s simply not the stuff that real understanding and explanation in science is made of. Had Rodrik not been so in love with his smorgasbord of models, he would have perceived this too. Just telling us that the plethora of models that make up modern economics “expands the range of the discipline’s insights” is nothing short of hand waving.

No matter how many thousands of models mainstream economists come up with, as long as they are just axiomatic variations of the same old mathematical-deductive ilk, they will not take us one single inch closer to giving us relevant and usable means to further our understanding and explanation of real economies.

An expansionary fiscal policy is now required

17 December, 2015 at 10:24 | Posted in Economics | 1 Comment

Yours truly, Lars Ahnland and Rodney Edvinsson wrote the other day in Svenska Dagbladet about the need for new thinking in monetary policy.

Hans Palmstierna and Martin Moraeus commented on the article a couple of days later.

Yesterday we wrote in our final rejoinder:

Sweden today finds itself in a situation uncomfortably similar to a liquidity trap. In such a situation monetary policy becomes ineffective, since firms stop borrowing for investment and instead pay down their existing loans. Despite a negative interest rate and a strong expansion of the money supply, the Riksbank has failed to reach its inflation target. Moreover, resource utilization and purchasing power have developed weakly for many years. The only thing being inflated is house prices. If these suddenly start to fall while the debts rise in value, purchasing power — and with it the willingness to invest — will fall even further. Then the liquidity trap becomes a sinkhole …

Money in itself does not create resources, but it can be a lubricant that gets existing resources utilized. The problem today is that the private banking system has failed to create money for necessary investment. Instead the loans go to a runaway housing market.

That is why a strongly expansionary fiscal policy, financed with new money, is now required. The low level of resource utilization shows that the scope is considerable. This applies especially to housing construction, which has been at a European rock-bottom level for almost twenty years. A new, climate-smart ‘million programme’ could, for instance, solve not only the housing shortage but also alleviate a host of other problems. It would dampen the rise in house prices — and thereby household indebtedness — and create masses of new jobs, also for refugees if they are given the right training. It would also help Sweden along toward the climate targets set in Paris.

Added: And Thomas Gür is apparently upset that we point out — what is well known to everyone with even a smattering of economic history — that the austerity policies of the 1930s produced mass unemployment that facilitated the Nazi seizure of power in Germany. Well, what can one say? You just shake your head …

Dani Rodrik’s blind spot (II)

16 December, 2015 at 18:53 | Posted in Economics, Theory of Science & Methodology | 1 Comment

As I argued in a previous post, Dani Rodrik’s Economics Rules describes economics as a more or less problem-free smorgasbord collection of models. Economics is portrayed as advancing through a judicious selection from a continually expanding library of models, models that are presented as “partial maps” or “simplifications designed to show how specific mechanisms work.”

But one of the things that’s missing in Rodrik’s view of economic models is the all-important distinction between core and auxiliary assumptions. Although Rodrik repeatedly speaks of ‘unrealistic’ or ‘critical’ assumptions, he basically just lumps them all together, without differentiating between different types of assumptions, axioms or theorems. In a typical passage, Rodrik writes (p. 25):

Consumers are hyperrational, they are selfish, they always prefer more consumption to less, and they have a long time horizon, stretching into infinity. Economic models are typically assembled out of many such unrealistic assumptions. To be sure, many models are more realistic in one or more of these dimensions. But even in these more layered guises, other unrealistic assumptions can creep in somewhere else.

Modern mainstream (neoclassical) economists ground their models on a set of core assumptions (CA) — basically describing the agents as ‘rational’ actors — and a set of auxiliary assumptions (AA). Together CA and AA make up what I will call the ur-model (M) of all mainstream neoclassical economic models. Based on these two sets of assumptions, they try to explain and predict both individual (micro) and — most importantly — social phenomena (macro).

The core assumptions typically consist of:

CA1 Completeness — rational actors are able to compare different alternatives and decide which one(s) he prefers.

CA2 Transitivity — if the actor prefers A to B, and B to C, he must also prefer A to C.

CA3 Non-satiation — more is preferred to less.

CA4 Maximizing expected utility — in choice situations under risk (calculable uncertainty) the actor maximizes expected utility.

CA5 Consistent efficiency equilibria — the actions of different individuals are consistent, and the interaction between them results in an equilibrium.

When the actors are described as rational in these models, the concept of rationality used is instrumental rationality — consistently choosing the preferred alternative, which is judged to have the best consequences for the actor given his (in the model exogenously given) wishes/interests/goals. How these preferences/wishes/interests/goals are formed is typically not considered to be within the realm of rationality, and a fortiori not to constitute part of economics proper.

The picture given by this set of core assumptions (rational choice) is a rational agent with strong cognitive capacity that knows what alternatives he is facing, evaluates them carefully, calculates the consequences and chooses the one — given his preferences — that he believes has the best consequences according to him.

Weighing the different alternatives against each other, the actor makes a consistent optimizing (typically described as maximizing some kind of utility function) choice, and acts accordingly.
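Read literally, CA1-CA5 describe an agent like the one in the following sketch, which is my rendering of the textbook story rather than any particular model’s code: complete and transitive preferences over lotteries, represented by a single utility scale whose expectation the agent maximizes:

```python
import math

def utility(wealth):
    """A concave utility function; non-satiation (CA3): more is preferred to less."""
    return math.sqrt(wealth)

def expected_utility(lottery):
    """CA4: evaluate a lottery [(probability, wealth), ...] by its expected utility."""
    return sum(p * utility(w) for p, w in lottery)

def choose(alternatives):
    """CA1/CA2: completeness and transitivity let the agent rank every
    alternative on a single scale and simply pick the maximum."""
    return max(alternatives, key=lambda item: expected_utility(item[1]))

alternatives = [
    ("safe",   [(1.0, 100)]),
    ("gamble", [(0.5, 0), (0.5, 250)]),
]
print(choose(alternatives)[0])   # 'safe': 10 > 0.5 * sqrt(250), about 7.9
```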

Besides the core assumptions (CA), the model also typically has a set of auxiliary assumptions (AA), spatio-temporally specifying the kind of social interaction between ‘rational actors’ that takes place in the model. These assumptions can be seen as giving answers to questions such as

AA1 who are the actors and where and when do they act

AA2 which specific goals do they have

AA3 what are their interests

AA4 what kind of expectations do they have

AA5 what are their feasible actions

AA6 what kind of agreements (contracts) can they enter into

AA7 how much and what kind of information do they possess

AA8 how do the actions of the different individuals/agents interact with each other.

So, the ur-model of all economic models basically consists of a general specification of what (axiomatically) constitutes optimizing rational agents and a more specific description of the kind of situations in which these rational actors act (making AA serve as a kind of specification/restriction of the intended domain of application for CA and its deductively derived theorems). The list of assumptions can never be complete, since there will always be unspecified background assumptions and some (often) silent omissions (like closure, transaction costs, etc., regularly based on negligibility and applicability considerations). The hope, however, is that the ‘thin’ list of assumptions will be sufficient to explain and predict ‘thick’ phenomena in the real, complex world.

But in Rodrik’s model depiction we are essentially given the following structure,

A1, A2, … An
———————-
Theorem,

where a set of undifferentiated assumptions are used to infer a theorem.

This, however, is too vague and imprecise to be helpful, and does not give a true picture of the usual mainstream modeling strategy, where — as I’ve argued in a previous post — there’s a differentiation between a set of law-like hypotheses (CA) and a set of auxiliary assumptions (AA), giving the more adequate structure

CA1, CA2, … CAn & AA1, AA2, … AAn
———————————————–
Theorem

or,

CA1, CA2, … CAn
———————-
(AA1, AA2, … AAn) → Theorem,

more clearly underlining the function of AA as a set of (empirical, spatio-temporal) restrictions on the applicability of the deduced theorems.

In the extreme cases we get

CA1, CA2, … CAn
———————
Theorem,

where the deduced theorems are analytical entities with universal and totally unrestricted applicability, or

AA1, AA2, … AAn
———————-
Theorem,

where the deduced theorem is transformed into an untestable tautological thought-experiment without any empirical commitment whatsoever beyond telling a coherent fictitious as-if story.

Not clearly differentiating between CA and AA means that Rodrik can’t make this all-important interpretative distinction, and so opens the door to unwarrantedly “saving” or “immunizing” models from almost any kind of critique by simple equivocation — interpreting models now as empirically empty, purely deductive-axiomatic analytical systems, now as models with explicit empirical aspirations. Flexibility is usually something people deem positive, but in this methodological context it is more a sign of trouble than of real strength. Models that are compatible with everything, or come with unspecified domains of application, are worthless from a scientific point of view.

What we do in life echoes in eternity

15 December, 2015 at 10:44 | Posted in Varia | 4 Comments

ken

Courage is the capability to confront fear, as when, in front of the powerful and mighty, one does not step back but stands up for one’s right not to be humiliated or abused in any way by the rich and powerful.

Courage is to do the right thing in spite of danger and fear. To keep on even if opportunities to turn back are given. Like in the great stories. The ones where people have lots of chances of turning back — but don’t.

As when Sir Nicholas Winton organised the rescue of 669 children destined for Nazi concentration camps during World War II.

Or as when Ernest Shackleton, in April 1916, aboard the small boat ‘James Caird’, spent 16 days crossing 1,300 km of ocean to reach South Georgia, then trekked across the island to a whaling station, and finally could rescue the remaining men from the crew of ‘Endurance’ left on Elephant Island.

Not a single member of the expedition died.

Not to step back — that’s what creates courageous acts that stay in our memories and mean something.

What we do in life echoes in eternity.

Dani Rodrik’s smorgasbord view of economic models (I)

14 December, 2015 at 15:56 | Posted in Economics | 7 Comments

Traveling by train to Stockholm during the weekend, yours truly had plenty of time to catch up on some reading.

Dani Rodrik’s Economics Rules (Oxford University Press, 2015) is one of those rare examples where a mainstream economist — instead of just looking the other way — takes his time to ponder on the tough and deep science-theoretic and methodological questions that underpin the economics discipline.

There’s much in the book I like and appreciate, but there is also a very disturbing apologetic tendency to blame all the shortcomings on economists and to depict economics itself as a problem-free smorgasbord collection of models. If you just choose the appropriate model from the immense and varied smorgasbord, there’s no problem. It is as if all problems in economics would be conjured away if only we could make the proper model selection. To Rodrik the problem is always the economists, never economics itself. I sure wish it were that simple, but having written more than ten books on the history and methodology of economics, and having spent almost forty years among them econs, I have to confess I don’t quite recognize the picture …


Time for a new perspective on government debt policy

14 December, 2015 at 13:26 | Posted in Economics | Comments Off on Time for a new perspective on government debt policy

Yours truly and the economic historians Lars Ahnland and Rodney Edvinsson had an article in Saturday’s Svenska Dagbladet on the need for a new perspective on government debt policy:

In the normal case there is a risk that inflation increases if a state finances itself by printing money. But today a higher rate of inflation is not a threat but a blessing. One can liken inflation in the economy to the blood circulation of a human being. The circulation of money is what keeps economic activity alive. Just as people’s health is threatened by both too high and too low blood pressure, the health of the economy is threatened by too high and too low inflation …


Sweden is in a particularly favourable position. The Swedish foreign debt is very small. Last year it was only about 7 per cent of GDP. This strongly limits the risk that currency speculators could sink the Swedish krona by dumping government securities and thereby inflate the value of the foreign debt.

As a rule, the size of the government debt has seldom in itself been a decisive cause of economic crises. Rather, it has been a symptom of our being in a crisis — a crisis that in all likelihood would have become even worse had the deficits in the public finances not been allowed to grow.

Given the great challenges Sweden faces, the talk of responsibility for the state budget comes across as irresponsible. It is not an end in itself. Instead of ‘safeguarding the public finances,’ the government ought to do what it is meant to do — safeguard the future of society and take on the great challenges we face.

