## Some unsettled questions in macroeconomic theory

31 October, 2016 at 18:39 | Posted in Economics

## What is mainstream economics?

30 October, 2016 at 15:00 | Posted in Economics

The reason you study an issue at all is usually that you care about it, that there’s something you want to achieve or see happen. Motivation is always there; the trick is to do all you can to avoid motivated reasoning that validates what you want to hear.

In my experience, modeling is a helpful tool (among others) in avoiding that trap, in being self-aware when you’re starting to let your desired conclusions dictate your analysis. Why? Because when you try to write down a model, it often seems to lead some place you weren’t expecting or wanting to go. And if you catch yourself fiddling with the model to get something else out of it, that should set off a little alarm in your brain.

Hmm …

So when Krugman and other ‘modern’ mainstream economists use their models — standardly assuming rational expectations, Walrasian market clearing, unique equilibria, time invariance, linear separability and homogeneity of both inputs/outputs and technology, infinitely lived intertemporally optimizing representative agents with homothetic and identical preferences, etc. — and standardly ignoring complexity, diversity, uncertainty, coordination problems, non-market clearing prices, real aggregation problems, emergence, expectations formation, etc. — we are supposed to believe that this somehow helps them ‘to avoid motivated reasoning that validates what you want to hear.’

Yours truly is, to say the least, far from convinced. The alarm that goes off in *my* brain is that this, rather than being helpful for understanding real-world economic issues, sounds more like an ill-advised *plaidoyer* for voluntarily taking on a methodological straitjacket of unsubstantiated and known-to-be-false assumptions.

Modern (expected) utility theory is a good example of this. Leaving the specification of preferences almost without any restrictions whatsoever, every imaginable piece of evidence is safely made compatible with the all-embracing ‘theory’ — and a theory without informational content never risks being empirically tested and found falsified. Used in mainstream economics’ ‘thought experimental’ activities, it may of course be very ‘handy’, but it is totally void of any empirical value.

Utility theory has, like so many other economic theories, morphed into an empty theory of everything. And a theory of everything explains nothing — just like Gary Becker’s ‘economics of everything’, it only makes nonsense out of economic science.

Using false assumptions, mainstream modelers can derive whatever conclusions they want. Wanting to show that ‘all economists consider austerity to be the right policy,’ simply assume, for example, that ‘all economists are from Chicago’ and ‘all economists from Chicago consider austerity to be the right policy.’ The conclusion follows by deduction — but is of course factually totally wrong. Models and theories built on that kind of reasoning are nothing but a pointless waste of time.

Mainstream economics today is mainly an approach in which the goal seems to be to write down a set of empirically untested assumptions and then deductively infer conclusions from them. When applying this deductivist thinking to economics, economists usually set up ‘as if’ models based on a set of tight axiomatic assumptions from which consistent and precise inferences are made. The beauty of this procedure is of course that if the axiomatic premises are true, the conclusions necessarily follow. The snag is that if the models are to be relevant, we also have to argue that their precision and rigour still hold when they are applied to real-world situations. They often don’t, for the simple reason that empty theoretical exercises of this kind do not tell us anything about the world. When addressing real economies, the idealizations necessary for the deductivist machinery to work simply don’t hold.

So how should we evaluate the search for ever greater precision and the concomitant arsenal of mathematical and formalist models? To a large extent, the answer hinges on what we want our models to perform and how we basically understand the world.

The world as we know it has limited scope for certainty and perfect knowledge. Its intrinsic and almost unlimited complexity and the interrelatedness of its parts prevent us from treating it as constituted by atoms with discretely distinct, separable and stable causal relations. Our knowledge accordingly has to be of a rather fallible kind. To search for deductive precision and rigour in such a world is self-defeating. The only way to defend such an endeavour is to restrict oneself to proving things in closed model-worlds. Why we should care about these models, rather than asking questions of relevance, is hard to see.

## Econometrics and the rabbits principle

29 October, 2016 at 19:03 | Posted in Statistics & Econometrics

In econometrics one often gets the feeling that many of its practitioners think of it as a kind of automatic inferential machine: input data and out comes causal knowledge. This is like pulling a rabbit from a hat. Great — but first you have to put the rabbit in the hat. And this is where assumptions come into the picture.

The assumption of imaginary ‘superpopulations’ is one of the many dubious assumptions used in modern econometrics, and as Clint Ballinger has highlighted, it is a particularly questionable rabbit-pulling assumption:

Inferential statistics are based on taking a random sample from a larger population … and attempting to draw conclusions about a) the larger population from that data and b) the probability that the relations between measured variables are consistent or are artifacts of the sampling procedure.

However, in political science, economics, development studies and related fields the data often represents as complete an amount of data as can be measured from the real world (an ‘apparent population’). It is not the result of a random sampling from a larger population. Nevertheless, social scientists treat such data as the result of random sampling.

Because there is no source of further cases a fiction is propagated—the data is treated as if it were from a larger population, a ‘superpopulation’ where repeated realizations of the data are imagined. Imagine there could be more worlds with more cases and the problem is fixed …

What ‘draw’ from this imaginary superpopulation does the real-world set of cases we have in hand represent? This is simply an unanswerable question. The current set of cases could be representative of the superpopulation, and it could be an extremely unrepresentative sample, a one in a million chance selection from it …

The problem is not one of statistics that need to be fixed. Rather, it is a problem of the misapplication of inferential statistics to non-inferential situations.
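Ballinger’s point can be made concrete with a toy computation (the data are made up for illustration): the conventional standard error is perfectly computable from an ‘apparent population’, but the question it answers is about imagined repeated draws from a fictitious superpopulation.

```python
# Sketch with invented data: unemployment rates for ALL 10 regions of a
# country -- an 'apparent population', not a sample from anything larger.
import math

rates = [4.2, 5.1, 3.8, 6.0, 4.9, 5.5, 4.4, 5.8, 3.9, 5.0]
n = len(rates)

# Descriptive statistics are perfectly well defined:
mean = sum(rates) / n

# The conventional standard error of the mean can still be computed ...
var = sum((r - mean) ** 2 for r in rates) / (n - 1)
se = math.sqrt(var / n)

# ... but since these 10 regions ARE the whole population, the sampling
# 'error' of the mean is exactly zero. The SE only answers a question
# about imagined repeated realizations from a superpopulation.
print(mean, se)
```

The number `se` is not wrong arithmetically; it simply has no referent unless one buys the superpopulation fiction.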

## Economics — the triumph of ideology over science

29 October, 2016 at 11:24 | Posted in Economics

Research shows not only that individuals sometimes act differently than standard economic theories predict, but that they do so regularly, systematically, and in ways that can be understood and interpreted through alternative hypotheses, competing with those utilised by orthodox economists.

To most market participants – and, indeed, ordinary observers – this does not seem like big news … In fact, this irrationality is no news to the economics profession either. John Maynard Keynes long ago described the stock market as based not on rational individuals struggling to uncover market fundamentals, but as a beauty contest in which the winner is the one who guesses best what the judges will say …

Adam Smith’s invisible hand – the idea that free markets lead to efficiency as if guided by unseen forces – is invisible, at least in part, because it is not there …

For more than 20 years, economists were enthralled by so-called “rational expectations” models which assumed that all participants have the same (if not perfect) information and act perfectly rationally, that markets are perfectly efficient, that unemployment never exists (except when caused by greedy unions or government minimum wages), and where there is never any credit rationing.

That such models prevailed, especially in America’s graduate schools, despite evidence to the contrary, bears testimony to a triumph of ideology over science. Unfortunately, students of these graduate programmes now act as policymakers in many countries, and are trying to implement programmes based on the ideas that have come to be called market fundamentalism … Good science recognises its limitations, but the prophets of rational expectations have usually shown no such modesty.

## Economics — non-ideological and value-free? I’ll be dipped!

27 October, 2016 at 21:29 | Posted in Economics

I’ve subsequently stayed away from the minimum wage literature for a number of reasons. First, it cost me a lot of friends. People that I had known for many years, for instance, some of the ones I met at my first job at the University of Chicago, became very angry or disappointed. They thought that in publishing our work we were being traitors to the cause of economics as a whole.

Back in 1992, New Jersey raised the minimum wage by 18 per cent while its neighbour state, Pennsylvania, left its minimum wage unchanged. Unemployment in New Jersey should — according to mainstream economics textbooks — have increased relative to Pennsylvania. However, when economists David Card and Alan Krueger gathered information on fast-food restaurants in the two states, it turned out that unemployment had actually decreased in New Jersey relative to that in Pennsylvania. Counter to mainstream demand theory, we had an anomalous case of a backward-sloping demand curve.

Lo and behold!

But of course — when facts and theory don’t agree, it’s the facts that have to be wrong …

The inverse relationship between quantity demanded and price is the core proposition in economic science, which embodies the pre-supposition that human choice behavior is sufficiently rational to allow predictions to be made. Just as no physicist would claim that “water runs uphill,” no self-respecting economist would claim that increases in the minimum wage increase employment. Such a claim, if seriously advanced, becomes equivalent to a denial that there is even minimal scientific content in economics, and that, in consequence, economists can do nothing but write as advocates for ideological interests. Fortunately, only a handful of economists are willing to throw over the teaching of two centuries; we have not yet become a bevy of camp-following whores.

James M. Buchanan in Wall Street Journal (April 25, 1996)

Economics — non-ideological and value-free? I’ll be dipped!

## What is truth in economics?

27 October, 2016 at 15:11 | Posted in Economics

In my view, scientific theories are not to be considered ‘true’ or ‘false.’ In constructing such a theory, we are not trying to get at the truth, or even to approximate to it: rather, we are trying to organize our thoughts and observations in a useful manner.

What a handy view of science.

How reassuring for all of you who have always thought that believing in the tooth fairy makes you understand what happens to kids’ teeth. Now a ‘Nobel prize’ winning economist tells you that whether or not there are such things as tooth fairies doesn’t really matter. Scientific theories are not about what is true or false, but about whether ‘they enable us to organize and understand our observations’!

What Aumann and other defenders of scientific storytelling ‘forget’ is that potential explanatory power achieved in thought-experimental models is not enough for attaining real explanations. Model explanations are at best conjectures, and whether they do or do not explain things in the real world is something we have to test. To just *believe* that you understand or explain things better with thought experiments is not enough. Without a warranted export certificate to the real world, model explanations are pretty worthless. Proving things in models is not enough.

Truth ought to be as important a concept in economics as it is in real science.

## The perils of calling your pet cat a dog …

27 October, 2016 at 11:05 | Posted in Statistics & Econometrics

Since econometrics doesn’t content itself with only making optimal predictions, but also aspires to explain things in terms of causes and effects, econometricians need loads of assumptions — the most important of these being additivity and linearity. Important, simply because if they are not true, your model is invalid and descriptively incorrect. And when the model is wrong — well, then it’s wrong.

The assumption of additivity and linearity means that the outcome variable is, in reality, linearly related to any predictors … and that if you have several predictors then their combined effect is best described by adding their effects together …

This assumption is the most important because if it is not true then even if all other assumptions are met, your model is invalid because you have described it incorrectly. It’s a bit like calling your pet cat a dog: you can try to get it to go in a kennel, or to fetch sticks, or to sit when you tell it to, but don’t be surprised when its behaviour isn’t what you expect because even though you’ve called it a dog, it is in fact a cat. Similarly, if you have described your statistical model inaccurately it won’t behave itself and there’s no point in interpreting its parameter estimates or worrying about significance tests or confidence intervals: the model is wrong.
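A minimal sketch of the ‘cat called dog’ problem, with made-up data: the true relation is exactly quadratic, yet the additive-linear model reports that the predictor has no effect at all.

```python
# Made-up data: the true relation is y = x**2, but we fit the standard
# additive-linear model y = a + b*x anyway.
xs = [-3, -2, -1, 0, 1, 2, 3]
ys = [x ** 2 for x in xs]          # perfectly deterministic, but nonlinear

n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n

# Ordinary least-squares slope and intercept:
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
    / sum((x - mx) ** 2 for x in xs)
a = my - b * mx

# The fitted 'dog' says x has no linear effect at all (slope 0), even
# though y is completely determined by x -- the cat simply isn't a dog.
print(a, b)
```

Interpreting the zero slope as ‘x doesn’t matter’ would be exactly the mistake Field warns about: the parameter estimates describe the mislabelled model, not the world.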

## Swedish university education in free fall (II)

26 October, 2016 at 18:02 | Posted in Education & School

I recently came across a compilation of gender-studies doctoral dissertations from 2014. Plenty of gems. But one of them was particularly remarkable, namely number 19 on the list. It concerns the doctoral dissertation “Rum, rytm och resande” [“Space, rhythm and travel”] from Linköping University (pdf). The compilation summarises:

“The dissertation examines railway stations as physical places and social spaces from a gender perspective. Kimstad commuter-rail station, Norrköping railway station and Stockholm Central Station are included in the study. The results show that the railway stations reproduce the gender power order and that this affects both the men and the women who spend time there.”

A doctoral student has thus spent at least four to five years and several million kronor of taxpayers’ money visiting railway stations in order to conclude that “railway stations reproduce the gender power order”. The student’s supervisor planned this work and the supervisor’s superiors approved it. Moreover, an examining committee with external reviewers has assessed the dissertation and certified that it is up to standard.

Good grief. But it gets worse.

The dissertation has an English summary. It begins:

“Results from the study show that individuals in different ways are affected by gendered power relations that dwell in rhythms of collective believes and in shape of materialized objects that encounter the commuters when visiting the railway station. While the rhythms of masculine seriality contains believes of males as potentially violent, as defenders and as bread winners, the rhythms of female seriality contains believes of women as primary mothers and housewives, of women as primary victim of sexual violence and of objectification of women’s bodies as either decent or as sexually available to heterosexual men”.

Rhythms of the gender power order. Poetic.

Faced with such self-satisfied pseudo-scientific drivel, one can only bury one’s head in one’s hands.

My own favourite example of this ‘scientific’ charlatanry comes from an issue of *Pedagogisk Forskning i Sverige* (2-3 2014), where the author of the article “En pedagogisk relation mellan människa och häst. På väg mot en pedagogisk filosofisk utforskning av mellanrummet” [“A pedagogical relation between human and horse. Towards a pedagogical-philosophical exploration of the in-between”] offers the following interesting ‘statement of intent’:

With a posthumanist approach, I illuminate and reflect on how both human and horse transcend their beings and how this opens up an in-between space with dimensions of subjectivity, corporeality and mutuality.

And then people say Swedish university education is in crisis. I wonder why …

## What it takes to make economics a real science

26 October, 2016 at 09:23 | Posted in Economics

What is science? One brief definition runs: “A systematic knowledge of the physical or material world.” Most definitions emphasize the two elements in this definition: (1) “systematic knowledge” about (2) the real world. Without pushing this definitional question to its metaphysical limits, I merely want to suggest that if economics is to be a science, it must not only develop analytical tools but must also apply them to a world that is now observable or that can be made observable through improved methods of observation and measurement. Or in the words of the Hungarian mathematical economist Janos Kornai, “In the real sciences, the criterion is not whether the proposition is logically true and tautologically deducible from earlier assumptions. The criterion of ‘truth’ is, whether or not the proposition corresponds to reality” …

One of our most distinguished historians of economic thought, George Stigler, has stated that: “The dominant influence upon the working range of economic theorists is the set of internal values and pressures of the discipline. The subjects of study are posed by the unfolding course of scientific developments.” He goes on to add: “This is not to say that the environment is without influence …” But, he continues, “whether a fact or development is significant depends primarily on its relevance to current economic theory.” What a curious relating of rigor to relevance! Whether the real world matters depends presumably on “its relevance to current economic theory.” Many if not most of today’s economic theorists seem to agree with this ordering of priorities …

Today, rigor competes with relevance in macroeconomic and monetary theory, and in some lines of development macro and monetary theorists, like many of their colleagues in micro theory, seem to consider relevance to be more or less irrelevant … The theoretical analysis in much of this literature rests on assumptions that also fly in the face of the facts … Another related recent development in which theory proceeds with impeccable logic from unrealistic assumptions to conclusions that contradict the historical record, is the recent work on rational expectations …

I have scolded economists for what I think are the sins that too many of them commit, and I have tried to point the way to at least partial redemption. This road to salvation will not be an easy one for those who have been seduced by the siren of mathematical elegance or those who all too often seek to test unrealistic models without much regard for the quality or relevance of the data they feed into their equations. But let us all continue to worship at the altar of science. I ask only that our credo be: “relevance with as much rigor as possible,” and not “rigor regardless of relevance.” And let us not be afraid to ask — and to try to answer — the really big questions.

## Swedish university education in free fall

25 October, 2016 at 17:48 | Posted in Education & School

While the erosion of higher-education funding continues, there is political pressure for more student places. But a one-sided expansion of places benefits neither society, the universities, nor the students who receive an education of doubtful quality. In the current situation, higher education’s limited budget must be used to strengthen the quality of teaching, not to expand the system further …

The quality of higher education is today in question. The doubtful quality shows up, among other places, in the quality evaluations of Universitetskanslersämbetet (the Swedish Higher Education Authority), in which almost one programme in five was failed. One contributing cause of the deficient quality may be that student numbers have grown faster than resources allow. In addition, resources have been eroded, since per-student funding is not adjusted for actual cost increases and is further reduced by a productivity deduction.

At the same time, IFAU (Institutet för arbetsmarknadspolitisk utvärdering) shows that the expansion of higher education has brought weaker students. This is a natural consequence of the fact that the transition to higher studies is already high among young people in the top grade brackets. There is no surplus of high-performing students to admit when new places are added, so students with ever weaker preparation enter as the system grows. Moreover, the quality of compulsory and upper-secondary schooling is itself in question. Teachers must spend ever more time helping students through their programmes, which drains resources from other teaching. Nor can it be ruled out that demands on students are being lowered. Higher throughput means more funding, and today’s resource-allocation system unfortunately gives universities an incentive to pass students who do not meet the requirements.

Well roared! Swedish universities and university colleges wrestle today with many problems. Two of the more acute are how to handle a situation of shrinking finances, and the fact that ever more students arrive poorly prepared for university studies.

*Why* has it come to this? Yours truly has repeatedly been approached by the media about these questions, and has then, beyond ‘the usual suspects’, also tried to raise an issue that is rarely raised in the debate, for fear of not being ‘politically correct’.

Over the past fifty years we have seen a veritable explosion of new student groups going on to university studies. In one sense this is clearly gratifying. Today we have as many doctoral students in our educational system as we had upper-secondary pupils in the 1950s. But this educational expansion has unfortunately largely come at the price of students being less able to live up to the competence requirements of higher education. Many programmes have given in and lowered their demands.

Regrettably, the students who come to our universities are on the whole ever less well equipped for their studies. The restructuring of the school system — decentralisation, deregulation and management by objectives — has, contrary to political promises, not delivered. In step with the post-secondary expansion, a corresponding contraction of knowledge has taken place in large student groups. The school policy that produced this situation hits hardest those it claims to protect — those with little or no ‘cultural capital’ in their baggage from home.

Against this background it is actually remarkable that so little attention has been paid to what the educational explosion may itself lead to.

Since fifty years ago our universities educated only a small fraction of the population, it is no bold guess — assuming that ‘ability’ in a population is at least approximately normally distributed — that the lion’s share of those students lay, ability-wise, to the right of the midpoint of the normal curve. If today we admit five times as many students to our universities, we can — under the same assumption — hardly expect an equally large share of them to lie to the right of that midpoint. Reasonably, this should — *ceteris paribus* — mean that as the proportion of the population going on to university grows, so do the difficulties many of these students face in meeting traditionally high academic standards.
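The back-of-the-envelope normal-distribution argument can be illustrated with a small simulation (the numbers are purely illustrative, assuming a standard-normal ‘ability’ distribution):

```python
# Illustrative sketch: if 'ability' is approximately standard normal in
# each cohort, admitting a larger share of the cohort must lower the
# average ability of those admitted. All numbers are invented.
import random

random.seed(1)
cohort = sorted((random.gauss(0, 1) for _ in range(100_000)), reverse=True)

def mean_of_top(share):
    """Average ability of the top `share` fraction of the cohort."""
    k = int(share * len(cohort))
    return sum(cohort[:k]) / k

elite = mean_of_top(0.10)   # admit only the top 10% (the 1950s-style case)
mass = mean_of_top(0.50)    # admit the top 50% (mass higher education)
print(elite, mass)
```

The admitted group in the mass-education scenario is still above the population average, but markedly less so, which is the whole *ceteris paribus* point about traditional academic standards becoming harder to sustain.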

If so, the state would here have yet another strong reason to increase resources to higher education, instead of, as today, running programmes on starvation rations with few teacher-led lectures in record-sized student groups. With new categories of students, increasingly recruited from homes without a study tradition, it is hard to see how, on ever tighter budgets, we are to solve the dilemma of higher demands on ever weaker-performing students.

## Statistical significance tests do not validate models

25 October, 2016 at 00:20 | Posted in Economics, Statistics & Econometrics

The word ‘significant’ has a special place in the world of statistics, thanks to a test that researchers use to avoid jumping to conclusions from too little data. Suppose a researcher has what looks like an exciting result: She gave 30 kids a new kind of lunch, and they all got better grades than a control group that didn’t get the lunch. Before concluding that the lunch helped, she must ask the question: If it actually had no effect, how likely would I be to get this result? If that probability, or p-value, is below a certain threshold — typically set at 5 percent — the result is deemed ‘statistically significant.’

Clearly, this statistical significance is not the same as real-world significance — all it offers is an indication of whether you’re seeing an effect where there is none. Even this narrow technical meaning, though, depends on where you set the threshold at which you are willing to discard the ‘null hypothesis’ — that is, in the above case, the possibility that there is no effect. I would argue that there’s no good reason to always set it at 5 percent. Rather, it should depend on what is being studied, and on the risks involved in acting — or failing to act — on the conclusions …

This example illustrates three lessons. First, researchers shouldn’t blindly follow convention in picking an appropriate p-value cutoff. Second, in order to choose the right p-value threshold, they need to know how the threshold affects the probability of a Type II error. Finally, they should consider, as best they can, the costs associated with the two kinds of errors.

Statistics is a powerful tool. But, like any powerful tool, it can’t be used the same way in all situations.
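The second lesson above (that the choice of p-value threshold affects the probability of a Type II error) can be sketched with a toy simulation. The setup is entirely hypothetical: two groups of 30, a true effect of 0.5 standard deviations, and a simple known-variance z-test standing in for the usual t-test.

```python
# Sketch: how the p-value cutoff trades Type I against Type II errors.
import random, math

random.seed(2)

def p_value(a, b):
    # Two-sample z-test on the difference in means (sigma = 1 known),
    # a deliberately simple stand-in for the usual t-test.
    z = (sum(a)/len(a) - sum(b)/len(b)) / math.sqrt(1/len(a) + 1/len(b))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def rejection_rate(effect, alpha, trials=2000, n=30):
    """Fraction of simulated studies that reject at threshold alpha."""
    hits = 0
    for _ in range(trials):
        a = [random.gauss(effect, 1) for _ in range(n)]
        b = [random.gauss(0, 1) for _ in range(n)]
        hits += p_value(a, b) < alpha
    return hits / trials

power_05 = rejection_rate(0.5, 0.05)  # power at the conventional 5% cutoff
power_01 = rejection_rate(0.5, 0.01)  # stricter cutoff, more Type II errors
print(power_05, power_01)
```

Tightening the threshold from 5% to 1% buys fewer false alarms at the price of missing the real effect considerably more often; which side of that trade matters more depends, as the quote says, on the costs of each kind of error.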

Good lessons indeed — underlining how important it is not to equate science with statistical calculation. All science entails human judgement, and using statistical models doesn’t relieve us of that necessity. Working with misspecified models, the scientific value of significance testing is actually zero — even though you’re making valid statistical inferences! Statistical models and concomitant significance tests are no substitutes for doing science.

In its standard form, a significance test is not the kind of ‘severe test’ that we are looking for in our search for being able to confirm or disconfirm empirical scientific hypotheses. This is problematic for many reasons, one being that there is a strong tendency to accept the null hypothesis since it can’t be rejected at the standard 5% significance level. In their standard form, significance tests bias against new hypotheses by making it hard to disconfirm the null hypothesis.

And as shown over and over again when it is applied, people have a tendency to read ‘not disconfirmed’ as ‘probably confirmed.’ Standard scientific methodology tells us that when there is only, say, a 10% probability that pure sampling error could account for the observed difference between the data and the null hypothesis, it would be more ‘reasonable’ to conclude that we have a case of disconfirmation. Especially if we perform many independent tests of our hypothesis and they all give about the same 10% result as our reported one, I guess most researchers would count the hypothesis as even more disconfirmed.

We should never forget that the underlying parameters we use when performing significance tests are *model constructions*. Our p-values mean next to nothing if the model is wrong. And most importantly — statistical significance tests DO NOT validate models!

In journal articles a typical regression equation will have an intercept and several explanatory variables. The regression output will usually include an F-test, with p – 1 degrees of freedom in the numerator and n – p in the denominator. The null hypothesis will not be stated. The missing null hypothesis is that all the coefficients vanish, except the intercept.

If F is significant, that is often thought to validate the model. Mistake. The F-test takes the model as given. Significance only means this:

*if* the model is right *and* the coefficients are 0, it is very unlikely to get such a big F-statistic. Logically, there are three possibilities on the table:

i) An unlikely event occurred.

ii) Or the model is right and some of the coefficients differ from 0.

iii) Or the model is wrong.

So?
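Freedman’s point can be illustrated with a simulation sketch (all numbers invented): data generated by a strongly nonlinear process and then fitted with a simple linear regression. The F-statistic comes out ‘highly significant’, yet the systematic pattern left in the residuals shows that option (iii) is what actually obtains.

```python
# Sketch with simulated data: a 'highly significant' F-statistic from a
# misspecified model. The true process is cubic; the model is linear.
import random

random.seed(3)
n = 100
xs = [random.uniform(0, 3) for _ in range(n)]
ys = [x ** 3 + random.gauss(0, 0.5) for x in xs]   # true relation: cubic

# Ordinary least-squares fit of the (wrong) linear model y = a + b*x:
mx, my = sum(xs) / n, sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
    / sum((x - mx) ** 2 for x in xs)
a = my - b * mx
fitted = [a + b * x for x in xs]

sse = sum((y - f) ** 2 for y, f in zip(ys, fitted))  # residual sum of squares
ssr = sum((f - my) ** 2 for f in fitted)             # explained sum of squares
F = (ssr / 1) / (sse / (n - 2))                      # huge, hence 'significant'

# Yet the residuals carry a clear systematic pattern: the linear fit
# overpredicts throughout the middle of the x-range.
mid = [y - f for x, y, f in zip(xs, ys, fitted) if 1 < x < 2]
mid_resid = sum(mid) / len(mid)
print(F, mid_resid)
```

The F-test never questioned the linear specification; it only compared it to a flat line, which is why a big F here rules out nothing except options (i) and (ii) *conditional on the model being right*.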

## Why p-values cannot be taken at face value

24 October, 2016 at 09:00 | Posted in Economics, Statistics & Econometrics

A researcher is interested in differences between Democrats and Republicans in how they perform in a short mathematics test when it is expressed in two different contexts, either involving health care or the military. The research hypothesis is that context matters, and one would expect Democrats to do better in the health-care context and Republicans in the military context … At this point there is a huge number of possible comparisons that can be performed — all consistent with the data. For example, the pattern could be found (with statistical significance) among men and not among women — explicable under the theory that men are more ideological than women. Or the pattern could be found among women but not among men — explicable under the theory that women are more sensitive to context, compared to men … A single overarching research hypothesis — in this case, the idea that issue context interacts with political partisanship to affect mathematical problem-solving skills — corresponds to many different possible choices of the decision variable.

At one level, these multiplicities are obvious. And it would take a highly unscrupulous researcher to perform test after test in a search for statistical significance … Given a particular data set, it is not so difficult to look at the data and construct completely reasonable rules for data exclusion, coding, and data analysis that can lead to statistical significance—thus, the researcher needs only perform one test, but that test is conditional on the data … A researcher when faced with multiple reasonable measures can reason (perhaps correctly) that the one that produces a significant result is more likely to be the least noisy measure, but then decide (incorrectly) to draw inferences based on that one only.
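The multiplicity point above can be sketched in a few lines (a hypothetical setup: a true null effect everywhere, and ten equally ‘reasonable’ subgroup comparisons available per study):

```python
# Sketch: the 'garden of forking paths'. Even with NO true effect anywhere,
# a researcher who can choose among 10 plausible subgroup comparisons will
# find at least one 'significant' result far more often than 5% of the time.
import random, math

random.seed(4)

def one_test_significant(n=50):
    # Two-group z-test (sigma = 1) under a true null effect.
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    z = (sum(a)/n - sum(b)/n) / math.sqrt(2/n)
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p < 0.05

studies = 1000
fishing = sum(any(one_test_significant() for _ in range(10))
              for _ in range(studies)) / studies
print(fishing)   # roughly 1 - 0.95**10, i.e. around 0.40 -- not 0.05
```

And note Gelman and Loken’s subtler point: the researcher need not actually run all ten tests; it is enough that the one test performed was chosen *after* seeing the data, which makes the nominal 5% just as misleading.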

## Mainstream economists dissing people that want to rethink economics

22 October, 2016 at 20:02 | Posted in Economics

There’s a lot of commenting on the blog now, after yours truly put up a post where Cambridge economist Pontus Rendahl in an interview compared heterodox economics to ‘creationism’ and ‘alternative medicine,’ and totally dissed students who want to see the economics curriculum moving in a more pluralist direction.

Sad to say, Rendahl is not the only mainstream economist having monumental problems when trying to argue with people challenging the ruling orthodoxy.

A couple of years ago Paul Krugman felt a similar urge to defend mainstream neoclassical economics against the critique from students asking for more relevance, realism and pluralism in the teaching of economics. According to Krugman, the students and people like yours truly are wrong in blaming mainstream economics for not being relevant and not being able to foresee crises. To Krugman there is nothing wrong with ‘standard theory’ and ‘economics textbooks.’ If only policy makers and economists stuck to ‘standard economic analysis’, everything would be just fine.

I’ll be dipped! If there’s anything the last couple of years have shown us, it is that economists have gone astray. Krugman’s ‘standard theory’ — mainstream neoclassical economics — has contributed to causing today’s economic crisis rather than to solving it. Reading Krugman, I guess a lot of the young economics students who today are looking for alternatives to mainstream neoclassical theory are deeply disappointed. Rightly so. But — although Krugman, especially on his blog, certainly tries to present himself as a kind of radical and anti-establishment economics guy — when it really counts, he shows what he is: a die-hard teflon-coated mainstream neoclassical economist.

Perhaps this becomes less perplexing when one considers what Krugman said in a speech (emphasis added) in 1996:

I like to think that I am more open-minded about alternative approaches to economics than most, but I am basically a maximization-and-equilibrium kind of guy. Indeed, I am quite fanatical about defending the relevance of standard economic models in many situations …

Personally, I consider myself a proud neoclassicist. By this I clearly don’t mean that I believe in perfect competition all the way. What I mean is that I prefer, when I can, to make sense of the world using models in which individuals maximize and the interaction of these individuals can be summarized by some concept of equilibrium … I have seen the propensity of those who try to do economics without those organizing devices to produce sheer nonsense when they imagine they are freeing themselves from some confining orthodoxy.

So now all young economics students who want to see a real change in economics and the way it’s taught — now you know where people like Rendahl and Krugman stand. If you really want something other than the same old mainstream neoclassical catechism, if you really don’t want to be force-fed with mainstream neoclassical theories and models, you have to look elsewhere.

## Econometrics and the bridge between model and reality

21 October, 2016 at 23:28 | Posted in Statistics & Econometrics | 1 Comment

Trygve Haavelmo, the “father” of modern probabilistic econometrics, wrote that he and other econometricians could not “build a complete bridge between our models and reality” by logical operations alone, but finally had to make “a non-logical jump” [‘Statistical testing of business-cycle theories,’ 1943:15]. A part of that jump consisted in that econometricians “like to believe … that the various a priori possible sequences would somehow cluster around some typical time shapes, which if we knew them, could be used for prediction” [1943:16]. But since we do not know the true distribution, one has to look for the mechanisms (processes) that “might rule the data” and that hopefully persist so that predictions may be made. Of possible hypotheses on different time sequences (“samples” in Haavelmo’s somewhat idiosyncratic vocabulary) most had to be ruled out a priori “by economic theory”, although “one shall always remain in doubt as to the possibility of some … outside hypothesis being the true one” [1943:18].

The explanations we can give of economic relations and structures based on econometric models are, according to Haavelmo, “not hidden truths to be discovered” but rather our own “artificial inventions”. Models are consequently perceived not as true representations of the Data Generating Process, but rather instrumentally conceived “as if”-constructs. Their “intrinsic closure” is realized by searching for parameters showing “a great degree of invariance” or relative autonomy and the “extrinsic closure” by hoping that the “practically decisive” explanatory variables are relatively few, so that one may proceed “as if … natural limitations of the number of relevant factors exist” [‘The probability approach in econometrics,’ 1944:29].

But why the “logically conceivable” really should turn out to be the case is difficult to see. At least if we are not satisfied by sheer hope. In real economies it is unlikely that we find many “autonomous” relations and events. And one could of course also raise the objection that invoking a probabilistic approach to econometrics presupposes, e.g., that we are able to describe the world in terms of risk rather than genuine uncertainty.

And that is exactly what Haavelmo [1944:48] does: “To make this a rational problem of statistical inference we have to start out by an axiom, postulating that every set of observable variables has associated with it one particular ‘true’, but unknown, probability law.”

But to use this “trick of our own” and just assign “a certain probability law to a system of observable variables” cannot – any more than hoping – build a firm bridge between model and reality. Treating phenomena *as if* they essentially were stochastic processes is not the same as showing that they essentially *are* stochastic processes. As Hicks so neatly puts it in *Causality in Economics* [1979:120-21]:

Things become more difficult when we turn to time-series … The econometrist, who works in that field, may claim that he is not treading on very shaky ground. But if one asks him what he is really doing, he will not find it easy, even here, to give a convincing answer … [H]e must be treating the observations known to him as a sample of a larger “population”; but what population? … I am bold enough to conclude, from these considerations that the usefulness of “statistical” or “stochastic” methods in economics is a good deal less than is now conventionally supposed. We have no business to turn to them automatically; we should always ask ourselves, before we apply them, whether they are appropriate to the problem in hand.

And as if this wasn’t enough, one could also seriously wonder what kind of “populations” these statistical and econometric models ultimately are based on. Why should we as social scientists – and not as pure mathematicians working with formal-axiomatic systems without the urge to confront our models with real target systems – unquestioningly accept Haavelmo’s “infinite population”, Fisher’s “hypothetical infinite population”, von Mises’s “collective” or Gibbs’s “ensemble”?

Of course one could treat our observational or experimental data as random samples from real populations. I have no problem with that. But modern (probabilistic) econometrics does not content itself with that kind of population. Instead it creates imaginary populations of “parallel universes” and assumes that our data are random samples from that kind of population.

But this is actually nothing else but handwaving! And it is inadequate for real science. As David Freedman writes in *Statistical Models and Causal Inference* [2010:105-111]:

With this approach, the investigator does not explicitly define a population that could in principle be studied, with unlimited resources of time and money. The investigator merely assumes that such a population exists in some ill-defined sense. And there is a further assumption, that the data set being analyzed can be treated as if it were based on a random sample from the assumed population. These are convenient fictions … Nevertheless, reliance on imaginary populations is widespread. Indeed regression models are commonly used to analyze convenience samples … The rhetoric of imaginary populations is seductive because it seems to free the investigator from the necessity of understanding how data were generated.

Econometricians should know better than to treat random variables, probabilities and expected values as anything other than things that, strictly speaking, pertain only to statistical models. If they want us to take the leap of faith from mathematics into the empirical world in applying the theory, they have to really *argue* and *justify* this leap by showing that those neat mathematical assumptions (that, to make things worse, often are left implicit, as e.g. independence and additivity) do not collide with the ugly reality. The set of mathematical assumptions is no validation *in itself* of the adequacy of the application.

A crucial ingredient of any economic theory that wants to use probabilistic models should be a convincing argument for the view that “there can be no harm in considering economic variables as stochastic variables” [Haavelmo 1943:13]. In most cases no such arguments are given.

Of course you are entitled — like Haavelmo and his modern probabilistic followers — to express a hope “at a metaphysical level” that there are invariant features of reality to uncover and that also show up at the empirical level of observations as some kind of regularities.

But is it a *justifiable* hope? I have serious doubts. The kind of regularities you may hope to find in society are not to be found in the domain of surface phenomena, but rather at the level of causal mechanisms, powers and capacities. Persistence and generality have to be looked for at an underlying deep level. Most econometricians do not want to visit that playground. They are content with setting up theoretical models that give us correlations and possibly “mimic” existing causal properties.

We have to accept that reality has no “correct” representation in an economic or econometric model. There is no such thing as a “true” model that can capture an open, complex and contextual system in a set of equations with parameters stable over space and time, and exhibiting invariant regularities. To just “believe”, “hope” or “assume” that such a model *possibly* could exist is not enough. It has to be justified in relation to the ontological conditions of social reality.

In contrast to those who want to give up on (fallible, transient and transformable) “truth” as a relation between theory and reality and content themselves with “truth” as a relation between a model and a probability distribution, I think it is better to really scrutinize if this latter attitude is feasible. To abandon the quest for truth and replace it with sheer positivism would indeed be a sad fate of econometrics.

## On the proper use of mathematics in economics

21 October, 2016 at 10:01 | Posted in Economics | 1 Comment

One must, of course, beware of expecting from this method more than it can give. Out of the crucible of calculation comes not an atom more truth than was put in. The assumptions being hypothetical, the results obviously cannot claim more than a very limited validity. The mathematical expression ought to facilitate the argument, clarify the results, and so guard against possible faults of reasoning — that is all.

It is, by the way, evident that the economic aspects must be the determining ones everywhere: economic truth must never be sacrificed to the desire for mathematical elegance.

## Econometrics and the axiom of correct specification

20 October, 2016 at 17:22 | Posted in Statistics & Econometrics | 4 Comments

Most work in econometrics and regression analysis is — still — done on the assumption that the researcher has a theoretical model that is ‘true.’ Based on this belief of having a correct specification for an econometric model or running a regression, one proceeds as if the only problems remaining to be solved have to do with measurement and observation.

When things sound too good to be true, they usually aren’t. And that goes for econometric wet dreams too. The snag is, of course, that there is precious little to support the perfect specification assumption. Looking around in social science and economics we don’t find a single regression or econometric model that lives up to the standards set by the ‘true’ theoretical model — and there is precious little that gives us reason to believe things will be different in the future.

To think that we are able to construct a model where all relevant variables are included and correctly specify the functional relationships that exist between them is not only a belief without support, but a belief *impossible* to support.

The theories we work with when building our econometric regression models are insufficient. No matter what we study, there are always some variables missing, and we don’t know the correct way to functionally specify the relationships between the variables.

*Every* regression model constructed is misspecified. There is always an endless list of possible variables to include, and endless possible ways to specify the relationships between them. So every applied econometrician comes up with his own specification and ‘parameter’ estimates. The econometric Holy Grail of consistent and stable parameter-values is nothing but a dream.
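How badly specification choice can bite is easy to show with a minimal sketch (entirely illustrative — the numbers and variable names are hypothetical). Here the 'true' model is y = 1·x + 1·z + noise, with x and z correlated; regressing y on x alone shifts the estimated 'parameter' well away from its true value:

```python
# Sketch: omitted-variable bias. Leaving out one relevant, correlated
# variable biases the estimated coefficient. All numbers illustrative.
import random

random.seed(2)
n = 10_000
x = [random.gauss(0, 1) for _ in range(n)]
z = [0.8 * xi + 0.6 * random.gauss(0, 1) for xi in x]   # z correlated with x
y = [xi + zi + random.gauss(0, 1) for xi, zi in zip(x, z)]

def ols_slope(xs, ys):
    # simple one-regressor OLS slope
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    var = sum((a - mx) ** 2 for a in xs)
    return cov / var

slope_short = ols_slope(x, y)   # the 'short' regression omitting z
print(f"true coefficient on x: 1.0, estimate omitting z: {slope_short:.2f}")
# Bias = beta_z * cov(x, z)/var(x) = 1 * 0.8, so the estimate lands near 1.8
```

Since some relevant variable is always missing, every applied specification carries some version of this bias — which is the point of the paragraph above.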

In order to draw inferences from data as described by econometric texts, it is necessary to make whimsical assumptions. The professional audience consequently and properly withholds belief until an inference is shown to be adequately insensitive to the choice of assumptions. The haphazard way we individually and collectively study the fragility of inferences leaves most of us unconvinced that any inference is believable. If we are to make effective use of our scarce data resource, it is therefore important that we study fragility in a much more systematic way. If it turns out that almost all inferences from economic data are fragile, I suppose we shall have to revert to our old methods …

A rigorous application of econometric methods in economics really presupposes that the phenomena of our real world economies are ruled by stable causal relations between variables. Parameter-values estimated in specific spatio-temporal contexts are *presupposed* to be exportable to totally different contexts. To warrant this assumption one, however, has to convincingly establish that the targeted acting causes are stable and invariant so that they maintain their parametric status after the bridging. The endemic lack of predictive success of the econometric project indicates that this hope of finding fixed parameters is a hope for which there really is no other ground than hope itself.
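The exportability problem can also be sketched numerically (again, a hypothetical illustration, not an estimate from any real data set). Suppose the x–y relation flips sign between two spatio-temporal regimes; a regression pooling the two contexts reports a stable-looking coefficient that is true of neither:

```python
# Sketch: a 'parameter' estimated in one context need not travel to
# another. The x-y relation flips sign between two regimes; the pooled
# estimate sits near zero and misleads for both. Numbers illustrative.
import random

random.seed(3)

def simulate(slope, n=5000):
    xs = [random.gauss(0, 1) for _ in range(n)]
    ys = [slope * xi + random.gauss(0, 0.5) for xi in xs]
    return xs, ys

def ols_slope(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
            / sum((a - mx) ** 2 for a in xs))

x1, y1 = simulate(+1.0)   # regime A: positive relation
x2, y2 = simulate(-1.0)   # regime B: negative relation
pooled = ols_slope(x1 + x2, y1 + y2)
print(f"regime A: {ols_slope(x1, y1):+.2f}, "
      f"regime B: {ols_slope(x2, y2):+.2f}, pooled: {pooled:+.2f}")
# The pooled 'parameter' is near zero -- a value true of neither regime.
```

Unless the targeted causes really are stable and invariant across the bridging, the exported parameter is an artefact of the pooling.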

That models should correspond to reality is, after all, a useful but not totally straightforward idea – with some history to it. Developing appropriate models is a serious problem in statistics; testing the connection to the phenomena is even more serious …

In our days, serious arguments have been made from data. Beautiful, delicate theorems have been proved, although the connection with data analysis often remains to be established. And an enormous amount of fiction has been produced, masquerading as rigorous science.

The theoretical conditions that have to be fulfilled for regression analysis and econometrics to really work are nowhere even closely met in reality. Making outlandish statistical assumptions does not provide a solid ground for doing relevant social science and economics. Although regression analysis and econometrics have become the most used quantitative methods in social sciences and economics today, it’s still a fact that the inferences made from them are invalid.

Regression models have some serious weaknesses. Their ease of estimation tends to suppress attention to features of the data that matching techniques force researchers to consider, such as the potential heterogeneity of the causal effect and the alternative distributions of covariates across those exposed to different levels of the cause. Moreover, the traditional exogeneity assumption of regression … often befuddles applied researchers … As a result, regression practitioners can too easily accept their hope that the specification of plausible control variables generates an as-if randomized experiment.

Econometrics — and regression analysis — is basically a deductive method. Given the assumptions (such as manipulability, transitivity, separability, additivity, linearity, etc) it delivers deductive inferences. The problem, of course, is that we will never completely know when the assumptions are right. Conclusions can only be as certain as their premises — and that also applies to econometrics and regression analysis.

## DSGE modeling – a statistical critique

19 October, 2016 at 16:36 | Posted in Statistics & Econometrics | Comments Off on DSGE modeling – a statistical critique

As Paul Romer’s recent assault on ‘post-real’ macroeconomics showed, yours truly is not the only one who questions the validity and relevance of DSGE modeling. After having read one of my posts on the issue, eminent statistician Aris Spanos kindly sent me a working paper in which he discusses the validity of DSGE models and shows that the calibrated structural models are often at odds with observed data, and that many of the ‘deep parameters’ used are not even identifiable.

Interesting reading. And confirming, once again, that DSGE models do not marry particularly well with real world data. This should come as no surprise — microfounded general equilibrium models with inter-temporally optimizing representative agents seldom do.

This paper brings out several weaknesses of the traditional DSGE modeling, including statistical misspecification, non-identification of deep parameters, substantive inadequacy, weak forecasting performance and potentially misleading policy analysis. It is argued that most of these weaknesses stem from failing to distinguish between statistical and substantive adequacy and secure the former before assessing the latter. The paper untangles the statistical from the substantive premises of inference with a view to delineate the above mentioned problems and suggest solutions. The critical appraisal is based on the Smets and Wouters (2007) DSGE model using US quarterly data. It is shown that this model is statistically misspecified …

Lucas’s (1980) argument: “Any model that is well enough articulated to give clear answers to the questions we put to it will necessarily be artificial, abstract, patently ‘unreal’” (p. 696), is misleading because it blurs the distinction between substantive and statistical adequacy. There is nothing wrong with constructing a simple, abstract and idealised theory model aiming to capture key features of the phenomenon of interest, with a view to shed light on (understand, explain, forecast) economic phenomena of interest, as well as gain insight concerning alternative policies. Unreliability of inference problems arise when the statistical model implicitly specified by the theory model is statistically misspecified, and no attempt is made to reliably assess whether the theory model does, indeed, capture the key features of the phenomenon of interest; see Spanos (2009a). As argued by Hendry (2009):

“This implication is not a tract for mindless modeling of data in the absence of economic analysis, but instead suggests formulating more general initial models that embed the available economic theory as a special case, consistent with our knowledge of the institutional framework, historical record, and the data properties … Applied econometrics cannot be conducted without an economic theoretical framework to guide its endeavours and help interpret its findings. Nevertheless, since economic theory is not complete, correct, and immutable, and never will be, one also cannot justify an insistence on deriving empirical models from theory alone.” (p. 56-7)

Statistical misspecification is not the inevitable result of abstraction and simplification, but it stems from imposing invalid probabilistic assumptions on the data.

## Rational choice theory …

19 October, 2016 at 08:59 | Posted in Economics | 3 Comments

In economics it is assumed that people make rational choices

## Econometric objectivity …

18 October, 2016 at 10:16 | Posted in Statistics & Econometrics | 2 Comments

It is clearly the case that experienced modellers could easily come up with significantly different models based on the same set of data thus undermining claims to researcher-independent objectivity. This has been demonstrated empirically by Magnus and Morgan (1999) who conducted an experiment in which an apprentice had to try to replicate the analysis of a dataset that might have been carried out by three different experts (Leamer, Sims, and Hendry) following their published guidance. In all cases the results were different from each other, and different from that which would have been produced by the expert, thus demonstrating the importance of tacit knowledge in statistical analysis.

Magnus and Morgan conducted a further experiment which involved eight expert teams, from different universities, analysing the same sets of data each using their own particular methodology. The data concerned the demand for food in the US and in the Netherlands and was based on a classic study by Tobin (1950) augmented with more recent data. The teams were asked to estimate the income elasticity of food demand and to forecast per capita food consumption. In terms of elasticities, the lowest estimates were around 0.38 whilst the highest were around 0.74 – clearly vastly different, especially when remembering that these were based on the same sets of data. The forecasts were perhaps even more extreme – from a base of around 4000 in 1989 the lowest forecast for the year 2000 was 4130 while the highest was nearly 18000!

## Sweden’s growing housing bubble

16 October, 2016 at 16:18 | Posted in Economics | 5 Comments

House prices are increasing fast in the EU, and more so in Sweden than in any other member state, as shown in the Eurostat graph below of the percentage increase in the annually deflated house price index by member state in 2015:

Sweden’s house price boom started in the mid-1990s, and looking at the development of real house prices during the last three decades there are reasons to be deeply worried. The indebtedness of the Swedish household sector has also risen to alarmingly high levels:

Yours truly has been trying to argue with ‘very serious people’ that it’s really high time to ‘take away the punch bowl.’ Mostly I have felt like the voice of one calling in the desert.

Where do housing bubbles come from? There are of course many different explanations, but one of the fundamental mechanisms at work is that people expect house prices to increase, which makes people willing to keep on buying houses at steadily increasing prices. It’s this kind of self-generating cumulative process à la Wicksell-Myrdal that is the core of the housing bubble. Unlike the usual commodities markets where demand curves usually point downwards, on asset markets they often point upwards, and therefore give rise to this kind of instability. And the greater the leverage, the greater the increase in prices.
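The cumulative mechanism can be sketched in a few lines (a toy illustration only — all numbers are made up, and the feedback coefficient merely stands in for leverage and extrapolative expectations). Buyers extrapolate the last price change, so rising prices feed further rises:

```python
# Sketch of a self-reinforcing price process: demand responds to the
# expected (extrapolated) price change. 'feedback' stands in for
# leverage/extrapolation strength. All numbers are illustrative.
finals = {}
for feedback in (0.5, 1.2):           # weak vs strong feedback
    p = [100.0, 101.0]                # a small initial uptick
    for _ in range(20):
        expected_change = p[-1] - p[-2]   # buyers extrapolate the last move
        p.append(p[-1] + feedback * expected_change)
    finals[feedback] = p[-1]
    print(f"feedback {feedback}: price after 20 periods = {p[-1]:.1f}")
# With feedback < 1 the initial rise dies out (price levels off near 102);
# with feedback > 1 the rise feeds on itself and the price explodes.
```

The qualitative lesson matches the paragraph above: below a critical feedback strength the uptick fizzles; above it, the process is self-generating — and higher leverage pushes the feedback coefficient up.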

What is especially worrying is that although the aggregate net asset position of the Swedish households is still on the solid side, an increasing proportion of those assets is illiquid. When the inevitable drop in house prices hits the banking sector and the rest of the economy, the consequences will be enormous. It hurts when bubbles burst …
