Economics — non-ideological and value-free? I’ll be dipped!

27 October, 2016 at 21:29 | Posted in Economics | 2 Comments

I’ve subsequently stayed away from the minimum wage literature for a number of reasons. First, it cost me a lot of friends. People that I had known for many years, for instance, some of the ones I met at my first job at the University of Chicago, became very angry or disappointed. They thought that in publishing our work we were being traitors to the cause of economics as a whole.

David Card

Back in 1992, New Jersey raised its minimum wage by 18 per cent while its neighbouring state, Pennsylvania, left its minimum wage unchanged. Employment in New Jersey should — according to mainstream economics textbooks — have fallen relative to Pennsylvania. However, when economists David Card and Alan Krueger gathered information on fast food restaurants in the two states, it turned out that employment had actually increased in New Jersey relative to Pennsylvania. Counter to mainstream demand theory, this was an anomalous case of an upward-sloping demand curve.
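
Card and Krueger’s design is a simple difference-in-differences comparison: the change in fast-food employment in the treated state (New Jersey) minus the change in the control state (Pennsylvania). A minimal sketch of that arithmetic, using made-up numbers rather than their actual survey data:

```python
# Difference-in-differences sketch of the Card-Krueger design.
# The employment figures below are made up for illustration;
# they are NOT Card and Krueger's actual 1992 survey data.

def diff_in_diff(treated_before, treated_after,
                 control_before, control_after):
    """Change in the treated state minus change in the control state."""
    return (treated_after - treated_before) - (control_after - control_before)

# Hypothetical full-time-equivalent employment per fast-food restaurant:
nj_before, nj_after = 20.0, 20.6   # New Jersey: minimum wage raised 18%
pa_before, pa_after = 23.0, 21.5   # Pennsylvania: minimum wage unchanged

effect = diff_in_diff(nj_before, nj_after, pa_before, pa_after)
print(f"Estimated employment effect: {effect:+.1f} FTE per restaurant")
# A positive estimate is the anomaly: the textbook prediction is that
# employment in NJ should have fallen relative to PA.
```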

Lo and behold!

But of course — when facts and theory don’t agree, it’s the facts that have to be wrong …

The inverse relationship between quantity demanded and price is the core proposition in economic science, which embodies the pre-supposition that human choice behavior is sufficiently rational to allow predictions to be made. Just as no physicist would claim that “water runs uphill,” no self-respecting economist would claim that increases in the minimum wage increase employment. Such a claim, if seriously advanced, becomes equivalent to a denial that there is even minimal scientific content in economics, and that, in consequence, economists can do nothing but write as advocates for ideological interests. Fortunately, only a handful of economists are willing to throw over the teaching of two centuries; we have not yet become a bevy of camp-following whores.

James M. Buchanan in Wall Street Journal (April 25, 1996)

What is truth in economics?

27 October, 2016 at 15:11 | Posted in Economics | 3 Comments

In my view, scientific theories are not to be considered ‘true’ or ‘false.’ In constructing such a theory, we are not trying to get at the truth, or even to approximate to it: rather, we are trying to organize our thoughts and observations in a useful manner.

Robert Aumann

What a handy view of science.

How reassuring for all of you who have always thought that believing in the tooth fairy makes you understand what happens to kids’ teeth. Now a ‘Nobel prize’ winning economist tells you that whether or not there are such things as tooth fairies doesn’t really matter. Scientific theories are not about what is true or false, but about whether ‘they enable us to organize and understand our observations’!

What Aumann and other defenders of scientific storytelling ‘forget’ is that potential explanatory power achieved in thought-experimental models is not enough for attaining real explanations. Model explanations are at best conjectures, and whether or not they explain things in the real world is something we have to test. Simply believing that thought experiments make you understand or explain things better is not enough. Without a warranted export certificate to the real world, model explanations are pretty worthless. Proving things in models is not enough.

Truth ought to be as important a concept in economics as it is in real science.

What it takes to make economics a real science

26 October, 2016 at 09:23 | Posted in Economics | 1 Comment

What is science? One brief definition runs: “A systematic knowledge of the physical or material world.” Most definitions emphasize the two elements in this definition: (1) “systematic knowledge” about (2) the real world. Without pushing this definitional question to its metaphysical limits, I merely want to suggest that if economics is to be a science, it must not only develop analytical tools but must also apply them to a world that is now observable or that can be made observable through improved methods of observation and measurement. Or in the words of the Hungarian mathematical economist Janos Kornai, “In the real sciences, the criterion is not whether the proposition is logically true and tautologically deducible from earlier assumptions. The criterion of ‘truth’ is, whether or not the proposition corresponds to reality” …


One of our most distinguished historians of economic thought, George Stigler, has stated that: “The dominant influence upon the working range of economic theorists is the set of internal values and pressures of the discipline. The subjects of study are posed by the unfolding course of scientific developments.” He goes on to add: “This is not to say that the environment is without influence …” But, he continues, “whether a fact or development is significant depends primarily on its relevance to current economic theory.” What a curious relating of rigor to relevance! Whether the real world matters depends presumably on “its relevance to current economic theory.” Many if not most of today’s economic theorists seem to agree with this ordering of priorities …

Today, rigor competes with relevance in macroeconomic and monetary theory, and in some lines of development macro and monetary theorists, like many of their colleagues in micro theory, seem to consider relevance to be more or less irrelevant … The theoretical analysis in much of this literature rests on assumptions that also fly in the face of the facts … Another related recent development in which theory proceeds with impeccable logic from unrealistic assumptions to conclusions that contradict the historical record, is the recent work on rational expectations …

I have scolded economists for what I think are the sins that too many of them commit, and I have tried to point the way to at least partial redemption. This road to salvation will not be an easy one for those who have been seduced by the siren of mathematical elegance or those who all too often seek to test unrealistic models without much regard for the quality or relevance of the data they feed into their equations. But let us all continue to worship at the altar of science. I ask only that our credo be: “relevance with as much rigor as possible,” and not “rigor regardless of relevance.” And let us not be afraid to ask — and to try to answer the really big questions.

Robert A. Gordon

Statistical significance tests do not validate models

25 October, 2016 at 00:20 | Posted in Economics, Statistics & Econometrics | 1 Comment

The word ‘significant’ has a special place in the world of statistics, thanks to a test that researchers use to avoid jumping to conclusions from too little data. Suppose a researcher has what looks like an exciting result: She gave 30 kids a new kind of lunch, and they all got better grades than a control group that didn’t get the lunch. Before concluding that the lunch helped, she must ask the question: If it actually had no effect, how likely would I be to get this result? If that probability, or p-value, is below a certain threshold — typically set at 5 percent — the result is deemed ‘statistically significant.’

Clearly, this statistical significance is not the same as real-world significance — all it offers is an indication of whether you’re seeing an effect where there is none. Even this narrow technical meaning, though, depends on where you set the threshold at which you are willing to discard the ‘null hypothesis’ — that is, in the above case, the possibility that there is no effect. I would argue that there’s no good reason to always set it at 5 percent. Rather, it should depend on what is being studied, and on the risks involved in acting — or failing to act — on the conclusions …

This example illustrates three lessons. First, researchers shouldn’t blindly follow convention in picking an appropriate p-value cutoff. Second, in order to choose the right p-value threshold, they need to know how the threshold affects the probability of a Type II error. Finally, they should consider, as best they can, the costs associated with the two kinds of errors.

Statistics is a powerful tool. But, like any powerful tool, it can’t be used the same way in all situations.

Narayana Kocherlakota
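
Kocherlakota’s second lesson, that choosing a p-value threshold requires knowing how it affects the Type II error rate, is easy to make concrete. Below is a generic simulation sketch: a two-sample t-test with an assumed true effect, where the sample size and effect size are illustrative choices, not figures from his article.

```python
# Sketch: how the significance threshold trades off Type I against
# Type II error. Generic two-sample setting with an assumed true
# effect; none of the numbers come from Kocherlakota's article.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n, true_effect, n_sims = 30, 0.5, 5_000   # assumed sample and effect sizes

pvals = np.empty(n_sims)
for i in range(n_sims):
    treated = rng.normal(true_effect, 1.0, n)   # lunch group
    control = rng.normal(0.0, 1.0, n)           # no-lunch group
    pvals[i] = ttest_ind(treated, control).pvalue

for alpha in (0.01, 0.05, 0.10):
    power = np.mean(pvals < alpha)              # P(reject | effect is real)
    print(f"alpha = {alpha:.2f}: Type II error rate = {1 - power:.2f}")
# Lowering alpha cuts false positives but raises the chance of missing
# a real effect: the cost trade-off the quoted lessons point to.
```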

Good lessons indeed — underlining how important it is not to equate science with statistical calculation. All science entails human judgement, and using statistical models doesn’t relieve us of that necessity. When working with misspecified models, the scientific value of significance testing is actually zero — even though you’re making valid statistical inferences! Statistical models and concomitant significance tests are no substitute for doing science.

In its standard form, a significance test is not the kind of ‘severe test’ we are looking for when trying to confirm or disconfirm empirical scientific hypotheses. This is problematic for many reasons, one being that there is a strong tendency to accept the null hypothesis as long as it can’t be rejected at the standard 5% significance level. In their standard form, significance tests bias against new hypotheses by making it hard to disconfirm the null hypothesis.

And, as shown over and over again when such tests are applied, people have a tendency to read ‘not disconfirmed’ as ‘probably confirmed.’ Standard scientific methodology tells us that when there is only, say, a 10% probability that pure sampling error could account for the observed difference between the data and the null hypothesis, it would be more ‘reasonable’ to conclude that we have a case of disconfirmation. Especially if we perform many independent tests of our hypothesis and they all give roughly the same 10% result as our reported one, I guess most researchers would count the hypothesis as even more disconfirmed.

We should never forget that the underlying parameters we use when performing significance tests are model constructions. Our p-values mean next to nothing if the model is wrong. And most importantly — statistical significance tests DO NOT validate models!

In journal articles a typical regression equation will have an intercept and several explanatory variables. The regression output will usually include an F-test, with p − 1 degrees of freedom in the numerator and n − p in the denominator. The null hypothesis will not be stated. The missing null hypothesis is that all the coefficients vanish, except the intercept.

If F is significant, that is often thought to validate the model. Mistake. The F-test takes the model as given. Significance only means this: if the model is right and the coefficients are 0, it is very unlikely to get such a big F-statistic. Logically, there are three possibilities on the table:
i) An unlikely event occurred.
ii) Or the model is right and some of the coefficients differ from 0.
iii) Or the model is wrong.

David Freedman
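
Freedman’s point is easy to demonstrate: generate data from a nonlinear process, fit a straight line, and the F-test will still come out wildly ‘significant.’ A minimal simulation sketch, not an example from Freedman himself:

```python
# Sketch: a 'significant' F-statistic from a misspecified model.
# Illustrative simulation; not an example from Freedman's book.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = rng.uniform(0, 3, 200)
y = x**2 + rng.normal(0, 0.5, 200)   # true relation is quadratic

X = sm.add_constant(x)               # fitted model: a straight line
fit = sm.OLS(y, X).fit()
print(f"F-test p-value: {fit.f_pvalue:.1e}")   # astronomically small
print(f"R-squared:      {fit.rsquared:.2f}")
# The F-test is 'significant', yet the model is simply wrong
# (possibility iii on the list above): the test takes the linear
# model as given and cannot certify the model itself.
```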

Why p-values cannot be taken at face value

24 October, 2016 at 09:00 | Posted in Economics, Statistics & Econometrics | 7 Comments

A researcher is interested in differences between Democrats and Republicans in how they perform in a short mathematics test when it is expressed in two different contexts, either involving health care or the military. The research hypothesis is that context matters, and one would expect Democrats to do better in the health-care context and Republicans in the military context … At this point there is a huge number of possible comparisons that can be performed—all consistent with the data. For example, the pattern could be found (with statistical significance) among men and not among women—explicable under the theory that men are more ideological than women. Or the pattern could be found among women but not among men—explicable under the theory that women are more sensitive to context, compared to men … A single overarching research hypothesis—in this case, the idea that issue context interacts with political partisanship to affect mathematical problem-solving skills—corresponds to many different possible choices of the decision variable.

At one level, these multiplicities are obvious. And it would take a highly unscrupulous researcher to perform test after test in a search for statistical significance … Given a particular data set, it is not so difficult to look at the data and construct completely reasonable rules for data exclusion, coding, and data analysis that can lead to statistical significance—thus, the researcher needs only perform one test, but that test is conditional on the data … A researcher when faced with multiple reasonable measures can reason (perhaps correctly) that the one that produces a significant result is more likely to be the least noisy measure, but then decide (incorrectly) to draw inferences based on that one only.

Andrew Gelman & Eric Loken
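
How much these ‘many possible comparisons’ inflate false positives is easy to quantify. The sketch below is a generic simulation of eight subgroup comparisons run on pure noise; the setup is illustrative and not taken from Gelman and Loken’s paper:

```python
# Sketch: many 'reasonable' subgroup comparisons on pure noise.
# Generic simulation; the setup is illustrative, not Gelman & Loken's.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(2)
n_sims, n_per_cell, n_subgroups = 2_000, 25, 8
hits = 0
for _ in range(n_sims):
    significant = False
    for _ in range(n_subgroups):           # men/women, young/old, ...
        a = rng.normal(0, 1, n_per_cell)   # Democrats; the null is true
        b = rng.normal(0, 1, n_per_cell)   # Republicans; the null is true
        if ttest_ind(a, b).pvalue < 0.05:
            significant = True
    hits += significant

print(f"P(at least one 'significant' comparison) = {hits / n_sims:.2f}")
# With 8 independent looks, roughly 1 - 0.95**8 = 0.34 of datasets
# yield a publishable 'effect' even though no effect exists at all.
```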

Mainstream economists dissing people that want to rethink economics

22 October, 2016 at 20:02 | Posted in Economics | 9 Comments

There’s a lot of commenting on the blog now, after yours truly put up a post where Cambridge economist Pontus Rendahl, in an interview, compared heterodox economics to ‘creationism’ and ‘alternative medicine,’ and totally dissed students who want to see the economics curriculum move in a more pluralist direction.
Sad to say, Rendahl is not the only mainstream economist having monumental problems when trying to argue with people challenging the ruling orthodoxy.

A couple of years ago Paul Krugman felt a similar urge to defend mainstream neoclassical economics against the critique from students asking for more relevance, realism and pluralism in the teaching of economics. According to Krugman, the students and people like yours truly are wrong in blaming mainstream economics for not being relevant and not being able to foresee crises. To Krugman there is nothing wrong with ‘standard theory’ and ‘economics textbooks.’ If only policy makers and economists stuck to ‘standard economic analysis,’ everything would be just fine.

I’ll be dipped! If there’s anything the last couple of years have shown us, it is that economists have gone astray. Krugman’s ‘standard theory’ — mainstream neoclassical economics — has contributed to causing today’s economic crisis rather than to solving it. Reading Krugman, I guess a lot of the young economics students who today are looking for alternatives to mainstream neoclassical theory are deeply disappointed. Rightly so. But — although Krugman, especially on his blog, certainly tries to present himself as a kind of radical, anti-establishment economics guy — when it really counts, he shows what he is: a die-hard, teflon-coated mainstream neoclassical economist.

Perhaps this becomes less perplexing when one considers what Krugman said in a speech in 1996 (emphasis added):

I like to think that I am more open-minded about alternative approaches to economics than most, but I am basically a maximization-and-equilibrium kind of guy. Indeed, I am quite fanatical about defending the relevance of standard economic models in many situations …

Personally, I consider myself a proud neoclassicist. By this I clearly don’t mean that I believe in perfect competition all the way. What I mean is that I prefer, when I can, to make sense of the world using models in which individuals maximize and the interaction of these individuals can be summarized by some concept of equilibrium … I have seen the propensity of those who try to do economics without those organizing devices to produce sheer nonsense when they imagine they are freeing themselves from some confining orthodoxy.

So, to all young economics students who want to see a real change in economics and the way it’s taught: now you know where people like Rendahl and Krugman stand. If you really want something other than the same old mainstream neoclassical catechism, if you really don’t want to be force-fed mainstream neoclassical theories and models, you have to look elsewhere.

On the proper use of mathematics in economics

21 October, 2016 at 10:01 | Posted in Economics | 1 Comment

One must, of course, beware of expecting from this method more than it can give. Out of the crucible of calculation comes not an atom more truth than was put in. The assumptions being hypothetical, the results obviously cannot claim more than a very limited validity. The mathematical expression ought to facilitate the argument, clarify the results, and so guard against possible faults of reasoning — that is all.

It is, by the way, evident that the economic aspects must be the determining ones everywhere: economic truth must never be sacrificed to the desire for mathematical elegance.

Knut Wicksell

Rational choice theory …

19 October, 2016 at 08:59 | Posted in Economics | 3 Comments

In economics it is assumed that people make rational choices

Sweden’s growing housing bubble

16 October, 2016 at 16:18 | Posted in Economics | 5 Comments

House prices are increasing fast in the EU, and more so in Sweden than in any other member state, as shown in the Eurostat graph below, which gives the percentage change in the annually deflated house price index by member state in 2015:

[Eurostat graph: annual percentage change in deflated house price index, by EU member state, 2015]
Sweden’s house price boom started in the mid-1990s, and looking at the development of real house prices during the last three decades there are reasons to be deeply worried. The indebtedness of the Swedish household sector has also risen to alarmingly high levels:

[Graph: Swedish household sector indebtedness]
Yours truly has been trying to argue with ‘very serious people’ that it’s really high time to ‘take away the punch bowl.’ Mostly I have felt like the voice of one calling in the desert.


Where do housing bubbles come from? There are of course many different explanations, but one of the fundamental mechanisms at work is that people expect house prices to increase, which makes people willing to keep on buying houses at steadily increasing prices. It’s this kind of self-generating cumulative process à la Wicksell-Myrdal that is the core of the housing bubble. Unlike the usual commodity markets, where demand curves normally slope downwards, demand curves on asset markets often slope upwards, giving rise to this kind of instability. And the greater the leverage, the greater the increase in prices.
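
The cumulative process can be caricatured in a few lines of code: expected appreciation drives demand, realized price rises feed back into expectations, and leverage amplifies each round. This is a toy model with made-up parameters, meant only to display the feedback mechanism:

```python
# Toy Wicksell-Myrdal-style cumulative process for house prices.
# All parameters are made up; this illustrates the feedback loop only.
def bubble_path(periods=20, feedback=0.8, leverage=2.0, p0=100.0):
    prices, expected_gain = [p0], 0.02          # initial expected appreciation
    for _ in range(periods):
        # Expected gains raise demand; leverage amplifies the price effect.
        growth = leverage * feedback * expected_gain
        prices.append(prices[-1] * (1 + growth))
        # Buyers extrapolate the latest realized price change (adaptive).
        expected_gain = prices[-1] / prices[-2] - 1
    return prices

path = bubble_path()
print([round(p) for p in path[:8]])   # e.g. 100, 103, 108, 117, 133, ...
# Whenever feedback * leverage > 1, each realized gain breeds a larger
# expected gain, so price growth compounds, until expectations reverse.
```

The same loop run with reversed expectations describes the bursting phase: falling prices feed expectations of further falls.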

What is especially worrying is that although the aggregate net asset position of Swedish households is still on the solid side, an increasing proportion of those assets is illiquid. When the inevitable drop in house prices hits the banking sector and the rest of the economy, the consequences will be enormous. It hurts when bubbles burst …

Microfoundational angels

14 October, 2016 at 17:15 | Posted in Economics | Leave a comment

Amongst the several problems/disadvantages of this current consensus is that, in order to make a rational expectations, micro-founded model mathematically and analytically tractable, it has been necessary in general to impose some (absurdly) simplifying assumptions, notably the existence of representative agents, who never default. This latter (nonsensical) assumption goes under the jargon term of the transversality condition.

This makes all agents perfectly creditworthy. Over any horizon there is only one interest rate facing all agents, i.e. no risk premia. All transactions can be undertaken in capital markets; there is no role for banks. Since all IOUs are perfectly creditworthy, there is no need for money. There are no credit constraints. Everyone is angelic; there is no fraud; and this is supposed to be properly micro-founded!

Charles Goodhart
