Brad DeLong is wrong on realism and inference to the best explanation

31 August, 2015 at 13:43 | Posted in Theory of Science & Methodology | 4 Comments

Brad DeLong has a new post up where he is critical of scientific realism and inference to the best explanation:

Daniel Little: The Case for Realism in the Social Realm:

“The case for scientific realism in the case of physics is a strong one…

The theories… postulate unobservable entities, forces, and properties. These hypotheses… are not individually testable, because we cannot directly observe or measure the properties of the hypothetical entities. But the theories as wholes have a great deal of predictive and descriptive power, and they permit us to explain and predict a wide range of physical phenomena. And the best explanation of the success of these theories is that they are true: that the world consists of entities and forces approximately similar to those hypothesized in physical theory. So realism is an inference to the best explanation…”

“WTF?!” is the only reaction I can have when I read Daniel Little.

Ptolemy’s epicycles are a very good model of planetary motion–albeit not as good as General Relativity. Nobody believes that epicycles are real …

There is something there. But just because your theory is good does not mean that the entities in your theory are “really there”, whatever that might mean…

Although Brad sounds upset, I can’t really see any good reasons why.

In a time when scientific relativism is expanding, it is important to keep up the claim that science should not be reduced to a purely discursive level. We have to maintain the Enlightenment tradition of thinking of reality as principally independent of our views of it and of the main task of science as studying the structure of this reality. Perhaps the most important contribution a researcher can make is to reveal what this reality that is the object of science actually looks like.

Science is made possible by the fact that there are structures that are durable and are independent of our knowledge or beliefs about them. There exists a reality beyond our theories and concepts of it. It is this independent reality that our theories in some way deal with. Contrary to positivism, I would as a critical realist argue that the main task of science is not to detect event-regularities between observed facts. Rather, that task must be conceived as identifying the underlying structure and forces that produce the observed events.

In a truly wonderful essay – chapter three of Error and Inference (Cambridge University Press, 2010, eds. Deborah Mayo and Aris Spanos) – Alan Musgrave gives strong arguments why scientific realism and inference to the best explanation are the best alternatives for explaining what’s going on in the world we live in:

For realists, the name of the scientific game is explaining phenomena, not just saving them. Realists typically invoke ‘inference to the best explanation’ or IBE …

IBE is a pattern of argument that is ubiquitous in science and in everyday life as well. van Fraassen has a homely example:
“I hear scratching in the wall, the patter of little feet at midnight, my cheese disappears – and I infer that a mouse has come to live with me. Not merely that these apparent signs of mousely presence will continue, not merely that all the observable phenomena will be as if there is a mouse, but that there really is a mouse.” (1980: 19-20)
Here, the mouse hypothesis is supposed to be the best explanation of the phenomena, the scratching in the wall, the patter of little feet, and the disappearing cheese.
What exactly is the inference in IBE, what are the premises, and what the conclusion? van Fraassen says “I infer that a mouse has come to live with me”. This suggests that the conclusion is “A mouse has come to live with me” and that the premises are statements about the scratching in the wall, etc. Generally, the premises are the things to be explained (the explanandum) and the conclusion is the thing that does the explaining (the explanans). But this suggestion is odd. Explanations are many and various, and it will be impossible to extract any general pattern of inference taking us from explanandum to explanans. Moreover, it is clear that inferences of this kind cannot be deductively valid ones, in which the truth of the premises guarantees the truth of the conclusion. For the conclusion, the explanans, goes beyond the premises, the explanandum. In the standard deductive model of explanation, we infer the explanandum from the explanans, not the other way around – we do not deduce the explanatory hypothesis from the phenomena, rather we deduce the phenomena from the explanatory hypothesis …

The intellectual ancestor of IBE is Peirce’s abduction, and here we find a different pattern:

The surprising fact, C, is observed.
But if A were true, C would be a matter of course.
Hence, … A is true.
(C. S. Peirce, 1931-58, Vol. 5: 189)

Here the second premise is a fancy way of saying “A explains C”. Notice that the explanatory hypothesis A figures in this second premise as well as in the conclusion. The argument as a whole does not generate the explanans out of the explanandum. Rather, it seeks to justify the explanatory hypothesis …

Abduction is deductively invalid … IBE attempts to improve upon abduction by requiring that the explanation is the best explanation that we have. It goes like this:

F is a fact.
Hypothesis H explains F.
No available competing hypothesis explains F as well as H does.
Therefore, H is true
(William Lycan, 1985: 138)

This is better than abduction, but not much better. It is also deductively invalid …

There is a way to rescue abduction and IBE. We can validate them without adding missing premises that are obviously false, which would merely trade obvious invalidity for equally obvious unsoundness. Peirce provided the clue to this. Peirce’s original abductive scheme was not quite what we have considered so far. Peirce’s original scheme went like this:

The surprising fact, C, is observed.
But if A were true, C would be a matter of course.
Hence, there is reason to suspect that A is true.
(C. S. Peirce, 1931-58, Vol. 5: 189)

This is obviously invalid, but to repair it we need the missing premise “There is reason to suspect that any explanation of a surprising fact is true”. This missing premise is, I suggest, true. After all, the epistemic modifier “There is reason to suspect that …” weakens the claims considerably. In particular, “There is reason to suspect that A is true” can be true even though A is false. If the missing premise is true, then instances of the abductive scheme may be both deductively valid and sound.

IBE can be rescued in a similar way. I even suggest a stronger epistemic modifier, not “There is reason to suspect that …” but rather “There is reason to believe (tentatively) that …” or, equivalently, “It is reasonable to believe (tentatively) that …” What results, with the missing premise spelled out, is:

It is reasonable to believe that the best available explanation of any fact is true.
F is a fact.
Hypothesis H explains F.
No available competing hypothesis explains F as well as H does.
Therefore, it is reasonable to believe that H is true.

This scheme is valid and instances of it might well be sound. Inferences of this kind are employed in the common affairs of life, in detective stories, and in the sciences.

Of course, to establish that any such inference is sound, the ‘explanationist’ owes us an account of when a hypothesis explains a fact, and of when one hypothesis explains a fact better than another hypothesis does. If one hypothesis yields only a circular explanation and another does not, the latter is better than the former. If one hypothesis has been tested and refuted and another has not, the latter is better than the former. These are controversial issues, to which I shall return. But they are not the most controversial issue – that concerns the major premise. Most philosophers think that the scheme is unsound because this major premise is false, whatever account we can give of explanation and of when one explanation is better than another. So let me assume that the explanationist can deliver on the promises just mentioned, and focus on this major objection.

People object that the best available explanation might be false. Quite so – and so what? It goes without saying that any explanation might be false, in the sense that it is not necessarily true. It is absurd to suppose that the only things we can reasonably believe are necessary truths.

What if the best explanation not only might be false, but actually is false? Can it ever be reasonable to believe a falsehood? Of course it can. Suppose van Fraassen’s mouse explanation is false, that a mouse is not responsible for the scratching, the patter of little feet, and the disappearing cheese. Still, it is reasonable to believe it, given that it is our best explanation of those phenomena. Of course, if we find out that the mouse explanation is false, it is no longer reasonable to believe it. But what we find out is that what we believed was wrong, not that it was wrong or unreasonable for us to have believed it.

People object that being the best available explanation of a fact does not prove something to be true or even probable. Quite so – and again, so what? The explanationist principle – “It is reasonable to believe that the best available explanation of any fact is true” – means that it is reasonable to believe or think true things that have not been shown to be true or probable, more likely true than not.

I do appreciate when mainstream economists like Brad make an effort at doing some methodological-ontological-epistemological reflection. On this issue, unfortunately — although it’s always interesting and thought-provoking to read what Brad has to say — his arguments are too weak to warrant the negative stance on scientific realism and inference to the best explanation.


Unbiased econometric estimates? Forget it!

30 August, 2015 at 20:42 | Posted in Statistics & Econometrics | Comments Off on Unbiased econometric estimates? Forget it!

Following our recent post on econometricians’ traditional privileging of unbiased estimates, there were a bunch of comments echoing the challenge of teaching this topic, as students as well as practitioners often seem to want the comfort of an absolute standard such as best linear unbiased estimate or whatever. Commenters also discussed the tradeoff between bias and variance, and the idea that unbiased estimates can overfit the data.

I agree with all these things but I just wanted to raise one more point: In realistic settings, unbiased estimates simply don’t exist. In the real world we have nonrandom samples, measurement error, nonadditivity, nonlinearity, etc etc etc.

So forget about it. We’re living in the real world …


It’s my impression that many practitioners in applied econometrics and statistics think of their estimation choice kinda like this:

1. The unbiased estimate. It’s the safe choice, maybe a bit boring and maybe not the most efficient use of the data, but you can trust it and it gets the job done.

2. A biased estimate. Something flashy, maybe Bayesian, maybe not, it might do better but it’s risky. In using the biased estimate, you’re stepping off base—the more the bias, the larger your lead—and you might well get picked off …

If you take the choice above and combine it with the unofficial rule that statistical significance is taken as proof of correctness (in econ, this would also require demonstrating that the result holds under some alternative model specifications, but “p less than .05” is still key), then you get the following decision rule:

A. Go with the safe, unbiased estimate. If it’s statistically significant, run some robustness checks and, if the result doesn’t go away, stop.

B. If you don’t succeed with A, you can try something fancier. But . . . if you do that, everyone will know that you tried plan A and it didn’t work, so people won’t trust your finding.

So, in a sort of Gresham’s Law, all that remains is the unbiased estimate. But, hey, it’s safe, conservative, etc, right?

And that’s where the present post comes in. My point is that the unbiased estimate does not exist! There is no safe harbor. Just as we can never get our personal risks in life down to zero … there is no such thing as unbiasedness. And it’s a good thing, too: recognition of this point frees us to do better things with our data right away.

Andrew Gelman
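To see how quickly ‘unbiasedness’ evaporates outside the textbook, here is a minimal simulation sketch of my own (not Gelman’s code): ordinary least squares is the canonical unbiased estimator, but add a dose of classical measurement error to the regressor and the estimate is systematically pulled towards zero.

# Minimal sketch (illustrative assumption: classical measurement error in the regressor).
# The 'unbiased' OLS slope is attenuated once the regressor is observed with noise.
import numpy as np

rng = np.random.default_rng(42)
true_beta, n, n_sims = 2.0, 200, 2000
estimates = []

for _ in range(n_sims):
    x = rng.normal(0.0, 1.0, n)                   # true regressor
    y = 1.0 + true_beta * x + rng.normal(0.0, 1.0, n)
    x_obs = x + rng.normal(0.0, 1.0, n)           # what we actually get to observe
    estimates.append(np.polyfit(x_obs, y, 1)[0])  # plain OLS slope on observed data

print("true beta:         ", true_beta)
print("mean OLS estimate: ", round(float(np.mean(estimates)), 3))  # ~1.0, not 2.0

With the noise levels chosen here the estimate settles around half the true value no matter how many studies we average over, which is exactly the sense in which the textbook guarantee of unbiasedness has quietly disappeared.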

‘New Keynesian’ models are not too simple. They are just wrong.

30 August, 2015 at 11:30 | Posted in Economics | 4 Comments

Simon Wren-Lewis has a nice post discussing Paul Romer’s critique of macro. In Simon’s words:

“It is hard to get academic macroeconomists trained since the 1980s to address [large scale Keynesian models], because they have been taught that these models and techniques are fatally flawed because of the Lucas critique and identification problems … But DSGE models as a guide for policy are also fatally flawed because they are too simple. The unique property that DSGE models have is internal consistency … Take a DSGE model, and alter a few equations so that they fit the data much better, and you have what could be called a structural econometric model. It is internally inconsistent, but because it fits the data better it may be a better guide for policy.”

Nope! Not too simple. Just wrong!

I disagree with Simon. NK models are not too simple. They are simply wrong. There are no ‘frictions’. There is no Calvo Fairy. There are simply persistent nominal beliefs.

Period.

Roger Farmer

Yes indeed. There really is something about the way macroeconomists construct their models nowadays that obviously doesn’t sit right.

Empirical evidence only plays a minor role in neoclassical mainstream economic theory, where models largely function as a substitute for empirical evidence. One might have hoped that, humbled by the manifest failure of its theoretical pretences during the latest economic-financial crisis, mainstream economics would abandon its one-sided, almost religious, insistence on axiomatic-deductivist modeling as the only scientific activity worth pursuing, and make way for a methodological pluralism based on ontological considerations rather than formalistic tractability. That has, so far, not happened.

Fortunately — when you’ve got tired of the kind of macroeconomic apologetics produced by “New Keynesian” macroeconomists and other DSGE modellers — there still are some real Keynesian macroeconomists to read. One of them — Axel Leijonhufvud — writes:

For many years now, the main alternative to Real Business Cycle Theory has been a somewhat loose cluster of models given the label of New Keynesian theory. New Keynesians adhere on the whole to the same DSGE modeling technology as RBC macroeconomists but differ in the extent to which they emphasise inflexibilities of prices or other contract terms as sources of short-term adjustment problems in the economy. The “New Keynesian” label refers back to the “rigid wages” brand of Keynesian theory of 40 or 50 years ago. Except for this stress on inflexibilities this brand of contemporary macroeconomic theory has basically nothing Keynesian about it …

I conclude that dynamic stochastic general equilibrium theory has shown itself an intellectually bankrupt enterprise. But this does not mean that we should revert to the old Keynesian theory that preceded it (or adopt the New Keynesian theory that has tried to compete with it). What we need to learn from Keynes … are about how to view our responsibilities and how to approach our subject.

If macroeconomic models — no matter of what ilk — build on microfoundational assumptions of representative actors, rational expectations, market clearing and equilibrium, and we know that real people and markets cannot be expected to obey these assumptions, then there is no warrant for supposing that conclusions about causally relevant mechanisms or regularities derived in the models can be bridged over to the real world. Incompatibility between actual behaviour and the behaviour in macroeconomic models building on representative actors and rational expectations-microfoundations is not a symptom of “irrationality”. It rather shows the futility of trying to represent real-world target systems with models flagrantly at odds with reality.

A gadget is just a gadget — and no matter how brilliantly silly the DSGE models you come up with are, they do not help us in working with the fundamental issues of modern economies. Using DSGE models only confirms Robert Gordon’s dictum that today

rigor competes with relevance in macroeconomic and monetary theory, and in some lines of development macro and monetary theorists, like many of their colleagues in micro theory, seem to consider relevance to be more or less irrelevant.

Funeral Ikos

30 August, 2015 at 09:31 | Posted in Varia | Comments Off on Funeral Ikos

 

If thou hast shown mercy
unto man, o man,
that same mercy
shall be shown thee there;
and if on an orphan
thou hast shown compassion,
that same shall there
deliver thee from want.
If in this life
the naked thou hast clothed,
the same shall give thee
shelter there,
and sing the psalm:
Alleluia.
 
 
 
 
 

A life without the music of people like John Tavener and Arvo Pärt would be unimaginable.

Has macroeconomics — really — progressed?

25 August, 2015 at 09:32 | Posted in Economics | 3 Comments

A typical DSGE model has a key property that from my work seems wrong. A good example is the model in Galí and Gertler (2007). In this model a positive price shock — a “cost push” shock — is explosive unless the Fed raises the nominal interest rate more than the increase in the inflation rate. In other words, positive price shocks with the nominal interest rate held constant are expansionary (because the real interest rate falls). In my work, however, they are contractionary. If there is a positive price shock like an oil price increase, nominal wages lag output prices, and so the real wage initially falls. This has a negative effect on consumption. In addition, household real wealth falls because nominal asset prices don’t initially rise as much as the price level. This has a negative effect on consumption through a wealth effect. There is little if any offset from lower real interest rates because households appear to respond more to nominal rates than to real rates. Positive price shocks are thus contractionary even if the Fed keeps the nominal interest rate unchanged. This property is important for a monetary authority in deciding how to respond to a positive price shock. If the authority used the Galí and Gertler (2007) model, it would likely raise the nominal interest rate too much thinking that the price shock is otherwise expansionary. Typical DSGE models are thus likely to be misleading for guiding monetary policy if this key property of the models is wrong.

Ray C. Fair

Deirdre McCloskey’s shallow and misleading rhetoric

20 August, 2015 at 17:07 | Posted in Economics | 7 Comments

This is not new to most of you of course. You are already steeped in McCloskey’s Rhetoric. Or you ought to be. After all economists are simply telling stories about the economy. Sometimes we are taken in. Sometimes we are not.

Unfortunately McCloskey herself gets a little too caught up in her stories. As in her explanation as to how she can be both a feminist and a free market economist:

“The market is the great liberator of women; it has not been the state, which is after all an instrument of patriarchy … The market is the way out of enslavement from your dad, your husband, or your sons. … The enrichment that has come through allowing markets to operate has been a tremendous part of the learned freedom of the modern women.” — Quoted in “The Changing Face of Economics – Conversations With Cutting Edge Economists” by Colander, Holt, and Rosser

Notice the binary nature of the world in this story. There are only the market (yea!) and the state (boo!). There are no other institutions. Whole swathes of society vanish or are flattened into insignificance. The state is viewed as a villain that the market heroically battles against to advance us all.

It is a ripping tale.

It is shallow and utterly misleading.

Peter Radford

Top universities — preserves of bad economics

19 August, 2015 at 09:56 | Posted in Economics | Comments Off on Top universities — preserves of bad economics

There are certainly some things that the top institutions offer which lower-ranked ones simply can’t: great buildings and history for starters. To walk around Cambridge, to see its grand architecture, and to feel drenched in its history, is an amazing experience …

But the quality of the education you get at University depends very much on the individual people you are taught by, and here University rankings are far from a perfect guide. Extremely gifted teachers and researchers can be at lower ranked Universities, for a multitude of reasons from personal preferences to sheer lock-in: a capable person can start in a lower-ranked institution, and find that the “Old Boys Network” locks them out of the higher ranked ones.

In my own field of economics, there is also a paradox at play: in many ways the top universities have become preserves of bad economics, both in content and in teaching quality, while the best education in economics often comes from the lower ranked Universities.

In fact, there’s a case to be made that the better the University is ranked, the worse the education in economics will be. And before you think I’m just flogging my own wares here, consider what the American Economic Association had to say about the way that economics education appeared to be headed in the USA back in 1991:

“The Commission’s fear is that graduate programs may be turning out a generation with too many idiot savants, skilled in technique but innocent of real economic issues.” (“Report of the Commission on Graduate Education in Economics”, American Economic Association 1991)

The graduates of 1991 have become the University lecturers of today, and thanks to them, the trend the report identified at the graduate level has trickled down to undergraduate education at the so-called leading Universities.

Steve Keen/Forbes

Things economists could learn from kids

18 August, 2015 at 19:26 | Posted in Economics | Comments Off on Things economists could learn from kids


Kids, somehow, seem to be more in touch with real science than can-opener-assuming economists …

A physicist, a chemist, and an economist are stranded on a desert island. One can only imagine what sort of play date went awry to land them there. Anyway, they’re hungry. Like, desert island hungry. And then a can of soup washes ashore. Progresso Reduced Sodium Chicken Noodle, let’s say. Which is perfect, because the physicist can’t have much salt, and the chemist doesn’t eat red meat.

But, famished as they are, our three professionals have no way to open the can. So they put their brains to the problem. The physicist says “We could drop it from the top of that tree over there until it breaks open.” And the chemist says “We could build a fire and sit the can in the flames until it bursts open.”

Those two squabble a bit, until the economist says “No, no, no. Come on, guys, you’d lose most of the soup. Let’s just assume a can opener.”

Statistical power and significance

18 August, 2015 at 09:41 | Posted in Statistics & Econometrics | 1 Comment

Much has been said about significance testing – most of it negative. Methodologists constantly point out that researchers misinterpret p-values. Some say that it is at best a meaningless exercise and at worst an impediment to scientific discoveries. Consequently, I believe it is extremely important that students and researchers correctly interpret statistical tests. This visualization is meant as an aid for students when they are learning about statistical hypothesis testing.

Kristoffer Magnusson

Great stuff!
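For anyone who wants to poke at the idea behind the visualization, here is a rough simulation sketch of my own (not Magnusson’s code) that estimates the power of a two-sample t-test by brute force:

# Toy sketch: simulated statistical power for a two-sample t-test.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(7)

def simulated_power(d, n, alpha=0.05, n_sims=5000):
    """Share of simulated studies reaching p < alpha when the true effect is d."""
    hits = 0
    for _ in range(n_sims):
        control = rng.normal(0.0, 1.0, n)
        treated = rng.normal(d, 1.0, n)      # true standardized effect size d
        hits += ttest_ind(treated, control).pvalue < alpha
    return hits / n_sims

for n in (20, 50, 200):
    print(n, round(simulated_power(d=0.3, n=n), 2))   # power climbs with sample size

Running it makes the textbook lesson concrete: with a modest effect and only twenty observations per group, most simulated ‘studies’ miss the effect entirely.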

Robert Solow kicking Lucas and Sargent in the pants

16 August, 2015 at 20:34 | Posted in Economics | 6 Comments

McNees documented the radical break between the 1960s and 1970s. The question is: what are the possible responses that economists and economics can make to those events?

One possible response is that of Professors Lucas and Sargent. They describe what happened in the 1970s in a very strong way with a polemical vocabulary reminiscent of Spiro Agnew. Let me quote some phrases that I culled from the paper: “wildly incorrect,” “fundamentally flawed,” “wreckage,” “failure,” “fatal,” “of no value,” “dire implications,” “failure on a grand scale,” “spectacular recent failure,” “no hope” … I think that Professors Lucas and Sargent really seem to be serious in what they say, and in turn they have a proposal for constructive research that I find hard to talk about sympathetically. They call it equilibrium business cycle theory, and they say very firmly that it is based on two terribly important postulates — optimizing behavior and perpetual market clearing. When you read closely, they seem to regard the postulate of optimizing behavior as self-evident and the postulate of market-clearing behavior as essentially meaningless. I think they are too optimistic, since the one that they think is self-evident I regard as meaningless and the one that they think is meaningless, I regard as false. The assumption that everyone optimizes implies only weak and uninteresting consistency conditions on their behavior. Anything useful has to come from knowing what they optimize, and what constraints they perceive. Lucas and Sargent’s casual assumptions have no special claim to attention …

It is plain as the nose on my face that the labor market and many markets for produced goods do not clear in any meaningful sense. Professors Lucas and Sargent say after all there is no evidence that labor markets do not clear, just the unemployment survey. That seems to me to be evidence. Suppose an unemployed worker says to you “Yes, I would be glad to take a job like the one I have already proved I can do because I had it six months ago or three or four months ago. And I will be glad to work at exactly the same wage that is being paid to those exactly like myself who used to be working at that job and happen to be lucky enough still to be working at it.” Then I’m inclined to label that a case of excess supply of labor and I’m not inclined to make up an elaborate story of search or misinformation or anything of the sort. By the way I find the misinformation story another gross implausibility. I would like to see direct evidence that the unemployed are more misinformed than the employed, as I presume would have to be the case if everybody is on his or her supply curve of employment. Similarly, if the Chrysler Motor Corporation tells me that it would be happy to make and sell 1000 more automobiles this week at the going price if only it could find buyers for them, I am inclined to believe they are telling me that price exceeds marginal cost, or even that marginal revenue exceeds marginal cost, and regard that as a case of excess supply of automobiles. Now you could ask, why do not prices and wages erode and crumble under those circumstances? Why doesn’t the unemployed worker who told me “Yes, I would like to work, at the going wage, at the old job that my brother-in-law or my brother-in-law’s brother-in-law is still holding”, why doesn’t that person offer to work at that job for less? Indeed why doesn’t the employer try to encourage wage reduction? That doesn’t happen either. Why does the Chrysler Corporation not cut the price? Those are questions that I think an adult person might spend a lifetime studying. They are important and serious questions, but the notion that the excess supply is not there strikes me as utterly implausible.

Robert Solow

No unnecessary beating around the bush here.

The always eminently quotable Solow says it all.

The purported strength of New Classical macroeconomics is that it has firm anchorage in preference-based microeconomics, and especially the decisions taken by inter-temporal utility maximizing “forward-looking” individuals.

To some of us, however, this has come at too high a price. The almost quasi-religious insistence that macroeconomics has to have microfoundations – without ever presenting any ontological or epistemological justification for this claim – has turned a blind eye to the weakness of the whole enterprise of trying to depict a complex economy based on an all-embracing representative actor equipped with superhuman knowledge, forecasting abilities and forward-looking rational expectations. It is as if – after having swallowed the sour grapes of the Sonnenschein-Mantel-Debreu theorem – these economists want to resurrect the omniscient Walrasian auctioneer in the form of all-knowing representative actors equipped with rational expectations and assumed to somehow know the true structure of our model of the world.

That anyone should take that kind of stuff seriously is totally and unbelievably ridiculous. Or as Solow has it:

Suppose someone sits down where you are sitting right now and announces to me that he is Napoleon Bonaparte. The last thing I want to do with him is to get involved in a technical discussion of cavalry tactics at the battle of Austerlitz. If I do that, I’m getting tacitly drawn into the game that he is Napoleon. Now, Bob Lucas and Tom Sargent like nothing better than to get drawn into technical discussions, because then you have tacitly gone along with their fundamental assumptions; your attention is attracted away from the basic weakness of the whole story. Since I find that fundamental framework ludicrous, I respond by treating it as ludicrous – that is, by laughing at it – so as not to fall into the trap of taking it seriously and passing on to matters of technique.

Robert Solow

Our kids and the American dream

15 August, 2015 at 17:37 | Posted in Education & School | Comments Off on Our kids and the American dream

 

No doubt one of the most important books you will read this year!

General equilibrium theory — a gross misallocation of intellectual resources and time

15 August, 2015 at 10:30 | Posted in Economics | 2 Comments

General equilibrium is fundamental to economics on a more normative level as well. A story about Adam Smith, the invisible hand, and the merits of markets pervades introductory textbooks, classroom teaching, and contemporary political discourse. The intellectual foundation of this story rests on general equilibrium, not on the latest mathematical excursions. If the foundation of everyone’s favourite economics story is now known to be unsound — and according to some, uninteresting as well — then the profession owes the world a bit of an explanation.

Frank Ackerman

Almost a century and a half after Léon Walras founded general equilibrium theory, economists still have not been able to show that markets lead economies to equilibria.

We do know that — under very restrictive assumptions — equilibria do exist, are unique and are Pareto-efficient.

But after reading Frank Ackerman’s article — or Franklin M. Fisher’s The stability of general equilibrium – what do we know and why is it important? — one has to ask oneself — what good does that do?

As long as we cannot show that there are convincing reasons to suppose there are forces which lead economies to equilibria — the value of general equilibrium theory is nil. As long as we cannot really demonstrate that there are forces operating — under reasonable, relevant and at least mildly realistic conditions — at moving markets to equilibria, there cannot really be any sustainable reason for anyone to pay any interest or attention to this theory.

A stability that can only be proved by assuming Santa Claus conditions is of no avail. Most people do not believe in Santa Claus anymore. And for good reasons. Santa Claus is for kids, and general equilibrium economists ought to grow up, leaving their Santa Claus economics in the dustbin of history.

Continuing to model a world full of agents behaving as economists — “often wrong, but never uncertain” — and still not being able to show that the system under reasonable assumptions converges to equilibrium (or simply assuming the problem away), is a gross misallocation of intellectual resources and time. As Ackerman writes:

The guaranteed optimality of market outcomes and laissez-faire policies died with general equilibrium. If economic stability rests on exogenous social and political forces, then it is surely appropriate to debate the desirable extent of intervention in the market — in part, in order to rescue the market from its own instability.

Les garcons de la plage

14 August, 2015 at 21:24 | Posted in Varia | Comments Off on Les garcons de la plage

 

Ragnar Frisch on the limits of statistics and significance testing

13 August, 2015 at 12:00 | Posted in Statistics & Econometrics | Comments Off on Ragnar Frisch on the limits of statistics and significance testing

I do not claim that the technique developed in the present paper will, like a stone of the wise, solve all the problems of testing “significance” with which the economic statistician is confronted. No statistical technique, however refined, will ever be able to do such a thing. The ultimate test of significance must consist in a network of conclusions and cross checks where theoretical economic considerations, intimate and realistic knowledge of the data and a refined statistical technique concur.

Ragnar Frisch

Noah Smith thinks p-values work. Read my lips — they don’t!

12 August, 2015 at 16:24 | Posted in Statistics & Econometrics | 5 Comments

Noah Smith has a post up trying to defend p-values and traditional statistical significance testing against the increasing attacks launched against it:

Suddenly, everyone is getting really upset about p-values and statistical significance testing. The backlash has reached such a frenzy that some psych journals are starting to ban significance testing. Though there are some well-known problems with p-values and significance testing, this backlash doesn’t pass the smell test. When a technique has been in wide use for decades, it’s certain that LOTS of smart scientists have had a chance to think carefully about it. The fact that we’re only now getting the backlash means that the cause is something other than the inherent uselessness of the methodology.

Hmm …

That doesn’t sound very convincing.

Maybe we should apply yet another smell test …

A non-trivial part of teaching statistics consists of teaching students to perform significance testing. A problem I have noticed repeatedly over the years, however, is that no matter how careful you try to be in explicating what the probabilities generated by these statistical tests – p-values – really are, most students still misinterpret them.

This is not to be blamed on students’ ignorance, but rather on significance testing not being particularly transparent (conditional probability inference is difficult even for those of us who teach and practice it). A lot of researchers fall prey to the same mistakes. So — given that it is anyway very unlikely that any population parameter is exactly zero, and that, contrary to assumption, most samples in social science and economics are not random and do not have the right distributional shape — why continue to press students and researchers to do null hypothesis significance testing, testing that relies on a weird backward logic that students and researchers usually don’t understand?

Statistical significance doesn’t say that something is important or true. And since there already are far better and more relevant tests that can be done, it is high time to give up on this statistical fetish.
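A small sketch of my own makes the point concrete: with a large enough sample, a practically negligible effect produces as impressive a p-value as you like.

# Toy example: 'significant' is not the same as important.
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(1)
tiny_effect = 0.02                          # trivial in any substantive sense
sample = rng.normal(tiny_effect, 1.0, 1_000_000)

t_stat, p_value = ttest_1samp(sample, popmean=0.0)
print("estimated effect:", round(float(sample.mean()), 3))  # roughly 0.02, next to nothing
print("p-value:", p_value)                                   # 'highly significant' all the same

The ‘result’ is statistically significant at any level you care to name, and it still tells us nothing of substance. On the related question of what the published record of p-values can tell us about a whole literature, Andrew Gelman puts it like this: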

Jager and Leek may well be correct in their larger point, that the medical literature is broadly correct. But I don’t think the statistical framework they are using is appropriate for the questions they are asking. My biggest problem is the identification of scientific hypotheses and statistical “hypotheses” of the “theta = 0” variety.

Based on the word “empirical” in the title, I thought the authors were going to look at a large number of papers with p-values and then follow up and see if the claims were replicated. But no, they don’t follow up on the studies at all! What they seem to be doing is collecting a set of published p-values and then fitting a mixture model to this distribution, a mixture of a uniform distribution (for null effects) and a beta distribution (for non-null effects). Since only statistically significant p-values are typically reported, they fit their model restricted to p-values less than 0.05. But this all assumes that the p-values have this stated distribution. You don’t have to be Uri Simonsohn to know that there’s a lot of p-hacking going on. Also, as noted above, the problem isn’t really effects that are exactly zero, the problem is that a lot of effects are lost in the noise and are essentially undetectable given the way they are studied.

Jager and Leek write that their model is commonly used to study hypotheses in genetics and imaging. I could see how this model could make sense in those fields … but I don’t see this model applying to published medical research, for two reasons. First … I don’t think there would be a sharp division between null and non-null effects; and, second, there’s just too much selection going on for me to believe that the conditional distributions of the p-values would be anything like the theoretical distributions suggested by Neyman-Pearson theory.

So, no, I don’t at all believe Jager and Leek when they write, “we are able to empirically estimate the rate of false positives in the medical literature and trends in false positive rates over time.” They’re doing this by basically assuming the model that is being questioned, the textbook model in which effects are pure and in which there is no p-hacking.

Andrew Gelman
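Gelman’s worry about p-hacking is easy to make concrete. Here is a toy sketch (my construction, not Jager and Leek’s model): let every ‘study’ try five analyses of pure noise and report whichever one clears p < 0.05, and see what ends up in the published record.

# Toy p-hacking sketch: selection on significance, with no true effects anywhere.
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(3)
n_studies, n_specs, n = 2000, 5, 50
published = 0

for _ in range(n_studies):
    # Five 'analyst degrees of freedom', modelled crudely as independent looks at the data;
    # the true effect is exactly zero in every one of them.
    pvals = [ttest_1samp(rng.normal(0.0, 1.0, n), 0.0).pvalue for _ in range(n_specs)]
    published += min(pvals) < 0.05           # only a 'significant' result gets written up

print("share of pure-noise studies ending up 'significant':", round(published / n_studies, 2))
# roughly 1 - 0.95**5, i.e. about 0.23 rather than the nominal 0.05

Under selection like this the p-values that survive into print no longer follow anything like the textbook distribution, which is precisely why fitting a tidy mixture model to them tells us so little.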

Indeed. If anything, this underlines how important it is — and on this Noah Smith and yours truly agree — not to equate science with statistical calculation. All science entails human judgement, and using statistical models doesn’t relieve us of that necessity. When we work with misspecified models, the scientific value of significance testing is actually zero – even though we may be making formally valid statistical inferences! Statistical models and concomitant significance tests are no substitutes for doing real science. Or as a noted German philosopher once famously wrote:

There is no royal road to science, and only those who do not dread the fatiguing climb of its steep paths have a chance of gaining its luminous summits.

In its standard form, a significance test is not the kind of “severe test” that we are looking for in our search for being able to confirm or disconfirm empirical scientific hypotheses. This is problematic for many reasons, one being that there is a strong tendency to accept the null hypothesis as long as it can’t be rejected at the standard 5% significance level. In their standard form, significance tests bias against new hypotheses by making it hard to disconfirm the null hypothesis.

And as shown over and over again when it is applied, people have a tendency to read “not disconfirmed” as “probably confirmed.” Standard scientific methodology tells us that when there is only say a 10 % probability that pure sampling error could account for the observed difference between the data and the null hypothesis, it would be more “reasonable” to conclude that we have a case of disconfirmation. Especially if we perform many independent tests of our hypothesis and they all give about the same 10 % result as our reported one, I guess most researchers would count the hypothesis as even more disconfirmed.

Most importantly — we should never forget that the underlying parameters we use when performing significance tests are model constructions. Our p-values mean next to nothing if the model is wrong. As eminent mathematical statistician David Freedman writes:

I believe model validation to be a central issue. Of course, many of my colleagues will be found to disagree. For them, fitting models to data, computing standard errors, and performing significance tests is “informative,” even though the basic statistical assumptions (linearity, independence of errors, etc.) cannot be validated. This position seems indefensible, nor are the consequences trivial. Perhaps it is time to reconsider.

Statistical significance tests DO NOT validate models!

In journal articles a typical regression equation will have an intercept and several explanatory variables. The regression output will usually include an F-test, with p – 1 degrees of freedom in the numerator and n – p in the denominator. The null hypothesis will not be stated. The missing null hypothesis is that all the coefficients vanish, except the intercept.

If F is significant, that is often thought to validate the model. Mistake. The F-test takes the model as given. Significance only means this: if the model is right and the coefficients are 0, it is very unlikely to get such a big F-statistic. Logically, there are three possibilities on the table:
i) An unlikely event occurred.
ii) Or the model is right and some of the coefficients differ from 0.
iii) Or the model is wrong.
So?
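Freedman’s point is easy to demonstrate. A small sketch of my own: generate data from a relationship that is plainly not linear, fit a straight line anyway, and watch the F-test come out ‘highly significant’ even though the model is wrong.

# Toy example: a 'significant' F-test on a misspecified model.
import numpy as np
from scipy.stats import f as f_dist

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(0.0, 3.0, n)
y = x**2 + rng.normal(0.0, 1.0, n)        # the true relation is quadratic, not linear

slope, intercept = np.polyfit(x, y, 1)    # fit the (wrong) linear model anyway
resid = y - (intercept + slope * x)
r2 = 1.0 - resid.var() / y.var()

p = 2                                      # parameters: intercept + slope
F = (r2 / (p - 1)) / ((1.0 - r2) / (n - p))
p_value = f_dist.sf(F, p - 1, n - p)
print("F =", round(F, 1), " p =", p_value)  # overwhelmingly 'significant'
print("yet the residuals are systematically curved: the model is still wrong")

The significant F tells us only that a flat line would fit even worse; it does not begin to validate the straight-line model, which is Freedman’s third possibility staring us in the face.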

Perfect Day (private)

11 August, 2015 at 15:06 | Posted in Varia | Comments Off on Perfect Day (private)

 
A newly wedded couple celebrating in the garden of their summer residence.
Fourteen years ago. Feels like it was yesterday …
 

Rethinking expectations

11 August, 2015 at 09:55 | Posted in Economics | 1 Comment

The tiny little problem that there is no hard empirical evidence that verifies rational expectations models doesn’t usually bother its protagonists too much. Rational expectations überpriest Thomas Sargent has defended the epistemological status of the rational expectations hypothesis arguing that since it “focuses on outcomes and does not pretend to have behavioral content,” it has proved to be “a powerful tool for making precise statements.”

Precise, yes, but relevant and realistic? I’ll be dipped!

In their attempted rescue operations, rational expectationists try to give the picture that only heterodox economists like yours truly are critical of the rational expectations hypothesis.

But, on this, they are, simply … eh … wrong.

Let’s listen to Nobel laureate Edmund Phelps — hardly a heterodox economist — and what he has to say (emphasis added):

Question: In a new volume with Roman Frydman, “Rethinking Expectations: The Way Forward for Macroeconomics,” you say the vast majority of macroeconomic models over the last four decades derailed your “microfoundations” approach. Can you explain what that is and how it differs from the approach that became widely accepted by the profession?

Answer: In the expectations-based framework that I put forward around 1968, we didn’t pretend we had a correct and complete understanding of how firms or employees formed expectations about prices or wages elsewhere. We turned to what we thought was a plausible and convenient hypothesis. For example, if the prices of a company’s competitors were last reported to be higher than in the past, it might be supposed that the company will expect their prices to be higher this time, too, but not that much. This is called “adaptive expectations:” You adapt your expectations to new observations but don’t throw out the past. If inflation went up last month, it might be supposed that inflation will again be high but not that high.

Q: So how did adaptive expectations morph into rational expectations?

A: The “scientists” from Chicago and MIT came along to say, we have a well-established theory of how prices and wages work. Before, we used a rule of thumb to explain or predict expectations: Such a rule is picked out of the air. They said, let’s be scientific. In their mind, the scientific way is to suppose price and wage setters form their expectations with every bit as much understanding of markets as the expert economist seeking to model, or predict, their behavior. The rational expectations approach is to suppose that the people in the market form their expectations in the very same way that the economist studying their behavior forms her expectations: on the basis of her theoretical model.

Q: And what’s the consequence of this putsch?

A: Craziness for one thing. You’re not supposed to ask what to do if one economist has one model of the market and another economist a different model. The people in the market cannot follow both economists at the same time. One, if not both, of the economists must be wrong. Another thing: It’s an important feature of capitalist economies that they permit speculation by people who have idiosyncratic views and an important feature of a modern capitalist economy that innovators conceive their new products and methods with little knowledge of whether the new things will be adopted — thus innovations. Speculators and innovators have to roll their own expectations. They can’t ring up the local professor to learn how. The professors should be ringing up the speculators and aspiring innovators. In short, expectations are causal variables in the sense that they are the drivers. They are not effects to be explained in terms of some trumped-up causes.

Q: So rather than live with variability, write a formula in stone!

A: What led to rational expectations was a fear of the uncertainty and, worse, the lack of understanding of how modern economies work. The rational expectationists wanted to bottle all that up and replace it with deterministic models of prices, wages, even share prices, so that the math looked like the math in rocket science. The rocket’s course can be modeled while a living modern economy’s course cannot be modeled to such an extreme. It yields up a formula for expectations that looks scientific because it has all our incomplete and not altogether correct understanding of how economies work inside of it, but it cannot have the incorrect and incomplete understanding of economies that the speculators and would-be innovators have.

Q: One of the issues I have with rational expectations is the assumption that we have perfect information, that there is no cost in acquiring that information. Yet the economics profession, including Federal Reserve policy makers, appears to have been hijacked by Robert Lucas.

A: You’re right that people are grossly uninformed, which is a far cry from what the rational expectations models suppose. Why are they misinformed? I think they don’t pay much attention to the vast information out there because they wouldn’t know what to do with it if they had it. The fundamental fallacy on which rational expectations models are based is that everyone knows how to process the information they receive according to the one and only right theory of the world. The problem is that we don’t have a “right” model that could be certified as such by the National Academy of Sciences. And as long as we operate in a modern economy, there can never be such a model.

Bloomberg

The rational expectations hypothesis presumes consistent behaviour, where expectations do not display any persistent errors. In the world of rational expectations we are always, on average, hitting the bull’s eye. In the more realistic, open systems view, there is always the possibility (danger) of making mistakes that may turn out to be systematic. It is because of this, presumably, that we put so much emphasis on learning in our modern knowledge societies.

So, where does all this leave us? I think John Kay sums it up pretty well:

A scientific idea is not seminal because it influences the research agenda of PhD students. An important scientific advance yields conclusions that differ from those derived from other theories, and establishes that these divergent conclusions are supported by observation. Yet as Prof Sargent disarmingly observed, “such empirical tests were rejecting too many good models” in the programme he had established with fellow Nobel laureates Bob Lucas and Ed Prescott. In their world, the validity of a theory is demonstrated if, after the event, and often with torturing of data and ad hoc adjustments that are usually called “imperfections”, it can be reconciled with already known facts – “calibrated”. Since almost everything can be “explained” in this way, the theory is indeed universal; no other approach is necessary, or even admissible …

Chicago economics — only for Gods and idiots

10 August, 2015 at 10:37 | Posted in Economics | Comments Off on Chicago economics — only for Gods and idiots

If I ask myself what I could legitimately assume a person to have rational expectations about, the technical answer would be, I think, about the realization of a stationary stochastic process, such as the outcome of the toss of a coin or anything that can be modeled as the outcome of a random process that is stationary. I don’t think that the economic implications of the outbreak of World War II were regarded by most people as the realization of a stationary stochastic process. In that case, the concept of rational expectations does not make any sense. Similarly, the major innovations cannot be thought of as the outcome of a random process. In that case the probability calculus does not apply.

Robert Solow

‘Modern’ macroeconomic theories are as a rule founded on the assumption of  rational expectations — where the world evolves in accordance with fully predetermined models where uncertainty has been reduced to stochastic risk describable by some probabilistic distribution.

The tiny little problem that there is no hard empirical evidence that verifies these models — cf. Michael Lovell (1986) & Nikolay Gertchev (2007) — usually doesn’t bother its protagonists too much. Rational expectations überpriest Thomas Sargent has the following to say on the epistemological status of the rational expectations hypothesis (emphasis added):

Partly because it focuses on outcomes and does not pretend to have behavioral content, the hypothesis of rational expectations has proved to be a powerful tool for making precise statements about complicated dynamic economic systems.

Precise, yes, in the celestial world of models. But relevant and realistic? I’ll be dipped!

And a few years later, when asked if he thought “that differences among people’s models are important aspects of macroeconomic policy debates”, Sargent replied (emphasis added):

The fact is you simply cannot talk about their differences within the typical rational expectations model. There is a communism of models. All agents within the model, the econometricians, and God share the same model.

Building models on rational expectations means we are either Gods or idiots. Most of us know we are neither. So, Gods and idiots may share Sargent’s and Lucas’s models, but they certainly aren’t my models.

Economics departments — breeding generation after generation of idiot savants

10 August, 2015 at 10:25 | Posted in Economics | 5 Comments

Modern economics has become increasingly irrelevant to the understanding of the real world. In his seminal book Economics and Reality (1997) Tony Lawson traced this irrelevance to the failure of economists to match their deductive-axiomatic methods with their subject.

It is — sad to say — as relevant today as it was eighteen years ago.


It is still a fact that within mainstream economics internal validity is everything and external validity nothing. Why anyone should be interested in those kinds of theories and models is beyond my imagination. As long as mainstream economists do not come up with any export-licenses for their theories and models to the real world in which we live, they really should not be surprised if people say that this is not science, but autism!

Studying mathematics and logic is interesting and fun. It sharpens the mind. In pure mathematics and logic we do not have to worry about external validity. But economics is not pure mathematics or logic. It’s about society. The real world. Forgetting that, economics is really in dire straits.

Already back in 1991, the Journal of Economic Literature published a study by the Commission on Graduate Education in Economics (COGEE) of the American Economic Association (AEA) — chaired by Anne Krueger and including people like Kenneth Arrow, Edward Leamer, Robert Lucas, Joseph Stiglitz, and Lawrence Summers — focusing on “the extent to which graduate education in economics may have become too removed from real economic problems.” The COGEE members reported from their own experience “that it is an underemphasis on the ‘linkages’ between tools, both theory and econometrics, and ‘real world problems’ that is the weakness of graduate education in economics,” and that both students and faculty sensed “the absence of facts, institutional information, data, real-world issues, applications, and policy problems.” And in conclusion they wrote (emphasis added):

The commission’s fear is that graduate programs may be turning out a generation with too many idiot savants skilled in technique but innocent of real economic issues.

Sorry to say, not much is different today. Economics education is still in dire need of a remake. How about bringing economics back into some contact with reality?

Choosing the wrong pond to play in — H-P filter and RBC

10 August, 2015 at 08:45 | Posted in Economics | Comments Off on Choosing the wrong pond to play in — H-P filter and RBC

I have lost count of the number of times I have heard students and faculty repeat the idea in seminars, that “all models are wrong”. This aphorism, attributed to George Box, is the battle cry of the Minnesota calibrator, a breed of macroeconomist, inspired by Ed Prescott, one of the most important and influential economists of the last century.

All models are wrong … all models are wrong …

Of course all models are wrong. That is trivially true: it is the definition of a model. But the cry has been used for three decades to poke fun at attempts to use serious econometric methods to analyze time series data. Time series methods were inconvenient to the nascent Real Business Cycle Program that Ed pioneered because the models that he favored were, and still are, overwhelmingly rejected by the facts. That is inconvenient.

Ed’s response was pure genius. If the model and the data are in conflict, the data must be wrong. Time series econometrics, according to Ed, was crushing the acorn before it had time to grow into a tree. His response was not only to reformulate the theory, but also to reformulate the way in which that theory was to be judged. In a puff of calibrator’s smoke, the history of time series econometrics was relegated to the dustbin of history to take its place alongside alchemy, the ether, and the theory of phlogiston.

How did Ed achieve this remarkable feat of prestidigitation? First, he argued that we should focus on a small subset of the properties of the data. Since the work of Ragnar Frisch, economists have recognized that economic time series can be modeled as linear difference equations, hit by random shocks. These time series move together in different ways at different frequencies …

After removing trends, Ed was left with the wiggles. He proposed that we should evaluate our economic theories of business cycles by how well they explain co-movements among the wiggles. When his theory failed to clear the 8ft hurdle of the Olympic high jump, he lowered the bar to 5ft and persuaded us all that leaping over this high school bar was a success.

Keynesians protested. But they did not protest loudly enough and ultimately it became common, even among serious econometricians, to filter their data with the eponymous Hodrick Prescott filter …

By accepting the neo-classical synthesis, Keynesian economists had agreed to play by real business cycle rules. They both accepted that the economy is a self-stabilizing system that, left to itself, would gravitate back to the unique natural rate of unemployment. And for this reason, the Keynesians agreed to play by Ed’s rules. They filtered the data and set the bar at the high school level …

We don’t have to play by Ed’s rules … Once we allow aggregate demand to influence permanently the unemployment rate, the data do not look kindly on either real business cycle models or on the new-Keynesian approach. It’s time to get serious about macroeconomic science and put back the Olympic bar.

Roger Farmer
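
For readers who want to see what the detrending Farmer describes actually involves, here is a minimal sketch in Python using the Hodrick-Prescott filter as implemented in statsmodels. Everything in it is an illustrative assumption: the simulated series, the AR coefficients, and the smoothing parameter lambda = 1600 conventionally used for quarterly data.

import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter

rng = np.random.default_rng(0)

# Simulate an illustrative 'output' series: a deterministic trend plus an
# AR(2) cyclical component hit by random shocks (the Frisch-Slutsky idea).
T = 200
trend = 0.005 * np.arange(T)                 # assumed trend growth per quarter
cycle = np.zeros(T)
for t in range(2, T):
    cycle[t] = 1.3 * cycle[t - 1] - 0.4 * cycle[t - 2] + rng.normal(scale=0.007)
log_output = trend + cycle

# Hodrick-Prescott decomposition into 'trend' and 'wiggles'.
hp_cycle, hp_trend = hpfilter(log_output, lamb=1600)

# The moments the calibrators then focus on.
print("std of HP cycle (%):", round(100 * hp_cycle.std(), 2))
print("first-order autocorrelation:", round(np.corrcoef(hp_cycle[1:], hp_cycle[:-1])[0, 1], 2))

The point of Farmer’s criticism is precisely that matching a handful of such filtered moments is a far lower bar than confronting the model with the full time-series behaviour of the data.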

Theodicy

8 August, 2015 at 11:14 | Posted in Varia | Comments Off on Theodicy

Does God exist? That question has, to put it mildly, been under investigation for quite some time without any great consensus emerging. I myself belong to the possibly damned group of people who seem congenitally incapable of believing.

In my childhood I was taken to church fairly often, even outside the major holidays, and already as a ten-year-old I found that inside my head I often acted as a sports commentator during the sermons. To the priests’ expositions I therefore consistently added things like “…but that’s not how it was”, “…hardly, right?”, or “…HOW can anyone believe this?” whenever it all sounded a bit too improbable, which was most of the time.

It also seemed strange that the Lord could be angered if I, for instance, happened to forget to take off my knitted cap in church, while the girls were allowed to keep their hats on. Could someone with so much to do really be that fussy?

The athlete with the Santa beard in the paintings admittedly looked a bit grim, in contrast to the inanely smiling little cherubs whose wings hardly suggested intelligent aerodynamic design. Stand up, sit down, hymnal out, the priest shuttling from the altar to the pulpit and back, droning prayers in chorus: regarded as a tifo display it was nothing to write home about. The Catholics at least had some action segments and decent pyrotechnics that might have livened things up for an agnostically minded little lad.

What chafed the most was what I later learned is called the problem of theodicy, i.e. how one can have a God who is both good and omnipotent while there is evil and suffering in the world. My questions on the subject were met with mumbling or with explanations whose logic resembled the scouts’ most complicated knots …

Mikael Sundström

Kahneman on the Chicago school of libertarian economics

8 August, 2015 at 10:46 | Posted in Economics | 2 Comments

As interpreted by the important Chicago school of economics, faith in human rationality is closely linked to an ideology in which it is unnecessary and even immoral to protect people against their choices. Rational people should be free, and they should be responsible for taking care of themselves …

The assumption that agents are rational provides the intellectual foundation for the libertarian approach to public policy: do not interfere with the individual’s right to choose, unless the choices harm others … I once heard Gary Becker [argue] that we should consider the possibility of explaining the so-called obesity epidemic by people’s belief that a cure for diabetes will soon become available …

Much is therefore at stake in the debate between the Chicago school and the behavioral economists, who reject the extreme form of the rational-agent model. Freedom is not a contested value; all the participants in the debate are in favor of it. But life is more complex for behavioral economists than for true believers in human rationality. No behavioral economist favors a state that will force its citizens to eat a balanced diet and to watch only television programs that are good for the soul. For behavioral economists, however, freedom has a cost, which is borne by individuals who make bad choices, and by a society that feels obligated to help them. The decision of whether or not to protect individuals against their mistakes therefore presents a dilemma for behavioral economists. The economists of the Chicago school do not face that problem, because rational agents do not make mistakes. For adherents of this school, freedom is free of charge.

Daniel Kahneman

Libertarianism in perspective

7 August, 2015 at 15:46 | Posted in Economics | Comments Off on Libertarianism in perspective

 

The Lucas Critique, microfoundations and performativity

7 August, 2015 at 13:33 | Posted in Economics | 5 Comments

The Lucas Critique has justified the micro-foundations approach to macroeconomics for four decades. Put simply, unless you model the macro economy as a result of ‘deep parameters’ of the human psyche, you will never be sure whether your model will apply in a different regulatory or institutional environment. Overcoming the Lucas Critique is apparently achieved by offering a macroeconomic model that stems from a utility function of a representative agent.

But why should we believe that the ‘deep parameters’ of micro-foundations, the utility functions themselves, are somehow independent of the institutional environment?

You can’t escape Lucas’ critique by plucking a utility function out of the air and giving it to a representative agent unless you believe that utility functions are independent of the social environment.

Which raises the question: how can it be possible for individuals’ preferences, their utility functions, to arise in a social vacuum?

It can’t. The evidence is absolutely clear on this point. Preferences, and even perception, are socially constructed. There simply are no ‘deep parameters’.

The whole micro-foundations exercise has been a waste of time for all involved.

While economists have taken this critique from Lucas seriously, they have generally ignored its logical extension: the performativity of economic analysis. Basically, performativity says that the use of an economic model in society to guide decisions itself changes behaviour, thus changing the environment in which the analysis applies. Or more simply, utility functions will change in response to the use of models built upon utility functions.

The easy way to see this in action is in sports. As soon as one coach creates a play that exploits a common behaviour in other teams, using that play changes the other team’s response, and thus the environment in which the coach’s original analysis was relevant.

You can’t escape any of this logic.

The lesson is that to understand economic phenomena requires a better understanding of institutional environments, and of historical and social context. The micro-foundations approach has merely been an excuse to continue to conceptualise the economy as self-stabilising and in equilibrium in the face of the Lucas Critique, while any rational response would have been to acknowledge the inherent instability of social processes, of which the performativity of economic analysis is itself a part.

Cameron Murray
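
Murray’s performativity point can be made concrete with a toy feedback loop, sketched below in Python. Every number and the functional form are invented for illustration: a policymaker estimates a behavioural parameter, builds a rule on the estimate, and the very use of that rule shifts the behaviour being modelled, so the parameter measured before the rule was adopted no longer describes behaviour once it is in place.

import numpy as np

rng = np.random.default_rng(1)

beta_true = 0.80   # behavioural response to the policy instrument (assumed)
adapt = 0.03       # assumed speed at which behaviour adapts to being exploited
drift = 0.02       # assumed background drift in behaviour

history = []
for t in range(25):
    # The policymaker estimates the current parameter from a noisy observation
    # and sets policy to exploit the estimate ...
    estimate = beta_true + rng.normal(scale=0.01)
    policy = estimate
    # ... and the act of using the model changes the behaviour it describes.
    beta_true = beta_true - adapt * policy + drift
    history.append(round(beta_true, 3))

print("parameter estimated before the rule was adopted:", 0.80)
print("parameter prevailing once the rule is in use    :", history[-1])

This is Murray’s sports example in miniature: once the play built on the estimated pattern is run, the pattern itself changes.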

Reconstructing macroeconomics

6 August, 2015 at 16:27 | Posted in Economics | 2 Comments

Olivier Blanchard says that, currently, DSGE models have “much too much in them to be fully understood”. There is a rationale for studying a model that we do not understand–if and only if it makes predictions that fit the world. If one has such a model that makes reliable predictions, studying it is a not-implausible road to understanding the world, because maybe, just maybe, an understanding of the model will carry an understanding of the world along with it as a bonus. And there is a rationale for taking models we understand and seeing where and how they fit the world in order to help us iterate toward a better model that fits better.

But is there a case for investigating models we (a) do not understand that (b) do not fit the world? Even if we were to reach the point of understanding the model and how it works, what would that gain us?

Brad DeLong

Just questions. And the answers are — No. Nothing.

The paradox of confirmation

6 August, 2015 at 15:58 | Posted in Theory of Science & Methodology | Comments Off on The paradox of confirmation

 

And if you want to know more on the paradox of confirmation, science, and inference, the one book you should read is Peter Lipton’s Inference to the Best Explanation (2nd ed., Routledge, 2004). A truly great book that has influenced my own scientific thinking tremendously.

If you’re looking for a more comprehensive bibliography on Inference to the Best Explanation, here’s a good one. And for those of you who read Swedish, I self-indulgently recommend this.

Re-reading the Lucas & Sargent manifesto

5 August, 2015 at 13:42 | Posted in Economics | 12 Comments

Re-reading the Lucas & Sargent New Classical manifesto ‘After Keynesian Macroeconomics’ (1979), some macroeconomists seem to be überimpressed by its “quality” and “persuasiveness.”

Quality and persuasiveness?

Hmm …

Let’s listen to what James Tobin had to say on the Lucas & Sargent kind of macroeconomic analysis:

They try to explain business cycles solely as problems of information, such as asymmetries and imperfections in the information agents have. Those assumptions are just as arbitrary as the institutional rigidities and inertia they find objectionable in other theories of business fluctuations … I try to point out how incapable the new equilibrium business cycles models are of explaining the most obvious observed facts of cyclical fluctuations … I don’t think that models so far from realistic description should be taken seriously as a guide to policy … I don’t think that there is a way to write down any model which on the one hand respects the possible diversity of agents in taste, circumstances, and so on, and on the other hand also grounds behavior rigorously in utility maximization and which has any substantive content to it.

Or listen to what Willem Buiter has to say about the kind of macroeconomics that Lucas & Sargent inaugurated:

The Monetary Policy Committee of the Bank of England I was privileged to be a ‘founder’ external member of during the years 1997-2000 contained, like its successor vintages of external and executive members, quite a strong representation of academic economists and other professional economists with serious technical training and backgrounds. This turned out to be a severe handicap when the central bank had to switch gears and change from being an inflation-targeting central bank under conditions of orderly financial markets to a financial stability-oriented central bank under conditions of widespread market illiquidity and funding illiquidity. Indeed, the typical graduate macroeconomics and monetary economics training received at Anglo-American universities during the past 30 years or so, may have set back by decades serious investigations of aggregate economic behaviour and economic policy-relevant understanding. It was a privately and socially costly waste of time and other resources.

Most mainstream macroeconomic theoretical innovations since the 1970s (the New Classical rational expectations revolution associated with such names as Robert E. Lucas Jr., Edward Prescott, Thomas Sargent, Robert Barro etc, and the New Keynesian theorizing of Michael Woodford and many others) have turned out to be self-referential, inward-looking distractions at best. Research tended to be motivated by the internal logic, intellectual sunk capital and aesthetic puzzles of established research programmes rather than by a powerful desire to understand how the economy works – let alone how the economy works during times of stress and financial instability. So the economics profession was caught unprepared when the crisis struck.

Contrary to what some überimpressed macroeconomists seem to argue, I would say that the recent economic crisis, and the fact that Chicago economics has had next to nothing to contribute to understanding it, shows that New Classical economics — in Lakatosian terms — is a degenerative research program in dire need of replacement.

Neoclassical economic theory today is in the story-telling business whereby economic theorists create make-believe analogue models of the target system – usually conceived as the real economic system. This modeling activity is considered useful and essential. Since fully-fledged experiments on a societal scale as a rule are prohibitively expensive, ethically indefensible or unmanageable, economic theorists have to substitute experimenting with something else. To understand and explain relations between different entities in the real economy the predominant strategy is to build models and make things happen in these “analogue-economy models” rather than engineering things happening in real economies.

In business cycles theory these models are constructed with the purpose of showing that changes in the supply of money “have the capacity to induce depressions or booms” [Lucas 1988:3] not just in these models, but also in real economies. To do so economists are supposed to imagine subjecting their models to some kind of “operational experiment” and “a variety of reactions”. “In general, I believe that one who claims to understand the principles of flight can reasonably be expected to be able to make a flying machine, and that understanding business cycles means the ability to make them too, in roughly the same sense” [Lucas 1981:8]. To Lucas models are the laboratories of economic theories, and after having made a simulacrum-depression Lucas hopes we find it “convincing on its own terms – that what I said would happen in the [model] as a result of my manipulation would in fact happen” [Lucas 1988:4]. The clarity with which the effects are seen is considered “the key advantage of operating in simplified, fictional worlds” [Lucas 1988:5].

On the flipside lies the fact that “we are not really interested in understanding and preventing depressions in hypothetical [models]. We are interested in our own, vastly more complicated society” [Lucas 1988:5]. But how do we bridge the gulf between model and “target system”? According to Lucas we have to be willing to “argue by analogy from what we know about one situation to what we would like to know about another, quite different situation” [Lucas 1988:5]. Progress lies in the pursuit of the ambition to “tell better and better stories” [Lucas 1988:5], simply because that is what economists do.

We are storytellers, operating much of the time in worlds of make believe. We do not find that the realm of imagination and ideas is an alternative to, or retreat from, practical reality. On the contrary, it is the only way we have found to think seriously about reality. In a way, there is nothing more to this method than maintaining the conviction … that imagination and ideas matter … there is no practical alternative [Lucas 1988:6].

Lucas has applied this mode of theorizing, constructing “make-believe economic systems”, to the age-old question of what causes and constitutes business cycles. According to Lucas the standard for what that means is that one “exhibits understanding of business cycles by constructing a model in the most literal sense: a fully articulated artificial economy, which behaves through time so as to imitate closely the time series behavior of actual economies” [Lucas 1981:219].

To Lucas the business cycle is an inherently systemic phenomenon, basically characterized by conditional co-variations of different time series. The vision is “the possibility of a unified explanation of business cycles, grounded in the general laws governing market economies, rather than in political or institutional characteristics specific to particular countries or periods” [Lucas 1981:218]. To be able to sustain this view and adopt his “equilibrium approach” he has to define the object of study in a very constrained way. Lucas asserts, e.g., that if one wants to get numerical answers “one needs an explicit, equilibrium account of the business cycles” [Lucas 1981:222]. But his arguments for why it necessarily has to be an equilibrium account are not very convincing. The main restriction is that Lucas only deals with purportedly invariable regularities “common to all decentralized market economies” [Lucas 1981:218]. Adopting this definition he can treat business cycles as all alike “with respect to the qualitative behavior of the co-movements among series” [1981:218]. As noted by Hoover [1988:187]:

Lucas’s point is not that all estimated macroeconomic relations are necessarily not invariant. It is rather that, in order to obtain an invariant relation, one must derive the functional form to be estimated from the underlying choices of individual agents. Lucas supposes that this means that one must derive aggregate relations from individual optimization problems taking only tastes and technology as given.

Postulating invariance paves the way for treating various economic entities as stationary stochastic processes (a standard assumption in most modern probabilistic econometric approaches) and for the possible application of “economic equilibrium theory.” The result is that the Lucas business cycle is a rather watered-down version of what is usually connoted when speaking of business cycles.
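
What this watered-down version amounts to in practice can be seen in a stripped-down, purely illustrative calibration exercise: simulate an artificial economy driven by a stationary AR(1) shock, compute a couple of unconditional moments, and compare them with a set of chosen ‘target’ moments. All parameter values and targets below are assumptions made up for the sketch, not estimates of anything.

import numpy as np

rng = np.random.default_rng(2)

# Assumed 'calibrated' parameters of a toy artificial economy.
rho, sigma, amplification = 0.85, 0.007, 1.35
T = 10_000

# A stationary AR(1) technology shock, mechanically amplified into 'output'.
z = np.zeros(T)
for t in range(1, T):
    z[t] = rho * z[t - 1] + rng.normal(scale=sigma)
output = amplification * z

# 'Evaluation' consists of checking a few moments against assumed targets.
print("model: std = %.2f%%, autocorr = %.2f"
      % (100 * output.std(), np.corrcoef(output[1:], output[:-1])[0, 1]))
print("assumed data targets: std = 1.80%, autocorr = 0.85")

Whether clearing this bar tells us anything about how actual economies generate business cycles is, of course, precisely what is at issue.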

Based on the postulates of “self-interest” and “market clearing” Lucas has repeatedly stated that a pure equilibrium method is a necessary intelligibility condition and that disequilibria are somehow “arbitrary” and “unintelligible” [Lucas 1981:225]. Although these might (arguably) be requirements put on models, they are irrelevant and totally without justification vis-à-vis the real-world target system. Why should involuntary unemployment, for example, be considered an unintelligible disequilibrium concept? Given the lack of success of these models when empirically applied, what is unintelligible is rather to persist in this reinterpretation of the ups and downs in business cycles and labour markets as equilibria. To Keynes involuntary unemployment is not equatable to actors on the labour market becoming irrational non-optimizers. It is basically a reduction in the range of working-options open to workers, regardless of any volitional optimality choices made on their part. Involuntary unemployment is excess supply of labour. That the unemployed in Lucas’s business cycle models can only be conceived of as having chosen leisure over work is not a substantive argument about real-world unemployment.

The point at issue [is] whether the concept of involuntary unemployment actually delineates circumstances of economic importance … If the worker’s reservation wage is higher than all offer wages, then he is unemployed. This is his preference given his options. For the new classicals, the unemployed have placed and lost a bet. It is sad perhaps, but optimal [Hoover 1988:59].

Sometimes workers are not employed. That is a real phenomenon and not a “theoretical construct … the task of modern theoretical economics to ‘explain’” [Lucas 1981:243].

All economic theories have to somehow deal with the daunting question of uncertainty and risk. It is “absolutely crucial for understanding business cycles” [Lucas 1981:223]. To be able to practice economics at all, “we need some way … of understanding which decision problem agents are solving” [Lucas 1981:223]. Lucas – in search of a “technical model-building principle” [Lucas 1981:1] – adopts the rational expectations view, according to which agents’ subjective probabilities are identified “with observed frequencies of the events to be forecast” and hence coincide with “true” probabilities. This hypothesis [Lucas 1981:224]

will most likely be useful in situations in which the probabilities of interest concern a fairly well defined recurrent event, situations of ‘risk’ [where] behavior may be explainable in terms of economic theory … In cases of uncertainty, economic reasoning will be of no value … Insofar as business cycles can be viewed as repeated instances of essentially similar events, it will be reasonable to treat agents as reacting to cyclical changes as ‘risk’, or to assume their expectations are rational, that they have fairly stable arrangements for collecting and processing information, and that they utilize this information in forecasting the future in a stable way, free of systematic and easily correctable biases.

To me this seems much like putting the cart before the horse. Instead of adapting the model to the object – which from both ontological and epistemological considerations seems the natural thing to do – Lucas proceeds in the opposite way and chooses to define his object and construct a model solely to suit his own methodological and theoretical preferences. All those – interesting and important – features of business cycles that have anything to do with model-theoretical openness, and a fortiori are not possible to squeeze into the closure of the model, are excluded. One might rightly ask what is left of what we in a common-sense meaning refer to as business cycles. Einstein’s dictum – “everything should be made as simple as possible but not simpler” – comes to mind. Lucas – and neoclassical economics at large – does not heed this apt warning.
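
The narrow domain that Lucas himself concedes in the passage above, recurrent ‘risk’ but not genuine uncertainty, can be illustrated with a small simulation. In the sketch below all probabilities and the timing of the structural break are invented: an agent who identifies subjective probabilities with observed frequencies forecasts well as long as the environment is stable and recurrent, but errs persistently and systematically once the structure shifts.

import numpy as np

rng = np.random.default_rng(3)

# A binary event (say, 'downturn next quarter'). Its true probability is
# stable before an assumed structural break and shifts afterwards.
p_before, p_after, break_at, T = 0.2, 0.6, 150, 300
outcomes = np.concatenate([
    rng.random(break_at) < p_before,
    rng.random(T - break_at) < p_after,
]).astype(float)

# 'Rational expectations' operationalized as forecasting with the observed
# frequency of all past outcomes.
forecasts = np.array([outcomes[:t].mean() if t > 0 else 0.5 for t in range(T)])
errors = outcomes - forecasts

print("mean forecast error before the break: %+.3f" % errors[20:break_at].mean())
print("mean forecast error after the break : %+.3f" % errors[break_at:].mean())

Under stable recurrence the errors wash out; after the break the agent is systematically wrong for a long stretch, which is exactly the kind of situation the rational expectations hypothesis simply assumes away.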

The development of macro-econometrics has, according to Lucas, supplied economists with “detailed, quantitatively accurate replicas of the actual economy”, thereby enabling us to treat policy recommendations “as though they had been experimentally tested” [Lucas 1981:220]. But if the goal of theory is to be able to make accurate forecasts, this “ability of a model to imitate actual behavior” does not give much leverage. What is required is “invariance of the structure of the model under policy variations”. Parametric invariance in an economic model cannot be taken for granted, “but it seems reasonable to hope that neither tastes nor technology vary systematically” [Lucas 1981:220].

The model should enable us to pose counterfactual questions about what would happen if some variable were to change in a specific way. Hence the assumption of structural invariance, which purportedly enables the theoretical economist to do just that. But does it? Lucas appeals to “reasonable hope”, a rather weak justification for a modeler to apply such a far-reaching assumption. To warrant it one would expect an argumentation that this assumption – whether we conceive of it as part of a strategy of “isolation”, “idealization” or “successive approximation” – really establishes a useful relation that we can export or bridge to the target system, the “actual economy.” That argumentation is found neither in Lucas, nor – to my knowledge – in the succeeding neoclassical refinements of his “necessarily artificial, abstract, patently ‘unreal’” analogue economies [Lucas 1981:271]. At most we get what Lucas himself calls “inappropriately maligned” casual empiricism in the form of “the method of keeping one’s eyes open.” That is far from sufficient to warrant any credibility in a model pretending to explain the complex and difficult recurrent phenomena we call business cycles. To provide an empirical “illustration” or a “story” to back up your model does not suffice. There are simply too many competing illustrations and stories that could be exhibited or told.

As Lucas has to admit – complaining about the less than ideal contact between theoretical economics and econometrics – even though the “stories” are (purportedly) getting better and better, “the necessary interaction between theory and fact tends not to take place” [Lucas 1981:11].

The basic assumption of this “precise and rigorous” model therefore cannot be considered anything other than an unsubstantiated conjecture as long as it is not supported by evidence from outside the theory or model. To my knowledge, no decisive empirical evidence has been presented. This is all the more tantalizing since Lucas himself stresses that the presumption “seems a sound one to me, but it must be defended on empirical, not logical grounds” [Lucas 1981:12].

And applying a “Lucas critique” to Lucas’s own model, it is obvious that it too fails. Changing “policy rules” cannot just be presumed not to influence investment and consumption behavior, and a fortiori technology, thereby contradicting the invariance assumption. Technology and tastes cannot live up to the status of an economy’s deep and structurally stable Holy Grail. They too are part and parcel of an ever-changing and open economy. Lucas’s hope of being able to model the economy as “a FORTRAN program” and “gain some confidence that the component parts of the program are in some sense reliable prior to running it” [Lucas 1981:288] therefore seems – from an ontological point of view – totally misdirected. The failure of the attempt to anchor the analysis in the allegedly stable deep parameters “tastes” and “technology” shows that if you neglect ontological considerations pertaining to the target system, reality ultimately kicks back when at last questions of bridging and exportation of model exercises are laid on the table. No matter how precise and rigorous the analysis is, and no matter how hard one tries to cast the argument in “modern mathematical form” [Lucas 1981:7], it does not push science forward one millimeter if it does not stand the acid test of relevance to the target. No matter how clear, precise, rigorous or certain the inferences delivered inside these models are, they do not per se say anything about external validity.

Neoclassical economics has long since given up on the real world and contents itself with proving things about thought-up worlds. Empirical evidence only plays a minor role in economic theory, where models largely function as a substitute for empirical evidence. Hopefully, humbled by the manifest failure of its theoretical pretences, economics will let its one-sided, almost religious insistence on mathematical-deductivist modeling as the only scientific activity worthy of pursuing give way to a methodological pluralism based on ontological considerations rather than formalistic tractability.

References

Hoover, Kevin (1988), The New Classical Macroeconomics. Oxford: Basil Blackwell.

– (2002), Econometrics and reality. In U. Mäki (ed.), Fact and fiction in economics (pp. 152-177). Cambridge: Cambridge University Press.

– (2008), Idealizing Reduction: The Microfoundations of Macroeconomics. Manuscript, 27 May 2008.

Lucas, Robert (1981), Studies in Business-Cycle Theory. Oxford: Basil Blackwell.

– (1986), Adaptive Behavior and Economic Theory. In Hogarth, Robin & Reder, Melvin (eds) Rational Choice (pp. 217-242). Chicago: The University of Chicago Press.

– (1988), What Economists Do.

Syll, Lars (2015), On the use and misuse of theories and models in economics.

Macroeconomic ad hocery

4 August, 2015 at 08:31 | Posted in Economics | 2 Comments

Robert Lucas is well-known for condemning everything that isn’t microfounded rational expectations macroeconomics as “ad hoc” theorizing.

But instead of rather unsubstantiated recapitulations, it would be refreshing and helpful if the Chicago übereconomist — for a change — endeavoured to clarify just what he means by “ad hoc.”

The standard meaning — OED — of the term is “for this particular purpose.” But in the hands of New Classical–Real Business Cycles–New Keynesians it seems to be used more to convey the view that modeling with realist and relevant assumptions is somehow equivalent to basing models on “specifics” rather than the “fundamentals” of individual intertemporal optimization and rational expectations.

This is of course pure nonsense, simply because there is no — as yours truly has argued at length e.g. here — macro behaviour that consistently follows from the NC–RBC–NK microfoundations. The only ones that succumb to ad hoc assumptions here are macroeconomists like Lucas et consortes, who believe that macroeconomic behaviour can be adequately analyzed with a fictitious rational-expectations-optimizing-robot-imitation-representative-agent.

And don’t get me wrong on this. I like good fiction. But that doesn’t include New Classical–Real Business Cycles–New Keynesian fiction. That’s just bad fiction.

Leif GW Persson — this year’s dunce cap

3 August, 2015 at 18:45 | Posted in Economics | Comments Off on Leif GW Persson — this year’s dunce cap

Every time I read yet another explanation of Greece’s economic crisis, I think of his tale of the two cunning tailors who are to sew new clothes for their Emperor. Clothes made of a magical fabric so remarkable that less gifted people cannot even see it. A naked Emperor who struts in all his splendour before an impressed audience, until he passes the little child who points out that he is in fact walking around naked.

It is no harder than that. Look at the question with the child’s unclouded eyes and never mind all these economic experts who, constantly but in different ways, keep expounding on the same thing …

The easiest way to win the voters’ approval: promise them the moon in the form of wages, holidays, shorter working hours, pensions and public services they actually cannot afford, while making sure to grab the biggest piece of the cake for yourself. When Greece’s politicians travel to Brussels to wrangle with their creditors, they do not go by public transport, with a packed lunch and a night at a youth hostel, which would be both logical and reasonable considering their situation and how they ended up in it.

On the contrary – in accordance with established Greek practice, and for yet another few tens of borrowed millions – that walk to Canossa is made by private jets and helicopters, limousines, the biggest suites at the best hotels, lavish dinners and lunches, and every minister naturally brings along the usual entourage of a dozen or so assistants whose actual duties escape all but a Greek assessment. Tailor-made suits, Rolex watches and not a hair shirt as far as the eye can see.

Leif GW Persson

Good heavens! And this sort of frogs’ plopping and ducks’ splashing is what one has to read in the year 2015. It makes you clutch your forehead!

No, Leif GW, sometimes it really isn’t such a bad idea to know a little economics if you are going to write about it. But it would probably be better if you stuck to your crime novels – absolutely superb reading – and left the economics to people who know what they are talking about.

RBC theory — willfully silly obscurantism

2 August, 2015 at 21:30 | Posted in Economics | 10 Comments

Lucas and his school … went even further down the equilibrium rabbit hole, notably with real business cycle theory. And here is where the kind of willful obscurantism Romer is after became the norm. I wrote last year about the remarkable failure of RBC theorists ever to offer an intuitive explanation of how their models work, which I at least hinted was willful:

“But the RBC theorists never seem to go there; it’s right into calibration and statistical moments, with never a break for intuition. And because they never do the simple version, they don’t realize (or at any rate don’t admit to themselves) how fundamentally silly the whole thing sounds, how much it’s at odds with lived experience.”

Paul Krugman

Yours truly, of course, totally agrees with Paul on Lucas’ rabbit hole freshwater school.

And so does Truman F. Bewley:

Lucas and Rapping (1969) claim that cyclical increases in unemployment occur when workers quit their jobs because wages or salaries fall below expectations …

According to this explanation, when wages are unusually low, people become unemployed in order to enjoy free time, substituting leisure for income at a time when they lose the least income …

According to the theory, quits into unemployment increase during recessions, whereas historically quits decrease sharply and roughly half of unemployed workers become jobless because they are laid off … During the recession I studied, people were even afraid to change jobs because new ones might prove unstable and lead to unemployment …

If wages and salaries hardly ever fall, the intertemporal substitution theory is widely applicable only if the unemployed prefer jobless leisure to continued employment at their old pay. However, the attitude and circumstances of the unemployed are not consistent with their having made this choice …

In real business cycle theory, unemployment is interpreted as leisure optimally selected by workers, as in the Lucas-Rapping model. It has proved difficult to construct business cycle models consistent with this assumption and with real wage fluctuations as small as they are in reality, relative to fluctuations in employment.
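
Bewley’s last point, that real-wage movements are far too small to account for employment movements through voluntary intertemporal substitution, can be put in back-of-the-envelope form. The numbers below are rough, purely illustrative orders of magnitude and not estimates:

# With a constant-elasticity labour supply, hours respond to the real wage as
#   %change in hours  ~  eta * %change in wage,
# where eta is the intertemporal (Frisch) labour supply elasticity.
hours_swing = 3.0    # assumed typical cyclical swing in hours worked, percent
wage_swing  = 1.0    # assumed typical cyclical swing in real wages, percent

eta_required = hours_swing / wage_swing
eta_micro    = 0.5   # ballpark magnitude often attributed to micro studies (assumed)

print("elasticity needed to explain the swings:", eta_required)
print("assumed micro ballpark                 :", eta_micro)
# The gap between the two numbers is Bewley's difficulty: getting hours to
# swing this much through voluntary leisure choices requires an implausibly
# elastic labour supply.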

This is, of course, only what you would expect of New Classical Chicago economists.

So, what’s the problem?

The problem is that, sadly enough, this extraterrestrial view of unemployment is actually shared by so-called New Keynesians — a school of which Krugman considers himself a member — whose microfounded dynamic stochastic general equilibrium models cannot even incorporate such a basic fact of reality as involuntary unemployment!

Of course, working with microfounded representative-agent models, this should come as no surprise. If one representative agent is employed, all representative agents are. The only kind of unemployment that can occur is voluntary unemployment, since it is nothing but the adjustment of hours of work that these optimizing agents make in order to maximize their utility.

In the basic DSGE models used by most ‘New Keynesians’, the labour market is always cleared – responding to a changing interest rate, expected lifetime income, or real wages, the representative agent maximizes the utility function by varying her labour supply, money holdings and consumption over time. Most importantly – if the real wage somehow deviates from its “equilibrium value,” the representative agent adjusts her labour supply, so that when the real wage is higher than its “equilibrium value,” labour supply is increased, and when the real wage is below its “equilibrium value,” labour supply is decreased.

In this model world, unemployment is always an optimal response to changes in labour market conditions. Hence, unemployment is totally voluntary. To be unemployed is something one optimally chooses to be.
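
A minimal sketch of this mechanism, with an assumed quasi-linear utility function (the functional form and all parameter values are illustrative, not taken from any particular New Keynesian model): with utility u(c, h) = c - h^(1 + 1/eta)/(1 + 1/eta) and budget constraint c = w*h, the first-order condition gives optimal hours h* = w^eta, so hours fall whenever the real wage falls below its ‘equilibrium value’, and that fall is the only ‘unemployment’ the model knows.

# Representative-agent labour supply under an assumed quasi-linear utility:
#   u(c, h) = c - h**(1 + 1/eta) / (1 + 1/eta),   with budget c = w * h.
# First-order condition:  w = h**(1/eta)   =>   h* = w**eta.
eta = 1.0                          # assumed labour supply elasticity

def hours_supplied(real_wage, eta=eta):
    """Optimal hours chosen by the representative agent."""
    return real_wage ** eta

for w in (1.10, 1.00, 0.90):       # wage above, at, and below its 'equilibrium value'
    print("real wage %.2f -> optimal hours %.2f" % (w, hours_supplied(w)))
# When the real wage dips below 1.00 the agent optimally works less; the
# resulting drop in hours is leisure chosen by the agent, never an
# involuntary state. Involuntary unemployment cannot even be expressed here.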

The final court of appeal for macroeconomic models is the real world.

If substantive questions about the real world are being posed, it is the formalistic-mathematical representations utilized to analyze them that have to match reality, not the other way around.

To Keynes this was self-evident. But obviously not so to New Classical and ‘New Keynesian’ economists.
