Rigour — a poor substitute for relevance and realism

27 July, 2014 at 18:47 | Posted in Economics | 1 Comment

What is science? One brief definition runs: “A systematic knowledge of the physical or material world.” Most definitions emphasize the two elements in this definition: (1) “systematic knowledge” about (2) the real world. Without pushing this definitional question to its metaphysical limits, I merely want to suggest that if economics is to be a science, it must not only develop analytical tools but must also apply them to a world that is now observable or that can be made observable through improved methods of observation and measurement. Or in the words of the Hungarian mathematical economist Janos Kornai, “In the real sciences, the criterion is not whether the proposition is logically true and tautologically deducible from earlier assumptions. The criterion of ‘truth’ is, whether or not the proposition corresponds to reality” …


One of our most distinguished historians of economic thought, George Stigler, has stated that: “The dominant influence upon the working range of economic theorists is the set of internal values and pressures of the discipline. The subjects of study are posed by the unfolding course of scientific developments.” He goes on to add: “This is not to say that the environment is without influence …” But, he continues, “whether a fact or development is significant depends primarily on its relevance to current economic theory.” What a curious relating of rigor to relevance! Whether the real world matters depends presumably on “its relevance to current economic theory.” Many if not most of today’s economic theorists seem to agree with this ordering of priorities …

Today, rigor competes with relevance in macroeconomic and monetary theory, and in some lines of development macro and monetary theorists, like many of their colleagues in micro theory, seem to consider relevance to be more or less irrelevant … The theoretical analysis in much of this literature rests on assumptions that also fly in the face of the facts … Another related recent development in which theory proceeds with impeccable logic from unrealistic assumptions to conclusions that contradict the historical record, is the recent work on rational expectations …

I have scolded economists for what I think are the sins that too many of them commit, and I have tried to point the way to at least partial redemption. This road to salvation will not be an easy one for those who have been seduced by the siren of mathematical elegance or those who all too often seek to test unrealistic models without much regard for the quality or relevance of the data they feed into their equations. But let us all continue to worship at the altar of science. I ask only that our credo be: “relevance with as much rigor as possible,” and not “rigor regardless of relevance.” And let us not be afraid to ask — and to try to answer the really big questions.

Robert A. Gordon

Nick Rowe on the history of “New Keynesianism”

27 July, 2014 at 09:52 | Posted in Economics | Leave a comment

Stage 0. Late 1960s. The Phelps volume, and Milton Friedman’s paper (pdf), both thinking about the microfoundations of the Phillips Curve, the difference between actual and expected inflation, and the role of monetary policy. This was the ancestral homeland of both New Keynesian and New Classical macroeconomics, which could not be distinguished at this stage …

Stage 1. Mid 1970s. Now we see the difference. A distinct New Keynesian approach emerges. New Keynesians assume that prices (and/or wages) are set in advance, at expected market-clearing levels, before the shocks are known. This means that monetary policy can respond to those shocks, and help prevent undesirable fluctuations in output and employment. Even under rational expectations …

Stage 2. Late 1980s. New Keynesians introduce monopolistic competition. This has two big advantages. First, you can now easily model price-setting firms as choosing a price to maximize profit… Second, because if a positive demand shock hits a perfectly competitive market, where prices are fixed at what was the expected market-clearing level, firms would ration sales, and you get a drop in output and employment, rather than a boom. And the world doesn’t seem to look like that.

Stage 3. Early 2000s. New Keynesians introduce monetary policy without money. They become Neo-Wicksellians … There were two advantages to doing this. First, it let them model households’ and firms’ choices without needing to model the demand for money and the supply of money. Second, it made it easier to talk to central bankers who already thought of central banks as setting interest rates.

Which brings us to the End of History.

What about microfoundations? Well, it was an underlying theme, but there is nothing distinctively New Keynesian about that theme …

Likewise with rational expectations. New Keynesians just went with the flow.

Nick Rowe

Although I find Rowe’s macro history interesting — and to a large extent in line with the one I give in my own history of economics books — I also think the fact that on microfoundations “there is nothing distinctively New Keynesian” and that “New Keynesians just went with the flow” on that theme, deserves a comment.

Where “New Keynesian” economists think that they can rigorously deduce the aggregate effects of (representative) actors with their reductionist microfoundational methodology, they have to turn a blind eye to the emergent properties that characterize all open social systems – including the economic system. The interaction between animal spirits, trust, confidence, institutions, etc., cannot be deduced or reduced to a question answerable at the individual level. Macroeconomic structures and phenomena have to be analyzed also on their own terms. And although one may easily agree with e.g. Paul Krugman’s emphasis on simple models, the simplifications used may have to be simplifications adequate for macroeconomics and not those adequate for microeconomics.

In microeconomics we know that aggregation really presupposes homothetic and identical preferences, something that almost never holds in real economies. The results derived from these assumptions are therefore not robust and do not capture the underlying mechanisms at work in any real economy. And models that are critically based on particular and odd assumptions – and are neither robust nor congruent with real world economies – are of questionable value.
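To be precise about what “homothetic and identical preferences” is doing here: this is the standard Gorman aggregation condition from the textbook literature, which can be stated compactly as follows (a reference statement; the notation is mine, not part of the original post):

```latex
% Gorman form: a representative consumer exists only if every individual i
% has an indirect utility function linear in income m_i, with a common
% income coefficient b(p):
\[
v_i(p, m_i) = a_i(p) + b(p)\,m_i , \qquad i = 1, \dots, n .
\]
% Only then does aggregate demand depend on total income alone, independently
% of how that income is distributed across the n individuals:
\[
\sum_{i=1}^{n} x_i(p, m_i) = X\Big(p, \textstyle\sum_i m_i\Big) .
\]
```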

Even if economies naturally presuppose individuals, it does not follow that we can infer or explain macroeconomic phenomena solely from knowledge of these individuals. Macroeconomics is to a large extent emergent and cannot be reduced to a simple summation of micro-phenomena. Moreover, even these microfoundations aren’t immutable. The “deep parameters” of “New Keynesian” DSGE models – “tastes” and “technology” – are not really the bedrock of constancy that they believe (pretend) them to be.

So — I cannot concur with Paul Krugman, Mike Woodford, Greg Mankiw and other sorta-kinda “New Keynesians” when they more or less try to reduce Keynesian economics to “intertemporal maximization modified with sticky prices and a few other deviations”. And I’m certainly not the only one thinking in these terms:

In a world that is importantly indeterminate — a world in which even some central things, such as the ‘rate and direction’ of innovation, and thus of productivity advances, are not predetermined — some models are better than others in outlining the structure of relationships. But even our models cannot offer forecasts of the future levels of the real price … and the real wage … in relation to present levels … As Keynes, when writing on this point, put it, ‘we simply do not know’ …

Does this finding mean that the ‘natural’ level of (un)employment no longer exists? That depends on what we mean by ‘natural.’ If we mean some immutable central tendency, then it never existed … It is ironic that the originators of models of the natural rate, whose formulations did not explicitly exclude that background expectations of future capital goods prices and future wages might be quite wrong, stand accused of not appreciating that any sort of economic equilibrium is to some extent a social phenomenon—a creature of beliefs, optimism, the policy climate, and so forth—while today’s crude Keynesians, despite their mechanical deterministic approach, wrap themselves in the mantle of Keynes, who, with his profound sense of indeterminacy and, consequently, radical uncertainty, was worlds away from their thinking.

Edmund Phelps

Macroeconomic just-so stories

26 July, 2014 at 21:13 | Posted in Economics | Leave a comment

Thus your standard New Keynesian model will use Calvo pricing and model the current inflation rate as tightly coupled to the present value of expected future output gaps. Is this a requirement anyone really wants to put on the model intended to help us understand the world that actually exists out there? Thus your standard New Keynesian model will calculate the expected path of consumption as the solution to some Euler equation plus an intertemporal budget constraint, with current wealth and the projected real interest rate path as the only factors that matter. This is fine if you want to demonstrate that the model can produce macroeconomic pathologies. But is it a not-stupid thing to do if you want your model to fit reality?

I remember attending the first lecture in Tom Sargent’s evening macroeconomics class back when I was an undergraduate: a very smart man from whom I have learned an enormous amount, and well deserving of his Nobel Prize. But…

He said … we were going to build a rigorous, microfounded model of the demand for money: We would assume that everyone lived for two periods, worked in the first period when they were young and sold what they produced to the old, held money as they aged, and then when they were old used their money to buy the goods newly produced by the new generation of young. Tom called this “microfoundations” and thought it gave powerful insights into the demand for money that you could not get from money-in-the-utility-function models.

I thought that it was a just-so story, and that whatever insights it purchased for you were probably not things you really wanted to buy. I thought it was dangerous to presume that you understood something because you had “microfoundations” when those microfoundations were wrong. After all, Ptolemaic astronomy had microfoundations: Mercury moved more rapidly than Saturn because the Angel of Mercury beat his wings more rapidly than the Angel of Saturn and because Mercury was lighter than Saturn…

Brad DeLong
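For readers who want to see exactly what DeLong is pointing at, the two modelling choices he mentions boil down to two textbook equations, written here in standard New Keynesian notation (a compact sketch added for reference, not a derivation from the post):

```latex
% Calvo pricing delivers the New Keynesian Phillips curve,
%   \pi_t = \beta E_t \pi_{t+1} + \kappa x_t ,
% which, iterated forward, ties current inflation to the present value of
% expected future output gaps x_{t+k}:
\[
\pi_t = \kappa \sum_{k=0}^{\infty} \beta^{k} E_t\, x_{t+k} .
\]
% The log-linearized consumption Euler equation makes the expected consumption
% path depend only on the projected real interest rate path (current wealth
% enters through the intertemporal budget constraint):
\[
c_t = E_t\, c_{t+1} - \tfrac{1}{\sigma}\big( i_t - E_t\, \pi_{t+1} - \rho \big) .
\]
```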

On the value of search models

25 July, 2014 at 14:08 | Posted in Economics | 1 Comment

The search-and-matching model works like this: one class of agents, “workers,” is either searching for a job or employed, while another class of agents, “firms,” is either vacant, meaning it has a vacancy posted, or filled, in which case it employs a worker and together they hum along productively. In the earliest formulation of the model, the only economic decision made by either agent was whether a firm would post a vacancy in an attempt to match with a worker or would choose not to, thus remaining inactive. Otherwise, both searching workers and vacant firms mindlessly wait until they match up, then commence a productive relationship that lasts until their match spontaneously dissolves. Then the worker goes back to searching and the firm makes its decision about whether to post a vacancy or not once again.

The reason why search-and-matching labor market models have something to say about the recent history of the labor market is because they are a good deal richer than the simple search-based story, which is the reason why Federal Reserve Bank of Richmond economist Karthik Athreya, whose book sparked this discussion, said that “search is not really about searching.” Specifically, they allow for alternative theories of wage-setting, a factual timeline for unemployment spells and what determines their duration, a rich set of labor market outcomes beyond employment and unemployment, and an implementable notion of power that is often a critical missing piece of economic modeling.

Like all economic models, the search-and-matching model is a simplification, even a ludicrous one … So do we search theorists have a big problem?

Notice that my summary of the model left out one big thing: how the fruits of the productive employment relationship are split between the worker and the firm. This is by far the biggest controversy in the field of search theory. The assumption made by the earliest search-and-matching models is that the “surplus” the two agents generate is split between the parties in the optimal way, where the optimality concept is defined within the model but with some relationship to a more general intuition about what each party would want (that optimal way is known as the “Nash Bargain” after the Nobel-Prize-winning mathematical theorist John Nash) …

The problem is that this theory of wage setting is an empirical disaster. Not only is it inconsistent with investigations into how actual wages are actually set, but it generates false predictions about unemployment spells that crucially fail to line up with what happens to unemployment during recessions (it goes up, and it stays high for a long time, when the optimal theory of wages says that wages should do the adjusting). Refinements that make the theory with Nash Bargaining consistent with the data on unemployment in recessions yield their own big empirical problem: those refinements imply that being unemployed isn’t really that bad for workers, which everyone who is sentient knows to be untrue.

So the search-and-matching model has a crazy theory about how wages are set, and that makes it a crazy model of how labor markets work, right? No. What the search-and-matching theory has, and what its alternatives lack for the most part, is indeterminacy about how wages are set. The “Nash Bargain” theory is optimal, but it’s not necessary—other wage-setting assumptions can be used to resolve the indeterminacy. And if economists can get their minds around the idea that the “market solution” is not always optimal then they can make real headway with the search-and-matching approach precisely because it’s consistent with those alternatives.

Marshall Steinbaum

[h/t Brad DeLong & Dwayne Woods]
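For concreteness, here is a minimal numerical sketch of the flow structure and the surplus split Steinbaum describes. All parameter values are illustrative assumptions of mine, not estimates from any search model in the literature:

```python
# Minimal sketch of a Diamond-Mortensen-Pissarides-style flow model.
# All numbers below are illustrative assumptions, not calibrated estimates.

s = 0.02   # monthly separation rate: matches dissolve spontaneously
f = 0.30   # monthly job-finding rate for a searching worker

# In steady state, the flow into unemployment s*(1-u) equals the flow
# out f*u, so unemployment is pinned down by the two transition rates:
u_star = s / (s + f)
print(f"steady-state unemployment rate: {u_star:.3f}")  # 0.062

# The "Nash Bargain": the joint match surplus S is split so the worker
# receives the share beta (bargaining power) and the firm the remainder.
beta = 0.5   # worker's bargaining power (assumed)
S = 1.0      # joint surplus of a productive match (normalized)
worker_share, firm_share = beta * S, (1 - beta) * S
print(f"worker gets {worker_share}, firm gets {firm_share}")
```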

James Heckman — the ultimate take down of teflon-coated defenders of rational expectations

24 July, 2014 at 21:16 | Posted in Economics | 4 Comments


James Heckman, winner of the “Nobel Prize” in economics (2000), did an interview with John Cassidy in 2010. It’s an interesting read (Cassidy’s words in italics):

What about the rational-expectations hypothesis, the other big theory associated with modern Chicago? How does that stack up now?

I could tell you a story about my friend and colleague Milton Friedman. In the nineteen-seventies, we were sitting in the Ph.D. oral examination of a Chicago economist who has gone on to make his mark in the world. His thesis was on rational expectations. After he’d left, Friedman turned to me and said, “Look, I think it is a good idea, but these guys have taken it way too far.”

It became a kind of tautology that had enormously powerful policy implications, in theory. But the fact is, it didn’t have any empirical content. When Tom Sargent, Lars Hansen, and others tried to test it using cross equation restrictions, and so on, the data rejected the theories. There was a certain section of people that really got carried away. It became quite stifling.

What about Robert Lucas? He came up with a lot of these theories. Does he bear responsibility?

Well, Lucas is a very subtle person, and he is mainly concerned with theory. He doesn’t make a lot of empirical statements. I don’t think Bob got carried away, but some of his disciples did. It often happens. The further down the food chain you go, the more the zealots take over.

What about you? When rational expectations was sweeping economics, what was your reaction to it? I know you are primarily a micro guy, but what did you think?

What struck me was that we knew Keynesian theory was still alive in the banks and on Wall Street. Economists in those areas relied on Keynesian models to make short-run forecasts. It seemed strange to me that they would continue to do this if it had been theoretically proven that these models didn’t work.

What about the efficient-markets hypothesis? Did Chicago economists go too far in promoting that theory, too?

Some did. But there is a lot of diversity here. You can go office to office and get a different view.

[Heckman brought up the memoir of the late Fischer Black, one of the founders of the Black-Scholes option-pricing model, in which he says that financial markets tend to wander around, and don’t stick closely to economics fundamentals.]

[Black] was very close to the markets, and he had a feel for them, and he was very skeptical. And he was a Chicago economist. But there was an element of dogma in support of the efficient-market hypothesis. People like Raghu [Rajan] and Ned Gramlich [a former governor of the Federal Reserve, who died in 2007] were warning something was wrong, and they were ignored. There was sort of a culture of efficient markets—on Wall Street, in Washington, and in parts of academia, including Chicago.

What was the reaction here when the crisis struck?

Everybody was blindsided by the magnitude of what happened. But it wasn’t just here. The whole profession was blindsided. I don’t think Joe Stiglitz was forecasting a collapse in the mortgage market and large-scale banking collapses.

So, today, what survives of the Chicago School? What is left?

I think the tradition of incorporating theory into your economic thinking and confronting it with data—that is still very much alive. It might be in the study of wage inequality, or labor supply responses to taxes, or whatever. And the idea that people respond rationally to incentives is also still central. Nothing has invalidated that—on the contrary.

So, I think the underlying ideas of the Chicago School are still very powerful. The basis of the rocket is still intact. It is what I see as the booster stage—the rational-expectation hypothesis and the vulgar versions of the efficient-markets hypothesis that have run into trouble. They have taken a beating—no doubt about that. I think that what happened is that people got too far away from the data, and confronting ideas with data. That part of the Chicago tradition was neglected, and it was a strong part of the tradition.

When Bob Lucas was writing that the Great Depression was people taking extended vacations—refusing to take available jobs at low wages—there was another Chicago economist, Albert Rees, who was writing in the Chicago Journal saying, No, wait a minute. There is a lot of evidence that this is not true.

Milton Friedman—he was a macro theorist, but he was less driven by theory and by the desire to construct a single overarching theory than by attempting to answer empirical questions. Again, if you read his empirical books they are full of empirical data. That side of his legacy was neglected, I think.

When Friedman died, a couple of years ago, we had a symposium for the alumni devoted to the Friedman legacy. I was talking about the permanent income hypothesis; Lucas was talking about rational expectations. We have some bright alums. One woman got up and said, “Look at the evidence on 401k plans and how people misuse them, or don’t use them. Are you really saying that people look ahead and plan ahead rationally?” And Lucas said, “Yes, that’s what the theory of rational expectations says, and that’s part of Friedman’s legacy.” I said, “No, it isn’t. He was much more empirically minded than that.” People took one part of his legacy and forgot the rest. They moved too far away from the data.

 

Yes indeed, they certainly “moved too far away from the data.”

In one of the better-known and highly respected evaluation reviews, Michael Lovell (1986) concluded:

it seems to me that the weight of empirical evidence is sufficiently strong to compel us to suspend belief in the hypothesis of rational expectations, pending the accumulation of additional empirical evidence.

And this is how Nikolay Gertchev summarizes studies on the empirical correctness of the hypothesis:

More recently, it even has been argued that the very conclusions of dynamic models assuming rational expectations are contrary to reality: “the dynamic implications of many of the specifications that assume rational expectations and optimizing behavior are often seriously at odds with the data” (Estrella and Fuhrer 2002, p. 1013). It is hence clear that if taken as an empirical behavioral assumption, the RE hypothesis is plainly false; if considered only as a theoretical tool, it is unfounded and self-contradictory.

For even more on the issue, permit me to self-indulgently recommend reading my article Rational expectations — a fallacious foundation for macroeconomics in a non-ergodic world in real-world economics review no. 62.

Chicago Follies (XI)

22 July, 2014 at 23:48 | Posted in Economics | 2 Comments

In their latest book, Think Like a Freak, co-authors Steven Levitt and Stephen Dubner tell a story about meeting David Cameron in London before he was Prime Minister. They told him that the U.K.’s National Health Service — free, unlimited, lifetime health care — was laudable but didn’t make practical sense.

“We tried to make our point with a thought experiment,” they write. “We suggested to Mr. Cameron that he consider a similar policy in a different arena. What if, for instance…everyone were allowed to go down to the car dealership whenever they wanted and pick out any new model, free of charge, and drive it home?”

Rather than seeing the humor and realizing that health care is just like any other part of the economy, Cameron abruptly ended the meeting, demonstrating one of the risks of ‘thinking like a freak,’ Dubner says in the accompanying video.

“Cameron has been open to [some] inventive thinking but if you start to look at things in a different way you’ll get some strange looks,” he says. “Tread with caution.”

So what do Dubner and Levitt make of the Affordable Care Act, aka Obamacare, which has been described as a radical rethinking of America’s health care system?

“I do not think it’s a good approach at all,” says Levitt, a professor of economics at the University of Chicago. “Fundamentally with health care, until people have to pay for what they’re buying it’s not going to work. Purchasing health care is almost exactly like purchasing any other good in the economy. If we’re going to pretend there’s a market for it, let’s just make a real market for it.”

Aaron Task

Portraying health care as “just like any other part of the economy” is of course nothing but total horseshit. So, instead of “thinking like a freak,” why not, e.g., read what Kenneth Arrow wrote on the issue of medical care back in 1963?

Under ideal insurance the patient would actually have no concern with the informational inequality between himself and the physician, since he would only be paying by results anyway, and his utility position would in fact be thoroughly guaranteed. In its absence he wants to have some guarantee that at least the physician is using his knowledge to the best advantage. This leads to the setting up of a relationship of trust and confidence, one which the physician has a social obligation to live up to … The social obligation for best practice is part of the commodity the physician sells, even though it is a part that is not subject to thorough inspection by the buyer.

One consequence of such trust relations is that the physician cannot act, or at least appear to act, as if he is maximizing his income at every moment of time. As a signal to the buyer of his intentions to act as thoroughly in the buyer’s behalf as possible, the physician avoids the obvious stigmata of profit-maximizing … The very word, ‘profit’ is a signal that denies the trust relation.

Kenneth Arrow, “Uncertainty and the Welfare Economics of Medical Care”. American Economic Review, 53 (5).

The Sonnenschein-Mantel-Debreu results after forty years

21 July, 2014 at 16:39 | Posted in Economics | 1 Comment

Along with the Arrow-Debreu existence theorem and some results on regular economies, SMD theory fills in many of the gaps we might have in our understanding of general equilibrium theory …

It is also a deeply negative result. SMD theory means that assumptions guaranteeing good behavior at the microeconomic level do not carry over to the aggregate level or to qualitative features of the equilibrium. It has been difficult to make progress on the elaborations of general equilibrium theory that were put forth in Arrow and Hahn 1971 …

Given how sweeping the changes wrought by SMD theory seem to be, it is understandable that some very broad statements about the character of general equilibrium theory were made. Fifteen years after General Competitive Analysis, Arrow (1986) stated that the hypothesis of rationality had few implications at the aggregate level. Kirman (1989) held that general equilibrium theory could not generate falsifiable propositions, given that almost any set of data seemed consistent with the theory. These views are widely shared. Bliss (1993, 227) wrote that the “near emptiness of general equilibrium theory is a theorem of the theory.” Mas-Colell, Michael Whinston, and Jerry Green (1995) titled a section of their graduate microeconomics textbook “Anything Goes: The Sonnenschein-Mantel-Debreu Theorem.” There was a realization of a similar gap in the foundations of empirical economics. General equilibrium theory “poses some arduous challenges” as a “paradigm for organizing and synthesizing economic data” so that “a widely accepted empirical counterpart to general equilibrium theory remains to be developed” (Hansen and Heckman 1996). This seems to be the now-accepted view thirty years after the advent of SMD theory …

S. Abu Turab Rizvi

And so what? Why should we care about Sonnenschein-Mantel-Debreu?

Because Sonnenschein-Mantel-Debreu ultimately explains why New Classical, Real Business Cycles, Dynamic Stochastic General Equilibrium (DSGE) and “New Keynesian” microfounded macromodels are such bad substitutes for real macroeconomic analysis!

These models try to describe and analyze complex and heterogeneous real economies with a single rational-expectations-robot-imitation-representative-agent. That is, with something that has absolutely nothing to do with reality. And — worse still — something that is not even amenable to the kind of general equilibrium analysis that they are thought to give a foundation for, since Hugo Sonnenschein (1972), Rolf Mantel (1976) and Gérard Debreu (1974) unequivocally showed that there exist no assumptions on individuals that would guarantee either stability or uniqueness of the equilibrium solution.
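What Sonnenschein, Mantel, and Debreu actually proved can be stated compactly (a standard textbook formulation, added here for reference; the notation is mine):

```latex
% Let z(p) be ANY function on the price simplex (bounded away from its
% boundary) satisfying only
%   (i)   continuity,
%   (ii)  homogeneity of degree zero:  z(\lambda p) = z(p),
%   (iii) Walras's law:  p \cdot z(p) = 0.
% Then there exists an exchange economy, with as many well-behaved
% (rational, utility-maximizing) consumers as there are goods, whose
% aggregate excess demand coincides with z:
\[
Z(p) \;=\; \sum_{i=1}^{n} z_i(p) \;=\; z(p)
\qquad \text{for all } p \in \Delta_{\varepsilon} .
\]
% Micro-level rationality thus places essentially no restrictions on
% aggregate excess demand, so neither uniqueness nor stability of the
% equilibrium follows from it.
```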

Opting for cloned representative agents that are all identical is of course not a real solution to the fallacy of composition that the Sonnenschein-Mantel-Debreu theorem points to. Representative agent models are — as I have argued at length here — rather an evasion whereby issues of distribution, coordination, heterogeneity — everything that really defines macroeconomics — are swept under the rug.

Of course, most macroeconomists know that to use a representative agent is a flagrantly illegitimate method of ignoring real aggregation issues. They keep on with their business, nevertheless, just because it significantly simplifies what they are doing. It reminds one — not a little — of the drunkard who has lost his keys in some dark place and deliberately chooses to look for them under a neighbouring street light just because it is easier to see there …

Austrian Newspeak

21 July, 2014 at 14:49 | Posted in Economics | 1 Comment

I see that Robert Murphy of the Mises Institute has taken the time to pen a thoughtful critique of my gentle admonishment of followers of the school of quasi-economic thought commonly known as “Austrianism” …

Much of my original article discussed the failed Austrian prediction that QE would cause inflation (i.e., a rise in the general level of consumer prices). Robert reiterates four standard Austrian defenses:

1. Consumer prices rose more than the official statistics suggest.

2. Asset prices rose.

3. “Inflation” doesn’t mean “a rise in the general level of consumer prices,” it means “an increase in the monetary base”, so QE is inflation by definition.

4. Austrians do not depend on evidence to refute their theories; the theories are deduced from pure logic.

Noah Smith

Makes me think — I wonder why — of Keynes’s review of Austrian übereconomist Friedrich von Hayek’s Prices and Production:

The book, as it stands, seems to me to be one of the most frightful muddles I have ever read, with scarcely a sound proposition in it beginning with page 45, and yet it remains a book of some interest, which is likely to leave its mark on the mind of the reader. It is an extraordinary example of how, starting with a mistake, a remorseless logician can end up in bedlam …

J.M. Keynes, Economica 34 (1931)

Expected utility theory

21 July, 2014 at 11:58 | Posted in Economics | Leave a comment

In Matthew Rabin’s modern classic Risk Aversion and Expected-Utility Theory: A Calibration Theorem it is forcefully and convincingly shown that expected utility theory does not explain actual behaviour and choices.

What is still surprising, however, is that although expected utility theory obviously is descriptively inadequate and doesn’t pass the Smell Test, colleagues all over the world gladly continue to use it, as though its deficiencies were unknown or unheard of.

That cannot be the right attitude when facing scientific anomalies. When models are plainly wrong, you’d better replace them!

Rabin writes:

Using expected-utility theory, economists model risk aversion as arising solely because the utility function over wealth is concave. This diminishing-marginal-utility-of-wealth theory of risk aversion is psychologically intuitive, and surely helps explain some of our aversion to large-scale risk: We dislike vast uncertainty in lifetime wealth because a dollar that helps us avoid poverty is more valuable than a dollar that helps us become very rich.

Yet this theory also implies that people are approximately risk neutral when stakes are small. Arrow (1971, p. 100) shows that an expected-utility maximizer with a differentiable utility function will always want to take a sufficiently small stake in any positive-expected-value bet. That is, expected-utility maximizers are (almost everywhere) arbitrarily close to risk neutral when stakes are arbitrarily small. While most economists understand this formal limit result, fewer appreciate that the approximate risk-neutrality prediction holds not just for negligible stakes, but for quite sizable and economically important stakes. Economists often invoke expected-utility theory to explain substantial (observed or posited) risk aversion over stakes where the theory actually predicts virtual risk neutrality. While not broadly appreciated, the inability of expected-utility theory to provide a plausible account of risk aversion over modest stakes has become oral tradition among some subsets of researchers, and has been illustrated in writing in a variety of different contexts using standard utility functions.

In this paper, I reinforce this previous research by presenting a theorem which calibrates a relationship between risk attitudes over small and large stakes. The theorem shows that, within the expected-utility model, anything but virtual risk neutrality over modest stakes implies manifestly unrealistic risk aversion over large stakes. The theorem is entirely “non-parametric”, assuming nothing about the utility function except concavity. In the next section I illustrate implications of the theorem with examples of the form “If an expected-utility maximizer always turns down modest-stakes gamble X, she will always turn down large-stakes gamble Y.” Suppose that, from any initial wealth level, a person turns down gambles where she loses $100 or gains $110, each with 50% probability. Then she will turn down 50-50 bets of losing $1,000 or gaining any sum of money. A person who would always turn down 50-50 lose $1,000/gain $1,050 bets would always turn down 50-50 bets of losing $20,000 or gaining any sum. These are implausible degrees of risk aversion. The theorem not only yields implications if we know somebody will turn down a bet for all initial wealth levels. Suppose we knew a risk-averse person turns down 50-50 lose $100/gain $105 bets for any lifetime wealth level less than $350,000, but knew nothing about the degree of her risk aversion for wealth levels above $350,000. Then we know that from an initial wealth level of $340,000 the person will turn down a 50-50 bet of losing $4,000 and gaining $635,670.

The intuition for such examples, and for the theorem itself, is that within the expected-utility framework turning down a modest-stakes gamble means that the marginal utility of money must diminish very quickly for small changes in wealth. For instance, if you reject a 50-50 lose $10/gain $11 gamble because of diminishing marginal utility, it must be that you value the 11th dollar above your current wealth by at most 10/11 as much as you valued the 10th-to-last-dollar of your current wealth.

Iterating this observation, if you have the same aversion to the lose $10/gain $11 bet if you were $21 wealthier, you value the 32nd dollar above your current wealth by at most 10/11 × 10/11 ≈ 5/6 as much as your 10th-to-last dollar. You will value your 220th dollar by at most 3/20 as much as your last dollar, and your 880th dollar by at most 1/2000 of your last dollar. This is an absurd rate for the value of money to deteriorate — and the theorem shows the rate of deterioration implied by expected-utility theory is actually quicker than this. Indeed, the theorem is really just an algebraic articulation of how implausible it is that the consumption value of a dollar changes significantly as a function of whether your lifetime wealth is $10, $100, or even $1,000 higher or lower. From such observations we should conclude that aversion to modest-stakes risk has nothing to do with the diminishing marginal utility of wealth.

Expected-utility theory seems to be a useful and adequate model of risk aversion for many purposes, and it is especially attractive in lieu of an equally tractable alternative model. “Extremely concave expected utility” may even be useful as a parsimonious tool for modeling aversion to modest-scale risk. But this and previous papers make clear that expected-utility theory is manifestly not close to the right explanation of risk attitudes over modest stakes. Moreover, when the specific structure of expected-utility theory is used to analyze situations involving modest stakes — such as in research that assumes that large-stake and modest-stake risk attitudes derive from the same utility-for-wealth function — it can be very misleading. In the concluding section, I discuss a few examples of such research where the expected-utility hypothesis is detrimentally maintained, and speculate very briefly on what set of ingredients may be needed to provide a better account of risk attitudes. In the next section, I discuss the theorem and illustrate its implications.

Expected-utility theory makes wrong predictions about the relationship between risk aversion over modest stakes and risk aversion over large stakes. Hence, when measuring risk attitudes maintaining the expected-utility hypothesis, differences in estimates of risk attitudes may come from differences in the scale of risk comprising data sets, rather than from differences in risk attitudes of the people being studied. Data sets dominated by modest-risk investment opportunities are likely to yield much higher estimates of risk aversion than data sets dominated by larger-scale investment opportunities. So not only are standard measures of risk aversion somewhat hard to interpret given that people are not expected-utility maximizers, but even attempts to compare risk attitudes so as to compare across groups will be misleading unless economists pay due attention to the theory’s calibrational problems.

Indeed, what is empirically the most firmly established feature of risk preferences, loss aversion, is a departure from expected-utility theory that provides a direct explanation for modest-scale risk aversion. Loss aversion says that people are significantly more averse to losses relative to the status quo than they are attracted by gains, and more generally that people’s utilities are determined by changes in wealth rather than absolute levels. Preferences incorporating loss aversion can reconcile significant small-scale risk aversion with reasonable degrees of large-scale risk aversion … Variants of this or other models of risk attitudes can provide useful alternatives to expected-utility theory that can reconcile plausible risk attitudes over large stakes with non-trivial risk aversion over modest stakes.
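The decay rates in the quoted passage are easy to verify numerically: each rejected 50-50 lose-$10/gain-$11 bet caps a marginal-utility ratio at 10/11, and the quoted figures of roughly 5/6, 3/20, and 1/2000 correspond to compounding that bound 2, 20, and 80 times. A minimal sketch (the bet and the figures are Rabin’s; only the script replaying the arithmetic is mine):

```python
# Replaying the arithmetic behind Rabin's calibration argument.
# Each rejected 50-50 lose-$10/gain-$11 bet bounds the ratio of marginal
# utilities across the bet's span by 10/11; iterating the rejection at
# higher wealth levels compounds the bound.
factor = 10 / 11

for steps, quoted_figure in [(2, "~5/6, the 32nd dollar"),
                             (20, "~3/20, the 220th dollar"),
                             (80, "~1/2000, the 880th dollar")]:
    print(f"(10/11)^{steps:>2} = {factor**steps:.6f}  ({quoted_figure})")

# Output:
# (10/11)^ 2 = 0.826446  (~5/6, the 32nd dollar)
# (10/11)^20 = 0.148644  (~3/20, the 220th dollar)
# (10/11)^80 = 0.000488  (~1/2000, the 880th dollar)
```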

Macroeconomic quackery

20 July, 2014 at 13:41 | Posted in Economics | 2 Comments

In a recent interview Chicago übereconomist Robert Lucas said:

the evidence on postwar recessions … overwhelmingly supports the dominant importance of real shocks.

So, according to Lucas, changes in tastes and technologies should be able to explain the main fluctuations in e.g. the unemployment that we have seen during the last six or seven decades. But really — not even a Nobel laureate could in his wildest imagination come up with any warranted and justified explanation solely based on changes in tastes and technologies.

How do we protect ourselves from this kind of scientific nonsense? In The Scientific Illusion in Empirical Macroeconomics Larry Summers has a suggestion well worth considering:

Modern scientific macroeconomics sees a (the?) crucial role of theory as the development of pseudo worlds or in Lucas’s (1980b) phrase the “provision of fully articulated, artificial economic systems that can serve as laboratories in which policies that would be prohibitively expensive to experiment with in actual economies can be tested out at much lower cost” and explicitly rejects the view that “theory is a collection of assertions about the actual economy” …


A great deal of the theoretical macroeconomics done by those professing to strive for rigor and generality, neither starts from empirical observation nor concludes with empirically verifiable prediction …

The typical approach is to write down a set of assumptions that seem in some sense reasonable, but are not subject to empirical test … and then derive their implications and report them as a conclusion. Since it is usually admitted that many considerations are omitted, the conclusion is rarely treated as a prediction …

However, an infinity of models can be created to justify any particular set of empirical predictions … What then do these exercises teach us about the world? … If empirical testing is ruled out, and persuasion is not attempted, in the end I am not sure these theoretical exercises teach us anything at all about the world we live in …

Reliance on deductive reasoning rather than theory based on empirical evidence is particularly pernicious when economists insist that the only meaningful questions are the ones their most recent models are designed to address. Serious economists who respond to questions about how today’s policies will affect tomorrow’s economy by taking refuge in technobabble about how the question is meaningless in a dynamic games context abdicate the field to those who are less timid. No small part of our current economic difficulties can be traced to ignorant zealots who gained influence by providing answers to questions that others labeled as meaningless or difficult. Sound theory based on evidence is surely our best protection against such quackery.

Added 23:00 GMT: Commenting on this post, Brad DeLong writes:

What is Lucas talking about?

If you go to Robert Lucas’s Nobel Prize Lecture, there is an admission that his own theory that monetary (and other demand) shocks drove business cycles because unanticipated monetary expansions and contractions caused people to become confused about the real prices they faced simply did not work:

Robert Lucas (1995): Monetary Neutrality:
“Anticipated monetary expansions … are not associated with the kind of stimulus to employment and production that Hume described. Unanticipated monetary expansions, on the other hand, can stimulate production as, symmetrically, unanticipated contractions can induce depression. The importance of this distinction between anticipated and unanticipated monetary changes is an implication of every one of the many different models, all using rational expectations, that were developed during the 1970s to account for short-term trade-offs…. The discovery of the central role of the distinction between anticipated and unanticipated money shocks resulted from the attempts, on the part of many researchers, to formulate mathematically explicit models that were capable of addressing the issues raised by Hume. But I think it is clear that none of the specific models that captured this distinction in the 1970s can now be viewed as a satisfactory theory of business cycles”

And Lucas explicitly links that analytical failure to the rise of attempts to identify real-side causes:

“Perhaps in part as a response to the difficulties with the monetary-based business cycle models of the 1970s, much recent research has followed the lead of Kydland and Prescott (1982) and emphasized the effects of purely real forces on employment and production. This research has shown how general equilibrium reasoning can add discipline to the study of an economy’s distributed lag response to shocks, as well as to the study of the nature of the shocks themselves…. Progress will result from the continued effort to formulate explicit theories that fit the facts, and that the best and most practical macroeconomics will make use of developments in basic economic theory.”

But these real-side theories do not appear to me to “fit the facts” at all.

And yet Lucas’s overall conclusion is:

“In a period like the post-World War II years in the United States, real output fluctuations are modest enough to be attributable, possibly, to real sources. There is no need to appeal to money shocks to account for these movements”

It would make sense to say that there is “no need to appeal to money shocks” only if there were a well-developed theory and models by which pre-2008 post-WWII business-cycle fluctuations are modeled as and explained by identified real shocks. But there isn’t. All Lucas will say is that post-WWII pre-2008 business-cycle fluctuations are “possibly” “attributable… to real shocks” because they are “modest enough”. And he says this even though:

“An event like the Great Depression of 1929-1933 is far beyond anything that can be attributed to shocks to tastes and technology. One needs some other possibilities. Monetary contractions are attractive as the key shocks in the 1929-1933 years, and in other severe depressions, because there do not seem to be any other candidates”

as if 2008-2009 were clearly of a different order of magnitude with a profoundly different signature in the time series than, say, 1979-1982.

Why does he think any of these things?

Yes, indeed, how could any person think any of those things …

