## Bayesian inference gone awry

24 Jul, 2014 at 10:57 | Posted in Theory of Science & Methodology | 2 Comments

There is a nice YouTube video with Tony O’Hagan interviewing Dennis Lindley. Of course, Dennis is a legend and his impact on the field of statistics is huge.

At one point, Tony points out that some people liken Bayesian inference to a religion. Dennis claims this is false. Bayesian inference, he correctly points out, starts with some basic axioms and then the rest follows by deduction. This is logic, not religion.

I agree that the mathematics of Bayesian inference is based on sound logic. But, with all due respect, I think Dennis misunderstood the question. When people say that “Bayesian inference is like a religion,” they are not referring to the logic of Bayesian inference. They are referring to how adherents of Bayesian inference behave.

(As an aside, detractors of Bayesian inference do not deny the correctness of the logic. They just don’t think the axioms are relevant for data analysis. For example, no one doubts the axioms of Peano arithmetic. But that doesn’t imply that arithmetic is the foundation of statistical inference. But I digress.)

The vast majority of Bayesians are pragmatic, reasonable people. But there is a sub-group of die-hard Bayesians who do treat Bayesian inference like a religion. By this I mean:

They are very cliquish.

They have a strong emotional attachment to Bayesian inference.

They are overly sensitive to criticism.

They are unwilling to entertain the idea that Bayesian inference might have flaws.

When someone criticizes Bayes, they think that critic just “doesn’t get it.”

They mock people with differing opinions …

No evidence you can provide would ever make the die-hards doubt their ideas. To them, Sir David Cox, Brad Efron and other giants in our field who have doubts about Bayesian inference are not to be taken seriously because they “just don’t get it.”

So is Bayesian inference a religion? For most Bayesians: no. But for the thin-skinned, inflexible die-hards who have attached themselves so strongly to their approach to inference that they make fun of, or get mad at, critics: yes, it is a religion.

## Bayesianism — preposterous mumbo jumbo

23 Jul, 2014 at 09:46 | Posted in Theory of Science & Methodology | 2 Comments

Neoclassical economics nowadays usually assumes that agents that have to make choices under conditions of uncertainty behave according to Bayesian rules (preferably the ones axiomatized by **Ramsey** (1931), **de Finetti** (1937) or **Savage** (1954)) – that is, they maximize expected utility with respect to some subjective probability measure that is continually updated according to Bayes’ theorem. If not, they are supposed to be irrational, and ultimately – via some “Dutch book” or “money pump” argument – susceptible to being ruined by some clever “bookie”.

Bayesianism reduces questions of rationality to questions of internal consistency (coherence) of beliefs, but – even granted this questionable reductionism – do rational agents really have to be Bayesian? As I have been arguing elsewhere (e.g. here and here) there is no strong warrant for believing so.

In many of the situations that are relevant to economics one could argue that there is simply not enough of adequate and relevant information to ground beliefs of a probabilistic kind, and that in those situations it is not really possible, in any relevant way, to represent an individual’s beliefs in a single probability measure.

The view that Bayesian decision theory is only genuinely valid in a *small world* was asserted very firmly by Leonard Savage when laying down the principles of the theory in his path-breaking *Foundations of Statistics*. He makes the distinction between small and large worlds in a folksy way by quoting the proverbs “Look before you leap” and “Cross that bridge when you come to it”. You are in a small world if it is feasible always to look before you leap. You are in a large world if there are some bridges that you cannot cross before you come to them.

As Savage comments, when proverbs conflict, it is proverbially true that there is some truth in both — that they apply in different contexts. He then argues that some decision situations are best modeled in terms of a small world, but others are not. He explicitly rejects the idea that all worlds can be treated as small as both “ridiculous” and “preposterous” … Frank Knight draws a similar distinction between making decisions under risk or uncertainty …

Bayesianism is understood [here] to be the philosophical principle that Bayesian methods are always appropriate in all decision problems, regardless of whether the relevant set of states in the relevant world is large or small. For example, the world in which financial economics is set is obviously large in Savage’s sense, but the suggestion that there might be something questionable about the standard use of Bayesian updating in financial models is commonly greeted with incredulity or laughter.

Someone who acts as if Bayesianism were correct will be said to be a Bayesianite. It is important to distinguish a Bayesian like myself—someone convinced by Savage’s arguments that Bayesian decision theory makes sense in small worlds—from a Bayesianite. In particular, a Bayesian need not join the more extreme Bayesianites in proceeding as though:

• All worlds are small.

• Rationality endows agents with prior probabilities.

• Rational learning consists simply in using Bayes’ rule to convert a set of prior probabilities into posterior probabilities after registering some new data.

Bayesianites are often understandably reluctant to make an explicit commitment to these principles when they are stated so baldly, because it then becomes evident that they are implicitly claiming that David Hume was wrong to argue that the principle of scientific induction cannot be justified by rational argument …

Bayesianites believe that the subjective probabilities of Bayesian decision theory can be reinterpreted as logical probabilities without any hassle. They therefore hold that Bayes’ rule is the solution to the problem of scientific induction. No support for such a view is to be found in Savage’s theory—nor in the earlier theories of Ramsey, de Finetti, or von Neumann and Morgenstern. Savage’s theory is entirely and exclusively a consistency theory. It says nothing about how decision-makers come to have the beliefs ascribed to them; it asserts only that, if the decisions taken are consistent (in a sense made precise by a list of axioms), then they act as though maximizing expected utility relative to a subjective probability distribution …

A reasonable decision-maker will presumably wish to avoid inconsistencies. A Bayesianite therefore assumes that it is enough to assign prior beliefs to a decision-maker, and then forget the problem of where beliefs come from. Consistency then forces any new data that may appear to be incorporated into the system via Bayesian updating. That is, a posterior distribution is obtained from the prior distribution using Bayes’ rule.

The naiveté of this approach doesn’t consist in using Bayes’ rule, whose validity as a piece of algebra isn’t in question. It lies in supposing that the problem of where the priors came from can be quietly shelved.

Savage did argue that his descriptive theory of rational decision-making could be of practical assistance in helping decision-makers form their beliefs, but he didn’t argue that the decision-maker’s problem was simply that of selecting a prior from a limited stock of standard distributions with little or nothing in the way of soul-searching. His position was rather that one comes to a decision problem with a whole set of subjective beliefs derived from one’s previous experience that may or may not be consistent …

But why should we wish to adjust our gut-feelings using Savage’s methodology? In particular, why should a rational decision-maker wish to be consistent? After all, scientists aren’t consistent, on the grounds that it isn’t clever to be consistently wrong. When surprised by data that shows current theories to be in error, they seek new theories that are inconsistent with the old theories. Consistency, from this point of view, is only a virtue if the possibility of being surprised can somehow be eliminated. This is the reason for distinguishing between large and small worlds. Only in the latter is consistency an unqualified virtue.

Say you have come to learn (based on own experience and tons of data) that the probability of you becoming unemployed in the US is 10%. Having moved to another country (where you have no own experience and no data) you have no information on unemployment and a fortiori nothing to ground any probability estimate on. A Bayesian would, however, argue that you would have to assign probabilities to the mutually exclusive alternative outcomes and that these have to add up to 1, if you are rational. That is, in this case – and based on symmetry – a rational individual would have to assign probability 50% to becoming unemployed and 50% to becoming employed.

That feels intuitively wrong though, and I guess most people would agree. Bayesianism cannot distinguish symmetry-based probabilities grounded in information from symmetry-based probabilities grounded in an absence of information. In these kinds of situations most of us would rather say that it is simply irrational to be a Bayesian and better instead to admit that we “simply do not know” or that we feel ambiguous and undecided. Arbitrary and ungrounded probability claims are more irrational than being undecided in the face of genuine uncertainty, so if there is not sufficient information to ground a probability distribution it is better to acknowledge that simpliciter, rather than pretending to possess a certitude that we simply do not possess.
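To make the point concrete: two agents can report exactly the same point probability while the evidential “weight” behind it differs enormously. A minimal sketch (my own illustration, using Beta distributions; nothing here is taken from the quoted texts):

```python
# Two Beta priors over the probability of unemployment. One encodes no
# information at all, the other (hypothetically) 1,000 symmetric
# observations. Both report the same single-number probability, 0.5,
# but the spread (Keynes's "weight of argument") differs hugely.

def beta_mean_var(a, b):
    """Mean and variance of a Beta(a, b) distribution."""
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, var

print(beta_mean_var(1, 1))      # (0.5, 0.0833...): sheer ignorance
print(beta_mean_var(500, 500))  # (0.5, 0.00025...): massive evidence
```

The point probability alone, which is all the betting-odds formalism asks for, cannot tell the two cases apart.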

I think this critique of Bayesianism is in accordance with the views of **Keynes**’ *A Treatise on Probability* (1921) and *General Theory* (1937). According to Keynes we live in a world permeated by unmeasurable uncertainty – not quantifiable stochastic risk – which often forces us to make decisions based on anything but rational expectations. Sometimes we “simply do not know.” Keynes would not have accepted the view of Bayesian economists, according to whom expectations “tend to be distributed, for the same information set, about the prediction of the theory.” Keynes, rather, thinks that we base our expectations on the confidence or “weight” we put on different events and alternatives. To Keynes expectations are a question of weighing probabilities by “degrees of belief”, beliefs that have precious little to do with the kind of stochastic probabilistic calculations made by the rational agents modeled by Bayesian economists.

The bias toward the superficial and the response to extraneous influences on research are both examples of real harm done in contemporary social science by a roughly Bayesian paradigm of statistical inference as the epitome of empirical argument. For instance the dominant attitude toward the sources of black-white differential in United States unemployment rates (routinely the rates are in a two to one ratio) is “phenomenological.” The employment differences are traced to correlates in education, locale, occupational structure, and family background. The attitude toward further, underlying causes of those correlations is agnostic … Yet on reflection, common sense dictates that racist attitudes and institutional racism *must* play an important causal role. People do have beliefs that blacks are inferior in intelligence and morality, and they are surely influenced by these beliefs in hiring decisions … Thus, an overemphasis on Bayesian success in statistical inference discourages the elaboration of a type of account of racial disadvantages that almost certainly provides a large part of their explanation.

## Chicago Follies (XI)

22 Jul, 2014 at 23:48 | Posted in Economics | 2 Comments

In their latest book, *Think Like a Freak*, co-authors Steven Levitt and Stephen Dubner tell a story about meeting David Cameron in London before he was Prime Minister. They told him that the U.K.’s National Health Service — free, unlimited, lifetime health care — was laudable but didn’t make practical sense.

“We tried to make our point with a thought experiment,” they write. “We suggested to Mr. Cameron that he consider a similar policy in a different arena. What if, for instance…everyone were allowed to go down to the car dealership whenever they wanted and pick out any new model, free of charge, and drive it home?”

Rather than seeing the humor and realizing that health care is just like any other part of the economy, Cameron abruptly ended the meeting, demonstrating one of the risks of ‘thinking like a freak,’ Dubner says in the accompanying video.

“Cameron has been open to [some] inventive thinking but if you start to look at things in a different way you’ll get some strange looks,” he says. “Tread with caution.”

So what do Dubner and Levitt make of the Affordable Care Act, aka Obamacare, which has been described as a radical rethinking of America’s health care system?

“I do not think it’s a good approach at all,” says Levitt, a professor of economics at the University of Chicago. “Fundamentally with health care, until people have to pay for what they’re buying it’s not going to work. Purchasing health care is almost exactly like purchasing any other good in the economy. If we’re going to pretend there’s a market for it, let’s just make a real market for it.”

Portraying health care as “just like any other part of the economy” is of course nothing but total horseshit. So, instead of “thinking like a freak,” why not e.g. read what **Kenneth Arrow** wrote on the issue of medical care already back in 1963?

Under ideal insurance the patient would actually have no concern with the informational inequality between himself and the physician, since he would only be paying by results anyway, and his utility position would in fact be thoroughly guaranteed. In its absence he wants to have some guarantee that at least the physician is using his knowledge to the best advantage. This leads to the setting up of a relationship of trust and confidence, one which the physician has a social obligation to live up to … The social obligation for best practice is part of the commodity the physician sells, even though it is a part that is not subject to thorough inspection by the buyer.

One consequence of such trust relations is that the physician cannot act, or at least appear to act, as if he is maximizing his income at every moment of time. As a signal to the buyer of his intentions to act as thoroughly in the buyer’s behalf as possible, the physician avoids the obvious stigmata of profit-maximizing … The very word, ‘profit’ is a signal that denies the trust relation.

Kenneth Arrow, “Uncertainty and the Welfare Economics of Medical Care”, *American Economic Review* 53(5).

## Bayesianism — a dangerous religion that harms science

22 Jul, 2014 at 19:57 | Posted in Theory of Science & Methodology | 9 Comments

One of my favourite bloggers — Noah Smith — has a nice post up today on Bayesianism:

Consider Proposition H: “God is watching out for me, and has a special purpose for me and me alone. Therefore, God will not let me die. No matter how dangerous a threat seems, it cannot possibly kill me, because God is looking out for me – and only me – at all times.”

Suppose that you believe that there is a nonzero probability that H is true. And suppose you are a Bayesian – you update your beliefs according to Bayes’ Rule. As you survive longer and longer – as more and more threats fail to kill you – your belief about the probability that H is true must increase and increase. It’s just mechanical application of Bayes’ Rule:

P(H|E) = P(E|H)P(H)/P(E)

Here, E is “not being killed,” P(E|H)=1, and P(H) is assumed not to be zero. P(E) is less than 1, since under a number of alternative hypotheses you might get killed (if you have a philosophical problem with this due to the fact that anyone who observes any evidence must not be dead, just slightly tweak H so that it’s possible to receive a “mortal wound”).

So P(H|E) is greater than P(H) – every moment that you fail to die increases your subjective probability that you are an invincible superman, the chosen of God. This is totally and completely rational, at least by the Bayesian definition of rationality.
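Noah Smith’s mechanical updating is easy to simulate. A toy sketch (the per-period survival probability under the alternative hypotheses, 0.99, is an assumed number of my own, not from the post):

```python
def update(prior, p_e_given_h=1.0, p_e_given_not_h=0.99):
    """One application of Bayes' Rule: P(H|E) = P(E|H)P(H) / P(E)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1.0 - prior)
    return p_e_given_h * prior / p_e

p = 0.001  # a tiny but nonzero prior on H
for _ in range(80 * 365):  # one "I survived today" observation per day
    p = update(p)

print(p)  # the posterior has been driven essentially to 1
```

Every survival observation raises the posterior, exactly as the quoted passage says: perfectly coherent, and still absurd.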

The nodal point here is — of course — that although Bayes’ Rule is *mathematically* unquestionable, that doesn’t qualify it as indisputably applicable to *scientific* questions. As another of my favourite bloggers — statistician Andrew Gelman — puts it:

The fundamental objections to Bayesian methods are twofold: on one hand, Bayesian methods are presented as an automatic inference engine, and this raises suspicion in anyone with applied experience, who realizes that different methods work well in different settings … Bayesians promote the idea that a multiplicity of parameters can be handled via hierarchical, typically exchangeable, models, but it seems implausible that this could really work automatically. In contrast, much of the work in modern non-Bayesian statistics is focused on developing methods that give reasonable answers using minimal assumptions.

The second objection to Bayes comes from the opposite direction and addresses the subjective strand of Bayesian inference: the idea that prior and posterior distributions represent subjective states of knowledge. Here the concern from outsiders is, first, that as scientists we should be concerned with objective knowledge rather than subjective belief, and second, that it’s not clear how to assess subjective knowledge in any case.

Beyond these objections is a general impression of the shoddiness of some Bayesian analyses, combined with a feeling that Bayesian methods are being oversold as an all-purpose statistical solution to genuinely hard problems. Compared to classical inference, which focuses on how to extract the information available in data, Bayesian methods seem to quickly move to elaborate computation. It does not seem like a good thing for a generation of statistics to be ignorant of experimental design and analysis of variance, instead becoming experts on the convergence of the Gibbs sampler. In the short term this represents a dead end, and in the long term it represents a withdrawal of statisticians from the deeper questions of inference and an invitation for econometricians, computer scientists, and others to move in and fill in the gap …

Bayesian inference is a coherent mathematical theory but I don’t trust it in scientific applications. Subjective prior distributions don’t transfer well from person to person, and there’s no good objective principle for choosing a noninformative prior (even if that concept were mathematically defined, which it’s not). Where do prior distributions come from, anyway? I don’t trust them and I see no reason to recommend that other people do, just so that I can have the warm feeling of philosophical coherence …

As Brad Efron wrote in 1986, Bayesian theory requires a great deal of thought about the given situation to apply sensibly, and recommending that scientists use Bayes’ theorem is like giving the neighborhood kids the key to your F-16 …

## Understanding discrete random variables (student stuff)

22 Jul, 2014 at 11:13 | Posted in Statistics & Econometrics | Comments Off on Understanding discrete random variables (student stuff)

## The Sonnenschein-Mantel-Debreu results after forty years

21 Jul, 2014 at 16:39 | Posted in Economics | 1 Comment

Along with the Arrow-Debreu existence theorem and some results on regular economies, SMD theory fills in many of the gaps we might have in our understanding of general equilibrium theory …

It is also a deeply negative result. SMD theory means that assumptions guaranteeing good behavior at the microeconomic level do not carry over to the aggregate level or to qualitative features of the equilibrium. It has been difficult to make progress on the elaborations of general equilibrium theory that were put forth in Arrow and Hahn 1971 …

Given how sweeping the changes wrought by SMD theory seem to be, it is understandable that some very broad statements about the character of general equilibrium theory were made. Fifteen years after General Competitive Analysis, Arrow (1986) stated that the hypothesis of rationality had few implications at the aggregate level. Kirman (1989) held that general equilibrium theory could not generate falsifiable propositions, given that almost any set of data seemed consistent with the theory. These views are widely shared. Bliss (1993, 227) wrote that the “near emptiness of general equilibrium theory is a theorem of the theory.” Mas-Colell, Michael Whinston, and Jerry Green (1995) titled a section of their graduate microeconomics textbook “Anything Goes: The Sonnenschein-Mantel-Debreu Theorem.” There was a realization of a similar gap in the foundations of empirical economics. General equilibrium theory “poses some arduous challenges” as a “paradigm for organizing and synthesizing economic data” so that “a widely accepted empirical counterpart to general equilibrium theory remains to be developed” (Hansen and Heckman 1996). This seems to be the now-accepted view thirty years after the advent of SMD theory …

And so what? Why should we care about Sonnenschein-Mantel-Debreu?

Because Sonnenschein-Mantel-Debreu ultimately explains why New Classical, Real Business Cycles, Dynamic Stochastic General Equilibrium (DSGE) and “New Keynesian” microfounded macromodels are such bad substitutes for real macroeconomic analysis!

These models try to describe and analyze complex and heterogeneous real economies with a single rational-expectations-robot-imitation-representative-agent. That is, with something that has absolutely nothing to do with reality. And — worse still — something that is not even amenable to the kind of general equilibrium analysis that they are thought to give a foundation for, since Hugo Sonnenschein (1972), Rolf Mantel (1976) and Gerard Debreu (1974) *unequivocally* showed that there exists no condition by which assumptions on individuals would guarantee either stability or uniqueness of the equilibrium solution.

Opting for cloned representative agents that are all identical is of course not a *real solution* to the fallacy of composition that the Sonnenschein-Mantel-Debreu theorem points to. Representative agent models are — as I have argued at length here — rather an *evasion* whereby issues of distribution, coordination, heterogeneity — everything that really defines macroeconomics — are swept under the rug.

Of course, most macroeconomists know that to use a representative agent is a flagrantly illegitimate method of ignoring real aggregation issues. They keep on with their business, nevertheless, just because it significantly simplifies what they are doing. It is reminiscent — not a little — of the drunkard who has lost his keys in some dark place and deliberately chooses to look for them under a neighbouring street light just because it is easier to see there …

## Austrian Newspeak

21 Jul, 2014 at 14:49 | Posted in Economics | 1 Comment

I see that Robert Murphy of the Mises Institute has taken the time to pen a thoughtful critique of my gentle admonishment of followers of the school of quasi-economic thought commonly known as “Austrianism” …

Much of my original article discussed the failed Austrian prediction that QE would cause inflation (i.e., a rise in the general level of consumer prices). Robert reiterates four standard Austrian defenses:

1. Consumer prices rose more than the official statistics suggest.

2. Asset prices rose.

3. “Inflation” doesn’t mean “a rise in the general level of consumer prices,” it means “an increase in the monetary base”, so QE is inflation by definition.

4. Austrians do not depend on evidence to refute their theories; the theories are deduced from pure logic.

Makes me come to think of — wonder why — Keynes’s review of Austrian übereconomist Friedrich von Hayek’s *Prices and Production*:

The book, as it stands, seems to me to be one of the most frightful muddles I have ever read, with scarcely a sound proposition in it beginning with page 45, and yet it remains a book of some interest, which is likely to leave its mark on the mind of the reader. It is an extraordinary example of how, starting with a mistake, a remorseless logician can end up in bedlam …

J.M. Keynes, *Economica* 34 (1931)

## A singularly vile attempt to gag Swedish teachers has been beaten back

21 Jul, 2014 at 13:56 | Posted in Education & School | Comments Off on A singularly vile attempt to gag Swedish teachers has been beaten back

A few days ago, those of us who follow the teacher, author and school strategist Per Kornhall on Facebook could see that he and Lärarnas Riksförbund had won the first partial victory against Upplands Väsby municipality. The Labour Court has rejected the municipality’s grounds for dismissal.

Now we can read about it in the local paper Vi i Väsby. In “Per Kornhall får tillbaka jobbet” (“Per Kornhall gets his job back”) the paper writes that the Labour Court has issued a so-called interim decision, valid until the conflict is resolved. The court rules that Kornhall keeps his position with the municipality from 11 July until the dispute is finally settled.

Per Kornhall and Lärarnas Riksförbund, which represents him, have prevailed on every point against the employer Upplands Väsby, which has now been exposed in the national media as a workplace where employees are to be silenced. His case has been covered not only in the local paper but also in the national media. Kornhall is a well-known and respected author and commentator on school issues.

When Vi i Väsby asks whether he could imagine returning to his job with the municipality in the future, the answer is:

“As things stand now, I am naturally not interested in coming back, given how I have been treated.” We understand him. One can only hope that other teachers and school employees dare to raise their voices after this. There have previously been worrying signs that teachers feel harassed and silenced, not only in private schools but also in municipal ones.

See the report “Tystade lärare” (“Silenced Teachers”), produced by Lärarnas Riksförbund, which showed that a majority of the responding teachers — almost 70 per cent of those employed by independent schools and 53 per cent of municipal teachers — would not dare to speak out publicly in the media if they were dissatisfied with their employer or school.

Per Kornhall has made it easier for teachers and others engaged in the schools to stand up to threats from ignorant employers and to raise their voices when those in charge misbehave.

Helena von Schantz writes — wisely and personally, as usual — more about this singularly vile attempt to gag a whistle-blower in Swedish schools.

## Expected utility theory

21 Jul, 2014 at 11:58 | Posted in Economics | Comments Off on Expected utility theory

In Matthew Rabin’s modern classic *Risk Aversion and Expected-Utility Theory: A Calibration Theorem* it is forcefully and convincingly shown that expected utility theory does not explain actual behaviour and choices.

What is still surprising, however, is that although the expected utility theory obviously is descriptively inadequate and doesn’t pass the Smell Test, colleagues all over the world gladly continue to use it, as though its deficiencies were unknown or unheard of.

That cannot be the right attitude when facing scientific anomalies. When models are plainly wrong, you’d better replace them!

Rabin writes:

Using expected-utility theory, economists model risk aversion as arising solely because the utility function over wealth is concave. This diminishing-marginal-utility-of-wealth theory of risk aversion is psychologically intuitive, and surely helps explain some of our aversion to large-scale risk: We dislike vast uncertainty in lifetime wealth because a dollar that helps us avoid poverty is more valuable than a dollar that helps us become very rich.

Yet this theory also implies that people are approximately risk neutral when stakes are small. Arrow (1971, p. 100) shows that an expected-utility maximizer with a differentiable utility function will always want to take a sufficiently small stake in any positive-expected-value bet. That is, expected-utility maximizers are (almost everywhere) arbitrarily close to risk neutral when stakes are arbitrarily small. While most economists understand this formal limit result, fewer appreciate that the approximate risk-neutrality prediction holds not just for negligible stakes, but for quite sizable and economically important stakes. Economists often invoke expected-utility theory to explain substantial (observed or posited) risk aversion over stakes where the theory actually predicts virtual risk neutrality.

While not broadly appreciated, the inability of expected-utility theory to provide a plausible account of risk aversion over modest stakes has become oral tradition among some subsets of researchers, and has been illustrated in writing in a variety of different contexts using standard utility functions.
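Arrow’s small-stakes result is easy to check numerically. A sketch with log utility and illustrative numbers of my own choosing (not Rabin’s): the premium a log-utility agent with $300,000 of wealth would pay to avoid a 50-50 gain/lose $100 bet is a couple of cents, while the same utility function charges thousands of dollars at large stakes.

```python
import math

def risk_premium(wealth, stake):
    """What a log-utility agent would pay to avoid a 50-50 +/- `stake` bet."""
    eu = 0.5 * math.log(wealth + stake) + 0.5 * math.log(wealth - stake)
    return wealth - math.exp(eu)  # wealth minus certainty equivalent

print(round(risk_premium(300_000, 100), 4))      # ≈ 0.0167: under two cents
print(round(risk_premium(300_000, 100_000), 0))  # large stakes: ≈ $17,157
```

This is exactly the concavity-driven pattern the quoted passage describes: virtual risk neutrality at small stakes, substantial aversion only when stakes are a sizable fraction of wealth.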

In this paper, I reinforce this previous research by presenting a theorem which calibrates a relationship between risk attitudes over small and large stakes. The theorem shows that, within the expected-utility model, anything but virtual risk neutrality over modest stakes implies manifestly unrealistic risk aversion over large stakes. The theorem is entirely “non-parametric”, assuming nothing about the utility function except concavity. In the next section I illustrate implications of the theorem with examples of the form “If an expected-utility maximizer always turns down modest-stakes gamble X, she will always turn down large-stakes gamble Y.” Suppose that, from any initial wealth level, a person turns down gambles where she loses $100 or gains $110, each with 50% probability. Then she will turn down 50-50 bets of losing $1,000 or gaining any sum of money. A person who would always turn down 50-50 lose $1,000/gain $1,050 bets would always turn down 50-50 bets of losing $20,000 or gaining any sum. These are implausible degrees of risk aversion. The theorem not only yields implications if we know somebody will turn down a bet for all initial wealth levels. Suppose we knew a risk-averse person turns down 50-50 lose $100/gain $105 bets for any lifetime wealth level less than $350,000, but knew nothing about the degree of her risk aversion for wealth levels above $350,000. Then we know that from an initial wealth level of $340,000 the person will turn down a 50-50 bet of losing $4,000 and gaining $635,670.

The intuition for such examples, and for the theorem itself, is that within the expected-utility framework turning down a modest-stakes gamble means that the marginal utility of money must diminish very quickly for small changes in wealth. For instance, if you reject a 50-50 lose $10/gain $11 gamble because of diminishing marginal utility, it must be that you value the 11th dollar above your current wealth by at most 10/11 as much as you valued the 10th-to-last-dollar of your current wealth.

Iterating this observation, if you have the same aversion to the lose $10/gain $11 bet if you were $21 wealthier, you value the 32nd dollar above your current wealth by at most 10/11 × 10/11 ≈ 5/6 as much as your 10th-to-last dollar. You will value your 220th dollar by at most 3/20 as much as your last dollar, and your 880th dollar by at most 1/2000 of your last dollar. This is an absurd rate for the value of money to deteriorate — and the theorem shows the rate of deterioration implied by expected-utility theory is actually quicker than this. Indeed, the theorem is really just an algebraic articulation of how implausible it is that the consumption value of a dollar changes significantly as a function of whether your lifetime wealth is $10, $100, or even $1,000 higher or lower. From such observations we should conclude that aversion to modest-stakes risk has nothing to do with the diminishing marginal utility of wealth.
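The deterioration rate Rabin describes can be checked directly (a sketch; matching particular powers of 10/11 to his dollar counts is my own reading of the passage):

```python
from fractions import Fraction

# Rejecting the 50-50 lose $10 / gain $11 bet at every wealth level caps
# the ratio of marginal utilities of money; each iteration multiplies
# the bound by another factor of 10/11.
r = Fraction(10, 11)

print(float(r ** 2))   # ≈ 0.826, within the "at most 5/6" bound
print(float(r ** 20))  # ≈ 0.149, i.e. roughly the 3/20 of the text
print(float(r ** 80))  # ≈ 0.00049, roughly the 1/2000 of the text
```

The geometric decay is the whole story: a few dozen iterations of a seemingly mild bound make marginal utility collapse by three orders of magnitude.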

Expected-utility theory seems to be a useful and adequate model of risk aversion for many purposes, and it is especially attractive in the absence of an equally tractable alternative model. “Extremely concave expected utility” may even be useful as a parsimonious tool for modeling aversion to modest-scale risk. But this and previous papers make clear that expected-utility theory is manifestly not close to the right explanation of risk attitudes over modest stakes. Moreover, when the specific structure of expected-utility theory is used to analyze situations involving modest stakes — such as in research that assumes that large-stake and modest-stake risk attitudes derive from the same utility-for-wealth function — it can be very misleading. In the concluding section, I discuss a few examples of such research where the expected-utility hypothesis is detrimentally maintained, and speculate very briefly on what set of ingredients may be needed to provide a better account of risk attitudes. In the next section, I discuss the theorem and illustrate its implications.

…

Expected-utility theory makes wrong predictions about the relationship between risk aversion over modest stakes and risk aversion over large stakes. Hence, when measuring risk attitudes maintaining the expected-utility hypothesis, differences in estimates of risk attitudes may come from differences in the scale of risk comprising data sets, rather than from differences in risk attitudes of the people being studied. Data sets dominated by modest-risk investment opportunities are likely to yield much higher estimates of risk aversion than data sets dominated by larger-scale investment opportunities. So not only are standard measures of risk aversion somewhat hard to interpret given that people are not expected-utility maximizers, but even attempts to compare risk attitudes across groups will be misleading unless economists pay due attention to the theory’s calibrational problems.
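To see how large this scale effect can be, here is a quick parametric illustration of my own (Rabin's theorem is non-parametric; the CRRA utility function and the $300,000 lifetime wealth level below are my assumptions, chosen only for concreteness):

```python
# Illustration: the coefficient of relative risk aversion that CRRA
# utility u(w) = w**(1-g) / (1-g) would need in order to reject a
# 50-50 lose $100 / gain $110 bet at a lifetime wealth of $300,000.

def eu_gain(wealth, loss, gain, g):
    """Expected CRRA utility of the 50-50 bet minus the utility of
    standing pat, normalized by current wealth to avoid overflow."""
    def u(x):
        r = x / wealth
        return (r ** (1 - g) - 1) / (1 - g)
    return 0.5 * u(wealth + gain) + 0.5 * u(wealth - loss) - u(wealth)

# Bisect for the g at which the agent is just indifferent.
lo, hi = 1.0001, 1000.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if eu_gain(300_000, 100, 110, mid) > 0:   # still accepts the bet
        lo = mid
    else:                                     # rejects the bet
        hi = mid
print(round(hi, 1))   # ≈ 270: orders of magnitude above the values
                      # (roughly 1-10) typically estimated from
                      # large-stakes data

# And an agent with such a g turns down a 50-50 bet of losing
# $1,000 against gaining $10,000,000:
print(eu_gain(300_000, 1_000, 10_000_000, 272) < 0)
```

Estimating the "same" risk-aversion parameter from modest-stakes data and from large-stakes data thus gives wildly different answers, which is exactly the calibrational problem the excerpt describes.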

…

Indeed, what is empirically the most firmly established feature of risk preferences, loss aversion, is a departure from expected-utility theory that provides a direct explanation for modest-scale risk aversion. Loss aversion says that people are significantly more averse to losses relative to the status quo than they are attracted by gains, and more generally that people’s utilities are determined by changes in wealth rather than absolute levels. Preferences incorporating loss aversion can reconcile significant small-scale risk aversion with reasonable degrees of large-scale risk aversion … Variants of this or other models of risk attitudes can provide useful alternatives to expected-utility theory that can reconcile plausible risk attitudes over large stakes with non-trivial risk aversion over modest stakes.
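A minimal sketch of how loss aversion delivers this reconciliation (the piecewise-linear value function and the loss-aversion coefficient of 2.25 below are stylized assumptions in the spirit of Kahneman and Tversky's estimates, not taken from the excerpt):

```python
# Piecewise-linear loss-averse value function over CHANGES in
# wealth: gains count at face value, losses are scaled up by lam.
LAM = 2.25   # stylized loss-aversion coefficient (an assumption)

def value(change, lam=LAM):
    return change if change >= 0 else lam * change

def accepts(loss, gain, lam=LAM):
    """Does the loss-averse agent accept a 50-50 lose/gain bet?"""
    return 0.5 * value(gain, lam) + 0.5 * value(-loss, lam) > 0

print(accepts(100, 110))      # False: rejects the modest-stakes bet
print(accepts(100, 250))      # True: the gain must exceed lam * loss
print(accepts(1_000, 2_500))  # True: the same criterion scales, so
                              # favorable large-stakes bets are taken
```

Because the criterion is the same at every scale, significant small-scale risk aversion coexists with perfectly reasonable large-scale risk attitudes — which is precisely what expected utility over wealth cannot deliver.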

## Methodological arrogance

20 Jul, 2014 at 14:40 | Posted in Theory of Science & Methodology | Comments Off on Methodological arrogance

So what do I mean by methodological arrogance? I mean an attitude that invokes micro-foundations as a methodological principle — philosophical reductionism in Popper’s terminology — while dismissing non-microfounded macromodels as unscientific. To be sure, the progress of science may enable us to reformulate (and perhaps improve) explanations of certain higher-level phenomena by expressing those relationships in terms of lower-level concepts. That is what Popper calls scientific reduction. But scientific reduction is very different from rejecting, on methodological principle, any explanation not expressed in terms of more basic concepts.

And whenever macrotheory seems inconsistent with microtheory, the inconsistency poses a problem to be solved. Solving the problem will advance our understanding. But simply to reject the macrotheory on methodological principle without evidence that the microfounded theory gives a better explanation of the observed phenomena than the non-microfounded macrotheory … is arrogant. Microfoundations for macroeconomics should result from progress in economic theory, not from a dubious methodological precept.

For more on microfoundations and methodological arrogance, read yours truly’s Micro versus Macro in *Real-World Economics Review* (issue no. 66, January 2014).

## Macroeconomic quackery

20 Jul, 2014 at 13:41 | Posted in Economics | 2 Comments

In a recent interview Chicago übereconomist Robert Lucas said

the evidence on postwar recessions … overwhelmingly supports the dominant importance of real shocks.

So, according to Lucas, changes in tastes and technologies should be able to explain the main fluctuations in, e.g., unemployment that we have seen during the last six or seven decades. But really — not even a Nobel laureate could in his wildest imagination come up with any warranted and justified explanation solely based on changes in tastes and technologies.

How do we protect ourselves from this kind of scientific nonsense? In The Scientific Illusion in Empirical Macroeconomics Larry Summers has a suggestion well worth considering:

Modern scientific macroeconomics sees a (the?) crucial role of theory as the development of pseudo worlds or in Lucas’s (1980b) phrase the “provision of fully articulated, artificial economic systems that can serve as laboratories in which policies that would be prohibitively expensive to experiment with in actual economies can be tested out at much lower cost” and explicitly rejects the view that “theory is a collection of assertions about the actual economy” …

A great deal of the theoretical macroeconomics done by those professing to strive for rigor and generality, neither starts from empirical observation nor concludes with empirically verifiable prediction …

The typical approach is to write down a set of assumptions that seem in some sense reasonable, but are not subject to empirical test … and then derive their implications and report them as a conclusion. Since it is usually admitted that many considerations are omitted, the conclusion is rarely treated as a prediction …

However, an infinity of models can be created to justify any particular set of empirical predictions … What then do these exercises teach us about the world? … If empirical testing is ruled out, and persuasion is not attempted, in the end I am not sure these theoretical exercises teach us anything at all about the world we live in …

Reliance on deductive reasoning rather than theory based on empirical evidence is particularly pernicious when economists insist that the only meaningful questions are the ones their most recent models are designed to address. Serious economists who respond to questions about how today’s policies will affect tomorrow’s economy by taking refuge in technobabble about how the question is meaningless in a dynamic games context abdicate the field to those who are less timid. No small part of our current economic difficulties can be traced to ignorant zealots who gained influence by providing answers to questions that others labeled as meaningless or difficult. Sound theory based on evidence is surely our best protection against such quackery.

**Added 23:00 GMT:** Commenting on this post, Brad DeLong writes:

What is Lucas talking about?

If you go to Robert Lucas’s Nobel Prize Lecture, there is an admission that his own theory that monetary (and other demand) shocks drove business cycles because unanticipated monetary expansions and contractions caused people to become confused about the real prices they faced simply did not work:

Robert Lucas (1995): Monetary Neutrality:

“Anticipated monetary expansions … are not associated with the kind of stimulus to employment and production that Hume described. Unanticipated monetary expansions, on the other hand, can stimulate production as, symmetrically, unanticipated contractions can induce depression. The importance of this distinction between anticipated and unanticipated monetary changes is an implication of every one of the many different models, all using rational expectations, that were developed during the 1970s to account for short-term trade-offs…. The discovery of the central role of the distinction between anticipated and unanticipated money shocks resulted from the attempts, on the part of many researchers, to formulate mathematically explicit models that were capable of addressing the issues raised by Hume. But I think it is clear that none of the specific models that captured this distinction in the 1970s can now be viewed as a satisfactory theory of business cycles.”

And Lucas explicitly links that analytical failure to the rise of attempts to identify real-side causes:

“Perhaps in part as a response to the difficulties with the monetary-based business cycle models of the 1970s, much recent research has followed the lead of Kydland and Prescott (1982) and emphasized the effects of purely real forces on employment and production. This research has shown how general equilibrium reasoning can add discipline to the study of an economy’s distributed lag response to shocks, as well as to the study of the nature of the shocks themselves…. Progress will result from the continued effort to formulate explicit theories that fit the facts, and that the best and most practical macroeconomics will make use of developments in basic economic theory.”

But these real-side theories do not appear to me to “fit the facts” at all.

And yet Lucas’s overall conclusion is:

“In a period like the post-World War II years in the United States, real output fluctuations are modest enough to be attributable, possibly, to real sources. There is no need to appeal to money shocks to account for these movements”

It would make sense to say that there is “no need to appeal to money shocks” only if there were a well-developed theory and models by which pre-2008 post-WWII business-cycle fluctuations are modeled as and explained by identified real shocks. But there isn’t. All Lucas will say is that post-WWII pre-2008 business-cycle fluctuations are “possibly” “attributable… to real shocks” because they are “modest enough”. And he says this even though:

“An event like the Great Depression of 1929-1933 is far beyond anything that can be attributed to shocks to tastes and technology. One needs some other possibilities. Monetary contractions are attractive as the key shocks in the 1929-1933 years, and in other severe depressions, because there do not seem to be any other candidates”

as if 2008-2009 were clearly of a different order of magnitude with a profoundly different signature in the time series than, say, 1979-1982.

Why does he think any of these things?

Yes, indeed, how could any person think any of those things …

## Peter Dorman on economists’ obsession with homogeneity and average effects

19 Jul, 2014 at 20:41 | Posted in Economics | 6 Comments

Peter Dorman is one of those rare economists it is always a pleasure to read. Here his critical eye is focussed on economists’ infatuation with homogeneity and averages:

You may feel a gnawing discomfort with the way economists use statistical techniques. Ostensibly they focus on the difference between people, countries or whatever the units of observation happen to be, but they nevertheless seem to treat the population of cases as interchangeable — as homogeneous on some fundamental level. As if people were replicants.

You are right, and this brief talk is about why and how you’re right, and what this implies for the questions people bring to statistical analysis and the methods they use.

Our point of departure will be a simple multiple regression model of the form

y = β0 + β1x1 + β2x2 + … + ε

where y is an outcome variable, x1 is an explanatory variable of interest, the other x’s are control variables, the β’s are coefficients on these variables (or a constant term, in the case of β0), and ε is a vector of residuals. We could apply the same analysis to more complex functional forms, and we would see the same things, so let’s stay simple.

What question does this model answer? It tells us the average effect that variations in x1 have on the outcome y, controlling for the effects of other explanatory variables. Repeat: it’s the average effect of x1 on y.

This model is applied to a sample of observations. What is assumed to be the same for these observations? (1) The outcome variable y is meaningful for all of them. (2) The list of potential explanatory factors, the x’s, is the same for all. (3) The effects these factors have on the outcome, the β’s, are the same for all. (4) The proper functional form that best explains the outcome is the same for all. In these four respects all units of observation are regarded as essentially the same.

Now what is permitted to differ across these observations? Simply the values of the x’s and therefore the values of y and ε. That’s it.

Thus measures of the difference between individual people or other objects of study are purchased at the cost of immense assumptions of sameness. It is these assumptions that both reflect and justify the search for average effects …

In the end, statistical analysis is about imposing a common structure on observations in order to understand differentiation. Any structure requires assuming some kinds of sameness, but some approaches make much more sweeping assumptions than others. An unfortunate symbiosis has arisen in economics between statistical methods that excessively rule out diversity and statistical questions that center on average (non-diverse) effects. This is damaging in many contexts, including hypothesis testing, program evaluation, forecasting—you name it …

The first step toward recovery is admitting you have a problem. Every statistical analyst should come clean about what assumptions of homogeneity are being made, in light of their plausibility and the opportunities that exist for relaxing them.
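Dorman's point is easy to demonstrate with a small simulation (my own construction, not his). Below, the population splits into two unobserved halves on which x1 has opposite effects; pooled OLS dutifully reports the average effect — roughly zero — which is true of nobody in the sample:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two latent groups with OPPOSITE true effects of x1 on y.
x1 = rng.normal(size=n)
group = rng.integers(0, 2, size=n)        # 0 or 1, unobserved
beta = np.where(group == 0, 2.0, -2.0)    # +2 for half, -2 for half
y = beta * x1 + rng.normal(size=n)

# Pooled OLS with a constant: y = b0 + b1 * x1 + e
X = np.column_stack([np.ones(n), x1])
b0, b1 = np.linalg.lstsq(X, y, rcond=None)[0]
print(round(b1, 2))   # ~0.0 -- the "average effect", true of no one

# Group-wise regressions recover the heterogeneous truth:
slopes = {}
for g in (0, 1):
    m = group == g
    Xg = np.column_stack([np.ones(m.sum()), x1[m]])
    slopes[g] = float(np.linalg.lstsq(Xg, y[m], rcond=None)[0][1])
    print(g, round(slopes[g], 1))   # ≈ +2.0 and -2.0
```

The pooled model satisfies every textbook assumption about its own specification; the failure lies in the homogeneity assumption (3) in Dorman's list — the same β for everyone — which the data never get to contest.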

Limiting model assumptions in economic science always have to be closely examined. If we are going to be able to show that the mechanisms or causes that we isolate and handle in our models are stable, in the sense that they do not change when we “export” them to our “target systems”, we have to be able to show that they hold not only under *ceteris paribus* conditions; otherwise they are of limited value to our understanding, explanations or predictions of real economic systems. As the always eminently quotable Keynes writes (emphasis added) in *A Treatise on Probability* (1921):

The kind of fundamental assumption about the character of material laws, on which scientists appear commonly to act, seems to me to be [that] the system of the material universe must consist of bodies … such that each of them exercises its own separate, independent, and invariable effect, a change of the total state being compounded of a number of separate changes each of which is solely due to a separate portion of the preceding state … Yet there might well be quite different laws for wholes of different degrees of complexity, and laws of connection between complexes which could not be stated in terms of laws connecting individual parts … If different wholes were subject to different laws qua wholes and not simply on account of and in proportion to the differences of their parts, knowledge of a part could not lead, it would seem, even to presumptive or probable knowledge as to its association with other parts … These considerations do not show us a way by which we can justify induction … [p. 427] No one supposes that a good induction can be arrived at merely by counting cases. The business of strengthening the argument chiefly consists in determining whether the alleged association is stable, when accompanying conditions are varied … [p. 468] In my judgment, the practical usefulness of those modes of inference … on which the boasted knowledge of modern science depends, can only exist … if the universe of phenomena does in fact present those peculiar characteristics of *atomism* and *limited variety* which appear more and more clearly as the ultimate result to which material science is tending.

Econometrics may be an informative tool for research. But if its practitioners do not investigate and make an effort to provide a justification for the credibility of the assumptions on which they erect their building, it will not fulfill its tasks. There is a gap between its aspirations and its accomplishments, and without more supportive evidence to substantiate its claims, critics will continue to consider its ultimate arguments a mixture of rather unhelpful metaphors and metaphysics. Maintaining that economics is a science in the “true knowledge” business, yours truly remains a skeptic of the pretences and aspirations of econometrics. So far, I cannot really see that it has yielded very much in terms of relevant, interesting economic knowledge.

The marginal return on its ever higher technical sophistication in no way makes up for the lack of serious under-labouring of its deeper philosophical and methodological foundations that Keynes already complained about. The rather one-sided emphasis on usefulness and its concomitant instrumentalist justification cannot hide that neither Haavelmo, nor the legions of probabilistic econometricians following in his footsteps, give supportive evidence for their considering it “fruitful to believe” in the possibility of treating unique economic data as the observable results of random drawings from an imaginary sampling of an imaginary population. After having analyzed some of its ontological and epistemological foundations, I cannot but conclude that econometrics on the whole has not delivered “truth”. And I doubt if it has ever been the intention of its main protagonists.

Our admiration for technical virtuosity should not blind us to the fact that we have to have a cautious attitude towards probabilistic inferences in economic contexts. Science should help us penetrate to the causal process lying behind events and disclose the causal forces behind what appears to be simple facts. We should look out for causal relations, but econometrics can never be more than a starting point in that endeavour, since econometric (statistical) explanations are not explanations in terms of mechanisms, powers, capacities or causes. Firmly stuck in an empiricist tradition, econometrics is only concerned with the measurable aspects of reality. But there is always the possibility that there are other variables – of vital importance and, although perhaps **unobservable** and **non-additive**, not necessarily epistemologically inaccessible – that were not considered for the model. The variables that were included can hence never be guaranteed to be more than potential, and not real, causes. A rigorous application of econometric methods in economics really presupposes that the phenomena of our real world economies are ruled by stable causal relations between variables. A perusal of the leading econom(etr)ic journals shows that most econometricians still concentrate on fixed parameter models and that parameter-values estimated in specific spatio-temporal contexts are presupposed to be exportable to totally different contexts. To warrant this assumption one, however, has to convincingly establish that the targeted acting causes are stable and invariant so that they maintain their parametric status after the bridging. The endemic lack of predictive success of the econometric project indicates that this hope of finding fixed parameters is a hope for which there really is no other ground than hope itself.
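The exportability problem can be made concrete with a toy simulation of my own (the two "regimes" and their slopes are invented for illustration): a parameter estimated precisely in one spatio-temporal context is carried over to a context where the underlying structure has shifted, and the "exported" model then predicts worse than no model at all.

```python
import numpy as np

rng = np.random.default_rng(1)

# Regime 1 ("estimation sample"): y responds to x with slope +1.0.
x_in = rng.normal(size=500)
y_in = 1.0 * x_in + 0.3 * rng.normal(size=500)
slope = float(np.polyfit(x_in, y_in, 1)[0])   # precisely estimated

# Regime 2 ("target system"): the structure has shifted to -0.5.
x_out = rng.normal(size=500)
y_out = -0.5 * x_out + 0.3 * rng.normal(size=500)

# Export the regime-1 parameter to regime 2 and compare with the
# naive forecast of zero (i.e., using no model at all).
pred = slope * x_out
rmse_export = float(np.sqrt(np.mean((y_out - pred) ** 2)))
rmse_naive = float(np.sqrt(np.mean(y_out ** 2)))

print(round(slope, 2))            # ≈ 1.0: excellent in-sample fit
print(rmse_export > rmse_naive)   # True: the exported parameter
                                  # forecasts worse than predicting zero
```

Nothing in the regime-1 data warns the econometrician that the estimated parameter will not survive the bridging; that warrant has to come from outside the statistics.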

Real world social systems are not governed by stable causal mechanisms or capacities. The kinds of “laws” and relations that econometrics has established, are laws and relations about entities in models that presuppose causal mechanisms being **atomistic** and **additive**. When causal mechanisms operate in real world social target systems they only do it in ever-changing and unstable combinations where the whole is more than a mechanical sum of parts. If economic regularities obtain they do it (as a rule) only because we engineered them for that purpose. Outside man-made “nomological machines” they are rare, or even non-existent. Unfortunately that also makes most of the achievements of econometrics – as with most contemporary endeavours in mainstream economic theoretical modeling – rather useless.

Remember that a model is not the truth. It is a lie to help you get your point across. And in the case of modeling economic risk, your model is a lie about others, who are probably lying themselves. And what’s worse than a simple lie? A complicated lie.

Sam L. Savage, *The Flaw of Averages*

## Till Isagel

19 Jul, 2014 at 09:54 | Posted in Varia | Comments Off on Till Isagel

Wonderful musical settings of verse by perhaps our foremost linguistic acrobat among poets — Harry Martinson

[h/t Jan Milch]

## Chicago Follies (X)

19 Jul, 2014 at 08:34 | Posted in Economics | Comments Off on Chicago Follies (X)

Although I never believed it when I was young and held scholars in great respect, it does seem to be the case that ideology plays a large role in economics. How else to explain Chicago’s acceptance of not only general equilibrium but a particularly simplified version of it as ‘true’ or as a good enough approximation to the truth? Or how to explain the belief that the only correct models are linear and that the von Neumann prices are those to which actual prices converge pretty smartly? This belief unites Chicago and the Classicals; both think that the ‘long-run’ is the appropriate period in which to carry out analysis. There is no empirical or theoretical proof of the correctness of this. But both camps want to make an ideological point. To my mind that is a pity since clearly it reduces the credibility of the subject and its practitioners.

**Out of the 74 persons that have been awarded “The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel,” 28 — almost 40% — have been affiliated with the University of Chicago.**

**The world is really a small place when it comes to economics …**
