Statistical inference

30 January, 2014 at 15:40 | Posted in Statistics & Econometrics | Comments Off on Statistical inference

Sampling distributions are the key to understanding inferential statistics. Once you’ve grasped how we use sampling distributions to make hypothesis testing possible, you’ve understood the most important part of the logic of statistical inference — and the rest is really just a piece of cake!
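
Just to make that logic concrete, here is a minimal simulation sketch (my own toy numbers, nothing from the post itself): we approximate the sampling distribution of the mean under a null hypothesis and then ask how surprising an observed sample mean would be if that null were true.

```python
import numpy as np

rng = np.random.default_rng(42)

# Null hypothesis: the population mean is 100; the population sd is assumed known (15).
mu0, sigma, n = 100.0, 15.0, 25

# Approximate the sampling distribution of the sample mean under the null
# by drawing many samples of size n and recording each sample's mean.
sample_means = rng.normal(mu0, sigma, size=(100_000, n)).mean(axis=1)

# Suppose we then observe one real sample of size n with mean 106.
observed_mean = 106.0

# Two-sided p-value: how often a sample mean at least this far from mu0
# shows up in the sampling distribution when the null is true.
p_value = np.mean(np.abs(sample_means - mu0) >= abs(observed_mean - mu0))
print(f"simulated p-value: {p_value:.3f}")  # close to the textbook z-test value of about 0.046
```

Every classical test is, at bottom, a variation on this comparison of an observed statistic with its sampling distribution under the null.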

We Praise Thee

30 January, 2014 at 14:36 | Posted in Varia | Comments Off on We Praise Thee

 

Preferences that make economic models explode

30 January, 2014 at 08:45 | Posted in Economics | 4 Comments

Commenting on experiments — showing time-preferences-switching framing effects — performed by experimental economist David Eil, Noah Smith writes:

Now, here’s the thing…it gets worse … I’ve heard whispers that a number of researchers have done experiments in which choices can be re-framed in order to obtain the dreaded negative time preferences, where people actually care more about the future than the present! Negative time preferences would cause most of our economic models to explode, and if these preferences can be created with simple re-framing, then it bodes ill for the entire project of trying to model individuals’ choices over time.

This matters a lot for finance research. One of the big questions facing finance researchers is why asset prices bounce around so much. The two most common answers are A) time-varying risk premia, and B) behavioral “sentiment”. But Eil’s result, and other results like it, could be bad news for both efficient-market theory and behavioral finance. Because if aggregate preferences themselves are unstable due to a host of different framing effects, then time-varying risk premia can’t be modeled in any intelligible way, nor can behavioral sentiment be measured. In other words, the behavior of asset prices may truly be inexplicable (since we can’t observe all the multitude of things that might cause framing effects).

It’s a scary thought to contemplate, but to dismiss the results of experiments like Eil’s would be a mistake! It may turn out that the whole way modern economics models human behavior is good only in some situations, and not in others.

Bad news indeed. But hardly new.

In neoclassical theory preferences are standardly expressed in the form of a utility function. But although the expected utility theory has been known for a long time to be both theoretically and descriptively inadequate, neoclassical economists all over the world gladly continue to use it, as though its deficiencies were unknown or unheard of.

What most of them try to do in the face of the obvious theoretical and behavioural inadequacies of expected utility theory is to marginally mend it. But that cannot be the right attitude when facing scientific anomalies. When models are plainly wrong, you’d better replace them! As Matthew Rabin and Richard Thaler have it in Risk Aversion:

It is time for economists to recognize that expected utility is an ex-hypothesis, so that we can concentrate our energies on the important task of developing better descriptive models of choice under uncertainty.

In his modern classic Risk Aversion and Expected-Utility Theory: A Calibration Theorem, Matthew Rabin writes:

Using expected-utility theory, economists model risk aversion as arising solely because the utility function over wealth is concave. This diminishing-marginal-utility-of-wealth theory of risk aversion is psychologically intuitive, and surely helps explain some of our aversion to large-scale risk: We dislike vast uncertainty in lifetime wealth because a dollar that helps us avoid poverty is more valuable than a dollar that helps us become very rich.

Yet this theory also implies that people are approximately risk neutral when stakes are small … While most economists understand this formal limit result, fewer appreciate that the approximate risk-neutrality prediction holds not just for negligible stakes, but for quite sizable and economically important stakes. Economists often invoke expected-utility theory to explain substantial (observed or posited) risk aversion over stakes where the theory actually predicts virtual risk neutrality. While not broadly appreciated, the inability of expected-utility theory to provide a plausible account of risk aversion over modest stakes has become oral tradition among some subsets of researchers, and has been illustrated in writing in a variety of different contexts using standard utility functions …

Expected-utility theory is manifestly not close to the right explanation of risk attitudes over modest stakes. Moreover, when the specific structure of expected-utility theory is used to analyze situations involving modest stakes — such as in research that assumes that large-stake and modest-stake risk attitudes derive from the same utility-for-wealth function — it can be very misleading.
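
To see how stark Rabin’s point is in numbers, here is a toy calculation (my own illustrative figures, assuming a CRRA utility-of-wealth function rather than Rabin’s own calibration): over small stakes the expected-utility agent is almost exactly risk neutral, and the concavity only starts to bite when the stakes get large.

```python
import numpy as np

def crra(w, gamma):
    """CRRA utility of wealth w; gamma is the coefficient of relative risk aversion."""
    return np.log(w) if gamma == 1.0 else w ** (1.0 - gamma) / (1.0 - gamma)

def certainty_equivalent(wealth, stake, gamma, p_win=0.5, win_mult=1.1):
    """Certainty equivalent of a 50/50 gamble: lose `stake` or win `win_mult * stake`."""
    eu = (p_win * crra(wealth + win_mult * stake, gamma)
          + (1.0 - p_win) * crra(wealth - stake, gamma))
    # invert the CRRA utility to recover the certainty-equivalent level of wealth
    ce_wealth = np.exp(eu) if gamma == 1.0 else ((1.0 - gamma) * eu) ** (1.0 / (1.0 - gamma))
    return ce_wealth - wealth

wealth, gamma = 50_000.0, 2.0  # assumed initial wealth and curvature, purely illustrative
for stake in (10, 100, 1_000, 10_000):
    ev = 0.5 * 1.1 * stake - 0.5 * stake             # expected value of the gamble
    ce = certainty_equivalent(wealth, stake, gamma)  # what the gamble is worth to the agent
    print(f"stake {stake:>6}: expected value {ev:9.2f}, certainty equivalent {ce:9.2f}")
```

For the small gambles the certainty equivalent is practically identical to the expected value. So if people demonstrably turn down small favourable gambles, that behaviour cannot be explained by the curvature of a utility-of-wealth function without absurd implications for large gambles, which is precisely the thrust of Rabin’s calibration theorem.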

In a similar vein, Daniel Kahneman writes — in Thinking, Fast and Slow — that expected utility theory is seriously flawed since it doesn’t take into consideration the basic fact that people’s choices are influenced by changes in their wealth. Where standard microeconomic theory assumes that preferences are stable over time, Kahneman and other behavioural economists have shown again and again that preferences aren’t fixed, but vary with different reference points. How can a theory that doesn’t allow for people having different reference points from which they consider their options have an almost axiomatic status within economic theory?

The mystery is how a conception of the utility of outcomes that is vulnerable to such obvious counterexamples survived for so long. I can explain it only by a weakness of the scholarly mind … I call it theory-induced blindness: once you have accepted a theory and used it as a tool in your thinking it is extraordinarily difficult to notice its flaws … You give the theory the benefit of the doubt, trusting the community of experts who have accepted it … But they did not pursue the idea to the point of saying, “This theory is seriously wrong because it ignores the fact that utility depends on the history of one’s wealth, not only present wealth.”
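
What reference dependence means can be put in a minimal sketch (my own illustration, using the general Kahneman–Tversky functional form with their commonly cited parameter values): the very same final wealth is experienced quite differently depending on whether it is reached from below or from above.

```python
def kt_value(x, alpha=0.88, beta=0.88, loss_aversion=2.25):
    """Kahneman-Tversky style value of a gain or loss x measured from a reference point.
    The parameter values are the commonly cited 1992 estimates, used purely for illustration."""
    return x ** alpha if x >= 0 else -loss_aversion * (-x) ** beta

final_wealth = 100_000

# The same final wealth, reached from two different reference points:
for reference in (90_000, 110_000):
    change = final_wealth - reference
    print(f"reference {reference}: change {change:+}, felt value {kt_value(change):+.1f}")

# A gain of 10,000 is felt as roughly +3,300 "value units", while a loss of the same
# size is felt as roughly -7,500 -- what matters is the change relative to the
# reference point, not the level of final wealth itself.
```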

The works of people like Rabin, Thaler and Kahneman show that expected utility theory is indeed transmogrifying truth. It’s an “ex-hypothesis” — or as Monty Python has it:

This parrot is no more! He has ceased to be! ‘E’s expired and gone to meet ‘is maker! ‘E’s a stiff! Bereft of life, ‘e rests in peace! If you hadn’t nailed ‘im to the perch ‘e’d be pushing up the daisies! ‘Is metabolic processes are now ‘istory! ‘E’s off the twig! ‘E’s kicked the bucket, ‘e’s shuffled off ‘is mortal coil, run down the curtain and joined the bleedin’ choir invisible!! THIS IS AN EX-PARROT!!

Encounters with R. A. Fisher

29 January, 2014 at 19:46 | Posted in Statistics & Econometrics | Comments Off on Encounters with R. A. Fisher

 

Keynes & MMT

29 January, 2014 at 18:39 | Posted in Economics | 4 Comments

[Bendixen says the] old ‘metallist’ view of money is superstitious, and Dr. Bendixen trounces it with the vigour of a convert. Money is the creation of the State; it is not true to say that gold is international currency, for international contracts are never made in terms of gold, but always in terms of some national monetary unit; there is no essential or important distinction between notes and metallic money; money is the measure of value, but to regard it as having value itself is a relic of the view that the value of money is regulated by the value of the substance of which it is made, and is like confusing a theatre ticket with the performance. With the exception of the last, the only true interpretation of which is purely dialectical, these ideas are undoubtedly of the right complexion. It is probably true that the old ‘metallist’ view and the theories of regulation of note issue based on it do greatly stand in the way of currency reform, whether we are thinking of economy and elasticity or of a change in the standard; and a gospel which can be made the basis of a crusade on these lines is likely to be very useful to the world, whatever its crudities or terminology.

J. M. Keynes, “Theorie des Geldes und der Umlaufsmittel. by Ludwig von Mises; Geld und Kapital. by Friedrich Bendixen” (review), Economic Journal, 1914

Is calibration really a scientific advance? I’ll be dipped!

29 January, 2014 at 12:47 | Posted in Economics | Comments Off on Is calibration really a scientific advance? I’ll be dipped!

Noah Smith had a post up yesterday lamenting Nobel laureate Ed Prescott:

The 2004 prize went partly to Ed Prescott, the inventor of Real Business Cycle theory. That theory assumes that monetary policy doesn’t have an effect on GDP. Since RBC theory came out in 1982, a number of different people have added “frictions” to the model to make it so that monetary policy does have real effects. But Prescott has stayed true to the absolutist view that no such effects exist. In an email to a New York Times reporter, he very recently wrote the following:

“It is an established scientific fact that monetary policy has had virtually no effect on output and employment in the U.S. since the formation of the Fed,” Professor Prescott, also on the faculty of Arizona State University, wrote in an email. Bond buying [by the Fed], he wrote, “is as effective in bringing prosperity as rain dancing is in bringing rain.”

Wow! Prescott definitely falls into the category of people whom Miles Kimball and I referred to as “purist” Freshwater macroeconomists. Prescott has made some…odd…claims in recent years, but these recent remarks were totally consistent with his prize-winning research.

Odd claims indeed. True. There are many kinds of useless economics held in high regard within the mainstream economics establishment today. Few – if any – deserve it less than the macroeconomic theory/method — mostly connected with Nobel laureates Finn Kydland, Robert Lucas, Edward Prescott and Thomas Sargent — called calibration.

Interviewed by Seppo Honkapohja and George W. Evans — in Macroeconomic Dynamics (2005, vol. 9) — Thomas Sargent answered the question whether calibration was an advance in macroeconomics:

In many ways, yes … The unstated case for calibration was that it was a way to continue the process of acquiring experience in matching rational expectations models to data by lowering our standards relative to maximum likelihood, and emphasizing those features of the data that our models could capture. Instead of trumpeting their failures in terms of dismal likelihood ratio statistics, celebrate the features that they could capture and focus attention on the next unexplained feature that ought to be explained. One can argue that this was a sensible response… a sequential plan of attack: let’s first devote resources to learning how to create a range of compelling equilibrium models to incorporate interesting mechanisms. We’ll be careful about the estimation in later years when we have mastered the modelling technology…
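
To make concrete what “lowering our standards relative to maximum likelihood” amounts to in practice, here is a toy sketch (entirely my own, not from the interview): a single parameter of an AR(1) “model economy” is calibrated by matching one chosen moment of the data, whereas estimation combined with a simple diagnostic check immediately signals that the model is misspecified.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Data": output deviations generated by a process that is deliberately NOT an AR(1).
T = 200
data = np.zeros(T)
for t in range(2, T):
    data[t] = 0.5 * data[t - 1] + 0.3 * data[t - 2] + rng.normal(0.0, 1.0)

# Model: y_t = rho * y_{t-1} + eps_t.  "Calibration" picks rho so that the model
# reproduces one chosen feature of the data -- here the first-order autocorrelation.
rho_calibrated = np.corrcoef(data[1:], data[:-1])[0, 1]

# Estimation (OLS, the conditional MLE for a Gaussian AR(1)) asks more of the model:
# it also delivers residuals whose properties can be checked against the data.
rho_estimated = np.sum(data[1:] * data[:-1]) / np.sum(data[:-1] ** 2)
residuals = data[1:] - rho_estimated * data[:-1]
residual_autocorr = np.corrcoef(residuals[1:], residuals[:-1])[0, 1]

print(f"calibrated rho: {rho_calibrated:.3f}")
print(f"estimated  rho: {rho_estimated:.3f}")
print(f"residual autocorrelation (about 0 if the AR(1) were right): {residual_autocorr:.3f}")
```

Calibration is guaranteed to reproduce the feature it was tuned to, and remains silent about everything it leaves out; that is exactly why dismal likelihood ratio statistics never get trumpeted.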

But is the Lucas-Kydland-Prescott-Sargent calibration really an advance?

Let’s see what two eminent econometricians have to say. In the Journal of Economic Perspectives (1996, vol. 10), Lars Peter Hansen and James J. Heckman write:

It is only under very special circumstances that a micro parameter such as the inter-temporal elasticity of substitution or even a marginal propensity to consume out of income can be ‘plugged into’ a representative consumer model to produce an empirically concordant aggregate model … What credibility should we attach to numbers produced from their ‘computational experiments’, and why should we use their ‘calibrated models’ as a basis for serious quantitative policy evaluation? … There is no filing cabinet full of robust micro estimates ready to use in calibrating dynamic stochastic equilibrium models … The justification for what is called ‘calibration’ is vague and confusing.

Error-probabilistic statistician Aris Spanos — in Error and Inference (Mayo & Spanos, 2010, p. 240) — is no less critical:

Given that “calibration” purposefully forsakes error probabilities and provides no way to assess the reliability of inference, how does one assess the adequacy of the calibrated model? …

The idea that it should suffice that a theory “is not obscenely at variance with the data” (Sargent, 1976, p. 233) is to disregard the work that statistical inference can perform in favor of some discretional subjective appraisal … it hardly recommends itself as an empirical methodology that lives up to the standards of scientific objectivity

And this is the verdict of Paul Krugman:

The point is that if you have a conceptual model of some aspect of the world, which you know is at best an approximation, it’s OK to see what that model would say if you tried to make it numerically realistic in some dimensions.

But doing this gives you very little help in deciding whether you are more or less on the right analytical track. I was going to say no help, but it is true that a calibration exercise is informative when it fails: if there’s no way to squeeze the relevant data into your model, or the calibrated model makes predictions that you know on other grounds are ludicrous, something was gained. But no way is calibration a substitute for actual econometrics that tests your view about how the world works.

In physics it may possibly not be straining credulity too much to model processes as ergodic – where time and history do not really matter – but in social and historical sciences it is obviously ridiculous. If societies and economies were ergodic worlds, why do econometricians fervently discuss things such as structural breaks and regime shifts? That they do is an indication of how unrealistic it is to treat open systems as analyzable with ergodic concepts.
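
The point is easy to illustrate with a toy simulation (my own example): in a simple multiplicative process the ensemble average over many “parallel” histories looks favourable, while the typical individual history through time shrinks, so cross-sectional averages tell us next to nothing about what happens along any actual path.

```python
import numpy as np

rng = np.random.default_rng(1)

# Each period wealth is multiplied by 1.5 or 0.6 with equal probability.
# Ensemble (one-period) expectation: 0.5 * 1.5 + 0.5 * 0.6 = 1.05  -> looks favourable.
# Time-average growth factor:        sqrt(1.5 * 0.6) ~= 0.95       -> a typical path shrinks.
up, down, periods, paths = 1.5, 0.6, 50, 100_000

factors = rng.choice([up, down], size=(paths, periods))
final_wealth = factors.prod(axis=1)  # final wealth of each path, starting from 1.0

print(f"ensemble average of final wealth: {final_wealth.mean():10.2f}")  # pulled up by a few lucky paths
print(f"median final wealth:              {np.median(final_wealth):10.4f}")
print(f"share of paths ending above 1.0:  {np.mean(final_wealth > 1.0):10.2%}")
```

When time averages and ensemble averages come apart like this, the ergodic shortcut of treating a single history as interchangeable with a cross-section simply breaks down.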

The future is not reducible to a known set of prospects. It is not like sitting at the roulette table and calculating what the future outcomes of spinning the wheel will be. Reading Sargent and other calibrationists one comes to think of Robert Clower’s apt remark that

much economics is so far removed from anything that remotely resembles the real world that it’s often difficult for economists to take their own subject seriously.

Instead of assuming calibration and rational expectations to be right, one ought to confront these hypotheses with the available evidence. It is not enough to construct models. Anyone can construct models. To be seriously interesting, models have to come with an aim. They have to have an intended use. If the intention of calibration and rational expectations is to help us explain real economies, they have to be evaluated from that perspective. A model or hypothesis without a specific applicability does not really deserve our interest.

To say, as Edward Prescott does, that

one can only test if some theory, whether it incorporates rational expectations or, for that matter, irrational expectations, is or is not consistent with observations

is not enough. Without strong evidence all kinds of absurd claims and nonsense may pretend to be science. We have to demand more of a justification than this rather watered-down version of “anything goes” when it comes to rationality postulates. If one proposes rational expectations, one also has to support its underlying assumptions. No such support is given, which makes it rather puzzling how rational expectations has become the standard modeling assumption in much of modern macroeconomics. Perhaps the reason is, as Paul Krugman has it, that economists often mistake

beauty, clad in impressive looking mathematics, for truth.

But I think Prescott’s view is also the reason why calibration economists are not particularly interested in empirical examinations of how real choices and decisions are made in real economies. In the hands of Lucas, Prescott and Sargent, rational expectations has been transformed from an – in principle – testable hypothesis into an irrefutable proposition. Irrefutable propositions may be comfortable – like religious convictions or ideological dogmas – but they are not science.

Or as Noah Smith puts it:

But OK, suppose for a moment – just imagine – that somewhere, on some other planet, there was a group of alien macroeconomists who made a bunch of theories that were completely wrong, and were not even close to anything that could actually describe the business cycles on that planet. And suppose that the hypothetical aliens kept comparing their nonsense theories to data, and they kept getting rejected by the data, but the aliens still found the nonsense theories very cool and very a priori convincing, and they kept at it, finding “puzzles”, estimating parameter values, making slightly different nonsense models, etc., in a neverending cycle of brilliant non-discovery.

Now tell me: In principle, how should those aliens tell the difference between their situation, and our own? That’s the question that I think we need to be asking, and that a number of people on the “periphery” of macro are now asking out loud.

The deregulated railway system has broken down — appoint a crisis commission!

28 January, 2014 at 12:41 | Posted in Politics & Society | 2 Comments

Yours truly — together with, among others, Jan Du Rietz, Sven Jernberg and Hans Albin Larsson — has an article today in Göteborgs-Posten on the sorry state of Swedish railways:

The state has “invested” a couple of billion kronor, in today’s money, on closing down and tearing up lines, triangle junctions and passing loops, which has reduced the railway’s capacity and produced a rigid, disruption-prone traffic system. Together with often poorly maintained and unsuitable trains and insufficient redundancy in technical support systems, this has led to cancelled and delayed trains, which is costly for passengers, business and society. All these problems are, moreover, driving more and more customers away from rail.

The creation of Trafikverket threatens to be the death blow for the railway. When it comes to choosing between road and rail, the agency has been given a political task that is impossible for a public authority. Nor is Trafikverket charged with restoring the railway system, and it lacks the competence to do so.

A crisis commission with extraordinary powers, able to concentrate on the railway’s problems, is needed. Among other things, the government should bring in experts from well-functioning “railway countries” to manage the necessary restructuring of the entire railway system.

Ode à la patrie

26 January, 2014 at 16:28 | Posted in Varia | 2 Comments

 

The failure of DSGE macroeconomics

24 January, 2014 at 10:56 | Posted in Economics | 5 Comments

As 2014 begins, it’s clear enough that any theory in which mass unemployment or (in the US case) withdrawal from the labour force can only occur in the short run is inconsistent with the evidence. Given that unions are weaker than they have been for a century or so, and that severe cuts to social welfare benefits have been imposed in most countries, the traditional rightwing explanation that labour market inflexibility [arising from minimum wage laws or unions] is the cause of unemployment appeals only to ideologues (who are, unfortunately, plentiful) …

After the Global Financial Crisis, it became clear that the concessions made by the New Keynesians were ill-advised in both theoretical and political terms. In theoretical terms, the DSGE models developed during the spurious “Great Moderation” were entirely inconsistent with the experience of the New Depression. The problem was not just a failure of prediction: the models simply did not allow for depressions that permanently shift the economy from its previous long term growth path. In political terms, it turned out that the seeming convergence with the New Classical school was an illusion. Faced with the need to respond to the New Depression, most of the New Classical school retreated to pre-Keynesian positions based on versions of Say’s Law (supply creates its own demand) that Say himself would have rejected, and advocated austerity policies in the face of overwhelming evidence that they were not working …

Relative to DSGE, the key point is that there is no unique long-run equilibrium growth path, determined by technology and preferences, to which the economy is bound to return. In particular, the loss of productive capacity, skills and so on in the current depression is, for all practical purposes, permanent. But if there is no exogenously determined (though maybe still stochastic) growth path for the economy, economic agents (workers and firms) can’t make the kind of long-term plans required of them in standard life-cycle models. They have to rely on heuristics and rules of thumb … This is, in my view, the most important point made by post-Keynesians and ignored by Old Old Keynesians.

John Quiggin/Crooked Timber
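
The distinction Quiggin draws can be put in a small numerical sketch (my own illustration, with made-up numbers): in a trend-stationary world a slump is eventually undone and the economy climbs back to its old growth path, whereas with a unit root the very same slump lowers the path permanently.

```python
import numpy as np

rng = np.random.default_rng(7)
T, trend_growth = 120, 0.005       # 30 years of quarters, 0.5% trend growth per quarter
shock = np.zeros(T)
shock[40] = -0.10                  # a one-off "crisis" of -10% hitting in quarter 40
noise = rng.normal(0.0, 0.002, T)  # small ordinary disturbances

# Trend-stationary: deviations from the deterministic path die out (persistence 0.9),
# so the economy eventually climbs back to the same long-run growth path.
dev = np.zeros(T)
for t in range(1, T):
    dev[t] = 0.9 * dev[t - 1] + shock[t] + noise[t]
log_y_trend_stationary = trend_growth * np.arange(T) + dev

# Unit root (random walk with drift): every shock, the crisis included, is permanent,
# so there is no unique long-run path to which the economy is bound to return.
log_y_unit_root = np.cumsum(trend_growth + shock + noise)

gap = log_y_trend_stationary[-1] - log_y_unit_root[-1]
print(f"end-of-sample gap between the two paths: {gap:+.3f} log points")
# The trend-stationary economy has essentially made the crisis good again, while the
# unit-root economy remains roughly ten per cent below where it would otherwise have been.
```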

On DSGE and the art of using absolutely ridiculous modeling assumptions

23 January, 2014 at 23:08 | Posted in Economics, Theory of Science & Methodology | 4 Comments

Reading some of the comments — by Noah Smith, David Andolfatto and others — on my post Why Wall Street shorts economists and their DSGE models, I — as usual — get the feeling that mainstream economists, when facing anomalies, think that there is always some further “technical fix” that will get them out of the quagmire. But are these elaborations of and amendments to something basically wrong really going to solve the problem? I doubt it. Acting like the baker’s apprentice who, having forgotten to add yeast to the dough, throws it into the oven afterwards, simply isn’t enough.

When criticizing the basic workhorse DSGE model for its inability to explain involuntary unemployment, some DSGE defenders maintain that later elaborations — e.g. newer search models — manage to do just that. I strongly disagree. One of the more conspicuous problems with those “solutions” is that they — as e.g. Pissarides’ “Loss of Skill during Unemployment and the Persistence of Unemployment Shocks”, QJE (1992) — are as a rule constructed without seriously trying to warrant that the model-immanent assumptions and results are applicable in the real world. External validity is more or less a non-existent problematique, sacrificed on the altar of model derivations. This is not by chance. For how could one even imagine empirically testing assumptions such as Pissarides’ “model 1” assumptions that reality is adequately represented by “two overlapping generations of fixed size”, “wages determined by Nash bargaining”, “actors maximizing expected utility”, “endogenous job openings” and “job matching describable by a probability distribution”, without coming to the conclusion that this is — in terms of realism and relevance — nothing but nonsense on stilts?

The whole strategy reminds me not a little of the following tale:

Time after time you hear people speaking in baffled terms about mathematical models that somehow didn’t warn us in time, that were too complicated to understand, and so on. If you have somehow missed such public displays of throwing the model (and quants) under the bus, stay tuned below for examples.

But this is far from the case – most of the really enormous failures of models are explained by people lying …

A common response to these problems is to call for those models to be revamped, to add features that will cover previously unforeseen issues, and generally speaking, to make them more complex.

For a person like myself, who gets paid to “fix the model,” it’s tempting to do just that, to assume the role of the hero who is going to set everything right with a few brilliant ideas and some excellent training data.

Unfortunately, reality is staring me in the face, and it’s telling me that we don’t need more complicated models.

If I go to the trouble of fixing up a model, say by adding counterparty risk considerations, then I’m implicitly assuming the problem with the existing models is that they’re being used honestly but aren’t mathematically up to the task.

If we replace okay models with more complicated models, as many people are suggesting we do, without first addressing the lying problem, it will only allow people to lie even more. This is because the complexity of a model itself is an obstacle to understanding its results, and more complex models allow more manipulation …

I used to work at Riskmetrics, where I saw first-hand how people lie with risk models. But that’s not the only thing I worked on. I also helped out building an analytical wealth management product. This software was sold to banks, and was used by professional “wealth managers” to help people (usually rich people, but not mega-rich people) plan for retirement.

We had a bunch of bells and whistles in the software to impress the clients – Monte Carlo simulations, fancy optimization tools, and more. But in the end, the banks and their wealth managers put in their own market assumptions when they used it. Specifically, they put in the forecast market growth for stocks, bonds, alternative investing, etc., as well as the assumed volatility of those categories and indeed the entire covariance matrix representing how correlated the market constituents are to each other.

The result is this: no matter how honest I would try to be with my modeling, I had no way of preventing the model from being misused and misleading to the clients. And it was indeed misused: wealth managers put in absolutely ridiculous assumptions of fantastic returns with vanishingly small risk.

Cathy O’Neil
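
A toy sketch of the mechanism O’Neil describes (my own made-up numbers, not her firm’s actual software): in a Monte Carlo retirement projection the headline “probability of success” is driven almost entirely by the return and volatility assumptions the wealth manager types in.

```python
import numpy as np

def survival_probability(mean_return, volatility, years=30, start=500_000,
                         annual_withdrawal=30_000, n_sims=20_000, seed=3):
    """Share of simulated paths on which the portfolio outlives `years` of withdrawals,
    given the assumed annual return and volatility (lognormal annual growth)."""
    rng = np.random.default_rng(seed)
    balances = np.full(n_sims, float(start))
    for _ in range(years):
        # choose the log-mean so that the expected gross return equals 1 + mean_return
        mu = np.log1p(mean_return) - 0.5 * volatility ** 2
        growth = rng.lognormal(mean=mu, sigma=volatility, size=n_sims)
        balances = np.maximum(balances * growth - annual_withdrawal, 0.0)
    return np.mean(balances > 0)

# Same client, same withdrawals -- only the manager's typed-in assumptions differ.
print(f"sober assumptions   (4% return, 18% vol): "
      f"{survival_probability(0.04, 0.18):.0%} chance of not running out")
print(f"fantasy assumptions (9% return,  8% vol): "
      f"{survival_probability(0.09, 0.08):.0%} chance of not running out")
```

The model itself can be perfectly honest; the assumptions fed into it decide whether the client is told “you are fine” or “you will run out of money.”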
