‘The greatest mathematical discovery of all time’

15 May, 2015 at 13:59 | Posted in Economics | 3 comments

 

Simple. Beautiful. Einstein was right.

DSGE quagmire

15 May, 2015 at 08:52 | Posted in Economics | 2 comments

Given that unions are weaker than they have been for a century or so, and that severe cuts to social welfare benefits have been imposed in most countries, the traditional rightwing explanation that labour market inflexibility [arising from minimum wage laws or unions] is the cause of unemployment appeals only to ideologues (who are, unfortunately, plentiful) …

After the Global Financial Crisis, it became clear that the concessions made by the New Keynesians were ill-advised in both theoretical and political terms. In theoretical terms, the DSGE models developed during the spurious “Great Moderation” were entirely inconsistent with the experience of the New Depression. The problem was not just a failure of prediction: the models simply did not allow for depressions that permanently shift the economy from its previous long term growth path. In political terms, it turned out that the seeming convergence with the New Classical school was an illusion. Faced with the need to respond to the New Depression, most of the New Classical school retreated to pre-Keynesian positions based on versions of Say’s Law (supply creates its own demand) that Say himself would have rejected, and advocated austerity policies in the face of overwhelming evidence that they were not working …

Relative to DSGE, the key point is that there is no unique long-run equilibrium growth path, determined by technology and preferences, to which the economy is bound to return. In particular, the loss of productive capacity, skills and so on in the current depression is, for all practical purposes, permanent. But if there is no exogenously determined (though maybe still stochastic) growth path for the economy, economic agents (workers and firms) can’t make the kind of long-term plans required of them in standard life-cycle models. They have to rely on heuristics and rules of thumb … This is, in my view, the most important point made by post-Keynesians and ignored by Old Old Keynesians.

John Quiggin

Debating modern economics, yours truly often gets the feeling that mainstream economists, when facing anomalies, think that there is always some further ”technical fix” that will get them out of the quagmire. But are these elaborations and amendments on something basically wrong really going to solve the problem? I doubt it. Acting like the baker’s apprentice who, having forgotten to add yeast to the dough, throws it into the oven afterwards, simply isn’t enough.

When criticizing the basic workhorse DSGE model for its inability to explain involuntary unemployment, some DSGE defenders maintain that later elaborations — e.g. newer search models — manage to do just that. I strongly disagree. One of the more conspicuous problems with those ”solutions” is that they — as e.g. Pissarides’ ”Loss of Skill during Unemployment and the Persistence of Unemployment Shocks” (QJE, 1992) — are as a rule constructed without seriously trying to warrant that the model-immanent assumptions and results are applicable in the real world. External validity is more or less a non-existent problematique sacrificed on the altar of model derivations. This is not by chance. For how could one even imagine empirically testing assumptions such as Pissarides’ ”model 1” assumptions of reality being adequately represented by ”two overlapping generations of fixed size”, ”wages determined by Nash bargaining”, ”actors maximizing expected utility”, ”endogenous job openings” and ”job matching describable by a probability distribution”, without coming to the conclusion that this is — in terms of realism and relevance — nothing but nonsense on stilts?

The whole strategy reminds me not a little of the following story:

Time after time you hear people speaking in baffled terms about mathematical models that somehow didn’t warn us in time, that were too complicated to understand, and so on. If you have somehow missed such public displays of throwing the model (and quants) under the bus, stay tuned below for examples.
But this is far from the case – most of the really enormous failures of models are explained by people lying …
A common response to these problems is to call for those models to be revamped, to add features that will cover previously unforeseen issues, and generally speaking, to make them more complex.

For a person like myself, who gets paid to “fix the model,” it’s tempting to do just that, to assume the role of the hero who is going to set everything right with a few brilliant ideas and some excellent training data.

Unfortunately, reality is staring me in the face, and it’s telling me that we don’t need more complicated models.

If I go to the trouble of fixing up a model, say by adding counterparty risk considerations, then I’m implicitly assuming the problem with the existing models is that they’re being used honestly but aren’t mathematically up to the task.

If we replace okay models with more complicated models, as many people are suggesting we do, without first addressing the lying problem, it will only allow people to lie even more. This is because the complexity of a model itself is an obstacle to understanding its results, and more complex models allow more manipulation …

I used to work at Riskmetrics, where I saw first-hand how people lie with risk models. But that’s not the only thing I worked on. I also helped out building an analytical wealth management product. This software was sold to banks, and was used by professional “wealth managers” to help people (usually rich people, but not mega-rich people) plan for retirement.

We had a bunch of bells and whistles in the software to impress the clients – Monte Carlo simulations, fancy optimization tools, and more. But in the end, the banks and their wealth managers put in their own market assumptions when they used it. Specifically, they put in the forecast market growth for stocks, bonds, alternative investing, etc., as well as the assumed volatility of those categories and indeed the entire covariance matrix representing how correlated the market constituents are to each other.

The result is this: no matter how honest I tried to be with my modeling, I had no way of preventing the model from being misused to mislead the clients. And it was indeed misused: wealth managers put in absolutely ridiculous assumptions of fantastic returns with vanishingly small risk.

Cathy O’Neil
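O’Neil’s mechanism is easy to reproduce. The sketch below is a toy Monte Carlo retirement projection with entirely hypothetical numbers (it has nothing to do with the actual Riskmetrics product); the only thing it shows is how user-supplied return and volatility assumptions drive the headline result:

```python
import numpy as np

def projected_wealth(years, n_paths, mean_return, volatility,
                     start=100_000.0, seed=0):
    """Simulate terminal wealth under i.i.d. normal annual returns (a toy model)."""
    rng = np.random.default_rng(seed)
    annual = rng.normal(mean_return, volatility, size=(n_paths, years))
    return start * np.prod(1.0 + annual, axis=1)

# Sober assumptions: 5% mean return, 18% volatility.
sober = projected_wealth(30, 20_000, 0.05, 0.18)

# 'Fantastic returns with vanishingly small risk': 12% mean, 5% volatility.
rosy = projected_wealth(30, 20_000, 0.12, 0.05)

# Probability of ending below the starting stake after 30 years:
p_loss_sober = float(np.mean(sober < 100_000))
p_loss_rosy = float(np.mean(rosy < 100_000))
```

With the rosy inputs the simulated chance of ever ending below the starting stake all but vanishes, which is exactly the picture an over-optimistic wealth manager would want to show a client, however honest the simulation engine itself is.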

Unbiased econometric estimates — forget about it!

14 May, 2015 at 09:34 | Posted in Economics | 4 comments

Following our recent post on econometricians’ traditional privileging of unbiased estimates, there were a bunch of comments echoing the challenge of teaching this topic, as students as well as practitioners often seem to want the comfort of an absolute standard such as best linear unbiased estimate or whatever. Commenters also discussed the tradeoff between bias and variance, and the idea that unbiased estimates can overfit the data.

I agree with all these things but I just wanted to raise one more point: In realistic settings, unbiased estimates simply don’t exist. In the real world we have nonrandom samples, measurement error, nonadditivity, nonlinearity, etc etc etc.

So forget about it. We’re living in the real world …


It’s my impression that many practitioners in applied econometrics and statistics think of their estimation choice kinda like this:

1. The unbiased estimate. It’s the safe choice, maybe a bit boring and maybe not the most efficient use of the data, but you can trust it and it gets the job done.

2. A biased estimate. Something flashy, maybe Bayesian, maybe not, it might do better but it’s risky. In using the biased estimate, you’re stepping off base—the more the bias, the larger your lead—and you might well get picked off …

If you take the choice above and combine it with the unofficial rule that statistical significance is taken as proof of correctness (in econ, this would also require demonstrating that the result holds under some alternative model specifications, but ”p less than .05” is still key), then you get the following decision rule:

A. Go with the safe, unbiased estimate. If it’s statistically significant, run some robustness checks and, if the result doesn’t go away, stop.

B. If you don’t succeed with A, you can try something fancier. But . . . if you do that, everyone will know that you tried plan A and it didn’t work, so people won’t trust your finding.

So, in a sort of Gresham’s Law, all that remains is the unbiased estimate. But, hey, it’s safe, conservative, etc, right?

And that’s where the present post comes in. My point is that the unbiased estimate does not exist! There is no safe harbor. Just as we can never get our personal risks in life down to zero … there is no such thing as unbiasedness. And it’s a good thing, too: recognition of this point frees us to do better things with our data right away.

Andrew Gelman
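Gelman’s point that ”safe” unbiasedness can cost you accuracy has a textbook illustration: for normal data, the unbiased variance estimator (dividing by n−1) has a higher mean squared error than the biased maximum-likelihood one (dividing by n). A quick simulation makes the trade-off concrete:

```python
import numpy as np

rng = np.random.default_rng(42)
true_var = 4.0
n = 10
samples = rng.normal(0.0, np.sqrt(true_var), size=(100_000, n))

# Unbiased estimator: divide by n-1 (ddof=1).  Biased MLE: divide by n (ddof=0).
s2_unbiased = samples.var(axis=1, ddof=1)
s2_biased = samples.var(axis=1, ddof=0)

mse_unbiased = float(np.mean((s2_unbiased - true_var) ** 2))
mse_biased = float(np.mean((s2_biased - true_var) ** 2))
bias_unbiased = float(np.mean(s2_unbiased) - true_var)
```

The unbiased estimator is centred on the truth, but the deliberately biased one sits closer to it on average: a small systematic error bought a larger reduction in variance. Which is Gelman’s point that privileging unbiasedness is not the risk-free choice it is made out to be.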

Chicago Follies (XII)

13 May, 2015 at 17:33 | Posted in Economics | Comments Off on Chicago Follies (XII)

At the University of Chicago, where I went to graduate school, they sell a t-shirt that says “that’s all well and good in practice, but how does it work in theory?” That ode to nerdiness in the ivory tower captures the state of knowledge about rising wealth inequality, both its causes and its consequences. Economic models of the distribution of wealth tend to assume that it is “stationary.” In other words, some people become wealthier and others become poorer, but as a whole it stays pretty much the same. Yet both of those ideas are empirically wrong: Individual mobility within the wealth distribution is low, and the distribution has become much more unequal over the past several decades …

Economists typically highlight individual or inter-generational mobility within the wealth distribution as both a reason not to care that the distribution itself is unequal and as an argument that having wealthy parents (or not) doesn’t matter that much for children’s outcomes. In fact, an influential model by the late Gary Becker and Nigel Tomes, both of the University of Chicago, predicts that accumulated wealth reduces income inequality because parents who love all their children equally allocate their bequests to compensate for their stupid children’s likely lower earnings potential in the labor market. According to those two authors, families redistribute from the smart to the dumb, and therefore, by implication, governments don’t have to redistribute from the rich to the poor.

But as Thomas Piketty and numerous other scholars point out, those reasons not to care about wealth inequality are not empirically valid. There’s scant evidence that parents leave larger inheritances to stupid children. Nor is there much evidence that native ability is the major determinant of earnings in the labor market or other life outcomes. The weakness of these explanations gets to a much larger question, one of the most important (and unanswered) ones in economics: Why are some people rich while others are poor? What economists are just finding out (while others have known for a while now) is, essentially, “because their parents were.”

Marshall Steinbaum

Minimum wage reality check

13 May, 2015 at 16:57 | Posted in Economics | Comments Off on Minimum wage reality check


In search of causality

13 May, 2015 at 16:33 | Posted in Economics | Comments Off on In search of causality


One of the few statisticians that I have on my blogroll is Andrew Gelman. Although not sharing his Bayesian leanings, yours truly finds his open-minded, thought-provoking and non-dogmatic statistical thinking highly recommendable. The plaidoyer below for ”reverse causal questioning” is typical Gelmanian:

When statistical and econometric methodologists write about causal inference, they generally focus on forward causal questions. We are taught to answer questions of the type ”What if?”, rather than ”Why?” Following the work of Rubin (1977), causal questions are typically framed in terms of manipulations: if x were changed by one unit, how much would y be expected to change? But reverse causal questions are important too … In many ways, it is the reverse causal questions that motivate the research, including experiments and observational studies, that we use to answer the forward questions …

Reverse causal reasoning is different; it involves asking questions and searching for new variables that might not yet even be in our model. We can frame reverse causal questions as model checking. It goes like this: what we see is some pattern in the world that needs an explanation. What does it mean to ”need an explanation”? It means that existing explanations — the existing model of the phenomenon — do not do the job …

By formalizing reverse causal reasoning within the process of data analysis, we hope to make a step toward connecting our statistical reasoning to the ways that we naturally think and talk about causality. This is consistent with views such as Cartwright (2007) that causal inference in reality is more complex than is captured in any theory of inference … What we are really suggesting is a way of talking about reverse causal questions in a way that is complementary to, rather than outside of, the mainstream formalisms of statistics and econometrics.

In a time when scientific relativism is expanding, it is important to keep up the claim for not reducing science to a purely discursive level. We have to maintain the Enlightenment tradition of thinking of reality as principally independent of our views of it and of the main task of science as studying the structure of this reality. Perhaps the most important contribution a researcher can make is to reveal what this reality that is the object of science actually looks like.

Science is made possible by the fact that there are structures that are durable and are independent of our knowledge or beliefs about them. There exists a reality beyond our theories and concepts of it. It is this independent reality that our theories in some way deal with. Contrary to positivism, I would as a critical realist argue that the main task of science is not to detect event-regularities between observed facts. Rather, that task must be conceived as identifying the underlying structure and forces that produce the observed events.


In Gelman’s essay there is  no explicit argument for abduction —  inference to the best explanation — but I would still argue that it is de facto nothing but a very strong argument for why scientific realism and inference to the best explanation are the best alternatives for explaining what’s going on in the world we live in. The focus on causality, model checking, anomalies and context-dependence — although here expressed in statistical terms — is as close to abductive reasoning as we get in statistics and econometrics today.
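Gelman’s framing of reverse causal questions as model checking can be made concrete in a few lines. The sketch below uses toy data with assumed coefficients: a forward model regresses y on x alone, and the residual check then flags the ”pattern in the world that needs an explanation”, namely an omitted variable z that is not yet in the model:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
x = rng.normal(size=n)
z = rng.normal(size=n)          # a variable 'not yet in our model'
y = 1.0 + 2.0 * x + 3.0 * z + rng.normal(scale=0.5, size=n)

# Forward model: regress y on x only, ignoring z.
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ beta

# Model check: the residuals still co-move strongly with z,
# so the existing model 'does not do the job' and z demands to be brought in.
corr_with_z = float(np.corrcoef(residuals, z)[0, 1])
resid_sd = float(residuals.std())
```

The forward question (”what does a unit change in x do to y?”) is answered tolerably well even by the misspecified model; it is the model check on the residuals that raises the reverse causal question of why the unexplained variation looks the way it does.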

Dangers of model simplifications

12 May, 2015 at 23:41 | Posted in Economics | 1 comment

We forget – or willfully ignore – that our models are simplifications of the world …

One of the pervasive risks that we face in the information age … is that even if the amount of knowledge in the world is increasing, the gap between what we know and what we think we know may be widening. This syndrome is often associated with very precise-seeming predictions that are not at all accurate … This is like claiming you are a good shot because your bullets always end up in about the same place — even though they are nowhere near the target …

Financial crises – and most other failures of prediction – stem from this false sense of confidence. Precise forecasts masquerade as accurate ones, and some of us get fooled and double down on our bets.
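Silver’s bullets-versus-target image translates directly into the statistical distinction between precision (low spread) and accuracy (low error). A minimal simulation, with made-up numbers, of two forecasters aiming at the same true value:

```python
import numpy as np

rng = np.random.default_rng(7)
target = 10.0

# A 'precise' forecaster: tightly clustered predictions far from the truth.
precise = rng.normal(13.0, 0.2, size=50_000)

# A noisier but better-calibrated forecaster, centred on the truth.
accurate = rng.normal(10.0, 1.0, size=50_000)

spread_precise = float(precise.std())      # small: the bullets land together
spread_accurate = float(accurate.std())    # larger scatter
rmse_precise = float(np.sqrt(np.mean((precise - target) ** 2)))
rmse_accurate = float(np.sqrt(np.mean((accurate - target) ** 2)))
```

The ”precise” forecaster has by far the smaller spread yet the larger error: looking at the tightness of a model’s predictions tells you nothing about whether they are anywhere near the target.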

The paradox of skill

12 May, 2015 at 09:05 | Posted in Economics | Comments Off on The paradox of skill

 

Lyapunov functions and systems attaining equilibria

11 May, 2015 at 20:31 | Posted in Statistics & Econometrics | 1 comment

 

Hypothesis testing and the importance of checking distribution assumptions

10 May, 2015 at 16:59 | Posted in Statistics & Econometrics | Comments Off on Hypothesis testing and the importance of checking distribution assumptions

 

Lönesänkarna

9 May, 2015 at 15:52 | Posted in Economics | Comments Off on Lönesänkarna

 

Random walks model thinking

9 May, 2015 at 09:48 | Posted in Statistics & Econometrics | 1 comment

 

High time to scrap the surplus target!

8 May, 2015 at 16:34 | Posted in Economics | 3 comments

Sweden has had mass unemployment for over 20 years. Neither the reforms recommended by established economic theory, such as lower unemployment benefits, taxes and union density, nor other changes, such as low real interest rates or expansionary monetary policy, seem sufficient to counter this mass unemployment, still less to create full employment.

If the government is to achieve full employment, strongly expansionary fiscal policy therefore seems to be the only solution. The government should accordingly reorient fiscal policy in a generally more expansionary direction. By definition, this means that the national debt should increase …

All parties in the Riksdag today work for the continuation of the frugal fiscal policy that Sweden has pursued since the early 1990s. But there is no simple relationship between the level of the national debt and the functioning of the economy. The scepticism towards increases in the national debt that seems widespread today rests largely on entirely unwarranted fears about their impact on the rest of the economy, partly rooted in misunderstandings about the Swedish economy of the 1990s.

There is no point in having a target for a low national debt if it does not contribute to employment, growth or anything else desirable. Much suggests that frugal fiscal policy can create serious problems for the economy, in terms of both employment and growth. And much suggests that expansionary fiscal policy is the necessary tool the government must use to reach full employment. The government should therefore abolish the surplus target.

To the extent that there should be explicit long-term targets for public finances and the national debt at all, much speaks for some form of expansion target or deficit target. If there are idle resources, such as the unemployed, the government should see to it that their labour is put to use, by increasing the national debt if so required.

Well roared!

8 May, 2015 at 14:30 | Posted in Economics | Comments Off on Well roared!

Fredrik Andersson, 43, is a teacher at Slottsskolan in Vingåker.
After several years in the profession he has grown tired of slack students. So now he intends to take the gloves off …
He has introduced completely new rules, to put an end to late arrivals and poor performance.
”Everything is supposed to be so easy and smooth for the students nowadays. It should all be fun and games. They are so slack. It must not cost them any effort, and I have been frustrated about this for so long,” says Fredrik Andersson …
The new rules involve, among other things, being extra strict and meticulous with students who behave badly …
He also mentions that he will register every late arrival exceeding one minute and that he will, daily if need be, contact parents when things go wrong.
”I will set and formulate demands in such a way that it will no longer be possible to dodge them,” says Fredrik Andersson.

Expressen

Seven principles to guard you against economics silliness

8 May, 2015 at 09:01 | Posted in Economics | 3 comments

In the increasingly contentious world of pop economics, you … may find yourself in an argument with an economist. And when this happens, you should be prepared, because many of the arguments that may seem at first blush to be very powerful and devastating are, in fact, pretty weak tea …

Principle 1: Credentials are not an argument.

Example: ”You say Theory X is wrong…but don’t you know that Theory X is supported by Nobel Prize winners A, B, and C, not to mention famous and distinguished professors D, E, F, G, and H?”

Suggested Retort: Loud, barking laughter.

Alternative Suggested Retort: ”Richard Feynman said that ‘Science is the belief in the ignorance of experts.’ And you’re not going to argue with HIM, are you?”

Reason You’re Right: Credentials? Gimme a break. Nobody accepts received wisdom from sages these days. Show me the argument!

Principle 2: ”All theories are wrong” is false.

Example: ”Sure, Theory X fails to forecast any variable of interest or match important features of the data. But don’t you know that all models are wrong? I mean, look at Newton’s Laws…THOSE ended up turning out to be wrong, ha ha ha.”

Suggested Retort: Empty an entire can of Silly String onto anyone who says this. (I carry Silly String expressly for this purpose.)

Alternative Suggested Retort: ”Yeah, well, when your theory is anywhere near as useful as Newton’s Laws, come back and see me, K?”

Reason You’re Right: To say models are ”wrong” is fatuous semantics; philosophically, models can only have degrees of predictive power within domains of validity. Newton’s Laws are only ”wrong” if you are studying something very small or moving very fast. For most everyday applications, Newton’s Laws are very, very right.

Principle 3: ”We have theories for that” is not good enough.

Example: ”How can you say that macroeconomists have ignored Phenomenon X? We have theories in which X plays a role! Several, in fact!”

Suggested Retort: ”Then how come no one was paying attention to those theories before Phenomenon X emerged and slapped us upside the head?”

Reason You’re Right: Actually, there are two reasons. Reason 1 is that it is possible to make many models to describe any phenomenon, and thus there is no guarantee that Phenomenon X is correctly described by Theory Y rather than some other theory, unless there is good solid evidence that Theory Y is right, in which case economists should be paying a lot more attention to Theory Y. Reason 2 is that if the profession doesn’t have a good way to choose which theories to apply and when, then simply having a bunch of theories sitting around gathering dust is a little pointless.

Principle 4: Argument by accounting identity almost never works.

Example: ”But your theory is wrong, because Y = C + I + G!”

Suggested Retort: ”If my theory violates an accounting identity, wouldn’t people have noticed that before? Wouldn’t this fact be common knowledge?”

Reason You’re Right: Accounting identities are mostly just definitions. Very rarely do definitions tell us anything useful about the behavior of variables in the real world. The only exception is when you have a very good understanding of the behavior of all but one of the variables in an accounting identity, in which case of course it is useful. But that is a very rare situation indeed.

Principle 5: The Efficient Markets Hypothesis does not automatically render all models useless.

Example: ”But if your model could predict financial crises, then people could use it to conduct a riskless arbitrage; therefore, by the EMH, your model cannot predict financial crises.”

Suggested Retort: ”By your logic, astrophysics can never predict when an asteroid is going to hit the Earth.”

Reason You’re Right: Conditional predictions are different than unconditional predictions. A macro model that is useful for making policy will not say ”Tomorrow X will happen.” It will say ”Tomorrow X will happen unless you do something to stop it.” If policy is taken to be exogenous to a model (a ”shock”), then the EMH does not say anything about whether you can see an event coming and do something about it.

Principle 6: Models that only fit one piece of the data are not very good models.

Example: ”Sure, this model doesn’t fit facts A, B, and C, but it does fit fact D, and therefore it is a ‘laboratory’ that we can use to study the impact of changes in the factors that affect D.”

Suggested Retort: ”Nope!”

Reason You’re Right: Suppose you make a different model to fit each phenomenon. Only if all your models don’t interact will you be able to use each different model to study its own phenomenon. And this is highly unlikely to happen. Also, it’s generally pretty easy to make a large number of different models that fit any one given fact, but very hard to make models that fit a whole bunch of facts at once. For these reasons, many philosophers of science claim that science theories should explain a whole bunch of phenomena in terms of some smaller, simpler subset of underlying phenomena. Or, in other words, wrong theories are wrong.

Principle 7: The message is not the messenger.

Example: ”Well, that argument is being made by Person X, who is obviously just angry/a political hack/ignorant/not a real economist/a commie/stupid/corrupt.”

Suggested Retort: ”Well, now it’s me making the argument! So what are you going to say about me?”

Reason You’re Right: This should be fairly obvious, but people seem to forget it. Even angry hackish ignorant stupid communist corrupt non-economists can make good cogent correct arguments (or, at least, repeat them from some more reputable source!). Arguments should be argued on the merits. This is the converse of Principle 1.

There are, of course, a lot more principles than these … The set of silly things that people can and will say to try to beat an interlocutor down is, well, very large. But I think these seven principles will guard you against much of the worst of the silliness.

Noah Smith

Why the ergodic theorem is not applicable in economics

6 May, 2015 at 15:06 | Posted in Economics | 5 comments

At a realistic level of analysis, Keynes’ claim that some events could have no probability ratios assigned to them can be represented as rejecting the belief that some observed economic phenomena are the outcomes of any stochastic process: probability structures do not even fleetingly exist for many economic events.

In order to apply probability theory, one must assume replicability of the experiment under the same conditions so that, in principle, the moments of the random functions can be calculated over a large number of realizations …

For macroeconomic functions it can be claimed that only a single realization exists since there is only one actual economy; hence there are no cross-sectional data which are relevant. If we do not possess, never have possessed, and conceptually never will possess an ensemble of macroeconomic worlds, then the entire concept of the definition of relevant distribution functions is questionable. It can be logically argued that the distribution function cannot be defined if all the macroinformation which can exist is only a finite part (the past and the present) of a single realization. Since a universe of such realizations must at least conceptually exist for this theory to be germane, the application of the mathematical theory of stochastic processes to macroeconomic phenomena is therefore questionable, if not in principle invalid.

Paul Davidson

To understand real world ”non-routine” decisions and unforeseeable changes in behaviour, ergodic probability distributions are of no avail. In a world full of genuine uncertainty — where real historical time rules the roost — the probabilities that ruled the past are not necessarily those that will rule the future.

When we cannot accept that the observations, along the time-series available to us, are independent … we have, in strict logic, no more than one observation, all of the separate items having to be taken together. For the analysis of that the probability calculus is useless; it does not apply … I am bold enough to conclude, from these considerations that the usefulness of ‘statistical’ or ‘stochastic’ methods in economics is a good deal less than is now conventionally supposed … We should always ask ourselves, before we apply them, whether they are appropriate to the problem in hand. Very often they are not … The probability calculus is no excuse for forgetfulness.

John Hicks, Causality in Economics, 1979:121

To simply assume that economic processes are ergodic — and a fortiori in any relevant sense timeless — is not a sensible way of dealing with the kind of genuine uncertainty that permeates open systems such as economies.
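The point can be illustrated with a standard toy example (not Davidson’s own): a multiplicative gamble whose ensemble average grows while almost every individual history shrinks. The time average and the ensemble average part company, so the process is non-ergodic and the ”average over many worlds” tells you nothing about the one history you actually live through:

```python
import numpy as np

rng = np.random.default_rng(1)
paths, steps = 2000, 500

# Each period wealth is multiplied by 1.5 or 0.6 with equal probability.
factors = rng.choice([1.5, 0.6], size=(paths, steps))

# Ensemble average of the one-period factor: 0.5*1.5 + 0.5*0.6 = 1.05 > 1.
ensemble_growth = float(factors.mean())

# Time-average (typical) per-period factor: sqrt(1.5*0.6) ~ 0.95 < 1.
time_avg_growth = float(np.exp(np.log(factors).mean()))

# Share of individual histories that end up below their starting wealth.
frac_ruined = float(np.mean(factors.prod(axis=1) < 1.0))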

The Swedish school catastrophe

6 May, 2015 at 09:16 | Posted in Economics | Comments Off on The Swedish school catastrophe

Aftonbladet

Why minimum wage has no discernible effect on employment

5 May, 2015 at 17:11 | Posted in Economics | 3 comments

Economists have conducted hundreds of studies of the employment impact of the minimum wage. Summarizing those studies is a daunting task, but two recent meta-studies analyzing the research conducted since the early 1990s conclude that the minimum wage has little or no discernible effect on the employment prospects of low-wage workers.

The most likely reason for this outcome is that the cost shock of the minimum wage is small relative to most firms’ overall costs and only modest relative to the wages paid to low-wage workers. In the traditional discussion of the minimum wage, economists have focused on how these costs affect employment outcomes, but employers have many other channels of adjustment. Employers can reduce hours, non-wage benefits, or training. Employers can also shift the composition toward higher-skilled workers, cut pay to more highly paid workers, take action to increase worker productivity (from reorganizing production to increasing training), increase prices to consumers, or simply accept a smaller profit margin. Workers may also respond to the higher wage by working harder on the job. But probably the most important channel of adjustment is through reductions in labor turnover, which yield significant cost savings to employers.

John Schmitt/CEPR

Krugman lectures Taylor on his rule

5 May, 2015 at 16:50 | Posted in Economics | 2 comments

In fact … Taylor’s central claim about the alleged errors of monetary policy is bizarre. The Taylor rule was and is a clever heuristic for describing how central banks try to steer between unemployment and inflation, and perhaps a useful guide to how they ought to behave in normal times. But it says nothing at all about bubbles and financial crises; financial instability is impossible in the models usually used to justify the rule, and the rule wasn’t devised with such possibilities in mind. It makes no sense, then, to claim that following the rule just so happens to be exactly what we need to avoid crises. It slices! It dices! It prevents housing bubbles and stabilizes the financial system! No, I don’t think so.

Paul Krugman
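For reference, the rule at issue, in Taylor’s original 1993 calibration with a 2% neutral real rate and a 2% inflation target, can be written down in a few lines. As Krugman notes, nothing in it responds to credit growth, house prices or any other indicator of financial instability:

```python
def taylor_rule(inflation, output_gap,
                neutral_real_rate=2.0, inflation_target=2.0):
    """Taylor's (1993) rule: prescribed nominal policy rate, in per cent."""
    return (inflation + neutral_real_rate
            + 0.5 * (inflation - inflation_target)
            + 0.5 * output_gap)

# Economy at target with a closed output gap: the rule prescribes 4%.
baseline = taylor_rule(inflation=2.0, output_gap=0.0)

# A credit-fuelled housing bubble with stable inflation and output
# leaves the prescription completely unchanged.
bubble_years = taylor_rule(inflation=2.0, output_gap=0.0)
```

Two economies, one healthy and one with a bubble inflating, get an identical policy prescription, which is why the rule cannot carry the weight Taylor’s crisis-prevention claim puts on it.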

The limits of statistical inference

5 May, 2015 at 15:25 | Posted in Statistics & Econometrics | Comments Off on The limits of statistical inference

Causality in social sciences — and economics — can never solely be a question of statistical inference. Causality entails more than predictability, and really explaining social phenomena in depth requires theory. Analysis of variation — the foundation of all econometrics — can never in itself reveal how these variations are brought about. Only when we are able to tie actions, processes or structures to the statistical relations detected can we say that we are getting at relevant explanations of causation.

For more on these issues — see the chapter ”Capturing causality in economics and the limits of statistical inference” in my On the use and misuse of theories and models in economics.

