DeLong on the real mathiness-people

19 May, 2015 at 11:29 | Posted in Economics | Comments Off on DeLong on the real mathiness-people

Paul Romer asked why I did not endorse his following Krusell and Smith (2014) in characterizing Piketty, and Piketty and Zucman, as canonical examples of what Romer calls “mathiness”. Indeed, I think that it is instead Krusell and Smith (2014) that suffers from “mathiness” – people not in control of their models deploying algebra untethered to the real world in a manner that approaches gibberish.

I wrote about this last summer … This time, I replied to Paul Romer’s question with a Tweetstorm. Here it is, collected, with paragraphs added and redundancy deleted:

My objection to Krusell and Smith (2014) was that it seemed to me to suffer much more from what you call “mathiness” than does Piketty or Piketty and Zucman.

Recall that Krusell and Smith began by saying that they:

do not quite recognize … k/y = s/g …

But k/y=s/g is Harrod (1939) and Domar (1946). How can they fail to recognize it?

And then their calibration – n+g = .02, δ = .10 – not only fails to acknowledge Piketty’s estimates of the economy-wide depreciation rate as lying between .01 and .02, but leads to absolutely absurd results:

For a country with k/y = 4, δ = .10 means that depreciation is 40% of gross output.
For a country like Belle Époque France with k/y = 7, δ = .10 means that depreciation is 70% of gross output.

It seemed to me that Krusell and Smith had no control whatsoever over the calibration of their model.

Note that I am working from notes here, because http://aida.wss.yale.edu/smith/piketty1.pdf no longer points to Krusell and Smith (2014). It points, instead, to Krusell and Smith (2015), a revised version.

In the revised version, the calibration differs in three ways:

1. raising (n+g) from .02 to .03,

2. lowering δ from .10 to .05 (still more than twice Piketty’s historical estimates), and

3. changing the claim that as n+g -> 0, k/y increases “only very marginally” to “only modestly”.

(The right thing to do would be to take economy-wide δ=.02 and say that k/y increases “substantially”.)

If Krusell and Smith (2015) offers any reference to Piketty’s historical depreciation efforts, I missed it.

If it offers any explanation of why they decided to raise their calibration of n+g when they lowered their δ, I missed that too.

Piketty has flaws, but it does not seem to me that working in a net rather than a gross production function framework is one of them. And Krusell and Smith’s continued attempts to demonstrate otherwise seem to me to suffer from “mathiness” to a high degree …

Brad DeLong
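To make the arithmetic in the tweetstorm concrete, here is a minimal sketch (my own illustration, not code from any of the papers discussed; the only inputs are the numbers quoted above):

```python
# Illustrative check of the arithmetic in DeLong's tweetstorm (my own sketch, not
# code from DeLong, Krusell and Smith, or Piketty).  In a textbook Solow steady
# state k/y = s / (n + g + delta), and depreciation absorbs delta * (k/y) of
# gross output.

def depreciation_share(capital_output_ratio, delta):
    """Share of gross output absorbed by depreciation."""
    return delta * capital_output_ratio

for k_over_y, label in [(4, "k/y = 4"), (7, "Belle Epoque France, k/y = 7")]:
    for delta in (0.10, 0.02):
        share = depreciation_share(k_over_y, delta)
        print(f"{label}, delta = {delta:.2f}: depreciation = {share:.0%} of gross output")

# With delta = 0.10 the shares are 40% and 70% of gross output -- the numbers
# DeLong calls absurd.  With Piketty's economy-wide delta of roughly 0.02 they
# fall to 8% and 14%.
```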

How to interpret economic theory

18 May, 2015 at 19:41 | Posted in Economics | Comments Off on How to interpret economic theory

The issue of interpreting economic theory is, in my opinion, the most serious problem now facing economic theorists. The feeling among many of us can be summarized as follows. Economic theory should deal with the real world. It is not a branch of abstract mathematics even though it utilizes mathematical tools. Since it is about the real world, people expect the theory to prove useful in achieving practical goals. But economic theory has not delivered the goods. Predictions from economic theory are not nearly as accurate as those offered by the natural sciences, and the link between economic theory and practical problems … is tenuous at best. Economic theory lacks a consensus as to its purpose and interpretation. Again and again, we find ourselves asking the question “where does it lead?”

Ariel Rubinstein

Modelling consistency and real world non-coherence in mainstream economics

18 May, 2015 at 19:12 | Posted in Economics | Comments Off on Modelling consistency and real world non-coherence in mainstream economics

In those cases where economists do focus on questions of market or competitive equilibrium etc., the formulators of the models in question are often careful to stress that their theorising has little connection with the real world anyway and should not be used to draw conclusions about the latter, whether in terms of efficiency or for policy or whatever.

In truth in those cases where mainstream assumptions and categories are couched in terms of economic systems as a whole they are mainly designed to achieve consistency at the level of modelling rather than coherence with the world in which we live.

This concern for a notion of consistency in modelling practice is true for example of the recently fashionable rational expectations hypothesis, originally formulated by John Muth (1961), and widely employed by those that do focus on system level outcomes. The hypothesis proposes that predictions attributed to agents (being theorised about) are treated as being essentially the same as (consistent with) those generated by the economic model within which the same agents are theorised. As such the proposal is clearly no more than a technique for (consistency in) modelling, albeit a bizarre one. Significantly any assertion that the expectations held (and so model in which they are imposed) are essentially correct, is a step that is additional to assuming rational expectations.

It is a form of modelling consistency (albeit a different one) that underpins the notion of equilibrium itself. In modern mainstream economics the category equilibrium has nothing to do with the features of the real economy … Economic models often comprise not single, but sets of, equations, each of which is notoriously found to have little relation to what happens in the real world. One question that nevertheless keeps economists occupied with such unrealistic models is whether the equations formulated are mutually consistent in the sense that there ‘exists’ a vector of values of some variable, say one labelled ‘prices’, that is consistent with each and all the equations. Such a model ‘solution’ is precisely the meaning of equilibrium in this context. As such the notion is not at all a claim about the world but merely a (possible) property that a set of equations may or may not be found to possess … In short, when mainstream economists question whether an equilibrium ‘exists’ they merely enquire as to whether a set of equations has a solution.

Modern economics has become increasingly irrelevant to the understanding of the real world. Tony Lawson traces this irrelevance to the failure of economists to match their deductive-axiomatic methods with their subject.
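How literal this notion of ‘existence’ is can be shown with a deliberately toy sketch (my own, not Lawson’s; all coefficients are made up): three linear ‘excess demand’ equations in a price vector, where asking whether an equilibrium exists amounts to nothing more than asking whether the linear system has a solution.

```python
import numpy as np

# Toy illustration (mine, not Lawson's): a "model" of three markets whose excess
# demands are linear in the price vector p.  Asking whether an equilibrium
# "exists" is, in this setting, just asking whether A @ p = b has a solution.

A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])   # made-up coefficients
b = np.array([1.0, 0.5, 1.0])        # made-up constants

p_star = np.linalg.solve(A, b)       # the model 'solution' = the 'equilibrium' price vector
print("equilibrium prices:", p_star)
print("excess demands at p*:", A @ p_star - b)   # ~0 in every market, by construction

# Nothing here refers to any actual economy: 'existence of equilibrium' is a
# property of the equation system, which is exactly Lawson's point.
```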

It is — sad to say — a fact that within mainstream economics internal validity is everything and external validity nothing. Why anyone should be interested in those kinds of theories and models is beyond my imagination. As long as mainstream economists do not come up with export-licenses that take their theories and models into the real world in which we live, they really should not be surprised if people say that this is not science, but autism.

Studying mathematics and logic is interesting and fun. It sharpens the mind. In pure mathematics and logic we do not have to worry about external validity. But economics is not pure mathematics or logic. It’s about society. The real world. When it forgets that, economics is in dire straits.

Paul Romer on math masquerading as science

16 May, 2015 at 16:18 | Posted in Economics | 15 Comments

I have a new paper in the Papers and Proceedings Volume of the AER that is out in print and on the AER website …

The point of the paper is that if we want economics to be a science, we have to recognize that it is not ok for macroeconomists to hole up in separate camps, one that supports its version of the geocentric model of the solar system and another that supports the heliocentric model …

The usual way to protect a scientific discussion from the factionalism of academic politics is to exclude people who opt out of the norms of science. The challenge lies in knowing how to identify them.

From my paper:

“The style that I am calling mathiness lets academic politics masquerade as science. Like mathematical theory, mathiness uses a mixture of words and symbols, but instead of making tight links, it leaves ample room for slippage between statements in natural versus formal language and between statements with theoretical as opposed to empirical content.”

Persistent disagreement is a sign that some of the participants in a discussion are not committed to the norms of science. Mathiness is a symptom of this deeper problem, but one that is particularly damaging because it can generate a broad backlash against the genuine mathematical theory that it mimics. If the participants in a discussion are committed to science, mathematical theory can encourage a unique clarity and precision in both reasoning and communication. It would be a serious setback for our discipline if economists lose their commitment to careful mathematical reasoning …

The goal in starting this discussion is to ensure that economics is a science that makes progress toward truth. A necessary condition for making this kind of progress is a capacity for reaching consensus that is grounded in logic and evidence. Given how deeply entrenched positions seem to have become in macroeconomics, this discussion could be unpleasant. If animosity surfaces, it will be tempting to postpone this discussion. We should resist this temptation.

I know many of the people whose work I’m criticizing. I genuinely like them. It will be costly for many of us if disagreement spills over into animosity. But if it does, we can be confident that the bad feelings will pass and we should stay focused on the long run …

Science is the most important human accomplishment. An investment in science can offer a higher social rate of return than any other a person can make. It would be tragic if economists did not stay current on the periodic maintenance needed to protect our shared norms of science from infection by the norms of politics.

Paul Romer

One of those economists Romer knows and — rightfully — criticizes in his paper is Robert Lucas.

Lucas is, as we all know, a very “mathy” person, and Romer is not the first to notice that “mathiness” lets academic politics masquerade as science …

[Image: John Maynard Keynes: “Too large a proportion of recent ‘mathematical’ economics are mere concoctions, as imprecise as …”]

Added 20:00 GMT: Joshua Gans has a post up on Romer’s article well worth reading, not least because it highlights the nodal Romer-Lucas difference behind the “mathiness” issue.

In modern endogenous growth theory knowledge (ideas) is presented as the locomotive of growth. But as Allyn Young, Piero Sraffa and others had shown already in the 1920s, knowledge also has to do with increasing returns to scale, and is therefore not really compatible with neoclassical economics and its emphasis on constant returns to scale.

Increasing returns generated by non-rivalry between ideas is simply not compatible with pure competition and the simplistic invisible hand dogma. That is probably also the reason why so many neoclassical economists — like Robert Lucas — have been so reluctant to embrace the theory wholeheartedly.

Neoclassical economics has tried to save itself by more or less substituting human capital for knowledge/ideas. But knowledge or ideas should not be confused with human capital.

In one way one might say that increasing returns is the darkness of the neoclassical heart. And this is something most mainstream neoclassical economists don’t really want to talk about. They prefer to look the other way and pretend that increasing returns can be seamlessly incorporated into the received paradigm. Romer’s view of human capital as a good example of non-“mathiness” notwithstanding, yours truly is of the view that talking about “human capital” — or, as Lucas puts it, “knowledge ‘embodied’ in individual people in the short run” — rather than knowledge/ideas, is preferred only because it is more easily digested within the received paradigm.
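Why increasing returns play such havoc with marginal productivity theory can be shown in a few lines. The sketch below is my own illustration (the production function and all numbers are made up): with constant returns, paying every factor its marginal product exactly exhausts output (Euler’s theorem); with increasing returns, the factor bill exceeds output, so competitive marginal-product pricing is simply not feasible.

```python
# Illustrative sketch (mine, not Romer's or Lucas's): with Y = A * K**a * L**b,
# paying each factor its marginal product costs (a + b) * Y in total, so the
# factor bill exhausts output only under constant returns (a + b = 1).

def factor_bill_over_output(a, b, A=1.0, K=100.0, L=100.0):
    Y = A * K**a * L**b
    mpk = a * Y / K          # marginal product of capital
    mpl = b * Y / L          # marginal product of labour
    return (mpk * K + mpl * L) / Y

print("constant returns   (a+b = 1.0):", factor_bill_over_output(0.3, 0.7))  # 1.0: output exactly exhausted
print("increasing returns (a+b = 1.2):", factor_bill_over_output(0.3, 0.9))  # 1.2: payments exceed output by 20%
```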

Added 20:55 GMT: Romer has an even newer post up, further illustrating Lucasian obfuscations.

Added May 17: Brad DeLong has a comment up on Romer’s article, arguing that Lucas et consortes don’t approve of imperfect competition models because they are “intellectually dangerous”, since they might open the door to government intervention and “interventionist planning.” I agree with Brad, but as I’ve argued above, what these guys fear even more is taking on board increasing returns, since that would not only mean that policy preferences would have to change, but would actually play havoc with one of the very foundations of mainstream neoclassicism — marginal productivity theory.

Added May 18: Sandwichman has a great post up on this issue, with pertinent quotations from one of my intellectual heroes, Nicholas Georgescu-Roegen.

Added May 19: David Ruccio has some interesting thoughts on Romer and the fetishism of mathematics here.

Piketty and the non-applicability of neoclassical economics

16 May, 2015 at 10:55 | Posted in Economics | 8 Comments

In yours truly’s On the use and misuse of theories and models in economics, the author of Capital in the Twenty-First Century is criticized for not being prepared to take the full consequences of marginal productivity theory — and the alleged close connection between productivity and remuneration postulated in mainstream income distribution theory — being disconfirmed over and over again, both by history and, as shown already by Sraffa in the 1920s and by the Cambridge capital controversy in the 1960s, from a theoretical point of view:

Having read Piketty (2014, p. 332), no one ought to doubt that the idea that capitalism is an expression of impartial market forces of supply and demand bears little resemblance to actual reality:

“It is only reasonable to assume that people in a position to set their own salaries have a natural incentive to treat themselves generously, or at the very least to be rather optimistic in gauging their marginal productivity.”

But although I agree with Piketty on the obvious – at least to anyone not equipped with ideological blinders – insufficiency and limitation of neoclassical marginal productivity theory to explain the growth of top 1 % incomes, I strongly disagree with his rather unwarranted belief that when it comes to more ordinary wealth and income, the marginal productivity theory somehow should still be considered applicable. It is not.

Wealth and income distribution, both individual and functional, in a market society is to an overwhelmingly high degree influenced by institutionalized political and economic norms and power relations, things that have relatively little to do with marginal productivity in complete and profit-maximizing competitive market models – not to mention how extremely difficult, if not outright impossible, it is to empirically disentangle and measure different individuals’ contributions in the typical teamwork production that characterizes modern societies; or, especially when it comes to “capital,” what it is supposed to mean and how to measure it. Remunerations, a fortiori, do not necessarily correspond to any marginal product of different factors of production – or to “compensating differentials” due to non-monetary characteristics of different jobs, natural ability, effort or chance.

It’s pleasing to see that Piketty has taken this critique to heart. In an interview in Potemkin Review he admits that marginal productivity explanations of income are wanting, not only for those at the very top, but more generally:

Piketty: I do not believe in the basic neoclassical model. But I think it is a language that is important to use in order to respond to those who believe that if the world worked that way everything would be fine. And one of the messages of my book is, first, it does not work that way, and second, even if it did, things would still be almost as bad …

All I am saying to neoclassical economists is this: if you really want to stick to your standard model, very small departures from it like an elasticity of substitution slightly above 1 will be enough to generate what we observe in recent decades. But there are many other, and in my view more plausible, ways to explain it. You should be aware of the fact that even with your perfect competition and simplified one good assumption, things can still go wrong, in the sense that the capital share can rise, etc.

PR: Are you saying that notwithstanding your rhetorical strategy to communicate with neoclassical economists on a ground where they feel comfortable, in your views it is not just that you reject marginal productivity explanations of income for those at the very top but more generally as well?

Piketty: Yes, I think bargaining power is very important for the determination of the relative shares of capital and labor in national income. It is perfectly clear to me that the decline of labor unions, globalization, and the possibility of international investors to put different countries in competition with one another–not only different groups of workers, but even different countries–have contributed to the rise in the capital share.
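Piketty’s remark about ‘an elasticity of substitution slightly above 1’ is easy to illustrate. The sketch below is my own (the CES parameters are invented, not Piketty’s estimates): with σ = 1, the Cobb-Douglas case, the capital share stays flat as the capital/income ratio rises, while with σ = 1.3 it drifts upward.

```python
# Minimal sketch of Piketty's elasticity point (my illustration, not his code):
# with CES production, the capital share is a * (K/Y)**((sigma - 1) / sigma),
# so a rising capital/income ratio lifts the capital share whenever sigma > 1.

def capital_share(k_over_y, sigma, a=0.21):   # 'a' is a made-up distribution parameter
    return a * k_over_y ** ((sigma - 1.0) / sigma)

for sigma in (1.0, 1.3):
    low, high = capital_share(4.0, sigma), capital_share(6.0, sigma)
    print(f"sigma = {sigma}: capital share goes from {low:.1%} at K/Y = 4 to {high:.1%} at K/Y = 6")
```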

‘The greatest mathematical discovery of all time’

15 May, 2015 at 13:59 | Posted in Economics | 3 Comments

 

Simple. Beautiful. Einstein was right.

DSGE quagmire

15 May, 2015 at 08:52 | Posted in Economics | 2 Comments

Given that unions are weaker than they have been for a century or so, and that severe cuts to social welfare benefits have been imposed in most countries, the traditional rightwing explanation that labour market inflexibility [arising from minimum wage laws or unions] is the cause of unemployment appeals only to ideologues (who are, unfortunately, plentiful) …

After the Global Financial Crisis, it became clear that the concessions made by the New Keynesians were ill-advised in both theoretical and political terms. In theoretical terms, the DSGE models developed during the spurious “Great Moderation” were entirely inconsistent with the experience of the New Depression. The problem was not just a failure of prediction: the models simply did not allow for depressions that permanently shift the economy from its previous long term growth path. In political terms, it turned out that the seeming convergence with the New Classical school was an illusion. Faced with the need to respond to the New Depression, most of the New Classical school retreated to pre-Keynesian positions based on versions of Say’s Law (supply creates its own demand) that Say himself would have rejected, and advocated austerity policies in the face of overwhelming evidence that they were not working …

Relative to DSGE, the key point is that there is no unique long-run equilibrium growth path, determined by technology and preferences, to which the economy is bound to return. In particular, the loss of productive capacity, skills and so on in the current depression is, for all practical purposes, permanent. But if there is no exogenously determined (though maybe still stochastic) growth path for the economy, economic agents (workers and firms) can’t make the kind of long-term plans required of them in standard life-cycle models. They have to rely on heuristics and rules of thumb … This is, in my view, the most important point made by post-Keynesians and ignored by Old Old Keynesians.

John Quiggin

Debating modern economics, yours truly often gets the feeling that mainstream economists, when facing anomalies, think that there is always some further “technical fix” that will get them out of the quagmire. But are these elaborations and amendments of something basically wrong really going to solve the problem? I doubt it. Acting like the baker’s apprentice who, having forgotten to add yeast to the dough, throws the yeast into the oven afterwards, simply isn’t enough.

When criticizing the basic workhorse DSGE model for its inability to explain involuntary unemployment, some DSGE defenders maintain that later elaborations — e.g. newer search models — manage to do just that. I strongly disagree. One of the more conspicuous problems with those “solutions” is that they — as e.g. Pissarides’ “Loss of Skill during Unemployment and the Persistence of Unemployment Shocks” (QJE, 1992) — are as a rule constructed without seriously trying to warrant that the model-immanent assumptions and results are applicable in the real world. External validity is more or less a non-existent problematique, sacrificed on the altar of model derivations. This is not by chance. For how could one even imagine empirically testing assumptions such as Pissarides’ “model 1” assumptions of reality being adequately represented by “two overlapping generations of fixed size”, “wages determined by Nash bargaining”, “actors maximizing expected utility”, “endogenous job openings” and “job matching describable by a probability distribution”, without coming to the conclusion that this is — in terms of realism and relevance — nothing but nonsense on stilts?
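To make concrete what kind of assumptions are at stake, here is a deliberately toy sketch (my own, not Pissarides’ model; every parameter is invented) of the two workhorse ingredients the critique targets: a Cobb-Douglas matching function and a Nash-bargained wage.

```python
# Toy sketch of the two ingredients criticized above (my illustration, not
# Pissarides' model): a Cobb-Douglas matching function and a Nash-bargained wage.

def matches(u, v, A=0.6, eta=0.5):
    """Assumed matching technology: m = A * u**eta * v**(1 - eta)."""
    return A * u**eta * v**(1 - eta)

def nash_wage(productivity, outside_option, beta=0.5):
    """Simplest Nash-bargaining split: the worker gets share beta of the surplus."""
    return outside_option + beta * (productivity - outside_option)

u, v = 0.08, 0.05                       # unemployment and vacancy rates (made-up numbers)
m = matches(u, v)
print(f"job-finding rate     f = m/u = {m / u:.2f} per period")
print(f"vacancy-filling rate q = m/v = {m / v:.2f} per period")
print(f"Nash wage with p = 1.0, z = 0.4: {nash_wage(1.0, 0.4):.2f}")

# Every number above is a free parameter; the question raised in the text is how
# assumptions like these could ever be confronted with real-world data.
```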

The whole strategy reminds me not a little of the following tale:

Time after time you hear people speaking in baffled terms about mathematical models that somehow didn’t warn us in time, that were too complicated to understand, and so on. If you have somehow missed such public displays of throwing the model (and quants) under the bus, stay tuned below for examples.

But this is far from the case – most of the really enormous failures of models are explained by people lying …

A common response to these problems is to call for those models to be revamped, to add features that will cover previously unforeseen issues, and generally speaking, to make them more complex.

For a person like myself, who gets paid to “fix the model,” it’s tempting to do just that, to assume the role of the hero who is going to set everything right with a few brilliant ideas and some excellent training data.

Unfortunately, reality is staring me in the face, and it’s telling me that we don’t need more complicated models.

If I go to the trouble of fixing up a model, say by adding counterparty risk considerations, then I’m implicitly assuming the problem with the existing models is that they’re being used honestly but aren’t mathematically up to the task.

If we replace okay models with more complicated models, as many people are suggesting we do, without first addressing the lying problem, it will only allow people to lie even more. This is because the complexity of a model itself is an obstacle to understanding its results, and more complex models allow more manipulation …

I used to work at Riskmetrics, where I saw first-hand how people lie with risk models. But that’s not the only thing I worked on. I also helped out building an analytical wealth management product. This software was sold to banks, and was used by professional “wealth managers” to help people (usually rich people, but not mega-rich people) plan for retirement.

We had a bunch of bells and whistles in the software to impress the clients – Monte Carlo simulations, fancy optimization tools, and more. But in the end, the banks and their wealth managers put in their own market assumptions when they used it. Specifically, they put in the forecast market growth for stocks, bonds, alternative investing, etc., as well as the assumed volatility of those categories and indeed the entire covariance matrix representing how correlated the market constituents are to each other.

The result is this: no matter how honest I would try to be with my modeling, I had no way of preventing the model from being misused and misleading to the clients. And it was indeed misused: wealth managers put in absolutely ridiculous assumptions of fantastic returns with vanishingly small risk.

Cathy O’Neil
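O’Neil’s point about user-supplied market assumptions is easy to reproduce. The sketch below is my own toy code, not the software she describes, and all the numbers are invented: the same Monte Carlo retirement projection is run twice, once with sober return and volatility assumptions and once with the kind of ‘fantastic returns with vanishingly small risk’ she mentions.

```python
import numpy as np

# Toy version of O'Neil's point (my own code, not the product she worked on): the
# same Monte Carlo engine gives wildly different answers depending on the market
# assumptions the wealth manager types in.

def survival_probability(mean_return, volatility, years=30, start=500_000,
                         annual_draw=40_000, n_sims=20_000, seed=1):
    rng = np.random.default_rng(seed)
    wealth = np.full(n_sims, float(start))
    for _ in range(years):
        returns = rng.normal(mean_return, volatility, n_sims)
        wealth = np.maximum(wealth * (1 + returns) - annual_draw, 0.0)
    return (wealth > 0).mean()   # share of simulated paths that never run out of money

sober = survival_probability(mean_return=0.04, volatility=0.15)
rosy  = survival_probability(mean_return=0.09, volatility=0.05)
print(f"sober assumptions (4% return, 15% vol): {sober:.0%} of paths survive 30 years")
print(f"rosy assumptions  (9% return,  5% vol): {rosy:.0%} of paths survive 30 years")
```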

Unbiased econometric estimates — forget about it!

14 May, 2015 at 09:34 | Posted in Economics | 4 Comments

Following our recent post on econometricians’ traditional privileging of unbiased estimates, there were a bunch of comments echoing the challenge of teaching this topic, as students as well as practitioners often seem to want the comfort of an absolute standard such as best linear unbiased estimate or whatever. Commenters also discussed the tradeoff between bias and variance, and the idea that unbiased estimates can overfit the data.

I agree with all these things but I just wanted to raise one more point: In realistic settings, unbiased estimates simply don’t exist. In the real world we have nonrandom samples, measurement error, nonadditivity, nonlinearity, etc etc etc.

So forget about it. We’re living in the real world …


It’s my impression that many practitioners in applied econometrics and statistics think of their estimation choice kinda like this:

1. The unbiased estimate. It’s the safe choice, maybe a bit boring and maybe not the most efficient use of the data, but you can trust it and it gets the job done.

2. A biased estimate. Something flashy, maybe Bayesian, maybe not, it might do better but it’s risky. In using the biased estimate, you’re stepping off base—the more the bias, the larger your lead—and you might well get picked off …

If you take the choice above and combine it with the unofficial rule that statistical significance is taken as proof of correctness (in econ, this would also require demonstrating that the result holds under some alternative model specifications, but “p less than .05″ is still key), then you get the following decision rule:

A. Go with the safe, unbiased estimate. If it’s statistically significant, run some robustness checks and, if the result doesn’t go away, stop.

B. If you don’t succeed with A, you can try something fancier. But . . . if you do that, everyone will know that you tried plan A and it didn’t work, so people won’t trust your finding.

So, in a sort of Gresham’s Law, all that remains is the unbiased estimate. But, hey, it’s safe, conservative, etc, right?

And that’s where the present post comes in. My point is that the unbiased estimate does not exist! There is no safe harbor. Just as we can never get our personal risks in life down to zero … there is no such thing as unbiasedness. And it’s a good thing, too: recognition of this point frees us to do better things with our data right away.

Andrew Gelman
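A minimal simulation of the bias-variance point Gelman is making (my own sketch with made-up numbers, not Gelman’s code): a deliberately biased shrinkage estimator can beat the unbiased sample mean in mean squared error, which is one concrete sense of ‘doing better things with our data’.

```python
import numpy as np

# Toy illustration of the bias/variance trade-off discussed above (my sketch, not
# Gelman's code): shrinking the sample mean toward zero introduces bias but can
# lower mean squared error when the data are noisy.

rng = np.random.default_rng(0)
true_mu, sigma, n, n_sims = 0.5, 5.0, 10, 50_000

samples  = rng.normal(true_mu, sigma, size=(n_sims, n))
unbiased = samples.mean(axis=1)          # the "safe" unbiased estimate
shrunk   = 0.5 * unbiased                # a crude, deliberately biased shrinkage estimate

print(f"MSE of unbiased sample mean: {np.mean((unbiased - true_mu) ** 2):.3f}")
print(f"MSE of shrunk estimate:      {np.mean((shrunk   - true_mu) ** 2):.3f}")

# With these (invented) numbers the biased estimator wins: its squared bias is
# more than paid for by its lower variance.
```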

Chicago Follies (XII)

13 May, 2015 at 17:33 | Posted in Economics | Comments Off on Chicago Follies (XII)

At the University of Chicago, where I went to graduate school, they sell a t-shirt that says “that’s all well and good in practice, but how does it work in theory?” That ode to nerdiness in the ivory tower captures the state of knowledge about rising wealth inequality, both its causes and its consequences. Economic models of the distribution of wealth tend to assume that it is “stationary.” In other words, some people become wealthier and others become poorer, but as a whole it stays pretty much the same. Yet both of those ideas are empirically wrong: Individual mobility within the wealth distribution is low, and the distribution has become much more unequal over the past several decades …

Economists typically highlight individual or inter-generational mobility within the wealth distribution as both a reason not to care that the distribution itself is unequal and as an argument that having wealthy parents (or not) doesn’t matter that much for children’s outcomes. In fact, an influential model by the late Gary Becker and Nigel Tomes, both of the University of Chicago, predicts that accumulated wealth reduces income inequality because parents who love all their children equally allocate their bequests to compensate for their stupid children’s likely lower earnings potential in the labor market. According to those two authors, families redistribute from the smart to the dumb, and therefore, by implication, governments don’t have to redistribute from the rich to the poor.

But as Thomas Piketty and numerous other scholars point out, those reasons not to care about wealth inequality are not empirically valid. There’s scant evidence that parents leave larger inheritances to stupid children. Nor is there much evidence that native ability is the major determinant of earnings in the labor market or other life outcomes. The weakness of these explanations gets to a much larger question, one of the most important (and unanswered) ones in economics: Why are some people rich while others are poor? What economists are just finding out (while others have known for a while now) is, essentially, “because their parents were.”

Marshall Steinbaum

Minimum wage reality check

13 May, 2015 at 16:57 | Posted in Economics | Comments Off on Minimum wage reality check

[Figure: minimum wage]

In search of causality

13 May, 2015 at 16:33 | Posted in Economics | Comments Off on In search of causality

[Dilbert cartoon]

One of the few statisticians I have on my blogroll is Andrew Gelman. Although not sharing his Bayesian leanings, yours truly finds his open-minded, thought-provoking and non-dogmatic statistical thinking highly recommendable. The plaidoyer below for “reverse causal questioning” is typical Gelmanian:

When statistical and econometric methodologists write about causal inference, they generally focus on forward causal questions. We are taught to answer questions of the type “What if?”, rather than “Why?” Following the work by Rubin (1977), causal questions are typically framed in terms of manipulations: if x were changed by one unit, how much would y be expected to change? But reverse causal questions are important too … In many ways, it is the reverse causal questions that motivate the research, including experiments and observational studies, that we use to answer the forward questions …

Reverse causal reasoning is different; it involves asking questions and searching for new variables that might not yet even be in our model. We can frame reverse causal questions as model checking. It goes like this: what we see is some pattern in the world that needs an explanation. What does it mean to “need an explanation”? It means that existing explanations — the existing model of the phenomenon — do not do the job …

By formalizing reverse causal reasoning within the process of data analysis, we hope to make a step toward connecting our statistical reasoning to the ways that we naturally think and talk about causality. This is consistent with views such as Cartwright (2007) that causal inference in reality is more complex than is captured in any theory of inference … What we are really suggesting is a way of talking about reverse causal questions in a way that is complementary to, rather than outside of, the mainstream formalisms of statistics and econometrics.
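Gelman’s idea of framing reverse causal questions as model checking can be given a bare-bones form. The sketch below is my own (the data and the test statistic are invented, not Gelman’s): simulate replicated data from the fitted model and flag an observed pattern as ‘needing an explanation’ when the model rarely reproduces it.

```python
import numpy as np

# Bare-bones model check in the spirit of the passage above (my sketch, not
# Gelman's code): does a fitted normal model reproduce an observed pattern?

rng = np.random.default_rng(42)
observed = rng.normal(0, 1, 200) + (rng.random(200) < 0.05) * 8.0   # data with a few large outliers

mu_hat, sigma_hat = observed.mean(), observed.std()                 # the "existing model": a fitted normal

def test_stat(x):
    """The pattern we care about: the size of the largest observation."""
    return np.max(np.abs(x))

replicated = rng.normal(mu_hat, sigma_hat, size=(5000, observed.size))
p_value = np.mean(np.max(np.abs(replicated), axis=1) >= test_stat(observed))

print(f"observed max |x| = {test_stat(observed):.2f}, predictive p-value = {p_value:.4f}")

# A small p-value means the fitted model rarely produces anything this extreme:
# the pattern 'needs an explanation', i.e. a new variable or a richer model.
```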

In a time when scientific relativism is expanding, it is important to insist that science should not be reduced to a purely discursive level. We have to maintain the Enlightenment tradition of thinking of reality as principally independent of our views of it, and of the main task of science as studying the structure of this reality. Perhaps the most important contribution a researcher can make is to reveal what this reality that is the object of science actually looks like.

Science is made possible by the fact that there are structures that are durable and are independent of our knowledge or beliefs about them. There exists a reality beyond our theories and concepts of it. It is this independent reality that our theories in some way deal with. Contrary to positivism, I would as a critical realist argue that the main task of science is not to detect event-regularities between observed facts. Rather, that task must be conceived as identifying the underlying structure and forces that produce the observed events.


In Gelman’s essay there is no explicit argument for abduction — inference to the best explanation — but I would still argue that it is, de facto, a very strong argument for why scientific realism and inference to the best explanation are the best alternatives for explaining what’s going on in the world we live in. The focus on causality, model checking, anomalies and context-dependence — although here expressed in statistical terms — is as close to abductive reasoning as we get in statistics and econometrics today.

Dangers of model simplifications

12 May, 2015 at 23:41 | Posted in Economics | 1 Comment

We forget – or willfully ignore – that our models are simplifications of the world …

One of the pervasive risks that we face in the information age … is that even if the amount of knowledge in the world is increasing, the gap between what we know and what we think we know may be widening. This syndrome is often associated with very precise-seeming predictions that are not at all accurate … This is like claiming you are a good shot because your bullets always end up in about the same place — even though they are nowhere near the target …

Financial crises – and most other failures of prediction – stem from this false sense of confidence. Precise forecasts masquerade as accurate ones, and some of us get fooled and double-down our bets.

The paradox of skill

12 May, 2015 at 09:05 | Posted in Economics | Comments Off on The paradox of skill

 

Lyapunov functions and systems attaining equilibria

11 May, 2015 at 20:31 | Posted in Statistics & Econometrics | 1 Comment

 

Hypothesis testing and the importance of checking distribution assumptions

10 May, 2015 at 16:59 | Posted in Statistics & Econometrics | Comments Off on Hypothesis testing and the importance of checking distribution assumptions

 

