## Why we are better at forecasting drizzle than financial crises

18 August, 2013 at 16:08 | Posted in Economics, Statistics & Econometrics | 3 Comments

The analysis of probability originates in games of chance, in which the rules are sufficiently simple and well-defined that the game can be repeated in more or less identical form over and over again. If you toss a fair coin repeatedly, it will come up heads about 50 per cent of the time. If you can be bothered, you can verify that fact empirically. Perhaps more strikingly, the theory of probability tells you that if you repeatedly toss that coin 50 times you will get 23 or more heads about 67 per cent of the time, and you can verify that prediction empirically too.

It is a stretch, but perhaps not a very long stretch, to extend this analysis of frequency to single events, and to say that the probability that England will win the toss in the fifth Ashes cricket test against Australia is 50 per cent. And that the probability the home side will win the toss at least 23 times in a decade of five-test match series is 67 per cent. Tosses to start sporting contests are repeated at similar events, and theory and experience validate the probabilistic approach.
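The 50-toss figure is easy to check against the exact binomial distribution. A quick sketch (all names here are my own, not Kay's): the probability of at least 24 heads — strictly more than 23 — comes out at about 66 per cent, while "23 or more" is nearer 76 per cent, so the quoted 67 per cent appears to correspond to strictly more than 23 heads.

```python
from math import comb

def tail(n, k):
    """Exact probability of at least k heads in n tosses of a fair coin."""
    return sum(comb(n, j) for j in range(k, n + 1)) / 2 ** n

print(f"P(X >= 23) = {tail(50, 23):.3f}")  # 0.760
print(f"P(X >= 24) = {tail(50, 24):.3f}")  # 0.664 -- the 'about 67 per cent'
```

This is exactly the kind of claim Kay says you "can verify empirically": run enough batches of 50 tosses and the observed frequency converges on the exact tail probability.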

Perhaps one could stretch the approach further and apply it to the probability of rain. The Met Office might form a view of tomorrow’s weather. Records might show that it rains on 10 per cent of similar days. The difficulty is defining exactly what is meant by “a similar day”. “A typical April day in England” is a rather loose concept.

However, this is not, in fact, what weather forecasters do. Their analysis is based on elaborate computer models and they tweak the assumptions of these models to generate many different predictions. What they mean when they say the probability of rain is 10 per cent is that rain occurs in 10 per cent of these simulations. The validity of that prediction depends on two rather implausible assumptions; that the model correctly describes the physical world, and that the range of assumptions made by the modellers properly reflects the full range of possible assumptions.
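The ensemble method Kay describes can be sketched in a few lines. Everything below is a toy illustration, not a real weather model: a crude "humidity" variable is evolved with noise from many perturbed initial states, and the forecast probability of rain is simply the fraction of runs in which rain occurs — the model structure, thresholds and noise levels are all hypothetical.

```python
import random

def toy_model(humidity, steps=24, seed=None):
    """Evolve a toy humidity variable for `steps` hours; return True if it rains."""
    rng = random.Random(seed)
    for _ in range(steps):
        humidity += rng.gauss(0, 0.02)       # model noise (hypothetical scale)
        humidity = min(max(humidity, 0), 1)  # keep within [0, 1]
    return humidity > 0.9                    # "rain" threshold (hypothetical)

def ensemble_rain_probability(humidity0, n_runs=1000):
    """Fraction of perturbed-model runs in which rain occurs."""
    rng = random.Random(42)
    hits = 0
    for i in range(n_runs):
        perturbed = humidity0 + rng.gauss(0, 0.05)  # perturbed initial state
        hits += toy_model(perturbed, seed=i)
    return hits / n_runs

print(ensemble_rain_probability(0.8))
```

Kay's two "implausible assumptions" map directly onto the sketch: that `toy_model` describes the real dynamics, and that the perturbations in `ensemble_rain_probability` span the real range of initial-condition uncertainty.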

Despite these difficulties – and popular derision comparable to that experienced by economic forecasters – weather forecasters do rather well …

In contrast, severe recessions, property bubbles and bank failures are relatively infrequent, and calibration by economists has come to mean tweaking models to better explain the past rather than revising them to better predict the future – a particularly dangerous methodology when there are many reasons to think that the underlying structure of the economy is in a state of constant flux.

The further one moves from mechanisms that are well understood and events that are frequently repeated, the less appropriate is the use of probabilistic language. What does it mean to say: “I am 90 per cent certain that the extinction of the dinosaurs was caused by an object hitting the earth at Yucatán?” Not, I think, that on 90 per cent of occasions on which the dinosaurs were wiped out, the cause was an asteroid landing in what is now Mexico. There is a difference – often elided – between a probability and a degree of confidence in a forecast. It is one reason why we are better at avoiding drizzle than financial crises.

John Kay

To me this wonderful little article shows how important it is in social sciences — and economics in particular — to incorporate Keynes’s far-reaching and incisive analysis of induction and evidential weight in his seminal A Treatise on Probability (1921).

According to Keynes we live in a world permeated by unmeasurable uncertainty – not quantifiable stochastic risk – which often forces us to make decisions based on anything but “rational expectations.” Keynes instead holds that we base our expectations on the confidence or “weight” we put on different events and alternatives. To Keynes expectations are a question of weighing probabilities by “degrees of belief,” beliefs that often have precious little to do with the kind of stochastic probabilistic calculations made by the rational agents modelled in “modern” social sciences. And often we “simply do not know.”

How strange that social scientists and mainstream economists as a rule do not even touch upon these aspects of scientific methodology, which seem so fundamental and important for anyone trying to understand how we learn and orient ourselves in an uncertain world. An educated guess as to why this is so would be that Keynes’s concepts cannot be squeezed into a single calculable numerical “probability.” In the quest for measurable quantities, one turns a blind eye to qualities and looks the other way.

1. I think there are two very simple reasons we can predict weather better than economic events; mostly it’s because economics doesn’t contain calculus and partly because economic data is largely privately owned.

• “…economic data is largely privately owned.”

You can buy such private data if you so wish. Banks do so all the time. And they still crash because of their exposure to toxic assets.

2. Our access to data on the internal activities of industry varies. In the US, corporations are given amendment rights making them largely opaque. There are also all kinds of IP rights and non-disclosure systems in place to keep everything private, for fairly obvious reasons. But my point is that climate data is literally open source. Anyone can gather it. The only real constraint is land ownership, which is hardly a problem. This is, in a sense, reflective of the shortcomings of economics. Economists don’t see any need to gather fine-grained data because they think only in terms of nonsense aggregation fairy-tales. As a result the economy, as a phenomenon, is almost completely a secret.

I’m not sure we can even measure GDP properly. Can you imagine a climatologist not being able to measure air pressure? Or not being able to even agree what pressure is?
