Forecasting alchemy

5 April, 2014 at 12:08 | Posted in Economics, Statistics & Econometrics | 1 Comment

In New York State, Section 899 of the Code of Criminal Procedure provides that persons “Pretending to Forecast the Future” shall be considered disorderly under subdivision 3, Section 901 of the Code and liable to a fine of $250 and/or six months in prison.

Although the law does not apply to “ecclesiastical bodies acting in good faith and without fees,” I’m not sure where that leaves econometricians and other forecasters …

I came to think about this nineteenth-century New York law the other day when interviewed by a public radio journalist working on a series on Great Economic Thinkers. We were discussing the monumental failures of the predictions-and-forecasts business. But, the journalist asked, if these cocksure economists with their “rigorous” and “precise” mathematical-statistical-econometric models are so wrong again and again, why do they persist in wasting time on it?

In a discussion on uncertainty and the hopelessness of accurately modeling what will happen in the real world – in M. Szenberg’s Eminent Economists: Their Life Philosophies – Nobel laureate Kenneth Arrow comes up with what is probably the right answer:

It is my view that most individuals underestimate the uncertainty of the world. This is almost as true of economists and other specialists as it is of the lay public. To me our knowledge of the way things work, in society or in nature, comes trailing clouds of vagueness … Experience during World War II as a weather forecaster added the news that the natural world was also unpredictable. An incident illustrates both uncertainty and the unwillingness to entertain it. Some of my colleagues had the responsibility of preparing long-range weather forecasts, i.e., for the following month. The statisticians among us subjected these forecasts to verification and found they differed in no way from chance. The forecasters themselves were convinced and requested that the forecasts be discontinued. The reply read approximately like this: ‘The Commanding General is well aware that the forecasts are no good. However, he needs them for planning purposes.’

Brownian motion simulation in Excel (student stuff)

20 March, 2014 at 16:26 | Posted in Statistics & Econometrics | Leave a comment

 
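For readers who want to try the same construction outside Excel, here is a minimal sketch in Python (all parameters are illustrative, not taken from the clip): a standard Brownian path is built by cumulatively summing independent normal increments whose variance equals the time step.

```python
import numpy as np

# Simulate a standard Brownian motion on [0, T]
# (illustrative parameters, not from the video)
rng = np.random.default_rng(seed=42)
T, n = 1.0, 1000           # time horizon and number of steps
dt = T / n                 # step size
# Increments are i.i.d. N(0, dt); the path is their cumulative sum
increments = rng.normal(loc=0.0, scale=np.sqrt(dt), size=n)
path = np.concatenate(([0.0], np.cumsum(increments)))
t = np.linspace(0.0, T, n + 1)

print(path[:5])  # first few values of the simulated path
```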

Statistical power analysis (student stuff)

20 March, 2014 at 15:47 | Posted in Statistics & Econometrics | Leave a comment

 
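The trade-offs the video explores can also be computed directly. A short sketch using statsmodels’ power routines (all numbers illustrative) solves for the sample size needed to detect a given effect, and conversely for the power attained at a given sample size.

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Required sample size per group for a two-sample t-test
# (illustrative inputs: medium effect, 5% alpha, 80% power)
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"n per group: {n_per_group:.1f}")

# Conversely: power achieved with 50 subjects per group
power = analysis.solve_power(effect_size=0.5, alpha=0.05, nobs1=50)
print(f"power at n=50: {power:.2f}")
```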

Simple logistic regression (student stuff)

18 March, 2014 at 14:58 | Posted in Statistics & Econometrics | Leave a comment

 

 
And in the video below (in Swedish) yours truly shows how to perform a logit regression using Gretl.
 
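For readers who prefer code to Gretl’s menus, a minimal logit in Python with statsmodels follows the same steps: fit the model, then read off coefficients and odds ratios. The data below are simulated, so all numbers are illustrative.

```python
import numpy as np
import statsmodels.api as sm

# Simulated data: one predictor, binary outcome
rng = np.random.default_rng(seed=1)
x = rng.normal(size=500)
p = 1 / (1 + np.exp(-(-0.5 + 1.2 * x)))   # true logit: -0.5 + 1.2x
y = rng.binomial(1, p)

# Fit a simple logistic regression (constant + x)
X = sm.add_constant(x)
model = sm.Logit(y, X).fit(disp=False)
print(model.summary())
print("odds ratios:", np.exp(model.params))
```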

On the limits of randomization

16 March, 2014 at 18:23 | Posted in Statistics & Econometrics | 1 Comment

In the video below, Angus Deaton, Professor of Economics and International Affairs at Princeton University, explains why using Randomized Controlled Trials (RCTs) is not at all the “gold standard” it has lately so often been portrayed as. As yours truly has repeatedly argued on this blog (e.g. here and here), RCTs usually do not provide evidence that their results are exportable to other target systems. The almost religious conviction of their proponents cannot hide the fact that RCTs cannot be taken for granted to yield generalizable results. That something works somewhere is no warranty that it will work for us, or that it works in general.

Using randomization to analyze discrimination (student stuff)

15 March, 2014 at 17:53 | Posted in Statistics & Econometrics | 5 Comments

 
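The randomization logic lends itself to a permutation test: shuffle the group labels many times and see how often a difference as large as the observed one arises by chance. A bare-bones sketch in Python (with made-up callback data, in the spirit of résumé audit studies):

```python
import numpy as np

rng = np.random.default_rng(seed=7)

# Made-up callback data: 1 = callback, 0 = no callback
group_a = np.array([1] * 40 + [0] * 160)   # 20% callback rate
group_b = np.array([1] * 26 + [0] * 174)   # 13% callback rate
observed = group_a.mean() - group_b.mean()

# Permutation test: under the null, group labels are exchangeable
pooled = np.concatenate([group_a, group_b])
n_a = len(group_a)
diffs = np.empty(10_000)
for i in range(len(diffs)):
    rng.shuffle(pooled)
    diffs[i] = pooled[:n_a].mean() - pooled[n_a:].mean()

p_value = np.mean(np.abs(diffs) >= abs(observed))
print(f"observed difference: {observed:.3f}, permutation p-value: {p_value:.4f}")
```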

Euler’s method (student stuff)

7 March, 2014 at 16:58 | Posted in Statistics & Econometrics | Leave a comment

 
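The iteration itself is just y(next) = y + h·f(t, y); a compact Python version (with an illustrative test equation, not the one from the video) makes this explicit.

```python
import numpy as np

def euler(f, t0, y0, h, n_steps):
    """Euler's method: advance y' = f(t, y) with step size h."""
    t, y = t0, y0
    ts, ys = [t], [y]
    for _ in range(n_steps):
        y = y + h * f(t, y)   # the Euler update
        t = t + h
        ts.append(t)
        ys.append(y)
    return np.array(ts), np.array(ys)

# Test on y' = y, y(0) = 1, whose exact solution is e^t
ts, ys = euler(lambda t, y: y, t0=0.0, y0=1.0, h=0.1, n_steps=10)
print(f"Euler at t=1: {ys[-1]:.4f}  (exact: {np.e:.4f})")
```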

The one statistics book every economist ought to have read

6 March, 2014 at 14:01 | Posted in Statistics & Econometrics | 1 Comment

Mathematical statistician David A. Freedman‘s Statistical Models and Causal Inference (Cambridge University Press, 2010) is a marvellous book. It ought to be mandatory reading for every serious social scientist – including economists and econometricians – who doesn’t want to succumb to ad hoc assumptions and unsupported statistical conclusions!

How do we calibrate the uncertainty introduced by data collection? Nowadays, this question has become quite salient, and it is routinely answered using well-known methods of statistical inference, with standard errors, t-tests, and P-values … These conventional answers, however, turn out to depend critically on certain rather restrictive assumptions, for instance, random sampling …

Thus, investigators who use conventional statistical technique turn out to be making, explicitly or implicitly, quite restrictive behavioral assumptions about their data collection process … More typically, perhaps, the data in hand are simply the data most readily available …

The moment that conventional statistical inferences are made from convenience samples, substantive assumptions are made about how the social world operates … When applied to convenience samples, the random sampling assumption is not a mere technicality or a minor revision on the periphery; the assumption becomes an integral part of the theory …

In particular, regression and its elaborations … are now standard tools of the trade. Although rarely discussed, statistical assumptions have major impacts on analytic results obtained by such methods.

Consider the usual textbook exposition of least squares regression. We have n observational units, indexed by i = 1, …, n. There is a response variable yi, conceptualized as μi + εi, where μi is the theoretical mean of yi while the disturbances or errors εi represent the impact of random variation (sometimes of omitted variables). The errors are assumed to be drawn independently from a common (Gaussian) distribution with mean 0 and finite variance. Generally, the error distribution is not empirically identifiable outside the model; so it cannot be studied directly, even in principle, without the model. The error distribution is an imaginary population and the errors εi are treated as if they were a random sample from this imaginary population, a research strategy whose frailty was discussed earlier.

Usually, explanatory variables are introduced and μi is hypothesized to be a linear combination of such variables. The assumptions about the μi and εi are seldom justified or even made explicit, although minor correlations in the εi can create major bias in estimated standard errors for coefficients …

Why do μi and εi behave as assumed? To answer this question, investigators would have to consider, much more closely than is commonly done, the connection between social processes and statistical assumptions …

We have tried to demonstrate that statistical inference with convenience samples is a risky business. While there are better and worse ways to proceed with the data at hand, real progress depends on deeper understanding of the data-generation mechanism. In practice, statistical issues and substantive issues overlap. No amount of statistical maneuvering will get very far without some understanding of how the data were produced.

More generally, we are highly suspicious of efforts to develop empirical generalizations from any single dataset. Rather than ask what would happen in principle if the study were repeated, it makes sense to actually repeat the study. Indeed, it is probably impossible to predict the changes attendant on replication without doing replications. Similarly, it may be impossible to predict changes resulting from interventions without actually intervening.
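Freedman’s warning that minor correlations in the εi can create major bias in estimated standard errors is easy to verify by simulation. A small sketch (illustrative numbers throughout; assuming numpy and statsmodels are available) compares the conventional OLS standard error with the true sampling spread of the slope estimate when both the regressor and the errors are mildly autocorrelated.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(seed=3)
n, reps = 200, 2000   # sample size and number of replications

def ar1(rho, size, rng):
    """Generate a mean-zero AR(1) series with persistence rho."""
    out = np.empty(size)
    out[0] = rng.normal()
    for t in range(1, size):
        out[t] = rho * out[t - 1] + rng.normal()
    return out

# A persistent regressor, held fixed across replications
x = ar1(0.9, n, rng)
X = sm.add_constant(x)

betas, reported_se = [], []
for _ in range(reps):
    e = ar1(0.5, n, rng)            # mildly autocorrelated errors
    y = 1.0 + 2.0 * x + e
    res = sm.OLS(y, X).fit()
    betas.append(res.params[1])
    reported_se.append(res.bse[1])

# The conventional (i.i.d.-error) standard error understates
# the true sampling variability of the slope estimate
print(f"true sampling sd of the slope: {np.std(betas):.4f}")
print(f"average reported OLS se:       {np.mean(reported_se):.4f}")
```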

On chance, probability, randomness, uncertainty and all that

26 February, 2014 at 23:04 | Posted in Statistics & Econometrics | 2 Comments

 

Hicks on the inapplicability of probability calculus

24 February, 2014 at 18:48 | Posted in Economics, Statistics & Econometrics | 11 Comments

To understand real world “non-routine” decisions and unforeseeable changes in behaviour, ergodic probability distributions are of no avail. In a world full of genuine uncertainty — where real historical time rules the roost — the probabilities that ruled the past are not necessarily those that will rule the future.

When we cannot accept that the observations, along the time-series available to us, are independent … we have, in strict logic, no more than one observation, all of the separate items having to be taken together. For the analysis of that the probability calculus is useless; it does not apply … I am bold enough to conclude, from these considerations, that the usefulness of ‘statistical’ or ‘stochastic’ methods in economics is a good deal less than is now conventionally supposed … We should always ask ourselves, before we apply them, whether they are appropriate to the problem in hand. Very often they are not … The probability calculus is no excuse for forgetfulness.

John Hicks, Causality in Economics, 1979:121

To simply assume that economic processes are ergodic, and a fortiori in any relevant sense timeless, is not a sensible way of dealing with the kind of genuine uncertainty that permeates open systems such as economies.

Added 25 February: Commenting on this article, Paul Davidson writes:

After reading my article on the fallacy of rational expectations, Hicks wrote me a letter dated 12 February 1983, in which he said: “I have just been reading your RE [rational expectations] paper … I do like it very much … You have now rationalized my suspicions and shown me that I missed a chance of labeling my own point of view as nonergodic. One needs a name like that to ram a point home.”
