## Econometrics — fictions masquerading as science

11 Feb, 2020 at 10:51 | Posted in Economics

In econometrics one often gets the feeling that many of its practitioners think of it as a kind of automatic inferential machine: input data and out comes causal knowledge. This is like pulling a rabbit from a hat. Great — but first you have to put the rabbit in the hat. And this is where assumptions come into the picture.

As social scientists — and economists — we have to confront the all-important question of how to handle uncertainty and randomness. Should we equate randomness with probability? If we do, we have to accept that to speak of randomness we also have to presuppose the existence of nomological probability machines, since probabilities cannot be spoken of – and actually, to be strict, do not at all exist – without specifying such system-contexts.

Accepting a domain of probability theory and a sample space of “infinite populations” — which is legion in modern econometrics — also implies that judgments are made on the basis of observations that are actually never made! Infinitely repeated trials or samplings never take place in the real world. So that cannot be a sound inductive basis for a science with aspirations of explaining real-world socio-economic processes, structures or events. It’s not tenable.

In his book *Statistical Models and Causal Inference: A Dialogue with the Social Sciences*, David Freedman touches on this fundamental problem, which arises when you try to apply statistical models outside overly simple nomological machines like coin tossing and roulette wheels:

> Regression models are widely used by social scientists to make causal inferences; such models are now almost a routine way of demonstrating counterfactuals. However, the “demonstrations” generally turn out to depend on a series of untested, even unarticulated, technical assumptions. Under the circumstances, reliance on model outputs may be quite unjustified. Making the ideas of validation somewhat more precise is a serious problem in the philosophy of science. That models should correspond to reality is, after all, a useful but not totally straightforward idea – with some history to it. Developing appropriate models is a serious problem in statistics; testing the connection to the phenomena is even more serious … In our days, serious arguments have been made from data. Beautiful, delicate theorems have been proved, although the connection with data analysis often remains to be established. And an enormous amount of fiction has been produced, masquerading as rigorous science.

## 1 Comment


“the all-important question of how to handle uncertainty and randomness.”


Social scientists ought to study the ways finance handles uncertainty. One can structure bets using combinations of derivatives so that all risk is clearly defined. Butterfly trades have a maximum gain and loss, predefined. Counterparty risk is largely backstopped by the Fed, and by private clearinghouses backstopped by the Fed. Defining risk upfront, using financial methods, puts bounds on uncertainty.
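The commenter’s point about predefined maximum gain and loss can be sketched for a long call butterfly (the strikes and entry cost below are made-up numbers for illustration): buy one 95 call, sell two 100 calls, buy one 105 call. Whatever the underlying does at expiry, the payoff is bounded on both sides, and the bounds are known at trade entry.

```python
# Payoff at expiry of a long call butterfly: +1 call at k_lo,
# -2 calls at k_mid, +1 call at k_hi, net of the premium paid.
# All parameters are hypothetical, chosen only to show the bounds.
def butterfly_payoff(s, k_lo=95.0, k_mid=100.0, k_hi=105.0, cost=1.5):
    call = lambda k: max(s - k, 0.0)  # intrinsic value of a call struck at k
    return call(k_lo) - 2 * call(k_mid) + call(k_hi) - cost

# Scan expiry prices from 80.0 to 120.0 in steps of 0.1.
prices = [p / 10 for p in range(800, 1201)]
payoffs = [butterfly_payoff(p) for p in prices]

print(max(payoffs))  # 3.5 = strike width (5.0) minus cost (1.5), at s = 100
print(min(payoffs))  # -1.5 = the premium paid, wherever s ends up outside the wings
```

The maximum gain (strike width minus premium) and the maximum loss (the premium) are fixed before the trade is placed, which is the sense in which such structures put explicit bounds on uncertainty.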


Economists would do well to study and report on such mechanisms, because the practice of economic agents often violates the rationality assumptions of economic theory. If you hedge an iron butterfly with long volatility, in the right proportions, you might have no preference on price movement or its velocity. You’ll certainly profit no matter what state the market ends up in …

Comment by Robert Mitchell, 12 Feb 2020