Econometrics — the art of pulling a rabbit out of a hat

22 Dec, 2020 at 17:57 | Posted in Statistics & Econometrics | 5 Comments

In econometrics one often gets the feeling that many of its practitioners think of it as a kind of automatic inferential machine: input data and out comes causal knowledge. This is — as Joan Robinson once had it — like pulling a rabbit from a hat. Great — but first you have to put the rabbit in the hat. And this is where assumptions come into the picture.

The assumption of imaginary ‘superpopulations’ is one of the many dubious assumptions used in modern econometrics, and as Clint Ballinger highlights, this is a particularly questionable rabbit-pulling assumption:

Inferential statistics are based on taking a random sample from a larger population … and attempting to draw conclusions about a) the larger population from that data and b) the probability that the relations between measured variables are consistent or are artifacts of the sampling procedure.

However, in political science, economics, development studies and related fields the data often represents as complete an amount of data as can be measured from the real world (an ‘apparent population’). It is not the result of a random sampling from a larger population. Nevertheless, social scientists treat such data as the result of random sampling.

Because there is no source of further cases a fiction is propagated—the data is treated as if it were from a larger population, a ‘superpopulation’ where repeated realizations of the data are imagined. Imagine there could be more worlds with more cases and the problem is fixed …

What ‘draw’ from this imaginary superpopulation does the real-world set of cases we have in hand represent? This is simply an unanswerable question. The current set of cases could be representative of the superpopulation, and it could be an extremely unrepresentative sample, a one in a million chance selection from it …

The problem is not one of statistics that need to be fixed. Rather, it is a problem of the misapplication of inferential statistics to non-inferential situations.
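To make Ballinger’s point concrete, here is a minimal sketch (a hypothetical illustration with made-up numbers, not drawn from any real dataset) of why the inferential machinery runs idle on an ‘apparent population’: the standard-error formula presupposes sampling variability that, with every case already in hand, simply does not exist.

```python
import statistics

# Suppose these are ALL the cases that exist -- say, growth rates for
# every country in a region (hypothetical numbers, for illustration only).
apparent_population = [2.1, 0.4, 3.3, 1.8, 2.7, 0.9, 1.5, 2.2]

mean = statistics.mean(apparent_population)

# The textbook standard error treats the data as a random draw of size n
# from some larger population.  Computing it here is mechanical, not meaningful:
n = len(apparent_population)
se = statistics.stdev(apparent_population) / n ** 0.5

print(f"mean = {mean:.4f}")
print(f"'standard error' = {se:.4f}")
# The SE quantifies variability across repeated samples.  With the complete
# set of cases in hand there is no repeated sampling, so the number answers
# a question about an imagined 'superpopulation' -- exactly Ballinger's point.
```

The mean is a plain description of the cases we have; it is only the ‘standard error’ that smuggles in the fictional draw from a superpopulation.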


  1. I’m reminded that self-reporting is fraught with psychological pitfalls, both individual and group, unless the questions themselves are self-selecting. A classic case of a rigorous study back in the day was on breast cancer, conducted on Swedish nurses [about 6K if memory serves] over a protracted period. Medical staff are known to be good self-reporters for ethical and knowledge-based reasons, compared with other groups whose environmental conditioning, for whatever reason, diminishes the veracity of the data. The outcome ran quite contrary to the popular view favouring hyper-examination of women for early detection: it seems many precancerous cells went back into remission without any short- or long-term ramifications for the nurses. In other words, the aggressive/invasive response prompted by the hyper-examination protocol was front-loading outcomes: a self-fulfilling paradigm, and not necessarily in the patients’ best interests.


    Another classic case is the attempt to lower the birth rate in Africa via the introduction of prophylactics. A group of medical/social-behaviourist professionals did the rounds in villages to inform the locals of the benefits of their use, both against unwanted births and against the rise in STDs. It seems some projected their own cultural preconceptions in establishing a means to pass this knowledge on, before taking the time to understand their target groups’ social dynamics. On one occasion, after spending time using props to show how a prophylactic is used, a visual media presentation was offered: after the sun set, a sheet was used as a screen for a projector showing a village setting with actors discussing the benefits of the topic. The village was in hysterics over the short film, and the professionals were at a complete loss over the response to their hard work. How could such a well-thought-out and planned endeavour, with such serious ramifications for the people it targeted, elicit such a response? At some point one of the professionals inquired why the village responded in such a manner and what was so hilarious, only to be informed that the presentation offered the villagers a reflection of themselves they had never had before, especially something about a chicken running around in the foreground.


    This sums up my opinion on mainstream/orthodox economics at the moment.

  2. I should add that this is not a case of the subjects being observed, but of the observers not doing their homework. E.g. in the latter case above, the issue was that the villagers had never been environmentally conditioned to decode the message the visual media was attempting to deliver, rightly or wrongly in construct; hence the observers projected their own environmental biases onto the very people they were attempting to assist.

    Something about cookie-cutter optics in a dynamic environment, with time- and space-line lag.

  3. What is the art of putting the rabbit into the hat?
    Is that also econometrics?
    Maybe econometrics should be taught as that, as putting the rabbit into the hat.
    Sadly, I can think of few exemplary papers where econometrics was used fairly and gave an intuitively informative result. Maybe if a list of such papers were available to imitative graduate students, practice would improve.

  4. Prof Syll’s Rabbit
    In this post, as in numerous previous posts, Prof. Syll slanderously accuses econometricians of deception.
    He alleges that they assume fake, non-existent data, resulting in “a fiction” similar to the well-known magician’s trick of pulling a rabbit out of an empty hat.
    However, Prof. Syll’s argument is invalid because he confuses deductive with inductive reasoning, and he confuses assumptions with hypotheses.
    Prof. Syll bases his argument on a comment made by Joan Robinson in 1966. She criticized economic theorists for being surprised by deductions which were merely logical consequences of their own assumptions. She suggested that this was analogous to a magician putting a rabbit into his hat and later pulling it out again.
    Rabbit put into hat = assumption.
    Rabbit pulled out of hat = logical deduction drawn from the assumption.

    Note that Robinson’s argument did not concern empirical work – it concerned the deductive reasoning of theorists. The analogy of the magician’s rabbit is wholly inapplicable to the empirical analysis / statistical inference / inductive reasoning of econometricians and other applied scientists.
    Scientists formulate probabilistic models and test hypotheses concerning patterns which may exist in the available data. Hypotheses are completely different concepts from assumptions. It is not assumed that any of the hypothesized patterns exist in the data.
    Moreover, there is no expectation of finding a perfectly clear “rabbit” pattern. Very often the data shows only a vague blur, or just random noise, or even a completely unexpected relationship. And in the absence of sufficient data, econometricians can’t draw any conclusions whatsoever.
    Thus, contrary to Prof Syll, econometricians don’t put any rabbits into their hats, and they never pull one out.
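
    The hypothesis-testing workflow this comment describes can be sketched in a few lines (a hypothetical illustration on simulated data, not anyone’s actual study): hypothesize a relation between x and y, then ask whether the observed pattern could plausibly be noise.

```python
import math
import random

# Simulated data, purely illustrative: y is pure noise, so there is
# genuinely no rabbit in this hat.
random.seed(42)
n = 30
x = [random.uniform(0, 10) for _ in range(n)]
y = [random.gauss(0, 1) for _ in range(n)]

# Pearson correlation, computed by hand.
mx, my = sum(x) / n, sum(y) / n
sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
sxx = sum((a - mx) ** 2 for a in x)
syy = sum((b - my) ** 2 for b in y)
r = sxy / math.sqrt(sxx * syy)

# t-statistic for H0: no correlation; with n - 2 = 28 degrees of freedom,
# |t| > ~2.05 would be needed to reject at the 5% level.
t = r * math.sqrt((n - 2) / (1 - r ** 2))
print(f"r = {r:.3f}, t = {t:.3f}")
# With noise the test will usually fail to reject: the hypothesized
# pattern is simply not found, and nothing is pulled out of the hat.
```

    The hypothesis here is not an assumption that the pattern exists; it is a candidate pattern the data is free to contradict, which is the distinction the comment draws.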

  5. Assumptions are conclusions without proof. Conclusions are proven hypotheses; the proofs are based on assumptions. Substituting, we see that proofs are based on conclusions without proof. Hypotheses are proposed proofs without proof. Thus the proposition “Hypotheses are completely different concepts from assumptions” is wrong, because hypotheses assume assumptions, and nothing is ultimately provable. The problem of infinite regress is well-known in Philosophy, but mostly ignored after it is (sometimes) introduced and hand-waved away at the start of textbooks. (Externalities receive the same treatment in Economics: introduce them early, then ignore them.)
    “Very often the data only shows a vague blur, or just random noise, or even a completely unexpected relationship.”
    Consider a model of supply and demand. The more bond supply increases, the less it should be in demand; interest rates should rise, the price of the bond should fall, and the cost of issuing debt should rise. History reveals, however, that US bond supply has increased well over 1000% since 1980, while the cost of servicing that debt has fallen by around 90%. Contrary to the Law of Supply and Demand, bond supply has increased, interest rates have fallen, bond prices have risen, and the cost of issuing debt has fallen.
    Econometrics has disproven the Law of Supply and Demand.

