Econometric objectivity …

18 October, 2016 at 10:16 | Posted in Statistics & Econometrics | 2 Comments


It is clearly the case that experienced modellers could easily come up with significantly different models based on the same set of data, thus undermining claims to researcher-independent objectivity. This has been demonstrated empirically by Magnus and Morgan (1999), who conducted an experiment in which an apprentice had to try to replicate the analysis of a dataset as it might have been carried out by three different experts (Leamer, Sims, and Hendry) following their published guidance. In all cases the results differed both from each other and from those the experts themselves would have produced, demonstrating the importance of tacit knowledge in statistical analysis.

Magnus and Morgan conducted a further experiment in which eight expert teams, from different universities, analysed the same sets of data, each using its own particular methodology. The data concerned the demand for food in the US and in the Netherlands and were based on a classic study by Tobin (1950), augmented with more recent data. The teams were asked to estimate the income elasticity of food demand and to forecast per capita food consumption. In terms of elasticities, the lowest estimates were around 0.38 whilst the highest were around 0.74 – vastly different, especially when remembering that they were based on the same sets of data. The forecasts were perhaps even more extreme: from a base of around 4000 in 1989, the lowest forecast for the year 2000 was 4130 while the highest was nearly 18000!
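To see how estimated elasticities feed into forecasts, here is a minimal sketch of a constant-elasticity demand forecast. The base level (4000) and the elasticity range (0.38–0.74) come from the text; the 2% annual income growth rate and the functional form are assumptions chosen purely for illustration, not the teams' actual models:

```python
def forecast(base, elasticity, income_growth, years):
    """Forecast consumption under a constant income elasticity of demand:
    C_t = C_0 * (Y_t / Y_0) ** elasticity."""
    income_ratio = (1 + income_growth) ** years
    return base * income_ratio ** elasticity

base = 4000   # per capita food consumption in 1989 (from the text)
years = 11    # 1989 -> 2000
g = 0.02      # assumed annual per capita income growth (illustrative)

low = forecast(base, 0.38, g, years)   # lowest reported elasticity
high = forecast(base, 0.74, g, years)  # highest reported elasticity
print(round(low), round(high))
```

Under this simple functional form, the elasticity spread alone produces only a modest forecast gap; the far wider published spread (4130 versus nearly 18000) reflects the teams' full modelling choices, not merely their elasticity estimates.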

John Mingers



  1. This does not prove anything, because the subject is insufficiently general to first include and then strategically eliminate all the unnecessary effects. Also, apprentices are scarcely the most experienced analysts.

  2. This result isn’t even a tiny bit surprising to anyone who has experience working with the sort of “big data” data sets now routinely processed by consumer-facing internet companies and machine-learning projects.
    The cardinality of the data that economists deal with is simply far too small (one data point per quarter? seriously?), the dimensionality of the data is far too great, and the measurements themselves far too squishy to get any useful predictive value. Data scientists at ad-tech companies are delighted to get predictive uplifts of a few percent over random from clever massaging of hundreds of thousands of contemporaneous, precisely controlled, discrete “econometric” data points.
    It goes to show (as also evidenced in the recent heads/tails discussion) that for all their mathematical self-pleasuring, when it comes to statistical analysis, economists as a community remain blissfully ignorant.
    And that’s just for the simple vanilla urn-model statistics, well before getting into feedback effects, as in, e.g.:

