The unknown knowns of modern macroeconomics

22 Jul, 2012 at 16:20 | Posted in Economics | 5 Comments

The financial crisis of 2007-08 took most laymen and economists by surprise. What was it that went wrong with our macroeconomic models, since they obviously did not foresee the collapse or even make it conceivable?

The root of our problem ultimately goes back to how we look upon the data we are handling. In modern neoclassical macroeconomics – Dynamic Stochastic General Equilibrium (DSGE), New Synthesis, New Classical and “New Keynesian” – variables are treated as if drawn from a known “data-generating process” that unfolds over time and on which we therefore have access to heaps of historical time-series. If we do not assume that we know the “data-generating process” – if we do not have the “true” model – the whole edifice collapses.

Modern macroeconomics obviously did not anticipate the enormity of the problems that unregulated “efficient” financial markets created. Why? Because it builds on the myth of us knowing the “data-generating process” and that we can describe the variables of our evolving economies as drawn from an urn containing stochastic probability functions with known means and variances.

This is like saying that you are going on a holiday trip and that you know that the chance of the weather being sunny is at least 30%, and that this is enough for you to decide whether or not to bring along your sunglasses. You are supposed to be able to calculate the expected utility based on the given probability of sunny weather and make a simple either-or decision. Uncertainty is reduced to risk.

But this is not always possible. Often we simply do not know. According to one model the chance of sunny weather is perhaps somewhere around 10% and according to another – equally good – model the chance is perhaps somewhere around 40%. We cannot put exact numbers on these assessments. We cannot calculate means and variances. There are no given probability distributions that we can appeal to.
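The contrast between the two situations can be sketched in a few lines of Python. All the utility numbers below are made up purely for illustration; the point is only that with one known probability the expected-utility calculus yields a decision, while with two equally good but disagreeing models it does not:

```python
# Risk vs. Keynesian/Knightian uncertainty in the holiday-trip example.
# The utilities (1.0, -0.2) are illustrative assumptions, not data.

def expected_utility(p_sunny, u_if_sunny, u_if_not):
    """Expected utility of bringing sunglasses, given P(sunny)."""
    return p_sunny * u_if_sunny + (1 - p_sunny) * u_if_not

# Risk: one known probability, one unambiguous answer.
eu_bring = expected_utility(0.30, u_if_sunny=1.0, u_if_not=-0.2)
eu_leave = 0.0  # baseline: leave the sunglasses at home
decision = "bring" if eu_bring > eu_leave else "leave"

# Uncertainty: two equally good models of P(sunny) that disagree.
models = [0.10, 0.40]
eus = [expected_utility(p, 1.0, -0.2) for p in models]
# One model says "leave", the other says "bring" -- the calculus
# singles out no unique "rational" decision.
```

Under the first model the expected utility is negative, under the second it is positive, so unless we can put a probability on the models themselves the reduction of uncertainty to risk breaks down.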

In the end this is what it all boils down to. We all know that many activities, relations, processes and events are of the Keynesian uncertainty type. The data do not unequivocally single out one decision as the only “rational” one. Neither the economist, nor the deciding individual, can fully pre-specify how people will decide when facing uncertainties and ambiguities that are ontological facts of the way the world works.

Some macroeconomists, however, still want to be able to use their hammer. So they decide to pretend that the world looks like a nail, and pretend that uncertainty can be reduced to risk. So they construct their mathematical models on that assumption. The result: financial crises and economic havoc.

How much better it would be – how much smaller the risk of lulling ourselves into the comforting thought that we know everything, that everything is measurable and that we have everything under control – if we would instead just admit that we often simply do not know, and that we have to live with that uncertainty as best we can.

Fooling people into believing that one can cope with an unknown economic future in a way similar to playing at the roulette wheel is a sure recipe for only one thing – economic catastrophe!

The unknown knowns – the things we fool ourselves to believe we know – often have more dangerous repercussions than the “Black Swans” of Knightian unknown unknowns, something quantitative risk management based on the hypothesis of market efficiency and rational expectations has given ample evidence of during the latest financial crisis.


  1. Weekly Post Keynesian weather forecast:

    Monday: We simply do not know!

    Tuesday: We simply do not know!

    Wednesday: We simply do not know!


    Neoclassical weather forecast:

    Now, you decide who to consult if you’re thinking of bringing an umbrella to the office tomorrow.

    And they whine that they are marginalized!

    • if weather forecasters had the track record of neoclassical economists, everyone would carry with them at all times sunglasses, a wool jacket, galoshes, and a tank top. the public would be quite layered at all times

  2. You pose an interesting question about motivation. Obviously the main incentive for the investor is to profit from an understanding of the market, whereas the societal motives are to regulate for maximum social benefit from the market while avoiding catastrophic risk. In the first case, the investor is seeking to profit from taking a certain level of risk, whereas the regulator wishes to avoid socially catastrophic financial events.

    The trouble with the DSGE models is that they model neither market pathologies, nor external events that can lead to a sudden market stall, nor the interrelationships among such events. At Lawrence Berkeley National Laboratory, Dr. Leinweber and his colleagues have looked at programmed trading using advanced computational techniques. This study provides us with a look at a market with large amounts of data and a limited number of agents.

    Here is a link to the OECD Insights piece, which shows that the ‘flash crash’ was not an isolated event, but simply a more extreme case of intrinsic volatility in the markets – hardly the tidy shift between equilibria of the DSGE model. I strongly recommend that those interested read Leinweber’s paper on the Social Science Research Network at

    I believe that the most important underlying point is the necessity to return to the study of economics to promote stability, sustainability and the common good, rather than ever more obtuse attempts to extract maximum profits from markets that have become increasingly abstracted from economic reality and the efficient allocation of capital. The proliferation of interlinked electronic markets is clearly a risk that needs to be understood, as pointed out by Leinweber and his colleagues. As expressed in my recent piece in OECD Insights, the economics profession needs to provide better, clearly understandable tools for policymakers.

    JR Hulls

    • Thanks for the links. I will certainly check them out!

  3. I think I agree with you. Are you saying that false precision is worse than no prediction at all? Or are you saying that economists should rely on analysis that provides accurate but less specific answers?

    What do you think of the sectoral balances approach that Simon Wren-Lewis mentioned? MMT economists like Bill Mitchell use this almost exclusively to analyze the economy. The results seem almost incontrovertible as it has more to do with accounting than anything else.

