The spectacular failure of DSGE models

1 May, 2017 at 11:34 | Posted in Statistics & Econometrics | 3 Comments

In most aspects of their lives humans must plan forwards. They take decisions today that affect their future in complex interactions with the decisions of others. When taking such decisions, the available information is only ever a subset of the universe of past and present information, as no individual or group of individuals can be aware of all the relevant information. Hence, views or expectations about the future, relevant for their decisions, use a partial information set, formally expressed as a conditional expectation given the available information.

Moreover, all such views are predicated on there being no unanticipated future changes in the environment pertinent to the decision. This is formally captured in the concept of ‘stationarity’. Without stationarity, good outcomes based on conditional expectations could not be achieved consistently. Fortunately, there are periods of stability when insights into the way that past events unfolded can assist in planning for the future.

The world, however, is far from completely stationary. Unanticipated events occur, and they cannot be dealt with using standard data-transformation techniques such as differencing, or by taking linear combinations, or ratios. In particular, ‘extrinsic unpredictability’ – unpredicted shifts of the distributions of economic variables at unanticipated times – is common. As we shall illustrate, extrinsic unpredictability has dramatic consequences for the standard macroeconomic forecasting models used by governments around the world – models known as ‘dynamic stochastic general equilibrium’ models – or DSGE models …

Many of the theoretical equations in DSGE models take a form in which a variable today, say income (denoted y_t), depends inter alia on its ‘expected future value’… For example, y_t may be the log-difference between a de-trended level and its steady-state value. Implicitly, such a formulation assumes that some form of stationarity is achieved by de-trending.

Unfortunately, in most economies, the underlying distributions can shift unexpectedly. This vitiates any assumption of stationarity. The consequences for DSGEs are profound. As we explain below, the mathematical basis of a DSGE model fails when distributions shift … This would be like a fire station automatically burning down at every outbreak of a fire. Economic agents are affected by, and notice, such shifts. They consequently change their plans, and perhaps the way they form their expectations. When they do so, they violate the key assumptions on which DSGEs are built.

David Hendry & Grayham Mizon

A great article, not only showing on what shaky mathematical foundations DSGE models are built, but also underlining that to understand real-world ‘non-routine’ decisions and unforeseeable changes in behaviour, stationary probability distributions are of no avail. In a world full of genuine uncertainty — where real historical time rules the roost — the probabilities that ruled the past are not those that will rule the future.
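The point can be made concrete with a minimal numerical sketch (my own illustration, not taken from Hendry and Mizon's paper; the AR(1) coefficient, break date, and shift size are invented for the example). A forecaster estimates the mean of an apparently stationary series on pre-break data and forms the usual conditional-expectation forecast; when the distribution shifts location at an unanticipated date, every subsequent forecast is systematically biased:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical AR(1) process whose unconditional mean shifts from 0 to 5
# at an unanticipated date -- an example of 'extrinsic unpredictability'.
rho, sigma, T, break_t = 0.8, 1.0, 400, 200
mu = np.where(np.arange(T) < break_t, 0.0, 5.0)  # location shift at t = 200
y = np.zeros(T)
for t in range(1, T):
    y[t] = mu[t] + rho * (y[t - 1] - mu[t - 1]) + sigma * rng.standard_normal()

# The forecaster's 'model': the mean estimated on pre-break data only.
pre_mean = y[:break_t].mean()

# One-step conditional-expectation forecasts built on the pre-break mean,
# applied after the break. The errors no longer average to zero: they are
# biased by roughly (1 - rho) times the size of the shift.
forecast = pre_mean + rho * (y[break_t:-1] - pre_mean)
errors = y[break_t + 1:] - forecast
print(pre_mean, errors.mean())
```

Under genuine stationarity the same forecast rule would deliver mean-zero errors however the sample were split; after the shift, the bias persists no matter how much past data the conditional expectation conditions on, which is the sense in which the mathematical basis of such forecasts fails.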

Advocates of DSGE modeling want to have deductively automated answers to fundamental causal questions. But to apply ‘thin’ methods we have to have ‘thick’ background knowledge of what’s going on in the real world, and not in idealized models. Conclusions can only be as certain as their premises — and that also applies to the quest for causality and forecasting predictability in DSGE models.


3 Comments

  1. I would be interested to learn what alternatives you might suggest to replace DSGE modeling.

  2. The DSGE model
    is really quite hollow
    it claims to be dynamic
    but it can’t deal with a panic
    in it just wallows.

  3. Lars et al: The following may well help to explain why DSGE models are unsatisfactory:

    Models that forecast impact of government spending are easily manipulated
    May 4, 2017

    Economists at North Carolina State University and Indiana University have found that the most widely used model for predicting how U.S. government spending affects gross domestic product (GDP) can be rigged using theoretical assumptions to control forecasts of how government spending will stimulate the economy.

    By accounting for these assumptions, the researchers developed an impartial version of the model, which found that every dollar of increased government spending results in more than a dollar’s worth of GDP growth.

    “There is a longstanding debate over the impact of government spending, and people who are very smart disagree – one camp holds that a dollar of spending leads to more than a dollar in GDP growth, while the other camp holds that spending results in less than a dollar in GDP growth,” says Nora Traum, an associate professor of economics at NC State and co-author of a paper describing the work. “This debate is important because it plays a role in determining government spending policies.”

    In an attempt to better understand the issues underlying the debate, the researchers evaluated the model used by economists – from central banks to the International Monetary Fund – to predict the impacts of government spending.

    The researchers found that by making tweaks to specific assumptions in the model, they could effectively force the model to make predictions that supported one government spending camp or the other – even if they used the exact same data.

    For example, the researchers found that assumptions related to how Congress and central banks will address the servicing of national debt could have a powerful effect on the predicted impact of government spending.

    Based on their observations, the researchers then developed an agnostic model, which was designed to avoid those tweaks that predispose the results to support a particular argument.

    “We found that the agnostic model predicts roughly $1.30 in near-term GDP growth for each $1 in spending,” Traum says.

    “This work looks at aggregate government spending, but it raises some interesting questions about the impact of spending in specific areas, and on how these statistical assumptions may be influencing economic forecasts in other sectors,” Traum says.

    The paper, “Clearing Up The Fiscal Multiplier Morass,” is published in the journal American Economic Review. The paper was co-authored by Eric Leeper and Todd Walker of Indiana University.

