## What is it that DSGE models — really — explain?

16 June, 2017 at 16:55 | Posted in Economics

Now it is "dynamic stochastic general equilibrium" (DSGE) models inspired by the Lucas critique that have failed to predict or even explain the Great Recession of 2007–2009. More precisely, the implicit "explanations" based on these models are that the recession, including the millions of net jobs lost, was primarily due to large negative shocks to both technology and willingness to work … So can the reputation of modern macroeconomics be rehabilitated by simply modifying DSGE models to include a few more realistic shocks? …

A simple example helps illustrate for the uninitiated just how DSGE models work and why it should come as little surprise that they are largely inadequate for the task of explaining the Great Recession.

For this simple DSGE model, consider the following technical assumptions: i) an infinitely-lived representative agent with rational expectations and additive utility in current and discounted future log consumption and leisure; ii) a Cobb-Douglas aggregate production function with labor-augmenting technology; iii) capital accumulation with a fixed depreciation rate; and iv) a stochastic process for exogenous technology shocks …

It is worth making two basic points about the setup. First, by construction, technology shocks are the only underlying source of fluctuations in this simple model. Thus, if we were to assume that U.S. real GDP was the literal outcome of this model, we would be assuming a priori that fluctuations in real GDP were ultimately due to technology. When faced with the Great Recession, this model would have no choice but to imply that technology shocks were somehow to blame. Second, despite the underlying role of technology, the observed fluctuations in real GDP can be divided into those that directly reflect the behavior of the exogenous shocks and those that reflect the endogenous capital accumulation in response to these shocks.

To be more precise about these two points, it is necessary to assume a particular process for the exogenous technology shocks. In this case, let’s assume technology follows a random walk with drift [and assuming a 100% depreciation rate of capital]…

So, with this simple DSGE model and for typical measures of the capital share, we have the implication that output growth follows an AR(1) process with an AR coefficient of about one third. This is notable given that such a time-series model does reasonably well as a parsimonious description of quarterly real GDP dynamics for the U.S. economy …
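This closed-form case can be checked numerically. The sketch below is not from the original post; it simulates the textbook log-utility, Cobb-Douglas, 100%-depreciation model, in which the planner's optimal policy is to save the fraction αβ of output, so that log output follows log y_t = a_t + α(log αβ + log y_{t−1}) and output growth is an AR(1) with coefficient equal to the capital share α. The parameter values (drift, shock volatility, discount factor) are illustrative assumptions, not estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

alpha, beta = 1 / 3, 0.99   # capital share, discount factor (illustrative)
mu, sigma = 0.004, 0.01     # drift and s.d. of technology shocks (illustrative)
T = 200_000

# Log technology follows a random walk with drift: a_t = mu + a_{t-1} + eps_t
eps = rng.normal(0.0, sigma, T)
a = np.cumsum(mu + eps)

# With log utility and 100% depreciation, the planner saves the fraction
# alpha*beta of output, so log y_t = a_t + alpha*(log(alpha*beta) + log y_{t-1}).
log_y = np.empty(T)
log_y[0] = a[0]
for t in range(1, T):
    log_y[t] = a[t] + alpha * (np.log(alpha * beta) + log_y[t - 1])

g = np.diff(log_y)  # output growth

# OLS slope of g_t on g_{t-1}: in this model it equals the capital share alpha.
x, z = g[:-1] - g[:-1].mean(), g[1:] - g[1:].mean()
ar1 = float((x @ z) / (x @ x))
print(round(ar1, 2))  # ~ 0.33, i.e. roughly one third
```

The estimated AR coefficient lands at the capital share, matching the analytical result quoted above: the model's entire internal propagation is summarized by α.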

However, the rather absurd assumption of a 100% depreciation rate at the quarterly horizon would surely still have prompted a sharp question or two in a University of Chicago seminar back in the day. So, with this in mind, what happens if we consider the more general case?

Unfortunately, for more realistic depreciation rates, we cannot solve the model analytically. Instead, taking a log-linearization around steady state, we can use standard methods to solve for output growth … This simple DSGE model is able to mimic the apparent AR(1) dynamics in real GDP growth. But it does so by assuming the exogenous technology shocks also follow an AR(1) process with an AR coefficient that happens to be the same as the estimated AR coefficient for output growth. Thus, the magic trick has been revealed: a rabbit was stuffed into the hat and then a rabbit jumped out of the hat …
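The "rabbit in the hat" point can also be illustrated numerically. The sketch below is my own, not from the original post, and it replaces the planner's (non-analytic) optimal policy with a fixed-saving-rate, Solow-style accumulation rule as a crude stand-in; all parameter values are illustrative assumptions. The point it demonstrates survives the simplification: with realistic depreciation the model's endogenous propagation is weak, so output growth is close to white noise when technology growth is white noise, and inherits an AR coefficient of about one third only when the shocks are given that coefficient by hand.

```python
import numpy as np

def growth_ac1(rho, T=100_000, burn=1_000, seed=1):
    """First-order autocorrelation of simulated output growth when
    technology growth follows an AR(1) with coefficient rho."""
    rng = np.random.default_rng(seed)
    alpha, delta, s, sigma = 1 / 3, 0.025, 0.2, 0.007  # illustrative values

    # Technology growth: da_t = rho * da_{t-1} + eps_t (drift omitted)
    da = np.zeros(T)
    eps = rng.normal(0.0, sigma, T)
    for t in range(1, T):
        da[t] = rho * da[t - 1] + eps[t]
    a = np.cumsum(da)

    # Fixed-saving-rate capital accumulation as a stand-in for the
    # planner's policy: k' = (1 - delta) k + s y,  y = e^a k^alpha
    k = np.empty(T)
    y = np.empty(T)
    k[0] = 20.0  # start near the stochastic steady state
    for t in range(T - 1):
        y[t] = np.exp(a[t]) * k[t] ** alpha
        k[t + 1] = (1 - delta) * k[t] + s * y[t]
    y[-1] = np.exp(a[-1]) * k[-1] ** alpha

    g = np.diff(np.log(y[burn:]))
    x, z = g[:-1] - g[:-1].mean(), g[1:] - g[1:].mean()
    return float((x @ z) / (x @ x))

print(growth_ac1(rho=0.0))   # near zero: endogenous propagation is weak
print(growth_ac1(rho=0.33))  # ~0.33: the persistence was put in by hand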

Despite their increasing sophistication, DSGE models share one key thing in common with their RBC predecessors. After more than two decades of earnest promises to do better in the “future directions” sections of academic papers, they still have those serially-correlated shocks. Thus, the models now “explain” variables like real GDP, inflation, and interest rates as the outcome of more than just serially-correlated technology shocks. They also consider serially-correlated preference shocks and serially-correlated policy shocks …

And still mainstream economists seem to be impressed by the 'rigour' brought to macroeconomics by New-Classical-New-Keynesian DSGE models with their rational expectations and microfoundations!

It is difficult to see why.

Take the rational expectations assumption. Rational expectations in the mainstream economists' world implies that relevant distributions have to be time independent. This amounts to assuming that an economy is like a closed system with known stochastic probability distributions for all different events. In reality it strains one's beliefs to try to represent economies as outcomes of stochastic processes. An existing economy is a single realization tout court, and hardly conceivable as one realization out of an ensemble of economy-worlds, since an economy can hardly be conceived as being completely replicated over time. It is — to say the least — very difficult to see any similarity between these modelling assumptions and the expectations of real persons.

In the world of the rational expectations hypothesis we are never disappointed in any other way than when we lose at roulette. But real life is not an urn or a roulette wheel. And that is also the reason why allowing for cases where agents make 'predictable errors' in DSGE models doesn't take us any closer to a relevant and realistic depiction of actual economic decisions and behaviours. If we really want to have anything of interest to say about real economies, financial crises and the decisions and choices real people make, we have to replace the rational expectations hypothesis with more relevant and realistic assumptions concerning economic agents and their expectations than childish roulette and urn analogies.

'Rigorous' and 'precise' DSGE models cannot be considered anything else than unsubstantiated conjectures as long as they aren't supported by evidence from outside the theory or model. To my knowledge, no decisive empirical evidence of any kind has been presented.

No matter how precise and rigorous the analysis, and no matter how hard one tries to cast the argument in modern mathematical form, they do not push economic science forward a single millimeter if they do not stand the acid test of relevance to the target. No matter how clear, precise, rigorous or certain the inferences delivered inside these models are, they do not say anything about real-world economies.

Proving things ‘rigorously’ in DSGE models is at most a starting-point for doing an interesting and relevant economic analysis. Forgetting to supply export warrants to the real world makes the analysis an empty exercise in formalism without real scientific value.

Mainstream economists think there is a gain from the DSGE style of modeling in its capacity to offer some kind of structure around which to organise discussions. To me that sounds more like a religious theoretical-methodological dogma, where one paradigm rules in divine hegemony. That’s not progress. That’s the death of economics as a science.

## 3 Comments »


“If DSGE models work, why don’t people use them to get rich?

When I studied macroeconomics in grad school, I was told something along these lines:

“DSGE models are useful for policy advice because they (hopefully) pass the Lucas Critique. If all you want to do is forecast the economy, you don’t need to pass the Lucas Critique, so you don’t need a DSGE model.”

This is usually what I hear academic macroeconomists say when asked to explain the fact that essentially no one in the private sector uses DSGE models. Private-sector people can’t set economic policy, the argument goes, so they don’t need Lucas Critique-robust models.

The problem is, this argument is wrong. If you have a model that both A) satisfies the Lucas Critique and B) is a decent model of the economy, you can make huge amounts of money. This is because although any old spreadsheet can be used to make unconditional forecasts of the economy, you need Lucas-robust models to make good policy-conditional forecasts.

Let me explain. An unconditional forecast is when you say “GDP growth will be 2.4% next year”, or “inflation will be 1.7% next quarter”. For this kind of thing, any old spreadsheet will do.
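The "any old spreadsheet" claim is easy to make concrete. The sketch below is my own illustration, not Noah Smith's: it produces a one-step-ahead unconditional forecast from a fitted AR(1), which is about as simple as forecasting gets. The quarterly growth figures are made-up numbers, used only to show the mechanics.

```python
import numpy as np

def ar1_forecast(g):
    """One-step-ahead unconditional forecast from a fitted AR(1):
    forecast = mean + phi * (last observation - mean)."""
    g = np.asarray(g, dtype=float)
    m = g.mean()
    x, z = g[:-1] - m, g[1:] - m
    phi = float((x @ z) / (x @ x))  # OLS estimate of the AR(1) coefficient
    return m + phi * (g[-1] - m)

# Hypothetical quarterly GDP growth rates (percent, annualized)
growth = [2.1, 1.8, 2.6, 3.0, 1.9, 2.2, 2.8, 2.4]
forecast = ar1_forecast(growth)
print(round(forecast, 2))
```

Nothing here requires microfoundations or Lucas-robustness; it is a purely statistical extrapolation, which is Smith's point about unconditional forecasts.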

A policy-conditional forecast is when you say "If the Fed tapers, inflation will fall by 0.5% next year." To make these forecasts as accurate as possible, you need to know how policy affects the economy. And if your model is not Lucas-robust, then you will not be able to know how policy affects the economy, so you will react sub-optimally to a policy change.

For example, suppose the Fed suddenly lowers interest rates substantially. Most people, using their silly spreadsheets with their 70s-vintage Phillips Curves, will forecast a rise in GDP growth, so they will pay a lot for stocks, expecting higher profits from the increased growth. But wise DSGE modelers, using the Nobel-winning and ostensibly Lucas-robust Kydland-Prescott 1982 model, know that the Phillips Curve is not structural. They know that the promised growth will not occur, so as soon as stocks become overpriced, they short the S&P. When the hoped-for growth does not materialize and stocks fall, the DSGE modelers reap a huge profit at the expense of the spreadsheet modelers.

Now that’s a bit of an old example, so let’s take a more modern one. Suppose the Fed launches a new program of QE. Clever DSGE modelers, armed with Steve Williamson’s 2013 QE paper, know that QE will be deflationary rather than inflationary (as most people think). This allows them to take other investors, who are armed only with spreadsheets, for a ride, shorting TIPS and buying Treasuries. Voila – instant riches. Williamson himself endorses this idea, writing:

[I]f it does anything, QE will lower the inflation rate over the long run. And the long run comes sooner than you might think, i.e. if QE gives you a short-run increase in inflation, then if it’s like typical monetary easing, then that effect lasts only a year or two. More to the point, there are other forces post-financial crisis that will cause the real interest rate on safe assets to rise, and inflation to fall further, so long as the Fed keeps short nominal rates at or near the zero lower bound. And there are good reasons to think that the Fed will be stuck at the zero lower bound indefinitely. Conclusion: expect less inflation rather than more. That has to matter for your portfolio choices. (emphasis mine)

So as we see, a Lucas-robust DSGE model has the potential to make its wielders a LOT of money. This is especially true in the current environment, where correlations are high and macro events have become much more important to investors’ performance.

But not necessarily. Being Lucas-robust is necessary for making optimal policy-contingent forecasts, but it is not sufficient. You also need the model to be a good model of the economy. If your parameters are all structural, but you’ve assumed the wrong microfoundations, then your model will make bad predictions even though it’s Lucas-robust.

So now let’s get to the point of this post. As far as I’m aware, private-sector firms don’t hire anyone to make DSGE models, implement DSGE models, or even scan the DSGE literature. There are a lot of firms that make macro bets in the finance industry – investment banks, macro hedge funds, bond funds. To my knowledge, none of these firms spends one thin dime on DSGE. I’ve called and emailed everyone I could think of who knows what financial-industry macroeconomists do, and they’re all unanimous – they’ve never heard of anyone in finance using a DSGE model.

If you know someone who does, please reply in the comments. I’m sure there’s someone out there. But even if there is, they haven’t soared to fame and fortune on the back of their DSGE model.

So maybe they're just using the wrong DSGE models? Maybe they're using Williamson (2012) instead of Williamson (2013). I mean, after all, there is a huge, vast, unending array of DSGE models out there, most of which purport to explain the entire macroeconomy, and most of which are thus mutually exclusive at any point in time. Maybe two or three of them are right at any given point in time, but maybe this set switches around as conditions change. Perhaps finance-industry people are simply unable to pick out the right DSGE model to use on any given day.

But if finance-industry people can’t know which DSGE model to use, how can policymakers or policy advisors?

In other words, DSGE models (not just “Freshwater” models, I mean the whole kit & caboodle) have failed a key test of usefulness. Their main selling point – satisfying the Lucas critique – should make them very valuable to industry. But industry shuns them.

Many economic technologies pass the industry test. Companies pay people lots of money to use auction theory. Companies pay people lots of money to use vector autoregressions. Companies pay people lots of money to use matching models. But companies do not, as far as I can tell, pay people lots of money to use DSGE to predict the effects of government policy. Maybe this will change someday, but it’s been 32 years, and no one’s touching these things.

As I see it, this is currently the most damning critique of the whole DSGE paradigm.”

http://noahpinionblog.blogspot.se/2014/01/the-most-damning-critique-of-dsge.html

Comment by Jan Milch — 16 June, 2017

Some interesting admissions from a DSGE practitioner:

https://www.economicdynamics.org/newsletter-apr-2017/#unique-identifierInterviewApr17

Comment by Henry — 17 June, 2017

And a lot of ‘story telling’ … 🙂

Comment by Lars Syll — 17 June, 2017