Modern macroeconomics – like Hamlet without the Prince

31 May, 2013 at 19:13 | Posted in Economics | Comments Off on Modern macroeconomics – like Hamlet without the Prince

Something is rotten in the state of macroeconomics … 

And its miserable state should come as no surprise for those of you who regularly follow yours truly’s blog.

Forecasting is by its nature a hit-and-miss affair; economics is not—despite the apparent dogmatic certainty of some of its practitioners—an exact science. But the track record of the profession in recent years—and last year in particular—is dire. Few economists spotted the boom and most hopelessly underestimated the bust. And it’s not as if the profession’s troubles in 2012 were limited to longer-range forecasts; it was getting it wrong virtually in real time with most forecasters forced to slash their projections every few months as each quarter turned out worse than expected …

What the dismal science’s dismal record suggests is that there is something profoundly wrong with the mainstream economics profession’s understanding of how modern economies work. The models on which its forecasts are built are clearly badly flawed …

But the most important contribution to the debate is an essay by Claudio Borio, deputy head of the monetary and economics department at the Bank for International Settlements, published last month and titled: “The Financial Cycle and Macroeconomics: What have we learned?”

In Mr. Borio’s view, the “New Keynesian Dynamic Stochastic General Equilibrium” model used by most mainstream forecasters is flawed because it assumes the financial system is frictionless: Its role is simply to allocate resources and therefore can be ignored. Although many economists now accept these assumptions are wrong, efforts to modify their models amount to little more than tinkering. What is needed is a return to out-of-fashion insights influential before World War II and kept alive since by maverick economists such as Hyman Minsky and Charles Kindleberger that recognized the central importance of the financial cycle.

Mainstream economists have been so fixated on understanding ordinary business cycles that they ignored the role that years of rising asset prices and financial sector liberalization can play in fueling credit booms. They lost sight of the fact that the financial system does more than allocate resources: It creates money—and therefore purchasing power—every time it extends a loan.

“Macroeconomics without the financial cycle is like Hamlet without the Prince,” according to Mr. Borio.

Simon Nixon

My new book is out

30 May, 2013 at 07:48 | Posted in Theory of Science & Methodology | Comments Off on My new book is out

Economics is a discipline with the avowed ambition to produce theory for the real world. But it fails in this ambition, Lars Pålsson Syll asserts in Chapter 12, at least as far as the dominant mainstream neoclassical economic theory is concerned. Overly confident in deductivistic Euclidian methodology, neoclassical economic theory lines up series of mathematical models that display elaborate internal consistency but lack clear counterparts in the real world. Such models are at best unhelpful, if not outright harmful, and it is time for economic theory to take a critical realist perspective and explain economic life in depth rather than merely modeling it axiomatically.

The state of economic theory is not as bad as Pålsson Syll describes, Fredrik Hansen retorts in Chapter 13. Looking outside the mainstream neoclassical tradition, one can find numerous economic perspectives that are open to other disciplines and manifest a growing interest in methodological matters. He is confident that theoretical and methodological pluralism will be able to refresh the debate on economic theory, particularly concerning the nature of realism in economic theory, a matter about which Pålsson Syll and Hansen clearly disagree.

What is theory? consists of a multidisciplinary collection of essays that are tied together by a common effort to tell what theory is, and paired as dialogues between senior and junior researchers from the same or allied disciplines to add a trans-generational dimension to the book’s multidisciplinary approach.

The book has mainly been designed for master’s degree students and postgraduates in the social sciences and the humanities.

On the impossibility of predicting the future

28 May, 2013 at 20:02 | Posted in Statistics & Econometrics | 1 Comment

 

Capturing causality in economics (wonkish)

27 May, 2013 at 14:12 | Posted in Economics, Theory of Science & Methodology | Comments Off on Capturing causality in economics (wonkish)

A few years ago Armin Falk and James Heckman published an acclaimed article titled “Lab Experiments Are a Major Source of Knowledge in the Social Sciences” in the journal Science. The authors – both renowned economists – argued that field experiments and laboratory experiments basically face the same problems in terms of generalizability and external validity – and that, a fortiori, it is impossible to say that one is better than the other.

What strikes me when reading both Falk & Heckman and advocates of field experiments – such as John List and Steven Levitt – is that field studies and experiments are both very similar to theoretical models. They share the same basic problem: they are built on rather artificial conditions and face a “trade-off” between internal and external validity. The more artificial the conditions, the greater the internal validity – but also the lower the external validity. The more we rig experiments/field studies/models to avoid “confounding factors”, the less the conditions resemble the real “target system”. To that extent, I also believe that Falk & Heckman are right in their comments on the field-vs-lab discussion in terms of realism – the nodal issue is not realism as such, but how economists using different isolation strategies in different “nomological machines” attempt to learn about causal relationships. In contrast to Falk & Heckman and to advocates of field experiments such as List and Levitt, I doubt the generalizability of both research strategies, because the probability is high that causal mechanisms differ across contexts, and the lack of homogeneity/stability/invariance does not give us warranted export licenses to the “real” societies or economies.

If you mainly conceive of experiments or field studies as heuristic tools, the dividing line between, say, Falk & Heckman and List or Levitt is probably difficult to perceive.

But if we see experiments or field studies as theory tests or models that ultimately aspire to say something about the real “target system”, then the problem of external validity is central (and was for a long time also a key reason why behavioural economists had trouble getting their research results published).

Assume that you have examined how the work performance of Chinese workers A is affected by B (“treatment”). How can we extrapolate/generalize to new samples outside the original population (e.g. to the US)? How do we know that any replication attempt “succeeds”? How do we know when these replicated experimental results can be said to justify inferences made in samples from the original population? If, for example, P(A|B) is the conditional density function for the original sample, and we are interested in making an extrapolative prediction of E[P(A|B)], how can we know that the new sample’s density function is identical with the original? Unless we can give some really good argument for this being the case, inferences built on P(A|B) do not really say anything about the target system’s P′(A|B).

As I see it, this is the heart of the matter. External validity/extrapolation/generalization is founded on the assumption that we can make inferences based on P(A|B) that are exportable to other populations for which P′(A|B) applies. Sure, if one can convincingly show that P and P′ are similar enough, the problems are perhaps surmountable. But arbitrarily introducing functional specification restrictions of the type invariance/stability/homogeneity is, at least for an epistemological realist, far from satisfactory. And often it is – unfortunately – exactly this that I see when I look at neoclassical economists’ models/experiments/field studies.
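To make the point a bit more concrete, here is a minimal simulation sketch – the variables, the moderator and all the numbers are invented purely for illustration – of how an estimate of the effect of B on A in one population can fail as a prediction for another population when an unobserved, context-specific factor is distributed differently:

```python
# A minimal sketch: the same "treatment" B has a different effect in two
# populations because an unobserved moderator M is distributed differently.
# An estimate of E[A | B] from the original sample then fails as a
# prediction for the new sample. All numbers are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def simulate(p_moderator):
    """Outcome A depends on 'treatment' B and a context-specific moderator M."""
    B = rng.integers(0, 2, n)              # binary treatment
    M = rng.random(n) < p_moderator        # unobserved, context-specific moderator
    effect = np.where(M, 2.0, -0.5)        # B helps when M holds, hurts otherwise
    A = 1.0 + effect * B + rng.normal(0.0, 1.0, n)
    return A, B

# "Original" population: the moderator is very common.
A1, B1 = simulate(p_moderator=0.9)
# "Target" population: the moderator is rare.
A2, B2 = simulate(p_moderator=0.2)

effect_original = A1[B1 == 1].mean() - A1[B1 == 0].mean()
effect_target = A2[B2 == 1].mean() - A2[B2 == 0].mean()

print(f"Estimated effect of B in the original sample: {effect_original:+.2f}")
print(f"Actual effect of B in the target population:  {effect_target:+.2f}")
```

The sketch says nothing, of course, about any actual China/US comparison; it only illustrates why the similarity of P(A|B) and P′(A|B) has to be argued for rather than assumed.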

By this I do not mean to say that empirical methods per se are so problematic that they can never be used. On the contrary, I am basically – though not without reservations – in favour of the increased use of experiments and field studies within economics. Not least as an alternative to completely barren “bridge-less” axiomatic-deductive theory models. My criticism is more about aspiration levels and what we believe that we can achieve with our mediational epistemological tools and methods in the social sciences.

Many ‘experimentalists’ claim that it is easy to replicate experiments under different conditions and that it is therefore, a fortiori, easy to test the robustness of experimental results. But is it really that easy? If, in the example given above, we run a test and find that our predictions were not correct – what can we conclude? That B “works” in China but not in the US? Or that B “works” in a backward agrarian society but not in a post-modern service society? That B “worked” in the field study conducted in 2008 but not in 2012? Population selection is almost never simple. Had the problem of external validity only been about inference from sample to population, this would be no critical problem. But the really interesting inferences are those we try to make from specific labs/experiments/fields to the specific real-world situations/institutions/structures that we are interested in understanding or (causally) explaining. And then the population problem is more difficult to tackle.

Evidence-based theories and policies are highly valued nowadays. Randomization is supposed to best control for bias from unknown confounders. The received opinion is that evidence based on randomized experiments therefore is the best.

More and more economists have also lately come to advocate randomization as the principal method for ensuring valid causal inferences.

Renowned econometrician Ed Leamer has responded to these claims, maintaining that randomization is not sufficient, and that the hopes of a better empirical and quantitative macroeconomics are to a large extent illusory. Randomization – just like econometrics – promises more than it can deliver, basically because it requires assumptions that in practice are not possible to maintain:

We economists trudge relentlessly toward Asymptopia, where data are unlimited and estimates are consistent, where the laws of large numbers apply perfectly and where the full intricacies of the economy are completely revealed. But it’s a frustrating journey, since, no matter how far we travel, Asymptopia remains infinitely far away. Worst of all, when we feel pumped up with our progress, a tectonic shift can occur, like the Panic of 2008, making it seem as though our long journey has left us disappointingly close to the State of Complete Ignorance whence we began.

The pointlessness of much of our daily activity makes us receptive when the Priests of our tribe ring the bells and announce a shortened path to Asymptopia … We may listen, but we don’t hear, when the Priests warn that the new direction is only for those with Faith, those with complete belief in the Assumptions of the Path. It often takes years down the Path, but sooner or later, someone articulates the concerns that gnaw away in each of us and asks if the Assumptions are valid … Small seeds of doubt in each of us inevitably turn to despair and we abandon that direction and seek another …

Ignorance is a formidable foe, and to have hope of even modest victories, we economists need to use every resource and every weapon we can muster, including thought experiments (theory), and the analysis of data from nonexperiments, accidental experiments, and designed experiments. We should be celebrating the small genuine victories of the economists who use their tools most effectively, and we should dial back our adoration of those who can carry the biggest and brightest and least-understood weapons. We would benefit from some serious humility, and from burning our “Mission Accomplished” banners. It’s never gonna happen.

Part of the problem is that we data analysts want it all automated. We want an answer at the push of a button on a keyboard … Faced with the choice between thinking long and hard versus pushing the button, the single button is winning by a very large margin.

Let’s not add a “randomization” button to our intellectual keyboards, to be pushed without hard reflection and thought.

Especially when it comes to questions of causality, randomization is nowadays considered some kind of “gold standard”. Everything has to be evidence-based, and the evidence has to come from randomized experiments.

But just like econometrics, randomization is basically a deductive method. Given the assumptions (such as manipulability, transitivity, Reichenbach probability principles, separability, additivity, linearity etc.), these methods deliver deductive inferences. The problem, of course, is that we will never completely know when the assumptions are right. [And although randomization may contribute to controlling for confounding, it does not guarantee it, since genuine randomness presupposes infinite experimentation and we know all real experimentation is finite. And even if randomization may help to establish average causal effects, it says nothing of individual effects unless homogeneity is added to the list of assumptions.] Real target systems are seldom epistemically isomorphic to our axiomatic-deductive models/systems, and even if they were, we would still have to argue for the external validity of the conclusions reached from within these epistemically convenient models/systems. Causal evidence generated by randomization procedures may be valid in “closed” models, but what we usually are interested in is causal evidence in the real target system we happen to live in.
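The bracketed point about average versus individual effects can be illustrated with an equally minimal sketch – again with invented numbers – in which randomization recovers the average treatment effect very well while saying nothing about how heterogeneous the individual effects are:

```python
# A minimal sketch of average vs. individual effects under randomization.
# Each unit has its own treatment effect (no homogeneity assumption), so the
# average effect hides large individual differences. Numbers are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Heterogeneous individual treatment effects.
individual_effects = rng.normal(loc=0.5, scale=2.0, size=n)

baseline = rng.normal(0.0, 1.0, n)
treated = rng.integers(0, 2, n).astype(bool)          # randomized assignment
outcome = baseline + np.where(treated, individual_effects, 0.0)

ate_estimate = outcome[treated].mean() - outcome[~treated].mean()

print(f"True average effect:                   {individual_effects.mean():+.2f}")
print(f"Estimated average effect (randomized): {ate_estimate:+.2f}")
print(f"Share of units actually harmed:        {(individual_effects < 0).mean():.0%}")
```

With a mean effect of +0.5 and a standard deviation of 2, roughly forty per cent of the units are harmed by the treatment, even though the randomized estimate of the average effect is spot on – which is exactly what adding a homogeneity assumption would paper over.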

When does a conclusion established in population X hold for target population Y? Only under  very restrictive conditions!

Ideally controlled experiments (still the benchmark even for natural and quasi experiments) tell us with certainty what causes what effects – but only given the right “closures”. Making appropriate extrapolations from (ideal, accidental, natural or quasi) experiments to different settings, populations or target systems, is not easy. “It works there” is no evidence for “it will work here”. Causes deduced in an experimental setting still have to show that they come with an export-warrant to the target population/system. The causal background assumptions made have to be justified, and without licenses to export, the value of  “rigorous” and “precise” methods is despairingly small.

Here I think Leamer’s “button” metaphor is appropriate. Many advocates of randomization want  to have deductively automated answers to  fundamental causal questions. But to apply “thin” methods we have to have “thick” background knowledge of  what’s going on in the real world, and not in (ideally controlled) experiments. Conclusions  can only be as certain as their premises – and that also goes for methods based on randomized experiments.

Harvard Statistics 110 – some classic probability problems

26 May, 2013 at 11:44 | Posted in Statistics & Econometrics | Comments Off on Harvard Statistics 110 – some classic probability problems

 

New Keynesianism and DSGE – intellectually bankrupt enterprises

26 May, 2013 at 09:16 | Posted in Economics | Comments Off on New Keynesianism and DSGE – intellectually bankrupt enterprises

In modern neoclassical macroeconomics – Dynamic Stochastic General Equilibrium (DSGE), New Synthesis, New Classical and “New Keynesian” – variables are treated as if drawn from a known “data-generating process” that unfolds over time and for which we therefore allegedly have access to heaps of historical time-series data.

Modern macroeconomics obviously did not anticipate the enormity of the problems that unregulated “efficient” financial markets created. Why? Because it builds on the myth of us knowing the “data-generating process” and that we can describe the variables of our evolving economies as drawn from an urn containing stochastic probability functions with known means and variances.

In the end this is what it all boils down to. We all know that many activities, relations, processes and events are genuinely uncertain. The data do not unequivocally single out one decision as the only “rational” one. Neither the economist nor the deciding individual can fully pre-specify how people will decide when facing uncertainties and ambiguities that are ontological facts of the way the world works.

Some macroeconomists, however, still want to be able to use their hammer. So they decide to pretend that the world looks like a nail, pretend that uncertainty can be reduced to risk, and construct their mathematical models on that assumption.
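What the “known urn” assumption buys – and what it costs – can be seen in a minimal sketch, with purely illustrative numbers, in which a forecaster fits a normal distribution to calm historical data, treats the fitted distribution as the true data-generating process, and is then confronted with an outcome produced by a process the model does not know exists:

```python
# A minimal sketch of treating uncertainty as risk: estimate mean and variance
# from a calm sample, call that the "data-generating process", and then meet
# an outcome from a changed process. All numbers are purely illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Calm "Great Moderation"-style history the model is calibrated on.
history = rng.normal(loc=2.5, scale=1.0, size=80)     # e.g. annual growth rates
mu_hat, sigma_hat = history.mean(), history.std(ddof=1)

# A crisis-year outcome generated by a process outside the model's vocabulary.
crisis_outcome = -3.3

prob = stats.norm(mu_hat, sigma_hat).cdf(crisis_outcome)
print(f"Fitted 'data-generating process': mean {mu_hat:.2f}, std {sigma_hat:.2f}")
print(f"Model probability of growth <= {crisis_outcome}: {prob:.1e}")
```

Nothing here pretends to model the actual 2008 episode; the point is only that “known means and variances” describe risk, while a process that can change under the model’s feet is genuine uncertainty.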

Fortunately – when you have grown tired of the kind of gobsmacking macroeconomic apologetics produced by so-called “New Keynesian” macroeconomists – there are still some real Keynesian macroeconomists to read!

One of them, Axel Leijonhufvud, I last met a couple of years ago in Copenhagen, where we were invited keynote speakers at the conference “Keynes 125 Years – What Have We Learned?” Axel’s speech was later published as Keynes and the Crisis and contains some very profound insights and antidotes to DSGE modeling and New “Keynesianism”:

So far I have argued that recent events should force us to re-examine recent monetary policy doctrine. Do we also need to reconsider modern macroeconomic theory in general? I should think so. Consider briefly a few of the issues.

The real interest rate … The problem is that the real interest rate does not exist in reality but is a constructed variable. What does exist is the money rate of interest from which one may construct a distribution of perceived real interest rates given some distribution of inflation expectations over agents. Intertemporal non-monetary general equilibrium (or finance) models deal in variables that have no real world counterparts. Central banks have considerable influence over money rates of interest as demonstrated, for example, by the Bank of Japan and now more recently by the Federal Reserve …

The representative agent. If all agents are supposed to have rational expectations, it becomes convenient to assume also that they all have the same expectation and thence tempting to jump to the conclusion that the collective of agents behaves as one. The usual objection to representative agent models has been that it fails to take into account well-documented systematic differences in behaviour between age groups, income classes, etc. In the financial crisis context, however, the objection is rather that these models are blind to the consequences of too many people doing the same thing at the same time, for example, trying to liquidate very similar positions at the same time. Representative agent models are peculiarly subject to fallacies of composition. The representative lemming is not a rational expectations intertemporal optimising creature. But he is responsible for the fat tail problem that macroeconomists have the most reason to care about …

For many years now, the main alternative to Real Business Cycle Theory has been a somewhat loose cluster of models given the label of New Keynesian theory. New Keynesians adhere on the whole to the same DSGE modeling technology as RBC macroeconomists but differ in the extent to which they emphasise inflexibilities of prices or other contract terms as sources of short-term adjustment problems in the economy. The “New Keynesian” label refers back to the “rigid wages” brand of Keynesian theory of 40 or 50 years ago. Except for this stress on inflexibilities this brand of contemporary macroeconomic theory has basically nothing Keynesian about it.

The obvious objection to this kind of return to an earlier way of thinking about macroeconomic problems is that the major problems that have had to be confronted in the last twenty or so years have originated in the financial markets – and prices in those markets are anything but “inflexible”. But there is also a general theoretical problem that has been festering for decades with very little in the way of attempts to tackle it. Economists talk freely about “inflexible” or “rigid” prices all the time, despite the fact that we do not have a shred of theory that could provide criteria for judging whether a particular price is more or less flexible than appropriate to the proper functioning of the larger system. More than seventy years ago, Keynes already knew that a high degree of downward price flexibility in a recession could entirely wreck the financial system and make the situation infinitely worse. But the point of his argument has never come fully to inform the way economists think about price inflexibilities …

I began by arguing that there are three things we should learn from Keynes … The third was to ask whether events proved that existing theory needed to be revised. On that issue, I conclude that dynamic stochastic general equilibrium theory has shown itself an intellectually bankrupt enterprise. But this does not mean that we should revert to the old Keynesian theory that preceded it (or adopt the New Keynesian theory that has tried to compete with it). What we need to learn from Keynes, instead, are these three lessons about how to view our responsibilities and how to approach our subject.

Are economists rational?

25 May, 2013 at 15:48 | Posted in Economics | 1 Comment

Now consider what happened in November 2007. It was just one month before the Great Recession officially began …

Economists in the Survey of Professional Forecasters, a quarterly poll put out by the Federal Reserve Bank of Philadelphia, nevertheless foresaw a recession as relatively unlikely. Instead, they expected the economy to grow at a just slightly below average rate of 2.4 percent in 2008 … This was a very bad forecast: GDP actually shrank by 3.3 percent once the financial crisis hit. What may be worse is that the economists were extremely confident in their prediction. They assigned only a 3 percent chance to the economy’s shrinking by any margin over the whole of 2008 …

Indeed, economists have for a long time been much too confident in their ability to predict the direction of the economy … Their predictions have not just been overconfident but also quite poor in a real-world sense … Economic forecasters get more feedback than people in most other professions, but they haven’t chosen to correct for their bias toward overconfidence.

Macroeconomic fallacies

24 May, 2013 at 10:10 | Posted in Economics | 2 Comments

Fallacy 8

If deficits continue, the debt service would eventually swamp the fisc.

Real prospect: While viewers with alarm are fond of horror-story projections in which per capita debt would become intolerably burdensome, debt service would absorb the entire income tax revenue, or confidence is lost in the ability or willingness of the government to levy the required taxes so that bonds cannot be marketed on reasonable terms, reasonable scenarios project a negligible or even favorable effect on the fisc … A fifteen trillion debt will be far easier to deal with out of a full employment economy with greatly reduced needs for unemployment benefits and welfare payments than a five trillion debt from an economy in the doldrums with its equipment in disrepair. There is simply no problem …

Fallacy 14

Government debt is thought of as a burden handed on from one generation to its children and grandchildren.

Reality: Quite the contrary, in generational terms, (as distinct from time slices) the debt is the means whereby the present working cohorts are enabled to earn more by fuller employment and invest in the increased supply of assets, of which the debt is a part, so as to provide for their own old age. In this way the children and grandchildren are relieved of the burden of providing for the retirement of the preceding generations, whether on a personal basis or through government programs.

This fallacy is another example of zero-sum thinking that ignores the possibility of increased employment and expanded output …

—————————-

These fallacious notions, which seem to be widely held in various forms by those close to the seats of economic power, are leading to policies that are not only cruel but unnecessary and even self-defeating in terms of their professed objectives …

We will not get out of the economic doldrums as long as we continue to be governed by fallacious notions that are based on false analogies, one-sided analysis, and an implicit underlying counterfactual assumption of an inevitable level of unemployment …

If a budget balancing program should actually be carried through, the above analysis indicates that sooner or later a crash comparable to that of 1929 would almost certainly result … To assure against such a disaster and start on the road to real prosperity it is necessary to relinquish our unreasoned ideological obsession with reducing government deficits, recognize that it is the economy and not the government budget that needs balancing in terms of the demand for and supply of assets, and proceed to recycle attempted savings into the income stream at an adequate rate, so that they will not simply vanish in reduced income, sales, output and employment. There is too a free lunch out there, indeed a very substantial one. But it will require getting free from the dogmas of the apostles of austerity, most of whom would not share in the sacrifices they recommend for others. Failing this we will all be skating on very thin ice.

William Vickrey Fifteen Fatal Fallacies of Financial Fundamentalism

Sweden’s equality fades away

23 May, 2013 at 16:40 | Posted in Economics | 1 Comment

[Chart: Gini coefficient for Sweden, 1980–2011. Source]

Sweden, which has long been the shining example for liberal economists of what we should be aiming for, seems to be losing its luster.

That’s because the growth in Swedish inequality between 1985 and the late 2000s was the largest among all OECD countries, increasing by one third:

Sweden has seen the steepest increase in inequality over 15 years amongst the 34 OECD nations, with disparities rising at four times the pace of the United States, the think tank said.

Once the darling of the political left, heavy state control and wealth distribution through high taxes and generous benefits gave the country’s have-nots an enviable standard of living at the expense of the wealthiest members of society.

Although still one of the most equal countries in the world, the last two decades have seen a marked change. Market reforms have helped the economy become one of Europe’s best performers, but this has left Swedes wondering if their love affair with state welfare is coming to an end.

The real tipping point came in 2006 when the centre-right government swept to power, bringing an end to a Social Democratic era which stretched for most of the 20th century.

Swedes had grown increasingly weary of their high taxes and with more jobs going overseas, the new government laid out a plan to fine-tune the old welfare system. It slashed income taxes, sold state assets and tried to make it pay to work.

Spending on welfare benefits such as pensions, unemployment and incapacity assistance has fallen by almost a third to 13 percent of GDP from the early nineties, putting Sweden only just above the 11 percent OECD average.

At the other end of the spectrum, tax changes and housing market reforms have made the rich richer.

Since the mid-80s, income from savings, private pensions or rentals, jumped 10 percent for the richest fifth of the population while falling one percent for the poorest 20 percent.

David Ruccio

My favourite drug

23 May, 2013 at 10:27 | Posted in Varia | Comments Off on My favourite drug

 

IS-LM basics in less than 16 minutes

22 May, 2013 at 15:53 | Posted in Economics | 5 Comments

 

Lördagmorgon i P2

20 May, 2013 at 14:38 | Posted in Varia | Comments Off on Lördagmorgon i P2

In these times – when the airwaves are drowning in the opinionated verbal diarrhoea of commercial radio and the utterly vacuous drivel of Melodifestivalen – one has almost given up.

But there is light in the darkness! Every Saturday morning the radio channel P2 broadcasts Lördagmorgon i P2 – a programme of refreshment and serious music.

So take the chance to start the day with a musical ear-wash and clear your ear canals of any lingering musical slag. Last Saturday, for example, you could listen to music by Vassilis Tsabropoulos, Max Richter and the Emerson String Quartet. Three hours of such music calms the mind and makes hope return. Thank you, public-service radio.

And thank you, Erik Schüldt. Three hours of wonderful music and a host who actually has something to say, instead of just letting his jaw flap the whole time – what balm for the soul!

Lars E O Svensson on monetary policy and housing bubbles

19 May, 2013 at 17:32 | Posted in Economics | 1 Comment

Tomorrow is Lars E O Svensson’s last working day at the Riksbank. In this interview, Svensson – Sweden’s internationally most respected economist – looks back on his six years as deputy governor of the Riksbank. Interesting!

Further suggestions for Krugman’s IS-LM reading list

17 May, 2013 at 14:56 | Posted in Economics | 4 Comments

The determination of investment is a four-stage process in The General Theory. Money and debts determine an “interest rate”; long-term expectations determine the yield – or expected cash flows – from capital assets and current investment (i.e., the capital stock); the yield and the interest rate enter into the determination of the price of capital assets; and investment is carried to the point where the supply price of investment output equals the capitalized value of the yield. The simple IS-LM framework violates the complexity of the investment-determining process as envisaged by Keynes …

The Hicks-Hansen model, by making explicit the interdependence of the commodity and money markets in Keynes’s thought, is a more accurate representation of his views than the simple consumption-function models. Nevertheless, because it did not explicitly consider the significance of uncertainty in both portfolio decisions and investment behavior, and because it was an equilibrium rather than a process interpretation of the model, it was an unfair and naïve representation of Keynes’s subtle and sophisticated views …

The journey through various standard models that embody elements derived from The General Theory has led us to the position that such Keynesian models are either trivial (the consumption-function models), incomplete (the IS-LM models without a labor market), inconsistent (the IS-LM models with a labor market but no real-balance effect), or indistinguishable in their results from those of older quantity-theory models (the neoclassical synthesis).

As we all know Paul Krugman is very fond of referring to and defending the old and dear IS-LM model.
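For readers who want the gadget itself on the table, here is a minimal sketch of the standard textbook linear IS-LM system. Every parameter value is invented purely for illustration, and – as the quotes in this post make clear – nothing in it captures uncertainty, questions of security, or financial intermediation:

```python
# A minimal sketch of the textbook linear IS-LM "classroom gadget".
# IS:  Y = c0 + c1*(Y - T) + i0 - i1*r + G
# LM:  M/P = k*Y - h*r
# All parameter values below are made up purely for illustration.
import numpy as np

c0, c1 = 200.0, 0.6      # consumption: autonomous part, marginal propensity
i0, i1 = 300.0, 20.0     # investment: autonomous part, interest sensitivity
k, h = 0.5, 40.0         # money demand: income and interest sensitivities
G, T = 350.0, 300.0      # fiscal policy
M, P = 800.0, 2.0        # money supply and price level

# Rearranged as a linear system A @ [Y, r] = b
A = np.array([[1.0 - c1,  i1],
              [k,        -h ]])
b = np.array([c0 - c1 * T + i0 + G,
              M / P])

Y, r = np.linalg.solve(A, b)
print(f"Equilibrium output Y = {Y:.1f}, interest rate r = {r:.2f}")

# Comparative statics: a fiscal expansion shifts the IS curve.
b_expansion = b + np.array([50.0, 0.0])   # dG = 50
Y2, r2 = np.linalg.solve(A, b_expansion)
print(f"After dG = 50: Y = {Y2:.1f}, r = {r2:.2f}  (partial crowding out via higher r)")
```

The comparative-statics exercise at the end is what the “classroom gadget” is for: hold the parameters fixed, shift one exogenous variable, and read off the new equilibrium – precisely the use of equilibrium methods that Hicks himself came to regard as suspect.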

John Hicks, the man who invented it in his 1937 Econometrica review of Keynes’ General Theory – “Mr. Keynes and the ‘Classics’: A Suggested Interpretation” – returned to it in a 1980 article – “IS-LM: An Explanation” – in the Journal of Post Keynesian Economics. Self-critically he wrote:

I accordingly conclude that the only way in which IS-LM analysis usefully survives — as anything more than a classroom gadget, to be superseded, later on, by something better – is in application to a particular kind of causal analysis, where the use of equilibrium methods, even a drastic use of equilibrium methods, is not inappropriate …

When one turns to questions of policy, looking toward the future instead of the past, the use of equilibrium methods is still more suspect. For one cannot prescribe policy without considering at least the possibility that policy may be changed. There can be no change of policy if everything is to go on as expected – if the economy is to remain in what (however approximately) may be regarded as its existing equilibrium. It may be hoped that, after the change in policy, the economy will somehow, at some time in the future, settle into what may be regarded, in the same sense, as a new equilibrium; but there must necessarily be a stage before that equilibrium is reached …

It is well known that in later developments of Keynesian theory, the long-term rate of interest (which does figure, excessively, in Keynes’ own presentation and is presumably represented by the r of the diagram) has been taken down a peg from the position it appeared to occupy in Keynes. We now know that it is not enough to think of the rate of interest as the single link between the financial and industrial sectors of the economy; for that really implies that a borrower can borrow as much as he likes at the rate of interest charged, no attention being paid to the security offered. As soon as one attends to questions of security, and to the financial intermediation that arises out of them, it becomes apparent that the dichotomy between the two curves of the IS-LM diagram must not be pressed too hard.

Back in 1937 John Hicks said that he was building a model of John Maynard Keynes’ General Theory. He wasn’t.

What Hicks acknowledges in 1980 – confirming Minsky’s critique – is basically that his original review totally ignored the very core of Keynes’ theory – uncertainty. Ignoring uncertainty, he had actually contributed to turning the train of macroeconomics on the wrong tracks for decades.

It’s about time that neoclassical economists – such as Krugman, Mankiw, or what have you – set the record straight and stop promoting something that its creator himself admits was a total failure. Why not study the real thing itself – the General Theory – in full, and without looking the other way when it comes to non-ergodicity and uncertainty?

Flawed macroeconomic models

17 May, 2013 at 13:39 | Posted in Economics, Theory of Science & Methodology | 3 Comments

If we had begun our reform efforts with a focus on how to make our economy more efficient and more stable, there are other questions we would have naturally asked; other questions we would have posed. Interestingly, there is some correspondence between these deficiencies in our reform efforts and the deficiencies in the models that we as economists often use in macroeconomics.

•First, the importance of credit
We would, for instance, have asked what the fundamental roles of the financial sector are, and how we can get it to perform those roles better. Clearly, one of the key roles is the allocation of capital and the provision of credit, especially to small and medium-sized enterprises, a function which it did not perform well before the crisis, and which arguably it is still not fulfilling well.

This might seem obvious. But a focus on the provision of credit has neither been at the centre of policy discourse nor of the standard macro-models. We have to shift our focus from money to credit. In any balance sheet, the two sides are usually going to be very highly correlated. But that is not always the case, particularly in the context of large economic perturbations. In these, we ought to be focusing on credit. I find it remarkable the extent to which there has been an inadequate examination in standard macro models of the nature of the credit mechanism. There is, of course, a large microeconomic literature on banking and credit, but for the most part, the insights of this literature have not been taken on board in standard macro-models …

•Second, stability
As I have already noted, in the conventional models (and in the conventional wisdom) market economies were stable. And so it was perhaps not a surprise that fundamental questions about how to design more stable economic systems were seldom asked. We have already touched on several aspects of this: how to design economic systems that are less exposed to risk or that generate less volatility on their own.

One of the necessary reforms, but one not emphasised enough, is the need for more automatic stabilisers and fewer automatic destabilisers – not only in the financial sector, but throughout the economy. For instance, the movement from defined benefit to defined contribution systems may have led to a less stable economy …

•Third, distribution
Distribution matters as well – distribution among individuals, between households and firms, among households, and among firms. Traditionally, macroeconomics focused on certain aggregates, such as the average ratio of leverage to GDP. But that and other average numbers often don’t give a picture of the vulnerability of the economy.

In the case of the financial crisis, such numbers didn’t give us warning signs. Yet it was the fact that a large number of people at the bottom couldn’t make their debt payments that should have tipped us off that something was wrong …

•Fourth, policy frameworks
Flawed models not only lead to flawed policies, but also to flawed policy frameworks.

Should monetary policy focus just on short-term interest rates? In monetary policy, there is a tendency to think that the central bank should only intervene in the setting of the short-term interest rate. They believe ‘one intervention’ is better than many. Since at least eighty years ago, with the work of Frank Ramsey, we know that focusing on a single instrument is not generally the best approach.

The advocates of the ‘single intervention’ approach argue that it is best, because it least distorts the economy. Of course, the reason we have monetary policy in the first place – the reason why government acts to intervene in the economy – is that we don’t believe that markets on their own will set the right short-term interest rate. If we did, we would just let free markets determine that interest rate. The odd thing is that while just about every central banker would agree we should intervene in the determination of that price, not everyone is so convinced that we should strategically intervene in others, even though we know from the general theory of taxation and the general theory of market intervention that intervening in just one price is not optimal.

Once we shift the focus of our analysis to credit, and explicitly introduce risk into the analysis, we become aware that we need to use multiple instruments. Indeed, in general, we want to use all the instruments at our disposal. Monetary economists often draw a division between macro-prudential, micro-prudential, and conventional monetary policy instruments. In our book Towards a New Paradigm in Monetary Economics, Bruce Greenwald and I argue that this distinction is artificial. The government needs to draw upon all of these instruments, in a coordinated way …

Of course, we cannot ‘correct’ every market failure. The very large ones, however – the macroeconomic failures – will always require our intervention. Bruce Greenwald and I have pointed out that markets are never Pareto efficient if information is imperfect, if there are asymmetries of information, or if risk markets are imperfect. And since these conditions are always satisfied, markets are never Pareto efficient. Recent research has highlighted the importance of these and other related constraints for macroeconomics – though again, the insights of this important work have yet to be adequately integrated either into mainstream macroeconomic models or into mainstream policy discussions.

•Fifth, price versus quantitative interventions
These theoretical insights also help us to understand why the old presumption among some economists that price interventions are preferable to quantity interventions is wrong. There are many circumstances in which quantity interventions lead to better economic performance.

A policy framework that has become popular in some circles argues that so long as there are as many instruments as there are objectives, the economic system is controllable, and the best way of managing the economy in such circumstances is to have an institution responsible for one target and one instrument. (In this view, central banks have one instrument – the interest rate – and one objective – inflation. We have already explained why limiting monetary policy to one instrument is wrong.)

Drawing such a division may have advantages from an agency or bureaucratic perspective, but from the point of view of managing macroeconomic policy – focusing on growth, stability and distribution, in a world of uncertainty – it makes no sense. There has to be coordination across all the issues and among all the instruments that are at our disposal. There needs to be close coordination between monetary and fiscal policy. The natural equilibrium that would arise out of having different people controlling different instruments and focusing on different objectives is, in general, not anywhere near what is optimal in achieving overall societal objectives. Better coordination – and the use of more instruments – can, for instance, enhance economic stability.

Joseph Stiglitz
