Modern macroeconomics – like Hamlet without the Prince

31 May, 2013 at 19:13 | Posted in Economics | Leave a comment

Something is rotten in the state of macroeconomics … 

And its miserable state should come as no surprise for those of you who regularly follow yours truly’s blog.

Forecasting is by its nature a hit-and-miss affair; economics is not—despite the apparent dogmatic certainty of some of its practitioners—an exact science. But the track record of the profession in recent years—and last year in particular—is dire. Few economists spotted the boom and most hopelessly underestimated the bust. And it’s not as if the profession’s troubles in 2012 were limited to longer-range forecasts; it was getting it wrong virtually in real time, with most forecasters forced to slash their projections every few months as each quarter turned out worse than expected …

What the dismal science’s dismal record suggests is that there is something profoundly wrong with the mainstream economics profession’s understanding of how modern economies work. The models on which its forecasts are built are clearly badly flawed …

But the most important contribution to the debate is an essay by Claudio Borio, deputy head of the monetary and economics department at the Bank for International Settlements, published last month and titled: “The Financial Cycle and Macroeconomics: What have we learned?”

In Mr. Borio’s view, the “New Keynesian Dynamic Stochastic General Equilibrium” model used by most mainstream forecasters is flawed because it assumes the financial system is frictionless: Its role is simply to allocate resources and therefore can be ignored. Although many economists now accept these assumptions are wrong, efforts to modify their models amount to little more than tinkering. What is needed is a return to out-of-fashion insights influential before World War II and kept alive since by maverick economists such as Hyman Minsky and Charles Kindleberger that recognized the central importance of the financial cycle.

Mainstream economists have been so fixated on understanding ordinary business cycles that they ignored the role that years of rising asset prices and financial sector liberalization can play in fueling credit booms. They lost sight of the fact that the financial system does more than allocate resources: It creates money—and therefore purchasing power—every time it extends a loan.

“Macroeconomics without the financial cycle is like Hamlet without the Prince,” according to Mr. Borio.

Simon Nixon

My new book is out

30 May, 2013 at 07:48 | Posted in Theory of Science & Methodology | Leave a comment

Economics is a discipline with the avowed ambition to produce theory for the real world. But it fails in this ambition, Lars Pålsson Syll asserts in Chapter 12, at least as far as the dominant mainstream neoclassical economic theory is concerned. Overly confident in deductivistic Euclidian methodology, neoclassical economic theory lines up a series of mathematical models that display elaborate internal consistency but lack clear counterparts in the real world. Such models are at best unhelpful, if not outright harmful, and it is time for economic theory to take a critical realist perspective and explain economic life in depth rather than merely modeling it axiomatically.

The state of economic theory is not as bad as Pålsson Syll describes, Fredrik Hansen retorts in Chapter 13. Looking outside the mainstream neoclassical tradition, one can find numerous economic perspectives that are open to other disciplines and manifest growing interest in methodological matters. He is confident that theoretical and methodological pluralism will be able to refresh the debate on economic theory, particularly concerning the nature of realism in economic theory, a matter about which Pålsson Syll and Hansen clearly disagree.

What is theory? is a multidisciplinary collection of essays, tied together by a common effort to tell what theory is and paired as dialogues between senior and junior researchers from the same or allied disciplines, adding a trans-generational dimension to the book’s multidisciplinary approach.

The book has mainly been designed for master’s degree students and postgraduates in the social sciences and the humanities.

On the impossibility of predicting the future

28 May, 2013 at 20:02 | Posted in Statistics & Econometrics | 1 Comment

 

Capturing causality in economics (wonkish)

27 May, 2013 at 14:12 | Posted in Economics, Theory of Science & Methodology | Leave a comment

A few years ago Armin Falk and James Heckman published an acclaimed article titled “Lab Experiments Are a Major Source of Knowledge in the Social Sciences” in the journal Science. The authors – both renowned economists – argued that field experiments and laboratory experiments basically face the same problems of generalizability and external validity – and that, a fortiori, it is impossible to say that one is better than the other.

What strikes me when reading both Falk & Heckman and advocates of field experiments – such as John List and Steven Levitt – is that field studies and experiments are both very similar to theoretical models. They all share the same basic problem: they are built on rather artificial conditions and struggle with the “trade-off” between internal and external validity. The more artificial the conditions, the greater the internal validity, but also the less the external validity. The more we rig experiments/field studies/models to avoid the “confounding factors”, the less the conditions are reminiscent of the real “target system”. To that extent, I also believe that Falk & Heckman are right in their comments on the discussion of field vs. lab experiments in terms of realism – the nodal issue is not about that, but basically about how economists using different isolation strategies in different “nomological machines” attempt to learn about causal relationships. In contrast to Falk & Heckman and to advocates of field experiments such as List and Levitt, I doubt the generalizability of both research strategies, because the probability is high that causal mechanisms are different in different contexts, and lack of homogeneity/stability/invariance does not give us warranted export licenses to the “real” societies or economies.

If you mainly conceive of experiments or field studies as heuristic tools, the dividing line between, say, Falk & Heckman and List or Levitt is probably difficult to perceive.

But if we see experiments or field studies as theory tests or models that ultimately aspire to say something about the real “target system”, then the problem of external validity is central (and was for a long time also a key reason why behavioural economists had trouble getting their research results published).

Assume that you have examined how the work performance of Chinese workers (A) is affected by a certain treatment (B). How can we extrapolate/generalize to new samples outside the original population (e.g. to the US)? How do we know that any replication attempt “succeeds”? How do we know when these replicated experimental results can be said to justify inferences made in samples from the original population? If, for example, P(A|B) is the conditional density function for the original sample, and we are interested in making an extrapolative prediction of E[P(A|B)], how can we know that the new sample’s density function is identical with the original one? Unless we can give some really good argument for this being the case, inferences built on P(A|B) do not really tell us anything about the target system’s P’(A|B).

As I see it, this is the heart of the matter. External validity/extrapolation/generalization is founded on the assumption that we can make inferences based on P(A|B) that are exportable to other populations for which P’(A|B) applies. Sure, if one can convincingly show that P and P’ are similar enough, the problems are perhaps surmountable. But arbitrarily introducing functional specification restrictions of the type invariance/stability/homogeneity is, at least for an epistemological realist, far from satisfactory. And unfortunately, this is often exactly what I see when I look at neoclassical economists’ models/experiments/field studies.
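To make the export-licence problem concrete, here is a minimal simulation sketch (the whole set-up and all numbers are hypothetical illustrations of my own, not anything taken from Falk & Heckman or List and Levitt): a treatment B that raises outcome A in the original population is estimated correctly there, but because the target population’s conditional distribution P’(A|B) differs, the exported estimate misses badly.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Hypothetical "original" population: treatment B raises the outcome A by 2 units.
b_orig = rng.integers(0, 2, size=n)                     # randomized treatment indicator
a_orig = 5.0 + 2.0 * b_orig + rng.normal(0, 1, size=n)  # outcome under P(A|B)

# Hypothetical "target" population: same treatment, different causal structure,
# so P'(A|B) is not the same as P(A|B).
b_new = rng.integers(0, 2, size=n)
a_new = 8.0 + 0.3 * b_new + rng.normal(0, 1, size=n)    # outcome under P'(A|B)

def estimated_effect(a, b):
    """Difference in mean outcome between treated and untreated units."""
    return a[b == 1].mean() - a[b == 0].mean()

print(f"effect estimated in the original sample: {estimated_effect(a_orig, b_orig):.2f}")
print(f"effect in the target population:         {estimated_effect(a_new, b_new):.2f}")
# Without an argument that the two conditional distributions are similar enough,
# the first number carries no warranted export licence to the second population.
```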

By this I do not mean to say that empirical methods per se are so problematic that they can never be used. On the contrary, I am basically – though not without reservations – in favour of the increased use of experiments and field studies within economics. Not least as an alternative to completely barren “bridge-less” axiomatic-deductive theory models. My criticism is more about aspiration levels and what we believe that we can achieve with our mediational epistemological tools and methods in the social sciences.

Many ‘experimentalists’ claim that it is easy to replicate experiments under different conditions and that it is therefore, a fortiori, easy to test the robustness of experimental results. But is it really that easy? If, in the example given above, we run a test and find that our predictions were not correct – what can we conclude? That B “works” in China but not in the US? Or that B “works” in a backward agrarian society, but not in a post-modern service society? That B “worked” in the field study conducted in 2008 but not in 2012? Population selection is almost never simple. Had the problem of external validity only been about inference from sample to population, this would be no critical problem. But the really interesting inferences are those we try to make from specific labs/experiments/fields to the specific real-world situations/institutions/structures that we are interested in understanding or (causally) explaining. And then the population problem is more difficult to tackle.

Evidence-based theories and policies are highly valued nowadays. Randomization is supposed to best control for bias from unknown confounders. The received opinion is that evidence based on randomized experiments therefore is the best.

More and more economists have also lately come to advocate randomization as the principal method for ensuring valid causal inferences.

Renowned econometrician Ed Leamer has responded to these claims, maintaining that randomization is not sufficient, and that the hopes of a better empirical and quantitative macroeconomics are to a large extent illusory. Randomization – just like econometrics – promises more than it can deliver, basically because it requires assumptions that in practice are not possible to maintain:

We economists trudge relentlessly toward Asymptopia, where data are unlimited and estimates are consistent, where the laws of large numbers apply perfectly and where the full intricacies of the economy are completely revealed. But it’s a frustrating journey, since, no matter how far we travel, Asymptopia remains infinitely far away. Worst of all, when we feel pumped up with our progress, a tectonic shift can occur, like the Panic of 2008, making it seem as though our long journey has left us disappointingly close to the State of Complete Ignorance whence we began.

The pointlessness of much of our daily activity makes us receptive when the Priests of our tribe ring the bells and announce a shortened path to Asymptopia … We may listen, but we don’t hear, when the Priests warn that the new direction is only for those with Faith, those with complete belief in the Assumptions of the Path. It often takes years down the Path, but sooner or later, someone articulates the concerns that gnaw away in each of us and asks if the Assumptions are valid … Small seeds of doubt in each of us inevitably turn to despair and we abandon that direction and seek another …

Ignorance is a formidable foe, and to have hope of even modest victories, we economists need to use every resource and every weapon we can muster, including thought experiments (theory), and the analysis of data from nonexperiments, accidental experiments, and designed experiments. We should be celebrating the small genuine victories of the economists who use their tools most effectively, and we should dial back our adoration of those who can carry the biggest and brightest and least-understood weapons. We would benefit from some serious humility, and from burning our “Mission Accomplished” banners. It’s never gonna happen.

Part of the problem is that we data analysts want it all automated. We want an answer at the push of a button on a keyboard … Faced with the choice between thinking long and hard versus pushing the button, the single button is winning by a very large margin.

Let’s not add a “randomization” button to our intellectual keyboards, to be pushed without hard reflection and thought.

Especially when it comes to questions of causality, randomization is nowadays considered some kind of “gold standard”. Everything has to be evidence-based, and the evidence has to come from randomized experiments.

But just like econometrics, randomization is basically a deductive method. Given the assumptions (such as manipulability, transitivity, Reichenbach probability principles, separability, additivity, linearity, etc.), these methods deliver deductive inferences. The problem, of course, is that we will never completely know when the assumptions are right. [And although randomization may contribute to controlling for confounding, it does not guarantee it, since genuine randomness presupposes infinite experimentation and we know all real experimentation is finite. And even if randomization may help to establish average causal effects, it says nothing of individual effects unless homogeneity is added to the list of assumptions.] Real target systems are seldom epistemically isomorphic to our axiomatic-deductive models/systems, and even if they were, we would still have to argue for the external validity of the conclusions reached from within these epistemically convenient models/systems. Causal evidence generated by randomization procedures may be valid in “closed” models, but what we usually are interested in is causal evidence in the real target system we happen to live in.
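A small sketch of the point in brackets above, with purely hypothetical numbers: under heterogeneous individual effects, a randomized comparison recovers the average causal effect quite well, yet that average describes almost no individual in the population unless homogeneity is assumed.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical heterogeneous individual effects: half the population gains 4,
# the other half loses 2, so the true average effect is +1.
tau = np.where(rng.random(n) < 0.5, 4.0, -2.0)

treated = rng.random(n) < 0.5               # randomized assignment
y0 = rng.normal(10.0, 1.0, size=n)          # potential outcome without treatment
y = y0 + tau * treated                      # observed outcome

ate_hat = y[treated].mean() - y[~treated].mean()
share_near_average = np.mean(np.abs(tau - 1.0) < 0.5)

print(f"estimated average effect:                   {ate_hat:.2f}")            # close to +1
print(f"share of individuals with an effect near 1: {share_near_average:.2f}")  # essentially zero
# Randomization identifies the average effect, but without a homogeneity
# assumption it says nothing about any particular individual's effect.
```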

When does a conclusion established in population X hold for target population Y? Only under very restrictive conditions!

Ideally controlled experiments (still the benchmark even for natural and quasi experiments) tell us with certainty what causes what effects – but only given the right “closures”. Making appropriate extrapolations from (ideal, accidental, natural or quasi) experiments to different settings, populations or target systems is not easy. “It works there” is no evidence for “it will work here”. Causes deduced in an experimental setting still have to show that they come with an export-warrant to the target population/system. The causal background assumptions made have to be justified, and without licenses to export, the value of “rigorous” and “precise” methods is despairingly small.

Here I think Leamer’s “button” metaphor is appropriate. Many advocates of randomization want to have deductively automated answers to fundamental causal questions. But to apply “thin” methods we have to have “thick” background knowledge of what’s going on in the real world, and not in (ideally controlled) experiments. Conclusions can only be as certain as their premises – and that also goes for methods based on randomized experiments.

Harvard Statistics 110 – some classic probability problems

26 May, 2013 at 11:44 | Posted in Statistics & Econometrics | Leave a comment

 

New Keynesianism and DSGE – intellectually bankrupt enterprises

26 May, 2013 at 09:16 | Posted in Economics | Leave a comment

In modern neoclassical macroeconomics – Dynamic Stochastic General Equilibrium (DSGE), New Synthesis, New Classical and “New Keynesian” – variables are treated as if drawn from a known “data-generating process” that unfolds over time and on which we therefore allegedly have access to heaps of historical time-series.

Modern macroeconomics obviously did not anticipate the enormity of the problems that unregulated “efficient” financial markets created. Why? Because it builds on the myth that we know the “data-generating process” and that we can describe the variables of our evolving economies as drawn from an urn containing stochastic probability functions with known means and variances.

In the end this is what it all boils down to. We all know that many activities, relations, processes and events are genuinely uncertain. The data do not unequivocally single out one decision as the only “rational” one. Neither the economist nor the deciding individual can fully pre-specify how people will decide when facing uncertainties and ambiguities that are ontological facts of the way the world works.

Some macroeconomists, however, still want to be able to use their hammer. So they decide to pretend that the world looks like a nail and that uncertainty can be reduced to risk, and they construct their mathematical models on that assumption.
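A minimal sketch of what goes wrong, with entirely made-up numbers: fit a “known” distribution to a tranquil sample, compute the model-implied probability of a deep contraction as if the data-generating process were fixed, and then let the process itself shift.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)

# Hypothetical "tranquil" sample that the modeller treats as draws from a
# known urn: quarterly growth rates, in percent.
pre_break = rng.normal(loc=0.5, scale=1.0, size=200)
mu, sigma = pre_break.mean(), pre_break.std()

# Model-implied probability of growth below -3 percent, computed as if the
# estimated normal distribution were the true, stable data-generating process.
p_crash = 0.5 * (1.0 + erf((-3.0 - mu) / (sigma * sqrt(2.0))))
print(f"model-implied probability of a -3% quarter: {p_crash:.5f}")  # essentially zero

# A regime shift the model cannot see coming: the "urn" itself changes.
post_break = rng.normal(loc=-2.0, scale=2.0, size=8)
print(f"worst post-break quarter: {post_break.min():.2f}%")
# The arithmetic is fine; the trouble is treating an open, evolving system as
# draws from a fixed distribution, which converts genuine uncertainty into
# spuriously precise "risk".
```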

Fortunately – when you have grown tired of the kind of gobsmacking macroeconomics apologetics produced by so-called “New Keynesian” macroeconomists – there still are some real Keynesian macroeconomists to read!

One of them, Axel Leijonhufvud, I last met a couple of years ago in Copenhagen, where we were invited keynote speakers at the conference “Keynes 125 Years – What Have We Learned?” Axel’s speech was later published as Keynes and the crisis and contains some very profound insights and antidotes to DSGE modeling and New “Keynesianism”:

So far I have argued that recent events should force us to re-examine recent monetary policy doctrine. Do we also need to reconsider modern macroeconomic theory in general? I should think so. Consider briefly a few of the issues.

The real interest rate … The problem is that the real interest rate does not exist in reality but is a constructed variable. What does exist is the money rate of interest from which one may construct a distribution of perceived real interest rates given some distribution of inflation expectations over agents. Intertemporal non-monetary general equilibrium (or finance) models deal in variables that have no real world counterparts. Central banks have considerable influence over money rates of interest as demonstrated, for example, by the Bank of Japan and now more recently by the Federal Reserve …

The representative agent. If all agents are supposed to have rational expectations, it becomes convenient to assume also that they all have the same expectation and thence tempting to jump to the conclusion that the collective of agents behaves as one. The usual objection to representative agent models has been that it fails to take into account well-documented systematic differences in behaviour between age groups, income classes, etc. In the financial crisis context, however, the objection is rather that these models are blind to the consequences of too many people doing the same thing at the same time, for example, trying to liquidate very similar positions at the same time. Representative agent models are peculiarly subject to fallacies of composition. The representative lemming is not a rational expectations intertemporal optimising creature. But he is responsible for the fat tail problem that macroeconomists have the most reason to care about …

For many years now, the main alternative to Real Business Cycle Theory has been a somewhat loose cluster of models given the label of New Keynesian theory. New Keynesians adhere on the whole to the same DSGE modeling technology as RBC macroeconomists but differ in the extent to which they emphasise inflexibilities of prices or other contract terms as sources of short-term adjustment problems in the economy. The “New Keynesian” label refers back to the “rigid wages” brand of Keynesian theory of 40 or 50 years ago. Except for this stress on inflexibilities this brand of contemporary macroeconomic theory has basically nothing Keynesian about it.

The obvious objection to this kind of return to an earlier way of thinking about macroeconomic problems is that the major problems that have had to be confronted in the last twenty or so years have originated in the financial markets – and prices in those markets are anything but “inflexible”. But there is also a general theoretical problem that has been festering for decades with very little in the way of attempts to tackle it. Economists talk freely about “inflexible” or “rigid” prices all the time, despite the fact that we do not have a shred of theory that could provide criteria for judging whether a particular price is more or less flexible than appropriate to the proper functioning of the larger system. More than seventy years ago, Keynes already knew that a high degree of downward price flexibility in a recession could entirely wreck the financial system and make the situation infinitely worse. But the point of his argument has never come fully to inform the way economists think about price inflexibilities …

I began by arguing that there are three things we should learn from Keynes … The third was to ask whether events proved that existing theory needed to be revised. On that issue, I conclude that dynamic stochastic general equilibrium theory has shown itself an intellectually bankrupt enterprise. But this does not mean that we should revert to the old Keynesian theory that preceded it (or adopt the New Keynesian theory that has tried to compete with it). What we need to learn from Keynes, instead, are these three lessons about how to view our responsibilities and how to approach our subject.
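As a purely illustrative aside on Leijonhufvud’s first point above, here is a minimal sketch (all numbers hypothetical) of what “a distribution of perceived real interest rates” means: one observable money rate, a dispersed distribution of inflation expectations over agents, and hence a whole distribution of constructed real rates via the approximate Fisher relation r ≈ i − E[π].

```python
import numpy as np

rng = np.random.default_rng(7)

# Observable: the money (nominal) rate of interest. Hypothetical value.
nominal_rate = 0.03

# Unobservable: inflation expectations, which differ across agents.
# A purely illustrative distribution of expected inflation.
expected_inflation = rng.normal(loc=0.02, scale=0.015, size=10_000)

# Each agent's perceived real rate via the approximate Fisher relation
# r = i - E[pi]: not one "real interest rate", but a constructed distribution.
perceived_real = nominal_rate - expected_inflation

print(f"mean perceived real rate:               {perceived_real.mean():.2%}")
print(f"dispersion across agents (sd):          {perceived_real.std():.2%}")
print(f"agents perceiving a negative real rate: {(perceived_real < 0).mean():.1%}")
```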

Are economists rational?

25 May, 2013 at 15:48 | Posted in Economics | 1 Comment

Now consider what happened in November 2007. It was just one month before the Great Recession officially began …

Economists in the Survey of Professional Forecasters, a quarterly poll put out by the Federal Reserve Bank of Philadelphia, nevertheless foresaw a recession as relatively unlikely. Instead, they expected the economy to grow at a just slightly below average rate of 2.4 percent in 2008 … This was a very bad forecast: GDP actually shrank by 3.3 percent once the financial crisis hit. What may be worse is that the economists were extremely confident in their prediction. They assigned only a 3 percent chance to the economy’s shrinking by any margin over the whole of 2008 …

Indeed, economists have for a long time been much too confident in their ability to predict the direction of the economy … Their predictions have not just been overconfident but also quite poor in a real-world sense … Economic forecasters get more feedback than people in most other professions, but they haven’t chosen to correct for their bias toward overconfidence.
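One way to see how costly that kind of overconfidence is, is simply to score the probability forecasts. A minimal sketch using the Brier score (the 3 percent figure is from the quoted passage; the comparison forecasts are hypothetical):

```python
def brier(prob_event: float, event_happened: bool) -> float:
    """Brier score of a probability forecast: squared error, lower is better."""
    outcome = 1.0 if event_happened else 0.0
    return (prob_event - outcome) ** 2

# The economy did in fact shrink over 2008.
forecasts = [
    ("SPF consensus (3% chance of contraction)", 0.03),
    ("agnostic forecaster (50%)", 0.50),
    ("pessimistic forecaster (80%)", 0.80),
]
for label, p in forecasts:
    print(f"{label:<42} Brier score = {brier(p, True):.3f}")
# The overconfident 3% forecast is penalized far more heavily than simply
# admitting uncertainty, which is Silver's point about calibration.
```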

Eleni Karaindrou

24 May, 2013 at 13:29 | Posted in Varia | Leave a comment

 

Breathtaking masterpiece

24 May, 2013 at 11:07 | Posted in Varia | Leave a comment

Eleni Karaindrou’s breathtakingly beautiful “Prayer” from Theo Angelopoulos’s masterpiece The Weeping Meadow. It breaks my heart every time.
 

Macroeconomic fallacies

24 May, 2013 at 10:10 | Posted in Economics | 2 Comments

Fallacy 8

If deficits continue, the debt service would eventually swamp the fisc.

Real prospect: While viewers with alarm are fond of horror-story projections in which per capita debt would become intolerably burdensome, debt service would absorb the entire income tax revenue, or confidence is lost in the ability or willingness of the government to levy the required taxes so that bonds cannot be marketed on reasonable terms, reasonable scenarios project a negligible or even favorable effect on the fisc … A fifteen trillion debt will be far easier to deal with out of a full employment economy with greatly reduced needs for unemployment benefits and welfare payments than a five trillion debt from an economy in the doldrums with its equipment in disrepair. There is simply no problem …

Fallacy 14

Government debt is thought of as a burden handed on from one generation to its children and grandchildren.

Reality: Quite the contrary, in generational terms, (as distinct from time slices) the debt is the means whereby the present working cohorts are enabled to earn more by fuller employment and invest in the increased supply of assets, of which the debt is a part, so as to provide for their own old age. In this way the children and grandchildren are relieved of the burden of providing for the retirement of the preceding generations, whether on a personal basis or through government programs.

This fallacy is another example of zero-sum thinking that ignores the possibility of increased employment and expanded output …

—————————-

These fallacious notions, which seem to be widely held in various forms by those close to the seats of economic power, are leading to policies that are not only cruel but unnecessary and even self-defeating in terms of their professed objectives …

We will not get out of the economic doldrums as long as we continue to be governed by fallacious notions that are based on false analogies, one-sided analysis, and an implicit underlying counterfactual assumption of an inevitable level of unemployment …

If a budget balancing program should actually be carried through, the above analysis indicates that sooner or later a crash comparable to that of 1929 would almost certainly result … To assure against such a disaster and start on the road to real prosperity it is necessary to relinquish our unreasoned ideological obsession with reducing government deficits, recognize that it is the economy and not the government budget that needs balancing in terms of the demand for and supply of assets, and proceed to recycle attempted savings into the income stream at an adequate rate, so that they will not simply vanish in reduced income, sales, output and employment. There is too a free lunch out there, indeed a very substantial one. But it will require getting free from the dogmas of the apostles of austerity, most of whom would not share in the sacrifices they recommend for others. Failing this we will all be skating on very thin ice.

William Vickrey, Fifteen Fatal Fallacies of Financial Fundamentalism
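A back-of-the-envelope sketch of Vickrey’s “fifteen trillion versus five trillion” comparison, with entirely hypothetical interest rates and income levels, just to show one piece of the arithmetic: what matters for the burden is debt service relative to national income, not the headline size of the debt.

```python
def debt_service_share(debt: float, interest_rate: float, national_income: float) -> float:
    """Interest payments as a share of national income."""
    return debt * interest_rate / national_income

# Hypothetical scenarios (all figures in trillions and purely illustrative).
full_employment = debt_service_share(debt=15.0, interest_rate=0.03, national_income=22.0)
doldrums = debt_service_share(debt=5.0, interest_rate=0.03, national_income=6.0)

print(f"15T debt, full-employment economy: {full_employment:.1%} of income")
print(f" 5T debt, depressed economy:       {doldrums:.1%} of income")
# With these illustrative numbers the larger debt carried by the larger,
# fully employed economy is the lighter burden, which is the thrust of
# Vickrey's argument.
```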

