Current macro debate

23 Sep, 2016 at 08:39 | Posted in Economics | 2 Comments

– The models are rubbish.
– Don’t be silly. There’s a paper from the 1980s on learning effects.

Jo Michell

‘Modern’ macroeconomics — a costly waste of time

22 Sep, 2016 at 16:58 | Posted in Economics | 2 Comments

Commenting on the state of standard modern macroeconomics, Willem Buiter argues that neither New Classical nor New Keynesian microfounded DSGE macro models have helped us foresee, understand or craft solutions to the problems of today’s economies:

The Monetary Policy Committee of the Bank of England I was privileged to be a ‘founder’ external member of during the years 1997-2000 contained, like its successor vintages of external and executive members, quite a strong representation of academic economists and other professional economists with serious technical training and backgrounds. This turned out to be a severe handicap when the central bank had to switch gears and change from being an inflation-targeting central bank under conditions of orderly financial markets to a financial stability-oriented central bank under conditions of widespread market illiquidity and funding illiquidity. Indeed, the typical graduate macroeconomics and monetary economics training received at Anglo-American universities during the past 30 years or so, may have set back by decades serious investigations of aggregate economic behaviour and economic policy-relevant understanding. It was a privately and socially costly waste of time and other resources.

Most mainstream macroeconomic theoretical innovations since the 1970s … have turned out to be self-referential, inward-looking distractions at best. Research tended to be motivated by the internal logic, intellectual sunk capital and aesthetic puzzles of established research programmes rather than by a powerful desire to understand how the economy works …

Both the New Classical and New Keynesian complete markets macroeconomic theories not only did not allow questions about insolvency and illiquidity to be answered. They did not allow such questions to be asked …

Charles Goodhart, who was fortunate enough not to encounter complete markets macroeconomics and monetary economics during his impressionable, formative years, but only after he had acquired some intellectual immunity, once said of the Dynamic Stochastic General Equilibrium approach which for a while was the staple of central banks’ internal modelling: “It excludes everything I am interested in”. He was right. It excludes everything relevant to the pursuit of financial stability.

The Bank of England in 2007 faced the onset of the credit crunch with too much Robert Lucas, Michael Woodford and Robert Merton in its intellectual cupboard. A drastic but chaotic re-education took place and is continuing.

I believe that the Bank has by now shed the conventional wisdom of the typical macroeconomics training of the past few decades. In its place is an intellectual potpourri of factoids, partial theories, empirical regularities without firm theoretical foundations, hunches, intuitions and half-developed insights. It is not much, but knowing that you know nothing is the beginning of wisdom.

Reading Buiter’s article is certainly a very worrying confirmation of what Paul Romer wrote last week. Modern macroeconomics is becoming more and more a total waste of time.

But why are all these macro guys wasting their time and efforts on these models? Besides the usual aspiration of getting published, I think Frank Hahn gave the truest answer back in 2005 when, interviewed on the occasion of his 80th birthday, he confessed that some economic assumptions didn’t really say anything about “what happens in the world,” but still had to be considered very good “because it allows us to get on this job.”

Hahn’s suggestion reminds me of an episode twenty years ago, when Phil Mirowski was invited to give a speech on themes from his book More Heat than Light at my economics department in Lund, Sweden. All the mainstream neoclassical professors were there. Their theories were totally mangled and no one — absolutely no one — had anything to say even remotely reminiscent of a defense. Nonplussed, one of them, in total desperation, finally asked: “But what shall we do then?”

Yes indeed — what shall they do when their emperor has turned out to be naked?

Stiglitz and the demise of marginal productivity theory

22 Sep, 2016 at 15:36 | Posted in Economics | 7 Comments

Today the trend to greater equality of incomes which characterised the postwar period has been reversed. Inequality is now rising rapidly. Contrary to the rising-tide hypothesis, the rising tide has only lifted the large yachts, and many of the smaller boats have been left dashed on the rocks. This is partly because the extraordinary growth in top incomes has coincided with an economic slowdown.

The trickle-down notion — along with its theoretical justification, marginal productivity theory — needs urgent rethinking. That theory attempts both to explain inequality — why it occurs — and to justify it — why it would be beneficial for the economy as a whole. This essay looks critically at both claims. It argues in favour of alternative explanations of inequality, with particular reference to the theory of rent-seeking and to the influence of institutional and political factors, which have shaped labour markets and patterns of remuneration. And it shows that, far from being either necessary or good for economic growth, excessive inequality tends to lead to weaker economic performance. In light of this, it argues for a range of policies that would increase both equity and economic well-being.

Joseph Stiglitz

Mainstream economics textbooks usually refer to the interrelationship between technological development and education as the main causal force behind increased inequality. If the educational system (supply) develops at the same pace as technology (demand), there should be no increase, ceteris paribus, in the ratio between high-income (highly educated) groups and low-income (less educated) groups. In the race between technology and education, the proliferation of skill-biased technological change has, however, allegedly increased the premium for the highly educated group.

Another prominent explanation is that globalization – in accordance with Ricardo’s theory of comparative advantage and the Wicksell-Heckscher-Ohlin-Stolper-Samuelson factor price theory – has benefited capital in the advanced countries and labour in the developing countries. The problem with these theories is that they explicitly assume full employment and international immobility of the factors of production. Globalization means more than anything else that capital and labour have to a large extent become mobile over country borders. These mainstream trade theories are really not applicable in the world of today, and they are certainly not able to explain the international trade pattern that has developed during the last few decades. Although it seems as though capital in the developed countries has benefited from globalization, it is difficult to detect a similar positive effect on workers in the developing countries.

There are, however, also some other quite obvious problems with these kinds of inequality explanations. The World Top Incomes Database shows that the increase in incomes has been concentrated especially in the top 1%. If education were the main reason behind the increasing income gap, one would expect a much broader group of people in the upper echelons of the distribution to have taken part in this increase. It is dubious, to say the least, to try to explain, for example, the high wages in the finance sector with a marginal productivity argument. High-end wages seem to be more a result of pure luck or membership of the same ‘club’ as those who decide on the wages and bonuses, than of ‘marginal productivity.’

Mainstream economics, with its technologically determined marginal productivity theory, seems to be difficult to reconcile with reality. Although card-carrying neoclassical apologists like Greg Mankiw want to recall John Bates Clark’s (1899) argument that marginal productivity results in an ethically just distribution, that is not something – even if it were true – we could confirm empirically, since it is impossible realiter to separate out what is the marginal contribution of any factor of production. The hypothetical ceteris paribus addition of only one factor in a production process is often heard of in textbooks, but never seen in reality.

When reading mainstream economists like Mankiw who argue for the ‘just desert’ of the 0.1 %, one gets a strong feeling that they are ultimately trying to argue that a market economy is some kind of moral free zone where, if left undisturbed, people get what they ‘deserve.’ To most social scientists that probably smacks more of an evasive action, trying to explain away a very disturbing structural ‘regime shift’ that has taken place in our societies. A shift that has very little to do with ‘stochastic returns to education.’ Those were in place also 30 or 40 years ago. At that time they meant that perhaps a top corporate manager earned 10–20 times more than ‘ordinary’ people earned. Today it means that they earn 100–200 times more than ‘ordinary’ people earn. A question of education? Hardly. It is probably more a question of greed and a lost sense of a common project of building a sustainable society.

Since the race between technology and education does not seem to explain the new growing income gap – and even if technological change has become more and more capital augmenting, it is also quite clear that not only the wages of low-skilled workers have fallen, but also the overall wage share – mainstream economists increasingly refer to ‘meritocratic extremism,’ ‘winner-take-all markets’ and ‘superstar theories’ for explanation. But this is also highly questionable.

Fans may want to pay extra to watch top-ranked athletes or movie stars performing on television and film, but corporate managers are hardly the stuff that people’s dreams are made of – and they seldom appear on television and in the movie theaters.

Everyone may prefer to employ the best corporate manager there is, but a corporate manager, unlike a movie star, can only provide his services to a limited number of customers. From the perspective of ‘superstar theories,’ a good corporate manager should earn only marginally more than an average corporate manager. The average earnings of the corporate managers of the 50 biggest Swedish companies today are equivalent to the wages of 46 blue-collar workers.

It is difficult to see the takeoff of the top executives as anything other than a reward for being a member of the same illustrious club. That their earnings should be equivalent to indispensable and fair productive contributions – marginal products – is straining credulity too far. That so many corporate managers and top executives make fantastic earnings today is strong evidence that the theory is patently wrong and basically functions as a legitimizing device for indefensible and growing inequalities.

No one ought to doubt that the idea that capitalism is an expression of impartial market forces of supply and demand bears but little resemblance to actual reality. Wealth and income distribution, both individual and functional, in a market society is to an overwhelmingly high degree influenced by institutionalized political and economic norms and power relations – things that have relatively little to do with marginal productivity in complete and profit-maximizing competitive market models. This is not to mention how extremely difficult, if not outright impossible, it is to empirically disentangle and measure different individuals’ contributions in the typical teamwork production that characterizes modern economies – or, especially when it comes to ‘capital,’ what it is even supposed to mean and how to measure it. Remunerations do not necessarily correspond to any marginal product of different factors of production – or to ‘compensating differentials’ due to non-monetary characteristics of different jobs, natural ability, effort or chance.

Put simply – highly paid workers and corporate managers are not always highly productive workers and corporate managers, and less highly paid workers and corporate managers are not always less productive. History has over and over again disconfirmed the close connection between productivity and remuneration postulated in mainstream income distribution theory.

Neoclassical marginal productivity theory is a collapsed theory from both a historical and a theoretical point of view, as shown already by Sraffa in the 1920s, and in the Cambridge capital controversy in the 1960s and 1970s. As Joan Robinson wrote in 1953:

The production function has been a powerful instrument of miseducation. The student of economic theory is taught to write Q = f (L, K) where L is a quantity of labor, K a quantity of capital and Q a rate of output of commodities. He is instructed to assume all workers alike, and to measure L in man-hours of labor; he is told something about the index-number problem in choosing a unit of output; and then he is hurried on to the next question, in the hope that he will forget to ask in what units K is measured. Before he ever does ask, he has become a professor, and so sloppy habits of thought are handed on from one generation to the next.

It’s great that Stiglitz has joined those of us who for decades have criticised marginal productivity theory. Institutional, political and social factors have an overwhelming influence on wages and the relative shares of labour and capital.

When a theory is impossible to reconcile with facts there is only one thing to do — scrap it!

Romer follows up his critique

21 Sep, 2016 at 22:14 | Posted in Economics | 2 Comments

The one reaction that puzzles me goes something like this: “Romer’s critique of RBC models is dated; we’ve known all along that those models make no sense.”

If we know that the RBC model makes no sense, why was it left as the core of the DSGE model? Those phlogiston shocks are still there. Now they are mixed together with a bunch of other made-up shocks.

Moreover, I see no reason to be confident about what we will learn if some econometrician adds sticky prices and then runs a horse race to see if the shocks are more or less important than the sticky prices. The essence of the identification problem is that the data do not tell you who wins this kind of race. The econometrician picks the winner.

Paul Romer

Those of us in the economics community who have been impolite enough to dare question the preferred methods and models applied in macroeconomics are as a rule met with disapproval. Although people seem to get very agitated and upset by the critique, defenders of ‘received theory’ always say that the critique is ‘nothing new,’ that they have always been ‘well aware’ of the problems, and so on, and so on.

So, for the benefit of all macroeconomists who, like Simon Wren-Lewis, don’t want to be disturbed in their doings — eminent mathematical statistician David Freedman has put together a very practical list of vacuous responses to criticism that can be freely used to save your peace of mind:

We know all that. Nothing is perfect … The assumptions are reasonable. The assumptions don’t matter. The assumptions are conservative. You can’t prove the assumptions are wrong. The biases will cancel. We can model the biases. We’re only doing what everybody else does. Now we use more sophisticated techniques. If we don’t do it, someone else will. What would you do? The decision-maker has to be better off with us than without us … The models aren’t totally useless. You have to do the best you can with the data. You have to make assumptions in order to make progress. You have to give the models the benefit of the doubt. Where’s the harm?

Wren-Lewis trivializing Romer’s critique

20 Sep, 2016 at 22:11 | Posted in Economics | 1 Comment

As yours truly wrote last week, there has been much discussion going on among academic economists about Paul Romer’s recent critique of ‘modern’ macroeconomics.

Now Oxford professor Simon Wren-Lewis has a blog post up arguing that Romer’s critique is

unfair and wide of the mark in places … Paul’s discussion of real effects from monetary policy, and the insistence on productivity shocks as business cycle drivers, is pretty dated … Yet it took a long time for RBC models to be replaced by New Keynesian models, and you will still see RBC models around. Elements of the New Classical counter revolution of the 1980s still persist in some places … The impression Paul Romer’s article gives, might just have been true in a few years in the 1980s before New Keynesian theory arrived. Since the 1990s New Keynesian theory is now the orthodoxy, and is used by central banks around the world.

Now this rather unsuccessful attempt to disarm the real force of Romer’s critique should come as no surprise for anyone who has been following Wren-Lewis’ writings over the years.

In a recent paper — Unravelling the New Classical Counter Revolution — Wren-Lewis writes approvingly about all the ‘impressive’ theoretical insights New Classical economics has brought to macroeconomics:

The theoretical insights that New Classical economists brought to the table were impressive: besides rational expectations, there was a rationalisation of permanent income and the life-cycle models using intertemporal optimisation, time inconsistency and more …

A new revolution, that replaces current methods with older ways of doing macroeconomics, seems unlikely and I would argue is also undesirable. The discipline does not need to advance one revolution at a time …

To understand modern academic macroeconomics, it is no longer essential that you start with The General Theory. It is far more important that you read Lucas and Sargent (1979), which is a central text in what is generally known as the New Classical Counter Revolution (NCCR). That gave birth to DSGE models and the microfoundations programme, which are central to mainstream macroeconomics today …

There’s something that just does not sit very well with this picture of modern macroeconomics.

‘Read Lucas and Sargent (1979)’. Yes, why not. That is exactly what Romer did!

One who has also read it is Wren-Lewis’s ‘New Keynesian’ buddy Paul Krugman. And this is what he has to say on that reading experience:

Lucas and his school … went even further down the equilibrium rabbit hole, notably with real business cycle theory. And here is where the kind of willful obscurantism Romer is after became the norm. I wrote last year about the remarkable failure of RBC theorists ever to offer an intuitive explanation of how their models work, which I at least hinted was willful:

“But the RBC theorists never seem to go there; it’s right into calibration and statistical moments, with never a break for intuition. And because they never do the simple version, they don’t realize (or at any rate don’t admit to themselves) how fundamentally silly the whole thing sounds, how much it’s at odds with lived experience.”

Paul Krugman

And so has Truman F. Bewley:

Lucas and Rapping (1969) claim that cyclical increases in unemployment occur when workers quit their jobs because wages or salaries fall below expectations …

According to this explanation, when wages are unusually low, people become unemployed in order to enjoy free time, substituting leisure for income at a time when they lose the least income …

According to the theory, quits into unemployment increase during recessions, whereas historically quits decrease sharply and roughly half of unemployed workers become jobless because they are laid off … During the recession I studied, people were even afraid to change jobs because new ones might prove unstable and lead to unemployment …

If wages and salaries hardly ever fall, the intertemporal substitution theory is widely applicable only if the unemployed prefer jobless leisure to continued employment at their old pay. However, the attitude and circumstances of the unemployed are not consistent with their having made this choice …

In real business cycle theory, unemployment is interpreted as leisure optimally selected by workers, as in the Lucas-Rapping model. It has proved difficult to construct business cycle models consistent with this assumption and with real wage fluctuations as small as they are in reality, relative to fluctuations in employment.

This is, of course, only what you would expect of New Classical Chicago economists.

So, what’s the problem?

The problem is that sadly enough this extraterrestrial view of unemployment is actually shared by Wren-Lewis and other so-called ‘New Keynesians’ — a school whose microfounded dynamic stochastic general equilibrium models cannot even incorporate such a basic fact of reality as involuntary unemployment!

Of course, working with microfounded representative-agent models, this should come as no surprise. If one representative agent is employed, all representative agents are. The kind of unemployment that occurs is voluntary, since it consists only of the adjustments of hours of work that these optimizing agents make to maximize their utility.

In the basic DSGE models used by most ‘New Keynesians’, the labour market is always cleared – responding to a changing interest rate, expected lifetime incomes, or real wages, the representative agent maximizes the utility function by varying her labour supply, money holdings and consumption over time. Most importantly – if the real wage somehow deviates from its ‘equilibrium value,’ the representative agent adjusts her labour supply, so that when the real wage is higher than its ‘equilibrium value,’ labour supply is increased, and when the real wage is below its ‘equilibrium value,’ labour supply is decreased.

In this model world, unemployment is always an optimal response to changes in labour market conditions. Hence, unemployment is totally voluntary. To be unemployed is something one optimally chooses to be.
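
To see where the ‘voluntary unemployment’ conclusion comes from, here is a stylized sketch of the household problem behind these models (generic textbook notation, not drawn from any particular paper): the representative agent chooses consumption and hours to maximize lifetime utility subject to a budget constraint.

```latex
% Stylized 'New Keynesian' household problem (illustrative notation only)
\max_{\{c_t,\,n_t\}} \ \mathbb{E}_0 \sum_{t=0}^{\infty} \beta^{t}
  \Big[ \ln c_t - \chi \,\frac{n_t^{\,1+\varphi}}{1+\varphi} \Big]
\quad \text{s.t.} \quad c_t + b_{t+1} = w_t n_t + (1+r_t)\, b_t
% First-order condition for hours:
\chi\, n_t^{\varphi}\, c_t = w_t
```

Given consumption, hours worked rise and fall with the real wage, so whatever reduction in employment the model produces is booked as an optimal reduction in labour supply, never as people unable to find work at the going wage.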

To Wren-Lewis it seems as though the ‘New Keynesian’ acceptance of rational expectations, representative agents and microfounded DSGE models is something more or less self-evidently good. Not all economists (yours truly included) share that view:

While one can understand that some of the elements in DSGE models seem to appeal to Keynesians at first sight, after closer examination, these models are in fundamental contradiction to Post-Keynesian and even traditional Keynesian thinking. The DSGE model is a model in which output is determined in the labour market as in New Classical models and in which aggregate demand plays only a very secondary role, even in the short run.

In addition, given the fundamental philosophical problems presented for the use of DSGE models for policy simulation, namely the fact that a number of parameters used have completely implausible magnitudes and that the degree of freedom for different parameters is so large that DSGE models with fundamentally different parametrization (and therefore different policy conclusions) equally well produce time series which fit the real-world data, it is also very hard to understand why DSGE models have reached such a prominence in economic science in general.

Sebastian Dullien

Neither New Classical nor ‘New Keynesian’ microfounded DSGE macro models have helped us foresee, understand or craft solutions to the problems of today’s economies.

Wren-Lewis ultimately falls back on the same kind of models that he criticizes, and it would surely be interesting to hear him explain, for once, how silly assumptions like ‘hyperrationality’ and ‘representative agents’ help him work out the fundamentals of a truly relevant macroeconomic analysis.

In a recent paper on modern macroeconomics, another of Wren-Lewis’s ‘New Keynesian’ buddies, macroeconomist Greg Mankiw, wrote:

The real world of macroeconomic policymaking can be disheartening for those of us who have spent most of our careers in academia. The sad truth is that the macroeconomic research of the past three decades has had only minor impact on the practical analysis of monetary or fiscal policy. The explanation is not that economists in the policy arena are ignorant of recent developments. Quite the contrary: The staff of the Federal Reserve includes some of the best young Ph.D.’s, and the Council of Economic Advisers under both Democratic and Republican administrations draws talent from the nation’s top research universities. The fact that modern macroeconomic research is not widely used in practical policymaking is prima facie evidence that it is of little use for this purpose. The research may have been successful as a matter of science, but it has not contributed significantly to macroeconomic engineering.

So, then what is the raison d’être of macroeconomics, if it has nothing to say about the real world and the economic problems out there?

If macroeconomic models – no matter what ilk – assume representative actors, rational expectations, market clearing and equilibrium, and we know that real people and markets cannot be expected to obey these assumptions, the warrants for supposing that conclusions or hypotheses about causally relevant mechanisms or regularities can be bridged to real-world economies are obviously non-justifiable. Macroeconomic theorists – regardless of being ‘New Monetarist’, ‘New Classical’ or ‘New Keynesian’ – ought to do some ontological reflection and heed Keynes’ warnings on using thought-models in economics:

The object of our analysis is, not to provide a machine, or method of blind manipulation, which will furnish an infallible answer, but to provide ourselves with an organized and orderly method of thinking out particular problems; and, after we have reached a provisional conclusion by isolating the complicating factors one by one, we then have to go back on ourselves and allow, as well as we can, for the probable interactions of the factors amongst themselves. This is the nature of economic thinking. Any other way of applying our formal principles of thought (without which, however, we shall be lost in the wood) will lead us into error.

So, these are some of my arguments for why I think that Simon Wren-Lewis ought to be even more critical of the present state of macroeconomics — including ‘New Keynesian’ macroeconomics — than he is. Trying to represent real-world target systems with models flagrantly at odds with reality is futile. And whether those models are New Classical or ‘New Keynesian’ makes very little difference.

Fortunately — when you have grown tired of the kind of macroeconomic apologetics produced by ‘New Keynesian’ macroeconomists like Wren-Lewis, Mankiw, and Krugman — there are still some real Keynesian macroeconomists to read. One of them — Axel Leijonhufvud — writes:

For many years now, the main alternative to Real Business Cycle Theory has been a somewhat loose cluster of models given the label of New Keynesian theory. New Keynesians adhere on the whole to the same DSGE modeling technology as RBC macroeconomists but differ in the extent to which they emphasise inflexibilities of prices or other contract terms as sources of short-term adjustment problems in the economy. The “New Keynesian” label refers back to the “rigid wages” brand of Keynesian theory of 40 or 50 years ago. Except for this stress on inflexibilities this brand of contemporary macroeconomic theory has basically nothing Keynesian about it …

I conclude that dynamic stochastic general equilibrium theory has shown itself an intellectually bankrupt enterprise. But this does not mean that we should revert to the old Keynesian theory that preceded it (or adopt the New Keynesian theory that has tried to compete with it). What we need to learn from Keynes … are about how to view our responsibilities and how to approach our subject.

No matter how brilliantly silly the ‘New Keynesian’ DSGE models that central banks, Wren-Lewis, and his buddies come up with may be, they do not help us work on the fundamental issues of modern economies. Using that kind of model only confirms Robert Gordon’s dictum that today

rigor competes with relevance in macroeconomic and monetary theory, and in some lines of development macro and monetary theorists, like many of their colleagues in micro theory, seem to consider relevance to be more or less irrelevant.

History of Modern Monetary Theory

20 Sep, 2016 at 19:20 | Posted in Economics | Comments Off on History of Modern Monetary Theory

 

A skyrocketing economics blog

20 Sep, 2016 at 19:04 | Posted in Varia | 4 Comments


Tired of the idea of an infallible mainstream neoclassical economics and its perpetuation of spoon-fed orthodoxy, yours truly launched this blog five years ago. The number of visitors has increased steadily, and with my posts now having been viewed more than 3 million times, I have to admit to still being — given the rather wonkish character of the blog, with posts mostly on economic theory, statistics, econometrics, theory of science and methodology — rather gobsmacked that so many are interested and take the time to read the often rather geeky stuff posted here.

In the 21st century the blogosphere has without any doubt become one of the greatest channels for dispersing new knowledge and information. As a blogger I can specialize in those particular topics an economist and critical realist professor of social science happens to have both deep knowledge of and interest in. That, of course, also means — in the modern long tail world — being able to target a segment of readers with much narrower and more specialized interests than newspapers and magazines as a rule could aim for — and still attract quite a lot of readers.

Chicago drivel — a sure way to get a ‘Nobel prize’ in economics

19 Sep, 2016 at 17:38 | Posted in Economics | 6 Comments

In 2007 Thomas Sargent gave a graduation speech at the University of California at Berkeley, giving the grads “a short list of valuable lessons that our beautiful subject teaches”:

1. Many things that are desirable are not feasible.
2. Individuals and communities face trade-offs.
3. Other people have more information about their abilities, their efforts, and their preferences than you do.
4. Everyone responds to incentives, including people you want to help. That is why social safety nets don’t always end up working as intended.
5. There are trade-offs between equality and efficiency.
6. In an equilibrium of a game or an economy, people are satisfied with their choices. That is why it is difficult for well-meaning outsiders to change things for better or worse.
7. In the future, you too will respond to incentives. That is why there are some promises that you’d like to make but can’t. No one will believe those promises because they know that later it will not be in your interest to deliver. The lesson here is this: before you make a promise, think about whether you will want to keep it if and when your circumstances change. This is how you earn a reputation.
8. Governments and voters respond to incentives too. That is why governments sometimes default on loans and other promises that they have made.
9. It is feasible for one generation to shift costs to subsequent ones. That is what national government debts and the U.S. social security system do (but not the social security system of Singapore).
10. When a government spends, its citizens eventually pay, either today or tomorrow, either through explicit taxes or implicit ones like inflation.
11. Most people want other people to pay for public goods and government transfers (especially transfers to themselves).
12. Because market prices aggregate traders’ information, it is difficult to forecast stock prices and interest rates and exchange rates.

Reading through this list of “valuable lessons”, things suddenly fall into place.

This kind of self-righteous neoliberal drivel has again and again been praised and prized. And not only by econ bloggers and right-wing think-tanks.

Out of the seventy-six laureates that have been awarded ‘The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel,’ twenty-eight have been affiliated with the University of Chicago. The world is really a small place when it comes to economics …

Why critique in economics is so important

17 Sep, 2016 at 16:46 | Posted in Economics | 9 Comments

Some of the economists who agree about the state of macro in private conversations will not say so in public. This is consistent with the explanation based on different prices. Yet some of them also discourage me from disagreeing openly, which calls for some other explanation.

They may feel that they will pay a price too if they have to witness the unpleasant reaction that criticism of a revered leader provokes. There is no question that the emotions are intense. After I criticized a paper by Lucas, I had a chance encounter with someone who was so angry that at first he could not speak. Eventually, he told me, “You are killing Bob.”

But my sense is that the problem goes even deeper than avoidance. Several economists I know seem to have assimilated a norm that the post-real macroeconomists actively promote – that it is an extremely serious violation of some honor code for anyone to criticize openly a revered authority figure – and that neither facts that are false, nor predictions that are wrong, nor models that make no sense matter enough to worry about …

Science, and all the other research fields spawned by the enlightenment, survive by “turning the dial to zero” on these innate moral senses. Members cultivate the conviction that nothing is sacred and that authority should always be challenged … By rejecting any reliance on central authority, the members of a research field can coordinate their independent efforts only by maintaining an unwavering commitment to the pursuit of truth, established imperfectly, via the rough consensus that emerges from many independent assessments of publicly disclosed facts and logic; assessments that are made by people who honor clearly stated disagreement, who accept their own fallibility, and relish the chance to subvert any claim of authority, not to mention any claim of infallibility.

Paul Romer

This is part of why yours truly appreciates Romer’s article, and even finds it ‘brave.’ Everyone knows what he says is true, but few have the courage to openly speak and write about it. The ‘honour code’ in academia certainly needs revision.

The excessive formalization and mathematization of economics since WW II have made mainstream — neoclassical — economists more or less obsessed with formal, deductive-axiomatic models. Confronted with the critique that they do not solve real problems, they often react as Saint-Exupéry’s Great Geographer, who, in response to the questions posed by The Little Prince, says that he is too occupied with his scientific work to be able to say anything about reality. Confronted with economic theory’s lack of relevance and inability to tackle real problems, one retreats into the wonderful world of economic models. While the economic problems in the world around us steadily increase, one is rather happily playing along with the latest toys in the mathematical toolbox.

Modern mainstream economics is surely very rigorous — but if it’s rigorously wrong, who cares?

Instead of making formal logical argumentation based on deductive-axiomatic models the message, I think we are better served by economists who more than anything else try to contribute to solving real problems. And then the motto of John Maynard Keynes is more valid than ever:

It is better to be vaguely right than precisely wrong

Jensen’s inequality (wonkish)

17 Sep, 2016 at 09:26 | Posted in Statistics & Econometrics | Comments Off on Jensen’s inequality (wonkish)
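
For reference, a minimal statement of the inequality the post title points to: for a convex function φ and a random variable X with finite expectation,

```latex
\varphi\big(\mathbb{E}[X]\big) \;\le\; \mathbb{E}\big[\varphi(X)\big]
```

with the inequality reversed for concave functions. One familiar consequence: since the logarithm is concave, E[ln X] ≤ ln E[X], which is one reason averages of nonlinear transformations so easily mislead.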


Economists’ infatuation with immense assumptions

16 Sep, 2016 at 23:31 | Posted in Economics | Comments Off on Economists’ infatuation with immense assumptions

Peter Dorman is one of those rare economists that it is always a pleasure to read. Here his critical eye is focussed on economists’ infatuation with homogeneity and averages:

You may feel a gnawing discomfort with the way economists use statistical techniques. Ostensibly they focus on the difference between people, countries or whatever the units of observation happen to be, but they nevertheless seem to treat the population of cases as interchangeable—as homogenous on some fundamental level. As if people were replicants.

You are right, and this brief talk is about why and how you’re right, and what this implies for the questions people bring to statistical analysis and the methods they use.

Our point of departure will be a simple multiple regression model of the form

y = β0 + β1 x1 + β2 x2 + … + ε

where y is an outcome variable, x1 is an explanatory variable of interest, the other x’s are control variables, the β’s are coefficients on these variables (or a constant term, in the case of β0), and ε is a vector of residuals. We could apply the same analysis to more complex functional forms, and we would see the same things, so let’s stay simple.

What question does this model answer? It tells us the average effect that variations in x1 have on the outcome y, controlling for the effects of other explanatory variables. Repeat: it’s the average effect of x1 on y.

This model is applied to a sample of observations. What is assumed to be the same for these observations? (1) The outcome variable y is meaningful for all of them. (2) The list of potential explanatory factors, the x’s, is the same for all. (3) The effects these factors have on the outcome, the β’s, are the same for all. (4) The proper functional form that best explains the outcome is the same for all. In these four respects all units of observation are regarded as essentially the same.

Now what is permitted to differ across these observations? Simply the values of the x’s and therefore the values of y and ε. That’s it.

Thus measures of the difference between individual people or other objects of study are purchased at the cost of immense assumptions of sameness. It is these assumptions that both reflect and justify the search for average effects …

In the end, statistical analysis is about imposing a common structure on observations in order to understand differentiation. Any structure requires assuming some kinds of sameness, but some approaches make much more sweeping assumptions than others. An unfortunate symbiosis has arisen in economics between statistical methods that excessively rule out diversity and statistical questions that center on average (non-diverse) effects. This is damaging in many contexts, including hypothesis testing, program evaluation, forecasting—you name it …

The first step toward recovery is admitting you have a problem. Every statistical analyst should come clean about what assumptions of homogeneity are being made, in light of their plausibility and the opportunities that exist for relaxing them.
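
Dorman’s point is easy to make concrete. Below is a minimal sketch, with entirely made-up data and variable names, of what the homogeneity assumption buys and hides: the pooled regression returns one ‘average’ slope for everyone, while an interaction term lets the effect differ across two sub-populations.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000

# Two hypothetical sub-populations with genuinely different responses:
# for group 0 the effect of x1 on y is +2, for group 1 it is -1.
group = rng.integers(0, 2, n)
x1 = rng.normal(size=n)
true_slope = np.where(group == 0, 2.0, -1.0)
y = 1.0 + true_slope * x1 + rng.normal(scale=0.5, size=n)

def ols(X, y):
    """Ordinary least squares coefficients via numpy's least-squares solver."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Pooled model: one beta for everybody (the homogeneity assumption).
X_pooled = np.column_stack([np.ones(n), x1])
print("pooled 'average' slope:", ols(X_pooled, y)[1])          # roughly 0.5

# Interaction model: the slope is allowed to differ by group.
X_inter = np.column_stack([np.ones(n), x1, group, x1 * group])
b = ols(X_inter, y)
print("slope for group 0:", b[1])                               # roughly  2
print("slope for group 1:", b[1] + b[3])                        # roughly -1
```

The pooled coefficient is not ‘wrong’ as an average, but it describes no one in either group, which is exactly the sense in which average effects are purchased with assumptions of sameness.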

Limiting model assumptions in economic science always have to be closely examined. If we are going to be able to show that the mechanisms or causes that we isolate and handle in our models are stable in the sense that they do not change when we “export” them to our “target systems”, we have to be able to show that they hold under more than ceteris paribus conditions; otherwise they are, a fortiori, only of limited value for our understanding, explanation or prediction of real economic systems.

Our admiration for technical virtuosity should not blind us to the fact that we have to have a cautious attitude towards probabilistic inferences in economic contexts. Science should help us penetrate to the causal process lying behind events and disclose the causal forces behind what appears to be simple facts. We should look out for causal relations, but econometrics can never be more than a starting point in that endeavour, since econometric (statistical) explanations are not explanations in terms of mechanisms, powers, capacities or causes. Firmly stuck in an empiricist tradition, econometrics is only concerned with the measurable aspects of reality. But there is always the possibility that there are other variables – of vital importance and, although perhaps unobservable and non-additive, not necessarily epistemologically inaccessible – that were not considered for the model. Those that were included can hence never be guaranteed to be more than potential causes, not real causes. A rigorous application of econometric methods in economics really presupposes that the phenomena of our real-world economies are ruled by stable causal relations between variables. A perusal of the leading econom(etr)ic journals shows that most econometricians still concentrate on fixed-parameter models and that parameter values estimated in specific spatio-temporal contexts are presupposed to be exportable to totally different contexts. To warrant this assumption one, however, has to convincingly establish that the targeted acting causes are stable and invariant so that they maintain their parametric status after the bridging. The endemic lack of predictive success of the econometric project indicates that this hope of finding fixed parameters is a hope for which there really is no other ground than hope itself.

Real-world social systems are not governed by stable causal mechanisms or capacities. The kinds of “laws” and relations that econometrics has established are laws and relations about entities in models that presuppose causal mechanisms being atomistic and additive. When causal mechanisms operate in real-world social target systems they only do it in ever-changing and unstable combinations where the whole is more than a mechanical sum of parts. If economic regularities obtain they do it (as a rule) only because we engineered them for that purpose. Outside man-made “nomological machines” they are rare, or even non-existent. Unfortunately that also makes most of the achievements of econometrics – like most of the contemporary endeavours of mainstream economic theoretical modeling – rather useless.

Remember that a model is not the truth. It is a lie to help you get your point across. And in the case of modeling economic risk, your model is a lie about others, who are probably lying themselves. And what’s worse than a simple lie? A complicated lie.

Sam L. Savage The Flaw of Averages

Lazy theorizing and useless macroeconomics

15 Sep, 2016 at 15:03 | Posted in Economics | 6 Comments

In a new, extremely well-written, brave, and interesting article, Paul Romer goes on a frontal attack against the theories that have put macroeconomics on a path of ‘intellectual regress’ for three decades now:

Macroeconomists got comfortable with the idea that fluctuations in macroeconomic aggregates are caused by imaginary shocks, instead of actions that people take, after Kydland and Prescott (1982) launched the real business cycle (RBC) model …

In response to the observation that the shocks are imaginary, a standard defence invokes Milton Friedman’s (1953) methodological assertion from unnamed authority that “the more significant the theory, the more unrealistic the assumptions.” More recently, “all models are false” seems to have become the universal hand-wave for dismissing any fact that does not conform to the model that is the current favourite.

The noncommittal relationship with the truth revealed by these methodological evasions and the “less than totally convinced …” dismissal of fact goes so far beyond post-modern irony that it deserves its own label. I suggest “post-real.”

Paul Romer

There are many kinds of useless ‘post-real’ economics held in high regard within the mainstream economics establishment today. Few — if any — deserve it less than the macroeconomic theory/method — mostly connected with Nobel laureates Finn Kydland, Robert Lucas, Edward Prescott and Thomas Sargent — called calibration.


Paul Romer and yours truly are certainly not the only ones having doubts about the scientific value of calibration. In the Journal of Economic Perspectives (1996, vol. 10), Nobel laureates Lars Peter Hansen and James J. Heckman write:

It is only under very special circumstances that a micro parameter such as the inter-temporal elasticity of substitution or even a marginal propensity to consume out of income can be ‘plugged into’ a representative consumer model to produce an empirically concordant aggregate model … What credibility should we attach to numbers produced from their ‘computational experiments’, and why should we use their ‘calibrated models’ as a basis for serious quantitative policy evaluation? … There is no filing cabinet full of robust micro estimates ready to use in calibrating dynamic stochastic equilibrium models … The justification for what is called ‘calibration’ is vague and confusing.

Mathematical statistician Aris Spanos — in Error and Inference (Mayo & Spanos, 2010, p. 240) — is no less critical:

Given that “calibration” purposefully forsakes error probabilities and provides no way to assess the reliability of inference, how does one assess the adequacy of the calibrated model? …

The idea that it should suffice that a theory “is not obscenely at variance with the data” (Sargent, 1976, p. 233) is to disregard the work that statistical inference can perform in favor of some discretional subjective appraisal … it hardly recommends itself as an empirical methodology that lives up to the standards of scientific objectivity

In physics it may possibly not be straining credulity too much to model processes as ergodic – where time and history do not really matter – but in the social and historical sciences it is obviously ridiculous. If societies and economies were ergodic worlds, why do econometricians fervently discuss things such as structural breaks and regime shifts? That they do is an indication of how unrealistic it is to treat open systems as analyzable with ergodic concepts.
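
The point is easy to illustrate. Here is a minimal sketch, using an entirely made-up series, of what a regime shift does to a fixed-parameter estimate: a single ‘stable’ coefficient fitted over the whole sample describes neither regime.

```python
import numpy as np

rng = np.random.default_rng(1)
t = 100  # observations per regime

# A made-up 'economy' whose true parameter shifts halfway through the sample.
x = rng.normal(size=2 * t)
beta_true = np.concatenate([np.full(t, 0.8), np.full(t, -0.4)])
y = beta_true * x + rng.normal(scale=0.3, size=2 * t)

def slope(x, y):
    """Simple bivariate OLS slope."""
    return np.cov(x, y, bias=True)[0, 1] / np.var(x)

print("whole-sample 'fixed' parameter:", round(slope(x, y), 2))          # near  0.2
print("regime 1 parameter:            ", round(slope(x[:t], y[:t]), 2))   # near  0.8
print("regime 2 parameter:            ", round(slope(x[t:], y[t:]), 2))   # near -0.4
```

Nothing in the pooled estimate signals that the underlying relation has changed; one has to go looking for the break, which is precisely what the ergodic presumption invites us not to do.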

The future is not reducible to a known set of prospects. It is not like sitting at the roulette table and calculating what the future outcomes of spinning the wheel will be. Reading Lucas, Sargent, Prescott, Kydland and other calibrationists one comes to think of Robert Clower’s apt remark that

much economics is so far removed from anything that remotely resembles the real world that it’s often difficult for economists to take their own subject seriously.

As Romer says:

Math cannot establish the truth value of a fact. Never has. Never will.

So instead of assuming calibration and rational expectations to be right, one ought to confront the hypotheses with the available evidence. It is not enough to construct models. Anyone can construct models. To be seriously interesting, models have to come with an aim. They have to have an intended use. If the intention of calibration and rational expectations is to help us explain real economies, they have to be evaluated from that perspective. A model or hypothesis without a specific applicability does not really deserve our interest.

To say, as Edward Prescott does, that

one can only test if some theory, whether it incorporates rational expectations or, for that matter, irrational expectations, is or is not consistent with observations

is not enough. Without strong evidence all kinds of absurd claims and nonsense may pretend to be science. We have to demand more of a justification than this rather watered-down version of “anything goes” when it comes to rationality postulates. If one proposes rational expectations one also has to support its underlying assumptions. None is given, which makes it rather puzzling how rational expectations has become the standard modeling assumption made in much of modern macroeconomics. Perhaps the reason is that economists often mistake mathematical beauty for truth.

But I think Prescott’s view is also the reason why calibration economists are not particularly interested in empirical examinations of how real choices and decisions are made in real economies. In the hands of Lucas, Prescott and Sargent, rational expectations has been transformed from an – in principle – testable hypothesis to an irrefutable proposition. Believing in a set of irrefutable propositions may be comfortable – like religious convictions or ideological dogmas – but it is not science.

So where does this all lead us? What is the trouble ahead for economics? Putting a sticky-price DSGE lipstick on the RBC pig sure won’t do. Neither will — as Paul Romer notices — just looking the other way and pretending it’s raining:

The trouble is not so much that macroeconomists say things that are inconsistent with the facts. The real trouble is that other economists do not care that the macroeconomists do not care about the facts. An indifferent tolerance of obvious error is even more corrosive to science than committed advocacy of error.

Proper use of math

15 Sep, 2016 at 08:33 | Posted in Economics | 18 Comments

Balliol Croft, Cambridge
27. ii. 06
My dear Bowley,

I have not been able to lay my hands on any notes as to Mathematico-economics that would be of any use to you: and I have very indistinct memories of what I used to think on the subject. I never read mathematics now: in fact I have forgotten even how to integrate a good many things.

But I know I had a growing feeling in the later years of my work at the subject that a good mathematical theorem dealing with economic hypotheses was very unlikely to be good economics: and I went more and more on the rules — (1) Use mathematics as a short-hand language, rather than as an engine of inquiry. (2) Keep to them till you have done. (3) Translate into English. (4) Then illustrate by examples that are important in real life. (5) Burn the mathematics. (6) If you can’t succeed in 4, burn 3. This last I did often.

I believe in Newton’s Principia Methods, because they carry so much of the ordinary mind with them. Mathematics used in a Fellowship thesis by a man who is not a mathematician by nature — and I have come across a good deal of that — seems to me an unmixed evil. And I think you should do all you can to prevent people from using Mathematics in cases in which the English language is as short as the Mathematical …

Yours emptyhandedly,

Alfred Marshall

Has economics — really — become more empirical?

14 Sep, 2016 at 19:22 | Posted in Economics | 4 Comments

In Economics Rules (OUP 2015), Dani Rodrik maintains that ‘imaginative empirical methods’ — such as game theoretical applications, natural experiments, field experiments, lab experiments, RCTs — can help us to answer questions concerning the external validity of economic models. In Rodrik’s view they are more or less tests of ‘an underlying economic model’ and enable economists to make the right selection from the ever expanding ‘collection of potentially applicable models.’ Writes Rodrik:

Another way we can observe the transformation of the discipline is by looking at the new areas of research that have flourished in recent decades. Three of these are particularly noteworthy: behavioral economics, randomized controlled trials (RCTs), and institutions … They suggest that the view of economics as an insular, inbred discipline closed to the outside influences is more caricature than reality.

I beg to differ. When looked at carefully, there are in fact few real reasons to share Rodrik’s optimism on this ’empirical turn’ in economics.

Field studies and experiments face the same basic problem as theoretical models — they are built on rather artificial conditions and have difficulties with the ‘trade-off’ between internal and external validity. The more artificial the conditions, the more internal validity, but also the less external validity. The more we rig experiments/field studies/models to avoid ‘confounding factors’, the less the conditions are reminiscent of the real ‘target system.’ You could of course discuss field studies vs. experiments vs. theoretical models in terms of realism — but the nodal issue is not about that, but basically about how economists using different isolation strategies in different ‘nomological machines’ attempt to learn about causal relationships. I have strong doubts about the generalizability of all three research strategies, because the probability is high that causal mechanisms are different in different contexts and that lack of homogeneity/stability/invariance doesn’t give us warranted export licenses to the ‘real’ societies or economies.

If we see experiments or field studies as theory tests or models that ultimately aspire to say something about the real ‘target system,’ then the problem of external validity is central (and was for a long time also a key reason why behavioural economists had trouble getting their research results published).

Assume that you have examined how the work performance of Chinese workers A is affected by B (‘treatment’). How can we extrapolate/generalize to new samples outside the original population (e.g. to the US)? How do we know that any replication attempt ‘succeeds’? How do we know when these replicated experimental results can be said to justify inferences made in samples from the original population? If, for example, P(A|B) is the conditional density function for the original sample, and we are interested in making an extrapolative prediction of E[P(A|B)], how can we know that the new sample’s density function is identical with the original? Unless we can give some really good argument for this being the case, inferences built on P(A|B) are not really saying anything about the target system’s P′(A|B).

As I see it, this is the heart of the matter. External validity/extrapolation/generalization is founded on the assumption that we can make inferences based on P(A|B) that are exportable to other populations for which P′(A|B) applies. Sure, if one can convincingly show that P and P′ are similar enough, the problems are perhaps surmountable. But arbitrarily just introducing functional specification restrictions of the type invariance/stability/homogeneity is, at least for an epistemological realist, far from satisfactory. And often it is – unfortunately – exactly this that I see when I look at mainstream neoclassical economists’ models/experiments/field studies.

By this I do not mean to say that empirical methods per se are so problematic that they can never be used. On the contrary, I am basically — though not without reservations — in favour of the increased use of experiments and field studies within economics. Not least as an alternative to completely barren ‘bridge-less’ axiomatic-deductive theory models. My criticism is more about aspiration levels and what we believe that we can achieve with our mediational epistemological tools and methods in the social sciences.

Many ‘experimentalists’ claim that it is easy to replicate experiments under different conditions and therefore a fortiori easy to test the robustness of experimental results. But is it really that easy? If, in the example given above, we run a test and find that our predictions were not correct – what can we conclude? That B ‘works’ in China but not in the US? Or that B ‘works’ in a backward agrarian society, but not in a post-modern service society? That B ‘worked’ in the field study conducted in 2008 but not in 2014? Population selection is almost never simple. Had the problem of external validity only been about inference from sample to population, this would be no critical problem. But the really interesting inferences are those we try to make from specific labs/experiments/fields to the specific real-world situations/institutions/structures that we are interested in understanding or (causally) explaining. And then the population problem is more difficult to tackle.

The increasing use of natural and quasi-natural experiments in economics during the last couple of decades has led not only Rodrik but several other prominent economists to triumphantly declare it a major step on a recent path toward empirics, where instead of being a deductive philosophy, economics is now increasingly becoming an inductive science.

In randomized trials the researchers try to find out the causal effects that different variables of interest may have by changing circumstances randomly — a procedure somewhat ('on average') equivalent to the usual ceteris paribus assumption.

Besides the fact that 'on average' is not always 'good enough,' it amounts to nothing but hand-waving to assume simpliciter, without argument, that it is tenable to treat social agents and relations as homogeneous and interchangeable entities.

Randomization is used basically to allow the econometrician to treat the population as consisting of interchangeable and homogeneous groups ('treatment' and 'control'). The regression models one arrives at by using randomized trials tell us the average effect that variations in variable X have on the outcome variable Y, without having to explicitly control for the effects of other explanatory variables R, S, T, etc. Everything is assumed to be essentially equal except the values taken by variable X.

In a usual regression context one would apply an ordinary least squares estimator (OLS) in trying to get an unbiased and consistent estimate:

Y = α + βX + ε,

where α is a constant intercept, β a constant ‘structural’ causal effect and ε an error term.

The problem here is that although we may get an estimate of the 'true' average causal effect, this may 'mask' important heterogeneous effects of a causal nature. Even if we get the right answer that the average causal effect is 0, those who are 'treated' (X = 1) may have causal effects equal to −100 and those 'not treated' (X = 0) causal effects equal to 100. Contemplating being treated or not, most people would probably want to know about this underlying heterogeneity and would not consider the OLS average effect particularly enlightening.
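A small simulation may illustrate this. It is a hedged sketch: the ±100 effects come from the example above, but the split into two equally large sub-populations with opposite effects is my own illustrative recasting:

```python
# Sketch (hypothetical numbers): an OLS estimate of the average causal effect
# can be close to zero even though every individual effect is huge.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

group = rng.integers(0, 2, n)               # two equally large sub-populations
tau = np.where(group == 0, -100.0, 100.0)   # individual causal effects
x = rng.integers(0, 2, n)                   # randomized 'treatment'
y = rng.normal(0.0, 1.0, n) + tau * x       # observed outcome

# With a binary regressor, the OLS slope equals the difference in group means.
beta_hat = y[x == 1].mean() - y[x == 0].mean()
effect_0 = y[(x == 1) & (group == 0)].mean() - y[(x == 0) & (group == 0)].mean()
effect_1 = y[(x == 1) & (group == 1)].mean() - y[(x == 0) & (group == 1)].mean()

print("OLS average effect:", round(beta_hat, 2))   # ~ 0
print("Effect in group 0 :", round(effect_0, 2))   # ~ -100
print("Effect in group 1 :", round(effect_1, 2))   # ~ +100
```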

Limiting model assumptions in economic science always have to be closely examined. If we are to show that the mechanisms or causes we isolate and handle in our models are stable, in the sense that they do not change when we 'export' them to our 'target systems,' we have to show that they do not hold only under ceteris paribus conditions, since mechanisms that do are a fortiori of only limited value for our understanding, explanations or predictions of real economic systems.

Real world social systems are not governed by stable causal mechanisms or capacities. The kinds of 'laws' and relations that econometrics has established are laws and relations about entities in models that presuppose causal mechanisms being atomistic and additive. When causal mechanisms operate in real world social target systems they only do it in ever-changing and unstable combinations where the whole is more than a mechanical sum of parts. If economic regularities obtain, they do it (as a rule) only because we engineered them for that purpose. Outside man-made 'nomological machines' they are rare, or even non-existent.

I also think that most ‘randomistas’ really underestimate the heterogeneity problem. It does not just turn up as an external validity problem when trying to ‘export’ regression results to different times or different target populations. It is also often an internal problem to the millions of regression estimates that economists produce every year.

Just like econometrics, randomization promises more than it can deliver, basically because it requires assumptions that cannot be maintained in practice.

Like econometrics, randomization is basically a deductive method. Given the assumptions (such as manipulability, transitivity, separability, additivity, linearity, etc.) these methods deliver deductive inferences. The problem, of course, is that we will never completely know when the assumptions are right. And although randomization may contribute to controlling for confounding, it does not guarantee it, since genuine randomness presupposes infinite experimentation and we know all real experimentation is finite. And even if randomization may help to establish average causal effects, it says nothing of individual effects unless homogeneity is added to the list of assumptions. Real target systems are seldom epistemically isomorphic to our axiomatic-deductive models/systems, and even if they were, we would still have to argue for the external validity of the conclusions reached from within these epistemically convenient models/systems. Causal evidence generated by randomization procedures may be valid in 'closed' models, but what we usually are interested in is causal evidence in the real target system we happen to live in.
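The 'on average' caveat can also be given a concrete form. The sketch below (again with purely hypothetical numbers of my own choosing) shows that the estimates centre on the true effect only over many re-randomizations; any single finite trial can be thrown off by chance imbalance in a background variable:

```python
# Sketch: randomization removes confounding only 'on average' over repeated
# experiments; a single finite randomization can leave the treatment and
# control groups imbalanced on a variable that affects the outcome.
import numpy as np

rng = np.random.default_rng(1)

def one_trial(n=20, true_effect=0.0):
    confounder = rng.normal(0.0, 1.0, n)      # affects the outcome directly
    treated = rng.permutation(np.r_[np.ones(n // 2), np.zeros(n // 2)])
    y = true_effect * treated + 5.0 * confounder + rng.normal(0.0, 1.0, n)
    return y[treated == 1].mean() - y[treated == 0].mean()

estimates = np.array([one_trial() for _ in range(2000)])
print("Mean over many re-randomizations :", round(estimates.mean(), 2))  # ~ 0
print("Spread of single-trial estimates :", round(estimates.std(), 2))   # sizeable
# Any individual trial can easily suggest a noticeable 'effect' of a treatment
# whose true effect is zero, purely through chance imbalance in the confounder.
```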

When does a conclusion established in population X hold for target population Y? Only under very restrictive conditions!

'Ideally controlled experiments' tell us with certainty what causes what effects — but only given the right 'closures.' Making appropriate extrapolations from (ideal, accidental, natural or quasi) experiments to different settings, populations or target systems is not easy. "It works there" is no evidence for "it will work here." Causes deduced in an experimental setting still have to show that they come with an export-warrant to the target population/system. The causal background assumptions made have to be justified, and without licenses to export, the value of 'rigorous' and 'precise' methods — and of 'on-average-knowledge' — is despairingly small.

So, no, I find it hard to share Rodrik's and others' enthusiasm and optimism about the value of (quasi-)natural experiments and all the statistical-econometric machinery that comes with them. Guess I'm still waiting for the export-warrant …

 

Taking assumptions like utility maximization or market equilibrium as a matter of course leads to the ‘standing presumption in economics that, if an empirical statement is deduced from standard assumptions then that statement is reliable’ …

The ongoing importance of these assumptions is especially evident in those areas of economic research, where empirical results are challenging standard views on economic behaviour like experimental economics or behavioural finance … From the perspective of Model-Platonism, these research-areas are still framed by the ‘superior insights’ associated with early 20th century concepts, essentially because almost all of their results are framed in terms of rational individuals, who engage in optimizing behaviour and, thereby, attain equilibrium. For instance, the attitude to explain cooperation or fair behaviour in experiments by assuming an ‘inequality aversion’ integrated in (a fraction of) the subjects’ preferences is strictly in accordance with the assumption of rational individuals, a feature which the authors are keen to report …

So, while the mere emergence of research areas like experimental economics is sometimes deemed a clear sign for the advent of a new era … a closer look at these fields allows us to illustrate the enduring relevance of the Model-Platonism-topos and, thereby, shows the pervasion of these fields with a traditional neoclassical style of thought.

Jakob Kapeller

Re game theory, yours truly remembers how, back in 1991, when earning my first Ph.D. with a dissertation on decision-making and rationality in social choice theory and game theory, I concluded that

repeatedly it seems as though mathematical tractability and elegance — rather than realism and relevance — have been the most applied guidelines for the behavioural assumptions being made. On a political and social level it is doubtful if the methodological individualism, ahistoricity and formalism they are advocating are especially valid.

This, of course, was like swearing in church. My mainstream neoclassical colleagues were — to say the least — not exactly überjoyed. Listening to what one of the world's most renowned game theorists — Ariel Rubinstein — has to say on the rather limited applicability of game theory in this interview (emphasis added), I basically think he confirms my doubts about how well-founded Rodrik's 'optimism' is:

Is game theory useful in a concrete sense or not? … I believe that game theory is very interesting. I’ve spent a lot of my life thinking about it, but I don’t respect the claims that it has direct applications.

The analogy I sometimes give is from logic. Logic is a very interesting field in philosophy, or in mathematics. But I don’t think anybody has the illusion that logic helps people to be better performers in life. A good judge does not need to know logic. It may turn out to be useful – logic was useful in the development of the computer sciences, for example – but it’s not directly practical in the sense of helping you figure out how best to behave tomorrow, say in a debate with friends, or when analysing data that you get as a judge or a citizen or as a scientist …

Game theory is about a collection of fables. Are fables useful or not? In some sense, you can say that they are useful, because good fables can give you some new insight into the world and allow you to think about a situation differently. But fables are not useful in the sense of giving you advice about what to do tomorrow, or how to reach an agreement between the West and Iran. The same is true about game theory …

In general, I would say there were too many claims made by game theoreticians about its relevance. Every book of game theory starts with “Game theory is very relevant to everything that you can imagine, and probably many things that you can’t imagine.” In my opinion that’s just a marketing device …

So — contrary to Rodrik’s optimism — I would argue that although different ’empirical’ approaches have been — more or less — integrated into mainstream economics, there is still a long way to go before economics has become a true empirical science.

Dark age of macroeconomics

13 Sep, 2016 at 15:41 | Posted in Economics | 3 Comments

In his 1936 “The General Theory of Employment, Interest and Money”, John Maynard Keynes already recognized that the idea that savings finance investments is wrong. Savings equal investment indeed, which is written as S=I. However, the way that this identity (roughly: definition in the form of an equation) holds is exactly the opposite …

Income is created by the value in excess of user cost which the producer obtains for the output he has sold; but the whole of this output must obviously have been sold either to a consumer or to another entrepreneur; and each entrepreneur's current investment is equal to the excess of the equipment which he has purchased from other entrepreneurs over his own user cost. Hence, in the aggregate the excess of income over consumption, which we call saving, cannot differ from the addition to capital equipment which we call investment. And similarly with net saving and net investment. Saving, in fact, is a mere residual. The decisions to consume and the decisions to invest between them determine incomes. Assuming that the decisions to invest become effective, they must in doing so either curtail consumption or expand income. Thus the act of investment in itself cannot help causing the residual or margin, which we call saving, to increase by a corresponding amount …

Clearness of mind on this matter is best reached, perhaps, by thinking in terms of decisions to consume (or to refrain from consuming) rather than of decisions to save. A decision to consume or not to consume truly lies within the power of the individual; so does a decision to invest or not to invest. The amounts of aggregate income and of aggregate saving are the results of the free choices of individuals whether or not to consume and whether or not to invest; but they are neither of them capable of assuming an independent value resulting from a separate set of decisions taken irrespective of the decisions concerning consumption and investment. In accordance with this principle, the conception of the propensity to consume will, in what follows, take the place of the propensity or disposition to save.

This means that investment is financed by credit. When banks create new loans, new deposits are credited to the borrower's account. These deposits are additional deposits that did not exist before. When the spending occurs, the investment takes place (and rises by some amount) and the seller of the goods or services that constitute the investment receives bank deposits. This is income not spent, which means that savings go up (by the same amount). Hence savings equal investment, but not because savings finance investment! Before Keynes, Wicksell and Schumpeter wrote about this as well, so it was common knowledge that loans finance investment and not savings. Today, we live in a dark age of macroeconomics and monetary theory, since this insight has been forgotten by most of the discipline.

Dirk Ehnts
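The bookkeeping behind Ehnts's point can be made explicit with a toy sketch. The three stylized balance sheets and the numbers below are purely illustrative simplifications of my own:

```python
# Toy double-entry sketch: the loan creates the deposit, the investment
# spending transfers it, and measured saving rises by the amount invested.
bank = {"loans": 0, "deposits": 0}
firm = {"deposit": 0, "equipment": 0, "debt": 0}
seller = {"deposit": 0}

def make_loan(amount):
    # A new loan creates a new deposit; no pre-existing savings are needed.
    bank["loans"] += amount
    bank["deposits"] += amount
    firm["deposit"] += amount
    firm["debt"] += amount

def invest(amount):
    # The firm buys capital equipment; the seller's income (a deposit) rises.
    firm["deposit"] -= amount
    firm["equipment"] += amount
    seller["deposit"] += amount

make_loan(100)
invest(100)

print("Investment:", firm["equipment"])   # 100
print("Saving    :", seller["deposit"])   # 100 -- S = I, but the loan came first
```

The identity S = I holds at the end of the sequence, but the causal order runs from the loan and the investment decision to the saving, not the other way around.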
