## Explained variance and Pythagoras’ theorem

31 July, 2012 at 14:39 | Posted in Statistics & Econometrics | Comments Off on Explained variance and Pythagoras’ theorem

In many statistical and econometric studies R² is used to measure goodness of fit – or, more technically, the fraction of variance “explained” by a regression.

But it’s actually a rather weird measure. As eminent mathematical statistician David Freedman writes:

The math is fine, but the concept is a little peculiar … Let’s take an example. Sacramento is about 78 miles from San Francisco, as the crow flies. Or, the crow could fly 60 miles East and 50 miles North, passing near Stockton at the turn. If we take the 60 and 50 as exact, Pythagoras tells us that the squared hypotenuse in the triangle is

60² + 50² = 3600 + 2500 = 6100 miles².

With “explained” as in “explained variance”, the geography lesson can be cruelly summarized. The area – squared distance – between San Francisco and Sacramento is 6100 miles², of which 3600 is explained by East …

The theory of explained variance boils down to Pythagoras’ theorem on the crow’s triangular flight. Explaining the area between San Francisco and Sacramento by East is zany, and explained variance may not be much better.
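
Freedman’s arithmetic is easy to reproduce. Here is a minimal sketch (in Python, with the mileage figures taken as exact, as in the quote) of the kind of ratio that “explained variance” reports in this analogy:

```python
import math

# Freedman's crow: 60 miles East and 50 miles North, both taken as exact.
east, north = 60.0, 50.0

squared_distance = east**2 + north**2             # 3600 + 2500 = 6100 "square miles"
as_the_crow_flies = math.sqrt(squared_distance)   # ~78 miles

# The "explained variance" analogue: the share of the squared distance
# accounted for by the East leg -- the same kind of ratio R² reports
# for a regression's fitted values.
share_explained_by_east = east**2 / squared_distance

print(round(as_the_crow_flies, 1))        # 78.1
print(round(share_explained_by_east, 2))  # 0.59
```

So on this accounting East “explains” 59% of the crow’s flight – which shows how little the number by itself tells us.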

## Dumb and dumber – the Real Business Cycles version

31 July, 2012 at 12:19 | Posted in Economics | 3 Comments

Real business cycles theory (RBC) basically says that economic cycles are caused by technology-induced changes in productivity. It says that employment goes up or down because people choose to work more when productivity is high and less when it is low. This is of course nothing but pure nonsense – and how on earth the guys who promoted this theory could be awarded The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel is really beyond comprehension.

In yours truly’s History of Economic Theories (4th ed, 2007, p. 405) it was concluded that

the problem is that it has turned out to be very difficult to empirically verify the theory’s view of economic fluctuations as effects of rational actors’ optimal intertemporal choices … Empirical studies have not been able to corroborate the assumed sensitivity of labour supply to changes in intertemporal relative prices. Most studies rather point to expected changes in real wages having only a rather small influence on the supply of labour.

And this is what Lawrence Summers – in Some Skeptical Observations on Real Business Cycle Theory – had to say about RBC:

The increasing ascendancy of real business cycle theories of various stripes, with their common view that the economy is best modeled as a floating Walrasian equilibrium, buffeted by productivity shocks, is indicative of the depths of the divisions separating academic macroeconomists …

If these theories are correct, they imply that the macroeconomics developed in the wake of the Keynesian Revolution is well confined to the ashbin of history. And they suggest that most of the work of contemporary macroeconomists is worth little more than that of those pursuing astrological science …

The appearance of Ed Prescott’s stimulating paper, “Theory Ahead of Business Cycle Measurement,” affords an opportunity to assess the current state of real business cycle theory and to consider its prospects as a foundation for macroeconomic analysis …

My view is that business cycle models of the type urged on us by Prescott have nothing to do with the business cycle phenomena observed in the United States or other capitalist economies …

Prescott’s growth model is not an inconceivable representation of reality. But to claim that its parameters are securely tied down by growth and micro observations seems to me a gross overstatement. The image of a big loose tent flapping in the wind comes to mind …

In Prescott’s model, the central driving force behind cyclical fluctuations is technological shocks. The propagation mechanism is intertemporal substitution in employment. As I have argued so far, there is no independent evidence from any source for either of these phenomena …

Imagine an analyst confronting the market for ketchup. Suppose she or he decided to ignore data on the price of ketchup. This would considerably increase the analyst’s freedom in accounting for fluctuations in the quantity of ketchup purchased … It is difficult to believe that any explanation of fluctuations in ketchup sales that did not confront price data would be taken seriously, at least by hard-headed economists.

Yet Prescott offers an exercise in price-free economics … Others have confronted models like Prescott’s with data on prices, with what I think can fairly be labeled dismal results. There is simply no evidence to support any of the price effects predicted by the model …

Improvement in the track record of macroeconomics will require the development of theories that can explain why exchange sometimes works well and other times breaks down. Nothing could be more counterproductive in this regard than a lengthy professional detour into the analysis of stochastic Robinson Crusoes.

Thomas Sargent was awarded The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel for 2011 for his “empirical research on cause and effect in the macroeconomy”. In an interview with Sargent in The Region (September 2010), however, one could read the following defense of “modern macro” (my emphasis):

Sargent: I know that I’m the one who is supposed to be answering questions, but perhaps you can tell me what popular criticisms of modern macro you have in mind.

Rolnick: OK, here goes. Examples of such criticisms are that modern macroeconomics makes too much use of sophisticated mathematics to model people and markets; that it incorrectly relies on the assumption that asset markets are efficient in the sense that asset prices aggregate information of all individuals; that the faith in good outcomes always emerging from competitive markets is misplaced; that the assumption of “rational expectations” is wrongheaded because it attributes too much knowledge and forecasting ability to people; that the modern macro mainstay “real business cycle model” is deficient because it ignores so many frictions and imperfections and is useless as a guide to policy for dealing with financial crises; that modern macroeconomics has either assumed away or shortchanged the analysis of unemployment; that the recent financial crisis took modern macro by surprise; and that macroeconomics should be based less on formal decision theory and more on the findings of “behavioral economics.” Shouldn’t these be taken seriously?

Sargent: Sorry, Art, but aside from the foolish and intellectually lazy remark about mathematics, all of the criticisms that you have listed reflect either woeful ignorance or intentional disregard for what much of modern macroeconomics is about and what it has accomplished. That said, it is true that modern macroeconomics uses mathematics and statistics to understand behavior in situations where there is uncertainty about how the future will unfold from the past. But a rule of thumb is that the more dynamic, uncertain and ambiguous is the economic environment that you seek to model, the more you are going to have to roll up your sleeves, and learn and use some math. That’s life.

Are these the words of an empirical macroeconomist? I’ll be dipped! To me it sounds like the same old axiomatic-deductivist mumbo jumbo that parades as economic science today.

Neoclassical economic theory today is in the story-telling business whereby economic theorists create make-believe analogue models of the target system – usually conceived as the real economic system. This modeling activity is considered useful and essential. Since fully-fledged experiments on a societal scale as a rule are prohibitively expensive, ethically indefensible or unmanageable, economic theorists have to substitute experimenting with something else. To understand and explain relations between different entities in the real economy the predominant strategy is to build models and make things happen in these “analogue-economy models” rather than engineering things happening in real economies.

Formalistic deductive “Glasperlenspiel” can be very impressive and seductive. But in the realm of science it ought to be considered of little or no value to simply make claims about the model and lose sight of reality.

Neoclassical economics has long since given up on the real world and contents itself with proving things about thought-up worlds. Empirical evidence only plays a minor role in economic theory, where models largely function as a substitute for empirical evidence. But “facts kick”, as Gunnar Myrdal used to say. Hopefully humbled by the manifest failure of its theoretical pretences, the one-sided, almost religious, insistence on axiomatic-deductivist modeling as the only scientific activity worthy of pursuing in economics will give way to methodological pluralism based on ontological considerations rather than formalistic tractability.

When that day comes, The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel will hopefully be awarded to a real macroeconomist, and not to axiomatic-deductivist modellers like Thomas Sargent and economists of that ilk in the efficient-market-rational-expectations camp.

## Mario Draghi sends a cold shiver down my back

31 July, 2012 at 10:25 | Posted in Economics, Politics & Society | 1 Comment

The ECB is ready to do whatever it takes to preserve the euro.

The euro is already a monumental fiasco.

Draghi’s statement is not a promise. It’s a threat.

## Non-ergodic economics, expected utility and the Kelly criterion (wonkish)

31 July, 2012 at 00:11 | Posted in Economics, Theory of Science & Methodology | 6 Comments

Suppose I want to play a game. Let’s say we are tossing a coin. If heads comes up, I win a dollar, and if tails comes up, I lose a dollar. Suppose further that I believe I know that the coin is asymmetrical and that the probability of getting heads (p) is greater than 50% – say 60% (0.6) – while the bookmaker assumes that the coin is totally symmetric. How much of my bankroll (T) should I optimally invest in this game?

A strict neoclassical utility-maximizing economist would suggest that my goal should be to maximize the expected value of my bankroll (wealth), and according to this view, I ought to bet my entire bankroll.

Does that sound rational? Most people would answer no to that question. The risk of losing is so high that already after a few games I would, with high likelihood, have lost and thereby gone bankrupt – the expected number of games until my first loss is 1/(1 – p), which in this case equals 2.5. The expected-value-maximizing economist does not seem to have a particularly attractive approach.
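
The 1/(1 – p) figure is just the mean of a geometric waiting time, and is easy to check by simulation. A small sketch (Python; the seed and number of trials are arbitrary choices of mine):

```python
import random

random.seed(1)
P_WIN = 0.6
TRIALS = 100_000

# Count games up to and including the first loss. For a geometric
# waiting time this averages 1 / (1 - p) = 1 / 0.4 = 2.5 games.
total_games = 0
for _ in range(TRIALS):
    games = 1
    while random.random() < P_WIN:   # keep playing while we keep winning
        games += 1
    total_games += games

print(round(total_games / TRIALS, 1))   # ~2.5
```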

So what’s the alternative? One possibility is to apply the so-called Kelly strategy – named after the American physicist and information theorist John L. Kelly, who in the article A New Interpretation of Information Rate (1956) suggested this criterion for optimizing the size of the bet – under which the optimum is to invest a specific fraction (x) of wealth (T) in each game. How do we arrive at this fraction?

When I win, I have (1 + x) times more than before, and when I lose, (1 – x) times less. After n rounds, when I have won v times and lost n – v times, my new bankroll (W) is

(1) W = T (1 + x)^v (1 – x)^(n – v)

The bankroll increases multiplicatively – “compound interest” – and the long-term average growth rate for my wealth can then be easily calculated by taking the logarithms of (1), which gives

(2) log (W/ T) = v log (1 + x) + (n – v) log (1 – x).

If we divide both sides by n we get

(3) [log (W / T)] / n = [v log (1 + x) + (n – v) log (1 – x)] / n

The left hand side now represents the average growth rate (g) in each game. On the right hand side the ratio v/n is equal to the percentage of bets that I won, and when n is large, this fraction will be close to p. Similarly, (n – v)/n is close to (1 – p). When the number of bets is large, the average growth rate is

(4) g = p log (1 + x) + (1 – p) log (1 – x).

Now we can easily determine the value of x that maximizes g:

(5) dg/dx = d [p log (1 + x) + (1 – p) log (1 – x)]/dx = p/(1 + x) – (1 – p)/(1 – x) = 0

Solving for x gives

(6) x = p – (1 – p) = 2p – 1

Since p is the probability that I will win, and (1 – p) is the probability that I will lose, the Kelly strategy says that to optimize the growth rate of your bankroll (wealth) you should invest a fraction of the bankroll equal to the difference between the likelihoods that you will win or lose. In our example, this means that in each game I should bet the fraction x = 0.6 – (1 – 0.6) = 0.2 – that is, 20% of my bankroll. The optimal average growth rate becomes

(7) 0.6 log (1.2) + 0.4 log (0.8) ≈ 0.02.
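
Steps (4)–(7) can be verified numerically. A short sketch (Python; natural logarithms, as in the growth-rate formula above):

```python
import math

p = 0.6                          # assumed probability of winning a toss
x_kelly = p - (1 - p)            # optimal fraction from (6): 2p - 1 = 0.2

def g(x):
    """Average growth rate per game, equation (4)."""
    return p * math.log(1 + x) + (1 - p) * math.log(1 - x)

g_opt = g(x_kelly)               # ~0.0201, the ~0.02 of equation (7)

# Nearby fractions grow more slowly, so x = 0.2 is indeed the maximum.
assert g(0.1) < g_opt and g(0.3) < g_opt

print(round(x_kelly, 2))               # 0.2
print(round(g_opt, 2))                 # 0.02
print(round(math.exp(10 * g_opt), 2))  # 1.22 -- expected factor after 10 games
```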

If I bet 20% of my wealth on each toss of the coin, I will after 10 games on average have (1.2^0.6 × 0.8^0.4)^10 ≈ 1.22 times as much as when I started.

This game strategy will give us an outcome in the long run that is better than if we use a strategy built on the neoclassical economic theory of choice under uncertainty (risk) – expected-value maximization. If we bet all our wealth in each game we will most likely lose our fortune, but because with low probability we end up with a very large fortune, the expected value is still high. For a real-life player – for whom this type of ensemble average is of very little benefit – it is more relevant to look at the time average of what he may be expected to win (in our game the two averages coincide only if we assume that the player has a logarithmic utility function). What good does it do me if tossing the coin maximizes an expected value when I might have gone bankrupt after four games played? If I try to maximize the expected value, the probability of bankruptcy soon gets close to one. Better then to invest 20% of my wealth in each game and maximize my long-term average wealth growth!
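
A simple Monte Carlo comparison illustrates the point: the median all-in player goes bankrupt, while the median Kelly player grows steadily. This is only an illustrative sketch – the number of games, the number of players and the seed are arbitrary choices of mine:

```python
import random

random.seed(0)
P_WIN, N_GAMES, N_PLAYERS = 0.6, 100, 10_000

def median_final_bankroll(fraction):
    """Median final bankroll (starting from 1) over many simulated players."""
    finals = []
    for _ in range(N_PLAYERS):
        wealth = 1.0
        for _ in range(N_GAMES):
            if random.random() < P_WIN:
                wealth *= 1 + fraction
            else:
                wealth *= 1 - fraction
            if wealth == 0.0:      # betting everything and losing once: broke
                break
        finals.append(wealth)
    finals.sort()
    return finals[N_PLAYERS // 2]

print(median_final_bankroll(1.0))          # 0.0 -- the typical all-in player is bankrupt
print(median_final_bankroll(0.2) > 1.0)    # True -- the typical Kelly player has grown
```

The mean bankroll of the all-in players is still huge (a handful of astronomically lucky runs dominate it), which is exactly the ensemble-versus-time point of the text.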

On a more economic-theoretical level, the Kelly strategy highlights the problems with the neoclassical theory of expected utility that I have raised before (e.g. in Why expected utility theory is wrong).

When applied to the neoclassical theory of expected utility, one thinks in terms of “parallel universes” and asks what the expected return of an investment is, calculated as an average over these “parallel universes”. In our coin toss example, it is as if one supposes that various “I”s are tossing a coin and that the losses of many of them will be offset by the huge profits that one of these “I”s makes. But this ensemble average does not work for an individual, for whom a time average better reflects the experience made in the “non-parallel universe” in which we live.

The Kelly strategy gives a more realistic answer, where one thinks in terms of the only universe we actually live in, and asks what the expected return of an investment is, calculated as an average over time.

Since we cannot go back in time – entropy and the “arrow of time” make this impossible – and the bankruptcy option is always at hand (extreme events and “black swans” are always possible), we have nothing to gain from thinking in terms of ensembles.

Actual events follow a fixed temporal order, and are often linked in a multiplicative process (as e.g. investment returns with “compound interest”), which is basically nonergodic.

Instead of arbitrarily assuming that people have a certain type of utility function – as in the neoclassical theory – the Kelly criterion shows that we can obtain a less arbitrary and more accurate picture of real people’s decisions and actions by basically assuming that time is irreversible. When the bankroll is gone, it’s gone. The fact that in a parallel universe it could conceivably have been refilled is of little comfort to those who live in the one and only possible world that we call the real world.

Our coin toss example can be applied to more traditional economic issues. If we think of an investor, we can basically describe his situation in terms of our coin toss. What fraction (x) of his assets (T) should an investor – who is about to make a large number of repeated investments – bet on his feeling that he can evaluate an investment better (p = 0.6) than the market (p = 0.5)? The greater the x, the greater the leverage – but also the greater the risk. Since p is the probability that his investment valuation is correct and (1 – p) is the probability that the market’s valuation is correct, the Kelly strategy says that he optimizes the growth rate of his investments by investing a fraction of his assets equal to the difference between the probabilities that he will “win” or “lose”. In our example this means that at each investment opportunity he should invest the fraction x = 0.6 – (1 – 0.6), i.e. 20% of his assets. The optimal average growth rate of investment is then, as in (7), about 2% per period (0.6 log (1.2) + 0.4 log (0.8) ≈ 0.02).

Kelly’s criterion shows that because we cannot go back in time, we should not take excessive risks. High leverage increases the risk of bankruptcy. This should also be a warning for the financial world, where the constant quest for greater and greater leverage – and risks – creates extensive and recurrent systemic crises. A more appropriate level of risk-taking is a necessary ingredient in any policy that aims to curb excessive risk-taking.

## Keynes and Knight on uncertainty – ontology vs. epistemology

29 July, 2012 at 20:41 | Posted in Economics, Theory of Science & Methodology | 10 Comments

A couple of weeks ago yours truly had an interesting discussion – on the Real-World Economics Review Blog – with Paul Davidson, founder and editor of the Journal of Post Keynesian Economics, on uncertainty and ergodicity. It all started with me commenting on Davidson’s article Is economics a science? Should economics be rigorous? :

LPS:

Davidson’s article is a nice piece – but ergodicity is a difficult concept that many students of economics have trouble understanding. To understand real-world “non-routine” decisions and unforeseeable changes in behaviour, ergodic probability distributions are of no avail. In a world full of genuine uncertainty – where real historical time rules the roost – the probabilities that ruled the past are not those that will rule the future.

Time is what prevents everything from happening at once. To simply assume that economic processes are ergodic and concentrate on ensemble averages – and a fortiori in any relevant sense timeless – is not a sensible way for dealing with the kind of genuine uncertainty that permeates open systems such as economies.

When you assume the economic processes to be ergodic, ensemble and time averages are identical. Let me give an example: Assume we have a market with an asset priced at 100 €. Then imagine the price first goes up by 50% and then later falls by 50%. The ensemble average for this asset would be 100 € – because we here envision two parallel universes (markets), where in one universe (market) the asset price falls by 50% to 50 €, and in the other it rises by 50% to 150 €, giving an average of 100 € ((150 + 50)/2). The time average for this asset would be 75 € – because we here envision one universe (market) where the asset price first rises by 50% to 150 € and then falls by 50% to 75 € (0.5 · 150).

From the ensemble perspective nothing really, on average, happens. From the time perspective lots of things really, on average, happen.

Assuming ergodicity there would have been no difference at all.
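
The little asset-price example can be replayed in a few lines (Python; the numbers are exactly those used in the comment above):

```python
start = 100.0   # asset priced at 100 €

# Ensemble average: two parallel markets after one period,
# one up 50% (-> 150 €), one down 50% (-> 50 €).
ensemble_average = (start * 1.5 + start * 0.5) / 2   # 100.0

# Time path: ONE market that goes up 50% and then down 50%.
time_path_end = start * 1.5 * 0.5                    # 75.0

print(ensemble_average, time_path_end)   # 100.0 75.0
```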

Just in case you think this is just an academic quibble without repercussion to our real lives, let me quote from an article of physicist and mathematician Ole Peters in the Santa Fe Institute Bulletin from 2009 – “On Time and Risk” – that makes it perfectly clear that the flaw in thinking about uncertainty in terms of “rational expectations” and ensemble averages has had real repercussions on the functioning of the financial system:

“In an investment context, the difference between ensemble averages and time averages is often small. It becomes important, however, when risks increase, when correlation hinders diversification, when leverage pumps up fluctuations, when money is made cheap, when capital requirements are relaxed. If reward structures—such as bonuses that reward gains but don’t punish losses, and also certain commission schemes—provide incentives for excessive risk, problems arise. This is especially true if the only limits to risk-taking derive from utility functions that express risk preference, instead of the objective argument of time irreversibility. In other words, using the ensemble average without sufficiently restrictive utility functions will lead to excessive risk-taking and eventual collapse. Sound familiar?”

PD:

Lars, if the stochastic process is ergodic, then for an infinite number of realizations the time and space (ensemble) averages will coincide. An ensemble is a set of samples drawn at a fixed point in time from a universe of realizations. For finite realizations, the time and space statistical averages tend to converge (with a probability of one) the more data one has.

Even in physics there are some processes that physicists recognize are governed by nonergodic stochastic processes. [See A. M. Yaglom, An Introduction to Stationary Random Functions (1962, Prentice-Hall).]

I do object to the quote from Ole Peters’ exposition where he talks about “when risks increase”. Nonergodic systems are not about increasing or decreasing risk in the sense of the probability distribution variances differing. They are about indicating that any probability distribution based on past data cannot be reliably used to indicate the probability distribution governing any future outcome. In other words, even if we could know that the future probability distribution will have a smaller variance (“lower risks”) than the past calculated probability distribution, the past distribution is still not a reliable guide to future statistical means and other moments around the means.

LPS:

Paul, re nonergodic processes in physics I would even say that most processes definitely are nonergodic. Re Ole Peters, I totally agree that what is important about the fact that real social and economic processes are nonergodic is that uncertainty – not risk – rules the roost. That was something both Keynes and Knight basically said in their 1921 books. But I still think that Peters’ discussion is a good example of how thinking about uncertainty in terms of “rational expectations” and “ensemble averages” has had seriously bad repercussions on the financial system.

PD:

Lars, there is a difference between the uncertainty concept developed by Keynes and the one developed by Knight.

As I have pointed out, Keynes’s concept of uncertainty involves a nonergodic stochastic process. On the other hand, Knight’s uncertainty – like Taleb’s black swan – assumes an ergodic process. The difference is that for Knight (and Taleb) the uncertain outcome lies so far out in the tail of the unchanging (over time) probability distribution that it appears empirically to be [in Knight’s terminology] “unique”. In other words, like Taleb’s black swan, the uncertain outcome already exists in the probability distribution but is so rarely observed that it may take several lifetimes for one observation – making that observation “unique”.

In the latest edition of Taleb’s book, he was forced to concede that philosophically there is a difference between a nonergodic system and a black swan ergodic system – but then waves away the problem with the claim that the difference is irrelevant.

LPS:

Paul, on the whole, I think you’re absolutely right on this. Knight’s uncertainty concept has an epistemological founding and Keynes’s definitely an ontological founding. Of course this also has repercussions on the issue of ergodicity in a strict methodological and mathematical-statistical sense. I think Keynes’s view is the most warranted of the two.

BUT – from a “practical” point of view I have to agree with Taleb. Because if there is no reliable information on the future, whether you talk of epistemological or ontological uncertainty, you can’t calculate probabilities.

The most interesting and far-reaching difference between the epistemological and the ontological view is that if you subscribe to the former, Knightian view – as Taleb and “black swan” theorists basically do – you open the door to the mistaken belief that with better information and greater computing power we should somehow always be able to calculate probabilities and describe the world as an ergodic universe. As both you and Keynes convincingly have argued, that is ontologically just not possible.

PD:

Lars, your last sentence says it all. If you believe it is an ergodic system and epistemology is the only problem, then you should urge more transparency, better data collection, hiring more “quants” on Wall Street to generate “better” risk-management computer programs, etc. – and above all keep the government out of regulating financial markets – since all the government can do is foul up the outcome that the ergodic process is ready to deliver.

Long live Stiglitz and the call for transparency to end asymmetric information — and permit all to know the epistemological solution for the ergodic process controlling the economy.

Or as Milton Friedman would say, those who make decisions “as if” they knew the ergodic stochastic process create an optimum market solution – while those who make mistakes in trying to figure out the ergodic process are like the dinosaurs, doomed to fail and die off – leaving only the survival of the fittest for a free market economy to prosper on. The proof, presumably, is that all those 1% fat-cat CEO managers in the banking business receive such large salaries for their “correct” decisions involving financial assets.

Alternatively, if the financial and economic system is nonergodic, then there is a positive role for government to regulate what decision makers can do so as to prevent them from mass destruction of themselves and other innocent bystanders – and also for government to take positive action when the herd behavior of decision makers is causing the economy to run off the cliff.

So this distinction between ergodic and nonergodic is essential if we are to build institutional structures that make running off the cliff almost impossible – and for the government to be ready to take action when some innovative fool(s) discovers a way to get around institutional barriers and starts to run the economy off the cliff.

To Keynes the source of uncertainty lay in the nature of the real – nonergodic – world. It had to do, not only – or primarily – with the epistemological fact of us not knowing the things that today are unknown, but rather with the much deeper and far-reaching ontological fact that there often is no firm basis on which we can form quantifiable probabilities and expectations.

## Sporadic blogging

26 July, 2012 at 09:32 | Posted in Varia | Comments Off on Sporadic blogging

Touring again – and I’m supposed to be on holiday! Regular blogging will be resumed early next week.

## The Great Obfuscation

26 July, 2012 at 09:26 | Posted in Economics | Comments Off on The Great Obfuscation

As I commented yesterday, the Spanish and Italian prospects for the future look desperately bleak, and there is today every sign that the euro crisis is spinning out of control.

And still

“the euro is absolutely not in danger and the single currency is irreversible”

according to European Central Bank President Mario Draghi.

As if this wasn’t enough, Financial Times now reports:

Germany on Tuesday threw its considerable weight behind the reform and austerity programme of the Spanish government, in the face of a continuing surge in the cost of borrowing for Madrid, and strong protests against its spending cuts.

A joint statement by Wolfgang Schäuble, German finance minister, and Luis de Guindos, Spanish economy minister, condemned the high interest rates demanded for the sale of Spanish bonds as failing to reflect “the fundamentals of the Spanish economy, its growth potential and the sustainability of its public debt”.

Mr de Guindos flew to Berlin for the talks with the German finance minister as Miguel Angel Fernandez Ordonez, former governor of the Spanish central bank, launched a fierce criticism of his government in the Spanish parliament.

“In the first half of the year we have witnessed a collapse in confidence in Spain and its financial system to levels unimaginable seven months ago,” he said. “Now we are not only worse than Italy, but worse than Ireland, a country that has been rescued.”

though this be madness, yet there’s method in it.

## Oh dear, oh dear, Wren-Lewis gets it so wrong – again!

25 July, 2012 at 21:07 | Posted in Economics | 10 Comments

Commenting once again on my critique (here and here) of microfounded “New Keynesian” macroeconomics in general, and on Wren-Lewis himself more specifically, Wren-Lewis writes on his blog (italics added):

Lars Syll gave a list recently [on heterodox alternatives to the microfounded macroeconomics that Wren-Lewis in an earlier post had intimated didn’t really exist]: Hyman Minsky, Michal Kalecki, Sidney Weintraub, Johan Åkerman, Gunnar Myrdal, Paul Davidson, Fred Lee, Axel Leijonhufvud, Steve Keen.

I cannot recall reading papers or texts by Akerman or Lee, but I have read at least something of all the others. I was also taught Neo-Ricardian economics when young, so I have read plenty by Robinson, Sraffa, Pasinetti etc. I do not know much about the Austrians.

But actually my concern is not what any particular author thought, but with the divide I talked about. Mainstream economists can and in some cases have learnt a lot from some of these authors, and a number of mainstream economists have acknowledged this. (One of these days I want to write a paper arguing that Leijonhufvud was the first New Keynesian economist.)

Axel Leijonhufvud the first “New Keynesian”? No way! This is so wrong, so wrong.

The last time I met Axel was in Roskilde and Copenhagen back in April 2008. We were both invited keynote speakers at the conference “Keynes 125 Years – What Have We Learned?” Axel’s speech was later published as Keynes and the crisis and contains the following thought-provoking passages:

So far I have argued that recent events should force us to re-examine recent monetary policy doctrine. Do we also need to reconsider modern macroeconomic theory in general? I should think so. Consider briefly a few of the issues.

The real interest rate … The problem is that the real interest rate does not exist in reality but is a constructed variable. What does exist is the money rate of interest from which one may construct a distribution of perceived real interest rates given some distribution of inflation expectations over agents. Intertemporal non-monetary general equilibrium (or finance) models deal in variables that have no real world counterparts. Central banks have considerable influence over money rates of interest as demonstrated, for example, by the Bank of Japan and now more recently by the Federal Reserve …

The representative agent. If all agents are supposed to have rational expectations, it becomes convenient to assume also that they all have the same expectation and thence tempting to jump to the conclusion that the collective of agents behaves as one. The usual objection to representative agent models has been that it fails to take into account well-documented systematic differences in behaviour between age groups, income classes, etc. In the financial crisis context, however, the objection is rather that these models are blind to the consequences of too many people doing the same thing at the same time, for example, trying to liquidate very similar positions at the same time. Representative agent models are peculiarly subject to fallacies of composition. The representative lemming is not a rational expectations intertemporal optimising creature. But he is responsible for the fat tail problem that macroeconomists have the most reason to care about …

For many years now, the main alternative to Real Business Cycle Theory has been a somewhat loose cluster of models given the label of New Keynesian theory. New Keynesians adhere on the whole to the same DSGE modeling technology as RBC macroeconomists but differ in the extent to which they emphasise inflexibilities of prices or other contract terms as sources of short-term adjustment problems in the economy. The “New Keynesian” label refers back to the “rigid wages” brand of Keynesian theory of 40 or 50 years ago. Except for this stress on inflexibilities this brand of contemporary macroeconomic theory has basically nothing Keynesian about it.

The obvious objection to this kind of return to an earlier way of thinking about macroeconomic problems is that the major problems that have had to be confronted in the last twenty or so years have originated in the financial markets – and prices in those markets are anything but “inflexible”. But there is also a general theoretical problem that has been festering for decades with very little in the way of attempts to tackle it. Economists talk freely about “inflexible” or “rigid” prices all the time, despite the fact that we do not have a shred of theory that could provide criteria for judging whether a particular price is more or less flexible than appropriate to the proper functioning of the larger system. More than seventy years ago, Keynes already knew that a high degree of downward price flexibility in a recession could entirely wreck the financial system and make the situation infinitely worse. But the point of his argument has never come fully to inform the way economists think about price inflexibilities …

I began by arguing that there are three things we should learn from Keynes … The third was to ask whether events proved that existing theory needed to be revised. On that issue, I conclude that dynamic stochastic general equilibrium theory has shown itself an intellectually bankrupt enterprise. But this does not mean that we should revert to the old Keynesian theory that preceded it (or adopt the New Keynesian theory that has tried to compete with it). What we need to learn from Keynes, instead, are these three lessons about how to view our responsibilities and how to approach our subject.

Axel Leijonhufvud the first “New Keynesian” economist? Forget it!

## The euro “irreversible”? I’ll be dipped!

25 July, 2012 at 10:27 | Posted in Economics | 1 Comment

The Spanish and Italian yield curves look worrying all over. Ten-year yields over 7% day after day are no good sign. On the contrary. Everything indicates that these countries are in for a long period of recession. They have no growth. Debt servicing costs are rising. And the 10-year spread against Germany widens and widens. Today there is every sign that the euro crisis is spinning out of control – and that the GIIPS countries’ lack of monetary sovereignty will lead to the downfall of the euro.

And still, according to European Central Bank President Mario Draghi, the euro zone is not in danger of breaking up. Interviewed in Le Monde, Draghi says that

the euro is absolutely not in danger and the single currency is irreversible.

I’ve lost count of how many times we’ve heard that before. And still things keep getting worse and worse …

## My favourite book on Keynes

24 July, 2012 at 22:10 | Posted in Economics | 3 Comments

Keynes’s economic theory is intimately connected with the epistemological and methodological view he presented already in his Treatise on Probability. To Keynes, economic theory is always unsatisfactory if it is based on the scientist distancing himself from a reality characterized by our knowledge of the future being fluctuating, vague and uncertain. The main difference between Keynes and neoclassical macroeconomics is – on the deepest level – centered around this point.

In a world in equilibrium there is no difference between now and then. In such a world there is no need for Keynes. But it is also a fact that the cradle of equilibrium analysis silences all really interesting economic questions.

## Amartya Sen on the state of modern economics

24 July, 2012 at 21:29 | Posted in Economics | 3 Comments

Interviewed by Olaf Storbeck and Dorit Hess earlier this year, Nobel laureate Amartya Sen made some interesting remarks on the state of modern economics:

Question: Professor Sen, do you have the impression that economists and economic policy makers are learning the right lessons from the most severe economic and financial crisis since the Great Depression?

Answer: I don’t think that at all. I’m quite disappointed by the nature of economic thinking as well as social thinking that connects economics with politics.

You make a lot of references to old economic thinkers like Smith, Keynes and so on. However, if you look at the current economic research that is published in the journals and taught at universities, the history of economic thought does not play a big role anymore…

Yes, absolutely. The history of economic thought has been woefully neglected by the profession in the last decades. This has been one of the major mistakes of the profession. One of the earliest reminders that we are going in the wrong direction has come from Kenneth Arrow about 30 years ago when he said: These days, I get surprised when I find the students don’t seem to know any economics that was written 25 or 30 years ago.

Is there any hope that this trend can be reversed?

Yes, I’m quite optimistic in this regard. I get the impression that this seems to be getting corrected right now. I’m particularly delighted that the corrective has come to a great extent from student interest. I’m very struck by the fact that at the university where I teach – Harvard – the demand for more history of economic thought has mostly come from students. As a result there is a lot more attempt by the department of economics as well as history and government to look for the history of political economy. Last year, along with my wife Emma Rothschild, I offered a course on Adam Smith’s philosophy and political economy. It drew a lot of interest and we got some of the finest students at Harvard.

Do you think the focus on mathematics in current economics is the flip side of the neglect of the history of economic thought?

I don’t think that there is any conflict between mathematical reasoning and being interested in the history of thought. Many of our early thinkers were quite mathematical. The connection between mathematics and economics is very strong, and there is no reason to be ashamed of it. What is to be avoided is to be concentrated only on mathematical economics. We must not neglect the insights that come from parts of the subject where mathematics is not sensible to use and different kinds of reasoning are useful. I don’t think the conflict is between mathematics and other kinds of methods. The conflict is between taking an integrated, broad, comprehensive view as opposed to a narrow view whether it is mathematical or anti-mathematical.

## Rigour – a poor substitute for relevance and realism

24 July, 2012 at 18:07 | Posted in Economics, Theory of Science & Methodology | 2 Comments

The mathematization of economics since WW II has made mainstream – neoclassical – economists more or less obsessed with formal, deductive-axiomatic models. Confronted with the critique that they do not solve real problems, they often react as Saint-Exupéry‘s Great Geographer, who, in response to the questions posed by The Little Prince, says that he is too occupied with his scientific work to be able to say anything about reality. Confronted with economic theory’s lack of relevance and inability to tackle real problems, one retreats into the wonderful world of economic models. One goes into the “shack of tools” – as my old mentor Erik Dahmén used to say – and stays there. While the economic problems in the world around us steadily increase, one is rather happily playing along with the latest toys in the mathematical toolbox.

Paul Krugman has been having similar critical thoughts on our “queen of social science”. In a piece called Irregular Economics he writes:

Why, exactly, are we to have such faith in “regular economics”? What is the compelling evidence that the vision of a competitive, efficient economy allocating resources to the right uses is actually a good description of the world we live in?

I mean, it’s a lovely model, and one I, like everyone else in economics, use a lot. But I would not have said that it’s a model backed by lots of evidence. We do know that demand curves generally slope down; it’s a lot harder to give good examples of supply curves that slope up (as a textbook author, believe me, I’ve looked); and it’s a very long way from there to the vision of Pareto efficiency and all that which Barro wants us to take as the true economics. Realistically, imperfect competition, market failure, and more are everywhere.

Meanwhile, there’s actually a lot of evidence for a broadly Keynesian view of the world. Not, to be fair, for fiscal policy, mainly because clean fiscal experiments are rare. But there’s huge evidence for sticky prices, lots of evidence that monetary shocks have real effects — and it’s hard to produce a coherent model in which that’s true that doesn’t also leave room for fiscal policy.

In short, there’s no reason at all to consider microeconomics the “real” economics and macroeconomics some kind of flaky impostor. Yes, micro is a lot more rigorous — but if it’s rigorously wrong, who cares?

Instead of making the model the message, I think we are better served by economists who more than anything else try to contribute to solving real problems. And then the motto of John Maynard Keynes is more valid than ever:

It is better to be vaguely right than precisely wrong

## Causality and economics (wonkish)

24 July, 2012 at 10:54 | Posted in Economics, Theory of Science & Methodology | 1 Comment

A few years ago Armin Falk and James Heckman published an acclaimed article titled “Lab Experiments Are a Major Source of Knowledge in the Social Sciences” in the journal Science. The authors – both renowned economists – argued that both field experiments and laboratory experiments are basically facing the same problems in terms of generalizability and external validity – and that a fortiori it is impossible to say that one would be better than the other.

What strikes me when reading both Falk & Heckman and advocates of field experiments – such as John List and Steven Levitt – is that field studies and experiments are both very similar to theoretical models. They all share the same basic problem – they are built on rather artificial conditions and struggle with the “trade-off” between internal and external validity. The more artificial the conditions, the greater the internal validity, but also the less the external validity. The more we rig experiments/field studies/models to avoid “confounding factors”, the less the conditions are reminiscent of the real “target system”. To that extent, I also believe that Falk & Heckman are right in their comments on the discussion of field vs. lab experiments in terms of realism – the nodal issue is not about that, but basically about how economists using different isolation strategies in different “nomological machines” attempt to learn about causal relationships. In contrast to both Falk & Heckman and advocates of field experiments such as List and Levitt, I doubt the generalizability of both research strategies, because the probability is high that causal mechanisms differ between contexts, and this lack of homogeneity/stability/invariance doesn’t give us warranted export licenses to the “real” societies or economies.

If you mainly conceive of experiments or field studies as heuristic tools, the dividing line between, say, Falk & Heckman and List or Levitt is probably difficult to perceive.

But if we see experiments or field studies as theory tests or models that ultimately aspire to say something about the real “target system”, then the problem of external validity is central (and was for a long time also a key reason why behavioural economists had trouble getting their research results published).

Assume that you have examined how the work performance of Chinese workers (A) is affected by a certain “treatment” (B). How can we extrapolate/generalize to new samples outside the original population (e.g. to the US)? How do we know that any replication attempt “succeeds”? How do we know when these replicated experimental results can be said to justify inferences made in samples from the original population? If, for example, P(A|B) is the conditional density function for the original sample, and we are interested in making an extrapolative prediction of E[P(A|B)], how can we know that the new sample’s density function is identical with the original? Unless we can give some really good argument for this being the case, inferences built on P(A|B) do not really say anything about the target system’s P′(A|B).

As I see it, this is the heart of the matter. External validity/extrapolation/generalization is founded on the assumption that we can make inferences based on P(A|B) that are exportable to other populations for which P′(A|B) applies. Sure, if one can convincingly show that P and P′ are similar enough, the problems are perhaps surmountable. But arbitrarily introducing functional specification restrictions of the type invariance/stability/homogeneity is, at least for an epistemological realist, far from satisfactory. And often it is – unfortunately – exactly this that I see when I look at neoclassical economists’ models/experiments/field studies.
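The point about P(A|B) versus P′(A|B) can be made concrete with a small simulation. The sketch below is purely illustrative – the populations, the effect sizes, and the idea of summarizing the “treatment” as a single shift in a conditional mean are my own invented assumptions, not anything taken from the studies discussed:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Original population P: treatment B raises performance A by 2 units on average.
# (Invented numbers, for illustration only.)
b_orig = rng.integers(0, 2, n)
a_orig = 1.0 + 2.0 * b_orig + rng.normal(0, 1, n)

# Target population P': same treatment, but the causal mechanism differs
# (effect 0.5 instead of 2.0) -- nothing in the original sample reveals this.
b_tgt = rng.integers(0, 2, n)
a_tgt = 1.0 + 0.5 * b_tgt + rng.normal(0, 1, n)

def treatment_effect(a, b):
    """Difference in conditional means E[A|B=1] - E[A|B=0]."""
    return a[b == 1].mean() - a[b == 0].mean()

effect_p = treatment_effect(a_orig, b_orig)      # close to 2.0
effect_p_prime = treatment_effect(a_tgt, b_tgt)  # close to 0.5

print(f"effect estimated in P:  {effect_p:.2f}")
print(f"actual effect in P':    {effect_p_prime:.2f}")
```

Nothing internal to the first sample warns us that exporting its estimate to P′ would be badly wrong – that warrant has to come from outside the data, which is precisely the external-validity problem.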

By this I do not mean to say that empirical methods per se are so problematic that they can never be used. On the contrary, I am basically – though not without reservations – in favour of the increased use of experiments and field studies within economics. Not least as an alternative to completely barren “bridge-less” axiomatic-deductive theory models. My criticism is more about aspiration levels and what we believe that we can achieve with our mediational epistemological tools and methods in the social sciences.

Many ‘experimentalists’ claim that it is easy to replicate experiments under different conditions and therefore a fortiori easy to test the robustness of experimental results. But is it really that easy? If, in the example given above, we run a test and find that our predictions were not correct – what can we conclude? That B “works” in China but not in the US? Or that B “works” in a backward agrarian society, but not in a post-modern service society? That B “worked” in the field study conducted in 2008 but not in 2012? Population selection is almost never simple. Had the problem of external validity only been about inference from sample to population, this would be no critical problem. But the really interesting inferences are those we try to make from specific labs/experiments/fields to the specific real-world situations/institutions/structures that we are interested in understanding or (causally) explaining. And then the population problem is more difficult to tackle.

Everyone – both ‘lab’ and ‘field’ experimentalists – should consider the following lines from David Salsburg’s The Lady Tasting Tea (Henry Holt 2001:146):

In Kolmogorov’s axiomatization of probability theory, we assume there is an abstract space of elementary things called ‘events’ … If a measure on the abstract space of events fulfills certain axioms, then it is a probability. To use probability in real life, we have to identify this space of events and do so with sufficient specificity to allow us to actually calculate probability measurements on that space … Unless we can identify Kolmogorov’s abstract space, the probability statements that emerge from statistical analyses will have many different and sometimes contrary meanings.

Evidence-based theories and policies are highly valued nowadays. Randomization is supposed to best control for bias from unknown confounders. The received opinion is that evidence based on randomized experiments therefore is the best.

More and more economists have lately also come to advocate randomization as the principal method for ensuring valid causal inferences.

Renowned econometrician Ed Leamer has responded to these claims, maintaining that randomization is not sufficient, and that the hopes of a better empirical and quantitative macroeconomics are to a large extent illusory. Randomization – just as econometrics – promises more than it can deliver, basically because it requires assumptions that in practice are not possible to maintain:

We economists trudge relentlessly toward Asymptopia, where data are unlimited and estimates are consistent, where the laws of large numbers apply perfectly and where the full intricacies of the economy are completely revealed. But it’s a frustrating journey, since, no matter how far we travel, Asymptopia remains infinitely far away. Worst of all, when we feel pumped up with our progress, a tectonic shift can occur, like the Panic of 2008, making it seem as though our long journey has left us disappointingly close to the State of Complete Ignorance whence we began.

The pointlessness of much of our daily activity makes us receptive when the Priests of our tribe ring the bells and announce a shortened path to Asymptopia … We may listen, but we don’t hear, when the Priests warn that the new direction is only for those with Faith, those with complete belief in the Assumptions of the Path. It often takes years down the Path, but sooner or later, someone articulates the concerns that gnaw away in each of us and asks if the Assumptions are valid … Small seeds of doubt in each of us inevitably turn to despair and we abandon that direction and seek another …

Ignorance is a formidable foe, and to have hope of even modest victories, we economists need to use every resource and every weapon we can muster, including thought experiments (theory), and the analysis of data from nonexperiments, accidental experiments, and designed experiments. We should be celebrating the small genuine victories of the economists who use their tools most effectively, and we should dial back our adoration of those who can carry the biggest and brightest and least-understood weapons. We would benefit from some serious humility, and from burning our “Mission Accomplished” banners. It’s never gonna happen.

Part of the problem is that we data analysts want it all automated. We want an answer at the push of a button on a keyboard … Faced with the choice between thinking long and hard versus pushing the button, the single button is winning by a very large margin.

Let’s not add a “randomization” button to our intellectual keyboards, to be pushed without hard reflection and thought.

Especially when it comes to questions of causality, randomization is nowadays considered some kind of “gold standard”. Everything has to be evidence-based, and the evidence has to come from randomized experiments.

But just as econometrics, randomization is basically a deductive method. Given the assumptions (such as manipulability, transitivity, Reichenbach probability principles, separability, additivity, linearity, etc.), these methods deliver deductive inferences. The problem, of course, is that we will never completely know when the assumptions are right. [And although randomization may contribute to controlling for confounding, it does not guarantee it, since genuine randomness presupposes infinite experimentation and we know all real experimentation is finite. And even if randomization may help to establish average causal effects, it says nothing of individual effects unless homogeneity is added to the list of assumptions.] Real target systems are seldom epistemically isomorphic to our axiomatic-deductive models/systems, and even if they were, we still have to argue for the external validity of the conclusions reached from within these epistemically convenient models/systems. Causal evidence generated by randomization procedures may be valid in “closed” models, but what we usually are interested in is causal evidence in the real target system we happen to live in.
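The bracketed point about average versus individual effects can be illustrated with a minimal simulation. All numbers here are invented assumptions (in particular, the normally distributed individual treatment effect), chosen only to show that a randomized experiment can recover the average effect while remaining silent about individuals:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Heterogeneous individual treatment effects: mean 1.0, widely dispersed.
# (An invented distribution, purely for illustration.)
individual_effects = rng.normal(1.0, 3.0, n)

# Randomized assignment to treatment and control
treated = rng.integers(0, 2, n).astype(bool)
baseline = rng.normal(0, 1, n)
outcome = baseline + np.where(treated, individual_effects, 0.0)

# The experiment recovers the *average* treatment effect ...
ate_hat = outcome[treated].mean() - outcome[~treated].mean()
print(f"estimated average effect: {ate_hat:.2f}")   # close to 1.0

# ... but says nothing about individuals: a large share are actually harmed
share_harmed = (individual_effects < 0).mean()
print(f"share with negative individual effect: {share_harmed:.2f}")
```

Only by adding homogeneity to the list of assumptions would the average effect tell us anything about any particular individual – which is exactly the point made in the bracketed passage above.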

When does a conclusion established in population X hold for target population Y? Only under very restrictive conditions!

Science philosopher Nancy Cartwright has succinctly summarized the value of randomization. In The Lancet 23/4 2011 she states:

But recall the logic of randomized control trials … [T]hey are ideal for supporting ‘it-works-somewhere’ claims. But they are in no way ideal for other purposes; in particular they provide no better bases for extrapolating or generalising than knowledge that the treatment caused the outcome in any other individuals in any other circumstances … And where no capacity claims obtain, there is seldom warrant for assuming that a treatment that works somewhere will work anywhere else. (The exception is where there is warrant to believe that the study population is a representative sample of the target population – and cases like this are hard to come by.)

And in BioSocieties 2/2007:

We experiment on a population of individuals each of whom we take to be described (or ‘governed’) by the same fixed causal structure (albeit unknown) and fixed probability measure (albeit unknown). Our deductive conclusions depend on that very causal structure and probability. How do we know what individuals beyond those in our experiment this applies to? … The [randomized experiment], with its vaunted rigor, takes us only a very small part of the way we need to go for practical knowledge. This is what disposes me to warn about the vanity of rigor in [randomized experiments].

Ideally controlled experiments (still the benchmark even for natural and quasi experiments) tell us with certainty what causes what effects – but only given the right “closures”. Making appropriate extrapolations from (ideal, accidental, natural or quasi) experiments to different settings, populations or target systems, is not easy. “It works there” is no evidence for “it will work here”. Causes deduced in an experimental setting still have to show that they come with an export-warrant to the target population/system. The causal background assumptions made have to be justified, and without licenses to export, the value of “rigorous” and “precise” methods is despairingly small.

Here I think Leamer’s “button” metaphor is appropriate. Many advocates of randomization want to have deductively automated answers to fundamental causal questions. But to apply “thin” methods we have to have “thick” background knowledge of what’s going on in the real world, and not in (ideally controlled) experiments. Conclusions can only be as certain as their premises – and that also goes for methods based on randomized experiments.

## Kenneth Arrow on rational expectations and microfoundations

23 July, 2012 at 21:39 | Posted in Economics, Theory of Science & Methodology | 1 Comment

One of the greatest economists of all time – Kenneth Arrow – made an assessment of microfoundations and rational expectations in an article – in Journal of Business (1986) – entitled Rationality of Self and Others in an Economic System. It ought to be mandatory reading for all students of “modern” macroeconomics:

[The power of both “new classical” and “rational expectations models”] is obtained by adding strong supplementary assumptions to the general model of rationality. Most prevalent of all is the assumption that all individuals have the same utility function … But this postulate leads to curious and, to my mind, serious difficulties in the interpretation of evidence … [I]f all individuals are alike, why do they not make the same choice? Why do we observe a dispersion? … Analogously, in macroeconomic models … the assumption of homogeneous agents implies that there will never be any trading, though there will be changes in prices.

This dilemma is intrinsic. If agents are alike, there is really no room for trade. The very basis of economic analysis, from Smith on, is the existence of differences in agents …

The new theoretical paradigm of rational expectations holds that each individual forms expectations of the future on the basis of a correct model of the economy, in fact, the same model that the econometrician is using … Since the world is uncertain, the expectations take the form of probability distributions, and each agent’s expectations are conditional on the information available to him or her …

Each agent has to have a model of the entire economy to preserve rationality. The cost of knowledge, so emphasized by the defenders of the price system as against centralized planning, has disappeared; each agent is engaged in very extensive information gathering and data processing.

Rational expectations theory is a stochastic form of perfect foresight. Not only the feasibility but even the logical consistency of this hypothesis was attacked long ago … Rational expectations … require not only extensive first-order knowledge but also common knowledge, since predictions of the future depend on other individuals’ predictions of the future.

## So you want to run yet another regression? Think twice!

23 July, 2012 at 14:18 | Posted in Statistics & Econometrics | 3 Comments

The cost of computing has dropped exponentially, but the cost of thinking is what it always was. That is why we see so many articles with so many regressions and so little thought.

Zvi Griliches

## The Lucas Critique is but a shallow version of the Keynes Critique (wonkish)

23 July, 2012 at 12:23 | Posted in Economics, Theory of Science & Methodology | 6 Comments

Unless we can show that the mechanisms or causes we isolate and handle in our models are stable – in the sense that they do not change when we “export” them to our “target systems” – they only hold under ceteris paribus conditions and are a fortiori of limited value for our understanding, explanation or prediction of real economic systems. Or as the always eminently quotable Keynes wrote in Treatise on Probability (1921):

The kind of fundamental assumption about the character of material laws, on which scientists appear commonly to act, seems to me to be [that] the system of the material universe must consist of bodies … such that each of them exercises its own separate, independent, and invariable effect, a change of the total state being compounded of a number of separate changes each of which is solely due to a separate portion of the preceding state … Yet there might well be quite different laws for wholes of different degrees of complexity, and laws of connection between complexes which could not be stated in terms of laws connecting individual parts … If different wholes were subject to different laws qua wholes and not simply on account of and in proportion to the differences of their parts, knowledge of a part could not lead, it would seem, even to presumptive or probable knowledge as to its association with other parts … These considerations do not show us a way by which we can justify induction … [p. 427] No one supposes that a good induction can be arrived at merely by counting cases. The business of strengthening the argument chiefly consists in determining whether the alleged association is stable, when accompanying conditions are varied … [p. 468] In my judgment, the practical usefulness of those modes of inference … on which the boasted knowledge of modern science depends, can only exist … if the universe of phenomena does in fact present those peculiar characteristics of atomism and limited variety which appears more and more clearly as the ultimate result to which material science is tending.

Econometrics may be an informative tool for research. But if its practitioners do not investigate and make an effort to provide a justification for the credibility of the assumptions on which they erect their building, it will not fulfill its tasks. There is a gap between its aspirations and its accomplishments, and without more supportive evidence to substantiate its claims, critics will continue to consider its ultimate argument as a mixture of rather unhelpful metaphors and metaphysics. Maintaining that economics is a science in the “true knowledge” business, I remain a skeptic of the pretences and aspirations of econometrics. So far, I cannot really see that it has yielded very much in terms of relevant, interesting economic knowledge.

The marginal return on its ever higher technical sophistication in no way makes up for the lack of serious under-labouring of its deeper philosophical and methodological foundations that already Keynes complained about. The rather one-sided emphasis on usefulness and its concomitant instrumentalist justification cannot hide that neither Haavelmo, nor the legions of probabilistic econometricians following in his footsteps, give supportive evidence for their considering it “fruitful to believe” in the possibility of treating unique economic data as the observable results of random drawings from an imaginary sampling of an imaginary population. After having analyzed some of its ontological and epistemological foundations, I cannot but conclude that econometrics on the whole has not delivered “truth”. And I doubt if it has ever been the intention of its main protagonists.

Our admiration for technical virtuosity should not blind us to the fact that we have to have a cautious attitude towards probabilistic inferences in economic contexts. Science should help us penetrate to “the true process of causation lying behind current events” and disclose “the causal forces behind the apparent facts” [Keynes 1971-89 vol XVII:427]. We should look out for causal relations, but econometrics can never be more than a starting point in that endeavour, since econometric (statistical) explanations are not explanations in terms of mechanisms, powers, capacities or causes. Firmly stuck in an empiricist tradition, econometrics is only concerned with the measurable aspects of reality. But there is always the possibility that there are other variables – of vital importance, and although perhaps unobservable and non-additive not necessarily epistemologically inaccessible – that were not considered for the model. The variables that were included can hence never be guaranteed to be more than potential, rather than real, causes. A rigorous application of econometric methods in economics really presupposes that the phenomena of our real world economies are ruled by stable causal relations between variables. A perusal of the leading econom(etr)ic journals shows that most econometricians still concentrate on fixed parameter models and that parameter-values estimated in specific spatio-temporal contexts are presupposed to be exportable to totally different contexts. To warrant this assumption one, however, has to convincingly establish that the targeted acting causes are stable and invariant so that they maintain their parametric status after the bridging. The endemic lack of predictive success of the econometric project indicates that this hope of finding fixed parameters is a hope for which there really is no other ground than hope itself.
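The worry about exporting parameters estimated in one spatio-temporal context to another can be sketched in a few lines of simulation. The two “regimes” and their slopes are invented assumptions, used only to show what happens when a fixed-parameter model is carried across a structural shift:

```python
import numpy as np

rng = np.random.default_rng(1)

# "Regime 1": y depends on x with slope 2 -- we estimate the parameter here.
# (Both regimes and all numbers are invented, for illustration only.)
x1 = rng.normal(0, 1, 5000)
y1 = 2.0 * x1 + rng.normal(0, 0.5, 5000)
slope_hat = (x1 * y1).sum() / (x1 * x1).sum()   # OLS through the origin

# "Regime 2": the structure has shifted; the 'stable' parameter is now -1
x2 = rng.normal(0, 1, 5000)
y2 = -1.0 * x2 + rng.normal(0, 0.5, 5000)

# Exporting the regime-1 parameter to regime-2 data
pred = slope_hat * x2
rmse = np.sqrt(((y2 - pred) ** 2).mean())
print(f"estimated slope in regime 1: {slope_hat:.2f}")  # close to 2.0
print(f"out-of-regime RMSE: {rmse:.2f}")                # far above the noise level of 0.5
```

Within regime 1 the estimate is excellent; exported across the structural break it is worse than useless – and nothing in the regime-1 data alone could tell us that the parameter would not survive the bridging.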

This is a more fundamental and radical problem than the celebrated “Lucas critique” has suggested. This is not a question of whether deep parameters, absent at the macro level, exist in “tastes” and “technology” at the micro level. It goes deeper. Real-world social systems are not governed by stable causal mechanisms or capacities. This is the criticism that Keynes first launched against econometrics and inferential statistics already in the 1920s:

The atomic hypothesis which has worked so splendidly in Physics breaks down in Psychics. We are faced at every turn with the problems of Organic Unity, of Discreteness, of Discontinuity – the whole is not equal to the sum of the parts, comparisons of quantity fail us, small changes produce large effects, the assumptions of a uniform and homogeneous continuum are not satisfied. Thus the results of Mathematical Psychics turn out to be derivative, not fundamental, indexes, not measurements, first approximations at the best; and fallible indexes, dubious approximations at that, with much doubt added as to what, if anything, they are indexes or approximations of.

The kinds of laws and relations that econom(etr)ics has established are laws and relations about entities in models that presuppose causal mechanisms to be atomistic and additive. When causal mechanisms operate in real-world social target systems, they do so only in ever-changing and unstable combinations where the whole is more than a mechanical sum of parts. If economic regularities obtain, they do so (as a rule) only because we engineered them for that purpose. Outside man-made “nomological machines” they are rare, or even non-existent. Unfortunately that also makes most of the achievements of econometrics – like most contemporary endeavours of economic theoretical modeling – rather useless.

Both the “Lucas critique” and the “Keynes critique” of econometrics argued that it was inadmissible to project history on the future. Consequently an economic policy cannot presuppose that what has worked before will continue to do so in the future. That macroeconomic models could get hold of correlations between different “variables” was not enough. If they could not get at the causal structure that generated the data, they were not really “identified”. Lucas himself drew the conclusion that the solution to the problem of unstable relations was to construct models with clear microfoundations, in which forward-looking optimizing individuals and robust, deep, behavioural parameters remain stable even under changes in economic policy. As yours truly has argued in a couple of posts (e.g. here and here), this, however, is a dead end.

## O, horrible! O, horrible! most horrible!

23 July, 2012 at 10:03 | Posted in Economics, Politics & Society | Comments Off on O, horrible! O, horrible! most horrible!

George Monbiot has a magnificent article in The Guardian on the perversion of the concept of freedom that neoliberals and libertarians are trying to bring about. A must-read for everyone – but perhaps especially for market fundamentalist sweetwater economists and right-wing think tanks with their Panglossian views on “efficient markets”:

Freedom: who could object? Yet this word is now used to justify a thousand forms of exploitation. Throughout the rightwing press and blogosphere, among thinktanks and governments, the word excuses every assault on the lives of the poor, every form of inequality and intrusion to which the 1% subject us. How did libertarianism, once a noble impulse, become synonymous with injustice?

In the name of freedom – freedom from regulation – the banks were permitted to wreck the economy. In the name of freedom, taxes for the super-rich are cut. In the name of freedom, companies lobby to drop the minimum wage and raise working hours. In the same cause, US insurers lobby Congress to thwart effective public healthcare; the government rips up our planning laws; big business trashes the biosphere. This is the freedom of the powerful to exploit the weak, the rich to exploit the poor.

Rightwing libertarianism recognises few legitimate constraints on the power to act, regardless of the impact on the lives of others … [Its] concept of freedom looks to me like nothing but a justification for greed.

So why have we been so slow to challenge this concept of liberty? I believe that one of the reasons is as follows. The great political conflict of our age – between neocons and the millionaires and corporations they support on one side, and social justice campaigners and environmentalists on the other – has been mischaracterised as a clash between negative and positive freedoms. These freedoms were most clearly defined by Isaiah Berlin in his essay of 1958, Two Concepts of Liberty. It is a work of beauty: reading it is like listening to a gloriously crafted piece of music. I will try not to mangle it too badly.

Put briefly and crudely, negative freedom is the freedom to be or to act without interference from other people. Positive freedom is freedom from inhibition: it’s the power gained by transcending social or psychological constraints. Berlin explained how positive freedom had been abused by tyrannies, particularly by the Soviet Union. It portrayed its brutal governance as the empowerment of the people, who could achieve a higher freedom by subordinating themselves to a collective single will.

Rightwing libertarians claim that greens and social justice campaigners are closet communists trying to resurrect Soviet conceptions of positive freedom. In reality, the battle mostly consists of a clash between negative freedoms.

As Berlin noted: “No man’s activity is so completely private as never to obstruct the lives of others in any way. ‘Freedom for the pike is death for the minnows’.” So, he argued, some people’s freedom must sometimes be curtailed “to secure the freedom of others”. In other words, your freedom to swing your fist ends where my nose begins. The negative freedom not to have our noses punched is the freedom that green and social justice campaigns, exemplified by the Occupy movement, exist to defend.

Berlin also shows that freedom can intrude on other values, such as justice, equality or human happiness. “If the liberty of myself or my class or nation depends on the misery of a number of other human beings, the system which promotes this is unjust and immoral.” It follows that the state should impose legal restraints on freedoms that interfere with other people’s freedoms – or on freedoms which conflict with justice and humanity …

But rightwing libertarians do not recognise this conflict. They speak … as if the same freedom affects everybody in the same way. They assert their freedom to pollute, exploit, even – among the gun nuts – to kill, as if these were fundamental human rights. They characterise any attempt to restrain them as tyranny. They refuse to see that there is a clash between the freedom of the pike and the freedom of the minnow …

Modern libertarianism is the disguise adopted by those who wish to exploit without restraint. It pretends that only the state intrudes on our liberties. It ignores the role of banks, corporations and the rich in making us less free. It denies the need for the state to curb them in order to protect the freedoms of weaker people. This bastardised, one-eyed philosophy is a con trick, whose promoters attempt to wrongfoot justice by pitching it against liberty. By this means they have turned “freedom” into an instrument of oppression.

## Sticky wages and the transmogrification of truth

22 July, 2012 at 22:25 | Posted in Varia | Comments Off on Sticky wages and the transmogrification of truth

In a post on his blog today Paul Krugman writes on a truly important methodological question in economics:

I’ve written quite a lot about sticky wages, aka downward nominal wage rigidity, which is one of those things that we can’t derive from first principles but is a glaringly obvious feature of the real world. But I keep running into comments along the lines of “Well, if you think sticky wages are the problem, why aren’t you calling for wage cuts?”

This is a category error. It confuses the question “What do we need to make sense of what we see?” with the question “What is the problem?”

So right, so right. I only wish this knowledge also found its way into the standard economics textbooks.

Among intermediate neoclassical macroeconomics textbooks, Chad Jones’s Macroeconomics (2nd ed, W. W. Norton, 2011) stands out as perhaps one of the better alternatives, combining more traditional short-run macroeconomic analysis with an accessible coverage of the Romer model – the foundation of modern growth theory.

Unfortunately it also contains some utter nonsense!

In a chapter on “The Labor Market, Wages, and Unemployment” Jones writes:

The point of this experiment is to show that wage rigidities can lead to large movements in employment. Indeed, they are the reason John Maynard Keynes gave, in The General Theory of Employment, Interest, and Money (1936), for the high unemployment of the Great Depression.

This is of course pure nonsense. For although Keynes in General Theory devoted substantial attention to the subject of wage rigidities, he certainly did not hold the view that wage rigidities were “the reason … for the high unemployment of the Great Depression.”

Since unions/workers, contrary to classical assumptions, make wage-bargains in nominal terms, they will – according to Keynes – accept lower real wages caused by higher prices, but resist lower real wages caused by lower nominal wages. However, Keynes held it incorrect to attribute “cyclical” unemployment to this diversified agent behaviour. During the depression money wages fell significantly and – as Keynes noted – unemployment still grew. Thus, even when nominal wages are lowered, they do not generally lower unemployment.

In any specific labour market, lower wages could, of course, raise the demand for labour. But a general reduction in money wages would leave real wages more or less unchanged. The reasoning of the classical economists was, according to Keynes, a flagrant example of the “fallacy of composition.” Assuming that since unions/workers in a specific labour market could negotiate real wage reductions via lowering nominal wages, unions/workers in general could do the same, the classics confused micro with macro.

Lowering nominal wages could not – according to Keynes – clear the labour market. Lowering wages – and possibly prices – could, perhaps, lower interest rates and increase investment. But to Keynes it would be much easier to achieve that effect by increasing the money supply. In any case, wage reductions were not seen by Keynes as a general substitute for an expansionary monetary or fiscal policy.

Even if potentially positive impacts of lowering wages exist, there are also negative impacts that weigh more heavily – deteriorating management-union relations, expectations of ongoing wage cuts causing investment to be delayed, debt deflation, et cetera.

So, what Keynes actually did argue in General Theory, was that the classical proposition that lowering wages would lower unemployment and ultimately take economies out of depressions, was ill-founded and basically wrong.

To Keynes, flexible wages would only make things worse by leading to erratic price-fluctuations. The basic explanation for unemployment is insufficient aggregate demand, and that is mostly determined outside the labor market.

To mainstream neoclassical theory the kind of unemployment that occurs is voluntary, since it is only adjustments of the hours of work that these optimizing agents make to maximize their utility. Keynes on the other hand writes in General Theory:

The classical school [maintains that] while the demand for labour at the existing money-wage may be satisfied before everyone willing to work at this wage is employed, this situation is due to an open or tacit agreement amongst workers not to work for less, and that if labour as a whole would agree to a reduction of money-wages more employment would be forthcoming. If this is the case, such unemployment, though apparently involuntary, is not strictly so, and ought to be included under the above category of ‘voluntary’ unemployment due to the effects of collective bargaining, etc …

The classical theory … is best regarded as a theory of distribution in conditions of full employment. So long as the classical postulates hold good, unemployment, which is in the above sense involuntary, cannot occur. Apparent unemployment must, therefore, be the result either of temporary loss of work of the ‘between jobs’ type or of intermittent demand for highly specialised resources or of the effect of a trade union ‘closed shop’ on the employment of free labour. Thus writers in the classical tradition, overlooking the special assumption underlying their theory, have been driven inevitably to the conclusion, perfectly logical on their assumption, that apparent unemployment (apart from the admitted exceptions) must be due at bottom to a refusal by the unemployed factors to accept a reward which corresponds to their marginal productivity …

Obviously, however, if the classical theory is only applicable to the case of full employment, it is fallacious to apply it to the problems of involuntary unemployment – if there be such a thing (and who will deny it?). The classical theorists resemble Euclidean geometers in a non-Euclidean world who, discovering that in experience straight lines apparently parallel often meet, rebuke the lines for not keeping straight – as the only remedy for the unfortunate collisions which are occurring. Yet, in truth, there is no remedy except to throw over the axiom of parallels and to work out a non-Euclidean geometry. Something similar is required to-day in economics. We need to throw over the second postulate of the classical doctrine and to work out the behaviour of a system in which involuntary unemployment in the strict sense is possible.

Unfortunately, Jones’s macroeconomics textbook is not the only one containing this kind of utter nonsense on Keynes. Similar distortions of Keynes’s views can be found in, e.g., the economics textbooks of the “New Keynesian” – a grotesque misnomer – Greg Mankiw. How is this possible? Probably because these economists have but a very superficial acquaintance with Keynes’s own works, and rather depend on second-hand sources like Hansen, Samuelson, Hicks and the like.

Fortunately there is a solution to the problem. Keynes’s books are still in print. Read them!

## Alternatives to the neoliberal mumbo jumbo of Greg Mankiw and Richard Epstein

22 July, 2012 at 17:55 | Posted in Economics, Politics & Society | 1 Comment

As a young research stipendiate in the U.S. at the beginning of the 1980s, yours truly had the great pleasure and privilege of participating in seminars and lectures with people like Paul Davidson, Hyman Minsky and Stephen Marglin.

They were great inspirations at the time. They still are.

## The unknown knowns of modern macroeconomics

22 July, 2012 at 16:20 | Posted in Economics | 5 Comments

The financial crisis of 2007-08 took most laymen and economists by surprise. What was it that went wrong with our macroeconomic models, since they obviously did not foresee the collapse or even make it conceivable?

The root of our problem ultimately goes back to how we look upon the data we are handling. In modern neoclassical macroeconomics – Dynamic Stochastic General Equilibrium (DSGE), New Synthesis, New Classical and “New Keynesian” – variables are treated as if drawn from a known “data-generating process” that unfolds over time and for which we therefore have access to heaps of historical time-series. If we do not assume that we know the “data-generating process” – if we do not have the “true” model – the whole edifice collapses.

Modern macroeconomics obviously did not anticipate the enormity of the problems that unregulated “efficient” financial markets created. Why? Because it builds on the myth of us knowing the “data-generating process” and that we can describe the variables of our evolving economies as drawn from an urn containing stochastic probability functions with known means and variances.

This is like saying that you are going on a holiday trip and that you know the chance of sunny weather is at least 30%, and that this is enough for you to decide whether or not to bring your sunglasses. You are supposed to be able to calculate the expected utility based on the given probability of sunny weather and make a simple either-or decision. Uncertainty is reduced to risk.

But this is not always possible. Often we simply do not know. According to one model the chance of sunny weather is perhaps somewhere around 10% and according to another – equally good – model the chance is perhaps somewhere around 40%. We cannot put exact numbers on these assessments. We cannot calculate means and variances. There are no given probability distributions that we can appeal to.

In the end this is what it all boils down to. We all know that many activities, relations, processes and events are of the Keynesian uncertainty type. The data do not unequivocally single out one decision as the only “rational” one. Neither the economist, nor the deciding individual, can fully pre-specify how people will decide when facing uncertainties and ambiguities that are ontological facts of the way the world works.

Some macroeconomists, however, still want to be able to use their hammer. So they decide to pretend that the world looks like a nail, and pretend that uncertainty can be reduced to risk. So they construct their mathematical models on that assumption. The result: financial crises and economic havoc.

How much better – how much greater the chance that we would not lull ourselves into the comforting thought that we know everything, that everything is measurable and that we have everything under control – if we instead simply admitted that we often do not know, and that we have to live with that uncertainty as best we can.

Fooling people into believing that one can cope with an unknown economic future in a way similar to playing at the roulette wheel is a sure recipe for only one thing – economic catastrophe!

The unknown knowns – the things we fool ourselves into believing we know – often have more dangerous repercussions than the “Black Swans” of Knightian unknown unknowns, as quantitative risk management based on the hypothesis of market efficiency and rational expectations has given ample evidence of during the latest financial crisis.
