## Explained variance and Pythagoras’ theorem

31 July, 2012 at 14:39 | Posted in Statistics & Econometrics | Leave a comment

In many statistical and econometric studies **R²** is used to measure goodness of fit – or, more technically, the fraction of variance "explained" by a regression.

But it’s actually a rather weird measure. As eminent mathematical statistician **David Freedman **writes:

The math is fine, but the concept is a little peculiar … Let's take an example. Sacramento is about 78 miles from San Francisco, as the crow flies. Or, the crow could fly 60 miles East and 50 miles North, passing near Stockton at the turn. If we take the 60 and 50 as exact, Pythagoras tells us that the squared hypotenuse in the triangle is

60² + 50² = 3600 + 2500 = 6100 miles².

With "explained" as in "explained variance", the geography lesson can be cruelly summarized. The area – squared distance – between San Francisco and Sacramento is 6100 miles², of which 3600 is explained by East …

The theory of explained variance boils down to Pythagoras' theorem on the crow's triangular flight. Explaining the area between San Francisco and Sacramento by East is zany, and explained variance may not be much better.
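Freedman's geography arithmetic is easy to check. A minimal sketch (the 60-, 50- and 78-mile figures are from the quote; treating the East leg's share of the squared distance as the analogue of R² is exactly the analogy he mocks):

```python
import math

# Freedman's crow: 60 miles East, 50 miles North (figures from the quote)
east, north = 60.0, 50.0
squared_distance = east**2 + north**2     # 3600 + 2500 = 6100 "square miles"
distance = math.sqrt(squared_distance)    # ~78.1 miles, as the crow flies

# The "explained variance" logic: the East leg's share of the squared
# distance, analogous to an R-squared in a regression.
share_explained_by_east = east**2 / squared_distance

print(squared_distance)                   # 6100.0
print(round(distance, 1))                 # 78.1
print(round(share_explained_by_east, 2))  # 0.59
```

So an R² of 0.59 here "explains" 59% of the squared distance by East – which, as Freedman says, is zany.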

## Dumb and dumber – the Real Business Cycles version

31 July, 2012 at 12:19 | Posted in Economics | 3 Comments

Real business cycles theory (RBC) basically says that economic cycles are caused by technology-induced changes in productivity. It says that employment goes up or down because people *choose* to work more when productivity is high and less when it's low. This is of course nothing but pure nonsense – and how on earth those guys who promoted this theory could be awarded The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel is really beyond comprehension.

In yours truly’s *History of Economic Theories* (4th ed, 2007, p. 405) it was concluded that

the problem is that it has turned out to be very difficult to empirically verify the theory's view on economic fluctuations as being effects of rational actors' optimal intertemporal choices … Empirical studies have not been able to corroborate the assumption of the sensitivity of labour supply to changes in intertemporal relative prices. Most studies rather point to expected changes in real wages having only rather little influence on the supply of labour.

And this is what **Lawrence Summers** – in *Some Skeptical Observations on Real Business Cycle Theory* – had to say about RBC:

The increasing ascendancy of real business cycle theories of various stripes, with their common view that the economy is best modeled as a floating Walrasian equilibrium, buffeted by productivity shocks, is indicative of the depths of the divisions separating academic macroeconomists …

If these theories are correct, they imply that the macroeconomics developed in the wake of the Keynesian Revolution is well confined to the ashbin of history. And they suggest that most of the work of contemporary macroeconomists is worth little more than that of those pursuing astrological science …

The appearance of Ed Prescott's stimulating paper, "Theory Ahead of Business Cycle Measurement," affords an opportunity to assess the current state of real business cycle theory and to consider its prospects as a foundation for macroeconomic analysis …

My view is that business cycle models of the type urged on us by Prescott have nothing to do with the business cycle phenomena observed in the United States or other capitalist economies …

Prescott's growth model is not an inconceivable representation of reality. But to claim that its parameters are securely tied down by growth and micro observations seems to me a gross overstatement. The image of a big loose tent flapping in the wind comes to mind …

In Prescott's model, the central driving force behind cyclical fluctuations is technological shocks. The propagation mechanism is intertemporal substitution in employment. As I have argued so far, there is no independent evidence from any source for either of these phenomena …

Imagine an analyst confronting the market for ketchup. Suppose she or he decided to ignore data on the price of ketchup. This would considerably increase the analyst’s freedom in accounting for fluctuations in the quantity of ketchup purchased … It is difficult to believe that any explanation of fluctuations in ketchup sales that did not confront price data would be taken seriously, at least by hard-headed economists.

Yet Prescott offers an exercise in price-free economics … Others have confronted models like Prescott's to data on prices with what I think can fairly be labeled dismal results. There is simply no evidence to support any of the price effects predicted by the model …

Improvement in the track record of macroeconomics will require the development of theories that can explain why exchange sometimes works and other times breaks down. Nothing could be more counterproductive in this regard than a lengthy professional detour into the analysis of stochastic Robinson Crusoes.

Thomas Sargent was awarded The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel for 2011 for his "**empirical research on cause and effect in the macroeconomy**". In an interview with Sargent in *The Region* (September 2010), however, one could read the following defense of "modern macro" (my emphasis):

Sargent: I know that I'm the one who is supposed to be answering questions, but perhaps you can tell me what popular criticisms of modern macro you have in mind.

Rolnick: OK, here goes. Examples of such criticisms are that modern macroeconomics makes too much use of sophisticated mathematics to model people and markets; that it incorrectly relies on the assumption that asset markets are efficient in the sense that asset prices aggregate information of all individuals; that the faith in good outcomes always emerging from competitive markets is misplaced; that the assumption of "rational expectations" is wrongheaded because it attributes too much knowledge and forecasting ability to people; that the modern macro mainstay "real business cycle model" is deficient because it ignores so many frictions and imperfections and is useless as a guide to policy for dealing with financial crises; that modern macroeconomics has either assumed away or shortchanged the analysis of unemployment; that the recent financial crisis took modern macro by surprise; and that macroeconomics should be based less on formal decision theory and more on the findings of "behavioral economics." Shouldn't these be taken seriously?

Sargent: Sorry, Art, but aside from the foolish and intellectually lazy remark about mathematics, all of the criticisms that you have listed reflect either woeful ignorance or intentional disregard for what much of modern macroeconomics is about and what it has accomplished. That said, it is true that modern macroeconomics uses mathematics and statistics to understand behavior in situations where there is uncertainty about how the future will unfold from the past. But a rule of thumb is that the more dynamic, uncertain and ambiguous is the economic environment that you seek to model, the more you are going to have to roll up your sleeves, and learn and use some math. That's life.

Are these the words of an empirical macroeconomist? I'll be dipped! To me it sounds like the same old axiomatic-deductivist mumbo jumbo that parades as economic science today.

Neoclassical economic theory today is in the story-telling business whereby economic theorists create make-believe analogue models of the target system – usually conceived as the real economic system. This modeling activity is considered useful and essential. Since fully-fledged experiments on a societal scale as a rule are prohibitively expensive, ethically indefensible or unmanageable, economic theorists have to substitute experimenting with something else. To understand and explain relations between different entities in the real economy the predominant strategy is to build models and make things happen in these “analogue-economy models” rather than engineering things happening in real economies.

Formalistic deductive “Glasperlenspiel” can be very impressive and seductive. But in the realm of science it ought to be considered of little or no value to simply make claims about the model and lose sight of reality.

Neoclassical economics has long since given up on the real world and contents itself with proving things about thought-up worlds. Empirical evidence only plays a minor role in economic theory, where models largely function as a substitute for empirical evidence. But "facts kick", as Gunnar Myrdal used to say. Hopefully humbled by the manifest failure of its theoretical pretences, the one-sided, almost religious, insistence on axiomatic-deductivist modeling as the only scientific activity worth pursuing in economics will give way to methodological pluralism based on ontological considerations rather than formalistic tractability.

**When that day comes, The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel will hopefully be awarded to a real macroeconomist and not to axiomatic-deductivist modellers like Thomas Sargent and economists of that ilk in the efficient-market-rational-expectations camp.**

## Mario Draghi sends a cold shiver down my back

31 July, 2012 at 10:25 | Posted in Economics, Politics & Society | 1 Comment

ECB is ready to do whatever it takes to preserve the euro.

The euro is already a monumental fiasco.

Draghi’s statement is not a promise. It’s a threat.

## Non-ergodic economics, expected utility and the Kelly criterion (wonkish)

31 July, 2012 at 00:11 | Posted in Economics, Theory of Science & Methodology | 6 Comments

Suppose I want to play a game. Let's say we are tossing a coin. If heads comes up, I win a dollar, and if tails comes up, I lose a dollar. Suppose further that I believe I know that the coin is asymmetrical and that the probability of getting heads (**p**) is greater than 50% – say 60% (0.6) – while the bookmaker assumes that the coin is totally symmetric. How much of my bankroll (**T**) should I optimally invest in this game?

A strict neoclassical utility-maximizing economist would suggest that my goal should be to maximize the expected value of my bankroll (wealth), and according to this view, I ought to bet my entire bankroll.

Does that sound rational? Most people would answer no to that question. The risk of losing is so high that already after a few games – the expected number of games until my first loss is 1/(1 – p), which in this case equals 2.5 – I would with high likelihood have lost and thereby become bankrupt. The expected-value-maximizing economist does not seem to have a particularly attractive approach.

So what’s the alternative? One possibility is to apply the so-called Kelly-strategy – after the American physicist and information theorist John L. Kelly, who in the article *A New Interpretation of Information Rate* (1956) suggested this criterion for how to optimize the size of the bet – under which the optimum is to invest a specific fraction (**x**) of wealth (**T**) in each game. How do we arrive at this fraction?

When I win, my bankroll is multiplied by (1 + x), and when I lose, by (1 – x). After **n** rounds, when I have won **v** times and lost **n – v** times, my new bankroll (**W**) is

(1) W = T (1 + x)^v (1 – x)^(n – v).

The bankroll increases multiplicatively – "compound interest" – and the long-term average growth rate of my wealth can then be easily calculated by taking the logarithm of (1), which gives

(2) log (W/ T) = v log (1 + x) + (n – v) log (1 – x).

If we divide both sides by n we get

(3) [log (W / T)] / n = [v log (1 + x) + (n – v) log (1 – x)] / n

The left hand side now represents the average growth rate (**g**) in each game. On the right hand side the ratio v/n is equal to the percentage of bets that I won, and when n is large, this fraction will be close to p. Similarly, (n – v)/n is close to (1 – p). When the number of bets is large, the average growth rate is

(4) g = p log (1 + x) + (1 – p) log (1 – x).

Now we can easily determine the value of x that maximizes g:

(5) d[p log (1 + x) + (1 – p) log (1 – x)]/dx = p/(1 + x) – (1 – p)/(1 – x) = 0 =>

p(1 – x) = (1 – p)(1 + x) =>

(6) x = p – (1 – p)

Since p is the probability that I will win, and (1 – p) is the probability that I will lose, the Kelly strategy says that to optimize the growth rate of your bankroll (wealth) you should invest a fraction of the bankroll equal to the difference between the probability that you will win and the probability that you will lose. In our example this means that in each game I should bet the fraction x = 0.6 – (1 – 0.6) = 0.2 – that is, 20% of my bankroll. The optimal average growth rate becomes

(7) 0.6 log (1.2) + 0.4 log (0.8) ≈ 0.02.
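The closed-form result in (6) and the growth rate in (7) can be checked numerically. A minimal sketch, using natural logarithms as in (7); the grid search is only an illustrative sanity check on the maximizer, not part of Kelly's derivation:

```python
import math

p = 0.6                        # assumed probability of heads (from the example)
x_kelly = p - (1 - p)          # equation (6): optimal fraction, = 0.2

# equation (4): average growth rate per game, as a function of the bet fraction
def growth(x):
    return p * math.log(1 + x) + (1 - p) * math.log(1 - x)

g = growth(x_kelly)            # equation (7)

# illustrative check that x = 0.2 really maximizes g on a fine grid
best_x = max((i / 1000 for i in range(1, 1000)), key=growth)

wealth_factor = math.exp(10 * g)   # expected wealth multiple after 10 games

print(round(x_kelly, 2))       # 0.2
print(round(g, 3))             # 0.02
print(best_x)                  # 0.2
print(round(wealth_factor, 2)) # 1.22
```

The last line reproduces the 10-game figure quoted below equation (7): e^(10 × 0.02) ≈ 1.22.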

If I bet 20% of my wealth on each toss of the coin, I will after 10 games on average have e^(10 × 0.02) ≈ 1.22 times more than when I started.

In the long run, this game strategy will give us a better outcome than a strategy built on the neoclassical economic theory of choice under uncertainty (risk) – expected-value maximization. If we bet all our wealth in each game we will most likely lose our fortune, but because with low probability we end up with a very large fortune, the expected value is still high. For a real-life player – who has very little to gain from this type of ensemble average – it is more relevant to look at the time average of what he may expect to win (in our game the two averages coincide only if we assume that the player has a logarithmic utility function). What good does it do me if my coin tossing maximizes an expected value when I might have gone bankrupt after four games? If I try to maximize the expected value, the probability of bankruptcy soon gets close to one. Better, then, to invest 20% of my wealth in each game and maximize my long-term average wealth growth!
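The contrast between the two strategies can be illustrated with a small Monte Carlo sketch (illustrative only; the random seed and the numbers of players and games are arbitrary choices, not from the text):

```python
import math
import random

random.seed(42)
p = 0.6   # probability of winning a game, as in the example

# Strategy 1: bet the whole bankroll every game. One loss means ruin,
# so surviving 1000 games requires winning every single toss
# (probability 0.6**1000, essentially zero) -- yet the ensemble
# expectation is propped up by exactly those vanishingly rare paths.
n_players, n_games = 10_000, 1_000
survivors = sum(
    all(random.random() < p for _ in range(n_games))
    for _ in range(n_players)
)

# Strategy 2: Kelly -- bet the fraction x = 0.2 every game and track
# the realized average log-growth per game over one long trajectory.
x, n = 0.2, 100_000
log_w = 0.0
for _ in range(n):
    log_w += math.log(1 + x) if random.random() < p else math.log(1 - x)
g_hat = log_w / n

print(survivors)   # 0 -- no all-in player survives
print(g_hat)       # close to the theoretical value of about 0.02
```

Every all-in player goes bankrupt, while the Kelly player's realized time-average growth rate matches the g derived in (7) – the ensemble average is irrelevant to any individual trajectory.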

On a more economic-theoretical level, the Kelly strategy highlights the problems concerning the neoclassical theory of expected utility that I have raised before (e.g. in Why expected utility theory is wrong).

In the neoclassical theory of expected utility, one thinks in terms of "parallel universes" and asks: what is the expected return of an investment, calculated as an average over these "parallel universes"? In our coin toss example, it is as if one supposes that various "I"s are tossing a coin, and that the losses of many of them will be offset by the huge profits one of these "I"s makes. But this ensemble average does not work for an individual, for whom a time average better reflects the experience made in the "non-parallel universe" in which we live.

The Kelly strategy gives a more realistic answer, where one thinks in terms of the only universe we actually live in and asks what the expected return of an investment is, calculated as an average over time.

Since we cannot go back in time – entropy and the "arrow of time" make this impossible – and the bankruptcy option is always at hand (extreme events and "black swans" are always possible), we have nothing to gain from thinking in terms of ensembles.

Actual events follow a fixed pattern of time, where events are often linked in a multiplicative process (as e.g. investment returns with "compound interest") which is basically non-ergodic.

Instead of arbitrarily assuming that people have a certain type of utility function – as in the neoclassical theory – the Kelly criterion shows that we can obtain a less arbitrary and more accurate picture of real people's decisions and actions by basically assuming that time is irreversible. When the bankroll is gone, it's gone. The fact that in a parallel universe it could conceivably have been refilled is of little comfort to those who live in the one and only possible world that we call the real world.

Our coin toss example can be applied to more traditional economic issues. If we think of an investor, we can basically describe his situation in terms of our coin toss. What fraction (**x**) of his assets (**T**) should an investor – who is about to make a large number of repeated investments – bet on his feeling that he can evaluate an investment better (p = 0.6) than the market (p = 0.5)? The greater the x, the greater the leverage – but also the greater the risk. Since p is the probability that his investment valuation is correct and (1 – p) the probability that the market's valuation is correct, the Kelly strategy says he optimizes the growth rate of his investments by investing a fraction of his assets equal to the difference between the probability that he will "win" and the probability that he will "lose". In our example this means that at each investment opportunity he should invest the fraction x = 0.6 – (1 – 0.6), i.e. 20% of his assets. The optimal average growth rate of investment is then about 2% per round (0.6 log (1.2) + 0.4 log (0.8)).

Kelly's criterion shows that because we cannot go back in time, we should not take excessive risks. High leverage increases the risk of bankruptcy. This should also be a warning for the financial world, where the constant quest for greater and greater leverage – and risks – creates extensive and recurrent systemic crises. A more appropriate level of risk-taking is a necessary ingredient in any policy aiming to curb excessive risk-taking.

## The Money Multiplier is neat, plausible – and utterly wrong!

30 July, 2012 at 17:44 | Posted in Economics | 4 Comments

The neoclassical textbook concept of the money multiplier assumes that banks automatically expand the credit money supply to a multiple of their aggregate reserves. If the required reserve-deposit ratio is 5%, the money supply should be about twenty times larger than the aggregate reserves of banks. In this way the money-multiplier concept assumes that the central bank controls the money supply by setting the required reserve ratio.
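For what it is worth, the textbook bookkeeping being criticized here is trivially simple. A sketch of the simplest version (no cash leakage; the $100bn monetary base is a hypothetical figure for illustration, not from the text):

```python
# Textbook money multiplier, simplest form: m = 1 / required reserve ratio.
# With a 5% ratio, every dollar of base "produces" 20 dollars of money.
reserve_ratio = 0.05
m = 1 / reserve_ratio                  # the multiplier, 20.0

monetary_base = 100.0                  # hypothetical base, in billions
textbook_money_supply = m * monetary_base

print(m)                       # 20.0
print(textbook_money_supply)   # 2000.0
```

That twenty-fold expansion is precisely the mechanical story the rest of this post argues gets the causation backwards.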

In his *Macroeconomics* (6th ed, 2007) – just to take an example – **Greg Mankiw** writes (p. 514):

We can now see that the money supply is proportional to the monetary base. The factor of proportionality … is called the *money multiplier* … Each dollar of the monetary base produces m dollars of money. Because the monetary base has a multiplied effect on the money supply, the monetary base is called high-powered money.

The money multiplier concept is – as can be seen from the quote above – nothing but one big fallacy. This is not the way credit is created in a monetary economy. It’s nothing but a monetary myth that the monetary base can play such a decisive role in a modern credit-run economy with fiat money.

In the real world banks *first* extend credits and *then* look for reserves. So the money multiplier basically also gets the causation wrong. At a deep fundamental level the supply of money is endogenous.

Although lately we’ve had our quarrels over microfoundations of macroeconomics, it seems as though Simon Wren-Lewis and yours truly at least agree on the money multiplier concept. **Wren-Lewis** has a nice piece on the subject on his blog today:

[T]his does raise a rather embarrassing question for macroeconomists – why is the money multiplier still taught to many undergraduates? Why is it still in the textbooks? …

[One] response is that there is no *harm* in including the money multiplier … But I think it also does harm, because it gives the impression that banks are passive, just translating savings into investment via loans. If it is taught properly, it also leaves the student wondering what on earth is going on. They take the trouble to learn and understand the formula, and then discover that in the last few years central banks have been expanding the monetary base like there is no tomorrow and the money supply has hardly changed! …

I think I know why it is still in the textbooks. It is there because the LM curve is still part of the basic macro model we teach students. We still teach first year students about a world where the monetary authorities fix the money supply. And if we do that, we need a nice little story about how the money supply could be controlled. Now, just as is the case with the money multiplier, good textbooks will also talk about how monetary policy is actually done, discussing Taylor rules and the like. But all my previous arguments apply here as well. Why waste the time of, and almost certainly confuse, first year students this way? …

Many students who go on to become economists are put off macroeconomics because it is badly taught. Some who do not go on to become economists end up running their country! So we really should be concerned about what we teach. So please, anyone reading this who still teaches the money multiplier, please think about whether you could spend the time teaching something that is more relevant and useful.

## Keynes and Knight on uncertainty – ontology vs. epistemology

29 July, 2012 at 20:41 | Posted in Economics, Theory of Science & Methodology | 10 Comments

A couple of weeks ago yours truly had an interesting discussion – on the *Real-World Economics Review Blog* – with **Paul Davidson**, founder and editor of the *Journal of Post Keynesian Economics*, on uncertainty and ergodicity. It all started with me commenting on Davidson's article *Is economics a science? Should economics be rigorous?*:

**LPS:**

Davidson's article is a nice piece – but ergodicity is a difficult concept that many students of economics have problems understanding. To understand real-world "non-routine" decisions and unforeseeable changes in behaviour, ergodic probability distributions are of no avail. In a world full of genuine uncertainty – where real historical time rules the roost – the probabilities that ruled the past are not those that will rule the future.

Time is what prevents everything from happening at once. To simply assume that economic processes are ergodic and concentrate on ensemble averages – and a fortiori in any relevant sense timeless – is not a sensible way for dealing with the kind of genuine uncertainty that permeates open systems such as economies.

When you assume economic processes to be ergodic, ensemble and time averages are identical. Let me give an example: Assume we have a market with an asset priced at 100 €. Then imagine the price first goes up by 50% and later falls by 50%. The ensemble average for this asset would be 100 € – because we here envision two parallel universes (markets), where in one the asset price falls by 50% to 50 €, and in the other it goes up by 50% to 150 €, giving an average of 100 € ((150 + 50)/2). The time average for this asset would be 75 € – because we here envision one universe (market) where the asset price first rises by 50% to 150 € and then falls by 50% to 75 € (0.5 × 150).

From the ensemble perspective nothing really, on average, happens. From the time perspective lots of things really, on average, happen.

Assuming ergodicity there would have been no difference at all.
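The asset-price arithmetic above can be checked in a few lines (a sketch; the 100 € starting price and the ±50% moves are the figures from the example):

```python
start = 100.0   # initial asset price in EUR (from the example)

# Ensemble average: two parallel markets, one up 50%, one down 50%.
ensemble_avg = (start * 1.5 + start * 0.5) / 2   # (150 + 50) / 2

# Time average: a single market, up 50% then down 50%, in sequence.
end_of_path = start * 1.5 * 0.5                  # 150 * 0.5

print(ensemble_avg)   # 100.0 -- on average, "nothing happens"
print(end_of_path)    # 75.0  -- the actual trajectory loses wealth
```

The two numbers differ precisely because multiplicative up-then-down moves do not commute with averaging across parallel copies – the non-ergodicity in miniature.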

Just in case you think this is just an academic quibble without repercussions for our real lives, let me quote from an article by physicist and mathematician Ole Peters in the *Santa Fe Institute Bulletin* from 2009 – "On Time and Risk" – that makes it perfectly clear that the flaw in thinking about uncertainty in terms of "rational expectations" and ensemble averages has had real repercussions on the functioning of the financial system:

"In an investment context, the difference between ensemble averages and time averages is often small. It becomes important, however, when risks increase, when correlation hinders diversification, when leverage pumps up fluctuations, when money is made cheap, when capital requirements are relaxed. If reward structures—such as bonuses that reward gains but don't punish losses, and also certain commission schemes—provide incentives for excessive risk, problems arise. This is especially true if the only limits to risk-taking derive from utility functions that express risk preference, instead of the objective argument of time irreversibility. In other words, using the ensemble average without sufficiently restrictive utility functions will lead to excessive risk-taking and eventual collapse. Sound familiar?"

**PD:**

Lars, if the stochastic process is ergodic, then for an infinite number of realizations the time and space (ensemble) averages will coincide. An ensemble average is calculated over samples drawn at a fixed point of time from a universe of realizations. For finite realizations, the time and space statistical averages tend to converge (with a probability of one) the more data one has.

Even in physics there are some processes that physicists recognize are governed by nonergodic stochastic processes. [See A. M. Yaglom, *An Introduction to Stationary Random Functions* (1962, Prentice Hall).]

I do object to the quoted passage in Ole Peters's exposition where he talks about "when risks increase". Nonergodic systems are not about increasing or decreasing risk in the sense of the probability distribution variances differing. It is about indicating that any probability distribution based on past data cannot be reliably used to indicate the probability distribution governing any future outcome. In other words, even if we could know that the future probability distribution will have a smaller variance ("lower risks") than the past calculated probability distribution, the past distribution is still not a reliable guide to future statistical means and other moments around the means.

**LPS:**

Paul,

Paul, *re* nonergodic processes in physics I would even say that *most* processes definitely are nonergodic. *Re* Ole Peters, I totally agree that what is important about the fact that real social and economic processes are nonergodic is that uncertainty – not risk – rules the roost. That was something both Keynes and Knight basically said in their 1921 books. But I still think that Peters's discussion is a good example of how thinking about uncertainty in terms of "rational expectations" and "ensemble averages" has had seriously bad repercussions on the financial system.

**PD:**

Lars, there is a difference between the uncertainty concept developed by Keynes and the one developed by Knight.

As I have pointed out, Keynes's concept of uncertainty involves a nonergodic stochastic process. On the other hand, Knight's uncertainty — like Taleb's black swan — assumes an ergodic process. The difference is that for Knight (and Taleb) the uncertain outcome lies so far out in the tail of the unchanging (over time) probability distribution that it appears empirically to be [in Knight's terminology] "unique". In other words, like Taleb's black swan, the uncertain outcome already exists in the probability distribution but is so rarely observed that it may take several lifetimes for one observation — making that observation "unique".

In the latest edition of Taleb's book, he was forced to concede that philosophically there is a difference between a nonergodic system and a black swan ergodic system – but then waves away the problem with the claim that the difference is irrelevant.

**LPS:**

Paul, on the whole, I think you’re absolutely right on this. Knight’s uncertainty concept has an epistemological founding and Keynes’s definitely an ontological founding. Of course this also has repercussions on the issue of ergodicity in a strict methodological and mathematical-statistical sense. I think Keynes’s view is the most warranted of the two.

BUT – from a “practical” point of view I have to agree with Taleb. Because if there is no reliable information on the future, whether you talk of epistemological or ontological uncertainty, you can’t calculate probabilities.

The most interesting and far-reaching difference between the epistemological and the ontological view is that if you subscribe to the former, Knightian view – as Taleb and "black swan" theorists basically do – you open the door to the mistaken belief that with better information and greater computing power we should always be able to calculate probabilities and describe the world as an ergodic universe. As both you and Keynes have convincingly argued, that is ontologically just not possible.

**PD:**

Lars, your last sentence says it all. If you believe it is an ergodic system and epistemology is the only problem, then you should urge more transparency, better data collection, hiring more "quants" on Wall Street to generate "better" risk-management computer programs, etc. – and above all keep the government out of regulating financial markets – since all the government can do is foul up the outcome that the ergodic process is ready to deliver.

Long live Stiglitz and the call for transparency to end asymmetric information — and permit all to know the epistemological solution for the ergodic process controlling the economy.

Or, as Milton Friedman would say, those who make decisions "as if" they knew the ergodic stochastic process create an optimum market solution – while those who make mistakes in trying to figure out the ergodic process are like the dinosaurs, doomed to fail and die off – leaving only the survival of the fittest for a free market economy to prosper on. The proof, supposedly, is that all those 1% fat-cat CEO managers in the banking business receive such large salaries for their "correct" decisions involving financial assets.

Alternatively, if the financial and economic system is nonergodic, then there is a positive role for government to regulate what decision makers can do so as to prevent them from mass destruction of themselves and other innocent bystanders – and also for government to take positive action when the herd behavior of decision makers is causing the economy to run off the cliff.

So this distinction between ergodic and nonergodic is essential if we are to build institutional structures that make running off the cliff almost impossible – and for the government to be ready to take action when some innovative fool(s) discovers a way to get around institutional barriers and starts to run the economy off the cliff.

**To Keynes the source of uncertainty was in the nature of the real – nonergodic – world. It had to do not only – or primarily – with the epistemological fact of us not knowing the things that today are unknown, but rather with the much deeper and far-reaching ontological fact that there often is no firm basis on which we can form quantifiable probabilities and expectations.**

## Sporadic blogging

26 July, 2012 at 09:32 | Posted in Varia | Leave a comment

Touring again – and I'm supposed to be on holiday! Regular blogging will be resumed early next week.

## The Great Obfuscation

26 July, 2012 at 09:26 | Posted in Economics | Leave a comment

As I commented yesterday, the Spanish and Italian prospects for the future look desperately bleak, and there is today every sign that the euro crisis is spinning out of control.

And still

"the euro is absolutely not in danger and the single currency is irreversible"

according to European Central Bank President Mario Draghi.

As if this wasn’t enough, Financial Times now reports:

Germany on Tuesday threw its considerable weight behind the reform and austerity programme of the Spanish government, in the face of a continuing surge in the cost of borrowing for Madrid, and strong protests against its spending cuts.

A joint statement by Wolfgang Schäuble, German finance minister, and Luis de Guindos, Spanish economy minister, condemned the high interest rates demanded for the sale of Spanish bonds as failing to reflect “the fundamentals of the Spanish economy, its growth potential and the sustainability of its public debt”.

…

Mr de Guindos flew to Berlin for the talks with the German finance minister as Miguel Angel Fernandez Ordonez, former governor of the Spanish central bank, launched a fierce criticism of his government in the Spanish parliament.

“In the first half of the year we have witnessed a collapse in confidence in Spain and its financial system to levels unimaginable seven months ago,” he said. “Now we are not only worse than Italy, but worse than Ireland, a country that has been rescued.”

Or, as Shakespeare had it:

though this be madness, yet there’s method in it.

## Oh dear, oh dear, Wren-Lewis gets it so wrong – again!

25 July, 2012 at 21:07 | Posted in Economics | 10 Comments

Commenting once again on my critique (here and here) of microfounded “New Keynesian” macroeconomics in general and on Wren-Lewis himself more specifically – Wren-Lewis writes on his blog (italics added):

Lars Syll gave a list recently [on heterodox alternatives to the microfounded macroeconomics that Wren-Lewis in an earlier post had intimated didn’t really exist]: Hyman Minsky, Michal Kalecki, Sidney Weintraub, Johan Åkerman, Gunnar Myrdal, Paul Davidson, Fred Lee, Axel Leijonhufvud, Steve Keen.

I cannot recall reading papers or texts by Akerman or Lee, but I have read at least something of all the others. I was also taught Neo-Ricardian economics when young, so I have read plenty by Robinson, Sraffa, Pasinetti etc. I do not know much about the Austrians.

But actually my concern is not what any particular author thought, but with the divide I talked about. Mainstream economists can and in some cases have learnt a lot from some of these authors, and a number of mainstream economists have acknowledged this. (One of these days I want to write a paper arguing that Leijonhufvud was the first New Keynesian economist.)

**Axel Leijonhufvud the first “New Keynesian”? No way! This is so wrong, so wrong.**

The last time I met Axel was in Roskilde and Copenhagen back in April 2008. We were both invited keynote speakers at the conference “Keynes 125 Years – What Have We Learned?” Axel’s speech was later published as *Keynes and the crisis* and contains the following thought-provoking passages:

So far I have argued that recent events should force us to re-examine recent monetary policy doctrine. Do we also need to reconsider modern macroeconomic theory in general? I should think so. Consider briefly a few of the issues.

The real interest rate … The problem is that the real interest rate does not exist in reality but is a constructed variable. What does exist is the money rate of interest from which one may construct a distribution of perceived real interest rates given some distribution of inflation expectations over agents. Intertemporal non-monetary general equilibrium (or finance) models deal in variables that have no real world counterparts. Central banks have considerable influence over money rates of interest as demonstrated, for example, by the Bank of Japan and now more recently by the Federal Reserve …

The representative agent. If all agents are supposed to have rational expectations, it becomes convenient to assume also that they all have the same expectation and thence tempting to jump to the conclusion that the collective of agents behaves as one. The usual objection to representative agent models has been that it fails to take into account well-documented systematic differences in behaviour between age groups, income classes, etc. In the financial crisis context, however, the objection is rather that these models are blind to the consequences of too many people doing the same thing at the same time, for example, trying to liquidate very similar positions at the same time. Representative agent models are peculiarly subject to fallacies of composition. The representative lemming is not a rational expectations intertemporal optimising creature. But he is responsible for the fat tail problem that macroeconomists have the most reason to care about …

For many years now, the main alternative to Real Business Cycle Theory has been a somewhat loose cluster of models given the label of New Keynesian theory. New Keynesians adhere on the whole to the same DSGE modeling technology as RBC macroeconomists but differ in the extent to which they emphasise inflexibilities of prices or other contract terms as sources of short-term adjustment problems in the economy. The “New Keynesian” label refers back to the “rigid wages” brand of Keynesian theory of 40 or 50 years ago. Except for this stress on inflexibilities this brand of contemporary macroeconomic theory has basically nothing Keynesian about it.

The obvious objection to this kind of return to an earlier way of thinking about macroeconomic problems is that the major problems that have had to be confronted in the last twenty or so years have originated in the financial markets – and prices in those markets are anything but “inflexible”. But there is also a general theoretical problem that has been festering for decades with very little in the way of attempts to tackle it. Economists talk freely about “inflexible” or “rigid” prices all the time, despite the fact that we do not have a shred of theory that could provide criteria for judging whether a particular price is more or less flexible than appropriate to the proper functioning of the larger system. More than seventy years ago, Keynes already knew that a high degree of downward price flexibility in a recession could entirely wreck the financial system and make the situation infinitely worse. But the point of his argument has never come fully to inform the way economists think about price inflexibilities …

I began by arguing that there are three things we should learn from Keynes … The third was to ask whether events proved that existing theory needed to be revised. On that issue, I conclude that dynamic stochastic general equilibrium theory has shown itself an intellectually bankrupt enterprise. But this does not mean that we should revert to the old Keynesian theory that preceded it (or adopt the New Keynesian theory that has tried to compete with it). What we need to learn from Keynes, instead, are these three lessons about how to view our responsibilities and how to approach our subject.

**Axel Leijonhufvud the first “New Keynesian” economist? Forget it!**
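Leijonhufvud’s remark that only the money rate of interest “exists” can be put in miniature: given one observable nominal rate, heterogeneous inflation expectations generate a whole distribution of perceived real rates via the Fisher approximation r ≈ i − π̂. The sketch below is illustrative only; the 4% nominal rate and the distribution of expectations are hypothetical assumptions, not estimates of anything:

```python
import random
import statistics

NOMINAL_RATE = 0.04  # the one observable money rate of interest (hypothetical 4%)

# Hypothetical dispersion of inflation expectations across agents:
# mean 2%, standard deviation 1 percentage point
rng = random.Random(0)
expected_inflation = [rng.gauss(0.02, 0.01) for _ in range(10000)]

# Each agent's perceived real rate via the Fisher approximation r ≈ i − π̂
perceived_real = [NOMINAL_RATE - pi for pi in expected_inflation]

print(f"mean perceived real rate: {statistics.mean(perceived_real):.4f}")
print(f"dispersion (std) of perceived real rates: {statistics.stdev(perceived_real):.4f}")
```

There is no single “the real interest rate” in this picture – only a distribution whose shape depends entirely on how expectations are distributed over agents, which is exactly Leijonhufvud’s point.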

## The euro “irreversible”? I’ll be dipped!

25 July, 2012 at 10:27 | Posted in Economics | 1 Comment

The Spanish and Italian yield curves look worrying all over. Ten-year yields day after day over 7% are no good sign. On the contrary. Everything indicates that these countries are in for a long period of recession. They have no growth. Debt-servicing costs are rising. And the ten-year spread against Germany widens and widens. There is today every sign that the euro crisis is spinning out of control, and that the GIIPS countries’ lack of monetary sovereignty will lead to the downfall of the euro.

And still the euro zone is said not to be in danger of breaking up, according to European Central Bank President **Mario Draghi**. Interviewed in *Le Monde*, Draghi says that

l’euro n’est absolument pas en danger et la monnaie unique est irréversible [the euro is absolutely not in danger and the single currency is irreversible].

I’ve lost count of how many times we’ve heard that before. And still things keep on getting worse and worse …
