Explained variance and Pythagoras’ theorem

31 Jul, 2012 at 14:39 | Posted in Statistics & Econometrics | Comments Off on Explained variance and Pythagoras’ theorem

In many statistical and econometric studies R² is used to measure goodness of fit – or more technically, the fraction of variance ”explained” by a regression.

But it’s actually a rather weird measure. As eminent mathematical statistician David Freedman writes:

The math is fine, but the concept is a little peculiar … Let’s take an example. Sacramento is about 78 miles from San Francisco, as the crow flies. Or, the crow could fly 60 miles East and 50 miles North, passing near Stockton at the turn. If we take the 60 and 50 as exact, Pythagoras tells us that the squared hypotenuse in the triangle is

60² + 50² = 3600 + 2500 = 6100 miles².

With “explained” as in “explained variance”, the geography lesson can be cruelly summarized. The area – squared distance – between San Francisco and Sacramento is 6100 miles², of which 3600 is explained by East …

The theory of explained variance boils down to Pythagoras’ theorem on the crow’s triangular flight. Explaining the area between San Francisco and Sacramento by East is zany, and explained variance may not be much better.
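Freedman’s point can be made concrete with a few lines of Python (a sketch of my own; the 60- and 50-mile legs are his, the toy regression data are made up): the share of squared distance “explained by East” is computed exactly the way R² is – as a ratio of sums of squares.

```python
import numpy as np

# Freedman's crow: 60 miles East, 50 miles North.
east, north = 60.0, 50.0
squared_distance = east**2 + north**2          # 6100 "square miles"
share_east = east**2 / squared_distance        # ~0.59 "explained by East"
print(f"squared distance: {squared_distance:.0f}, share 'explained' by East: {share_east:.2f}")

# R-squared is built the same way: a ratio of sums of squares.
rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 2.0 * x + rng.normal(size=200)             # made-up toy data, not from the post
slope, intercept = np.polyfit(x, y, 1)
fitted = intercept + slope * x
ss_explained = np.sum((fitted - y.mean())**2)
ss_total = np.sum((y - y.mean())**2)
print(f"R^2 = {ss_explained / ss_total:.2f}")
```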

Dumb and dumber – the Real Business Cycles version

31 Jul, 2012 at 12:19 | Posted in Economics | 3 Comments

Real business cycles theory (RBC) basically says that economic cycles are caused by technology-induced changes in productivity. It says that employment goes up or down because people choose to work more when productivity is high and less when it is low. This is of course nothing but pure nonsense – and how on earth those guys who promoted this theory could be awarded The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel is really beyond comprehension.

In yours truly’s History of Economic Theories (4th ed, 2007, p. 405) it was concluded that

the problem is that it has turned out to be very difficult to empirically verify the theory’s view of economic fluctuations as being effects of rational actors’ optimal intertemporal choices … Empirical studies have not been able to corroborate the assumption of the sensitivity of labour supply to changes in intertemporal relative prices. Most studies rather point to expected changes in real wages having only a rather small influence on the supply of labour.

And this is what Lawrence Summers – in Some Skeptical Observations on Real Business Cycle Theory – had to say about RBC: 

The increasing ascendancy of real business cycle theories of various stripes, with their common view that the economy is best modeled as a floating Walrasian equilibrium, buffeted by productivity shocks, is indicative of the depths of the divisions separating academic macroeconomists …

If these theories are correct, they imply that the macroeconomics developed in the wake of the Keynesian Revolution is well confined to the ashbin of history. And they suggest that most of the work of contemporary macroeconomists is worth little more than that of those pursuing astrological science …

The appearance of Ed Prescott’s stimulating paper, “Theory Ahead of Business Cycle Measurement,” affords an opportunity to assess the current state of real business cycle theory and to consider its prospects as a foundation for macroeconomic analysis …

My view is that business cycle models of the type urged on us by Prescott have nothing to do with the business cycle phenomena observed in the United States or other capitalist economies …

Prescott’s growth model is not an inconceivable representation of reality. But to claim that its parameters are securely tied down by growth and micro observations seems to me a gross overstatement. The image of a big loose tent flapping in the wind comes to mind …

In Prescott’s model, the central driving force behind cyclical fluctuations is technological shocks. The propagation mechanism is intertemporal substitution in employment. As I have argued so far, there is no independent evidence from any source for either of these phenomena …

Imagine an analyst confronting the market for ketchup. Suppose she or he decided to ignore data on the price of ketchup. This would considerably increase the analyst’s freedom in accounting for fluctuations in the quantity of ketchup purchased … It is difficult to believe that any explanation of fluctuations in ketchup sales that did not confront price data would be taken seriously, at least by hard-headed economists.

Yet Prescott offers an exercise in price-free economics … Others have confronted models like Prescott’s to data on prices with what I think can fairly be labeled dismal results. There is simply no evidence to support any of the price effects predicted by the model …

Improvement in the track record of macroeconomics will require the development of theories that can explain why exchange sometimes works and other times breaks down. Nothing could be more counterproductive in this regard than a lengthy professional detour into the analysis of stochastic Robinson Crusoes.

Thomas Sargent was awarded The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel for 2011 for his “empirical research on cause and effect in the macroeconomy”. In an interview with Sargent in The Region (September 2010), however, one could read the following defense of “modern macro” (my emphasis):

Sargent: I know that I’m the one who is supposed to be answering questions, but perhaps you can tell me what popular criticisms of modern macro you have in mind.

Rolnick: OK, here goes. Examples of such criticisms are that modern macroeconomics makes too much use of sophisticated mathematics to model people and markets; that it incorrectly relies on the assumption that asset markets are efficient in the sense that asset prices aggregate information of all individuals; that the faith in good outcomes always emerging from competitive markets is misplaced; that the assumption of “rational expectations” is wrongheaded because it attributes too much knowledge and forecasting ability to people; that the modern macro mainstay “real business cycle model” is deficient because it ignores so many frictions and imperfections and is useless as a guide to policy for dealing with financial crises; that modern macroeconomics has either assumed away or shortchanged the analysis of unemployment; that the recent financial crisis took modern macro by surprise; and that macroeconomics should be based less on formal decision theory and more on the findings of “behavioral economics.” Shouldn’t these be taken seriously?

Sargent: Sorry, Art, but aside from the foolish and intellectually lazy remark about mathematics, all of the criticisms that you have listed reflect either woeful ignorance or intentional disregard for what much of modern macroeconomics is about and what it has accomplished. That said, it is true that modern macroeconomics uses mathematics and statistics to understand behavior in situations where there is uncertainty about how the future will unfold from the past. But a rule of thumb is that the more dynamic, uncertain and ambiguous is the economic environment that you seek to model, the more you are going to have to roll up your sleeves, and learn and use some math. That’s life.

Are these the words of an empirical macroeconomist? I’ll be dipped! To me it sounds like the same old axiomatic-deductivist mumbo jumbo that parades as the economic science of today.

Neoclassical economic theory today is in the story-telling business whereby economic theorists create make-believe analogue models of the target system – usually conceived as the real economic system. This modeling activity is considered useful and essential. Since fully-fledged experiments on a societal scale as a rule are prohibitively expensive, ethically indefensible or unmanageable, economic theorists have to substitute experimenting with something else. To understand and explain relations between different entities in the real economy the predominant strategy is to build models and make things happen in these “analogue-economy models” rather than engineering things happening in real economies.

Formalistic deductive “Glasperlenspiel” can be very impressive and seductive. But in the realm of science it ought to be considered of little or no value to simply make claims about the model and lose sight of reality.

Neoclassical economics has long since given up on the real world and contents itself with proving things about thought-up worlds. Empirical evidence only plays a minor role in economic theory, where models largely function as a substitute for empirical evidence. But “facts kick”, as Gunnar Myrdal used to say. Hopefully humbled by the manifest failure of its theoretical pretences, the one-sided, almost religious, insistence on axiomatic-deductivist modeling as the only scientific activity worthy of pursuing in economics will give way to methodological pluralism based on ontological considerations rather than formalistic tractability.

When that day comes The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel will hopefully be awarded to a real macroeconomist and not to axiomatic-deductivist modellers like Thomas Sargent and other economists of that ilk in the efficient-market-rational-expectations camp.

Mario Draghi sends a cold shiver down my back

31 Jul, 2012 at 10:25 | Posted in Economics, Politics & Society | 1 Comment

 

The ECB is ready to do whatever it takes to preserve the euro.

The euro is already a monumental fiasco.

Draghi’s statement is not a promise. It’s a threat.

Non-ergodic economics, expected utility and the Kelly criterion (wonkish)

31 Jul, 2012 at 00:11 | Posted in Economics, Theory of Science & Methodology | 6 Comments

Suppose I want to play a game. Let’s say we are tossing a coin. If heads comes up, I win a dollar, and if tails comes up, I lose a dollar. Suppose further that I believe I know that the coin is asymmetrical and that the probability of getting heads (p) is greater than 50% – say 60% (0.6) – while the bookmaker assumes that the coin is totally symmetric. How much of my bankroll (T), should I optimally invest in this game?

A strict neoclassical utility-maximizing economist would suggest that my goal should be to maximize the expected value of my bankroll (wealth), and according to this view, I ought to bet my entire bankroll.

Does that sound rational? Most people would answer no. The risk of losing is so high that, with high likelihood, I would be bankrupt after only a few games – the expected number of games until my first loss is 1/(1 – p), which in this case equals 2.5. The expected-value-maximizing economist does not seem to have a particularly attractive approach.
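A minimal simulation – my own illustration, not part of the original argument – of what betting the whole bankroll means in practice: with p = 0.6 the average number of games until the first (ruinous) loss comes out close to 1/(1 – p) = 2.5.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.6                       # probability of winning a single toss
n_sim = 100_000

# The number of tosses until the first loss is geometrically distributed
# with success probability (1 - p); its mean is 1 / (1 - p) = 2.5.
rounds_until_ruin = rng.geometric(1 - p, size=n_sim)
print(f"average rounds until first loss: {rounds_until_ruin.mean():.2f}  (theory: {1/(1-p):.2f})")
```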

So what’s the alternative? One possibility is to apply the so-called Kelly strategy – after the American physicist and information theorist John L. Kelly, who in the article A New Interpretation of Information Rate (1956) suggested this criterion for how to optimize the size of the bet – under which the optimum is to invest a specific fraction (x) of wealth (T) in each game. How do we arrive at this fraction?

When I win, I have (1 + x) times more than before, and when I lose, (1 – x) times less. After n rounds, when I have won v times and lost n – v times, my new bankroll (W) is

(1) W = T (1 + x)^v (1 – x)^(n – v).

The bankroll increases multiplicatively – “compound interest” – and the long-term average growth rate of my wealth can then easily be calculated by taking the logarithm of (1), which gives

(2) log (W/T) = v log (1 + x) + (n – v) log (1 – x).

If we divide both sides by n we get

(3) [log (W / T)] / n = [v log (1 + x) + (n – v) log (1 – x)] / n

The left hand side now represents the average growth rate (g) in each game. On the right hand side the ratio v/n is equal to the percentage of bets that I won, and when n is large, this fraction will be close to p. Similarly, (n – v)/n is close to (1 – p). When the number of bets is large, the average growth rate is

(4) g = p log (1 + x) + (1 – p) log (1 – x).

Now we can easily determine the value of x that maximizes g:

(5) dg/dx = d[p log (1 + x) + (1 – p) log (1 – x)]/dx = p/(1 + x) – (1 – p)/(1 – x) = 0

which gives

(6) x = p – (1 – p)

Since p is the probability that I will win, and (1 – p) is the probability that I will lose, the Kelly strategy says that to optimize the growth rate of your bankroll (wealth) you should invest a fraction of the bankroll equal to the difference between the probability that you will win and the probability that you will lose. In our example, this means that in each game I should bet the fraction x = 0.6 – (1 – 0.6) = 0.2 – that is, 20% of my bankroll. The optimal average growth rate becomes

(7) 0.6 log (1.2) + 0.4 log (0.8) ≈ 0.02.

If I bet 20% of my wealth on each coin toss, then after 10 games I will on average have e^(10 · 0.02) ≈ 1.22 times as much as when I started.
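A quick numerical check of (4)–(7), assuming natural logarithms (the grid search is my own sketch): maximizing g over x recovers the Kelly fraction x = 2p – 1 = 0.2, a growth rate g ≈ 0.02 per game, and a ten-game wealth multiple of about e^(10g) ≈ 1.22.

```python
import numpy as np

p = 0.6
x_grid = np.linspace(0.0, 0.99, 10_000)
g = p * np.log(1 + x_grid) + (1 - p) * np.log(1 - x_grid)   # equation (4)

x_star = x_grid[g.argmax()]
g_star = g.max()
print(f"optimal fraction x ≈ {x_star:.3f}   (analytic: {2*p - 1:.3f})")
print(f"growth rate g ≈ {g_star:.4f} per game")
print(f"wealth multiple after 10 games ≈ {np.exp(10 * g_star):.2f}")
```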

This game strategy will give us an outcome in the long run that is better than if we use a strategy built on the neoclassical economic theory of choice under uncertainty (risk) – expected-value maximization. If we bet all our wealth in each game we will most likely lose our fortune, but because with low probability we end up with a very large fortune, the expected value is still high. For a real-life player – who has very little to gain from this kind of ensemble average – it is more relevant to look at the time average of what he may be expected to win (in our game the two averages coincide only if we assume that the player has a logarithmic utility function). What good does it do me that my coin tossing maximizes an expected value if I might have gone bankrupt after four games played? If I try to maximize the expected value, the probability of bankruptcy soon gets close to one. Better then to invest 20% of my wealth in each game and maximize my long-term average wealth growth!
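The difference between the ensemble average and what a single player typically experiences can be made concrete with a small simulation (my own sketch; bankroll normalized to 1, ten games): betting everything yields the highest mean across players, but almost every individual path ends in ruin, while the 20% Kelly fraction makes the typical (median) bankroll grow.

```python
import numpy as np

rng = np.random.default_rng(42)
p, n_games, n_players = 0.6, 10, 100_000
wins = rng.random((n_players, n_games)) < p        # one row of coin tosses per player

def final_bankroll(fraction):
    # Multiply by (1 + x) on a win and (1 - x) on a loss, starting from a bankroll of 1.
    factors = np.where(wins, 1 + fraction, 1 - fraction)
    return factors.prod(axis=1)

for label, x in [("bet everything", 1.0), ("Kelly fraction", 0.2)]:
    w = final_bankroll(x)
    ruined = (w < 1e-6).mean()
    print(f"{label:15s} mean = {w.mean():6.2f}   median = {np.median(w):6.3f}   ruined = {100*ruined:5.1f}%")
```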

On a more economic-theoretical level, the Kelly strategy highlights the problems concerning the neoclassical theory of expected utility that I have raised before (e.g. in Why expected utility theory is wrong).

In the neoclassical theory of expected utility, one thinks in terms of “parallel universes” and asks what the expected return of an investment is, calculated as an average over these “parallel universes”. In our coin-toss example, it is as if one supposes that various “I”s are tossing a coin and that the losses of many of them will be offset by the huge profits one of these “I”s makes. But this ensemble average does not work for an individual, for whom a time average better reflects the experience made in the “non-parallel universe” in which we live.

The Kelly strategy gives a more realistic answer, where one thinks in terms of the only universe we actually live in, and asks what the expected return of an investment is, calculated as an average over time.

Since we cannot go back in time – entropy and the “arrow of time” make this impossible – and the bankruptcy option is always at hand (extreme events and “black swans” are always possible), we have nothing to gain from thinking in terms of ensembles.

Actual events follow a fixed pattern of time, where events are often linked in a multiplicative process (as e.g. investment returns with “compound interest”) which is basically non-ergodic.

Instead of arbitrarily assuming that people have a certain type of utility function – as in the neoclassical theory – the Kelly criterion shows that we can obtain a less arbitrary and more accurate picture of real people’s decisions and actions by basically assuming that time is irreversible. When the bankroll is gone, it’s gone. The fact that in a parallel universe it could conceivably have been refilled is of little comfort to those who live in the one and only possible world that we call the real world.
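Here is a minimal sketch – my own, with made-up numbers – of such a multiplicative, non-ergodic process: each period wealth is multiplied by 1.5 or 0.5 with equal probability, so the ensemble average stays put while the time-average growth rate is negative and the typical path decays towards zero.

```python
import numpy as np

rng = np.random.default_rng(7)
n_paths, n_steps = 100_000, 20
factors = rng.choice([1.5, 0.5], size=(n_paths, n_steps))    # +50% or -50% each period
wealth = factors.prod(axis=1)                                 # every path starts at 1

print(f"ensemble average: {wealth.mean():.2f}   (theory: {(0.5*1.5 + 0.5*0.5)**n_steps:.2f})")
print(f"median (typical) path: {np.median(wealth):.3f}")
print(f"time-average growth per step: {0.5*np.log(1.5) + 0.5*np.log(0.5):.3f}")
```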

Our coin toss example can be applied to more traditional economic issues. If we think of an investor, we can basically describe his situation in terms of our coin toss. What fraction (x) of his assets (T) should an investor – who is about to make a large number of repeated investments – bet on his judgment that he can evaluate an investment better (p = 0.6) than the market (p = 0.5)? The greater the x, the greater the leverage – but also the greater the risk. Since p is the probability that his investment valuation is correct and (1 – p) is the probability that the market’s valuation is correct, the Kelly strategy says that he optimizes the growth rate of his investments by investing a fraction of his assets equal to the difference between the probability that he will “win” and the probability that he will “lose”. In our example this means that at each investment opportunity he should invest the fraction x = 0.6 – (1 – 0.6), i.e. 20% of his assets. The optimal average growth rate is then about 2% per investment round (0.6 log (1.2) + 0.4 log (0.8) ≈ 0.02, as in (7)).

Kelly’s criterion shows that because we cannot go back in time, we should not take excessive risks. High leverage increases the risk of bankruptcy. This should also be a warning for the financial world, where the constant quest for ever greater leverage – and risk – creates extensive and recurrent systemic crises. A more appropriate level of risk-taking is a necessary ingredient in any policy aiming to curb excessive risk-taking.

Keynes and Knight on uncertainty – ontology vs. epistemology

29 Jul, 2012 at 20:41 | Posted in Economics, Theory of Science & Methodology | 10 Comments

A couple of weeks ago yours truly had an interesting discussion – on the Real-World Economics Review Blog – with Paul Davidson, founder and editor of the Journal of Post Keynesian Economics, on uncertainty and ergodicity. It all started with me commenting on Davidson’s article Is economics a science? Should economics be rigorous?:

LPS:

Davidson’s article is a nice piece – but ergodicity is a difficult concept that many students of economics have problems understanding. To understand real-world ”non-routine” decisions and unforeseeable changes in behaviour, ergodic probability distributions are of no avail. In a world full of genuine uncertainty – where real historical time rules the roost – the probabilities that ruled the past are not those that will rule the future.

Time is what prevents everything from happening at once. To simply assume that economic processes are ergodic and concentrate on ensemble averages – and a fortiori in any relevant sense timeless – is not a sensible way for dealing with the kind of genuine uncertainty that permeates open systems such as economies.

When you assume the economic processes to be ergodic, ensemble and time averages are identical. Let me give an example: Assume we have a market with an asset priced at 100 €. Then imagine the price first rises by 50% and then later falls by 50%. The ensemble average for this asset would be 100 € – because here we envision two parallel universes (markets) where in one universe (market) the asset price falls by 50% to 50 €, and in the other it rises by 50% to 150 €, giving an average of 100 € ((150 + 50)/2). The time average for this asset would be 75 € – because here we envision one universe (market) where the asset price first rises by 50% to 150 €, and then falls by 50% to 75 € (0.5 × 150).

From the ensemble perspective nothing really, on average, happens. From the time perspective lots of things really, on average, happen.

Assuming ergodicity there would have been no difference at all.

Just in case you think this is just an academic quibble without repercussions for our real lives, let me quote from an article by physicist and mathematician Ole Peters in the Santa Fe Institute Bulletin from 2009 – “On Time and Risk” – that makes it perfectly clear that the flaw in thinking about uncertainty in terms of “rational expectations” and ensemble averages has had real repercussions on the functioning of the financial system:

“In an investment context, the difference between ensemble averages and time averages is often small. It becomes important, however, when risks increase, when correlation hinders diversification, when leverage pumps up fluctuations, when money is made cheap, when capital requirements are relaxed. If reward structures—such as bonuses that reward gains but don’t punish losses, and also certain commission schemes—provide incentives for excessive risk, problems arise. This is especially true if the only limits to risk-taking derive from utility functions that express risk preference, instead of the objective argument of time irreversibility. In other words, using the ensemble average without sufficiently restrictive utility functions will lead to excessive risk-taking and eventual collapse. Sound familiar?”

PD:

Lars, if the stochastic process is ergodic, then for an infinite number of realizations the time and space (ensemble) averages will coincide. An ensemble is a set of samples drawn at a fixed point of time from a universe of realizations. For finite realizations, the time and space statistical averages tend to converge (with a probability of one) the more data one has.

Even in physics there are some processes that physicists recognize are governed by nonergodic stochastic processes. [See A. M. Yaglom, An Introduction to Stationary Random Functions (Prentice-Hall, 1962).]

I do object to Ole Peters’ exposition quoted above, where he talks about “when risks increase”. Nonergodic systems are not about increasing or decreasing risk in the sense of differing variances of probability distributions. They are about indicating that any probability distribution based on past data cannot be reliably used to indicate the probability distribution governing any future outcome. In other words, even if we could know that the future probability distribution will have a smaller variance (“lower risk”) than the past calculated probability distribution, the past distribution is still not a reliable guide to future statistical means and other moments around the means.

LPS:

Paul, re nonergodic processes in physics I would even say that most processes definitely are nonergodic. Re Ole Peters, I totally agree that the important thing about real social and economic processes being nonergodic is that uncertainty – not risk – rules the roost. That was something both Keynes and Knight basically said in their 1921 books. But I still think that Peters’ discussion is a good example of how thinking about uncertainty in terms of “rational expectations” and “ensemble averages” has had seriously bad repercussions on the financial system.

PD:

Lars, there is a difference between the uncertainty concept developed by Keynes and the one developed by Knight.

As I have pointed out, Keynes’s concept of uncertainty involves a nonergodic stochastic process. On the other hand, Knight’s uncertainty – like Taleb’s black swan – assumes an ergodic process. The difference is that for Knight (and Taleb) the uncertain outcome lies so far out in the tail of the unchanging (over time) probability distribution that it appears empirically to be [in Knight’s terminology] “unique”. In other words, like Taleb’s black swan, the uncertain outcome already exists in the probability distribution but is so rarely observed that it may take several lifetimes for one observation – making that observation “unique”.

In the latest edition of Taleb’s book, he was forced to concede that philosophically there is a difference between a nonergodic system and a black swan ergodic system – but then he waves away the problem with the claim that the difference is irrelevant.

LPS:

Paul, on the whole, I think you’re absolutely right on this. Knight’s uncertainty concept has an epistemological foundation and Keynes’s definitely an ontological one. Of course this also has repercussions on the issue of ergodicity in a strict methodological and mathematical-statistical sense. I think Keynes’s view is the most warranted of the two.

BUT – from a “practical” point of view I have to agree with Taleb. Because if there is no reliable information on the future, whether you talk of epistemological or ontological uncertainty, you can’t calculate probabilities.

The most interesting and far-reaching difference between the epistemological and the ontological view is that if you subscribe to the former, Knightian view – as Taleb and “black swan” theorists basically do – you open the door to the mistaken belief that with better information and greater computing power we should somehow always be able to calculate probabilities and describe the world as an ergodic universe. As both you and Keynes have convincingly argued, that is ontologically just not possible.

PD:

Lars, your last sentence says it all. If you believe it is an ergodic system and epistemology is the only problem, then you should urge more transparency, better data collection, hiring more “quants” on Wall Street to generate “better” risk-management computer programs, etc. – and above all keep the government out of regulating financial markets – since all the government can do is foul up the outcome that the ergodic process is ready to deliver.

Long live Stiglitz and the call for transparency to end asymmetric information — and permit all to know the epistemological solution for the ergodic process controlling the economy.

Or, as Milton Friedman would say, those who make decisions “as if” they knew the ergodic stochastic process create an optimum market solution – while those who make mistakes in trying to figure out the ergodic process are like the dinosaurs, doomed to fail and die off – leaving only the survival of the fittest for a free market economy to prosper on. The proof, supposedly, is that all those 1% fat-cat CEO managers in the banking business receive such large salaries for their “correct” decisions involving financial assets.

Alternatively, if the financial and economic system is nonergodic, then there is a positive role for government to regulate what decision makers can do so as to prevent them from mass destruction of themselves and other innocent bystanders – and also for government to take positive action when the herd behavior of decision makers is causing the economy to run off the cliff.

So this distinction between ergodic and nonergodic is essential if we are to build institutional structures that make running off the cliff almost impossible – and for the government to be ready to take action when some innovative fool(s) discovers a way to get around institutional barriers and starts to run the economy off the cliff.

To Keynes the source of uncertainty was in the nature of the real – nonergodic – world. It had to do not only – or primarily – with the epistemological fact of us not knowing the things that today are unknown, but rather with the much deeper and far-reaching ontological fact that there often is no firm basis on which we can form quantifiable probabilities and expectations.

Sporadic blogging

26 Jul, 2012 at 09:32 | Posted in Varia | Comments Off on Sporadic blogging

Touring again – and I’m supposed to be on holiday! Regular blogging will be resumed early next week.
 

The Great Obfuscation

26 Jul, 2012 at 09:26 | Posted in Economics | Comments Off on The Great Obfuscation

As I commented yesterday, the Spanish and Italian prospects for the future look desperately bleak, and there is today every sign that the euro crisis is spinning out of control.

And still

the euro is absolutely not in danger and the single currency is irreversible

according to European Central Bank President Mario Draghi.

As if this wasn’t enough, the Financial Times now reports:

Germany on Tuesday threw its considerable weight behind the reform and austerity programme of the Spanish government, in the face of a continuing surge in the cost of borrowing for Madrid, and strong protests against its spending cuts.

A joint statement by Wolfgang Schäuble, German finance minister, and Luis de Guindos, Spanish economy minister, condemned the high interest rates demanded for the sale of Spanish bonds as failing to reflect “the fundamentals of the Spanish economy, its growth potential and the sustainability of its public debt”.

Mr de Guindos flew to Berlin for the talks with the German finance minister as Miguel Angel Fernandez Ordonez, former governor of the Spanish central bank, launched a fierce criticism of his government in the Spanish parliament.

“In the first half of the year we have witnessed a collapse in confidence in Spain and its financial system to levels unimaginable seven months ago,” he said. “Now we are not only worse than Italy, but worse than Ireland, a country that has been rescued.”

Or, as Shakespeare had it:

though this be madness, yet there’s method in it.

Oh dear, oh dear, Wren-Lewis gets it so wrong – again!

25 Jul, 2012 at 21:07 | Posted in Economics | 10 Comments

Commenting once again on my critique (here and here) of microfounded “New Keynesian” macroeconomics in general, and of Wren-Lewis himself more specifically, Wren-Lewis writes on his blog (italics added):

Lars Syll gave a list recently [on heterodox alternatives to the microfounded macroeconomics that Wren-Lewis in an earlier post had intimated didn’t really exist]: Hyman Minsky, Michal Kalecki, Sidney Weintraub, Johan Åkerman, Gunnar Myrdal, Paul Davidson, Fred Lee, Axel Leijonhufvud, Steve Keen.

I cannot recall reading papers or texts by Akerman or Lee, but I have read at least something of all the others. I was also taught Neo-Ricardian economics when young, so I have read plenty by Robinson, Sraffa, Pasinetti etc. I do not know much about the Austrians.

But actually my concern is not what any particular author thought, but with the divide I talked about. Mainstream economists can and in some cases have learnt a lot from some of these authors, and a number of mainstream economists have acknowledged this. (One of these days I want to write a paper arguing that Leijonhufvud was the first New Keynesian economist.)

Axel Leijonhufvud the first “New Keynesian”? No way! This is so wrong, so wrong.

The last time I met Axel was in Roskilde and Copenhagen back in April 2008, where we were both invited keynote speakers at the conference “Keynes 125 Years – What Have We Learned?” Axel’s speech was later published as Keynes and the crisis and contains the following thought-provoking passages:

So far I have argued that recent events should force us to re-examine recent monetary policy doctrine. Do we also need to reconsider modern macroeconomic theory in general? I should think so. Consider briefly a few of the issues.

The real interest rate … The problem is that the real interest rate does not exist in reality but is a constructed variable. What does exist is the money rate of interest from which one may construct a distribution of perceived real interest rates given some distribution of inflation expectations over agents. Intertemporal non-monetary general equilibrium (or finance) models deal in variables that have no real world counterparts. Central banks have considerable influence over money rates of interest as demonstrated, for example, by the Bank of Japan and now more recently by the Federal Reserve …

The representative agent. If all agents are supposed to have rational expectations, it becomes convenient to assume also that they all have the same expectation and thence tempting to jump to the conclusion that the collective of agents behaves as one. The usual objection to representative agent models has been that it fails to take into account well-documented systematic differences in behaviour between age groups, income classes, etc. In the financial crisis context, however, the objection is rather that these models are blind to the consequences of too many people doing the same thing at the same time, for example, trying to liquidate very similar positions at the same time. Representative agent models are peculiarly subject to fallacies of composition. The representative lemming is not a rational expectations intertemporal optimising creature. But he is responsible for the fat tail problem that macroeconomists have the most reason to care about …

For many years now, the main alternative to Real Business Cycle Theory has been a somewhat loose cluster of models given the label of New Keynesian theory. New Keynesians adhere on the whole to the same DSGE modeling technology as RBC macroeconomists but differ in the extent to which they emphasise inflexibilities of prices or other contract terms as sources of short-term adjustment problems in the economy. The “New Keynesian” label refers back to the “rigid wages” brand of Keynesian theory of 40 or 50 years ago. Except for this stress on inflexibilities this brand of contemporary macroeconomic theory has basically nothing Keynesian about it.

The obvious objection to this kind of return to an earlier way of thinking about macroeconomic problems is that the major problems that have had to be confronted in the last twenty or so years have originated in the financial markets – and prices in those markets are anything but “inflexible”. But there is also a general theoretical problem that has been festering for decades with very little in the way of attempts to tackle it. Economists talk freely about “inflexible” or “rigid” prices all the time, despite the fact that we do not have a shred of theory that could provide criteria for judging whether a particular price is more or less flexible than appropriate to the proper functioning of the larger system. More than seventy years ago, Keynes already knew that a high degree of downward price flexibility in a recession could entirely wreck the financial system and make the situation infinitely worse. But the point of his argument has never come fully to inform the way economists think about price inflexibilities …

I began by arguing that there are three things we should learn from Keynes … The third was to ask whether events proved that existing theory needed to be revised. On that issue, I conclude that dynamic stochastic general equilibrium theory has shown itself an intellectually bankrupt enterprise. But this does not mean that we should revert to the old Keynesian theory that preceded it (or adopt the New Keynesian theory that has tried to compete with it). What we need to learn from Keynes, instead, are these three lessons about how to view our responsibilities and how to approach our subject.

Axel Leijonhufvud the first “New Keynesian” economist? Forget it!

The euro “irreversible”? I’ll be dipped!

25 Jul, 2012 at 10:27 | Posted in Economics | 1 Comment

The Spanish and Italian yield curves look worrying all over. Ten-year yields day after day above 7% are no good sign. On the contrary. Everything indicates that these countries are in for a long period of recession. They have no growth. Debt servicing costs are rising. And the ten-year spread against Germany widens and widens. There is today every sign that the euro crisis is spinning out of control, and that the GIIPS countries’ lack of monetary sovereignty will lead to the downfall of the euro.

And still the euro zone is said not to be in danger of breaking up, according to European Central Bank President Mario Draghi. Interviewed in Le Monde, Draghi says that

the euro is absolutely not in danger and the single currency is irreversible.

I’ve lost count of how many times we’ve heard that before. And still things keep getting worse and worse …

My favourite book on Keynes

24 Jul, 2012 at 22:10 | Posted in Economics | 3 Comments

Keynes’s economic theory is intimately connected with the epistemological and methodological views he had already presented in his Treatise on Probability. To Keynes, economic theory is always unsatisfactory if it is based on the scientist distancing himself from a reality characterized by our knowledge of the future being fluctuating, vague and uncertain. The main difference between Keynes and neoclassical macroeconomics is – at the deepest level – centered on this point.

In a world in equilibrium there is no difference between now and then. In such a world there is no need for Keynes. But it is also a fact that the cradle of equilibrium analysis silences all really interesting economic questions.

Amartya Sen on the state of modern economics

24 Jul, 2012 at 21:29 | Posted in Economics | 3 Comments

Interviewed by Olaf Storbeck and Dorit Hess earlier this year, Nobel laureate Amartya Sen made some interesting remarks on the state of modern economics:

Question: Professor Sen, do you have the impression that economists and economic policy makers are learning the right lessons from the most severe economic and financial crisis since the Great Depression?

Answer: I don’t think that at all. I’m quite disappointed by the nature of economic thinking as well as social thinking that connects economics with politics.

You make a lot of references to old economic thinkers like Smith, Keynes and so on. However, if you look at the current economic research that is published in the journals and taught at universities, the history of economic thought does not play a big role anymore…

Yes, absolutely. The history of economic thought has been woefully neglected by the profession in the last decades. This has been one of the major mistakes of the profession. One of the earliest reminders that we are going in the wrong direction has come from Kenneth Arrow about 30 years ago when he said: These days, I get surprised when I find the students don’t seem to know any economics that was written 25 or 30 years ago.

Is there any hope that this trend can be reversed?

Yes, I’m quite optimistic in this regard. I get the impression that this seems to be getting corrected right now. I’m particularly delighted that the corrective has come to a great extent from student interest. I’m very struck by the fact that at the university where I teach – Harvard – the demand for more history of economic thought has mostly come from students. As a result there is a lot more attempt by the department of economics as well as history and government to look for the history of political economy. Last year, along with my wife Emma Rothschild, I offered a course on Adam Smith’s philosophy and political economy. It drew a lot of interest and we got some of the finest students at Harvard.

Do you think the focus on mathematics in current economics is the flip side of the neglect of the history of economic thought?

I don’t think that there is any conflict between mathematical reasoning and being interested in the history of thought. Many of our early thinkers were quite mathematical. The connection between mathematics and economics is very strong, and there is no reason to be ashamed of it. What is to be avoided is to be concentrated only on mathematical economics. We must not neglect the insights that come from parts of the subject where mathematics is not sensible to use and different kinds of reasoning are useful. I don’t think the conflict is between mathematics and other kinds of methods. The conflict is between taking an integrated, broad, comprehensive view as opposed to a narrow view whether it is mathematical or anti-mathematical.

Rigour – a poor substitute for relevance and realism

24 Jul, 2012 at 18:07 | Posted in Economics, Theory of Science & Methodology | 2 Comments

The mathematization of economics since WW II has made mainstream – neoclassical – economists more or less obsessed with formal, deductive-axiomatic models. Confronted with the critique that they do not solve real problems, they often react as Saint-Exupéry‘s Great Geographer, who, in response to the questions posed by The Little Prince, says that he is too occupied with his scientific work to be able to say anything about reality. Confronted with economic theory’s lack of relevance and inability to tackle real problems, one retreats into the wonderful world of economic models. One goes into the “shack of tools” – as my old mentor Erik Dahmén used to say – and stays there. While the economic problems in the world around us steadily increase, one is rather happily playing along with the latest toys in the mathematical toolbox.

Paul Krugman has been having similar critical thoughts on our “queen of social science”. In a piece called Irregular Economics he writes:

Why, exactly, are we to have such faith in “regular economics”? What is the compelling evidence that the vision of a competitive, efficient economy allocating resources to the right uses is actually a good description of the world we live in?

I mean, it’s a lovely model, and one I, like everyone else in economics, use a lot. But I would not have said that it’s a model backed by lots of evidence. We do know that demand curves generally slope down; it’s a lot harder to give good examples of supply curves that slope up (as a textbook author, believe me, I’ve looked); and it’s a very long way from there to the vision of Pareto efficiency and all that which Barro wants us to take as the true economics. Realistically, imperfect competition, market failure, and more are everywhere.

Meanwhile, there’s actually a lot of evidence for a broadly Keynesian view of the world. Not, to be fair, for fiscal policy, mainly because clean fiscal experiments are rare. But there’s huge evidence for sticky prices, lots of evidence that monetary shocks have real effects — and it’s hard to produce a coherent model in which that’s true that doesn’t also leave room for fiscal policy.

In short, there’s no reason at all to consider microeconomics the “real” economics and macroeconomics some kind of flaky impostor. Yes, micro is a lot more rigorous — but if it’s rigorously wrong, who cares?

Instead of making the model the message, I think we are better served by economists who more than anything else try to contribute to solving real problems. And then the motto of John Maynard Keynes is more valid than ever:

It is better to be vaguely right than precisely wrong

Causality and economics (wonkish)

24 Jul, 2012 at 10:54 | Posted in Economics, Theory of Science & Methodology | 1 Comment

A few years ago Armin Falk and James Heckman published an acclaimed article titled “Lab Experiments Are a Major Source of Knowledge in the Social Sciences” in the journal Science. The authors – both renowned economists – argued that both field experiments and laboratory experiments are basically facing the same problems in terms of generalizability and external validity – and that a fortiori it is impossible to say that one would be better than the other.

What strikes me when reading both Falk & Heckman and advocates of field experiments – such as John List and Steven Levitt – is that field studies and experiments are both very similar to theoretical models. They all share the same basic problem: they are built on rather artificial conditions and face a trade-off between internal and external validity. The more artificial the conditions, the higher the internal validity – but also the lower the external validity. The more we rig experiments/field studies/models to avoid “confounding factors”, the less the conditions are reminiscent of the real “target system”. To that extent, I also believe that Falk & Heckman are right in their comments on the discussion of field vs. lab experiments in terms of realism – the nodal issue is not realism per se, but basically how economists using different isolation strategies in different “nomological machines” attempt to learn about causal relationships. In contrast to Falk & Heckman and to advocates of field experiments such as List and Levitt, I doubt the generalizability of both research strategies, because the probability is high that causal mechanisms are different in different contexts, and lack of homogeneity/stability/invariance doesn’t give us warranted export licenses to the “real” societies or economies.

If you mainly conceive of experiments or field studies as heuristic tools, the dividing line between, say, Falk & Heckman and List or Levitt is probably difficult to perceive.

But if we see experiments or field studies as theory tests or models that ultimately aspire to say something about the real “target system”, then the problem of external validity is central (and was for a long time also a key reason why behavioural economists had trouble getting their research results published).

Assume that you have examined how the work performance (A) of Chinese workers is affected by some treatment (B). How can we extrapolate/generalize to new samples outside the original population (e.g. to the US)? How do we know that any replication attempt “succeeds”? How do we know when these replicated experimental results can be said to justify inferences made about samples from the original population? If, for example, P(A|B) is the conditional density function for the original sample, and we are interested in making an extrapolative prediction of E[P(A|B)], how can we know that the new sample’s density function is identical with the original? Unless we can give some really good argument for this being the case, inferences built on P(A|B) are not really saying anything about the target system’s P′(A|B).

As I see it, this is the heart of the matter. External validity/extrapolation/generalization is founded on the assumption that we can make inferences based on P(A|B) that are exportable to other populations for which P′(A|B) applies. Sure, if one can convincingly show that P and P′ are similar enough, the problems are perhaps surmountable. But arbitrarily just introducing functional specification restrictions of the type invariance/stability/homogeneity is, at least for an epistemological realist, far from satisfactory. And often it is – unfortunately – exactly this that I see when I look at neoclassical economists’ models/experiments/field studies.
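A stylized simulation – entirely my own construction, with hypothetical numbers – of this extrapolation problem: a treatment effect estimated in an “original” population where some background characteristic is common is exported to a “target” population where that characteristic is rare, and the exported estimate badly overstates the effect.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50_000

def mean_outcome(share_with_factor, treated):
    # Hypothetical setup: the "treatment" B only raises performance for workers
    # who have some background characteristic C; populations differ in how common C is.
    has_factor = rng.random(n) < share_with_factor
    effect = np.where(has_factor, 2.0, 0.0)
    return (treated * effect + rng.normal(size=n)).mean()

# "Original" population: 80% have the characteristic.
effect_original = mean_outcome(0.8, 1) - mean_outcome(0.8, 0)
# "Target" population: only 10% have it.
effect_target = mean_outcome(0.1, 1) - mean_outcome(0.1, 0)

print(f"estimated effect in the original population: {effect_original:.2f}")
print(f"actual effect in the target population:      {effect_target:.2f}")
```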

By this I do not mean to say that empirical methods per se are so problematic that they can never be used. On the contrary, I am basically – though not without reservations – in favour of the increased use of experiments and field studies within economics. Not least as an alternative to completely barren “bridge-less” axiomatic-deductive theory models. My criticism is more about aspiration levels and what we believe that we can achieve with our mediational epistemological tools and methods in the social sciences.

Many ‘experimentalists’ claim that it is easy to replicate experiments under different conditions and therefore a fortiori easy to test the robustness of experimental results. But is it really that easy? If, in the example given above, we run a test and find that our predictions were not correct – what can we conclude? That B “works” in China but not in the US? Or that B “works” in a backward agrarian society, but not in a post-modern service society? That B “worked” in the field study conducted in 2008 but not in 2012? Population selection is almost never simple. Had the problem of external validity only been about inference from sample to population, this would be no critical problem. But the really interesting inferences are those we try to make from specific labs/experiments/fields to the specific real-world situations/institutions/structures that we are interested in understanding or (causally) explaining. And then the population problem is more difficult to tackle.

Everyone – both “labs” and “experimentalists” – should consider the following lines from David Salsburg’s The Lady Tasting Tea (Henry Holt 2001:146):

In Kolmogorov’s axiomatization of probability theory, we assume there is an abstract space of elementary things called ‘events’ … If a measure on the abstract space of events fulfills certain axioms, then it is a probability. To use probability in real life, we have to identify this space of events and do so with sufficient specificity to allow us to actually calculate probability measurements on that space … Unless we can identify Kolmogorov’s abstract space, the probability statements that emerge from statistical analyses will have many different and sometimes contrary meanings.

Evidence-based theories and policies are highly valued nowadays. Randomization is supposed to best control for bias from unknown confounders. The received opinion is that evidence based on randomized experiments therefore is the best.

More and more economists have also lately come to advocate randomization as the principal method for ensuring valid causal inferences.

Renowned econometrician Ed Leamer has responded to these claims, maintaining that randomization is not sufficient and that the hopes of a better empirical and quantitative macroeconomics are to a large extent illusory. Randomization – just like econometrics – promises more than it can deliver, basically because it requires assumptions that in practice are not possible to maintain:

We economists trudge relentlessly toward Asymptopia, where data are unlimited and estimates are consistent, where the laws of large numbers apply perfectly and where the full intricacies of the economy are completely revealed. But it’s a frustrating journey, since, no matter how far we travel, Asymptopia remains infinitely far away. Worst of all, when we feel pumped up with our progress, a tectonic shift can occur, like the Panic of 2008, making it seem as though our long journey has left us disappointingly close to the State of Complete Ignorance whence we began.

The pointlessness of much of our daily activity makes us receptive when the Priests of our tribe ring the bells and announce a shortened path to Asymptopia … We may listen, but we don’t hear, when the Priests warn that the new direction is only for those with Faith, those with complete belief in the Assumptions of the Path. It often takes years down the Path, but sooner or later, someone articulates the concerns that gnaw away in each of us and asks if the Assumptions are valid … Small seeds of doubt in each of us inevitably turn to despair and we abandon that direction and seek another …

Ignorance is a formidable foe, and to have hope of even modest victories, we economists need to use every resource and every weapon we can muster, including thought experiments (theory), and the analysis of data from nonexperiments, accidental experiments, and designed experiments. We should be celebrating the small genuine victories of the economists who use their tools most effectively, and we should dial back our adoration of those who can carry the biggest and brightest and least-understood weapons. We would benefit from some serious humility, and from burning our “Mission Accomplished” banners. It’s never gonna happen.

Part of the problem is that we data analysts want it all automated. We want an answer at the push of a button on a keyboard … Faced with the choice between thinking long and hard versus pushing the button, the single button is winning by a very large margin.

Let’s not add a “randomization” button to our intellectual keyboards, to be pushed without hard reflection and thought.

Especially when it comes to questions of causality, randomization is nowadays considered some kind of “gold standard”. Everything has to be evidence-based, and the evidence has to come from randomized experiments.

But just as econometrics, randomization is basically a deductive method. Given the assumptions (such as manipulability, transitivity, Reichenbach probability principles, separability, additivity, linearity, etc.) these methods deliver deductive inferences. The problem, of course, is that we will never completely know when the assumptions are right. [And although randomization may contribute to controlling for confounding, it does not guarantee it, since genuine randomness presupposes infinite experimentation and we know all real experimentation is finite. And even if randomization may help to establish average causal effects, it says nothing of individual effects unless homogeneity is added to the list of assumptions.] Real target systems are seldom epistemically isomorphic to our axiomatic-deductive models/systems, and even if they were, we would still have to argue for the external validity of the conclusions reached from within these epistemically convenient models/systems. Causal evidence generated by randomization procedures may be valid in “closed” models, but what we usually are interested in is causal evidence in the real target system we happen to live in.
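A small illustration of the bracketed point (my own sketch, with a hypothetical potential-outcome setup): randomization recovers the average causal effect, yet individual effects are heterogeneous – here a third of the units are actually harmed by a treatment whose average effect is clearly positive.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 100_000

# Hypothetical potential-outcome setup: every unit has its own treatment effect.
individual_effect = rng.choice([-1.0, 0.5, 3.0], size=n)     # equally likely
baseline = rng.normal(size=n)

treated = rng.random(n) < 0.5                                 # randomized assignment
observed = baseline + treated * individual_effect

ate_hat = observed[treated].mean() - observed[~treated].mean()
print(f"estimated average effect: {ate_hat:.2f}   (true average: {individual_effect.mean():.2f})")
print(f"share of units actually harmed: {(individual_effect < 0).mean():.2f}")
```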

When does a conclusion established in population X hold for target population Y? Only under very restrictive conditions!

Science philosopher Nancy Cartwright has succinctly summarized the value of randomization. In The Lancet 23/4 2011 she states:

But recall the logic of randomized control trials … [T]hey are ideal for supporting ‘it-works-somewhere’ claims. But they are in no way ideal for other purposes; in particular they provide no better bases for extrapolating or generalising than knowledge that the treatment caused the outcome in any other individuals in any other circumstances … And where no capacity claims obtain, there is seldom warrant for assuming that a treatment that works somewhere will work anywhere else. (The exception is where there is warrant to believe that the study population is a representative sample of the target population – and cases like this are hard to come by.)

And in BioSocieties 2/2007:

We experiment on a population of individuals each of whom we take to be described (or ‘governed’) by the same fixed causal structure (albeit unknown) and fixed probability measure (albeit unknown). Our deductive conclusions depend on that very causal structure and probability. How do we know what individuals beyond those in our experiment this applies to? … The [randomized experiment], with its vaunted rigor, takes us only a very small part of the way we need to go for practical knowledge. This is what disposes me to warn about the vanity of rigor in [randomized experiments].

Ideally controlled experiments (still the benchmark even for natural and quasi experiments) tell us with certainty what causes what effects – but only given the right “closures”. Making appropriate extrapolations from (ideal, accidental, natural or quasi) experiments to different settings, populations or target systems is not easy. “It works there” is no evidence for “it will work here”. Causes deduced in an experimental setting still have to show that they come with an export-warrant to the target population/system. The causal background assumptions made have to be justified, and without licenses to export, the value of “rigorous” and “precise” methods is despairingly small.

Here I think Leamer’s “button” metaphor is appropriate. Many advocates of randomization want to have deductively automated answers to fundamental causal questions. But to apply “thin” methods we have to have “thick” background knowledge of what’s going on in the real world, and not in (ideally controlled) experiments. Conclusions can only be as certain as their premises – and that also goes for methods based on randomized experiments.

Kenneth Arrow on rational expectations and microfoundations

23 Jul, 2012 at 21:39 | Posted in Economics, Theory of Science & Methodology | 1 Comment

One of the greatest economists of all time – Kenneth Arrow – made an assessment of microfoundations and rational expectations in an article – in Journal of Business (1986) – entitled Rationality of Self and Others in an Economic System. It ought to be mandatory reading for all students of “modern” macroeconomics:

[The power of both “new classical” and “rational expectations models”] is obtained by adding strong supplementary assumptions to the general model of rationality. Most prevalent of all is the assumption that all individuals have the same utility function … But this postulate leads to curious and, to my mind, serious difficulties in the interpretation of evidence … [I]f all individuals are alike, why do they not make the same choice? Why do we observe a dispersion? … Analogously, in macroeconomic models … the assumption of homogeneous agents implies that there will never be any trading, though there will be changes in prices.

This dilemma is intrinsic. If agents are alike, there is really no room for trade. The very basis of economic analysis, from Smith on, is the existence of differences in agents …

The new theoretical paradigm of rational expectations holds that each individual forms expectations of the future on the basis of a correct model of the economy, in fact, the same model that the econometrician is using … Since the world is uncertain, the expectations take the form of probability distributions, and each agent’s expectations are conditional on the information available to him or her …

Each agent has to have a model of the entire economy to preserve rationality. The cost of knowledge, so emphasized by the defenders of the price system as against centralized planning, has disappeared; each agent is engaged in very extensive information gathering and data processing.

Rational expectations theory is a stochastic form of perfect foresight. Not only the feasibility but even the logical consistency of this hypothesis was attacked long ago … Rational expectations … require not only extensive first-order knowledge but also common knowledge, since predictions of the future depend on other individuals’ predictions of the future.

So you want to run yet another regression? Think twice!

23 Jul, 2012 at 14:18 | Posted in Statistics & Econometrics | 3 Comments

The cost of computing has dropped exponentially, but the cost of thinking is what it always was. That is why we see so many articles with so many regressions and so little thought.

Zvi Griliches

