My blog is skyrocketing!

31 July, 2014 at 21:59 | Posted in Varia | 2 Comments


Tired of the idea of an infallible mainstream neoclassical economics and its perpetuation of spoon-fed orthodoxy, yours truly launched this blog in March 2011. The number of visitors has increased steadily, and now, three and a half years later, with almost 125 000 views per month, I have to admit to still being — given the somewhat wonkish character of the blog, with posts mostly on economic theory, statistics, econometrics, theory of science and methodology — rather gobsmacked that so many are interested enough to take the time to read the often rather geeky stuff posted here.

In the 21st century the blogosphere has without any doubt become one of the greatest channels for dispersing new knowledge and information. As a blogger I can specialize in those particular topics an economist and critical realist professor of social science happens to have both deep knowledge of and interest in. That, of course, also means — in the modern long-tail world — being able to target a segment of readers with much narrower and more specialized interests than newspapers and magazines as a rule could aim for — and still attract quite a lot of readers.


Economic growth and the male organ — does size matter?

31 July, 2014 at 19:51 | Posted in Economics | Comments Off on Economic growth and the male organ — does size matter?

Economic growth has long interested economists. Not least, the question of which factors lie behind high growth rates has been in focus. The factors usually pointed to are mainly economic, social and political variables. In an interesting study from the University of Helsinki, Tatu Westling has expanded the set of potential causal variables to include biological and sexual ones as well. In the report Male Organ and Economic Growth: Does Size Matter (2011), he has — based on the cross-country data of Mankiw et al. (1992), Summers and Heston (1988), Polity IV Project data on political regime types and a new data set on average penis size in 76 non-oil-producing countries (www.everyoneweb.com/worldpenissize) — been able to show that the level and growth of GDP per capita between 1960 and 1985 vary with penis size. Replicating Westling’s study — I have used my favourite program Gretl — we obtain the following two charts:


The Solow-based model estimates show that GDP per capita is maximized at a penis length of about 13.5 cm, and that the male reproductive organ (OLS without control variables) is negatively correlated with — and able to explain about 20% of the variation in — GDP growth.
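For anyone who wants to replicate the exercise without Gretl, a minimal sketch of the two regressions in Python/statsmodels could look like the following. The file name and column names are hypothetical placeholders rather than Westling's actual data set, and the quadratic term in the level regression is simply the obvious way to obtain an interior maximum of the kind reported (around 13.5 cm).

import pandas as pd
import statsmodels.api as sm

# Hypothetical file and column names -- adjust to whatever your copy of the
# data looks like; this is not Westling's own code or data.
df = pd.read_csv("penis_gdp_1960_1985.csv")  # columns: gdp_level, gdp_growth, organ_cm

# (1) Level regression: GDP per capita on organ size and its square.
# With a negative coefficient on the squared term, the fitted maximum sits at
# -b1 / (2 * b2), which in the reported estimates comes out at roughly 13.5 cm.
X_level = sm.add_constant(pd.DataFrame({
    "organ": df["organ_cm"],
    "organ_sq": df["organ_cm"] ** 2,
}))
level_fit = sm.OLS(df["gdp_level"], X_level).fit()
print(level_fit.summary())
print("fitted maximum (cm):",
      -level_fit.params["organ"] / (2 * level_fit.params["organ_sq"]))

# (2) Growth regression without controls: the post reports a negative slope
# and an R-squared of about 0.2.
growth_fit = sm.OLS(df["gdp_growth"], sm.add_constant(df["organ_cm"])).fit()
print(growth_fit.params)
print("R-squared:", growth_fit.rsquared)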

Even with reservations for problems such as endogeneity and confounders, one cannot but agree with Westling’s final assessment that “the ‘male organ hypothesis’ is worth pursuing in future research” and that it “clearly seems that the ‘private sector’ deserves more credit for economic development than is typically acknowledged.” Or? …

Nancy Cartwright on RCTs

31 July, 2014 at 08:56 | Posted in Theory of Science & Methodology | Comments Off on Nancy Cartwright on RCTs

I’m fond of science philosophers like Nancy Cartwright. With razor-sharp intellects they immediately go for the essentials. They have no time for bullshit. And neither should we.

In Evidence: For Policy — downloadable here — Cartwright has assembled her papers on how better to use evidence from the sciences “to evaluate whether policies that have been tried have succeeded and to predict whether those we are thinking of trying will produce the outcomes we aim for.” Many of the collected papers centre on what can and cannot be inferred from the results of well-done randomised controlled trials (RCTs).

A must-read for everyone with an interest in the methodology of science.

Wren-Lewis on economic methodology

30 July, 2014 at 17:09 | Posted in Economics | 3 Comments

Simon Wren-Lewis has a post up today discussing why the New Classical Counterrevolution (NCCR) was successful in replacing older theories, despite the fact that the New Classical models weren’t able to explain what happened to output and inflation in the 1970s and 1980s:

The new theoretical ideas New Classical economists brought to the table were impressive, particularly to those just schooled in graduate micro. Rational expectations is the clearest example …

However, once the basics of New Keynesian theory had been established, it was quite possible to incorporate concepts like rational expectations or Ricardian Equivalence into a traditional structural econometric model (SEM) …

The real problem with any attempt at synthesis is that a SEM is always going to be vulnerable to the key criticism in Lucas and Sargent, 1979: without a completely consistent microfounded theoretical base, there was the near certainty of inconsistency brought about by inappropriate identification restrictions …

So why does this matter? … If mainstream academic macroeconomists were seduced by anything, it was a methodology – a way of doing the subject which appeared closer to what at least some of their microeconomic colleagues were doing at the time, and which was very different to the methodology of macroeconomics before the NCCR. The old methodology was eclectic and messy, juggling the competing claims of data and theory. The new methodology was rigorous!

Wren-Lewis seems to be überimpressed by the “rigour” brought to macroeconomics by the New Classical counterrevolution and its rational expectations, microfoundations and ‘Lucas Critique’.

I fail to see why.

Contrary to what Wren-Lewis seems to argue, I would say that the recent economic crisis, and the fact that New Classical economics has had next to nothing to contribute to our understanding of it, shows that New Classical economics is a degenerative research program in dire need of replacement.

The predominant strategy in mainstream macroeconomics today is to build models and make things happen in these “analogue-economy models.” But although macro-econometrics may have supplied economists with rigorous replicas of real economies, if the goal of theory is to be able to make accurate forecasts or explain what happens in real economies, this ability to construct toy models ad nauseam does not give much leverage.

“Rigorous” and “precise” New Classical models cannot be considered anything other than unsubstantiated conjectures as long as they aren’t supported by evidence from outside the theory or model. To my knowledge, no decisive empirical evidence of that kind has ever been presented.

And applying a “Lucas critique” to New Classical models, it is obvious that they too fail. Changing “policy rules” cannot just be presumed not to influence investment and consumption behavior, and a fortiori technology, thereby contradicting the invariance assumption. Technology and tastes cannot live up to the status of an economy’s deep and structurally stable Holy Grail. They too are part and parcel of an ever-changing and open economy. Lucas’s hope of being able to model the economy as “a FORTRAN program” and “gain some confidence that the component parts of the program are in some sense reliable prior to running it” therefore seems – from an ontological point of view – totally misdirected. The failure of the attempt to anchor the analysis in the allegedly stable deep parameters “tastes” and “technology” shows that if you neglect ontological considerations pertaining to the target system, reality ultimately gets its revenge when questions of bridging and exporting model exercises are finally laid on the table.

No matter how precise and rigorous the analysis is, and no matter how hard one tries to cast the argument in modern mathematical form, it does not push economic science forward one millimetre if it does not stand the acid test of relevance to the target. No matter how clear, precise, rigorous or certain the inferences delivered inside these models are, they do not per se say anything about real-world economies.


RBC and the Lucas-Rapping theory of unemployment

30 July, 2014 at 13:07 | Posted in Economics | 2 Comments

Lucas and Rapping (1969) claim that cyclical increases in unemployment occur when workers quit their jobs because wages or salaries fall below expectations …

According to this explanation, when wages are unusually low, people become unemployed in order to enjoy free time, substituting leisure for income at a time when they lose the least income …

According to the theory, quits into unemployment increase during recessions, whereas historically quits decrease sharply and roughly half of unemployed workers become jobless because they are laid off … During the recession I studied, people were even afraid to change jobs because new ones might prove unstable and lead to unemployment …

If wages and salaries hardly ever fall, the intertemporal substitution theory is widely applicable only if the unemployed prefer jobless leisure to continued employment at their old pay. However, the attitude and circumstances of the unemployed are not consistent with their having made this choice …

In real business cycle theory, unemployment is interpreted as leisure optimally selected by workers, as in the Lucas-Rapping model. It has proved difficult to construct business cycle models consistent with this assumption and with real wage fluctuations as small as they are in reality, relative to fluctuations in employment.

Truman F. Bewley

This is, of course, only what you would expect of New Classical Chicago economists.

But sadly enough this extraterrestrial view of unemployment is actually shared by so-called New Keynesians, whose microfounded dynamic stochastic general equilibrium models cannot even incorporate such a basic fact of reality as involuntary unemployment!

Of course, working with microfounded representative-agent models, this should come as no surprise. If one representative agent is employed, all representative agents are. The only kind of unemployment that can occur is voluntary, since it consists solely of the adjustments of hours of work that these optimizing agents make in order to maximize their utility.

In the basic DSGE models used by most ‘New Keynesians’, the labour market is always cleared – responding to a changing interest rate, expected lifetime incomes, or real wages, the representative agent maximizes the utility function by varying her labour supply, money holdings and consumption over time. Most importantly – if the real wage somehow deviates from its “equilibrium value,” the representative agent adjusts her labour supply, so that when the real wage is higher than its “equilibrium value,” labour supply is increased, and when the real wage is below its “equilibrium value,” labour supply is decreased.

In this model world, unemployment is always an optimal response to changes in labour market conditions. Hence, unemployment is totally voluntary. To be unemployed is something one optimally chooses to be.
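To see just how mechanical this ‘voluntary unemployment’ logic is, here is a minimal sketch in Python. It assumes a quasi-linear period utility of the kind used in textbook expositions (an assumption made purely for illustration, not any particular published model), under which the first-order condition pins hours worked down as a function of the real wage alone.

# Representative-agent labour supply in the simplest possible setting:
# quasi-linear period utility u(c, h) = c - kappa * h**(1+phi) / (1+phi),
# budget constraint c = w*h.  The first-order condition w = kappa * h**phi
# gives h(w) = (w / kappa)**(1/phi): whenever the real wage moves, hours
# worked move with it, so every change in (un)employment is a chosen optimum.
phi = 2.0     # inverse Frisch elasticity (illustrative value)
kappa = 9.0   # scaled so that hours are 1/3 of the time endowment at w = 1

def hours_supplied(w: float) -> float:
    return (w / kappa) ** (1.0 / phi)

for w in (0.8, 1.0, 1.2):          # real wage below, at and above a benchmark
    h = hours_supplied(w)
    print(f"real wage {w:.1f} -> hours {h:.3f}, 'leisure' {1.0 - h:.3f}")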

The final court of appeal for macroeconomic models is the real world.

If substantive questions about the real world are being posed, it is the formalistic-mathematical representations utilized to analyze them that have to match reality, not the other way around.

To Keynes this was self-evident. But obviously not so to New Classical and ‘New Keynesian’ economists.

Normative multiculturalism

29 July, 2014 at 23:03 | Posted in Politics & Society | 3 Comments

The other day I listened to a male journalist who sat on a panel and was very upset that immigrants were being singled out as oppressors of women just because some of them beat their women and forced them to wear the veil and stay indoors. Writing about such things in the newspapers was racist, and we shouldn’t imagine we were all that good at gender equality in Sweden either! There are still wage gaps here, so there! And besides, it’s a cultural issue!


On the panel sat a number of immigrant women who became so angry they nearly burst a blood vessel. There is a difference between Swedish wage injustices and pharaonic circumcision, threats and “honour killings”. “Are we supposed to keep quiet about what is happening just so as not to tarnish the reputation of our Men?” they said. “And if immigrants started slaughtering Swedish men for the sake of honour, would that still be a ‘cultural issue’?”

Katarina Mazetti, Mazettis blandning (2001)

I fully understand these women’s indignation.

What the question ultimately comes down to is whether we, as citizens of a modern democratic society, should tolerate the intolerant.

People in our country who come from other countries or belong to groupings of various kinds – whose kinsmen and fellow believers may hold power and rule with brutal intolerance – must of course be embraced by our tolerance. But it is equally self-evident that this tolerance applies only as long as that intolerance is not practised in our society.

Culture, identity, ethnicity, gender and religiosity must never be accepted as grounds for intolerance in political and civic respects. In a modern democratic society, people who belong to these various groups must be able to count on society also protecting them against the abuses of intolerance. All citizens must have the freedom and the right to question and to leave their own group. Towards those who do not accept that tolerance, we must be intolerant.

In Sweden we have long uncritically cherished an unspecified and undefined multiculturalism. If by multiculturalism we mean that there are several different cultures in our society, this poses no problems. Then we are all multiculturalists.

But if by multiculturalism we mean that cultural belonging and identity also carry with them specific moral, ethical and political rights and obligations, we are talking about something entirely different. Then we are talking about normative multiculturalism. And accepting normative multiculturalism also means tolerating unacceptable intolerance, since normative multiculturalism implies that the rights of specific cultural groups may be accorded higher standing than the universal human rights of the individual citizen – and thereby indirectly become a defence of those groups’ (possible) intolerance. In a normatively multiculturalist society, institutions and regulations can be used to restrict people’s freedom on the basis of unacceptable and intolerant cultural values.

Normative multiculturalism, just like xenophobia and racism, unacceptably reduces individuals to passive members of culture- or identity-bearing groups. But tolerance does not mean that we must take a value-relativist attitude towards identity and culture. Those who show by their actions that they do not respect other people’s rights cannot expect us to be tolerant of them. Those who want to use violence to force other people to submit to a particular group’s religion, ideology or “culture” are themselves responsible for the intolerance with which they have to be met.

If we are to safeguard the achievements of modern democratic society, society must be intolerant of intolerant normative multiculturalism. And then society cannot itself embrace a normative multiculturalism. In a modern democratic society the rule of law must apply – and apply to everyone!

Towards those in our society who want to force others to live according to their own religious, cultural or ideological beliefs and taboos, society must be intolerant. Towards those who want to force society to adapt its laws and rules to their own religion’s, culture’s or group’s interpretations, society must be intolerant. Towards those who are intolerant in deed, we shall not be tolerant.

The Weight

29 July, 2014 at 19:30 | Posted in Varia | Comments Off on The Weight

 

Austrian economics — a methodological critique

29 July, 2014 at 17:04 | Posted in Theory of Science & Methodology | 5 Comments


[h/t Jan Milch]

This is a fair presentation and critique of Austrian methodology. But beware! In theoretical and methodological questions it’s not always either-or. We have to be open-minded and pluralistic enough not to throw out the baby with the bath water — and fail to secure insights like this:

What is the problem we wish to solve when we try to construct a rational economic order? … If we possess all the relevant information, if we can start out from a given system of preferences, and if we command complete knowledge of available means, the problem which remains is purely one of logic …

This, however, is emphatically not the economic problem which society faces … The peculiar character of the problem of a rational economic order is determined precisely by the fact that the knowledge of the circumstances of which we must make use never exists in concentrated or integrated form but solely as the dispersed bits of incomplete and frequently contradictory knowledge which all the separate individuals possess. The economic problem of society is … a problem of the utilization of knowledge which is not given to anyone in its totality.

This character of the fundamental problem has, I am afraid, been obscured rather than illuminated by many of the recent refinements of economic theory … Many of the current disputes with regard to both economic theory and economic policy have their common origin in a misconception about the nature of the economic problem of society. This misconception in turn is due to an erroneous transfer to social phenomena of the habits of thought we have developed in dealing with the phenomena of nature …

To assume all the knowledge to be given to a single mind in the same manner in which we assume it to be given to us as the explaining economists is to assume the problem away and to disregard everything that is important and significant in the real world.

Compare this relevant and realist wisdom with the rational expectations hypothesis (REH) used by almost all mainstream macroeconomists today. REH presupposes – basically for reasons of consistency – that agents have complete knowledge of all of the relevant probability distribution functions. And when attempts are made to incorporate learning in these models – in order to take the heat off some of the criticism levelled against the hypothesis to date – it is always a very restricted kind of learning that is considered: a kind of learning in which truly unanticipated, surprising, new things never take place, only rather mechanical updatings of existing probability functions that increase the precision of already existing information sets.

Nothing really new happens in these ergodic models, where the statistical representation of learning and information is nothing more than a caricature of what takes place in the real-world target system. This follows from taking for granted that people’s decisions can be portrayed as based on an existing probability distribution, which by definition implies knowledge of every possible event that can be thought of as taking place (otherwise it is, in a strict mathematical-statistical sense, not really a probability distribution).
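What this restricted kind of ‘learning’ amounts to can be illustrated with a minimal sketch, assuming standard normal-normal conjugate updating (an illustration, not any specific macro model): each new signal raises the precision of the posterior, but nothing outside the pre-specified model, no genuinely novel or surprising event, can ever enter the picture.

import numpy as np

rng = np.random.default_rng(0)

# "Learning" in the rational-expectations sense: a normal prior over a fixed
# parameter, updated with normally distributed signals of known variance.
# Each round shrinks the posterior variance, but the assumed model itself is
# never questioned and no unanticipated event can occur.
mu, tau2 = 0.0, 1.0          # prior mean and variance over the parameter
sigma2 = 0.5                 # known variance of each signal
theta = 0.7                  # the 'true' parameter generating the signals

for n in range(1, 6):
    x = theta + rng.normal(0.0, np.sqrt(sigma2))
    post_prec = 1.0 / tau2 + 1.0 / sigma2          # precisions add
    new_mu = (mu / tau2 + x / sigma2) / post_prec  # precision-weighted mean
    mu, tau2 = new_mu, 1.0 / post_prec
    print(f"after signal {n}: mean {mu:+.3f}, variance {tau2:.3f}")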

The rational expectations hypothesis presumes consistent behaviour, where expectations do not display any persistent errors. In the world of rational expectations we are always, on average, hitting the bull’s eye. In the more realistic, open systems view, there is always the possibility (danger) of making mistakes that may turn out to be systematic. It is because of this, presumably, that we put so much emphasis on learning in our modern knowledge societies.

As Hayek wrote:

When it comes to the point where [equilibrium analysis] misleads some of our leading thinkers into believing that the situation which it describes has direct relevance to the solution of practical problems, it is high time that we remember that it does not deal with the social process at all and that it is no more than a useful preliminary to the study of the main problem.

The vain glory of the ‘New Keynesian’ club

28 July, 2014 at 10:10 | Posted in Economics | Comments Off on The vain glory of the ‘New Keynesian’ club


Paul Krugman’s economic analysis is always stimulating and insightful, but there is one issue on which I think he persistently falls short. That issue is his account of New Keynesianism’s theoretical originality and intellectual impact … The model of nominal wage rigidity and the Phillips curve that I described comes from my 1990 dissertation, was published in March 1994, and has been followed by substantial further published research. That research also introduces ideas which are not part of the New Keynesian model and are needed to explain the Phillips curve in a higher inflation environment.

Similar precedence issues hold for scholarship on debt-driven business cycles, financial instability, the problem of debt-deflation in recessions and depressions, and the endogenous credit-driven nature of the money supply. These are all topics my colleagues and I, working in the Post- and old Keynesian traditions, have been writing about for years – No, decades!

Since 2008, some New Keynesians have discovered these same topics and have developed very similar analyses. That represents progress which is good news for economics. However, almost nowhere will you find citation of this prior work, except for token citation of a few absolutely seminal contributors (like Tobin and Minsky) …

By citing the seminal critical thinkers, mainstream economists lay claim to the intellectual lineage. And by overlooking more recent work, they capture the ideas of their critics.

This practice has enormous consequences. At the personal level, there is the matter of vain glory. At the sociological level, it suffocates debate and pluralism in economics. It is as if the critics have produced nothing so there is no need for debate, and nor are the critics deserving of a place in the academy …

For almost thirty years, New Keynesians have dismissed other Keynesians and not bothered to stay acquainted with their research. But now that the economic crisis has forced awareness, the right thing is to acknowledge and incorporate that research. The failure to do so is another element in the discontent of critics, which Krugman dismisses as just “Frustrations of the Heterodox.”

Thomas Palley

Added July 29: Krugman answers here.

Nights In White Satin

25 July, 2014 at 18:50 | Posted in Varia | Comments Off on Nights In White Satin


Old love never rusts …

What Americans can learn from Sweden’s school choice disaster

25 July, 2014 at 11:02 | Posted in Education & School | 4 Comments

Advocates for choice-based solutions should take a look at what’s happened to schools in Sweden, where parents and educators would be thrilled to trade their country’s steep drop in PISA scores over the past 10 years for America’s middling but consistent results. What’s caused the recent crisis in Swedish education? Researchers and policy analysts are increasingly pointing the finger at many of the choice-oriented reforms that are being championed as the way forward for American schools. While this doesn’t necessarily mean that adding more accountability and discipline to American schools would be a bad thing, it does hint at the many headaches that can come from trying to do so by aggressively introducing marketlike competition to education.

There are differences between the libertarian ideal espoused by Friedman and the actual voucher program the Swedes put in place in the early ’90s … But Swedish school reforms did incorporate the essential features of the voucher system advocated by Friedman. The hope was that schools would have clear financial incentives to provide a better education and could be more responsive to customer (i.e., parental) needs and wants when freed from the burden imposed by a centralized bureaucracy …

But in the wake of the country’s nose dive in the PISA rankings, there’s widespread recognition that something’s wrong with Swedish schooling … Competition was meant to discipline government schools, but it may have instead led to a race to the bottom …

It’s the darker side of competition that Milton Friedman and his free-market disciples tend to downplay: If parents value high test scores, you can compete for voucher dollars by hiring better teachers and providing a better education—or by going easy in grading national tests. Competition was also meant to discipline government schools by forcing them to up their game to maintain their enrollments, but it may have instead led to a race to the bottom as they too started grading generously to keep their students …

Maybe the overall message is … “there are no panaceas” in public education. We tend to look for the silver bullet—whether it’s the glories of the market or the techno-utopian aspirations of education technology—when in fact improving educational outcomes is a hard, messy, complicated process. It’s a lesson that Swedish parents and students have learned all too well: Simply opening the floodgates to more education entrepreneurs doesn’t disrupt education. It’s just plain disruptive.

Ray Fisman

[h/t Jan Milch]

For my own take on this issue — only in Swedish, sorry — see here, here, here and here.

James Heckman — the ultimate takedown of Teflon-coated defenders of rational expectations

24 July, 2014 at 21:16 | Posted in Economics | 4 Comments


James Heckman, winner of the “Nobel Prize” in economics (2000), did an interview with John Cassidy in 2010. It’s an interesting read (Cassidy’s words in italics):

What about the rational-expectations hypothesis, the other big theory associated with modern Chicago? How does that stack up now?

I could tell you a story about my friend and colleague Milton Friedman. In the nineteen-seventies, we were sitting in the Ph.D. oral examination of a Chicago economist who has gone on to make his mark in the world. His thesis was on rational expectations. After he’d left, Friedman turned to me and said, “Look, I think it is a good idea, but these guys have taken it way too far.”

It became a kind of tautology that had enormously powerful policy implications, in theory. But the fact is, it didn’t have any empirical content. When Tom Sargent, Lars Hansen, and others tried to test it using cross-equation restrictions, and so on, the data rejected the theories. There was a certain section of people that really got carried away. It became quite stifling.

What about Robert Lucas? He came up with a lot of these theories. Does he bear responsibility?

Well, Lucas is a very subtle person, and he is mainly concerned with theory. He doesn’t make a lot of empirical statements. I don’t think Bob got carried away, but some of his disciples did. It often happens. The further down the food chain you go, the more the zealots take over.

What about you? When rational expectations was sweeping economics, what was your reaction to it? I know you are primarily a micro guy, but what did you think?

What struck me was that we knew Keynesian theory was still alive in the banks and on Wall Street. Economists in those areas relied on Keynesian models to make short-run forecasts. It seemed strange to me that they would continue to do this if it had been theoretically proven that these models didn’t work.

What about the efficient-markets hypothesis? Did Chicago economists go too far in promoting that theory, too?

Some did. But there is a lot of diversity here. You can go office to office and get a different view.

[Heckman brought up the memoir of the late Fischer Black, one of the founders of the Black-Scholes option-pricing model, in which he says that financial markets tend to wander around, and don’t stick closely to economics fundamentals.]

[Black] was very close to the markets, and he had a feel for them, and he was very skeptical. And he was a Chicago economist. But there was an element of dogma in support of the efficient-market hypothesis. People like Raghu [Rajan] and Ned Gramlich [a former governor of the Federal Reserve, who died in 2007] were warning something was wrong, and they were ignored. There was sort of a culture of efficient markets—on Wall Street, in Washington, and in parts of academia, including Chicago.

What was the reaction here when the crisis struck?

Everybody was blindsided by the magnitude of what happened. But it wasn’t just here. The whole profession was blindsided. I don’t think Joe Stiglitz was forecasting a collapse in the mortgage market and large-scale banking collapses.

So, today, what survives of the Chicago School? What is left?

I think the tradition of incorporating theory into your economic thinking and confronting it with data—that is still very much alive. It might be in the study of wage inequality, or labor supply responses to taxes, or whatever. And the idea that people respond rationally to incentives is also still central. Nothing has invalidated that—on the contrary.

So, I think the underlying ideas of the Chicago School are still very powerful. The basis of the rocket is still intact. It is what I see as the booster stage—the rational-expectation hypothesis and the vulgar versions of the efficient-markets hypothesis that have run into trouble. They have taken a beating—no doubt about that. I think that what happened is that people got too far away from the data, and confronting ideas with data. That part of the Chicago tradition was neglected, and it was a strong part of the tradition.

When Bob Lucas was writing that the Great Depression was people taking extended vacations—refusing to take available jobs at low wages—there was another Chicago economist, Albert Rees, who was writing in the Chicago Journal saying, No, wait a minute. There is a lot of evidence that this is not true.

Milton Friedman—he was a macro theorist, but he was less driven by theory and by the desire to construct a single overarching theory than by attempting to answer empirical questions. Again, if you read his empirical books they are full of empirical data. That side of his legacy was neglected, I think.

When Friedman died, a couple of years ago, we had a symposium for the alumni devoted to the Friedman legacy. I was talking about the permanent income hypothesis; Lucas was talking about rational expectations. We have some bright alums. One woman got up and said, “Look at the evidence on 401k plans and how people misuse them, or don’t use them. Are you really saying that people look ahead and plan ahead rationally?” And Lucas said, “Yes, that’s what the theory of rational expectations says, and that’s part of Friedman’s legacy.” I said, “No, it isn’t. He was much more empirically minded than that.” People took one part of his legacy and forgot the rest. They moved too far away from the data.

 

Yes indeed, they certainly “moved too far away from the data.”

In one of the better-known and most highly respected evaluation reviews, Michael Lovell (1986) concluded:

it seems to me that the weight of empirical evidence is sufficiently strong to compel us to suspend belief in the hypothesis of rational expectations, pending the accumulation of additional empirical evidence.

And this is how Nikolay Gertchev summarizes studies on the empirical correctness of the hypothesis:

More recently, it even has been argued that the very conclusions of dynamic models assuming rational expectations are contrary to reality: “the dynamic implications of many of the specifications that assume rational expectations and optimizing behavior are often seriously at odds with the data” (Estrella and Fuhrer 2002, p. 1013). It is hence clear that if taken as an empirical behavioral assumption, the RE hypothesis is plainly false; if considered only as a theoretical tool, it is unfounded and self-contradictory.

For even more on the issue, permit me to self-indulgently recommend reading my article Rational expectations — a fallacious foundation for macroeconomics in a non-ergodic world in real-world economics review no. 62.

Sverker Sörlin’s evisceration of Jan Björklund

24 July, 2014 at 15:16 | Posted in Education & School | Comments Off on Sverker Sörlin’s evisceration of Jan Björklund

In recent years a great deal of political energy has gone into trying to limit the free schools’ room for manoeuvre and, it must probably be said, their harmful effects … Could it even be that the slump in school results can be tied, at least in part, to this revolution in the governance of the schools?

In a government where the Moderates are the largest party and their values and ideals dominate, there must be no doubt on this point. Nor is there. Jan Björklund is an uncompromising defender of the free-school reform …

Jan Björklund was lashed to the mast so that he would not be able to heed the song of the sirens telling him that another world is possible. One where we start over and try to cooperate. Where the school is our common task, as it once was in a brighter society where the Liberal People’s Party (FP) helped build the world’s best school …

It ought not to be impossible. But with Jan Björklund it was in fact precisely that — impossible.

Sverker Sörlin

Sörlin’s ten-page analysis in Magasinet Arena of Jan Björklund’s time as Swedish minister for schools is an absolute “must read”!

Read my lips — statistical significance is NOT a substitute for doing real science!

24 July, 2014 at 14:12 | Posted in Theory of Science & Methodology | 2 Comments

Noah Smith has a post up today telling us that his Bayesian Superman wasn’t intended to be a knock on Bayesianism and that he thinks Frequentism is a bit underrated these days:

Frequentist hypothesis testing has come under sustained and vigorous attack in recent years … But there are a couple of good things about Frequentist hypothesis testing that I haven’t seen many people discuss. Both of these have to do not with the formal method itself, but with social conventions associated with the practice …

Why do I like these social conventions? Two reasons. First, I think they cut down a lot on scientific noise. “Statistical significance” is sort of a first-pass filter that tells you which results are interesting and which ones aren’t. Without that automated filter, the entire job of distinguishing interesting results from uninteresting ones falls to the reviewers of a paper, who have to read through the paper much more carefully than if they can just scan for those little asterisks of “significance”.

Hmm …

A non-trivial part of teaching statistics consists of teaching students to perform significance testing. A problem I have noticed repeatedly over the years, however, is that no matter how carefully you try to explicate what the probabilities generated by these statistical tests – p-values – really are, most students still misinterpret them. And a lot of researchers obviously fall prey to the same mistakes:

Are women three times more likely to wear red or pink when they are most fertile? No, probably not. But here’s how hardworking researchers, prestigious scientific journals, and gullible journalists have been fooled into believing so.

The paper I’ll be talking about appeared online this month in Psychological Science, the flagship journal of the Association for Psychological Science, which represents the serious, research-focused (as opposed to therapeutic) end of the psychology profession.

“Women Are More Likely to Wear Red or Pink at Peak Fertility,” by Alec Beall and Jessica Tracy, is based on two samples: a self-selected sample of 100 women from the Internet, and 24 undergraduates at the University of British Columbia. Here’s the claim: “Building on evidence that men are sexually attracted to women wearing or surrounded by red, we tested whether women show a behavioral tendency toward wearing reddish clothing when at peak fertility. … Women at high conception risk were more than three times more likely to wear a red or pink shirt than were women at low conception risk. … Our results thus suggest that red and pink adornment in women is reliably associated with fertility and that female ovulation, long assumed to be hidden, is associated with a salient visual cue.”

Pretty exciting, huh? It’s (literally) sexy as well as being statistically significant. And the difference is by a factor of three—that seems like a big deal.

Really, though, this paper provides essentially no evidence about the researchers’ hypotheses …

The way these studies fool people is that they are reduced to sound bites: Fertile women are three times more likely to wear red! But when you look more closely, you see that there were many, many possible comparisons in the study that could have been reported, with each of these having a plausible-sounding scientific explanation had it appeared as statistically significant in the data.

The standard in research practice is to report a result as “statistically significant” if its p-value is less than 0.05; that is, if there is less than a 1-in-20 chance that the observed pattern in the data would have occurred if there were really nothing going on in the population. But of course if you are running 20 or more comparisons (perhaps implicitly, via choices involved in including or excluding data, setting thresholds, and so on), it is not a surprise at all if some of them happen to reach this threshold.

The headline result, that women were three times as likely to be wearing red or pink during peak fertility, occurred in two different samples, which looks impressive. But it’s not really impressive at all! Rather, it’s exactly the sort of thing you should expect to see if you have a small data set and virtually unlimited freedom to play around with the data, and with the additional selection effect that you submit your results to the journal only if you see some catchy pattern. …

Statistics textbooks do warn against multiple comparisons, but there is a tendency for researchers to consider any given comparison alone without considering it as one of an ensemble of potentially relevant responses to a research question. And then it is natural for sympathetic journal editors to publish a striking result without getting hung up on what might be viewed as nitpicking technicalities. Each person in this research chain is making a decision that seems scientifically reasonable, but the result is a sort of machine for producing and publicizing random patterns.

There’s a larger statistical point to be made here, which is that as long as studies are conducted as fishing expeditions, with a willingness to look hard for patterns and report any comparisons that happen to be statistically significant, we will see lots of dramatic claims based on data patterns that don’t represent anything real in the general population. Again, this fishing can be done implicitly, without the researchers even realizing that they are making a series of choices enabling them to over-interpret patterns in their data.

Andrew Gelman
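Gelman’s multiple-comparisons point is easy to verify with a short simulation. The sketch below assumes 20 independent comparisons per study and a population in which nothing is going on at all; the numbers are purely illustrative and are not a reanalysis of the Beall and Tracy data.

import numpy as np

rng = np.random.default_rng(42)

# Simulate studies in which nothing is going on: 20 independent comparisons,
# each with a 5% chance of crossing the p < 0.05 threshold by luck alone.
n_studies, n_comparisons, alpha = 100_000, 20, 0.05
p_values = rng.uniform(size=(n_studies, n_comparisons))   # null p-values are uniform
at_least_one = (p_values < alpha).any(axis=1).mean()

print(f"share of null studies with >= 1 'significant' result: {at_least_one:.3f}")
print(f"theoretical value 1 - 0.95**20 = {1 - 0.95**20:.3f}")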

Indeed. If anything, this underlines how important it is not to equate science with statistical calculation. All science entails human judgement, and using statistical models doesn’t relieve us of that necessity. When you work with misspecified models, the scientific value of significance testing is actually zero – even though you are making valid statistical inferences! Statistical models and concomitant significance tests are no substitutes for doing real science. Or as a noted German philosopher once famously wrote:

There is no royal road to science, and only those who do not dread the fatiguing climb of its steep paths have a chance of gaining its luminous summits.

Statistical significance doesn’t say that something is important or true. Since there is already far better and more relevant testing that can be done (see e.g. here and here), it is high time to consider what the proper function should be of what has now really become a statistical fetish. Given that it is in any case very unlikely that any population parameter is exactly zero, and that, contrary to assumption, most samples in social science and economics are not random and do not have the right distributional shape – why continue to press students and researchers to do null hypothesis significance testing, a kind of testing that relies on a weird backward logic that students and researchers usually don’t understand?

Suppose that we as educational reformers have a hypothesis that implementing a voucher system would raise mean test results by 100 points (the null hypothesis). Instead, when sampling, it turns out the system only raises them by 75 points, with a standard error (telling us how much the mean varies from one sample to another) of 20.

Does this imply that the data do not disconfirm the hypothesis? Given the usual normality assumptions on sampling distributions, the one-tailed p-value is approximately 0.11. Thus, approximately 11% of the time we would expect a score this low or lower if we were sampling from this voucher-system population. That means – using the ordinary 5% significance level – we would not reject the null hypothesis, although the test has shown that it is “likely” that the hypothesis is false.
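For anyone who wants to check the arithmetic, the one-tailed p-value under the usual normal approximation can be computed in a couple of lines (scipy is used here merely for the normal cdf; any statistics package will do):

from scipy.stats import norm

# Voucher example: hypothesized effect 100 points, observed 75, standard error 20.
z = (75 - 100) / 20                      # -1.25
p_one_tailed = norm.cdf(z)               # P(observing 75 or lower | true effect is 100)
print(f"z = {z:.2f}, one-tailed p = {p_one_tailed:.3f}")   # about 0.106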

In its standard form, a significance test is not the kind of “severe test” that we are looking for in our search for ways to confirm or disconfirm empirical scientific hypotheses. This is problematic for many reasons, one being that there is a strong tendency to accept the null hypothesis when it cannot be rejected at the standard 5% significance level. In their standard form, significance tests are biased against new hypotheses by making it hard to disconfirm the null hypothesis.

And as shown over and over again when it is applied, people have a tendency to read “not disconfirmed” as “probably confirmed.” But looking at our example, standard scientific methodology tells us that since there is only an 11% probability that pure sampling error could account for the observed difference between the data and the null hypothesis, it would be more “reasonable” to conclude that we have a case of disconfirmation. Especially if we perform many independent tests of our hypothesis and they all give about the same result as our reported one, I guess most researchers would count the hypothesis as even more disconfirmed.

And, most importantly, of course we should never forget that the underlying parameters we use when performing significance tests are model constructions. Our p-value of 0.11 means next to nothing if the model is wrong. As David Freedman writes in Statistical Models and Causal Inference:

I believe model validation to be a central issue. Of course, many of my colleagues will be found to disagree. For them, fitting models to data, computing standard errors, and performing significance tests is “informative,” even though the basic statistical assumptions (linearity, independence of errors, etc.) cannot be validated. This position seems indefensible, nor are the consequences trivial. Perhaps it is time to reconsider.

Free to choose — and lose

24 July, 2014 at 12:45 | Posted in Varia | Comments Off on Free to choose — and lose

[Image: “$100 will buy this car” (Great Depression stock-crash photograph)]
[h/t barnilsson]

Bayesian inference gone awry

24 July, 2014 at 10:57 | Posted in Theory of Science & Methodology | 2 Comments

There is a nice YouTube video with Tony O’Hagan interviewing Dennis Lindley. Of course, Dennis is a legend and his impact on the field of statistics is huge.


At one point, Tony points out that some people liken Bayesian inference to a religion. Dennis claims this is false. Bayesian inference, he correctly points out, starts with some basic axioms and then the rest follows by deduction. This is logic, not religion.

I agree that the mathematics of Bayesian inference is based on sound logic. But, with all due respect, I think Dennis misunderstood the question. When people say that “Bayesian inference is like a religion,” they are not referring to the logic of Bayesian inference. They are referring to how adherents of Bayesian inference behave.

(As an aside, detractors of Bayesian inference do not deny the correctness of the logic. They just don’t think the axioms are relevant for data analysis. For example, no one doubts the axioms of Peano arithmetic. But that doesn’t imply that arithmetic is the foundation of statistical inference. But I digress.)

The vast majority of Bayesians are pragmatic, reasonable people. But there is a sub-group of die-hard Bayesians who do treat Bayesian inference like a religion. By this I mean:

They are very cliquish.
They have a strong emotional attachment to Bayesian inference.
They are overly sensitive to criticism.
They are unwilling to entertain the idea that Bayesian inference might have flaws.
When someone criticizes Bayes, they think that critic just “doesn’t get it.”
They mock people with differing opinions …

No evidence you can provide would ever make the die-hards doubt their ideas. To them, Sir David Cox, Brad Efron and other giants in our field who have doubts about Bayesian inference, are not taken seriously because they “just don’t get it.”

So is Bayesian inference a religion? For most Bayesians: no. But for the thin-skinned, inflexible die-hards who have attached themselves so strongly to their approach to inference that they make fun of, or get mad at, critics: yes, it is a religion.

Larry Wasserman

Bayesianism — preposterous mumbo jumbo

23 July, 2014 at 09:46 | Posted in Theory of Science & Methodology | 2 Comments

Neoclassical economics nowadays usually assumes that agents who have to make choices under conditions of uncertainty behave according to Bayesian rules (preferably the ones axiomatized by Ramsey (1931), de Finetti (1937) or Savage (1954)) – that is, they maximize expected utility with respect to some subjective probability measure that is continually updated according to Bayes’ theorem. If not, they are supposed to be irrational, and ultimately – via some “Dutch book” or “money pump” argument – susceptible to being ruined by some clever “bookie”.

Bayesianism reduces questions of rationality to questions of internal consistency (coherence) of beliefs, but – even granted this questionable reductionism – do rational agents really have to be Bayesian? As I have been arguing elsewhere (e.g. here and here) there is no strong warrant for believing so.

In many of the situations that are relevant to economics one could argue that there is simply not enough adequate and relevant information to ground beliefs of a probabilistic kind, and that in those situations it is not really possible, in any relevant way, to represent an individual’s beliefs in a single probability measure.

The view that Bayesian decision theory is only genuinely valid in a small world was asserted very firmly by Leonard Savage when laying down the principles of the theory in his path-breaking Foundations of Statistics. He makes the distinction between small and large worlds in a folksy way by quoting the proverbs ”Look before you leap” and ”Cross that bridge when you come to it”. You are in a small world if it is feasible always to look before you leap. You are in a large world if there are some bridges that you cannot cross before you come to them.

As Savage comments, when proverbs conflict, it is proverbially true that there is some truth in both—that they apply in different contexts. He then argues that some decision situations are best modeled in terms of a small world, but others are not. He explicitly rejects the idea that all worlds can be treated as small as both ”ridiculous” and ”preposterous” … Frank Knight draws a similar distinction between making decision under risk or uncertainty …

Bayesianism is understood [here] to be the philosophical principle that Bayesian methods are always appropriate in all decision problems, regardless of whether the relevant set of states in the relevant world is large or small. For example, the world in which financial economics is set is obviously large in Savage’s sense, but the suggestion that there might be something questionable about the standard use of Bayesian updating in financial models is commonly greeted with incredulity or laughter.

Someone who acts as if Bayesianism were correct will be said to be a Bayesianite. It is important to distinguish a Bayesian like myself—someone convinced by Savage’s arguments that Bayesian decision theory makes sense in small worlds—from a Bayesianite. In particular, a Bayesian need not join the more extreme Bayesianites in proceeding as though:

• All worlds are small.
• Rationality endows agents with prior probabilities.
• Rational learning consists simply in using Bayes’ rule to convert a set of prior probabilities into posterior probabilities after registering some new data.

Bayesianites are often understandably reluctant to make an explicit commitment to these principles when they are stated so baldly, because it then becomes evident that they are implicitly claiming that David Hume was wrong to argue that the principle of scientific induction cannot be justified by rational argument …

Bayesianites believe that the subjective probabilities of Bayesian decision theory can be reinterpreted as logical probabilities without any hassle. Its adherents therefore hold that Bayes’ rule is the solution to the problem of scientific induction. No support for such a view is to be found in Savage’s theory—nor in the earlier theories of Ramsey, de Finetti, or von Neumann and Morgenstern. Savage’s theory is entirely and exclusively a consistency theory. It says nothing about how decision-makers come to have the beliefs ascribed to them; it asserts only that, if the decisions taken are consistent (in a sense made precise by a list of axioms), then they act as though maximizing expected utility relative to a subjective probability distribution …

A reasonable decision-maker will presumably wish to avoid inconsistencies. A Bayesianite therefore assumes that it is enough to assign prior beliefs to a decision-maker, and then forget the problem of where beliefs come from. Consistency then forces any new data that may appear to be incorporated into the system via Bayesian updating. That is, a posterior distribution is obtained from the prior distribution using Bayes’ rule.

The naiveté of this approach doesn’t consist in using Bayes’ rule, whose validity as a piece of algebra isn’t in question. It lies in supposing that the problem of where the priors came from can be quietly shelved.

Savage did argue that his descriptive theory of rational decision-making could be of practical assistance in helping decision-makers form their beliefs, but he didn’t argue that the decision-maker’s problem was simply that of selecting a prior from a limited stock of standard distributions with little or nothing in the way of soul-searching. His position was rather that one comes to a decision problem with a whole set of subjective beliefs derived from one’s previous experience that may or may not be consistent …

But why should we wish to adjust our gut-feelings using Savage’s methodology? In particular, why should a rational decision-maker wish to be consistent? After all, scientists aren’t consistent, on the grounds that it isn’t clever to be consistently wrong. When surprised by data that shows current theories to be in error, they seek new theories that are inconsistent with the old theories. Consistency, from this point of view, is only a virtue if the possibility of being surprised can somehow be eliminated. This is the reason for distinguishing between large and small worlds. Only in the latter is consistency an unqualified virtue.

Ken Binmore

Say you have come to learn (based on your own experience and tons of data) that the probability of you becoming unemployed in the US is 10%. Having moved to another country (where you have no experience of your own and no data) you have no information on unemployment and a fortiori nothing to help you construct any probability estimate. A Bayesian would, however, argue that you would have to assign probabilities to the mutually exclusive alternative outcomes and that these have to add up to 1, if you are rational. That is, in this case – and based on symmetry – a rational individual would have to assign a probability of 10% to becoming unemployed and 90% to becoming employed.

That feels intuitively wrong though, and I guess most people would agree. Bayesianism cannot distinguish between symmetry-based probabilities from information and symmetry-based probabilities from an absence of information. In these kinds of situations most of us would rather say that it is simply irrational to be a Bayesian and better instead to admit that we “simply do not know” or that we feel ambivalent and undecided. Arbitrary and ungrounded probability claims are more irrational than being undecided in the face of genuine uncertainty, so if there is not sufficient information to ground a probability distribution it is better to acknowledge that simpliciter, rather than pretending to possess a certitude that we simply do not possess.

I think this critique of Bayesianism is in accordance with the views of Keynes in A Treatise on Probability (1921) and the General Theory (1937). According to Keynes we live in a world permeated by unmeasurable uncertainty – not quantifiable stochastic risk – which often forces us to make decisions based on anything but rational expectations. Sometimes we “simply do not know.” Keynes would not have accepted the view of Bayesian economists, according to whom expectations “tend to be distributed, for the same information set, about the prediction of the theory.” Keynes, rather, thinks that we base our expectations on the confidence or “weight” we put on different events and alternatives. To Keynes expectations are a question of weighing probabilities by “degrees of belief”, beliefs that have precious little to do with the kind of stochastic probabilistic calculations made by the rational agents modeled by Bayesian economists.

The bias toward the superficial and the response to extraneous influences on research are both examples of real harm done in contemporary social science by a roughly Bayesian paradigm of statistical inference as the epitome of empirical argument. For instance the dominant attitude toward the sources of the black-white differential in United States unemployment rates (routinely the rates are in a two to one ratio) is “phenomenological.” The employment differences are traced to correlates in education, locale, occupational structure, and family background. The attitude toward further, underlying causes of those correlations is agnostic … Yet on reflection, common sense dictates that racist attitudes and institutional racism must play an important causal role. People do have beliefs that blacks are inferior in intelligence and morality, and they are surely influenced by these beliefs in hiring decisions … Thus, an overemphasis on Bayesian success in statistical inference discourages the elaboration of a type of account of racial disadvantages that almost certainly provides a large part of their explanation.

Chicago Follies (XI)

22 July, 2014 at 23:48 | Posted in Economics | 2 Comments

In their latest book, Think Like a Freak, co-authors Steven Levitt and Stephen Dubner tell a story about meeting David Cameron in London before he was Prime Minister. They told him that the U.K.’s National Health Service — free, unlimited, lifetime health care — was laudable but didn’t make practical sense.

“We tried to make our point with a thought experiment,” they write. “We suggested to Mr. Cameron that he consider a similar policy in a different arena. What if, for instance…everyone were allowed to go down to the car dealership whenever they wanted and pick out any new model, free of charge, and drive it home?”

Rather than seeing the humor and realizing that health care is just like any other part of the economy, Cameron abruptly ended the meeting, demonstrating one of the risks of ‘thinking like a freak,’ Dubner says in the accompanying video.

“Cameron has been open to [some] inventive thinking but if you start to look at things in a different way you’ll get some strange looks,” he says. “Tread with caution.”

So what do Dubner and Levitt make of the Affordable Care Act, aka Obamacare, which has been described as a radical rethinking of America’s health care system?

“I do not think it’s a good approach at all,” says Levitt, a professor of economics at the University of Chicago. “Fundamentally with health care, until people have to pay for what they’re buying it’s not going to work. Purchasing health care is almost exactly like purchasing any other good in the economy. If we’re going to pretend there’s a market for it, let’s just make a real market for it.”

Aaron Task

Portraying health care as “just like any other part of the economy” is of course nothing but total horseshit. So, instead of “thinking like a freak,” why not, e.g., read what Kenneth Arrow wrote on the issue of medical care as far back as 1963?

Under ideal insurance the patient would actually have no concern with the informational inequality between himself and the physician, since he would only be paying by results anyway, and his utility position would in fact be thoroughly guaranteed. In its absence he wants to have some guarantee that at least the physician is using his knowledge to the best advantage. This leads to the setting up of a relationship of trust and confidence, one which the physician has a social obligation to live up to … The social obligation for best practice is part of the commodity the physician sells, even though it is a part that is not subject to thorough inspection by the buyer.

One consequence of such trust relations is that the physician cannot act, or at least appear to act, as if he is maximizing his income at every moment of time. As a signal to the buyer of his intentions to act as thoroughly in the buyer’s behalf as possible, the physician avoids the obvious stigmata of profit-maximizing … The very word, ‘profit’, is a signal that denies the trust relation.

Kenneth Arrow, “Uncertainty and the Welfare Economics of Medical Care,” American Economic Review 53 (5), 1963.

Bayesianism — a dangerous religion that harms science

22 July, 2014 at 19:57 | Posted in Theory of Science & Methodology | 9 Comments

One of my favourite bloggers — Noah Smith — has a nice post up today on Bayesianism:

Consider Proposition H: “God is watching out for me, and has a special purpose for me and me alone. Therefore, God will not let me die. No matter how dangerous a threat seems, it cannot possibly kill me, because God is looking out for me – and only me – at all times.”
Suppose that you believe that there is a nonzero probability that H is true. And suppose you are a Bayesian – you update your beliefs according to Bayes’ Rule. As you survive longer and longer – as more and more threats fail to kill you – your belief about the probability that H is true must increase and increase. It’s just mechanical application of Bayes’ Rule:

P(H|E) = (P(E|H)P(H))/P(E)

Here, E is “not being killed,” P(E|H)=1, and P(H) is assumed not to be zero. P(E) is less than 1, since under a number of alternative hypotheses you might get killed (if you have a philosophical problem with this due to the fact that anyone who observes any evidence must not be dead, just slightly tweak H so that it’s possible to receive a “mortal wound”).

So P(H|E) is greater than P(H) – every moment that you fail to die increases your subjective probability that you are an invincible superman, the chosen of God. This is totally and completely rational, at least by the Bayesian definition of rationality.
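
The updating Smith describes is easy to reproduce in a few lines of code. A minimal sketch — the prior and the survival probability under ‘not H’ are made-up illustrative numbers — just applies Bayes’ rule period after period:

# Mechanical Bayesian updating of Noah Smith's hypothesis H ('God will not let me die').
# The prior and the per-period survival probability if H is false are illustrative assumptions.
prior_H = 1e-6                # tiny but nonzero initial degree of belief in H
p_survive_if_not_H = 0.99     # some real risk of dying each period if H is false
p_survive_if_H = 1.0          # under H, survival is certain

belief = prior_H
for period in range(1, 2501):
    # P(E) = P(E|H)P(H) + P(E|not H)P(not H), then Bayes' rule for P(H|E)
    p_evidence = p_survive_if_H * belief + p_survive_if_not_H * (1 - belief)
    belief = p_survive_if_H * belief / p_evidence
    if period % 500 == 0:
        print(f"still alive after {period} periods: P(H | evidence) = {belief:.6f}")

Every survived period nudges the posterior upwards, and after a couple of thousand periods it sits close to 1 — entirely ‘rational’ by the Bayesian book-keeping, which is precisely the problem.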

The nodal point here is — of course — that although Bayes’ Rule is mathematically unquestionable, that doesn’t qualify it as indisputably applicable to scientific questions. As another of my favourite bloggers — statistician Andrew Gelman — puts it:

The fundamental objections to Bayesian methods are twofold: on one hand, Bayesian methods are presented as an automatic inference engine, and this raises suspicion in anyone with applied experience, who realizes that different methods work well in different settings … Bayesians promote the idea that a multiplicity of parameters can be handled via hierarchical, typically exchangeable, models, but it seems implausible that this could really work automatically. In contrast, much of the work in modern non-Bayesian statistics is focused on developing methods that give reasonable answers using minimal assumptions.

The second objection to Bayes comes from the opposite direction and addresses the subjective strand of Bayesian inference: the idea that prior and posterior distributions represent subjective states of knowledge. Here the concern from outsiders is, first, that as scientists we should be concerned with objective knowledge rather than subjective belief, and second, that it’s not clear how to assess subjective knowledge in any case.

Beyond these objections is a general impression of the shoddiness of some Bayesian analyses, combined with a feeling that Bayesian methods are being oversold as an all-purpose statistical solution to genuinely hard problems. Compared to classical inference, which focuses on how to extract the information available in data, Bayesian methods seem to quickly move to elaborate computation. It does not seem like a good thing for a generation of statisticians to be ignorant of experimental design and analysis of variance, instead becoming experts on the convergence of the Gibbs sampler. In the short term this represents a dead end, and in the long term it represents a withdrawal of statisticians from the deeper questions of inference and an invitation for econometricians, computer scientists, and others to move in and fill in the gap …

Bayesian inference is a coherent mathematical theory but I don’t trust it in scientific applications. Subjective prior distributions don’t transfer well from person to person, and there’s no good objective principle for choosing a noninformative prior (even if that concept were mathematically defined, which it’s not). Where do prior distributions come from, anyway? I don’t trust them and I see no reason to recommend that other people do, just so that I can have the warm feeling of philosophical coherence …

As Brad Efron wrote in 1986, Bayesian theory requires a great deal of thought about the given situation to apply sensibly, and recommending that scientists use Bayes’ theorem is like giving the neighborhood kids the key to your F-16 …

Andrew Gelman

Understanding discrete random variables (student stuff)

22 July, 2014 at 11:13 | Posted in Statistics & Econometrics | Comments Off on Understanding discrete random variables (student stuff)

 

The Sonnenschein-Mantel-Debreu results after forty years

21 July, 2014 at 16:39 | Posted in Economics | 1 Comment

Along with the Arrow-Debreu existence theorem and some results on regular economies, SMD theory fills in many of the gaps we might have in our understanding of general equilibrium theory …

It is also a deeply negative result. SMD theory means that assumptions guaranteeing good behavior at the microeconomic level do not carry over to the aggregate level or to qualitative features of the equilibrium. It has been difficult to make progress on the elaborations of general equilibrium theory that were put forth in Arrow and Hahn 1971 …

Given how sweeping the changes wrought by SMD theory seem to be, it is understandable that some very broad statements about the character of general equilibrium theory were made. Fifteen years after General Competitive Analysis, Arrow (1986) stated that the hypothesis of rationality had few implications at the aggregate level. Kirman (1989) held that general equilibrium theory could not generate falsifiable propositions, given that almost any set of data seemed consistent with the theory. These views are widely shared. Bliss (1993, 227) wrote that the “near emptiness of general equilibrium theory is a theorem of the theory.” Mas-Colell, Michael Whinston, and Jerry Green (1995) titled a section of their graduate microeconomics textbook “Anything Goes: The Sonnenschein-Mantel-Debreu Theorem.” There was a realization of a similar gap in the foundations of empirical economics. General equilibrium theory “poses some arduous challenges” as a “paradigm for organizing and synthesizing economic data” so that “a widely accepted empirical counterpart to general equilibrium theory remains to be developed” (Hansen and Heckman 1996). This seems to be the now-accepted view thirty years after the advent of SMD theory …

S. Abu Turab Rizvi

And so what? Why should we care about Sonnenschein-Mantel-Debreu?

Because Sonnenschein-Mantel-Debreu ultimately explains why New Classical, Real Business Cycle, Dynamic Stochastic General Equilibrium (DSGE) and “New Keynesian” microfounded macromodels are such bad substitutes for real macroeconomic analysis!

These models try to describe and analyze complex and heterogeneous real economies with a single rational-expectations-robot-imitation-representative-agent. That is, with something that has absolutely nothing to do with reality. And — worse still — something that is not even amenable to the kind of general equilibrium analysis it is thought to give a foundation for, since Hugo Sonnenschein (1972), Rolf Mantel (1976) and Gerard Debreu (1974) unequivocally showed that no assumptions on individuals can guarantee either stability or uniqueness of the equilibrium solution.
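
To see just how little ‘good behaviour’ at the individual level buys you in the aggregate, here is a minimal numerical sketch of a standard two-consumer, two-good exchange economy of the kind used in graduate textbooks (cf. Mas-Colell, Whinston and Green 1995) to illustrate the SMD point. Both consumers have smooth, convex, monotone preferences, and yet the economy has three distinct equilibrium price ratios (the particular utility functions and the endowment parameter below are of course just one illustrative choice):

# Two consumers, two goods. Utilities:
#   u1(x1, x2) = x1 - (1/8) * x2**(-8)      (consumer 1)
#   u2(x1, x2) = -(1/8) * x1**(-8) + x2     (consumer 2)
# Endowments chosen so that several price ratios clear the market.
r = 2 ** (8 / 9) - 2 ** (1 / 9)
omega1, omega2 = (2.0, r), (r, 2.0)

def demand1(p1, p2):
    # FOC for consumer 1: x2 = (p1/p2)**(1/9); the rest of wealth is spent on good 1
    x2 = (p1 / p2) ** (1 / 9)
    x1 = (p1 * omega1[0] + p2 * omega1[1] - p2 * x2) / p1
    return x1, x2

def demand2(p1, p2):
    # FOC for consumer 2: x1 = (p2/p1)**(1/9); the rest of wealth is spent on good 2
    x1 = (p2 / p1) ** (1 / 9)
    x2 = (p1 * omega2[0] + p2 * omega2[1] - p1 * x1) / p2
    return x1, x2

def excess_demand_good1(price_ratio):
    # normalise p2 = 1 and let p1 = price_ratio
    x11 = demand1(price_ratio, 1.0)[0]
    x12 = demand2(price_ratio, 1.0)[0]
    return x11 + x12 - (omega1[0] + omega2[0])

for p in (0.5, 1.0, 2.0):
    print(f"p1/p2 = {p}: excess demand for good 1 = {excess_demand_good1(p):+.6f}")

All three price ratios clear the market (the printed excess demands are zero up to rounding), so ‘well-behaved’ individuals guarantee neither uniqueness nor stability of the aggregate outcome — which is exactly the SMD message.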

Opting for cloned representative agents that are all identical is of course not a real solution to the fallacy of composition that the Sonnenschein-Mantel-Debreu theorem points to. Representative agent models are — as I have argued at length here — rather an evasion whereby issues of distribution, coordination, heterogeneity — everything that really defines macroeconomics — are swept under the rug.

Of course, most macroeconomists know that to use a representative agent is a flagrantly illegitimate method of ignoring real aggregation issues. They keep on with their business nevertheless, just because it significantly simplifies what they are doing. It is reminiscent — more than a little — of the drunkard who has lost his keys in some dark place and deliberately chooses to look for them under a neighbouring street light just because it is easier to see there …

Austrian Newspeak

21 July, 2014 at 14:49 | Posted in Economics | 1 Comment

I see that Robert Murphy of the Mises Institute has taken the time to pen a thoughtful critique of my gentle admonishment of followers of the school of quasi-economic thought commonly known as “Austrianism” …

Much of my original article discussed the failed Austrian prediction that QE would cause inflation (i.e., a rise in the general level of consumer prices). Robert reiterates four standard Austrian defenses:

1. Consumer prices rose more than the official statistics suggest.

2. Asset prices rose.

3. “Inflation” doesn’t mean “a rise in the general level of consumer prices,” it means “an increase in the monetary base”, so QE is inflation by definition.

4. Austrians do not depend on evidence to refute their theories; the theories are deduced from pure logic.

Noah Smith

Which makes me think — wonder why — of Keynes’s review of Austrian übereconomist Friedrich von Hayek’s Prices and Production:

The book, as it stands, seems to me to be one of the most frightful muddles I have ever read, with scarcely a sound proposition in it beginning with page 45, and yet it remains a book of some interest, which is likely to leave its mark on the mind of the reader. It is an extraordinary example of how, starting with a mistake, a remorseless logician can end up in bedlam …

J.M. Keynes, Economica 34 (1931)

A singularly repugnant attempt to gag Swedish teachers has been beaten back

21 July, 2014 at 13:56 | Posted in Education & School | Comments Off on A singularly repugnant attempt to gag Swedish teachers has been beaten back

A few days ago, those of us who follow the teacher, author and school strategist Per Kornhall on Facebook could see that he and Lärarnas Riksförbund had won the first partial victory against the municipality of Upplands Väsby. The Labour Court (Arbetsdomstolen) has rejected the municipality’s grounds for dismissal.

Now we can read about it in the local paper Vi i Väsby. In “Per Kornhall får tillbaka jobbet” the paper writes that the Labour Court has issued a so-called interim decision, valid until the conflict is resolved. The court rules that Kornhall keeps his employment with the municipality from 11 July until the dispute has been finally settled.

Per Kornhall and Lärarnas Riksförbund, which represents him, have prevailed on every point against the employer Upplands Väsby, which has now been exposed in the national media as a workplace that wants to silence its employees. His case has been covered not only in the local paper but also in the national media. Kornhall is a well-known and respected author and commentator on school issues.

When Vi i Väsby asks whether he could imagine returning to his job with the municipality, the answer is:
“As things stand now I am of course not interested in coming back, given the way I have been treated.” We understand him.

One can only hope that other teachers and school employees dare to raise their voices after this. There have previously been worrying signs that teachers feel harassed and silenced, not only in private schools but also in municipal ones.

See the report “Tystade lärare” (“Silenced teachers”), produced by Lärarnas Riksförbund, which showed that a majority of the responding teachers — almost 70 per cent of those employed by independent schools and 53 per cent of municipal teachers — would not dare to speak out publicly in the media if they were dissatisfied with their employer or school.

Per Kornhall has made it easier for teachers and others engaged in the school system to stand up to threats from ignorant employers and to raise their voices when those in charge fail to do their job.

Zoran Alagic

Helena von Schantz writes — wisely and personally, as usual — more about this singularly repugnant attempt to gag a whistleblower in the Swedish school system.

Expected utility theory

21 July, 2014 at 11:58 | Posted in Economics | Comments Off on Expected utility theory

In Matthew Rabin’s modern classic Risk Aversion and Expected-Utility Theory: A Calibration Theorem it is forcefully and convincingly shown that expected utility theory does not explain actual behaviour and choices.

What is still surprising, however, is that although the expected utility theory obviously is descriptively inadequate and doesn’t pass the Smell Test, colleagues all over the world gladly continue to use it, as though its deficiencies were unknown or unheard of.

That cannot be the right attitude when facing scientific anomalies. When models are plainly wrong, you’d better replace them!

Rabin writes:

Using expected-utility theory, economists model risk aversion as arising solely because the utility function over wealth is concave. This diminishing-marginal-utility-of-wealth theory of risk aversion is psychologically intuitive, and surely helps explain some of our aversion to large-scale risk: We dislike vast uncertainty in lifetime wealth because a dollar that helps us avoid poverty is more valuable than a dollar that helps us become very rich.

Yet this theory also implies that people are approximately risk neutral when stakes are small. Arrow (1971, p. 100) shows that an expected-utility maximizer with a differentiable utility function will always want to take a sufficiently small stake in any positive-expected-value bet. That is, expected-utility maximizers are (almost everywhere) arbitrarily close to risk neutral when stakes are arbitrarily small. While most economists understand this formal limit result, fewer appreciate that the approximate risk-neutrality prediction holds not just for negligible stakes, but for quite sizable and economically important stakes. Economists often invoke expected-utility theory to explain substantial (observed or posited) risk aversion over stakes where the theory actually predicts virtual risk neutrality. While not broadly appreciated, the inability of expected-utility theory to provide a plausible account of risk aversion over modest stakes has become oral tradition among some subsets of researchers, and has been illustrated in writing in a variety of different contexts using standard utility functions.

In this paper, I reinforce this previous research by presenting a theorem which calibrates a relationship between risk attitudes over small and large stakes. The theorem shows that, within the expected-utility model, anything but virtual risk neutrality over modest stakes implies manifestly unrealistic risk aversion over large stakes. The theorem is entirely ‘‘non-parametric’’, assuming nothing about the utility function except concavity. In the next section I illustrate implications of the theorem with examples of the form ‘‘If an expected-utility maximizer always turns down modest-stakes gamble X, she will always turn down large-stakes gamble Y.’’ Suppose that, from any initial wealth level, a person turns down gambles where she loses $100 or gains $110, each with 50% probability. Then she will turn down 50-50 bets of losing $1,000 or gaining any sum of money. A person who would always turn down 50-50 lose $1,000/gain $1,050 bets would always turn down 50-50 bets of losing $20,000 or gaining any sum. These are implausible degrees of risk aversion. The theorem not only yields implications if we know somebody will turn down a bet for all initial wealth levels. Suppose we knew a risk-averse person turns down 50-50 lose $100/gain $105 bets for any lifetime wealth level less than $350,000, but knew nothing about the degree of her risk aversion for wealth levels above $350,000. Then we know that from an initial wealth level of $340,000 the person will turn down a 50-50 bet of losing $4,000 and gaining $635,670.

The intuition for such examples, and for the theorem itself, is that within the expected-utility framework turning down a modest-stakes gamble means that the marginal utility of money must diminish very quickly for small changes in wealth. For instance, if you reject a 50-50 lose $10/gain $11 gamble because of diminishing marginal utility, it must be that you value the 11th dollar above your current wealth by at most 10/11 as much as you valued the 10th-to-last-dollar of your current wealth.

Iterating this observation, if you have the same aversion to the lose $10/gain $11 bet if you were $21 wealthier, you value the 32nd dollar above your current wealth by at most 10/11 x 10/11 ~ 5/6 as much as your 10th-to-last-dollar of your current wealth. You will value your 220th dollar by at most 3/20 as much as your last dollar, and your 880th dollar by at most 1/2000 of your last dollar. This is an absurd rate for the value of money to deteriorate — and the theorem shows the rate of deterioration implied by expected-utility theory is actually quicker than this. Indeed, the theorem is really just an algebraic articulation of how implausible it is that the consumption value of a dollar changes significantly as a function of whether your lifetime wealth is $10, $100, or even $1,000 higher or lower. From such observations we should conclude that aversion to modest-stakes risk has nothing to do with the diminishing marginal utility of wealth.

Expected-utility theory seems to be a useful and adequate model of risk aversion for many purposes, and it is especially attractive in lieu of an equally tractable alternative model. ‘Extremely concave expected utility’ may even be useful as a parsimonious tool for modeling aversion to modest-scale risk. But this and previous papers make clear that expected-utility theory is manifestly not close to the right explanation of risk attitudes over modest stakes. Moreover, when the specific structure of expected-utility theory is used to analyze situations involving modest stakes — such as in research that assumes that large-stake and modest-stake risk attitudes derive from the same utility-for-wealth function — it can be very misleading. In the concluding section, I discuss a few examples of such research where the expected-utility hypothesis is detrimentally maintained, and speculate very briefly on what set of ingredients may be needed to provide a better account of risk attitudes. In the next section, I discuss the theorem and illustrate its implications.

Expected-utility theory makes wrong predictions about the relationship between risk aversion over modest stakes and risk aversion over large stakes. Hence, when measuring risk attitudes maintaining the expected-utility hypothesis, differences in estimates of risk attitudes may come from differences in the scale of risk comprising data sets, rather than from differences in risk attitudes of the people being studied. Data sets dominated by modest-risk investment opportunities are likely to yield much higher estimates of risk aversion than data sets dominated by larger-scale investment opportunities. So not only are standard measures of risk aversion somewhat hard to interpret given that people are not expected-utility maximizers, but even attempts to compare risk attitudes so as to compare across groups will be misleading unless economists pay due attention to the theory’s calibrational problems.

Indeed, what is empirically the most firmly established feature of risk preferences, loss aversion, is a departure from expected-utility theory that provides a direct explanation for modest-scale risk aversion. Loss aversion says that people are significantly more averse to losses relative to the status quo than they are attracted by gains, and more generally that people’s utilities are determined by changes in wealth rather than absolute levels. Preferences incorporating loss aversion can reconcile significant small-scale risk aversion with reasonable degrees of large-scale risk aversion … Variants of this or other models of risk attitudes can provide useful alternatives to expected-utility theory that can reconcile plausible risk attitudes over large stakes with non-trivial risk aversion over modest stakes.
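
The chaining in the passage above is easy to put into code. Here is a minimal sketch — my own back-of-the-envelope compounding of the 10/11 bound, not Rabin’s theorem itself, which implies an even faster deterioration:

# If a 50-50 lose $10 / gain $11 gamble is rejected at every wealth level, marginal
# utility must fall by a factor of at least 10/11 for every $21 step up in wealth.
factor_per_step = 10 / 11
step_size = 21

for k in (1, 10, 40, 100):
    dollar = 11 + step_size * k          # the '11th dollar' after k further $21 steps
    bound = factor_per_step ** (k + 1)   # chained upper bound on its marginal value
    print(f"dollar #{dollar} above current wealth is worth at most "
          f"{bound:.4f} of the 10th-to-last dollar of current wealth")

For k = 1 this reproduces Rabin’s “at most 10/11 x 10/11 ~ 5/6” for the 32nd dollar; compounding further, the value of money collapses at a rate no one seriously believes describes actual attitudes to wealth.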

Methodological arrogance

20 July, 2014 at 14:40 | Posted in Theory of Science & Methodology | Comments Off on Methodological arrogance

So what do I mean by methodological arrogance? I mean an attitude that invokes microfoundations as a methodological principle — philosophical reductionism in Popper’s terminology — while dismissing non-microfounded macromodels as unscientific. To be sure, the progress of science may enable us to reformulate (and perhaps improve) explanations of certain higher-level phenomena by expressing those relationships in terms of lower-level concepts. That is what Popper calls scientific reduction. But scientific reduction is very different from rejecting, on methodological principle, any explanation not expressed in terms of more basic concepts.

And whenever macrotheory seems inconsistent with microtheory, the inconsistency poses a problem to be solved. Solving the problem will advance our understanding. But simply to reject the macrotheory on methodological principle without evidence that the microfounded theory gives a better explanation of the observed phenomena than the non-microfounded macrotheory … is arrogant. Microfoundations for macroeconomics should result from progress in economic theory, not from a dubious methodological precept.

David Glasner

For more on microfoundations and methodological arrogance, read yours truly’s Micro versus Macro in Real-World Economics Review (issue no. 66, January 2014).

Macroeconomic quackery

20 July, 2014 at 13:41 | Posted in Economics | 2 Comments

In a recent interview, Chicago übereconomist Robert Lucas said:

the evidence on postwar recessions … overwhelmingly supports the dominant importance of real shocks.

So, according to Lucas, changes in tastes and technologies should be able to explain the main fluctuations in e.g. the unemployment that we have seen during the last six or seven decades. But really — not even a Nobel laureate could in his wildest imagination come up with any warranted and justified explanation solely based on changes in tastes and technologies.

How do we protect ourselves from this kind of scientific nonsense? In The Scientific Illusion in Empirical Macroeconomics Larry Summers has a suggestion well worth considering:

Modern scientific macroeconomics sees a (the?) crucial role of theory as the development of pseudo worlds or in Lucas’s (1980b) phrase the “provision of fully articulated, artificial economic systems that can serve as laboratories in which policies that would be prohibitively expensive to experiment with in actual economies can be tested out at much lower cost” and explicitly rejects the view that “theory is a collection of assertions about the actual economy” …

A great deal of the theoretical macroeconomics done by those professing to strive for rigor and generality, neither starts from empirical observation nor concludes with empirically verifiable prediction …

The typical approach is to write down a set of assumptions that seem in some sense reasonable, but are not subject to empirical test … and then derive their implications and report them as a conclusion. Since it is usually admitted that many considerations are omitted, the conclusion is rarely treated as a prediction …

However, an infinity of models can be created to justify any particular set of empirical predictions … What then do these exercises teach us about the world? … If empirical testing is ruled out, and persuasion is not attempted, in the end I am not sure these theoretical exercises teach us anything at all about the world we live in …

Reliance on deductive reasoning rather than theory based on empirical evidence is particularly pernicious when economists insist that the only meaningful questions are the ones their most recent models are designed to address. Serious economists who respond to questions about how today’s policies will affect tomorrow’s economy by taking refuge in technobabble about how the question is meaningless in a dynamic games context abdicate the field to those who are less timid. No small part of our current economic difficulties can be traced to ignorant zealots who gained influence by providing answers to questions that others labeled as meaningless or difficult. Sound theory based on evidence is surely our best protection against such quackery.

Added 23:00 GMT: Commenting on this post, Brad DeLong writes:

What is Lucas talking about?

If you go to Robert Lucas’s Nobel Prize Lecture, there is an admission that his own theory that monetary (and other demand) shocks drove business cycles because unanticipated monetary expansions and contractions caused people to become confused about the real prices they faced simply did not work:

Robert Lucas (1995): Monetary Neutrality:
“Anticipated monetary expansions … are not associated with the kind of stimulus to employment and production that Hume described. Unanticipated monetary expansions, on the other hand, can stimulate production as, symmetrically, unanticipated contractions can induce depression. The importance of this distinction between anticipated and unanticipated monetary changes is an implication of every one of the many different models, all using rational expectations, that were developed during the 1970s to account for short-term trade-offs…. The discovery of the central role of the distinction between anticipated and unanticipated money shocks resulted from the attempts, on the part of many researchers, to formulate mathematically explicit models that were capable of addressing the issues raised by Hume. But I think it is clear that none of the specific models that captured this distinction in the 1970s can now be viewed as a satisfactory theory of business cycles”

And Lucas explicitly links that analytical failure to the rise of attempts to identify real-side causes:

“Perhaps in part as a response to the difficulties with the monetary-based business cycle models of the 1970s, much recent research has followed the lead of Kydland and Prescott (1982) and emphasized the effects of purely real forces on employment and production. This research has shown how general equilibrium reasoning can add discipline to the study of an economy’s distributed lag response to shocks, as well as to the study of the nature of the shocks themselves…. Progress will result from the continued effort to formulate explicit theories that fit the facts, and that the best and most practical macroeconomics will make use of developments in basic economic theory.”

But these real-side theories do not appear to me to “fit the facts” at all.

And yet Lucas’s overall conclusion is:

“In a period like the post-World War II years in the United States, real output fluctuations are modest enough to be attributable, possibly, to real sources. There is no need to appeal to money shocks to account for these movements”

It would make sense to say that there is “no need to appeal to money shocks” only if there were a well-developed theory and models by which pre-2008 post-WWII business-cycle fluctuations are modeled as and explained by identified real shocks. But there isn’t. All Lucas will say is that post-WWII pre-2008 business-cycle fluctuations are “possibly” “attributable… to real shocks” because they are “modest enough”. And he says this even though:

“An event like the Great Depression of 1929-1933 is far beyond anything that can be attributed to shocks to tastes and technology. One needs some other possibilities. Monetary contractions are attractive as the key shocks in the 1929-1933 years, and in other severe depressions, because there do not seem to be any other candidates”

as if 2008-2009 were clearly of a different order of magnitude with a profoundly different signature in the time series than, say, 1979-1982.

Why does he think any of these things?

Yes, indeed, how could any person think any of those things …

Peter Dorman on economists’ obsession with homogeneity and average effects

19 July, 2014 at 20:41 | Posted in Economics | 6 Comments

Peter Dorman is one of those rare economists who are always a pleasure to read. Here his critical eye is focussed on economists’ infatuation with homogeneity and averages:

You may feel a gnawing discomfort with the way economists use statistical techniques. Ostensibly they focus on the difference between people, countries or whatever the units of observation happen to be, but they nevertheless seem to treat the population of cases as interchangeable—as homogenous on some fundamental level. As if people were replicants.

You are right, and this brief talk is about why and how you’re right, and what this implies for the questions people bring to statistical analysis and the methods they use.

Our point of departure will be a simple multiple regression model of the form

y = β0 + β1 x1 + β2 x2 + … + ε

where y is an outcome variable, x1 is an explanatory variable of interest, the other x’s are control variables, the β’s are coefficients on these variables (or a constant term, in the case of β0), and ε is a vector of residuals. We could apply the same analysis to more complex functional forms, and we would see the same things, so let’s stay simple.

What question does this model answer? It tells us the average effect that variations in x1 have on the outcome y, controlling for the effects of other explanatory variables. Repeat: it’s the average effect of x1 on y.

This model is applied to a sample of observations. What is assumed to be the same for these observations? (1) The outcome variable y is meaningful for all of them. (2) The list of potential explanatory factors, the x’s, is the same for all. (3) The effects these factors have on the outcome, the β’s, are the same for all. (4) The proper functional form that best explains the outcome is the same for all. In these four respects all units of observation are regarded as essentially the same.

Now what is permitted to differ across these observations? Simply the values of the x’s and therefore the values of y and ε. That’s it.

Thus measures of the difference between individual people or other objects of study are purchased at the cost of immense assumptions of sameness. It is these assumptions that both reflect and justify the search for average effects …

In the end, statistical analysis is about imposing a common structure on observations in order to understand differentiation. Any structure requires assuming some kinds of sameness, but some approaches make much more sweeping assumptions than others. An unfortunate symbiosis has arisen in economics between statistical methods that excessively rule out diversity and statistical questions that center on average (non-diverse) effects. This is damaging in many contexts, including hypothesis testing, program evaluation, forecasting—you name it …

The first step toward recovery is admitting you have a problem. Every statistical analyst should come clean about what assumptions of homogeneity are being made, in light of their plausibility and the opportunities that exist for relaxing them.
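
Dorman’s point about average effects is easy to make concrete with a small simulation (my own toy numbers, not his). Below, the ‘true’ effect of x1 is 0.2 for half the population and 1.8 for the other half; a pooled OLS regression dutifully reports a single coefficient close to 1.0 — an average effect that describes no individual in the sample:

import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)                      # a control variable
beta1_i = rng.choice([0.2, 1.8], size=n)     # heterogeneous individual effects of x1
y = 1.0 + beta1_i * x1 + 0.5 * x2 + rng.normal(scale=0.3, size=n)

X = np.column_stack([np.ones(n), x1, x2])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print("pooled OLS coefficient on x1:", round(float(beta_hat[1]), 3))
# prints a number close to 1.0 -- the average of 0.2 and 1.8 -- although no one
# in the simulated population has an effect anywhere near 1.0

The homogeneity assumptions Dorman lists — same outcome, same regressors, same β’s, same functional form — are exactly what make that single number look like an answer.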

Limiting model assumptions in economic science always have to be closely examined. If we cannot show that the mechanisms or causes we isolate and handle in our models are stable — in the sense that they do not change when we “export” them to our “target systems” — then they hold only under ceteris paribus conditions and are a fortiori of limited value for understanding, explaining or predicting real economic systems. As the always eminently quotable Keynes writes in A Treatise on Probability (1921):

The kind of fundamental assumption about the character of material laws, on which scientists appear commonly to act, seems to me to be [that] the system of the material universe must consist of bodies … such that each of them exercises its own separate, independent, and invariable effect, a change of the total state being compounded of a number of separate changes each of which is solely due to a separate portion of the preceding state … Yet there might well be quite different laws for wholes of different degrees of complexity, and laws of connection between complexes which could not be stated in terms of laws connecting individual parts … If different wholes were subject to different laws qua wholes and not simply on account of and in proportion to the differences of their parts, knowledge of a part could not lead, it would seem, even to presumptive or probable knowledge as to its association with other parts … These considerations do not show us a way by which we can justify induction … [p. 427] No one supposes that a good induction can be arrived at merely by counting cases. The business of strengthening the argument chiefly consists in determining whether the alleged association is stable, when accompanying conditions are varied … [p. 468] In my judgment, the practical usefulness of those modes of inference … on which the boasted knowledge of modern science depends, can only exist … if the universe of phenomena does in fact present those peculiar characteristics of atomism and limited variety which appears more and more clearly as the ultimate result to which material science is tending.

Econometrics may be an informative tool for research. But if its practitioners do not investigate and make an effort to provide a justification for the credibility of the assumptions on which they erect their building, it will not fulfill its tasks. There is a gap between its aspirations and its accomplishments, and without more supportive evidence to substantiate its claims, critics will continue to consider its ultimate argument a mixture of rather unhelpful metaphors and metaphysics. Maintaining that economics is a science in the “true knowledge” business, yours truly remains a skeptic of the pretences and aspirations of econometrics. So far, I cannot really see that it has yielded very much in terms of relevant, interesting economic knowledge.

The marginal return on its ever higher technical sophistication in no way makes up for the lack of serious under-labouring of its deeper philosophical and methodological foundations that Keynes already complained about. The rather one-sided emphasis on usefulness and its concomitant instrumentalist justification cannot hide that neither Haavelmo nor the legions of probabilistic econometricians following in his footsteps give supportive evidence for their considering it “fruitful to believe” in the possibility of treating unique economic data as the observable results of random drawings from an imaginary sampling of an imaginary population. After having analyzed some of its ontological and epistemological foundations, I cannot but conclude that econometrics on the whole has not delivered “truth”. And I doubt if it has ever been the intention of its main protagonists.

Our admiration for technical virtuosity should not blind us to the fact that we have to have a cautious attitude towards probabilistic inferences in economic contexts. Science should help us penetrate to the causal process lying behind events and disclose the causal forces behind what appear to be simple facts. We should look out for causal relations, but econometrics can never be more than a starting point in that endeavour, since econometric (statistical) explanations are not explanations in terms of mechanisms, powers, capacities or causes. Firmly stuck in an empiricist tradition, econometrics is only concerned with the measurable aspects of reality. But there is always the possibility that there are other variables – of vital importance and although perhaps unobservable and non-additive, not necessarily epistemologically inaccessible – that were not considered for the model. Those that were included can hence never be guaranteed to be more than potential, as opposed to real, causes. A rigorous application of econometric methods in economics really presupposes that the phenomena of our real world economies are ruled by stable causal relations between variables. A perusal of the leading econom(etr)ic journals shows that most econometricians still concentrate on fixed parameter models and that parameter-values estimated in specific spatio-temporal contexts are presupposed to be exportable to totally different contexts. To warrant this assumption one, however, has to convincingly establish that the targeted acting causes are stable and invariant so that they maintain their parametric status after the bridging. The endemic lack of predictive success of the econometric project indicates that this hope of finding fixed parameters is a hope for which there really is no other ground than hope itself.

Real world social systems are not governed by stable causal mechanisms or capacities. The kinds of “laws” and relations that econometrics has established are laws and relations about entities in models that presuppose causal mechanisms being atomistic and additive. When causal mechanisms operate in real world social target systems they only do it in ever-changing and unstable combinations where the whole is more than a mechanical sum of parts. If economic regularities obtain they do it (as a rule) only because we engineered them for that purpose. Outside man-made “nomological machines” they are rare, or even non-existent. Unfortunately that also makes most of the achievements of econometrics – like most contemporary endeavours in mainstream economic theoretical modeling – rather useless.

Remember that a model is not the truth. It is a lie to help you get your point across. And in the case of modeling economic risk, your model is a lie about others, who are probably lying themselves. And what’s worse than a simple lie? A complicated lie.

Sam L. Savage, The Flaw of Averages

Den svarta bilden

19 July, 2014 at 14:34 | Posted in Varia | Comments Off on Den svarta bilden

 

Till Isagel

19 July, 2014 at 09:54 | Posted in Varia | Comments Off on Till Isagel

 

Wonderful musical settings of perhaps our foremost equilibrist of poetic language — Harry Martinson
[h/t Jan Milch]

Chicago Follies (X)

19 July, 2014 at 08:34 | Posted in Economics | Comments Off on Chicago Follies (X)

 

Although I never believed it when I was young and held scholars in great respect, it does seem to be the case that ideology plays a large role in economics. How else to explain Chicago’s acceptance of not only general equilibrium but a particularly simplified version of it as ‘true’ or as a good enough approximation to the truth? Or how to explain the belief that the only correct models are linear and that the von Neumann prices are those to which actual prices converge pretty smartly? This belief unites Chicago and the Classicals; both think that the ‘long-run’ is the appropriate period in which to carry out analysis. There is no empirical or theoretical proof of the correctness of this. But both camps want to make an ideological point. To my mind that is a pity since clearly it reduces the credibility of the subject and its practitioners.

Frank Hahn

Out of the 74 persons who have been awarded “The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel,” 28 — almost 40 % — have been affiliated with the University of Chicago.

The world is really a small place when it comes to economics …
