My blog is skyrocketing!

31 Jul, 2014 at 21:59 | Posted in Varia | 2 Comments


Tired of the idea of an infallible mainstream neoclassical economics and its perpetuation of spoon-fed orthodoxy, yours truly launched this blog in March 2011. The number of visitors has increased steadily, and now, three and a half years later, with almost 125 000 views per month, I have to admit to still being — given the somewhat wonkish character of the blog, with posts mostly on economic theory, statistics, econometrics, theory of science and methodology — rather gobsmacked that so many people are interested and take the time to read the often rather geeky stuff on this blog.

In the 21st century the blogosphere has without any doubt become one of the greatest channels for dispersing new knowledge and information. As a blogger I can specialize in those particular topics an economist and critical realist professor of social science happens to have both deep knowledge of and interest in. That, of course, also means — in the modern long tail world — being able to target a segment of readers with much narrower and more specialized interests than newspapers and magazines as a rule could aim for — and still attract quite a lot of readers.

Economic growth and the male organ — does size matter?

31 Jul, 2014 at 19:51 | Posted in Economics | Comments Off on Economic growth and the male organ — does size matter?

Economic growth has long interested economists. Not least, the question of which factors lie behind high growth rates has been in focus. The factors usually pointed at are mainly economic, social and political variables. In an interesting study from the University of Helsinki, Tatu Westling has expanded the set of potential causal variables to also include biological and sexual variables. In the report Male Organ and Economic Growth: Does Size Matter? (2011), he has — based on the “cross-country” data of Mankiw et al (1992), Summers and Heston (1988), Polity IV Project data on political regime types and a new data set on average penis size in 76 non-oil producing countries (www.everyoneweb.com/worldpenissize) — been able to show that the level and growth of GDP per capita between 1960 and 1985 vary with penis size. Replicating Westling’s study — using my favourite program Gretl — we obtain the following results:


The Solow-based model estimates show that maximum GDP is achieved at a penis length of about 13.5 cm, and that the male reproductive organ (OLS without control variables) is negatively correlated with — and able to explain 20% of the variation in — GDP growth.
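For anyone wanting to redo the exercise without Gretl, here is a minimal sketch of the two regressions in Python. Since I cannot distribute the original data file here, the numbers below are synthetic stand-ins calibrated to Westling’s reported results; the variable names and data-generating parameters are my own illustrative assumptions.

```python
# A minimal sketch of the two Westling-style regressions. The data are
# SYNTHETIC stand-ins (the real cross-country data come from Mankiw et al.
# 1992, Summers-Heston 1988, and Westling's penis-size data set); all
# parameter values here are illustrative assumptions only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 76                                  # 76 non-oil countries in the study
size = rng.normal(14.5, 1.8, n)         # average penis size in cm (synthetic)

# (1) Level regression: log GDP per capita, quadratic in size, so that
#     fitted GDP peaks at an interior value of size (reportedly ~13.5 cm).
log_gdp = 8.0 + 0.9 * size - 0.033 * size**2 + rng.normal(0, 0.25, n)
X1 = sm.add_constant(np.column_stack([size, size**2]))
level_fit = sm.OLS(log_gdp, X1).fit()
b = level_fit.params
print("GDP-maximising size: %.1f cm" % (-b[1] / (2 * b[2])))

# (2) Growth regression: average GDP growth 1960-85 on size alone
#     (OLS without controls): negative slope, R-squared around 0.2.
growth = 4.5 - 0.25 * size + rng.normal(0, 0.9, n)
growth_fit = sm.OLS(growth, sm.add_constant(size)).fit()
print("slope: %.2f, R-squared: %.2f"
      % (growth_fit.params[1], growth_fit.rsquared))
```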

Even with reservations for problems such as endogeneity and confounding, one cannot but agree with Westling’s final assessment that “the ‘male organ hypothesis’ is worth pursuing in future research” and that it “clearly seems that the ‘private sector’ deserves more credit for economic development than is typically acknowledged.” Or? …

Nancy Cartwright on RCTs

31 Jul, 2014 at 08:56 | Posted in Theory of Science & Methodology | Comments Off on Nancy Cartwright on RCTs

I’m fond of science philosophers like Nancy Cartwright. With razor-sharp intellects they immediately go for the essentials. They have no time for bullshit. And neither should we.

In Evidence: For Policy — downloadable here — Cartwright has assembled her papers on how better to use evidence from the sciences “to evaluate whether policies that have been tried have succeeded and to predict whether those we are thinking of trying will produce the outcomes we aim for.” Many of the collected papers center around what can and cannot be inferred from results in well-done randomised controlled trials (RCTs).

A must-read for everyone with an interest in the methodology of science.

Wren-Lewis on economic methodology

30 Jul, 2014 at 17:09 | Posted in Economics | 3 Comments

Simon Wren-Lewis has a post up today discussing why the New Classical Counterrevolution (NCCR) was successful in replacing older theories, despite the fact that the New Classical models weren’t able to explain what happened to output and inflation in the 1970s and 1980s:

The new theoretical ideas New Classical economists brought to the table were impressive, particularly to those just schooled in graduate micro. Rational expectations is the clearest example …

However, once the basics of New Keynesian theory had been established, it was quite possible to incorporate concepts like rational expectations or Ricardian Equivalence into a traditional structural econometric model (SEM) …

The real problem with any attempt at synthesis is that a SEM is always going to be vulnerable to the key criticism in Lucas and Sargent, 1979: without a completely consistent microfounded theoretical base, there was the near certainty of inconsistency brought about by inappropriate identification restrictions …

So why does this matter? … If mainstream academic macroeconomists were seduced by anything, it was a methodology – a way of doing the subject which appeared closer to what at least some of their microeconomic colleagues were doing at the time, and which was very different to the methodology of macroeconomics before the NCCR. The old methodology was eclectic and messy, juggling the competing claims of data and theory. The new methodology was rigorous!

Wren-Lewis seems to be überimpressed by the “rigour” brought to macroeconomics by the New Classical counterrevolution and its rational expectations, microfoundations and ‘Lucas Critique’.

I fail to see why.

Contrary to what Wren-Lewis seems to argue, I would say that the recent economic crisis, and the fact that New Classical economics has had next to nothing to contribute to understanding it, shows that New Classical economics is a degenerative research program in dire need of replacement.

The predominant strategy in mainstream macroeconomics today is to build models and make things happen in these “analogue-economy models.” But although macro-econometrics may have supplied economists with rigorous replicas of real economies, if the goal of theory is to be able to make accurate forecasts or explain what happens in real economies, this ability to construct toy models ad nauseam does not give much leverage.

“Rigorous” and “precise” New Classical models cannot be considered anything other than unsubstantiated conjectures as long as they aren’t supported by evidence from outside the theory or model. To my knowledge, no decisive empirical evidence has ever been presented.

And applying a “Lucas critique” to New Classical models, it is obvious that they too fail. Changing “policy rules” cannot just be presumed not to influence investment and consumption behavior and, a fortiori, technology, thereby contradicting the invariance assumption. Technology and tastes cannot live up to the status of an economy’s deep and structurally stable Holy Grail. They too are part and parcel of an ever-changing and open economy. Lucas’ hope of being able to model the economy as “a FORTRAN program” and “gain some confidence that the component parts of the program are in some sense reliable prior to running it” therefore seems – from an ontological point of view – totally misdirected. The failure of the attempt to anchor the analysis in the allegedly stable deep parameters “tastes” and “technology” shows that if you neglect ontological considerations pertaining to the target system, reality ultimately gets its revenge when questions of bridging and exporting model exercises are at last laid on the table.

No matter how precise and rigorous the analysis is, and no matter how hard one tries to cast the argument in modern mathematical form, these models do not push economic science forward one millimeter if they do not stand the acid test of relevance to the target. No matter how clear, precise, rigorous or certain the inferences delivered inside these models are, they do not per se say anything about real-world economies.


RBC and the Lucas-Rapping theory of unemployment

30 Jul, 2014 at 13:07 | Posted in Economics | 2 Comments

Lucas and Rapping (1969) claim that cyclical increases in unemployment occur when workers quit their jobs because wages or salaries fall below expectations …

According to this explanation, when wages are unusually low, people become unemployed in order to enjoy free time, substituting leisure for income at a time when they lose the least income …

According to the theory, quits into unemployment increase during recessions, whereas historically quits decrease sharply and roughly half of unemployed workers become jobless because they are laid off … During the recession I studied, people were even afraid to change jobs because new ones might prove unstable and lead to unemployment …

If wages and salaries hardly ever fall, the intertemporal substitution theory is widely applicable only if the unemployed prefer jobless leisure to continued employment at their old pay. However, the attitude and circumstances of the unemployed are not consistent with their having made this choice …

In real business cycle theory, unemployment is interpreted as leisure optimally selected by workers, as in the Lucas-Rapping model. It has proved difficult to construct business cycle models consistent with this assumption and with real wage fluctuations as small as they are in reality, relative to fluctuations in employment.

Truman F. Bewley

This is, of course, only what you would expect of New Classical Chicago economists.

But sadly enough this extraterrestrial view of unemployment is actually shared by so-called New Keynesians, whose microfounded dynamic stochastic general equilibrium models cannot even incorporate such a basic fact of reality as involuntary unemployment!

Of course, working with microfounded representative agent models, this should come as no surprise. If one representative agent is employed, all representative agents are. The kind of unemployment that occurs is voluntary, since it is only through adjustments of their hours of work that these optimizing agents maximize their utility.

In the basic DSGE models used by most ‘New Keynesians’, the labour market is always cleared – responding to a changing interest rate, expected lifetime incomes, or real wages, the representative agent maximizes the utility function by varying her labour supply, money holdings and consumption over time. Most importantly – if the real wage somehow deviates from its “equilibrium value,” the representative agent adjusts her labour supply, so that when the real wage is higher than its “equilibrium value,” labour supply is increased, and when the real wage is below its “equilibrium value,” labour supply is decreased.
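To see the mechanics behind this claim, consider a minimal sketch of the intratemporal labour-supply choice. The CRRA period utility and all parameter values are my own illustrative assumptions, not taken from any particular ‘New Keynesian’ model:

```python
# A minimal sketch of the representative agent's intratemporal labour choice.
# Assumed period utility (an illustration, not any specific DSGE model):
#   U(c, n) = c^(1-sigma)/(1-sigma) - chi * n^(1+phi)/(1+phi),  with c = w*n.
# The first-order condition w * c^(-sigma) = chi * n^phi implies
#   n*(w) = (w^(1-sigma) / chi)^(1/(sigma+phi)),
# so for sigma < 1 hours worked rise with the real wage w: every movement
# in employment is, by construction, an optimal (voluntary) response.
sigma, phi, chi = 0.5, 1.0, 1.0

def labour_supply(w: float) -> float:
    """Optimal hours for a given real wage, from the FOC above."""
    return (w ** (1.0 - sigma) / chi) ** (1.0 / (sigma + phi))

for w in (0.8, 1.0, 1.2):  # real wage below, at, and above its "equilibrium value"
    print(f"w = {w:.1f}  ->  n* = {labour_supply(w):.3f}")
```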

In this model world, unemployment is always an optimal response to changes in labour market conditions. Hence, unemployment is totally voluntary. To be unemployed is something one optimally chooses to be.

The final court of appeal for macroeconomic models is the real world.

If substantive questions about the real world are being posed, it is the formalistic-mathematical representations utilized to analyze them that have to match reality, not the other way around.

To Keynes this was self-evident. But obviously not so to New Classical and ‘New Keynesian’ economists.

Normative multiculturalism

29 Jul, 2014 at 23:03 | Posted in Politics & Society | 3 Comments

The other day I listened to a male journalist on a panel who was very upset that immigrants were being singled out as oppressors of women just because some of them beat their wives and forced them to wear the veil and stay indoors. Writing about such things in the newspapers was racist, and we shouldn’t imagine we were all that good at gender equality in Sweden either! There are still wage differences here, so there! And besides, it’s a cultural question!


On the panel sat a number of immigrant women who got so angry they nearly burst a blood vessel. There is a difference between Swedish wage injustices and pharaonic circumcision, threats and “honour killings”. “Are we supposed to keep quiet about what is happening just so as not to tarnish the reputation of our Men?” they said. “And if immigrants started slaughtering Swedish men for the sake of honour, would that still be a ‘cultural question’?”

Katarina Mazetti, Mazettis blandning (2001)

I fully understand these women’s indignation.

What the question ultimately comes down to is whether we, as citizens of a modern democratic society, should tolerate the intolerant.

People in our country who come from other countries or belong to groups of various kinds – whose kinsmen and co-religionists may sit in power and rule with brutal intolerance – must obviously be embraced by our tolerance. But it is equally obvious that this tolerance applies only as long as the intolerance is not practised in our society.

Culture, identity, ethnicity, gender and religiosity must never be accepted as grounds for intolerance in political and civic matters. In a modern democratic society, people belonging to these various groups must be able to count on society also protecting them against the abuses of intolerance. All citizens must have the freedom and the right to question and to leave their own group. Towards those who do not accept that tolerance, we must be intolerant.

In Sweden we have long uncritically cherished an unspecified and undefined multiculturalism. If by multiculturalism we mean that several different cultures exist in our society, this poses no problem. Then we are all multiculturalists.

But if by multiculturalism we mean that cultural belonging and identity also carry specific moral, ethical and political rights and obligations, we are talking about something entirely different. Then we are talking about normative multiculturalism. And accepting normative multiculturalism also means tolerating unacceptable intolerance, since normative multiculturalism implies that the rights of specific cultural groups may be given higher standing than the universal human rights of the individual citizen – and thereby indirectly become a defence of those groups’ (possible) intolerance. In a normatively multiculturalist society, institutions and regulations can be used to restrict people’s freedom on the basis of unacceptable and intolerant cultural values.

Normative multiculturalism, just like xenophobia and racism, unacceptably reduces individuals to passive members of culture- or identity-bearing groups. But tolerance does not mean that we must take a value-relativist attitude towards identity and culture. Those who show by their actions in our society that they do not respect other people’s rights cannot count on us being tolerant towards them. Those who want to use violence to force other people to submit to a particular group’s religion, ideology or “culture” are themselves responsible for the intolerance they must be met with.

If we want to safeguard the achievements of modern democratic society, society must be intolerant of the intolerant normative multiculturalism. And then society itself cannot embrace a normative multiculturalism. In a modern democratic society the rule of law must apply – and apply to everyone!

Towards those in our society who want to force others to live according to their own religious, cultural or ideological beliefs and taboos, society must be intolerant. Towards those who want to force society to adapt its laws and rules to their own religion’s, culture’s or group’s interpretations, society must be intolerant. Towards those who are intolerant in deed, we shall not be tolerant.

The Weight

29 Jul, 2014 at 19:30 | Posted in Varia | Comments Off on The Weight

 

Austrian economics — a methodological critique

29 Jul, 2014 at 17:04 | Posted in Theory of Science & Methodology | 5 Comments


[h/t Jan Milch]

This is a fair presentation and critique of Austrian methodology. But beware! In theoretical and methodological questions it’s not always either-or. We have to be open-minded and pluralistic enough not to throw out the baby with the bath water — and fail to secure insights like this:

What is the problem we wish to solve when we try to construct a rational economic order? … If we possess all the relevant information, if we can start out from a given system of preferences, and if we command complete knowledge of available means, the problem which remains is purely one of logic …

This, however, is emphatically not the economic problem which society faces … The peculiar character of the problem of a rational economic order is determined precisely by the fact that the knowledge of the circumstances of which we must make use never exists in concentrated or integrated form but solely as the dispersed bits of incomplete and frequently contradictory knowledge which all the separate individuals possess. The economic problem of society is … a problem of the utilization of knowledge which is not given to anyone in its totality.

This character of the fundamental problem has, I am afraid, been obscured rather than illuminated by many of the recent refinements of economic theory … Many of the current disputes with regard to both economic theory and economic policy have their common origin in a misconception about the nature of the economic problem of society. This misconception in turn is due to an erroneous transfer to social phenomena of the habits of thought we have developed in dealing with the phenomena of nature …

To assume all the knowledge to be given to a single mind in the same manner in which we assume it to be given to us as the explaining economists is to assume the problem away and to disregard everything that is important and significant in the real world.

Compare this relevant and realist wisdom with the rational expectations hypothesis (REH) used by almost all mainstream macroeconomists today. REH presupposes – basically for reasons of consistency – that agents have complete knowledge of all of the relevant probability distribution functions. And when one tries to incorporate learning in these models – trying to deflect some of the criticism launched against them to date – it is always a very restricted kind of learning that is considered: a learning in which truly unanticipated, surprising, new things never take place, but only rather mechanical updatings – increasing the precision of already existing information sets – of existing probability functions.

Nothing really new happens in these ergodic models, where the statistical representation of learning and information is nothing more than a caricature of what takes place in the real-world target system. This follows from taking for granted that people’s decisions can be portrayed as based on an existing probability distribution, which by definition implies knowledge of every possible event that can be thought of as taking place (otherwise it is, in a strict mathematical-statistical sense, not really a probability distribution).
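How restrictive this kind of ‘learning’ is can be shown numerically. Here is a minimal sketch, assuming a simple grid-based Bayesian update (my own illustration, not any particular rational expectations model): beliefs sharpen within the assumed probability model, but an event assigned zero prior probability can never be learned, however loudly the data protest.

```python
# Mechanical "learning": Bayesian updating over a FIXED probability model.
# The setup (a coin with unknown bias, a discrete grid prior) is illustrative.
import numpy as np

theta = np.linspace(0.01, 0.99, 99)   # candidate parameter values
prior = np.ones_like(theta)
prior[theta > 0.5] = 0.0              # agent rules out theta > 0.5 a priori
prior /= prior.sum()

rng = np.random.default_rng(0)
data = rng.random(200) < 0.7          # reality: the true theta is 0.7

posterior = prior.copy()
for heads in data:
    likelihood = theta if heads else 1.0 - theta
    posterior = posterior * likelihood
    posterior /= posterior.sum()

# Updating only increases precision WITHIN the prior support: whatever was
# given zero probability at the outset stays at zero forever. Genuinely
# unanticipated possibilities can never be "learned" into the model.
print("posterior mass on theta > 0.5:", posterior[theta > 0.5].sum())  # 0.0
print("posterior mode:", round(theta[posterior.argmax()], 2))          # ~0.5
```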

The rational expectations hypothesis presumes consistent behaviour, where expectations do not display any persistent errors. In the world of rational expectations we are always, on average, hitting the bull’s eye. In the more realistic, open systems view, there is always the possibility (danger) of making mistakes that may turn out to be systematic. It is because of this, presumably, that we put so much emphasis on learning in our modern knowledge societies.

As Hayek wrote:

When it comes to the point where [equilibrium analysis] misleads some of our leading thinkers into believing that the situation which it describes has direct relevance to the solution of practical problems, it is high time that we remember that it does not deal with the social process at all and that it is no more than a useful preliminary to the study of the main problem.

The vain glory of the ‘New Keynesian’ club

28 Jul, 2014 at 10:10 | Posted in Economics | Comments Off on The vain glory of the ‘New Keynesian’ club


Paul Krugman’s economic analysis is always stimulating and insightful, but there is one issue on which I think he persistently falls short. That issue is his account of New Keynesianism’s theoretical originality and intellectual impact … The model of nominal wage rigidity and the Phillips curve that I described comes from my 1990 dissertation, was published in March 1994, and has been followed by substantial further published research. That research also introduces ideas which are not part of the New Keynesian model and are needed to explain the Phillips curve in a higher inflation environment.

Similar precedence issues hold for scholarship on debt-driven business cycles, financial instability, the problem of debt-deflation in recessions and depressions, and the endogenous credit-driven nature of the money supply. These are all topics my colleagues and I, working in the Post- and old Keynesian traditions, have been writing about for years – No, decades!

Since 2008, some New Keynesians have discovered these same topics and have developed very similar analyses. That represents progress which is good news for economics. However, almost nowhere will you find citation of this prior work, except for token citation of a few absolutely seminal contributors (like Tobin and Minsky) …

By citing the seminal critical thinkers, mainstream economists lay claim to the intellectual lineage. And by overlooking more recent work, they capture the ideas of their critics.

This practice has enormous consequences. At the personal level, there is the matter of vain glory. At the sociological level, it suffocates debate and pluralism in economics. It is as if the critics have produced nothing so there is no need for debate, and nor are the critics deserving of a place in the academy …

For almost thirty years, New Keynesians have dismissed other Keynesians and not bothered to stay acquainted with their research. But now that the economic crisis has forced awareness, the right thing is to acknowledge and incorporate that research. The failure to do so is another element in the discontent of critics, which Krugman dismisses as just “Frustrations of the Heterodox.”

Thomas Palley

Added July 29: Krugman answers here.

Nights In White Satin

25 Jul, 2014 at 18:50 | Posted in Varia | Comments Off on Nights In White Satin


Old love never rusts …

What Americans can learn from Sweden’s school choice disaster

25 Jul, 2014 at 11:02 | Posted in Education & School | 4 Comments

Advocates for choice-based solutions should take a look at what’s happened to schools in Sweden, where parents and educators would be thrilled to trade their country’s steep drop in PISA scores over the past 10 years for America’s middling but consistent results. What’s caused the recent crisis in Swedish education? Researchers and policy analysts are increasingly pointing the finger at many of the choice-oriented reforms that are being championed as the way forward for American schools. While this doesn’t necessarily mean that adding more accountability and discipline to American schools would be a bad thing, it does hint at the many headaches that can come from trying to do so by aggressively introducing marketlike competition to education.

There are differences between the libertarian ideal espoused by Friedman and the actual voucher program the Swedes put in place in the early ’90s … But Swedish school reforms did incorporate the essential features of the voucher system advocated by Friedman. The hope was that schools would have clear financial incentives to provide a better education and could be more responsive to customer (i.e., parental) needs and wants when freed from the burden imposed by a centralized bureaucracy …

But in the wake of the country’s nose dive in the PISA rankings, there’s widespread recognition that something’s wrong with Swedish schooling …

It’s the darker side of competition that Milton Friedman and his free-market disciples tend to downplay: If parents value high test scores, you can compete for voucher dollars by hiring better teachers and providing a better education—or by going easy in grading national tests. Competition was also meant to discipline government schools by forcing them to up their game to maintain their enrollments, but it may have instead led to a race to the bottom as they too started grading generously to keep their students …

Maybe the overall message is … “there are no panaceas” in public education. We tend to look for the silver bullet—whether it’s the glories of the market or the techno-utopian aspirations of education technology—when in fact improving educational outcomes is a hard, messy, complicated process. It’s a lesson that Swedish parents and students have learned all too well: Simply opening the floodgates to more education entrepreneurs doesn’t disrupt education. It’s just plain disruptive.

Ray Fisman

[h/t Jan Milch]

For my own take on this issue — only in Swedish, sorry — see here, here, here and here.

James Heckman — the ultimate takedown of Teflon-coated defenders of rational expectations

24 Jul, 2014 at 21:16 | Posted in Economics | 4 Comments


James Heckman, winner of the “Nobel Prize” in economics (2000), did an interview with John Cassidy in 2010. It’s an interesting read (Cassidy’s words in italics):

What about the rational-expectations hypothesis, the other big theory associated with modern Chicago? How does that stack up now?

I could tell you a story about my friend and colleague Milton Friedman. In the nineteen-seventies, we were sitting in the Ph.D. oral examination of a Chicago economist who has gone on to make his mark in the world. His thesis was on rational expectations. After he’d left, Friedman turned to me and said, “Look, I think it is a good idea, but these guys have taken it way too far.”

It became a kind of tautology that had enormously powerful policy implications, in theory. But the fact is, it didn’t have any empirical content. When Tom Sargent, Lars Hansen, and others tried to test it using cross equation restrictions, and so on, the data rejected the theories. There was a certain section of people that really got carried away. It became quite stifling.

What about Robert Lucas? He came up with a lot of these theories. Does he bear responsibility?

Well, Lucas is a very subtle person, and he is mainly concerned with theory. He doesn’t make a lot of empirical statements. I don’t think Bob got carried away, but some of his disciples did. It often happens. The further down the food chain you go, the more the zealots take over.

What about you? When rational expectations was sweeping economics, what was your reaction to it? I know you are primarily a micro guy, but what did you think?

What struck me was that we knew Keynesian theory was still alive in the banks and on Wall Street. Economists in those areas relied on Keynesian models to make short-run forecasts. It seemed strange to me that they would continue to do this if it had been theoretically proven that these models didn’t work.

What about the efficient-markets hypothesis? Did Chicago economists go too far in promoting that theory, too?

Some did. But there is a lot of diversity here. You can go office to office and get a different view.

[Heckman brought up the memoir of the late Fischer Black, one of the founders of the Black-Scholes option-pricing model, in which he says that financial markets tend to wander around, and don’t stick closely to economics fundamentals.]

[Black] was very close to the markets, and he had a feel for them, and he was very skeptical. And he was a Chicago economist. But there was an element of dogma in support of the efficient-market hypothesis. People like Raghu [Rajan] and Ned Gramlich [a former governor of the Federal Reserve, who died in 2007] were warning something was wrong, and they were ignored. There was sort of a culture of efficient markets—on Wall Street, in Washington, and in parts of academia, including Chicago.

What was the reaction here when the crisis struck?

Everybody was blindsided by the magnitude of what happened. But it wasn’t just here. The whole profession was blindsided. I don’t think Joe Stiglitz was forecasting a collapse in the mortgage market and large-scale banking collapses.

So, today, what survives of the Chicago School? What is left?

I think the tradition of incorporating theory into your economic thinking and confronting it with data—that is still very much alive. It might be in the study of wage inequality, or labor supply responses to taxes, or whatever. And the idea that people respond rationally to incentives is also still central. Nothing has invalidated that—on the contrary.

So, I think the underlying ideas of the Chicago School are still very powerful. The basis of the rocket is still intact. It is what I see as the booster stage—the rational-expectation hypothesis and the vulgar versions of the efficient-markets hypothesis that have run into trouble. They have taken a beating—no doubt about that. I think that what happened is that people got too far away from the data, and confronting ideas with data. That part of the Chicago tradition was neglected, and it was a strong part of the tradition.

When Bob Lucas was writing that the Great Depression was people taking extended vacations—refusing to take available jobs at low wages—there was another Chicago economist, Albert Rees, who was writing in the Chicago Journal saying, No, wait a minute. There is a lot of evidence that this is not true.

Milton Friedman—he was a macro theorist, but he was less driven by theory and by the desire to construct a single overarching theory than by attempting to answer empirical questions. Again, if you read his empirical books they are full of empirical data. That side of his legacy was neglected, I think.

When Friedman died, a couple of years ago, we had a symposium for the alumni devoted to the Friedman legacy. I was talking about the permanent income hypothesis; Lucas was talking about rational expectations. We have some bright alums. One woman got up and said, “Look at the evidence on 401k plans and how people misuse them, or don’t use them. Are you really saying that people look ahead and plan ahead rationally?” And Lucas said, “Yes, that’s what the theory of rational expectations says, and that’s part of Friedman’s legacy.” I said, “No, it isn’t. He was much more empirically minded than that.” People took one part of his legacy and forgot the rest. They moved too far away from the data.

 

Yes indeed, they certainly “moved too far away from the data.”

In one of the more well-known and highly respected evaluation reviews, Michael Lovell (1986) concluded:

it seems to me that the weight of empirical evidence is sufficiently strong to compel us to suspend belief in the hypothesis of rational expectations, pending the accumulation of additional empirical evidence.

And this is how Nikolay Gertchev summarizes studies on the empirical correctness of the hypothesis:

More recently, it even has been argued that the very conclusions of dynamic models assuming rational expectations are contrary to reality: “the dynamic implications of many of the specifications that assume rational expectations and optimizing behavior are often seriously at odds with the data” (Estrella and Fuhrer 2002, p. 1013). It is hence clear that if taken as an empirical behavioral assumption, the RE hypothesis is plainly false; if considered only as a theoretical tool, it is unfounded and self-contradictory.

For even more on the issue, permit me to self-indulgently recommend reading my article Rational expectations — a fallacious foundation for macroeconomics in a non-ergodic world in real-world economics review no. 62.

Sverker Sörlin’s demolition of Jan Björklund

24 Jul, 2014 at 15:16 | Posted in Education & School | Comments Off on Sverker Sörlin’s demolition of Jan Björklund

In recent years a great deal of political energy has gone into trying to reduce the independent schools’ room for manoeuvre and, it must be said, their harmful effects … Could it even be that the decline in school results can be tied, at least in part, to this revolution in the governance of the school system?

In a government where the Moderates are the largest party and their values and ideals dominate, there must be no doubt on this point. Nor is there. Jan Björklund is an uncompromising defender of the independent-school reform …

Jan Björklund was lashed to the mast so that he would not be able to heed the sirens’ song about another world being possible. One where we start over and try to cooperate. Where the school is our common task, as it once was in a brighter society where FP took part in building the world’s best school …

It shouldn’t be impossible. But with Jan Björklund it was in fact precisely that: impossible.

Sverker Sörlin

Sörlin’s ten-page analysis in Magasinet Arena of Jan Björklund’s years as Swedish minister for schools is an absolute “must read”!

Read my lips — statistical significance is NOT a substitute for doing real science!

24 Jul, 2014 at 14:12 | Posted in Theory of Science & Methodology | 2 Comments

Noah Smith has a post up today telling us that his Bayesian Superman wasn’t intended to be a knock on Bayesianism and that he thinks Frequentism is a bit underrated these days:

Frequentist hypothesis testing has come under sustained and vigorous attack in recent years … But there are a couple of good things about Frequentist hypothesis testing that I haven’t seen many people discuss. Both of these have to do not with the formal method itself, but with social conventions associated with the practice …

Why do I like these social conventions? Two reasons. First, I think they cut down a lot on scientific noise. “Statistical significance” is sort of a first-pass filter that tells you which results are interesting and which ones aren’t. Without that automated filter, the entire job of distinguishing interesting results from uninteresting ones falls to the reviewers of a paper, who have to read through the paper much more carefully than if they can just scan for those little asterisks of “significance”.

Hmm …

A non-trivial part of teaching statistics consists of teaching students to perform significance testing. A problem I have noticed repeatedly over the years, however, is that no matter how careful you try to be in explicating what the probabilities generated by these statistical tests – p-values – really are, most students still misinterpret them. And a lot of researchers obviously also fall prey to the same mistakes:

Are women three times more likely to wear red or pink when they are most fertile? No, probably not. But here’s how hardworking researchers, prestigious scientific journals, and gullible journalists have been fooled into believing so.

The paper I’ll be talking about appeared online this month in Psychological Science, the flagship journal of the Association for Psychological Science, which represents the serious, research-focused (as opposed to therapeutic) end of the psychology profession.

“Women Are More Likely to Wear Red or Pink at Peak Fertility,” by Alec Beall and Jessica Tracy, is based on two samples: a self-selected sample of 100 women from the Internet, and 24 undergraduates at the University of British Columbia. Here’s the claim: “Building on evidence that men are sexually attracted to women wearing or surrounded by red, we tested whether women show a behavioral tendency toward wearing reddish clothing when at peak fertility. … Women at high conception risk were more than three times more likely to wear a red or pink shirt than were women at low conception risk. … Our results thus suggest that red and pink adornment in women is reliably associated with fertility and that female ovulation, long assumed to be hidden, is associated with a salient visual cue.”

Pretty exciting, huh? It’s (literally) sexy as well as being statistically significant. And the difference is by a factor of three—that seems like a big deal.

Really, though, this paper provides essentially no evidence about the researchers’ hypotheses …

The way these studies fool people is that they are reduced to sound bites: Fertile women are three times more likely to wear red! But when you look more closely, you see that there were many, many possible comparisons in the study that could have been reported, with each of these having a plausible-sounding scientific explanation had it appeared as statistically significant in the data.

The standard in research practice is to report a result as “statistically significant” if its p-value is less than 0.05; that is, if there is less than a 1-in-20 chance that the observed pattern in the data would have occurred if there were really nothing going on in the population. But of course if you are running 20 or more comparisons (perhaps implicitly, via choices involved in including or excluding data, setting thresholds, and so on), it is not a surprise at all if some of them happen to reach this threshold.

The headline result, that women were three times as likely to be wearing red or pink during peak fertility, occurred in two different samples, which looks impressive. But it’s not really impressive at all! Rather, it’s exactly the sort of thing you should expect to see if you have a small data set and virtually unlimited freedom to play around with the data, and with the additional selection effect that you submit your results to the journal only if you see some catchy pattern. …

Statistics textbooks do warn against multiple comparisons, but there is a tendency for researchers to consider any given comparison alone without considering it as one of an ensemble of potentially relevant responses to a research question. And then it is natural for sympathetic journal editors to publish a striking result without getting hung up on what might be viewed as nitpicking technicalities. Each person in this research chain is making a decision that seems scientifically reasonable, but the result is a sort of machine for producing and publicizing random patterns.

There’s a larger statistical point to be made here, which is that as long as studies are conducted as fishing expeditions, with a willingness to look hard for patterns and report any comparisons that happen to be statistically significant, we will see lots of dramatic claims based on data patterns that don’t represent anything real in the general population. Again, this fishing can be done implicitly, without the researchers even realizing that they are making a series of choices enabling them to over-interpret patterns in their data.

Andrew Gelman
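Gelman’s 1-in-20 arithmetic is easy to verify in simulation. Here is a minimal sketch; the 20-comparison setup mirrors his example, while the sample sizes and the number of simulated ‘studies’ are my own illustrative choices:

```python
# Run 20 independent tests on pure noise and count how often at least one
# of them comes out "statistically significant" at the 5% level.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_studies, n_tests, n_obs = 10_000, 20, 50
studies_with_a_finding = 0

for _ in range(n_studies):
    # 20 comparisons, each on data where NOTHING is going on (true mean 0)
    data = rng.normal(0.0, 1.0, size=(n_tests, n_obs))
    pvals = stats.ttest_1samp(data, 0.0, axis=1).pvalue
    if (pvals < 0.05).any():
        studies_with_a_finding += 1

print("share of null 'studies' with at least one significant result:",
      studies_with_a_finding / n_studies)   # about 1 - 0.95**20 = 0.64
```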

Indeed. If anything, this underlines how important it is not to equate science with statistical calculation. All science entails human judgement, and using statistical models doesn’t relieve us of that necessity. When we work with misspecified models, the scientific value of significance testing is actually zero, even though we may be making valid statistical inferences! Statistical models and concomitant significance tests are no substitute for doing real science. Or as a noted German philosopher once famously wrote:

There is no royal road to science, and only those who do not dread the fatiguing climb of its steep paths have a chance of gaining its luminous summits.

Statistical significance doesn’t say that something is important or true. Since there is already far better and more relevant testing that can be done (see e.g. here and here), it is high time to consider what the proper function should be of what has now really become a statistical fetish. Given that it is in any case very unlikely that any population parameter is exactly zero, and that, contrary to assumption, most samples in social science and economics are not random and do not have the right distributional shape, why continue to press students and researchers to do null hypothesis significance testing, testing that relies on a weird backward logic that students and researchers usually don’t understand?

Suppose that we as educational reformers have a hypothesis that implementing a voucher system would raise mean test results by 100 points (the null hypothesis). Instead, when sampling, it turns out the system only raises them by 75 points, with a standard error (telling us how much the mean varies from one sample to another) of 20.

Does this imply that the data do not disconfirm the hypothesis? Given the usual normality assumptions on sampling distributions, the one-tailed p-value is approximately 0.11. Thus, approximately 11% of the time we would expect a score this low or lower if we were sampling from this voucher-system population. That means that, using the ordinary 5% significance level, we would not reject the null hypothesis, although the test has shown that it is “likely” that the hypothesis is false.
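For the record, the 0.11 figure is just the lower tail of the normal sampling distribution, as a two-line check with the example’s numbers confirms:

```python
# One-tailed p-value for the voucher example: null mean 100,
# observed mean 75, standard error 20.
from scipy.stats import norm

z = (75 - 100) / 20        # z = -1.25
p = norm.cdf(z)            # lower-tail probability under the null
print(f"z = {z:.2f}, one-tailed p = {p:.3f}")   # p = 0.106, i.e. about 0.11
```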

In its standard form, a significance test is not the kind of “severe test” that we are looking for in our search for being able to confirm or disconfirm empirical scientific hypotheses. This is problematic for many reasons, one being that there is a strong tendency to accept the null hypothesis when it cannot be rejected at the standard 5% significance level. In their standard form, significance tests bias against new hypotheses by making it hard to disconfirm the null hypothesis.

And as shown over and over again when it is applied, people have a tendency to read “not disconfirmed” as “probably confirmed.” But looking at our example, standard scientific methodology tells us that since there is only an 11% probability that pure sampling error could account for the observed difference between the data and the null hypothesis, it would be more “reasonable” to conclude that we have a case of disconfirmation. Especially if we perform many independent tests of our hypothesis and they all give about the same result as our reported one, I guess most researchers would count the hypothesis as even more disconfirmed.

And, most importantly, of course we should never forget that the underlying parameters we use when performing significance tests are model constructions. Our p-value of 0.11 means next to nothing if the model is wrong. As David Freedman writes in Statistical Models and Causal Inference:

I believe model validation to be a central issue. Of course, many of my colleagues will be found to disagree. For them, fitting models to data, computing standard errors, and performing significance tests is “informative,” even though the basic statistical assumptions (linearity, independence of errors, etc.) cannot be validated. This position seems indefensible, nor are the consequences trivial. Perhaps it is time to reconsider.

Free to choose — and lose

24 Jul, 2014 at 12:45 | Posted in Varia | Comments Off on Free to choose — and lose

[h/t barnilsson]

