Svenska Dagbladet is lying about the free schools

31 March, 2013 at 20:25 | Posted in Education & School | 8 Comments

On the editorial page of today's Svenska Dagbladet, its political editor-in-chief writes that "there is much" pointing in the direction that more free schools lead to better results. Citing a study by Anders Böhlmark and Mikael Lindahl, he also claims that "pupils in municipal schools also do better thanks to the competition."

That sounds just great. The only problem is that it is a lie!

Let me explain why I think Svenska Dagbladet is lying about the free schools, and at the same time sort out what research and data actually say about the free schools' effects on the results of schools and pupils.

Trickle-down – the USA and Sweden show how it looks in reality

30 March, 2013 at 17:44 | Posted in Economics, Politics & Society | 4 Comments

In a post up today on his blog, Paul Krugman notices that there “doesn’t seem to be much trickle-down going on” in the USA.

Unfortunately we can see the same pattern developing in Sweden. Look at the figure below, which shows how the distribution of mean income and wealth (expressed in 2009 prices) for the top 0.1% and the bottom 90% has changed in Sweden over the last 30 years:


Source: The World Top Incomes Database

A society where we allow inequality of income and wealth to increase without bounds sooner or later implodes. The cement that keeps us together erodes, and in the end we are left only with people dipped in the ice-cold water of egoism and greed. It is high time to put an end to this, the worst Juggernaut of our times!

The free schools and segregation

26 March, 2013 at 14:12 | Posted in Education & School | 13 Comments

The other day, former editor-in-chief and free-school zealot Hans Bergström published a singularly ill-founded article on newsmill, in which he claimed that Swedish free schools reduce segregation.

Sten Svensson – former editor-in-chief of Lärarnas tidning – answers this nonsense with a well-argued piece in today's newsmill:

During his years as editor-in-chief of Dagens Nyheter, Hans Bergström conducted an unparalleled smear campaign against the Swedish school. It was lax, woolly, hostile to knowledge and lousy in every conceivable way …

Now we have exactly the school Bergström strove for, and still it is not good. Pupil results are falling, and more and more research reports show that the Swedish school is headed in entirely the wrong direction …

Bergström tries to prove that the Swedish school is not segregated and that segregation has not increased because of free school choice and the independent schools. He is arguing against his better knowledge.

In the report on equity in schooling published last year by the Swedish National Agency for Education (Skolverket), one can read:

”The spread between schools' average results has increased markedly. Between-school variation, the measure used to describe how much results differ between schools, has more than doubled since the end of the 1990s, from a level that was low by international standards. In 2011, between-school variation according to grades stood at over 18 per cent. The international PISA surveys show the same development. The spread in results between pupils has also increased, but not to the same extent as the increase between schools.” (Likvärdig utbildning i svensk grundskola? Rapport 374. 2012.)

According to Skolverket, the increased differences have several causes. Part is explained by pupils being segregated along socio-economic lines, and another part by pupils being segregated along lines that do not show up in the statistics.

”Schools, on the other hand, seem to be becoming more and more segregated by characteristics that do not show up in the ordinary statistics – for example, more study-motivated pupils (regardless of socio-economic background) tend to a greater extent to use free school choice and seek out schools with many other study-motivated pupils. In this way, pupils become sorted more by results and hidden characteristics than by conventional measures of socio-economic background.”

Skolverket's assessment is that equity in the Swedish compulsory school has deteriorated, and that ”the school-choice and decentralization reforms of the early 1990s have very probably contributed to this development, even if other factors may also have played some role.”

After his years as editor-in-chief of Dagens Nyheter, Hans Bergström has been active in the free-school industry as part-owner of a school company. He thus belongs to the group that makes money from a segregated school system. The school companies' profits stem largely from their low teacher density. If the limited-company schools achieve decent pupil results despite low teacher density, it is because they have a positively selected student body. Their pupils more often have well-educated parents who can support their children in various ways. As Skolverket shows above, they also attract motivated and precocious pupils who do not have well-educated parents. If the company schools had the same student body as an ordinary suburban school, it would be impossible to achieve good pupil results with low teacher density. The school companies' profits thus require a positively segregated school, and the profits drive school segregation onward.

Hans Bergström harshly criticized the old comprehensive and equitable school, and his goal was to abolish it. That has now been accomplished, and the equitable school is abolished. This has been good for Hans Bergström and the free-school industry, which have been able to make money from the system, but for the school and its pupils the development has been catastrophic.

As for the free schools' effect on the quality of Swedish schooling, yours truly points out in an article in Skola och samhälle (longer version here) that most of the research suggests that no positive impact of the free schools can be established.

So segregation is increasing and quality is not – what, then, are the free schools good for? That the owners and directors of the company schools line their pockets with our tax money is hardly a sufficient argument.

Of what use are RCTs?

25 March, 2013 at 13:00 | Posted in Theory of Science & Methodology | 1 Comment

In this video the philosopher of science Nancy Cartwright explains why randomized controlled trials (RCTs) are not at all the "gold standard" they have lately often been portrayed as. As yours truly has repeatedly argued on this blog (e.g. here and here), RCTs usually do not provide evidence that their results are exportable to other target systems. The almost religious conviction with which their proponents portray them cannot hide the fact that RCTs cannot be taken for granted to give generalizable results. That something works somewhere is no warranty that it will work for us, or even that it works generally.

Why the notion of austerity is a massive failure

25 March, 2013 at 10:07 | Posted in Economics | 1 Comment

 

(h/t Jan Milch)

On the importance of distinguishing between risk and uncertainty

24 March, 2013 at 18:07 | Posted in Economics | 2 Comments

On one point it is particularly urgent to upgrade our financial knowledge, and that is the understanding of uncertainty …

The textbook in effect tells us that we can sweep uncertainty under the rug. When it claims that we can "specify a probability distribution," it tries to convert uncertainty into risk …

It is time to realize how dangerous this view is. Financial markets are uncertain. The uncertainty is not quantifiable. Trying to paper over uncertainty and talk about risk as a statistical measure is irresponsible, because the second you put a number on risk, many people start to believe in it.

Perhaps no one expresses better than the post-Keynesians' grand old man – Paul Davidson – the importance of following John Maynard Keynes (not to mention Frank Knight and Gunnar Myrdal) in distinguishing between probabilistic risk and genuine uncertainty:

Unfortunately as we have all learned in the world of experience, little is known with certainty about future payoffs of investment decisions made today. If the return on economic decisions made today is never known with certainty, then how can financial managers make optimal decisions on where to put their firm's money, and householders on where to put their savings today?

If theorists invent a world remote from reality and then lived in it consistently, then Keynes [1936, p.16] argued these economic thinkers were “like Euclidean geometers in a non-Euclidean world who discover that apparent parallel lines collide, rebuke these lines for not keeping straight. Yet, in truth there is no remedy except to throw over the axiom of parallels and to work out a non-Euclidean geometry. Something similar is required to-day in economics” …

As any statistician will tell you, in order to draw any statistical (probabilistic risk) inferences regarding the values of any population universe, one should draw and statistically analyze a sample from that universe. Drawing a sample from the future economic universe of financial markets, however, is impossible. Simply stated the ergodic axiom presumes that the future is already predetermined by an unchanging probability distribution and therefore a sample from the past is equivalent to drawing a sample from the future … Assuming ergodicity permits one to believe one can calculate an actuarial certainty about future events from past data.

Efficient market theorists must implicitly presume decision makers can reliably calculate the future. The economy, therefore, must be governed by an ergodic stochastic process, so that calculating a probability distribution from past statistical data samples is the same as calculating the risks from a sample drawn from the future. If financial markets are governed by the ergodic axiom, then we might ask why do mutual funds that advertise their wonderful past earnings record always note in the advertisement that past performance does not guarantee future results …

This ergodic axiom is an essential foundation for all the complex risk management computer models developed by the “quants” on Wall Street. It is also the foundation for econometricians who believe that their econometric models will correctly predict the future GDP, employment, inflation rate, etc. If, however, the economy is governed by a non-ergodic stochastic process, then econometric estimates generated from past market data are not reliable estimates that would be obtained if one could draw a sample from the future …

In sum, the ergodic axiom underlying the typical risk management and efficient market models represents, in a Keynes view, a model remote from an economic reality that is truly governed by non-ergodic conditions. Keynes, his Post Keynesian followers, and George Soros all reject the assumption that people can know the economic future since it is not predetermined. Instead they assert that people “know” they cannot know the future outcome of crucial economic decisions made today. The future is truly uncertain and not just probabilistic risky.

Paul Davidson
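Davidson's ergodicity point can be illustrated with a toy simulation (the process and all numbers are my own invented example, not Davidson's): in a non-ergodic multiplicative process, the ensemble average computed across many agents systematically misleads about what any single agent experiences through time.

```python
import random

random.seed(42)

def multiplicative_gamble(steps, up=1.5, down=0.6):
    """Wealth is multiplied each step by 1.5 or 0.6 with equal probability.
    The expected one-step growth factor is (1.5 + 0.6) / 2 = 1.05 > 1,
    but the time-average growth factor is sqrt(1.5 * 0.6) ~ 0.95 < 1,
    so the process is non-ergodic: ensemble and time averages diverge."""
    wealth = 1.0
    for _ in range(steps):
        wealth *= up if random.random() < 0.5 else down
    return wealth

# Ensemble average over many independent agents after 10 steps (grows):
ensemble = sum(multiplicative_gamble(10) for _ in range(100_000)) / 100_000

# A single agent followed for a long time (almost surely collapses):
single = multiplicative_gamble(1000)

print(f"ensemble mean after 10 steps: {ensemble:.2f}")  # typically near 1.05**10 = 1.63
print(f"one agent after 1000 steps: {single:.3g}")      # typically vanishingly small
```

Sampling the past of such a process tells you nothing reliable about any individual's future trajectory, which is exactly why the ergodic axiom is doing so much hidden work in the quants' risk models.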

How to be more European

24 March, 2013 at 10:25 | Posted in Varia | 1 Comment

 

Sunday morning and The Telegraph obits

24 March, 2013 at 09:34 | Posted in Varia | Comments Off on Sunday morning and The Telegraph obits

One of yours truly’s Sunday morning rituals is reading the obituary column of The Telegraph. Today this obit caught my eye:

Peter Scott, who has died aged 82, was a highly accomplished cat burglar, and as Britain’s most prolific plunderer of the great and good took particular pains to select his victims from the ranks of aristocrats, film stars and even royalty.

According to a list of 100 names he supplied to The Daily Telegraph, he targeted figures such as Soraya Khashoggi, Shirley MacLaine, the Shah of Iran, Judy Garland and even Queen Elizabeth the Queen Mother — although he added apologetically that, in her case, the authorities had covered up by issuing a “D-notice”.

In 1994 Scott wrote to the newspaper to say that he would consider it “a massive disappointment if I were not to get a mention in [its] illustrious obituary column”. He explained that he derived much pleasure from reading accounts of the exploits of war heroes, adding: “I would like to think I would have fronted the Hun with the same enthusiasm as I did the fleshpots in Mayfair.” He added that he had been a Telegraph reader since 1957, when newspapers were first allowed in prisons, “on account of its broad coverage on crime”.

In the course of thieving jewellery and artworks from Mayfair mansions, Bond Street shops and stately homes, Scott also served Fleet Street as handy headline fodder, being variously hailed the “King of the Cat Burglars”, “Burglar to the Stars” or the “Human Fly”. He identified a Robin Hood streak in himself, too, asserting in his memoirs that he had been “sent by God to take back some of the wealth that the outrageously rich had taken from the rest of us”.

“I felt like a missionary seeing his flock for the first time,” he explained when he recalled casing Dropmore House, the country house of the press baron Viscount Kemsley, on a rainy night in 1956 and squinting through the window at the well-heeled guests sitting down to dinner. “I decided these people were my life’s work.”

Always a meticulous planner, Scott bought a new suit before each job, so that he would not look out of place in the premises he was burgling. Fear, the possibility of capture, excited him.

During one break-in “a titled lady appeared at the top of the stairs. ‘Everything’s all right, madam,’ I shouted up, and she went off to bed thinking I was the butler.” On other occasions, if disturbed by the occupier, he would shout reassuringly: “It’s only me!”

In all, by his own reckoning, Scott stole jewels, furs and artworks worth more than £30 million. He held none of his victims in great esteem (“upper-class prats chattering in monosyllables”). The roll-call of “marks” from whom he claimed to have stolen valuables included Zsa Zsa Gabor, Lauren Bacall, Elizabeth Taylor, Vivien Leigh, Sophia Loren, Maria Callas and the gambling club and zoo owner John Aspinall. “Robbing that bastard Aspinall was one of my favourites,” he recollected. “Sophia Loren got what she deserved too.”

Scott stole a £200,000 necklace from the Italian star when she was in Britain filming The Millionairess in 1960. Billed in the newspapers as Britain’s biggest jewellery theft, it yielded Scott £30,000 from a “fence”. After Miss Loren had pointed at him on television saying: “I come from a long line of gipsies. You will have no luck,” Scott lost every penny in the Palm Beach Casino at Cannes.

In the 1950s and 1960s he pinpointed his targets by perusing the society columns in the Daily Mail and Daily Express. Nor did he ease up with the approach of middle-age; in the 1980s he was still scaling walls and drainpipes. In one Bond Street caper alone he stole jewellery worth £1.5 million, and in 1985 he was jailed for four years. On his release he expanded his social horizons by becoming a celebrity “tennis bum”, a racquet for hire at a smart London club where — as he put it in his autobiography — he coached still more potential “rich prats”.

By the mid-1990s, Scott had served 12 years in prison in the course of half a dozen separate stretches, and claimed to have laid down his “cane” [jemmy] and retired from a life of crime.

But in 1998 he was jailed for another three and a half years for handling, following the theft of Picasso’s Tête de Femme from the Lefevre Gallery in Mayfair the year before. To the impassive detectives who arrested him, Scott quoted a line from WE Henley: “Under the bludgeonings of chance, my head is bloody but unbowed.” He often drew on literary allusions, quoting Confucius, Oscar Wilde and Proust.

Scott was also a past-master in self-justification of his crimes and misdemeanours: “The people I burgled got rich by greed and skulduggery. They indulged in the mechanics of ostentation — they deserved me and I deserved them. If I rob Ivana Trump, it is just a meeting of two different kinds of degeneracy on a dark rooftop.”

In his memoirs, Gentleman Thief (1995), Scott admitted to an even stronger motivation than fear as he contemplated another “job”: “Even now, after 30 years, it was a sexual thrill.” There was the additional satisfaction in his assumption that the millions reading about his exploits in the papers were silently cheering him on.

John Maynard Keynes on graphs and statistics

23 March, 2013 at 17:49 | Posted in Statistics & Econometrics | Comments Off on John Maynard Keynes on graphs and statistics

In his 1938 review for The Economic Journal of Historical Development of the Graphical Representation of Statistical Data, by H. Gray Funkhouser, the great economist writes:

“Perhaps the most striking outcome of Mr. Funkhouser’s researches is the fact of the very slow progress which graphical methods made until quite recently … In the first fifty volumes of the Statistical Journal, 1837-87, only fourteen graphs are printed altogether. It is surprising to be told that Laplace never drew a graph of the normal law of error … Edgeworth made no use of statistical charts as distinct from mathematical diagrams.

Apart from Quetelet and Jevons, the most important influences were probably those of Galton and of Mulhall’s Dictionary, first published in 1884. Galton was indeed following his father and grandfather in this field, but his pioneer work was mainly restricted to meteorological maps, and he did not contribute to the development of the graphical representation of economic statistics.”

So far so good. But then comes the kicker:

“Mr. Funkhouser has made an extremely interesting and valuable contribution to the history of statistical method. I wish, however, that he could have added a warning, supported by horrid examples, of the evils of the graphical method unsupported by tables of figures. Both for accurate understanding, and particularly to facilitate the use of the same material by other people, it is essential that graphs should not be published by themselves, but only when supported by the tables which lead up to them. It would be an exceedingly good rule to forbid in any scientific periodical the publication of graphs unsupported by tables.”

I’m ok with that—if they also forbid the publication of all tables unsupported by graphs. Also if they allow graphs by themselves. Then I’m totally on board.

Andrew Gelman

Well, actually, I think Keynes was more right than wrong, considering how difficult it was back in the 1930s to get hold of other people’s data. Including tables made the data available to other researchers and thereby made it easier to evaluate the validity of the statistical analysis.

Cyprus as a systemic risk to the euro

23 March, 2013 at 12:12 | Posted in Economics | 1 Comment

But the issue is not the direct economic blow that a crash in Cyprus would cause. The entire Cypriot banking system is half the size of Sweden's SEB.

The danger lies elsewhere. One conceivable scenario, evidently discussed in EU circles in recent days, is that Cyprus actually leaves the currency union. The euro bunker's bricked-up emergency exit – the one a united army of politicians has declared must stay shut even if hell freezes over – would then suddenly be standing wide open. The populations of Spain, Portugal and other crisis countries might well start getting ideas.

If they were to try to flee the euro, the consequences for Europe would be all the greater …

That is how the whole euro crisis works. Interests are pitted against one another. Sound principles collide with an uncompromising reality. What seems wise in Berlin sounds insane in Madrid, London or Nicosia. Every solution to one problem creates two new ones. And that is why seemingly small decisions take on disproportionately great importance.

Andreas Cervenka

In other words, the time is ripe for Carl Bastiat Hamilton to once again propose that Sweden join the euro …

Inflation targeting – an unmitigated failure

22 March, 2013 at 16:18 | Posted in Economics | 4 Comments


The Riksbank in 1993 announced an official target for CPI inflation of 2 percent. Over the last 15 years, average CPI inflation has equaled 1.4 percent and has thus fallen short of the target by 0.6 percentage points. Has this undershooting of the inflation target had any costs in terms of higher average unemployment? This depends on whether the long-run Phillips curve in Sweden is vertical or not. During the last 15 years, inflation expectations in Sweden have become anchored to the inflation target in the sense that average inflation expectations have been close to the target. The inflation target has thus become credible. If inflation expectations are anchored to the target also when average inflation deviates from the target, the long-run Phillips curve is no longer vertical but downward-sloping. Then average inflation below the credible target means that average unemployment is higher than the rational-expectations steady-state (RESS) unemployment rate. The data indicate that the average unemployment rate has been 0.8 percentage points higher than the RESS rate over the last 15 years. This is a large unemployment cost of undershooting the inflation target. Some simple robustness tests indicate that the estimate of the unemployment cost is rather robust, but the estimate is preliminary and further scrutiny is needed to assess its robustness.

During 1997-2011, average CPI inflation has fallen short of the inflation target of 2 percent by 0.6 percentage points. But average inflation expectations according to the TNS Sifo Prospera survey have been close to the target. Thus, average inflation expectations have been anchored to the target and the target has become credible. If average inflation expectations are anchored to the target when average inflation differ from the target, the long-run Phillips curve is not vertical. Then lower average inflation means higher average unemployment. The data indicate that average inflation below target has been associated with average unemployment being 0.8 percentage points higher over the last 15 years than would have been the case if average inflation had been equal to the target. This is a large unemployment cost of average inflation below a credible target. Some simple robustness tests indicate that the estimate of the unemployment cost is rather robust, but the estimate is preliminary and further scrutiny is needed to assess its robustness.

The difference between average inflation and average inflation expectations and the apparent existence of a downward-sloping long-run Phillips curve raises several urgent questions that I believe need to be addressed. Why have average inflation expectations exceeded average inflation for 15 years? Why has average inflation fallen below the target for 15 years? Could average inflation have fallen below average inflation expectations and the inflation target without the large unemployment cost estimated here? Could the large unemployment cost have been avoided with a different monetary policy? What are the policy implications for the future? Do these findings make price-level targeting or the targeting of average inflation over a longer period relatively more attractive, since they would better ensure that average inflation over longer periods equals the target?

Lars E.O. Svensson, The Possible Unemployment Cost of Average Inflation below a Credible Target

According to Lars E. O. Svensson – deputy governor of the Riksbank – the Swedish Riksbank has pursued a policy during the years 1998-2011 that in reality has made inflation on average 0.6 percentage points lower than the Riksbank's own target. The Phillips curve he estimates shows that, as a result of this overly "austere" inflation level, unemployment has been almost 1 percentage point higher than it would have been had the Riksbank stuck to its 2% inflation target.
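Svensson's arithmetic is simple enough to sketch in a few lines (the slope value below is purely illustrative, chosen so as to reproduce his headline numbers; it is not taken from the paper):

```python
def unemployment_cost(target, avg_inflation, phillips_slope):
    """Average unemployment gap implied by a downward-sloping long-run
    Phillips curve: gap = slope * (target - average inflation).
    Valid only if inflation expectations stay anchored at the target."""
    return phillips_slope * (target - avg_inflation)

# 2% target, 1.4% realized average inflation, and an assumed slope of 4/3
gap = unemployment_cost(target=2.0, avg_inflation=1.4, phillips_slope=4 / 3)
print(f"average unemployment cost: {gap:.1f} percentage points")  # 0.8
```

Note how everything hinges on the anchoring assumption: with a vertical long-run Phillips curve the slope term disappears and the cost is zero by construction.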

What Svensson is saying, in effect, is that the Swedish Fed has made people unemployed for no good reason at all. As a consequence of a faulty monetary policy, unemployment is considerably higher than it would have been if the Swedish Fed had done its job adequately.

From a more methodological point of view it is of course also interesting to consider the use made of the rational expectations hypothesis (REH) in these model-based calculations (and in models of the same ilk that abound in "modern" macroeconomics). When the data tell us that "average inflation expectations exceeded average inflation for 15 years" – isn't it high time to put the REH where it belongs: in the dustbin of history?

To me Svensson’s paper basically confirms what I wrote a couple of months ago:

Models based on the REH impute beliefs to the agents that are not based on any real informational considerations, but are simply stipulated to make the models mathematically-statistically tractable.

Of course you can make assumptions based on tractability, but then you do also have to take into account the necessary trade-off in terms of the ability to make relevant and valid statements on the intended target system.

Mathematical tractability cannot be the ultimate arbiter in science when it comes to modeling real world target systems. Of course, one could perhaps accept REH if it had produced lots of verified predictions and good explanations. But it has done nothing of the kind. Therefore the burden of proof is on those who still want to use models built on ridiculously unreal assumptions – models devoid of all empirical interest.

In reality, the REH is a rather harmful modeling assumption, since it contributes to perpetuating the ongoing transformation of economics into a kind of science-fiction economics. If economics is to guide us, help us make forecasts, and explain or better understand real-world phenomena, REH-based modeling is in fact next to worthless.

Data snooping

22 March, 2013 at 10:10 | Posted in Statistics & Econometrics | Comments Off on Data snooping

Naturally, you cannot in the least prove that a die is unbalanced by sitting and throwing it until you have managed to get two 6s in a row. In principle, though, this is a common error. For statistical tests on a data set to be valid, the data must be the result of a random sample, and the test procedure must not in any way be influenced by what has already been observed in the data. If, for example, you are playing a board game with a die – say Fia med knuff (Swedish Ludo) – and after eight throws you note that you apparently got 3s in four of them, this cannot be used as evidence that the die is unbalanced. Calculations show that the probability of getting at least four 3s in a sequence of eight random throws with a balanced die is indeed lower than 5%, but an event that has already taken place is not a random event. What happened has already happened, with probability 100%. This kind of fallacious reasoning, intentional or not, is called data snooping …

Nor can you turn the analysis around and claim that the experiment would have proved the die to be well balanced if the number of 6s had been lower than two … Suppose, for instance, that the die is in fact so unbalanced that on average it shows a 6 every second throw. The probability of getting two 6s in two throws is then 1/2 * 1/2 = 1/4, i.e. 25%. The probability of getting fewer than two 6s by chance is thus a full 75% – even though the die is in fact seriously unbalanced!
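The two probabilities discussed above are easy to verify with an exact binomial tail sum (a straightforward check of my own, not anything taken from the quoted book):

```python
from math import comb

def tail_prob(n, k, p):
    """P(at least k successes in n independent trials, success prob p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# At least four 3s in eight throws of a balanced die:
p_four_threes = tail_prob(8, 4, 1 / 6)
print(f"{p_four_threes:.4f}")  # about 0.031 -- below 5%, yet proves nothing after the fact

# Fewer than two 6s in two throws of a die that shows a 6 half the time:
p_few_sixes = 1 - tail_prob(2, 2, 1 / 2)
print(f"{p_few_sixes:.2f}")  # 0.75 -- despite the die being seriously unbalanced
```

The numbers confirm the point: a "significant" pattern spotted after the fact carries no evidential weight, and an unremarkable outcome is perfectly compatible with a badly unbalanced die.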

Randomization is no panacea

21 March, 2013 at 16:49 | Posted in Statistics & Econometrics, Theory of Science & Methodology | Comments Off on Randomization is no panacea

When it comes to questions of causality, randomized controlled trials (RCTs) are nowadays considered some kind of “gold standard” in social sciences and policies. Everything has to be “evidence based,” and the evidence preferably has to come from randomized experiments.

But randomization is basically – just as e.g. econometrics – a deductive method. Given warranted assumptions (manipulability, transitivity, separability, additivity, linearity, etc.) this method delivers deductive inferences. The problem, of course, is that we will never completely know when the assumptions are warranted, and so, a fortiori, whether our causal conclusions are justified. Although randomization may contribute to controlling for "confounding," it does not guarantee it, since genuine randomness presupposes infinite experimentation, and we know all real experimentation is finite. Even if randomization may help to establish average causal effects, it says nothing of individual effects unless homogeneity is added to the list of assumptions. Real target systems are seldom epistemically isomorphic to our axiomatic-deductive models/systems, and even if they were, we would still have to argue for the external validity of the conclusions reached from within these epistemically convenient models/systems. Causal evidence generated by randomization procedures may be valid in "closed" models, but what we are usually interested in is causal evidence in the real target system we happen to live in.
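The point about average versus individual effects can be made concrete with an invented example: a treatment that helps half the population and harms the other half shows a clean positive average effect in an ideal RCT while telling us nothing about any particular individual.

```python
# Hypothetical individual treatment effects (invented numbers, purely
# illustrative): the treatment raises the outcome by 2 for half the
# population and lowers it by 1 for the other half.
effects = [2.0] * 5000 + [-1.0] * 5000

# What an ideal RCT estimates: the average treatment effect (ATE)
ate = sum(effects) / len(effects)
share_harmed = sum(e < 0 for e in effects) / len(effects)

print(f"average treatment effect: {ate:+.2f}")             # +0.50 -- "it works"
print(f"share of individuals harmed: {share_harmed:.0%}")  # 50%
```

Only under an added homogeneity assumption, that every individual's effect equals the average, does the ATE license conclusions about individuals, and that assumption is exactly what randomization cannot deliver.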

So RCTs are not at all the “gold standard” they have lately often been portrayed as. RCTs usually do not provide evidence that their results are exportable to other target systems. RCTs cannot be taken for granted to give generalizable results. That something works somewhere is no warranty for it to work for us or even that it works generally.

Even though I can present evidence for being able to sharpen my pencils with Rube Goldberg's ingenious contraption – mainly because flying kites in my windy hometown (Lund, Sweden) is no match for it – that evidence does not come with a warranted export license. Most people would probably find ordinary pencil sharpeners more efficacious.

When bad forecasts are better than good ones

21 March, 2013 at 10:59 | Posted in Economics | 1 Comment

What is worse, a bad forecast or no forecast? The answer is simple. The moment you are exposed to a forecast, you are in a worse position than you were before …

Expert forecasts in all likelihood do more harm than good. That is why it pays to flip quickly past newspaper articles with headlines like "How the stock market will perform this year" …

Imagine that your job is to manage your company's currency exchanges … You must decide either to hedge the exchange rate right away, or to wait until the amount arrives and exchange at whatever rate then applies … Luckily, you have the analysts' dollar forecasts to help you. They do not make it one bit easier to predict the dollar rate. But they can help you all the same.

If you get it right, the analyses do not matter much. But if the dollar falls like a stone and you have chosen not to hedge the exchange rate, management will want to know why you have squandered the company's money … You can spin a long tale about historical currency trends, economic growth, the balance of payments and interest-rate differentials. In the end, everyone will agree that you acted correctly given the information you had in advance.

The analyses let you off the hook. Especially the ones that were most wrong … The forecasts have no economic value, either for the company or for society. Their value is that they save your skin.

Fund manager Alf Riple – with a background as chief analyst at Nordea and adviser at the Norwegian Ministry of Finance – has written a superb book. Read it!

The solitary human being

21 March, 2013 at 09:54 | Posted in Varia | Comments Off on Den ensamma människan

I believe in the solitary human being,
in her who walks alone,
who does not run dog-like to a scent,
who does not flee wolf-like from the scent of humans:
at once human being and anti-human-being.

How to reach communion?
Shun the upper and outer road:
what is herd in others is herd in you too.
Take the lower and inner road:
what is bottom in you is bottom in others too.

Hard to get used to oneself.
Hard to get unused to oneself.

Whoever does so shall nevertheless never be forsaken.
Whoever does so shall nevertheless always remain in solidarity.
The impractical is the only practical
in the long run.

Gunnar Ekelöf

IS-LM is bad economics no matter what Krugman says

20 March, 2013 at 14:26 | Posted in Economics | 13 Comments

Paul Krugman has a post up on his blog once again defending “the whole enterprise of Keynes/Hicks macroeconomic theory” and especially his own somewhat idiosyncratic version of IS-LM.

The main problem is simply that there is no such thing as a Keynes-Hicks macroeconomic theory!

So, let us get some things straight.

There is nothing in the post-General Theory writings of Keynes to suggest that he considered Hicks’s IS-LM anywhere near a faithful rendering of his thought. In Keynes’s canonical restatement of the essence of his theory – the 1937 QJE article – there is nothing to suggest that he would have regarded a Keynes-Hicks IS-LM theory as anything but pure nonsense. So of course there can be no “vindication for the whole enterprise of Keynes/Hicks macroeconomic theory” – simply because “Keynes/Hicks” never existed.

And it gets even worse!

John Hicks, the man who invented IS-LM in his 1937 Econometrica review of Keynes’s General Theory – “Mr. Keynes and the ‘Classics’: A Suggested Interpretation” – returned to it in a 1980 article in the Journal of Post Keynesian Economics – “IS-LM: An Explanation”. Self-critically he wrote:

I accordingly conclude that the only way in which IS-LM analysis usefully survives — as anything more than a classroom gadget, to be superseded, later on, by something better – is in application to a particular kind of causal analysis, where the use of equilibrium methods, even a drastic use of equilibrium methods, is not inappropriate. I have deliberately interpreted the equilibrium concept, to be used in such analysis, in a very stringent manner (some would say a pedantic manner) not because I want to tell the applied economist, who uses such methods, that he is in fact committing himself to anything which must appear to him to be so ridiculous, but because I want to ask him to try to assure himself that the divergences between reality and the theoretical model, which he is using to explain it, are no more than divergences which he is entitled to overlook. I am quite prepared to believe that there are cases where he is entitled to overlook them. But the issue is one which needs to be faced in each case.

When one turns to questions of policy, looking toward the future instead of the past, the use of equilibrium methods is still more suspect. For one cannot prescribe policy without considering at least the possibility that policy may be changed. There can be no change of policy if everything is to go on as expected – if the economy is to remain in what (however approximately) may be regarded as its existing equilibrium. It may be hoped that, after the change in policy, the economy will somehow, at some time in the future, settle into what may be regarded, in the same sense, as a new equilibrium; but there must necessarily be a stage before that equilibrium is reached …

I have paid no attention, in this article, to another weakness of IS-LM analysis, of which I am fully aware; for it is a weakness which it shares with General Theory itself. It is well known that in later developments of Keynesian theory, the long-term rate of interest (which does figure, excessively, in Keynes’ own presentation and is presumably represented by the r of the diagram) has been taken down a peg from the position it appeared to occupy in Keynes. We now know that it is not enough to think of the rate of interest as the single link between the financial and industrial sectors of the economy; for that really implies that a borrower can borrow as much as he likes at the rate of interest charged, no attention being paid to the security offered. As soon as one attends to questions of security, and to the financial intermediation that arises out of them, it becomes apparent that the dichotomy between the two curves of the IS-LM diagram must not be pressed too hard.

 
The editor of JPKE, Paul Davidson, gives the background to Hicks’s article:

I originally published an article about Keynes’s finance motive — which in 1937 Keynes added to his other liquidity-preference motives (transactions, precautionary, speculative) — showing that adding this finance motive required Hicks’s IS and LM curves to be interdependent — and thus that when the IS curve shifted, so would the LM curve.
Hicks and I then discussed this when we met several times.
When I first started to think about the ergodic vs. nonergodic dichotomy, I sent Hicks some preliminary drafts of articles I would be writing about nonergodic processes. John and I then met several times to discuss the matter further, and I finally convinced him to write the article — which I published in the Journal of Post Keynesian Economics — in which he renounces the IS-LM apparatus. Hicks then wrote me a letter in which he said he thought the word nonergodic was wonderful and that he wanted to label his approach to macroeconomics as nonergodic!

So – back in 1937, John Hicks said that he was building a model of John Maynard Keynes’s General Theory. In 1980 he openly admitted that he wasn’t.

What Hicks acknowledged in 1980 is basically that his original review totally ignored the very core of Keynes’s theory – uncertainty. In doing so he sent the train of macroeconomics down the wrong track for decades. It’s about time that neoclassical economists – Krugman, Mankiw, or whoever – set the record straight and stopped promoting something that its own creator admitted was a total failure. Why not study the real thing – the General Theory – in full, without looking the other way when it comes to non-ergodicity and uncertainty?

Paul Krugman persists in talking about a Keynes-Hicks IS-LM model that never really existed. It’s deeply disappointing. You would expect more from a Nobel Prize winner.

Swedish schools – among the most segregated in the world

20 March, 2013 at 10:29 | Posted in Education & School | Comments Off on Swedish schools – among the most segregated in the world

International research points to the dangers of unrestricted school choice in the Swedish school system. Since the independent-school reform of the early 1990s, Sweden’s tax-funded school system has developed into the most marketized and competition-driven in the world. According to Henry Levin, professor of economics of education at Teachers College, Columbia University, this carries great dangers, and he warns that the current Swedish school system is driving segregation even further.

Listen to today’s P1 interview with Levin here.

Misunderstanding the p-value – here we go again

19 March, 2013 at 20:59 | Posted in Statistics & Econometrics | 6 Comments

A non-trivial part of teaching statistics consists of teaching students to perform significance testing. A problem I have noticed repeatedly over the years, however, is that no matter how careful you try to be in explaining what the probabilities generated by these statistical tests – p-values – really are, most students still misinterpret them.

Giving a statistics course for the Swedish National Research School in History, I asked the students at the exam to explain how one should correctly interpret p-values. Although the correct definition is p(data|null hypothesis) – the probability of obtaining data at least as extreme as those observed, given that the null hypothesis is true – a majority of the students either misinterpreted the p-value as the likelihood of a sampling error (which of course is wrong, since the very computation of the p-value is based on the assumption that sampling error is what causes the sample statistic to deviate from the null hypothesis) or took it to be the probability of the null hypothesis being true, given the data (which of course is also wrong, since that is p(null hypothesis|data) rather than p(data|null hypothesis)).

This is not to be blamed on the students’ ignorance, but rather on significance testing not being particularly transparent (conditional-probability inference is difficult even for those of us who teach and practice it). A lot of researchers fall prey to the same mistakes. So – given that it is in any case very unlikely that any population parameter is exactly zero, and that, contrary to assumption, most samples in social science and economics are not random and do not have the right distributional shape – why keep pressing students and researchers to do null-hypothesis significance testing, testing that relies on a backward logic they usually don’t understand?
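The difference between p(data|null hypothesis) and p(null hypothesis|data) can be made concrete with a small simulation. This is my own sketch, not part of the original argument, and the numbers in it – the 90% share of true nulls, the effect size, the sample size – are arbitrary assumptions. The point it illustrates: among results that come out “significant” at the 5% level, the share where the null hypothesis is actually true can be far larger than 5%.

```python
# Sketch: simulate many z-tests where 90% of nulls are true (an assumed
# share), and check how often a "significant" result comes from a true null.
import random
import statistics
from math import erf, sqrt

random.seed(1)

def z_test_p(sample, mu0=0.0, sigma=1.0):
    """Two-sided p-value for H0: mean = mu0, with known sigma (z-test)."""
    n = len(sample)
    z = (statistics.fmean(sample) - mu0) / (sigma / n ** 0.5)
    # Two-sided tail probability of the standard normal, via the error function.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

n_experiments = 20_000
true_null_share = 0.9   # assumption: 90% of studied effects are truly zero
effect = 0.3            # assumption: size of the real effects
n = 30                  # assumption: observations per experiment

significant, false_alarms = 0, 0
for _ in range(n_experiments):
    null_true = random.random() < true_null_share
    mu = 0.0 if null_true else effect
    sample = [random.gauss(mu, 1.0) for _ in range(n)]
    if z_test_p(sample) < 0.05:
        significant += 1
        false_alarms += null_true  # a significant result despite a true null

# This share is p(H0|significant) - under these assumptions it lands far
# above 0.05, even though every single test used the 5% threshold.
print(f"Share of significant results where H0 was in fact true: "
      f"{false_alarms / significant:.2f}")
```

Changing the assumed base rate of true nulls changes the answer drastically, which is exactly why the p-value alone cannot be read as the probability that the null is true.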

That media often misunderstand what p-values and significance testing are all about is well-known. Andrew Gelman gives a recent telling example:

The New York Times has a feature in its Tuesday science section, Take a Number … Today’s column, by Nicholas Bakalar, is in error. The column begins:

“When medical researchers report their findings, they need to know whether their result is a real effect of what they are testing, or just a random occurrence. To figure this out, they most commonly use the p-value.”

This is wrong on two counts. First, whatever researchers might feel, this is something they’ll never know. Second, results are a combination of real effects and chance, it’s not either/or.

Perhaps the above is a forgivable simplification, but I don’t think so; I think it’s a simplification that destroys the reason for writing the article in the first place. But in any case I think there’s no excuse for this, later on:

“By convention, a p-value higher than 0.05 usually indicates that the results of the study, however good or bad, were probably due only to chance.”

This is the old, old error of confusing p(A|B) with p(B|A). I’m too rushed right now to explain this one, but it’s in just about every introductory statistics textbook ever written. For more on the topic, I recommend my recent paper, P Values and Statistical Practice, which begins:

“The casual view of the P value as posterior probability of the truth of the null hypothesis is false and not even close to valid under any reasonable model, yet this misunderstanding persists even in high-stakes settings … The formal view of the P value as a probability conditional on the null is mathematically correct but typically irrelevant to research goals (hence, the popularity of alternative—if wrong—interpretations) …”

I can’t get too annoyed at science writer Bakalar for garbling the point—it confuses lots and lots of people—but, still, I hate to see this error in the newspaper.

On the plus side, if a newspaper column runs 20 times, I guess it’s ok for it to be wrong once—we still have 95% confidence in it, right?

Statistical significance doesn’t tell us that something is important or true. And since there already are far better and more relevant tests available (see e.g. here and here), it is high time to give up on this statistical fetish.

The limits to probabilistic reasoning

19 March, 2013 at 17:40 | Posted in Statistics & Econometrics, Theory of Science & Methodology | Comments Off on The limits to probabilistic reasoning

Probabilistic reasoning in science – especially Bayesianism – reduces questions of rationality to questions of internal consistency (coherence) of beliefs, but – even granted this questionable reductionism – it’s not self-evident that rational agents really have to be probabilistically consistent. There is no strong warrant for believing so. Rather, there is strong evidence that we run into huge problems if we let probabilistic reasoning become the dominant method for doing research in the social sciences on problems that involve risk and uncertainty.

In many of the situations that are relevant to economics, one could argue that there simply is not enough adequate and relevant information to ground beliefs of a probabilistic kind, and that in those situations it is not really possible, in any relevant way, to represent an individual’s beliefs in a single probability measure.

Say you have come to learn (based on your own experience and tons of data) that the probability of your becoming unemployed in Sweden is 10%. Having moved to another country (where you have no experience and no data), you have no information on unemployment and a fortiori nothing on which to base a probability estimate. A Bayesian would, however, argue that if you are rational you have to assign probabilities to the mutually exclusive alternative outcomes, and that these have to add up to 1. That is, in this case – with nothing but symmetry to go on – a rational individual would have to assign probability 50% to becoming unemployed and 50% to becoming employed.

That feels intuitively wrong, though, and I guess most people would agree. Bayesianism cannot distinguish symmetry-based probabilities grounded in information from symmetry-based probabilities grounded in an absence of information. In these kinds of situations most of us would rather say that it is simply irrational to be a Bayesian, and better to admit that we “simply do not know” or that we feel ambiguous and undecided. Arbitrary and ungrounded probability claims are more irrational than being undecided in the face of genuine uncertainty, so if there is not sufficient information to ground a probability distribution it is better to acknowledge this simpliciter, rather than pretending to possess a certitude that we simply do not possess.

I think this critique of Bayesianism is in accordance with the views Keynes put forward in A Treatise on Probability (1921) and the General Theory (1936). According to Keynes we live in a world permeated by unmeasurable uncertainty – not quantifiable stochastic risk – which often forces us to make decisions based on anything but “rational expectations.” Sometimes we “simply do not know.” Keynes would not have accepted the view of Bayesian economists, according to whom expectations “tend to be distributed, for the same information set, about the prediction of the theory.” Keynes, rather, thought that we base our expectations on the confidence or “weight” we put on different events and alternatives. To Keynes, expectations are a question of weighing probabilities by “degrees of belief” – beliefs that have precious little to do with the kind of stochastic probabilistic calculations made by the rational agents modeled by probabilistically reasoning Bayesian economists.
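Keynes’s notion of evidential “weight” can be given a toy numerical illustration. The example and its numbers are my own assumptions, not Keynes’s or the post’s: two agents may report the same probability of an event – say a 10% unemployment risk – while the evidence behind that single number differs enormously, which is exactly what the bare probability figure fails to express.

```python
# Sketch: two Beta-distributed beliefs about an unemployment risk.
# Beta(1, 9) (almost no evidence) and Beta(100, 900) (lots of data, both
# hypothetical) imply the same 10% probability, but very different "weight":
# the weakly grounded belief is far more spread out.

def beta_mean(a, b):
    """Mean of a Beta(a, b) distribution."""
    return a / (a + b)

def beta_sd(a, b):
    """Standard deviation of a Beta(a, b) distribution."""
    return (a * b / ((a + b) ** 2 * (a + b + 1))) ** 0.5

weak = (1, 9)        # belief grounded in almost no evidence
strong = (100, 900)  # belief grounded in a large body of data

print(beta_mean(*weak), beta_mean(*strong))  # both exactly 0.1
print(beta_sd(*weak), beta_sd(*strong))      # very different spreads
```

If all one reports is the point probability 0.1, the two situations are indistinguishable – which is one way of stating Keynes’s objection to collapsing uncertainty into a single number.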

In an interesting article on his blog, John Kay shows that these strictures on probabilistic-reductionist reasoning do not only apply to everyday life and science, but also to the law:

English law recognises two principal standards of proof. The criminal test is that a charge must be “beyond reasonable doubt”, while civil cases are decided on “the balance of probabilities”.

The meaning of these terms would seem obvious to anyone trained in basic statistics. Scientists think in terms of confidence intervals – they are inclined to accept a hypothesis if the probability that it is true exceeds 95 per cent. “Beyond reasonable doubt” appears to be a claim that there is a high probability that the hypothesis – the defendant’s guilt – is true. Perhaps criminal conviction requires a higher standard than the scientific norm – 99 per cent or even 99.9 per cent confidence is required to throw you in jail. “On the balance of probabilities” must surely mean that the probability the claim is well founded exceeds 50 per cent.

And yet a brief conversation with experienced lawyers establishes that they do not interpret the terms in these ways. One famous illustration supposes you are knocked down by a bus, which you did not see (that is why it knocked you down). Say Company A operates more than half the buses in the town. Absent other evidence, the probability that your injuries were caused by a bus belonging to Company A is more than one half. But no court would determine that Company A was liable on that basis.

A court approaches the issue in a different way. You must tell a story about yourself and the bus. Legal reasoning uses a narrative rather than a probabilistic approach, and when the courts are faced with probabilistic reasoning the result is often a damaging muddle …

When I have raised these issues with people with scientific training, they tend to reply that lawyers are mostly innumerate and with better education would learn to think in the same way as statisticians. Probabilistic reasoning has become the dominant method of structured thinking about problems involving risk and uncertainty – to such an extent that people who do not think this way are derided as incompetent and irrational …

It is possible – common, even – to believe something is true without being confident in that belief. Or to be sure that, say, a housing bubble will burst without being able to attach a high probability to any specific event, such as “house prices will fall 20 per cent in the next year”. A court is concerned to establish the degree of confidence in a narrative, not to measure a probability in a model.

Such narrative reasoning is the most effective means humans have developed of handling complex and ill-defined problems … Probabilistic thinking … often fails when we try to apply it to idiosyncratic events and open-ended problems. We cope with these situations by telling stories, and we base decisions on their persuasiveness. Not because we are stupid, but because experience has told us it is the best way to cope. That is why novels sell better than statistics texts.

Guess I must be doing something right

19 March, 2013 at 11:31 | Posted in Varia | Comments Off on Guess I must be doing something right

Yours truly launched this blog two years ago. The number of visitors has increased steadily. From only a couple of hundred visits per month at the start, the blog now draws almost 60,000 visits per month. A blog is certainly not a beauty contest, but given its rather “wonkish” character – with posts mostly on economic theory, statistics, econometrics, theory of science and methodology – it’s rather gobsmacking that so many are interested and take the time to read and comment on it. I am – of course – truly awed, honoured and delighted!

