The big problem with randomization (wonkish)

22 Jan, 2015 at 09:54 | Posted in Statistics & Econometrics | Comments Off on The big problem with randomization (wonkish)

But when the randomization is purposeful, a whole new set of issues arises — experimental contamination — which is much more serious with human subjects in a social system than with chemicals mixed in beakers … Anyone who designs an experiment in economics would do well to anticipate the inevitable barrage of questions regarding the valid transference of things learned in the lab (one value of z) into the real world (a different value of z) …

Absent observation of the interactive compounding effects z, what is estimated is some kind of average treatment effect which is called by Imbens and Angrist (1994) a “Local Average Treatment Effect,” which is a little like the lawyer who explained that when he was a young man he lost many cases he should have won but as he grew older he won many that he should have lost, so that on the average justice was done. In other words, if you act as if the treatment effect is a random variable by substituting βt for β0 + β′zt, the notation inappropriately relieves you of the heavy burden of considering what are the interactive confounders and finding some way to measure them …

If little thought has gone into identifying these possible confounders, it seems probable that little thought will be given to the limited applicability of the results in other settings. This is the error made by the bond rating agencies in the recent financial crash — they transferred findings from one historical experience to a domain in which they no longer applied because, I will suggest, social confounders were not included.

Ed Leamer
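A minimal simulation sketch of Leamer’s point (my own illustration, not from the quoted text; all names and numbers are hypothetical): when the treatment effect is really β0 + β′z and z goes unobserved, a randomized experiment delivers an average effect over that sample’s z-distribution, which can be a poor guide to a target population with a different z.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
beta0, beta1 = 1.0, 2.0            # hypothetical: the true effect is beta0 + beta1*z


def estimated_effect(z_mean):
    """OLS effect of a randomized treatment d on y when z is left unobserved."""
    z = rng.normal(z_mean, 1.0, n)              # interactive confounder
    d = rng.integers(0, 2, n).astype(float)     # randomized 0/1 treatment
    y = (beta0 + beta1 * z) * d + rng.normal(0.0, 1.0, n)
    X = np.column_stack([np.ones(n), d])        # regress y on a constant and d
    return np.linalg.lstsq(X, y, rcond=None)[0][1]


print(estimated_effect(z_mean=0.0))   # ~1.0 in the "experimental" population
print(estimated_effect(z_mean=1.0))   # ~3.0 with the same treatment, different z
```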

The horse-shit theorem — Swedish style

22 Jan, 2015 at 09:06 | Posted in Economics, Politics & Society | Comments Off on The horse-shit theorem — Swedish style

So how many Swedish billionaires does it take to match the wealth of the poorer half of the Swedish population? Well, according to the calculation above, roughly two; Ingvar Kamprad and Stefan Persson together have comfortably more than 700 billion kronor, and thus more than the 615 billion that the poorer half of the Swedish population owns in total …

What conclusions should we draw from this? That Sweden is extremely unequal, perhaps? Yes … but one could of course just as well conclude that Swedish billionaires have been exceptionally successful by international standards, and that this has been to the benefit of everyone in Sweden.

How one should view the distribution of wealth depends to a far greater extent on factors such as how fortunes are created, in what ways the wealthy contribute to society at large, and how society is otherwise organized. It is far from self-evident that an enormous concentration of wealth is bad for society, but neither is it obvious that it is nothing to worry about … In a Swedish context there are reasons why individuals do not need personal wealth to the same degree as in other countries, since many expenses, predictable as well as unpredictable, are to a large extent financed collectively (schooling, health care, unemployment, etc.).

Jesper Roine

Well, what is one to say about this belt-and-braces piece of economistic apologetics for inequality? Sometimes a picture says more than a thousand words …

[Image: Reaganomics]

Econometrics made easy — Gretl

21 Jan, 2015 at 21:53 | Posted in Statistics & Econometrics | Comments Off on Econometrics made easy — Gretl

 

Thanks to Allin Cottrell and Riccardo Lucchetti, we today have access to a high-quality tool for doing and teaching econometrics — Gretl. And, best of all, it is totally free!

Gretl is up to the tasks you may have, so why spend money on expensive commercial programs?

The latest snapshot version of Gretl – 1.9.92 – can be downloaded here.

So just go ahead. With a program like Gretl econometrics has never been easier to master!

[And yes, I do know there’s another fabulously nice and free program — R. But R hasn’t got as nifty a GUI as Gretl — and at least for students, it’s more difficult to learn to handle and program. I do think it’s preferable when students are going to learn some basic econometrics to use Gretl so that they can concentrate more on “content” rather than “technique.”]
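For the sake of comparison, here is the kind of one-regression exercise the post has in mind, sketched in Python with statsmodels rather than in Gretl itself (simulated data; purely illustrative):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = rng.normal(size=200)                          # simulated regressor
y = 2.0 + 0.5 * x + rng.normal(scale=0.3, size=200)

X = sm.add_constant(x)                            # add an intercept column
print(sm.OLS(y, X).fit().summary())               # estimate and report OLS results
```

In Gretl the same estimation is a couple of clicks in the GUI, which is precisely the post’s point about letting students concentrate on “content” rather than “technique.”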

Overconfident economists

20 Jan, 2015 at 13:14 | Posted in Statistics & Econometrics | 1 Comment

We economists trudge relentlessly toward Asymptopia, where data are unlimited and estimates are consistent, where the laws of large numbers apply perfectly and where the full intricacies of the economy are completely revealed … Worst of all, when we feel pumped up with our progress, a tectonic shift can occur, like the Panic of 2008, making it seem as though our long journey has left us disappointingly close to the State of Complete Ignorance whence we began …

We may listen, but we don’t hear, when the Priests warn that the new direction is only for those with Faith, those with complete belief in the Assumptions of the Path. It often takes years down the Path, but sooner or later, someone articulates the concerns that gnaw away in each of us and asks if the Assumptions are valid …

It would be much healthier for all of us if we could accept our fate, recognize that perfect knowledge will be forever beyond our reach and find happiness with what we have …

Can we economists agree that it is extremely hard work to squeeze truths from our data sets and what we genuinely understand will remain uncomfortably limited? We need words in our methodological vocabulary to express the limits … Those who think otherwise should be required to wear a scarlet-letter O around their necks, for “overconfidence.”

Ed Leamer

On abstraction and idealization in economics

20 Jan, 2015 at 10:18 | Posted in Theory of Science & Methodology | 4 Comments

When applying deductivist thinking to economics, neoclassical economists usually set up “as if” models based on a set of tight axiomatic assumptions from which consistent and precise inferences are made. The beauty of this procedure is of course that if the axiomatic premises are true, the conclusions necessarily follow. The snag is that if the models are to be relevant, we also have to argue that their precision and rigour still holds when they are applied to real-world situations. They often don’t. When addressing real economies, the idealizations and abstractions necessary for the deductivist machinery to work simply don’t hold.

If the real world is fuzzy, vague and indeterminate, then why should our models build upon a desire to describe it as precise and predictable? The logic of idealization is a marvellous tool in mathematics and axiomatic-deductivist systems, but a poor guide for action in real-world systems, in which concepts and entities are without clear boundaries and continually interact and overlap.

Or as Hans Albert has it on the neoclassical style of thought:

In everyday situations, if, in answer to an inquiry about the weather forecast, one is told that the weather will remain the same as long as it does not change, then one does not normally go away with the impression of having been particularly well informed, although it cannot be denied that the answer refers to an interesting aspect of reality, and, beyond that, it is undoubtedly true …

We are not normally interested merely in the truth of a statement, nor merely in its relation to reality; we are fundamentally interested in what it says, that is, in the information that it contains …

Information can only be obtained by limiting logical possibilities; and this in principle entails the risk that the respective statement may be exposed as false. It is even possible to say that the risk of failure increases with the informational content, so that precisely those statements that are in some respects most interesting, the nomological statements of the theoretical hard sciences, are most subject to this risk. The certainty of statements is best obtained at the cost of informational content, for only an absolutely empty and thus uninformative statement can achieve the maximal logical probability …

The neoclassical style of thought – with its emphasis on thought experiments, reflection on the basis of illustrative examples and logically possible extreme cases, its use of model construction as the basis of plausible assumptions, as well as its tendency to decrease the level of abstraction, and similar procedures – appears to have had such a strong influence on economic methodology that even theoreticians who strongly value experience can only free themselves from this methodology with difficulty …

Science progresses through the gradual elimination of errors from a large offering of rivalling ideas, the truth of which no one can know from the outset. The question of which of the many theoretical schemes will finally prove to be especially productive and will be maintained after empirical investigation cannot be decided a priori. Yet to be useful at all, it is necessary that they are initially formulated so as to be subject to the risk of being revealed as errors. Thus one cannot attempt to preserve them from failure at every price. A theory is scientifically relevant first of all because of its possible explanatory power, its performance, which is coupled with its informational content …

The connections sketched out above are part of the general logic of the sciences and can thus be applied to the social sciences. Above all, with their help, it appears to be possible to illuminate a methodological peculiarity of neoclassical thought in economics, which probably stands in a certain relation to the isolation from sociological and social-psychological knowledge that has been cultivated in this discipline for some time: the model Platonism of pure economics, which comes to expression in attempts to immunize economic statements and sets of statements (models) from experience through the application of conventionalist strategies …

Clearly, it is possible to interpret the ‘presuppositions’ of a theoretical system … not as hypotheses, but simply as limitations to the area of application of the system in question. Since a relationship to reality is usually ensured by the language used in economic statements, in this case the impression is generated that a content-laden statement about reality is being made, although the system is fully immunized and thus without content. In my view that is often a source of self-deception in pure economic thought …

A further possibility for immunizing theories consists in simply leaving open the area of application of the constructed model so that it is impossible to refute it with counter examples. This of course is usually done without a complete knowledge of the fatal consequences of such methodological strategies for the usefulness of the theoretical conception in question, but with the view that this is a characteristic of especially highly developed economic procedures: the thinking in models, which, however, among those theoreticians who cultivate neoclassical thought, in essence amounts to a new form of Platonism.

SNS Konjunkturråd 2015 — a deplorable read

19 Jan, 2015 at 22:41 | Posted in Economics, Politics & Society | 6 Comments

On DN Debatt the other day, SNS Konjunkturråd 2015 wrote that “large debts are in many ways a sign of a well-functioning financial system” and that we need not worry about Swedes’ mortgage debt because “Swedish households have full personal liability for their debts.”

Good grief! That we should have to read this kind of drivel in the year 2015. One can only shake one’s head!

One does not feel entirely reassured by the SNS Konjunkturråd’s recommendations for creating a better financial system. Most of the recommendations rest on the philosophy that the market – with a little help from better statistics and consumer protection – can improve itself. The council’s philosophy closely resembles the view that shaped economists’ thinking in the 1980s about the then emerging, newly deregulated financial market. As is well known, financial deregulation was the main cause of the 1991-93 crisis in Sweden, the worst since the 1930s.
In an editorial in Ekonomisk Debatt in the autumn of 1985 (no. 4) one can read how economists reasoned about the deregulations of the 1980s. To the question of whether banking is an industry headed for crisis, the editorial answers a flat no: “The banks are no coming crisis industry if we consider the question from the standpoint of the banking system’s macroeconomic stability.” The editorial also explicitly rejects regulation as protection for consumers, since it conflicts with the structural changes initiated by the market: “The deregulation that has begun must therefore continue and be carried through with force and speed.” It further states that “the Bank Inspectorate’s ability to monitor developments is not diminished by deregulation either. On the contrary …”. The editorial concludes: “Knowledge is more important than regulation for influencing the financial markets.” The same tune then as now?

Stig Tegle

Household indebtedness is rooted above all in the rise in asset values driven by increased lending to households and the housing bubble this has produced. In the long run this trend obviously cannot be sustained. Asset prices ultimately reflect expectations about future returns on investment.

[Chart: Swedish household debt, 2014. Source: SCB and own calculations]

With the debt ratio households have now taken on, we risk a debt-deflation crisis that will hit Swedish households extremely hard.

In 1638 the price of a tulip bulb in the Netherlands could be so high that it corresponded to two years’ wages. And households were convinced that prices would just keep rising and rising. Like every other bubble, however, this one also burst, leaving masses of destitute and ruined people in its wake. Similar things played out in, for example, the Mississippi bubble of 1720 and the IT bubble a decade or so ago. How hard can it be to learn from history?

Real prices of tenant-owned apartments (bostadsrätter) have risen by around 900 % over the past 30 years. If that does not constitute a threat to “financial stability,” I do not know what would!

I will argue here that the financial catastrophe that we have just experienced powerfully illustrates a reason why extrapolating from natural experiments will inevitably be hazardous. The misinterpretation of historical data that led rating agencies, investors, and even myself to guess that home prices would decline very little and default rates would be tolerable even in a severe recession should serve as a caution for all applied econometrics.

Ed Leamer

Added 20/1: In a post on Ekonomistas, Roine Vestman notes, apropos the report’s conclusion that the probability of a negative macro-stability scenario is small, that here “there is room for other interpretations.” Yep!

Distributive effects of financial deregulation

19 Jan, 2015 at 19:57 | Posted in Economics | Comments Off on Distributive effects of financial deregulation

One of the greatest sources of wealth for the top 0.1% class of super-rich is Wall Street, and Wall Street is also the source of the financial crisis that inflicted much economic pain on the bottom 90% in recent years. This has understandably led to a widespread feeling in American society that the rules governing Wall Street are stacked in favor of a small elite, at the expense of Main Street.

However, much of the recent academic work in economics ignores the distributive effects of financial regulation and focuses solely on efficiency. By contrast, our recent paper on “The Redistributive Effects of Financial Deregulation” in the Journal of Monetary Economics puts the focus squarely on distributional considerations.

Our analysis is based on the observation that losses in the financial sector can impose massive costs on the real economy … During the 2008 financial crisis, banks took large losses which raised interest rate spreads and lowered access to credit for the real economy, which in turn reduced the earnings of workers. When financial institutions decide how much risk to take on, they do not take into account these losses. Instead, they take on more risk than is good for the rest of the economy …

We argue in the paper that the most insidious consequence of bailouts is not that they lead to an explicit transfer from Main Street to Wall Street but that they could lead to an even larger implicit transfer by encouraging greater risk-taking and thereby exposing the economy to more credit crunches. Moreover, it may be difficult to commit to not providing bailouts once a financial crisis has occurred because the real sector may prefer to provide a bailout rather than suffer a severe credit crunch. By contrast, regulating risk-taking directly does not suffer from this commitment problem …

How can we better protect Main Street from the externalities of Wall Street? The simplest way is to regulate risk-taking by banks, whether by increasing their capital adequacy requirements, separating risky investment activities like proprietary trading from systemically important traditional banking, limiting payouts that endanger the capitalization of the financial sector, or using structural policies including limits on asymmetric compensation schemes to reduce incentives for risk-taking. Unfortunately, the current movement towards rolling back financial regulation may serve the interests of Wall Street well but continues to expose Main Street to the risk of financial meltdowns.

Anton Korinek & Jonathan Kreamer

Causality and correlation — the case of independent schools

19 Jan, 2015 at 13:48 | Posted in Education & School | Comments Off on Causality and correlation — the case of independent schools

When Sweden carried out its independent-school reform in 1992, families were on the whole given greater freedom to choose for themselves where to send their children to school. In line with the school-voucher system that Milton Friedman had advocated as early as the 1950s, the establishment of independent schools (friskolor) was made considerably easier.

As a result of this reform, independent schools have – not least in recent years – markedly increased their share of the school market. Today more than 10 % of the country’s compulsory-school pupils are educated at an independent school, and almost 25 % of upper-secondary students receive their education at one.

Geographically, however, the expansion of independent schools has been very uneven. Today slightly more than a third of the municipalities have no independent schools at the compulsory level, and two thirds have none at the upper-secondary level. And on average, pupils at independent schools have parents with higher levels of education and income than pupils at municipal schools.

Against this background, among other things, researchers, education providers, politicians and others have become interested in trying to investigate what consequences the reform has had.

Such an assessment is of course not entirely easy to make, given how multifaceted and wide-ranging the goals set for Swedish schooling are.

One common goal that has been focused on is pupil performance in the form of attained knowledge levels. When the reform was carried out, one of the frequently advanced arguments was that independent schools would raise pupils’ knowledge levels, both in the independent schools themselves (“the direct effect”) and – via competitive pressure – in the municipal schools (“the indirect effect”). The quantitative measures used for these evaluations have throughout been grades and/or results on national tests.

At first glance it might seem trivial to carry out such investigations. Surely – one might think – it is just a matter of pulling out the data and running the necessary statistical tests and regressions.

It is not quite that simple. In fact it is – as Pontus Bäckström shows so ably – very difficult to obtain unambiguous causal answers to this kind of question:

A week or so ago Mats Edman, editor-in-chief of the SKL magazine Dagens Samhälle, wrote a column in which he concluded that independent schools are much better than municipal schools … Among other things, he shows that pupils at independent schools have, on average, merit ratings 18 points higher than those of pupils who attended municipal schools. He also shows that municipal schools are strongly over-represented among the worst-performing schools, and independent schools among the best-performing ones.

Criticism was not long in coming, however … The analysis underlying Edman’s conclusions is far too simplistic, since he does not control these differences for the schools’ pupil composition …

The primary purpose of this post is to show how much of this “independent-school effect” can be explained by the pupil composition of the independent schools. To produce an analysis with pedagogical and reasonably easy-to-grasp results, I have therefore run a regression analysis that first measures only the “pure” independent-school effect. This is done by using a dichotomous variable for school provider (i.e. a variable that can only take the value 1 or 0, with 1 = independent).

The uncontrolled mean difference between municipal and independent schools was just over 18 points, and that is the difference shown in model 1 …

In model 2 a number of background variables are then added … Here we are interested only in finding out how the “independent-school effect” changes when we control for the schools’ differing pupil composition.

We find this out by dividing the new effect size for school provider (from model 2) by the original one (from model 1); that tells us how large a share of the original effect has been “controlled away” by the pupil-composition variables. In this case, just under 80 % of the original effect has been controlled away.

As the B coefficient for school provider in model 2 shows, an unexplained difference of roughly 4 merit points remains, which could very well be a result of independent schools being “better” in the way Edman imagines. At the same time we should be humble about the fact that there is still a whole host of aspects we have not controlled for even in these analyses, for instance which teachers work at which schools.
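A small simulation sketch of the two-model exercise Bäckström describes (synthetic data; the coefficients are chosen only to roughly mimic the reported 18-point raw gap and 4-point adjusted gap, and the variable names are hypothetical):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 5_000

# Parental education drives both school choice and merit points, so most of
# the raw gap between school types is selection rather than a school effect.
parent_edu = rng.normal(size=n)
p_independent = 1 / (1 + np.exp(-1.5 * parent_edu))
independent = (rng.random(n) < p_independent).astype(float)
merit = 210 + 14 * parent_edu + 4 * independent + rng.normal(scale=25, size=n)

# Model 1: merit points on the school-provider dummy only (the raw gap)
m1 = sm.OLS(merit, sm.add_constant(independent)).fit()

# Model 2: the same regression with the background control added
X2 = sm.add_constant(np.column_stack([independent, parent_edu]))
m2 = sm.OLS(merit, X2).fit()

raw, adjusted = m1.params[1], m2.params[1]
print(f"raw gap: {raw:.1f}  adjusted gap: {adjusted:.1f}  "
      f"share 'controlled away': {1 - adjusted / raw:.0%}")
```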

To show unambiguously that effects exist, and that they are the result of the introduction of independent schools as such – and nothing else – one must identify and then control for the influence of all “disturbing background variables” such as parental education, socioeconomic status, ethnicity, geographical location, religion and so on, so that we can be sure that it is not differences in these variables that are, in a fundamental sense, the real underlying causal explanations of any average differences in effects.

Ideally, if we really wanted to be able to carry out such a causal analysis, we would run an experiment in which we pick out a group of pupils, let them attend independent schools, and after a certain time evaluate the effects on their knowledge levels. We would then turn back the clock and let the same group of pupils attend municipal schools instead, and after the same time evaluate the effects on their knowledge levels. By isolating and manipulating the variables under investigation in this experimental way – so that we could really pin down the unique effect of independent schools, and nothing else – we would get an exact answer to our question.

Since the arrow of time runs in only one direction, everyone realizes that this experiment can never be carried out in reality.

The next-best alternative would instead be to divide pupils randomly into groups: one whose pupils attend independent schools (“treatment”) and one whose pupils attend municipal schools (“control”). Randomization is supposed to make the background variables identically distributed on average across the two groups (so that the pupils in the two groups do not, on average, differ in either observable or unobservable respects), thereby making possible a causal analysis in which any average differences between the groups can be attributed to (“explained by”) whether one attended an independent or a municipal school.

The problem is that one can question whether these so-called randomized controlled trials are evidentially relevant when we export the results from the “experimental situation” to a new target population. With other constellations of background and supporting factors, the average effect in a randomized controlled trial probably does not tell us much, and therefore cannot offer much guidance on whether or not to implement a given policy or programme.

By far the most common research procedure is – as in Bäckström’s analysis – to run a traditional multiple regression analysis, based on ordinary least squares (OLS) or maximum likelihood (ML) estimation on observational data, in which one tries to “hold constant” a number of specified background variables so that, if possible, the regression coefficients can be interpreted in causal terms. Since we know there is a risk of a “selection problem” – pupils attending independent schools often differ from those attending municipal schools on several important background variables – we cannot simply compare knowledge levels across the two school forms and draw firm causal conclusions from that. There is an acute risk that any differences we find, and believe can be explained by school form, in fact depend wholly or partly on differences in the underlying variables (e.g. neighbourhood, ethnicity, parental education, and so on).

If one attempts to summarize the regression analyses that have been carried out, the result – just as in Bäckström’s example – is that the causal effects of independent schools on pupil performance that researchers have thought they could identify are consistently small (and often not even statistically significant at conventional significance levels). On top of this, there is uncertainty about whether all relevant background variables really could be held constant – Bäckström mentions, for instance, differences in teacher competence – and the estimates made are therefore often, in practice, burdened with untested assumptions and a non-negligible uncertainty and bias, which makes it hard to give a reasonably unambiguous assessment of the weight and relevance of the research results. Put simply, one could say that many – perhaps most – of the effect studies of this kind have failed to construct sufficiently comparable groups, and since this, strictly speaking, is absolutely necessary for the statistical analyses actually performed to be interpretable in the way they are, the value of the analyses is hard to establish. This also means – and here one should also weigh in the possibility of better alternative model specifications (especially as regards the “group constructions” in the samples used) – that the “sensitivity analyses” researchers in this field routinely carry out do not provide any firm guidance either on how “robust” the regression estimates really are. Furthermore, there is a considerable risk that latent, underlying, unspecified variables representing unmeasured characteristics (intelligence, attitudes, motivation, etc.) are correlated with the independent variables included in the regression equations, leading to a problem of endogeneity.
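A companion sketch of that last point (again synthetic data and hypothetical variable names): even with the observed background variable “held constant,” an unobserved trait that drives both school choice and results leaves the estimated OLS “school effect” biased away from its true value of zero.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 5_000

motivation = rng.normal(size=n)      # unobserved latent trait
parent_edu = rng.normal(size=n)      # the observed control we can include
independent = (rng.random(n) <
               1 / (1 + np.exp(-(parent_edu + motivation)))).astype(float)

# The true effect of school type is set to zero; results are driven entirely
# by parental education and the unobserved motivation.
merit = 210 + 10 * parent_edu + 10 * motivation + rng.normal(scale=25, size=n)

X = sm.add_constant(np.column_stack([independent, parent_edu]))
fit = sm.OLS(merit, X).fit()
print(f"estimated 'school effect': {fit.params[1]:.1f}  (true effect: 0)")
```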

Research has not in general been able to establish that the introduction of independent schools and increased school competition has led to any major efficiency gains or noticeably higher knowledge levels among pupils at large. The measured effects are small and depend to a great extent on how the models used are specified, how the included variables are measured, and which of them are “held constant.” Nor, therefore, can it be established that the effects one has thought detectable in terms of improved results in independent schools are due to the independent schools as such. Methodologically, it has proved difficult to construct robust, high-quality measures and instruments that allow an adequate handling of all the different factors – observable and unobservable – that affect competition between the school forms and give rise to any differences in pupil performance between them. The consequence is that the small effects that (in some studies) have been found rarely carry any higher degree of evidential warrant. Much of the research rests on model assumptions that are both untested and, at bottom, untestable (e.g. concerning linearity, homogeneity, additivity, absence of interaction effects, independence, background-contextual neutrality, etc.). The results are throughout of a tentative character, and the conclusions researchers, politicians and opinion-makers draw from them should therefore be reflected in a degree of belief commensurate with this epistemological status.

We are the 100%

19 Jan, 2015 at 10:44 | Posted in Economics | 1 Comment


Debating “modern” macroeconomics, I often get the feeling that mainstream economists, when facing anomalies, think that there is always some further “technical fix” that will get them out of the quagmire. But are these elaborations and amendments of something basically wrong really going to solve the problem? I doubt it. Acting like the baker’s apprentice who, having forgotten to add yeast to the dough, throws it into the oven afterwards, simply isn’t enough.

When criticizing the basic workhorse DSGE model for its inability to explain involuntary unemployment, DSGE defenders maintain that later elaborations — especially newer search models — manage to do just that. I strongly disagree.

One of the more conspicuous problems with those “solutions” is that they — as e.g. Pissarides’ “Loss of Skill during Unemployment and the Persistence of Unemployment Shocks,” QJE (1992) — are as a rule constructed without seriously trying to warrant that the model-immanent assumptions and results are applicable in the real world. External validity is more or less a non-existent problematique, sacrificed on the altar of model derivations. This is not by chance. For how could one even imagine empirically testing assumptions such as Pissarides’ “model 1” assumptions of reality being adequately represented by “two overlapping generations of fixed size,” “wages determined by Nash bargaining,” “actors maximizing expected utility,” “endogenous job openings,” and “job matching describable by a probability distribution,” without coming to the conclusion that this is — in terms of realism and relevance — nothing but nonsense on stilts?

Brad DeLong and the true nature of neoclassical economics

18 Jan, 2015 at 15:57 | Posted in Economics | 5 Comments

I think that modern neoclassical economics is in fine shape as long as it is understood as the ideological and substantive legitimating doctrine of the political theory of possessive individualism. As long as we have relatively-self-interested liberal individuals who have relatively-strong beliefs that things are theirs, the competitive market in equilibrium is an absolutely wonderful mechanism for achieving truly extraordinary degree of societal coordination and productivity. We need to understand that. We need to value that. And that is what neoclassical economics does, and does well.

Of course, there are all the caveats to Arrow-Debreu-Mackenzie:

1   The market must be in equilibrium.
2   The market must be competitive.
3   The goods traded must be excludable.
4   The goods traded must be non-rival.
5   The quality of goods traded and of effort delivered must be known, or at least bonded, for adverse selection and moral hazard are poison.
6   Externalities must be corrected by successful Pigovian taxes or successful Coaseian carving of property rights at the joints.
7   People must be able to accurately calculate their own interests.
8   People must not be sadistic–the market does not work well if participating agents are either the envious or the spiteful.
9   The distribution of wealth must correspond to the societal consensus of need and desert.
10 The structure of debt and credit must be sound, or if it is not sound we need a central bank or a social-credit agency to make it sound and so make Say’s Law true in practice even though we have no reason to believe Say’s Law is true in theory.

Brad DeLong

An impressive list of caveats indeed. Not very much value left of “modern neoclassical economics” if you ask me …

Still — almost a century and a half after Léon Walras founded neoclassical general equilibrium theory — “modern neoclassical economics” hasn’t been able to show that markets move economies to equilibria.

We do know that — under very restrictive assumptions — equilibria do exist, are unique and are Pareto-efficient. One however has to ask oneself — what good does that do?

As long as we cannot show, except under exceedingly special assumptions, that there are convincing reasons to suppose there are forces which lead economies to equilibria — the value of general equilibrium theory is negligible. As long as we cannot really demonstrate that there are forces operating — under reasonable, relevant and at least mildly realistic conditions — at moving markets to equilibria, there cannot really be any sustainable reason for anyone to pay any interest or attention to this theory.

A stability that can only be proved by assuming “Santa Claus” conditions is of no avail. Most people do not believe in Santa Claus anymore. And for good reasons. Santa Claus is for kids, and general equilibrium economists ought to grow up.

Continuing to model a world full of agents behaving as economists — “often wrong, but never uncertain” — and still not being able to show that the system under reasonable assumptions converges to equilibrium (or simply assuming the problem away) is a gross misallocation of intellectual resources and time.

And then, of course, there is Sonnenschein-Mantel-Debreu!

So what? Why should we care about Sonnenschein-Mantel-Debreu?

Because Sonnenschein-Mantel-Debreu ultimately explains why “modern neoclassical economics” — New Classical, Real Business Cycles, Dynamic Stochastic General Equilibrium (DSGE) and “New Keynesian” — with its microfounded macromodels is such a bad substitute for real macroeconomic analysis!

These models try to describe and analyze complex and heterogeneous real economies with a single rational-expectations-robot-imitation-representative-agent. That is, with something that has absolutely nothing to do with reality. And — worse still — something that is not even amenable to the kind of general equilibrium analysis that they are thought to give a foundation for, since Hugo Sonnenschein (1972), Rolf Mantel (1976) and Gerard Debreu (1974) unequivocally showed that there did not exist any condition by which assumptions on individuals would guarantee either stability or uniqueness of the equilibrium solution.
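For reference, a compact statement of the result in its Debreu (1974) form, in standard textbook notation (the formulation below is mine): on any set of prices bounded away from zero, essentially any continuous function satisfying Walras’ law can be an aggregate excess demand function, so no assumptions about individual rationality pin down stability or uniqueness at the aggregate level.

```latex
% Sonnenschein-Mantel-Debreu, in the Debreu (1974) form (standard textbook
% statement; notation mine). Restrict prices to the simplex, bounded away
% from its boundary:
%   P_eps = { p : sum_i p_i = 1,  p_i >= eps for all i }.
% Then any continuous Z : P_eps -> R^n obeying Walras' law is the aggregate
% excess demand of some exchange economy with n well-behaved consumers:
\[
p \cdot Z(p) = 0 \ \text{ and } Z \text{ continuous on } P_{\varepsilon}
\;\Longrightarrow\;
Z(p) = \sum_{i=1}^{n} z_i(p) \ \text{ for some economy } \{z_i\}_{i=1}^{n}.
\]
```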

Opting for cloned representative agents that are all identical is of course not a real solution to the fallacy of composition that the Sonnenschein-Mantel-Debreu theorem points to. Representative agent models are — as I have argued at length here — rather an evasion whereby issues of distribution, coordination, heterogeneity — everything that really defines macroeconomics — are swept under the rug.

Instead of real maturity, we see that general equilibrium theory possesses only pseudo-maturity. For the description of the economic system, mathematical economics has succeeded in constructing a formalized theoretical structure, thus giving an impression of maturity, but one of the main criteria of maturity, namely, verification, has hardly been satisfied. In comparison to the amount of work devoted to the construction of the abstract theory, the amount of effort which has been applied, up to now, in checking the assumptions and statements seems inconsequential.

Milton Friedman on econometric ‘groping in the dark’

16 Jan, 2015 at 19:12 | Posted in Statistics & Econometrics | Comments Off on Milton Friedman on econometric ‘groping in the dark’

Being sorta-kinda Keynesian, yours truly doesn’t often find the occasion to approvingly quote Milton Friedman. But on this issue I have no problem:

Granted that the final result will be capable of being expressed in the form of a system of simultaneous equations applying to the economy as a whole, it does not follow that the best way to get to that final result is by seeking to set such a system down now. As I am sure those who have tried to do so will agree, we now know so little about the dynamic mechanisms at work that there is enormous arbitrariness in any system set down. Limitations of resources – mental, computational, and statistical – enforce a model that, although complicated enough for our capacities, is yet enormously simple relative to the present state of understanding of the world we seek to explain. Until we can develop a simpler picture of the world, by an understanding of interrelations within sections of the economy, the construction of a model for the economy as a whole is bound to be almost a complete groping in the dark. The probability that such a process will yield a meaningful result seems to me almost negligible.

Milton Friedman (1951)

Statistical modeling and reality

15 Jan, 2015 at 11:24 | Posted in Statistics & Econometrics | 4 Comments

My critique is that the currently accepted notion of a statistical model is not scientific; rather, it is a guess at what might constitute (scientific) reality without the vital element of feedback, that is, without checking the hypothesized, postulated, wished-for, natural-looking (but in fact only guessed) model against that reality. To be blunt, as far as is known today, there is no such thing as a concrete i.i.d. (independent, identically distributed) process, not because this is not desirable, nice, or even beautiful, but because Nature does not seem to be like that … As Bertrand Russell put it at the end of his long life devoted to philosophy, “Roughly speaking, what we know is science and what we don’t know is philosophy.” In the scientific context, but perhaps not in the applied area, I fear statistical modeling today belongs to the realm of philosophy.

To make this point seem less erudite, let me rephrase it in cruder terms. What would a scientist expect from statisticians, once he became interested in statistical problems? He would ask them to explain to him, in some clear-cut cases, the origin of randomness frequently observed in the real world, and furthermore, when this explanation depended on the device of a model, he would ask them to continue to confront that model with the part of reality that the model was supposed to explain. Something like this was going on three hundred years ago … But in our times the idea somehow got lost when i.i.d. became the pampered new baby.

Rudolf Kalman

What does ‘autonomy’ mean in econometrics?

15 Jan, 2015 at 11:07 | Posted in Economics | Comments Off on What does ‘autonomy’ mean in econometrics?

The point of the discussion, of course, has to do with where Koopmans thinks we should look for “autonomous behaviour relations”. He appeals to experience but in a somewhat oblique manner. He refers to the Harvard barometer “to show that relationships between economic variables … not traced to underlying behaviour equations are unreliable as instruments for prediction” … His argument would have been more effectively put had he been able to give instances of relationships that have been “traced to underlying behaviour equations” and that have been reliable instruments for prediction. He did not do this, and I know of no conclusive case that he could draw upon. There are of course cases of economic models that he could have mentioned as having been unreliable predictors. But these latter instances demonstrate no more than the failure of Harvard barometer: all were presumably built upon relations that were more or less unstable in time. The meaning conveyed, we may suppose, by the term “fundamental autonomous relation” is a relation stable in time and not drawn as an inference from combinations of other relations. The discovery of such relations suitable for the prediction procedure that Koopmans has in mind has yet to be publicly presented, and the phrase “underlying behaviour equation” is left utterly devoid of content.

Rutledge Vining

If only Robert Lucas had read Vining …

Why is capitalism failing?

14 Jan, 2015 at 18:46 | Posted in Economics | Comments Off on Why is capitalism failing?

 

‘New Keynesian’ haiku economics

13 Jan, 2015 at 15:51 | Posted in Economics | 7 Comments

The neoliberal hegemony of the Reagan-Thatcher era certainly caused debate among mainstream economists to decline:

It is clear that the Great Depression and the Keynesian Revolution seemed to increase debate within the mainstream, and that as Joe says, the: “decline in debate… appears to have been associated with the emergence of a ‘neoliberal’ hegemony from the 1970s onwards.” That’s essentially correct.

And the decline in debate explains why Lucas could say in the early 1980s that: “at research seminars, people don’t take Keynesian theorizing seriously anymore; the audience starts to whisper and giggle to one another.” And also why if you wanted to publish you basically had to accept the crazy New Classical models. Krugman admitted to that before, as I’ve already noticed. He argued that: “the only way to get non-crazy macroeconomics published was to wrap sensible assumptions about output and employment in something else, something that involved rational expectations and intertemporal stuff and made the paper respectable.” You must remember, you don’t publish, you don’t get tenure. So crazy models became the norm.

Not only heterodox economists were kicked out of mainstream departments, and had to create their own journals in the 1970s, but also the pressure within the mainstream to conform and silence dissent was strong indeed. Note that many, like Blanchard and Woodford for example, in the mainstream continue to suggest that there is a lot of consensus between New Keynesians and Real Business Cycles types. In fact, they say there is more agreement now than in the 1970s. How is the consensus methodology in macroeconomics, you ask? From Blanchard’s paper above:

“To caricature, but only slightly: A macroeconomic article today often follows strict, haiku-like, rules: It starts from a general equilibrium structure, in which individuals maximize the expected present value of utility, firms maximize their value, and markets clear. Then, it introduces a twist, be it an imperfection or the closing of a particular set of markets, and works out the general equilibrium implications. It then performs a numerical simulation, based on calibration, showing that the model performs well. It ends with a welfare assessment.”

And yes that is also the basis of New Keynesian models. The haiku basically describes the crazy models in which reasonable results must be disguised if you’re to be taken seriously in academia. When everybody agrees, there is little need for debate. And you get stuck with crazy models. The lack of debate within the mainstream to this day is also, in part, what provides support for austerity policies around the globe, even when it is clear that they have failed.

Matias Vernengo/Naked Keynesianism

“Stuck with crazy models”? Absolutely! Let me just give one example.

A lot of mainstream economists out there still think that price and wage rigidities are the prime movers behind unemployment. What is even worse — I’m totally gobsmacked every time I come across this utterly ridiculous misapprehension — is that some of them even think that these rigidities are the reason John Maynard Keynes gave for the high unemployment of the Great Depression. This is of course pure nonsense. For although Keynes in General Theory devoted substantial attention to the subject of wage and price rigidities, he certainly did not hold this view.

Since unions/workers, contrary to classical assumptions, make wage-bargains in nominal terms, they will – according to Keynes – accept lower real wages caused by higher prices, but resist lower real wages caused by lower nominal wages. However, Keynes held it incorrect to attribute “cyclical” unemployment to this diversified agent behaviour. During the depression money wages fell significantly and – as Keynes noted – unemployment still grew. Thus, even when nominal wages are lowered, they do not generally lower unemployment.

In any specific labour market, lower wages could, of course, raise the demand for labour. But a general reduction in money wages would leave real wages more or less unchanged. The reasoning of the classical economists was, according to Keynes, a flagrant example of the “fallacy of composition.” Assuming that since unions/workers in a specific labour market could negotiate real wage reductions via lowering nominal wages, unions/workers in general could do the same, the classics confused micro with macro.

Lowering nominal wages could not – according to Keynes – clear the labour market. Lowering wages – and possibly prices – could, perhaps, lower interest rates and increase investment. But to Keynes it would be much easier to achieve that effect by increasing the money supply. In any case, wage reductions were not seen by Keynes as a general substitute for an expansionary monetary or fiscal policy.

Even if lowering wages has some potentially positive effects, there are also negative effects that weigh more heavily – deteriorating management-union relations, expectations of ongoing wage cuts delaying investment, debt deflation, et cetera.

So what Keynes actually argued in the General Theory was that the classical proposition – that lowering wages would lower unemployment and ultimately take economies out of depressions – was ill-founded and basically wrong.

To Keynes, flexible wages would only make things worse by leading to erratic price-fluctuations. The basic explanation for unemployment is insufficient aggregate demand, and that is mostly determined outside the labor market.

The classical school [maintains that] while the demand for labour at the existing money-wage may be satisfied before everyone willing to work at this wage is employed, this situation is due to an open or tacit agreement amongst workers not to work for less, and that if labour as a whole would agree to a reduction of money-wages more employment would be forthcoming. If this is the case, such unemployment, though apparently involuntary, is not strictly so, and ought to be included under the above category of ‘voluntary’ unemployment due to the effects of collective bargaining, etc …
The classical theory … is best regarded as a theory of distribution in conditions of full employment. So long as the classical postulates hold good, unemployment, which is in the above sense involuntary, cannot occur. Apparent unemployment must, therefore, be the result either of temporary loss of work of the ‘between jobs’ type or of intermittent demand for highly specialised resources or of the effect of a trade union ‘closed shop’ on the employment of free labour. Thus writers in the classical tradition, overlooking the special assumption underlying their theory, have been driven inevitably to the conclusion, perfectly logical on their assumption, that apparent unemployment (apart from the admitted exceptions) must be due at bottom to a refusal by the unemployed factors to accept a reward which corresponds to their marginal productivity …

Obviously, however, if the classical theory is only applicable to the case of full employment, it is fallacious to apply it to the problems of involuntary unemployment – if there be such a thing (and who will deny it?). The classical theorists resemble Euclidean geometers in a non-Euclidean world who, discovering that in experience straight lines apparently parallel often meet, rebuke the lines for not keeping straight – as the only remedy for the unfortunate collisions which are occurring. Yet, in truth, there is no remedy except to throw over the axiom of parallels and to work out a non-Euclidean geometry. Something similar is required to-day in economics. We need to throw over the second postulate of the classical doctrine and to work out the behaviour of a system in which involuntary unemployment in the strict sense is possible.

J M Keynes General Theory

People calling themselves ‘New Keynesians’ ought to be rather embarrassed by the fact that the kind of microfounded dynamic stochastic general equilibrium models they use cannot incorporate such a basic fact of reality as involuntary unemployment!

Of course, given that they work with microfounded representative-agent models, this should come as no surprise. If one representative agent is employed, all representative agents are. The kind of unemployment that occurs is voluntary, since it consists only of the adjustments in hours of work that these optimizing agents make to maximize their utility. Maybe that’s also the reason the prominent ‘New Keynesian’ macroeconomist Simon Wren-Lewis can write:

I think the labour market is not central, which was what I was trying to say in my post. It matters in a [New Keynesian] model only in so far as it adds to any change to inflation, which matters only in so far as it influences central bank’s decisions on interest rates.

In the basic DSGE models used by most ‘New Keynesians’, the labour market is always cleared – responding to a changing interest rate, expected lifetime incomes, or real wages, the representative agent maximizes the utility function by varying her labour supply, money holdings and consumption over time. Most importantly – if the real wage somehow deviates from its “equilibrium value,” the representative agent adjusts her labour supply, so that when the real wage is higher than its “equilibrium value,” labour supply is increased, and when the real wage is below its “equilibrium value,” labour supply is decreased.
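A stylized sketch of the textbook household problem behind this description (standard New Keynesian notation; this is my illustration, not anything taken from Wren-Lewis):

```latex
% A standard representative-household problem: choose consumption c_t and
% hours n_t to maximize expected lifetime utility, subject to a conventional
% intertemporal budget constraint.
\[
\max_{\{c_t,\,n_t\}} \; E_0 \sum_{t=0}^{\infty} \beta^{t}
\left( \frac{c_t^{\,1-\sigma}}{1-\sigma} - \frac{n_t^{\,1+\varphi}}{1+\varphi} \right)
\]
% The intratemporal first-order condition equates the marginal rate of
% substitution between leisure and consumption with the real wage w_t:
\[
c_t^{\,\sigma}\, n_t^{\,\varphi} = w_t
\]
% Hours worked therefore rise when w_t is above its "equilibrium value" and
% fall when it is below it: any non-employment in this setting is an optimal
% labour-supply choice, i.e. voluntary.
```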

In this model world, unemployment is always an optimal choice to changes in the labour market conditions. Hence, unemployment is totally voluntary. To be unemployed is something one optimally chooses to be.

The final court of appeal for macroeconomic models is the real world, and as long as no convincing justification is put forward for how the inferential bridging de facto is made, macroeconomic model building is little more than “hand waving” that gives us rather little warrant for making inductive inferences from models to real-world target systems. If substantive questions about the real world are being posed, it is the formalistic-mathematical representations utilized to analyze them that have to match reality, not the other way around.

To Keynes this was self-evident. But obviously not so to haiku-rule-following ‘New Keynesians’.

