On the importance of power

31 January, 2013 at 17:20 | Posted in Statistics & Econometrics | Comments Off on On the importance of power

 


Did p values work? Read my lips – they didn’t!

29 January, 2013 at 21:36 | Posted in Statistics & Econometrics, Theory of Science & Methodology | Comments Off on Did p values work? Read my lips – they didn’t!

Jager and Leek may well be correct in their larger point, that the medical literature is broadly correct. But I don’t think the statistical framework they are using is appropriate for the questions they are asking. My biggest problem is the identification of scientific hypotheses and statistical “hypotheses” of the “theta = 0” variety.

Based on the word “empirical” in the title, I thought the authors were going to look at a large number of papers with p-values and then follow up and see if the claims were replicated. But no, they don’t follow up on the studies at all! What they seem to be doing is collecting a set of published p-values and then fitting a mixture model to this distribution, a mixture of a uniform distribution (for null effects) and a beta distribution (for non-null effects). Since only statistically significant p-values are typically reported, they fit their model restricted to p-values less than 0.05. But this all assumes that the p-values have this stated distribution. You don’t have to be Uri Simonsohn to know that there’s a lot of p-hacking going on. Also, as noted above, the problem isn’t really effects that are exactly zero, the problem is that a lot of effects are lost in the noise and are essentially undetectable given the way they are studied.

Jager and Leek write that their model is commonly used to study hypotheses in genetics and imaging. I could see how this model could make sense in those fields … but I don’t see this model applying to published medical research, for two reasons. First … I don’t think there would be a sharp division between null and non-null effects; and, second, there’s just too much selection going on for me to believe that the conditional distributions of the p-values would be anything like the theoretical distributions suggested by Neyman-Pearson theory.

So, no, I don’t at all believe Jager and Leek when they write, “we are able to empirically estimate the rate of false positives in the medical literature and trends in false positive rates over time.” They’re doing this by basically assuming the model that is being questioned, the textbook model in which effects are pure and in which there is no p-hacking.

Andrew Gelman

Indeed. If anything, this underlines how important it is not to equate science with statistical calculation. All science entails human judgement, and using statistical models doesn’t relieve us of that necessity. When the model is misspecified, the scientific value of significance testing is actually zero – even though you’re making valid statistical inferences! Statistical models and concomitant significance tests are no substitutes for doing real science. Or as a noted German philosopher once famously wrote:

There is no royal road to science, and only those who do not dread the fatiguing climb of its steep paths have a chance of gaining its luminous summits.


Neyman-Pearson vs. Fisher on p values

28 January, 2013 at 19:05 | Posted in Statistics & Econometrics | Comments Off on Neyman-Pearson vs. Fisher on p values

 

Keep It Sophisticatedly Simple

28 January, 2013 at 14:36 | Posted in Varia | 1 Comment

Arnold Zellner’s KISS rule – Keep It Sophisticatedly Simple – applies even outside of econometrics. An example is the film music of Stefan Nilsson – here, the breathtakingly beautiful “Fäboden” from Bille August’s and Ingmar Bergman’s masterpiece The Best Intentions.
 

Fun with statistics

27 January, 2013 at 00:08 | Posted in Statistics & Econometrics | 3 Comments

Yours truly gives a PhD course in statistics for students in education and sports this semester. And between teaching them all about Chebyshev’s Theorem, Beta Distributions, Moment-Generating Functions and the Neyman-Pearson Lemma, I try to remind them that statistics can actually also be fun …
 

Austerity – this is what it’s all about

26 January, 2013 at 23:49 | Posted in Politics & Society | Comments Off on Austerity – this is what it’s all about

 

Keynes on statistics and evidential weight

25 January, 2013 at 19:04 | Posted in Statistics & Econometrics, Theory of Science & Methodology | 6 Comments

Almost a hundred years after John Maynard Keynes wrote his seminal A Treatise on Probability (1921), it is still very difficult to find statistics textbooks that seriously try to incorporate his far-reaching and incisive analysis of induction and evidential weight.

The standard view in statistics – and the axiomatic probability theory underlying it – is to a large extent based on the rather simplistic idea that “more is better.” But as Keynes argues – “more of the same” is not what is important when making inductive inferences. It’s rather a question of “more but different.”

Variation, not replication, is at the core of induction. Finding that p(x|y) = p(x|y & w) doesn’t make w “irrelevant.” Knowing that the probability is unchanged when w is present gives p(x|y & w) another evidential weight (“weight of argument”). Running 10 replicative experiments does not make you as “sure” of your inductions as running 10 000 varied experiments – even if the probability values happen to be the same.
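Keynes’s distinction can be stated compactly. The rendering below is only a shorthand of my own – V is not Keynes’s notation – for the idea that new evidence can leave the probability untouched while still increasing the weight of argument:

```latex
\[
  p(x \mid y) \;=\; p(x \mid y \wedge w)
  \qquad \text{and yet} \qquad
  V(x \mid y \wedge w) \;>\; V(x \mid y),
\]
% where V denotes the "weight of argument": the amount of relevant evidence
% behind the probability assessment, not its numerical value.
```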

According to Keynes we live in a world permeated by unmeasurable uncertainty – not quantifiable stochastic risk – which often forces us to make decisions based on anything but “rational expectations.” Keynes rather thinks that we base our expectations on the confidence or “weight” we put on different events and alternatives. To Keynes expectations are a question of weighing probabilities by “degrees of belief,” beliefs that often have precious little to do with the kind of stochastic probabilistic calculations made by the rational agents as modeled by “modern” social sciences. And often we “simply do not know.” As Keynes writes in the Treatise:

The kind of fundamental assumption about the character of material laws, on which scientists appear commonly to act, seems to me to be [that] the system of the material universe must consist of bodies … such that each of them exercises its own separate, independent, and invariable effect, a change of the total state being compounded of a number of separate changes each of which is solely due to a separate portion of the preceding state … Yet there might well be quite different laws for wholes of different degrees of complexity, and laws of connection between complexes which could not be stated in terms of laws connecting individual parts … If different wholes were subject to different laws qua wholes and not simply on account of and in proportion to the differences of their parts, knowledge of a part could not lead, it would seem, even to presumptive or probable knowledge as to its association with other parts … These considerations do not show us a way by which we can justify induction … /427 No one supposes that a good induction can be arrived at merely by counting cases. The business of strengthening the argument chiefly consists in determining whether the alleged association is stable, when accompanying conditions are varied … /468 In my judgment, the practical usefulness of those modes of inference … on which the boasted knowledge of modern science depends, can only exist … if the universe of phenomena does in fact present those peculiar characteristics of atomism and limited variety which appears more and more clearly as the ultimate result to which material science is tending.

Science according to Keynes should help us penetrate to “the true process of causation lying behind current events” and disclose “the causal forces behind the apparent facts.” Models can never be more than a starting point in that endeavour. He further argued that it was inadmissible to project history on the future. Consequently we cannot presuppose that what has worked before, will continue to do so in the future. That statistical models can get hold of correlations between different “variables” is not enough. If they cannot get at the causal structure that generated the data, they are not really “identified.”

How strange that writers of statistics textbooks as a rule do not even touch upon these aspects of scientific methodology, which seem so fundamental and important for anyone trying to understand how we learn and orient ourselves in an uncertain world. An educated guess as to why this is so would be that Keynes’s concepts cannot be squeezed into a single calculable numerical “probability.” In the quest for quantities one turns a blind eye to qualities and looks the other way – but Keynes’s ideas keep creeping out from under the statistics carpet.

It’s high time that statistics textbooks give Keynes his due.

Tobin tax – yes please!

25 January, 2013 at 11:09 | Posted in Economics, Politics & Society | Comments Off on Tobin tax – yes please!

Yours truly has an article in Dagens Arena today about the EU core countries having now decided to introduce a tax on financial transactions.
 

Annie Lööf on thin ice

25 January, 2013 at 07:52 | Posted in Varia | 6 Comments

 
(h/t barnilsson)

The sky’s the limit?

23 January, 2013 at 17:42 | Posted in Varia | 1 Comment

Yours truly launched this blog two years ago. The number of visitors has increased steadily. From having only a couple of hundred visits per month at the start, the blog now gets almost 55 000 visits per month. A blog is certainly not a beauty contest, but given the rather “wonkish” character of the blog – with posts mostly on economic theory, statistics, econometrics, theory of science and methodology – it’s rather gobsmacking that so many are interested and take the time to read and comment on it. I am – of course – truly awed, honoured and delighted!

Mainstream economics and neoliberalism

23 January, 2013 at 13:37 | Posted in Economics, Politics & Society | 5 Comments

Unlearning Economics has an interesting post on some important shortcomings of mainstream (neoclassical) economics and libertarianism (on which I have written e.g. here, here, and here):

I’ve touched briefly before on how behavioural economics makes the central libertarian mantra of being ‘free to choose’ completely incoherent. Libertarians tend to have a difficult time grasping this, responding with things like ‘so people aren’t rational; they’re still the best judges of their own decisions’. My point here is not necessarily that people are not the best judges of their own decisions, but that the idea of freedom of choice – as interpreted by libertarians – is nonsensical once you start from a behavioural standpoint.
The problem is that neoclassical economics, by modelling people as rational utility maximisers, lends itself to a certain way of thinking about government intervention. For if you propose intervention on the grounds that they are not rational utility maximisers, you are told that you are treating people as if they are stupid. Of course, this isn’t the case – designing policy as if people are rational utility maximisers is no different ethically to designing it as if they rely on various heuristics and suffer cognitive biases.

This ‘treating people as if they are stupid’ mentality highlights a problem with neoclassical choice modelling: behaviour is generally considered either ‘rational’ or ‘irrational’. But this isn’t a particularly helpful way to think about human action – as Daniel Kuehn says, heuristics are not really ‘irrational’; they simply save time, and as this video emphasises, they often produce better results than homo economicus-esque calculation. So the line between rationality and irrationality becomes blurred.

For an example of how this flawed thinking pervades libertarian arguments, consider the case of excessive choice. It is well documented that people can be overwhelmed by too much choice, and will choose to put off the decision or just abandon trying altogether. So is somebody who is so inundated with choice that they don’t know what to do ‘free to choose’? Well, not really – their liberty to make their own decisions is hamstrung.

Another example is the case of Nudge. The central point of this book is that people’s decisions are always pushed in a certain direction, either by advertising and packaging, by what the easiest or default choice is, by the way the choice is framed, or any number of other things. This completely destroys the idea of ‘free to choose’ – if people’s choices are rarely or never made neutrally, then one cannot be said to be ‘deciding for them’ any more than the choice was already ‘decided’ for them. The best conclusion is to push their choices in a ‘good’ direction (e.g. towards healthy food rather than junk). Nudging people isn’t a decision – they are almost always nudged. The question is the direction they are nudged in.

It must also be emphasised that choices do not come out of nowhere – they are generally presented with a flurry of bright colours and offers from profit seeking companies. These things do influence us, as much as we hate to admit it, so to work from the premise that the state is the only one that can exercise power and influence in this area is to miss the point.

The fact is that the way both neoclassical economists and libertarians think about choice is fundamentally flawed – in the case of neoclassicism, it cannot be remedied with ‘utility maximisation plus a couple of constraints’; in the case of libertarianism it cannot be remedied by saying ‘so what if people are irrational? They should be allowed to be irrational.’ Both are superficial remedies for a fundamentally flawed epistemological starting point for human action.

On significance and model validation

22 January, 2013 at 12:53 | Posted in Statistics & Econometrics | 9 Comments

Let us suppose that we as educational reformers have a hypothesis that implementing a voucher system would raise the mean test results by 100 points (null hypothesis). Instead, when sampling, it turns out it only raises them by 75 points, with a standard error (telling us how much the mean varies from one sample to another) of 20.

Does this imply that the data do not disconfirm the hypothesis? Given the usual normality assumptions on sampling distributions, with a t-value of 1.25 [(100-75)/20] the one-tailed p-value is approximately 0.11. Thus, approximately 11% of the time we would expect a score this low or lower if we were sampling from this voucher system population. That means that – using the ordinary 5% significance level – we would not reject the null hypothesis, although the test has shown that it is likely – the odds are 0.89/0.11 or 8-to-1 – that the hypothesis is false.
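For readers who want to check the arithmetic, here is a minimal sketch using the normal approximation and the numbers from the example above (scipy is assumed to be available):

```python
from scipy.stats import norm

hypothesised_increase = 100   # null hypothesis: vouchers raise mean scores by 100 points
observed_increase = 75        # what the sample actually showed
standard_error = 20

t = (hypothesised_increase - observed_increase) / standard_error   # 1.25
p_one_tailed = norm.sf(t)     # P(a result this low or lower | null is true)

print(f"t = {t:.2f}, one-tailed p = {p_one_tailed:.3f}")   # t = 1.25, one-tailed p = 0.106
```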

In its standard form, a significance test is not the kind of “severe test” that we are looking for in our search for being able to confirm or disconfirm empirical scientific hypotheses. This is problematic for many reasons, one being that there is a strong tendency to accept the null hypothesis since it can’t be rejected at the standard 5% significance level. In their standard form, significance tests bias against new hypotheses by making it hard to disconfirm the null hypothesis.

And as shown over and over again when it is applied, people have a tendency to read “not disconfirmed” as “probably confirmed.” But looking at our example, standard scientific methodology tells us that since there is only 11% probability that pure sampling error could account for the observed difference between the data and the null hypothesis, it would be more “reasonable” to conclude that we have a case of disconfirmation. Especially if we perform many independent tests of our hypothesis and they all give about the same result as our reported one, I guess most researchers would count the hypothesis as even more disconfirmed.

And, most importantly, of course we should never forget that the underlying parameters we use when performing significance tests are model constructions. Our p-value of 0.11 means next to nothing if the model is wrong. As David Freedman writes in Statistical Models and Causal Inference:

I believe model validation to be a central issue. Of course, many of my colleagues will be found to disagree. For them, fitting models to data, computing standard errors, and performing significance tests is “informative,” even though the basic statistical assumptions (linearity, independence of errors, etc.) cannot be validated. This position seems indefensible, nor are the consequences trivial. Perhaps it is time to reconsider.

Significance tests and ladies tasting tea

22 January, 2013 at 00:02 | Posted in Statistics & Econometrics | Comments Off on Significance tests and ladies tasting tea

The mathematical formulations of statistics can be used to compute probabilities. Those probabilities enable us to apply statistical methods to scientific problems. In terms of the mathematics used, probability is well defined. How does this abstract concept connect to reality? How is the scientist to interpret the probability statements of statistical analyses when trying to decide what is true and what is not? …

Fisher’s use of a significance test produced a number Fisher called the p-value. This is a calculated probability, a probability associated with the observed data under the assumption that the null hypothesis is true. For instance, suppose we wish to test a new drug for the prevention of a recurrence of breast cancer in patients who have had mastectomies, comparing it to a placebo. The null hypothesis, the straw man, is that the drug is no better than the placebo …

Since [the p-value] is used to show that the hypothesis under which it is calculated is false, what does it really mean? It is a theoretical probability associated with the observations under conditions that are most likely false. It has nothing to do with reality. It is an indirect measurement of plausibility. It is not the probability that we would be wrong to say that the drug works. It is not the probability of any kind of error. It is not the probability that a patient will do as well on the placebo as on the drug.

P-values and the real tasks of social science

21 January, 2013 at 15:47 | Posted in Statistics & Econometrics | Comments Off on P-values and the real tasks of social science

After having mastered all the technicalities of regression analysis and econometrics, students often feel as though they are the masters of the universe. I usually cool them down with a required reading of Christopher Achen‘s modern classic Interpreting and Using Regression.

It usually gets them back on track again, and they understand that

no increase in methodological sophistication … will alter the fundamental nature of the subject. It remains a wondrous mixture of rigorous theory, experienced judgment, and inspired guesswork. And that, finally, is its charm.

And in case they get too excited about having learned to master the intricacies of proper significance tests and p-values, I ask them to also ponder Achen’s warning:

Significance testing as a search for specification errors substitutes calculations for substantive thinking. Worse, it channels energy toward the hopeless search for functionally correct specifications and diverts attention from the real tasks, which are to formulate a manageable description of the data and to exclude competing ones.

Of the dead there shall not be silence but speech

20 January, 2013 at 23:42 | Posted in Varia | 1 Comment
To Fadime Sahindal, born 2 April 1975 in Turkey, murdered 21 January 2002 in Sweden


DE DÖDA (THE DEAD)

The dead shall not be silent but speak.
Scattered torment shall find its voice,
and when the rats of the cells and the butts of the murderers’ guns
have turned to ash and age-old dust,
the comet’s parabola and the gamble of the stars
shall still bear witness to those who fell against their wall:
washed in fire but not burned down to embers,
trampled and beaten yet without a wound on their bodies,
and eyes that stared in horror shall open in peace,
and the dead shall not be silent but speak.

 

Of the dead there shall not be silence but speech.
Though maimed and strangled in the cell of power,
glassy-eyed and mocked in cynical waiting rooms
where death has pasted up its propaganda of peace,
they shall rest long in the showcases of conscience,
embalmed by truth and washed in fire,
and those who have already fallen shall not be broken,
and those who begged for mercy in a moment’s forgetfulness
shall rise and bear witness to that which is not broken,
for the dead shall not be silent but speak.

 

No, the dead shall not be silent but speak.
Those who felt triumph on their necks shall raise their heads,
and those who were choked by smoke shall see clearly,
those who were tormented to madness shall flow like springs,
those who fell to their opposite shall themselves fell,
those who were slain with lead shall slay with fire,
those who were hurled down by waves shall themselves become storm.
And the dead shall not be silent but speak.

                                           Erik Lindegren

 

Roy Andersson – the master of commercials

20 January, 2013 at 10:29 | Posted in Politics & Society | Comments Off on Roy Andersson – the master of commercials

 

James Galbraith lectures a bunch of incompetent journalists

19 January, 2013 at 23:22 | Posted in Economics | 2 Comments

 

Otto Neurath on economics

19 January, 2013 at 13:17 | Posted in Economics, Theory of Science & Methodology | 1 Comment

Where does the complex of causal laws represented in a causal structure come from and what assures its survival? These laws, I maintain, like all laws, whether causal or associational, probabilistic or deterministic, are transitory and epiphenomenal. They arise from – and exist only relative to – a nomological machine …

The founders of econometrics, Trygve Haavelmo and Ragnar Frisch … both explicitly believed in the socio-economic machine … Consider Haavelmo’s remarks about the relation between pressure on the throttle and the acceleration of the car. This is a perfectly useful piece of information if you want to drive the car you have, but it is not what you need to know if you are expecting change. For that you need to understand how the fundamental mechanisms operate …

Otto Neurath expressed the view clearly before the First World War in criticising conventional economics. Standard economics, he insisted, makes too much of the inductive method, taking the generalisations that hold in a free market economy to be generalisations that hold simpliciter: ‘Those who stay exclusively with the present will soon only be able to understand the past.’

Nancy Cartwright, The Dappled World

Laws in the social sciences

18 January, 2013 at 16:22 | Posted in Theory of Science & Methodology | Comments Off on Laws in the social sciences

As a theorist of science it is interesting to note that many economists and other social scientists appeal to the requirement that, for an explanation to count as scientific, the individual case must be “subsumable under a general law.” As a basic principle one often invokes a general law of the form “if A then B,” together with the claim that if, in the individual case, one can show that both A and B are present, then B has been “explained.”

This positivist-inductivist view of science is, however, fundamentally untenable. Let me explain why.

According to a positivist-inductivist view of science, the knowledge that science possesses is proven knowledge. Starting from completely presuppositionless observations, an “unprejudiced scientific observer” can formulate observation statements from which scientific theories and laws can be derived. With the help of the principle of induction it becomes possible to move from singular observation statements to universal statements in the form of laws and theories that refer to properties holding always and everywhere. From these laws and theories science can then derive various consequences with whose help it can explain and predict what happens. Through logical deduction, statements can be derived from other statements. The logic of research follows the scheme observation – induction – deduction.

In the more straightforward cases the scientist has to carry out experiments in order to justify the inductions with whose help he establishes his scientific theories and laws. Experiment means – as Francis Bacon so picturesquely put it – putting nature on the rack and forcing it to answer our questions. With the help of a set of statements carefully describing the circumstances of the experiment – the initial conditions – together with the scientific laws, the scientist can deduce statements that explain or predict the phenomenon under investigation.

The hypothetico-deductive method for scientific explanation and prediction can be described in general terms as follows:

1 Laws and theories

2 Initial conditions

——————

3 Explanations and predictions

According to one of the foremost proponents of the hypothetico-deductive method – Carl Hempel – all scientific explanations have this form, which can also be expressed by the schema below:

All A are B                    Premise 1

a is A                     Premise 2

——————————

a is B                     Conclusion

As an example we can take the following everyday phenomenon:

Water heated to 100 degrees Celsius boils

This pot of water is heated to 100 degrees Celsius

———————————————————————–

This pot of water boils

The problem with the hypothetico-deductive method lies not so much in premise 2 or the conclusion, but in the hypothesis itself, premise 1. It is this that must be shown to be correct, and this is where the inductive procedure comes in.

The most obvious weakness of the hypothetico-deductive method is the principle of induction itself. Its most common justification runs as follows:

The principle of induction worked on occasion 1

The principle of induction worked on occasion 2

The principle of induction worked on occasion n

—————————————————–

The principle of induction always works

This is dubious, however, since the “proof” uses induction to justify induction. One cannot use singular statements about the validity of the principle of induction to derive a universal statement about its validity.

Induction is supposed to play two roles: it is meant to make generalisation possible, and it is also supposed to provide proof that the conclusions are correct. As the problem of induction shows, induction cannot perform both of these tasks. It can strengthen the probability of the conclusions (provided the principle of induction is correct, which, however, cannot be proved without ending up in circular reasoning), but it does not show that they are necessarily true.

Another frequently noted weakness of the hypothetico-deductive method is that theories always precede observation statements and experiments, and that it is therefore wrong to claim that science begins with observations and experiments. To this should be added that observation statements and experiments cannot be assumed to be unproblematically reliable, and that testing their validity requires appeal to theory. That the theories themselves may in turn be unreliable is not remedied primarily by more observations and experiments, but by other and better theories. One may also object that induction in no way gives us knowledge of the deeper structures and mechanisms of reality, but only of empirical generalisations and regularities. In science, the explanation of events at one level is most often to be found in causes at another, deeper level. The inductivist view of science ends up describing the main task of science as stating how something takes place, whereas other theories of science hold that the cardinal task of science must be to explain why it takes place.

As a result of the problems mentioned above, more moderate empiricists have come to reason that since there is in general no logical procedure for discovering a law or theory, one simply starts from laws and theories from which one deduces a series of statements that serve as explanations or predictions. Instead of investigating how the laws and theories of science were arrived at, one tries to clarify what a scientific explanation and prediction is, what role theories and models play in them, and how they are to be evaluated.

In the positivist (hypothetico-deductive, deductive-nomological) model of explanation, an explanation is a subsumption or derivation of specific phenomena from universal regularities. To explain a phenomenon (the explanandum) is to deduce a description of it from a set of premises and universal laws of the type “If A, then B” (the explanans). To explain simply means to be able to subsume something under a definite law-like regularity, which is why the approach is sometimes also called the “covering-law model.” But the theories are not to be used to explain specific individual phenomena; they are to explain the universal regularities that enter into a hypothetico-deductive explanation. [There are problems with this conception even within the natural sciences. Many of the laws of natural science do not really say what things do, but what they tend to do. This is largely because the laws describe the behaviour of different parts rather than the phenomenon as a whole (except possibly in experimental situations). And many of the laws of natural science do not really apply to real entities, but only to fictional ones. This is often a consequence of the use of mathematics within the particular science, and it means that these laws can only be exemplified in models (and not in reality).] The positivist model of explanation also comes in a weaker variant: the probabilistic variant, according to which to explain means, in principle, to show that the probability of an event B is very high if event A occurs. In the social sciences this variant dominates. From a methodological point of view, this probabilistic relativisation of the positivist approach to explanation makes no great difference.

A consequence of accepting the hypothetico-deductive model of explanation is usually that one also accepts the so-called symmetry thesis. According to this thesis, the only difference between prediction and explanation is that in the former the explanans is assumed to be known and one tries to make a prediction, while in the latter the explanandum is assumed to be known and one tries to find initial conditions and laws from which the phenomenon under investigation can be derived.

One problem with the symmetry thesis, however, is that it does not take into account that causes can be confused with correlations. That the stork turns up at the same time as the human babies does not constitute an explanation of how children come into being.

Nor does the symmetry thesis take into account that causes can be sufficient but not necessary. That an individual suffering from cancer is run over does not make the cancer the cause of death. The cancer could have been the correct explanation of the individual’s death. But even if we could construct a medical law – in accordance with the deductivist model – saying that individuals with this particular type of cancer will die of it, the law still does not explain this individual’s death. The thesis is therefore simply not correct.

Finding a pattern is not the same as explaining something. To be told, in answer to the question of why the bus is late, that it usually is, is not an acceptable explanation. Ontology and natural necessity must be part of a relevant answer, at least if what one seeks in an explanation is something more than “constant conjunctions of events.”

The original idea behind the positivist model of explanation was that it would provide a complete clarification of what an explanation is and show that an explanation that did not meet its requirements was in fact a pseudo-explanation, that it would provide a method for testing explanations, and that it would show that explanations in accordance with the model were the goal of science. All of these claims can evidently be questioned on good grounds.

An important reason why this model has had such an impact in science is that it gave the appearance of being able to explain things without having to use “metaphysical” causal concepts. Many scientists regard causality as a problematic concept that is best avoided; simple, observable quantities are supposed to suffice. The problem is just that specifying these quantities and their possible correlations does not explain anything at all. That union representatives often turn up in grey jackets and employer representatives in pinstriped suits does not explain why youth unemployment in Sweden is so high today. What is missing in these “explanations” is the necessary adequacy, relevance and causal depth without which science risks becoming empty science fiction and model-play for the sake of play.

Many social scientists seem convinced that for research to count as science it must apply some variant of the hypothetico-deductive method. Out of reality’s complicated swarm of facts and events, a few common law-like correlations are to be panned out to serve as explanations. Within parts of social science, this striving to reduce explanations of social phenomena to a few general principles or laws has been an important driving force. With the help of a few general assumptions, the aim is to explain what the whole macro-phenomenon we call society amounts to. Unfortunately, no really tenable arguments are given for why the fact that a theory can explain different phenomena in a unified way should be a decisive reason for accepting or preferring it. Unity and adequacy are not the same thing.

Nate Silver and the difficult art of prediction

18 January, 2013 at 10:53 | Posted in Statistics & Econometrics | Comments Off on Nate Silver and the difficult art of prediction

On his always readable blog Häggström hävdar, Olle Häggström – professor of mathematical statistics – writes thought-provokingly about Nate Silver’s book The Signal and the Noise:

Nate Silver’s The Signal and the Noise: The Art and Science of Prediction came out in September last year and quickly made it onto the bestseller lists. The 500-page book is written with a distinctly journalistic drive, and is both easy to read and at times so interesting that it is hard to put down. Most of the chapters deal with prediction in some specific area, from meteorology and earthquakes to poker and financial markets, and of course Silver’s own specialities, baseball and American politics.

Silver is very skilled at developing successful prediction models and algorithms, and there is a great deal to agree with in his general advice and recipes for successful prediction. For example, one can only concur when he stresses the importance of mechanistic understanding (getting to grips with the causal relationships behind the phenomenon one is modelling, as opposed to the kind of curve-fitting where one does not much care what the curves actually stand for). The same goes for his choice between hedgehogs and foxes in Isaiah Berlin’s instructive image, where a hedgehog has one single big idea to work with while a fox has many small ones; much of Silver’s success as a forecaster lies in his ability to combine the information in many different (kinds of) sources of knowledge rather than focusing one-sidedly on one or a few. The image conveyed by the book’s title – the importance of being able to pick out the relevant signal in what is often a sea of noise – is also very useful and important. And yet another important lesson is that one should not be content with stating which outcome one finds most probable – in most decision situations there is every reason to try to specify a distribution that gives probabilities for all possible outcomes …

Chapter 8, on statistical inference. Methodologically, this is the book’s central chapter. Nate Silver advocates Bayesian updating of prior probabilities in the light of data. Without doubt this is a large part of the “secret” behind his success as a forecaster, and in areas such as poker and the setting of odds at betting agencies one can safely say that his approach is the only reasonable one. But when he takes a more general stand on the question of Bayesian versus frequentist methods – one of the most debated questions of principle in mathematical statistics throughout most of the twentieth century and beyond – he does so in an unnuanced and unreliable way. His descriptions of frequentist statistics are aggressively tendentious and often downright wrong, as when he claims that “sampling error [is] the only type of error that frequentist statistics directly accounts for” (p. 252) or that “frequentist methods – in striving for immaculate statistical procedures that can’t be contaminated by the researcher’s bias – keep him hermetically sealed off from the real world” (p. 253). Just like the economists Stephen Ziliak and Deirdre McCloskey in their much-discussed book The Cult of Statistical Significance … Silver moves on from his attacks on frequentism to attacking the person Sir Ronald Fisher (1890-1962), generally known as the father of frequentist statistics. Fisher was an inveterate smoker and stubbornly refused (embarrassingly for such a scientific figurehead) to accept the conclusion that smoking causes lung cancer. He could not deny the correlation, but proposed alternative causal relationships, such as that some defect in the lung would in the long run give rise to lung cancer and at the same time cause an irritation that triggers people to start smoking for relief. With hindsight we can laugh at this, but Silver is completely off the mark when he claims that it was Fisher’s frequentist way of thinking that permitted him his position. Silver seems to mean that if Fisher had been a Bayesian, he would have had the sense to represent the far-fetchedness of his alternative causal theory with a very low prior probability, and thereby give high probability to the accepted causal link between smoking and lung cancer. Silver’s argument here is wholly unreasonable, first because Fisher, coloured by wishful thinking, would of course not have chosen such a prior distribution, and second (and more importantly!) because scientific questions should not be settled by the choice of a prior distribution but by the continued acquisition of ever better evidence of various kinds.
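Häggström’s point about priors is easy to illustrate numerically. The sketch below is a hypothetical two-hypothesis example (the numbers are made up, not taken from the book): given the same evidence, an open-minded prior and a wishful-thinking prior produce very different posteriors, which is exactly why evidence rather than prior choice has to do the heavy lifting.

```python
def posterior(prior, likelihood_ratio):
    """Bayes' rule for a binary hypothesis, via the likelihood ratio
    P(data | H) / P(data | not-H)."""
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

likelihood_ratio = 20  # hypothetical: data 20x more likely if H ("smoking causes cancer") is true

for prior in (0.5, 0.01):  # an open-minded prior vs. a wishful-thinking prior
    print(prior, round(posterior(prior, likelihood_ratio), 2))
# 0.5  -> 0.95
# 0.01 -> 0.17
```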

Kenneth Arrow on uncertainty and “trailing clouds of vagueness”

17 January, 2013 at 20:49 | Posted in Economics | 5 Comments

In a very personal discussion of uncertainty and the hopelessness of accurately modeling what will happen in the real world, Nobel laureate Kenneth Arrow – in “I Know a Hawk From a Handsaw,” in M. Szenberg, ed., Eminent Economists: Their Life Philosophies, Cambridge University Press (1992) – writes:

It is my view that most individuals underestimate the uncertainty of the world. This is almost as true of economists and other specialists as it is of the lay public. To me our knowledge of the way things work, in society or in nature, comes trailing clouds of vagueness … Experience during World War II as a weather forecaster added the news that the natural world was also unpredictable. An incident illustrates both uncertainty and the unwillingness to entertain it. Some of my colleagues had the responsibility of preparing long-range weather forecasts, i.e., for the following month. The statisticians among us subjected these forecasts to verification and found they differed in no way from chance. The forecasters themselves were convinced and requested that the forecasts be discontinued. The reply read approximately like this: ‘The Commanding General is well aware that the forecasts are no good. However, he needs them for planning purposes.’

MMT: The Movie

17 January, 2013 at 15:14 | Posted in Economics | Comments Off on MMT: The Movie

 

Probability and economics

17 January, 2013 at 14:12 | Posted in Economics, Statistics & Econometrics | 7 Comments

Modern neoclassical economics relies to a large degree on the notion of probability.

To be at all amenable to applied economic analysis, economic observations allegedly have to be conceived of as random events that are analyzable within a probabilistic framework.

But is it really necessary to model the economic system as a system where randomness can only be analyzed and understood when based on an a priori notion of probability?

When attempting to convince us of the necessity of founding empirical economic analysis on probability models, neoclassical economics actually forces us to (implicitly) interpret events as random variables generated by an underlying probability density function.

This is at odds with reality. Randomness obviously is a fact of the real world. Probability, on the other hand, attaches (if at all) to the world via intellectually constructed models, and a fortiori is only a fact of a probability generating (nomological) machine or a well constructed experimental arrangement or “chance set-up”.

Just as there is no such thing as a “free lunch,” there is no such thing as a “free probability.” To be able to talk about probabilities at all, you have to specify a model. If there is no chance set-up or model that generates the probabilistic outcomes or events – in statistics any process in which you observe or measure is referred to as an experiment (rolling a die), and the results obtained as the outcomes or events of the experiment (the number of points rolled with the die being e.g. 3 or 5) – then strictly speaking there is no event at all.
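To make the contrast concrete, here is a minimal sketch (mine, not part of the original argument) of a well-specified chance set-up: a fair die, where the model fixes the probabilities and observed frequencies can be checked against them. Nothing comparable is specified when a probability measure is simply assumed for GDP or income-distribution data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Chance set-up: a fair six-sided die. The *model* assigns probability 1/6 to each face.
rolls = rng.integers(1, 7, size=1_000_000)

# Event: rolling a 3 or a 5. Model probability = 2/6; the observed frequency can be checked against it.
event_freq = np.isin(rolls, (3, 5)).mean()
print(f"observed frequency: {event_freq:.4f}  (model probability: {2/6:.4f})")
```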

Probability is a relational element. It always must come with a specification of the model from which it is calculated. And then to be of any empirical scientific value it has to be shown to coincide with (or at least converge to) real data generating processes or structures – something seldom or never done!

And this is the basic problem with economic data. If you have a fair roulette-wheel, you can arguably specify probabilities and probability density distributions. But how do you conceive of the analogous nomological machines for prices, gross domestic product, income distribution etc? Only by a leap of faith. And that does not suffice. You have to come up with some really good arguments if you want to persuade people into believing in the existence of socio-economic structures that generate data with characteristics conceivable as stochastic events portrayed by probabilistic density distributions!

From a realistic point of view we really have to admit that the socio-economic states of nature that we talk of in most social sciences – and certainly in economics – are not amenable to analysis in terms of probabilities, simply because in the real-world open systems that the social sciences – including economics – analyze, there are no probabilities to be had!

The processes that generate socio-economic data in the real world cannot just be assumed to always be adequately captured by a probability measure. And, so, it cannot really be maintained that it even should be mandatory to treat observations and data – whether cross-section, time series or panel data – as events generated by some probability model. The important activities of most economic agents do not usually include throwing dice or spinning roulette-wheels. Data generating processes – at least outside of nomological machines like dice and roulette-wheels – are not self-evidently best modeled with probability measures.

If we agree on this, we also have to admit that much of modern neoclassical economics lacks a sound justification. I would even go further and argue that there really is no justifiable rationale at all for this belief that all economically relevant data can be adequately captured by a probability measure. In most real world contexts one has to argue and justify one’s case. And that is obviously something seldom or never done by practitioners of neoclassical economics.

As David Salsburg (2001:146) notes on probability theory:

[W]e assume there is an abstract space of elementary things called ‘events’ … If a measure on the abstract space of events fulfills certain axioms, then it is a probability. To use probability in real life, we have to identify this space of events and do so with sufficient specificity to allow us to actually calculate probability measurements on that space … Unless we can identify [this] abstract space, the probability statements that emerge from statistical analyses will have many different and sometimes contrary meanings.

Just like e.g. John Maynard Keynes (1921) and Nicholas Georgescu-Roegen (1971), Salsburg (2001:301f) is very critical of the way social scientists – including economists and econometricians – uncritically and without argument have come simply to assume that one can apply the probability distributions of statistical theory to their own area of research:

Probability is a measure of sets in an abstract space of events. All the mathematical properties of probability can be derived from this definition. When we wish to apply probability to real life, we need to identify that abstract space of events for the particular problem at hand … It is not well established when statistical methods are used for observational studies … If we cannot identify the space of events that generate the probabilities being calculated, then one model is no more valid than another … As statistical models are used more and more for observational studies to assist in social decisions by government and advocacy groups, this fundamental failure to be able to derive probabilities without ambiguity will cast doubt on the usefulness of these methods.

Or as the great British mathematician John Edensor Littlewood says in his A Mathematician’s Miscellany:

Mathematics (by which I shall mean pure mathematics) has no grip on the real world; if probability is to deal with the real world it must contain elements outside mathematics; the meaning of ‘probability’ must relate to the real world, and there must be one or more ‘primitive’ propositions about the real world, from which we can then proceed deductively (i.e. mathematically). We will suppose (as we may by lumping several primitive propositions together) that there is just one primitive proposition, the ‘probability axiom’, and we will call it A for short. Although it has got to be true, A is by the nature of the case incapable of deductive proof, for the sufficient reason that it is about the real world …

We will begin with the … school which I will call philosophical. This attacks directly the ‘real’ probability problem; what are the axiom A and the meaning of ‘probability’ to be, and how can we justify A? It will be instructive to consider the attempt called the ‘frequency theory’. It is natural to believe that if (with the natural reservations) an act like throwing a die is repeated n times the proportion of 6’s will, with certainty, tend to a limit, p say, as n goes to infinity … If we take this proposition as ‘A’ we can at least settle off-hand the other problem, of the meaning of probability; we define its measure for the event in question to be the number p. But for the rest this A takes us nowhere. Suppose we throw 1000 times and wish to know what to expect. Is 1000 large enough for the convergence to have got under way, and how far? A does not say. We have, then, to add to it something about the rate of convergence. Now an A cannot assert a certainty about a particular number n of throws, such as ‘the proportion of 6’s will certainly be within p ± e for large enough n (the largeness depending on e)’. It can only say ‘the proportion will lie between p ± e with at least such and such probability (depending on e and n*) whenever n > n*’. The vicious circle is apparent. We have not merely failed to justify a workable A; we have failed even to state one which would work if its truth were granted. It is generally agreed that the frequency theory won’t work. But whatever the theory it is clear that the vicious circle is very deep-seated: certainty being impossible, whatever A is made to state can only be in terms of ‘probability’.

This importantly also means that if you cannot show that data satisfies all the conditions of the probabilistic nomological machine, then the statistical inferences used – and a fortiori neoclassical economics – lack sound foundations!
 

References

Georgescu-Roegen, Nicholas (1971), The Entropy Law and the Economic Process. Harvard University Press.

Keynes, John Maynard (1973 (1921)), A Treatise on Probability. Volume VIII of The Collected Writings of John Maynard Keynes, London: Macmillan.

Littlewood, John Edensor (1953), A Mathematician’s Miscellany, London: Methuen & Co.

Salsburg, David (2001), The Lady Tasting Tea. Henry Holt.

Aaron Swartz (1986 – 2013) In Memoriam

15 January, 2013 at 18:16 | Posted in Varia | Comments Off on Aaron Swartz (1986 – 2013) In Memoriam

 

Charles Plosser on stimulus – so wrong, so wrong

12 January, 2013 at 21:55 | Posted in Economics | 8 Comments

According to Bloomberg, Federal Reserve Bank of Philadelphia President Charles Plosser yesterday “said the central bank’s record stimulus risks a surge in inflation and may impair efforts by households to repair their finances.”

Trying to stimulate the economy may even prolong the process, if we’re to believe this guy.

I think Plosser might benefit from taking a look at this video and listening to his master’s voice:

In this interesting video Federal Reserve Chairman Bernanke basically says that idle balances don’t chase goods and services and that a fortiori we don’t have to be overly afraid that quantitative easing will spill over into inflation. And – which actually is the most interesting part of the speech – he also confirms the Modern Monetary Theory view that the financing of these operations is made possible by simply crediting a bank account and thereby – by a single keystroke – actually creating money.

One of the most important reasons why we’re still stuck in depression-like economic quagmires is that people in general – including Plosser and other mainstream economists – simply don’t understand the workings of modern monetary systems. The result is totally and utterly wrong-headed austerity policies, emanating out of a groundless fear of creating inflation via central banks printing money, in a situation where we rather should fear deflation and inadequate effective demand.

What nature hath joined together, multiple regression cannot put asunder

12 January, 2013 at 16:18 | Posted in Statistics & Econometrics | 3 Comments

Distinguished social psychologist Richard E. Nisbett takes on the idea of intelligence and IQ testing in his Intelligence and How to Get It (Norton 2011). He has a somewhat atypical aversion to multiple-regression analysis and writes (p. 17):

Researchers often determine the individual’s contemporary IQ or IQ earlier in life, socioeconomic status of the family of origin, living circumstances when the individual was a child, number of siblings, whether the family had a library card, educational attainment of the individual, and other variables, and put all of them into a multiple-regression equation predicting adult socioeconomic status or income or social pathology or whatever. Researchers then report the magnitude of the contribution of each of the variables in the regression equation, net of all the others (that is, holding constant all the others). It always turns out that IQ, net of all the other variables, is important to outcomes. But … the independent variables pose a tangle of causality – with some causing others in goodness-knows-what ways and some being caused by unknown variables that have not even been measured. Higher socioeconomic status of parents is related to educational attainment of the child, but higher-socioeconomic-status parents have higher IQs, and this affects both the genes that the child has and the emphasis that the parents are likely to place on education and the quality of the parenting with respect to encouragement of intellectual skills and so on. So statements such as “IQ accounts for X percent of the variation in occupational attainment” are built on the shakiest of statistical foundations. What nature hath joined together, multiple regressions cannot put asunder.

Now, I think this is right as far as it goes, although it would certainly have strengthened Nisbett’s argumentation if he had elaborated more on the methodological questions around causality, or at least had given some mathematical-statistical-econometric references. Unfortunately, his alternative approach is not more convincing than regression analysis. Like so many other contemporary social scientists, Nisbett seems to think that randomization may solve the empirical problem. By randomizing we get different “populations” that are homogeneous with regard to all variables except the one we think is a genuine cause. In that way we are supposed to be able to do without actually knowing what all these other factors are.

If you succeed in performing an ideal randomization with different treatment groups and control groups, that is attainable. But it presupposes that you really have been able to establish – and not just assume – that all causes other than the putative one have the same probability distribution in the treatment and control groups, and that assignment to treatment or control is independent of all other possible causal variables.

Unfortunately, real experiments and real randomizations seldom or never achieve this. So, yes, we may do without knowing all causes, but it takes ideal experiments and ideal randomizations to do that, not real ones.

As I have argued (here), that means that in practice we do have to have sufficient background knowledge to deduce causal knowledge. Without old knowledge, we can’t get new knowledge – and, no causes in, no causes out.
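Nisbett’s “tangle of causality” point is easy to make concrete with a toy simulation. The data-generating process below is entirely hypothetical (the coefficients are made up for illustration); it only shows that the estimated “effect” of IQ, net of everything else, depends heavily on which of the entangled variables happen to be included in the regression:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical data-generating process: parental SES drives both IQ and later income.
parental_ses = rng.normal(size=n)
iq = 0.6 * parental_ses + rng.normal(size=n)
income = 0.5 * parental_ses + 0.2 * iq + rng.normal(size=n)

def ols(y, *regressors):
    """Ordinary least squares with an intercept; returns the slope coefficients."""
    X = np.column_stack((np.ones(n),) + regressors)
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

print(ols(income, iq))                # ~[0.42]      - IQ "effect" with the common cause omitted
print(ols(income, iq, parental_ses))  # ~[0.20, ...] - close to the true 0.2 once SES is included
```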

On the issue of the shortcomings of multiple regression analysis, no one sums it up better than eminent mathematical statistician David Freedman in his Statistical Models and Causal Inference:

If the assumptions of a model are not derived from theory, and if predictions are not tested against reality, then deductions from the model must be quite shaky. However, without the model, the data cannot be used to answer the research question …

In my view, regression models are not a particularly good way of doing empirical work in the social sciences today, because the technique depends on knowledge that we do not have. Investigators who use the technique are not paying adequate attention to the connection – if any – between the models and the phenomena they are studying. Their conclusions may be valid for the computer code they have created, but the claims are hard to transfer from that microcosm to the larger world …

Regression models often seem to be used to compensate for problems in measurement, data collection, and study design. By the time the models are deployed, the scientific position is nearly hopeless. Reliance on models in such cases is Panglossian …

Given the limits to present knowledge, I doubt that models can be rescued by technical fixes. Arguments about the theoretical merit of regression or the asymptotic behavior of specification tests for picking one version of a model over another seem like the arguments about how to build desalination plants with cold fusion as the energy source. The concept may be admirable, the technical details may be fascinating, but thirsty people should look elsewhere …

Causal inference from observational data presents many difficulties, especially when underlying mechanisms are poorly understood. There is a natural desire to substitute intellectual capital for labor, and an equally natural preference for system and rigor over methods that seem more haphazard. These are possible explanations for the current popularity of statistical models.

Indeed, far-reaching claims have been made for the superiority of a quantitative template that depends on modeling – by those who manage to ignore the far-reaching assumptions behind the models. However, the assumptions often turn out to be unsupported by the data. If so, the rigor of advanced quantitative methods is a matter of appearance rather than substance.

It is high time to send neoliberalism the bill for all its broken pots

11 January, 2013 at 19:38 | Posted in Politics & Society | 1 Comment

The heyday of market fundamentalism and neoliberalism seems soon to be over. Time is shedding its skin, and forgotten ideas about democracy and welfare are being brought back to life.

Oh, you matadors of the market! The chariot of fate certainly does not run on rails, but your hour too shall come. As August Strindberg so rightly observed almost a hundred and fifty years ago in The Red Room – this is unbearable. But –

a day will come when it will be even worse, but then, then we will come down from Vita Bergen, from Skinnarviksbergen, from Tyskbagarbergen, and we will come with a great roar like a waterfall, and we will demand our beds back. Demand? No – take! And you will get to lie on carpenters’ benches, as I have had to, and you will get to eat potatoes until your bellies are as taut as drumskins, just as if you had gone through the water ordeal as we have …

Tora! Tora! Tora!

11 January, 2013 at 18:13 | Posted in Economics | Comments Off on Tora! Tora! Tora!

According to the New York Times, the Japanese government today approved emergency stimulus spending of ¥10.3 trillion in an attempt to kick-start growth in the long-moribund economy:

Prime Minister Shinzo Abe … reiterated his desire for the Japanese central bank to make a firmer commitment to stopping deflation by pumping more money into the economy, which the prime minister has said is crucial to getting businesses to invest and consumers to spend.

“We will put an end to this shrinking and aim to build a stronger economy where earnings and incomes can grow,” Mr. Abe said. “For that, the government must first take the initiative to create demand and boost the entire economy.”

Under the plan, the Japanese government will spend $116 billion on public works and disaster mitigation projects, subsidies for companies that invest in new technology and financial aid to small businesses.

Through these measures, the government will seek to raise real economic growth 2 percentage points and add 600,000 jobs to the economy, Mr. Abe said. The package announced Friday amounts to one of the largest spending plans in Japanese history, he said.

By simply talking about stimulus measures, Mr. Abe, who took office late last month, has already driven down the value of the yen, much to the relief of Japanese exporters, whose competitiveness benefits from a weaker currency. In response, Tokyo stocks have rallied.

The sun rises in the east and sets in the west …

Annie Lööf – soon just a memory

11 January, 2013 at 16:18 | Posted in Politics & Society | 2 Comments

The neoliberal flagship Neo wrote a while ago about Annie Lööf:

In the 2006 election Annie Lööf became one of six members of the Riksdag elected on personal preference votes. With 2,296 personal votes – 13.28 percent of the Centre Party’s votes in Jönköping County – she replaced the first name on the list, Margareta Andersson, who had been a member of parliament since 1994. While Andersson had tabled motions to ban boxing and other combat sports and to introduce compulsory community service, Lööf tabled motions to abolish the Employment Protection Act (LAS), introduce a flat tax and break up Systembolaget’s monopoly, often together with her sidekick from the Centre Party youth wing, Fredrick Federley.

Lööf used to point to the inspiration for such proposals on her website. Besides Johan Norberg she recommended liberal classics such as Ayn Rand’s novel of ideas Atlas Shrugged and Robert Nozick’s Anarchy, State, and Utopia, a libertarian critique of, among others, John Rawls’s egalitarian philosophy. These are reading habits that have made left-wing bloggers and academics such as Lars Pålsson Syll foam at the mouth.

Yes, the fact that Mrs Lööf openly celebrates Ayn Rand, the high priestess of ice-cold egoism, and the dictator-hugger Margaret Thatcher is reprehensible in itself. But the main reason I have gone on criticising this platitudinous career politician year after year is above all her mindless neoliberal argumentation and her parliamentary motions to, for example:

  • Introduce a flat tax (lower tax for high-income earners)
  • Abolish the Employment Protection Act (LAS)
  • Restrict the right to strike
  • Introduce market-rate rents
  • Sell off SVT and SR
  • Make Sweden join NATO
  • Expand nuclear power

But – as more and more people have come to realise – with that party leader, the Centre Party no longer needs to worry about how to reach voters beyond Stureplan …

Economics – when the model becomes the message

11 January, 2013 at 13:54 | Posted in Economics, Theory of Science & Methodology | 3 Comments

Today there is a post up on Real-World Economics Review Blog – one of my favourite blogs – by Peter Radford on the sorry state of modern economics:

Economists nowadays love to talk about their models. They produce models to explain and elucidate. They devise more complex models to mimic the entire economy. They cobble together small models to illustrate a particular problem. Not to model is not to be an economist.

I call these models caricatures. That’s what they are. Caricatures.

Think of those old political cartoons. The subject of the cartoonist’s scorn was always a politician whose features were exaggerated to make a point. If the nose was large in real life, it became huge in the cartoon. If the ears were a little on the above average side, then they became elephantine. A squeaky voice was portrayed as a squeal. And so on. The point being that the viewer of the cartoon was assumed to be aware of the foible or feature being drawn out, and the exaggeration was a device to convey new information or to highlight something about the subject.

Models are the same thing.

They are gross simplifications with certain features of the real world suppressed or eliminated entirely in order to draw attention to others. If the model is supposed to throw light on one thing, then other stuff is thought of as extraneous and eliminated …

The reason a caricature works is that we already know about the nose, the ears, or the voice. There is nothing new being conveyed. Recognition is simplified so that another message can be attached to it and passed along more readily.

In contrast, the reason economists use models is to look for things that are not already known. They are looking for insights not easily revealed by the complex web of reality, but which may come into sharper focus when that complexity is removed.

The success or failure of the technique resides largely in the choice[s] about which aspect of reality to suppress. And therein lies the source of the mess. Economists have made some pretty damn awful choices. So much so that any insights gained are unlikely to have much, if any, relevance to the real world. The cartoon remains just that, a cartoon …

Apparently economists like caricatures because they cannot draw good portraits. Economics is mostly a cartoon. We need it to be more.

