The Swedish Education Act — a joke

20 Jan, 2019 at 11:09 | Posted in Education & School | Comments Off on Svensk skollag — ett skämt

When Carina carried a disruptive pupil out of the classroom, the parents reported her. Both the Swedish Schools Inspectorate and the police concluded that she had acted correctly. Yet her employer was ordered to pay damages for violating the pupil. And she herself felt pressured to leave her job.

Teachers have the right to physically correct disruptive pupils. But those who do, and get reported, risk their jobs …

In September 2018, almost a year after the incident itself, the Schools Inspectorate also closed its investigation … Carina was cleared. The decision of the Child and School Student Representative was not primarily about Carina's conduct, but criticized the school and its governing body for failing to take responsibility and prevent the pupil from being violated. But it was she who had taken the hit and had to leave her job.

— It feels legally insecure and completely against the rules. How many teachers are subjected to this? she wonders.

Emma Leijnse/SDS

And then politicians wonder why it is so hard to get enough people to want to work as teachers in our country …

Knut Wicksell — the origins of Modern Monetary Theory

19 Jan, 2019 at 17:52 | Posted in Economics | 2 Comments

Many mainstream economists seem to think that the idea behind Modern Monetary Theory is new and stems from strange economic ideas.

New? Strange ideas? How about reading one of the great founders of neoclassical economics — Knut Wicksell. This is what Wicksell wrote in 1898 on "pure credit systems" in Interest and Prices (Geldzins und Güterpreise), 1936 (1898), p. 68f:

It is possible to go even further. There is no real need for money at all if a payment between two individuals can be accomplished by simply transferring the appropriate sum in the books of the banks …

A pure credit system has not yet … been completely developed in this form. But here and there it can be found, in the somewhat different guise of the banknote system …

We intend therefore, as a basis for the following discussion, to imagine a state of affairs in which money does not actually circulate at all, neither in the form of coin … nor in the form of notes, but where all domestic payments are effected by means of the giro system and bookkeeping transfers. A thorough analysis of this purely imaginary case seems to me to be worthwhile, for it provides a precise antithesis to the equally imaginary case of a pure cash system, in which credit plays no part whatever [the exact equivalent of the frequently assumed neoclassical "cash in advance" model — LPS] …

For the sake of simplicity, let us then assume that the whole monetary system of a country is in the hands of a single credit institution, provided with an adequate number of branches, at which each independent economic individual keeps an account on which he can draw cheques.

What Modern Monetary Theory (MMT) claims is exactly what Wicksell tried to envisage more than a century ago. The difference is that today the "pure credit economy" is a reality and not just a theoretical curiosity — MMT describes a fiat currency system that nowadays operates in virtually every country in the world.

In modern times legal currencies are entirely fiat-based. Currencies no longer have intrinsic value (as gold and silver did). What gives them value is basically the simple fact that taxes must be paid in them. This also lets governments run a kind of monopoly business in their currency, one they can never run out of. Accordingly, spending becomes the primary act, and taxation and borrowing are demoted to secondary acts. If we have a depression, the solution, then, is not austerity. It is spending. Budget deficits are not the main problem, since fiat money means that governments can always issue enough of it.

Lars P Syll

Funeral Blues

18 Jan, 2019 at 21:43 | Posted in Varia | 1 Comment

Stop all the clocks, cut off the telephone.
Prevent the dog from barking with a juicy bone,
Silence the pianos and with muffled drum
Bring out the coffin, let the mourners come.

Let aeroplanes circle moaning overhead
Scribbling in the sky the message He is Dead,
Put crêpe bows round the white necks of the public doves,
Let the traffic policemen wear black cotton gloves.

He was my North, my South, my East and West,
My working week and my Sunday rest
My noon, my midnight, my talk, my song;
I thought that love would last forever, I was wrong.

The stars are not wanted now; put out every one,
Pack up the moon and dismantle the sun.
Pour away the ocean and sweep up the wood;
For nothing now can ever come to any good.

W. H. Auden

Bengt Nilsson In Memoriam. A friend non plus ultra.

Eres Tú

18 Jan, 2019 at 21:16 | Posted in Varia | Comments Off on Eres Tú​


You are
Like the water of my spring
You are
The fire of my hearth
(You are)
Something like that, you are
(Like the fire of my bonfire)
Something like the fire of my bonfire
(You are)
Something like that, you are
(The wheat of my bread)
My life, something like that is what you are

A hundred years ago

18 Jan, 2019 at 19:19 | Posted in Politics & Society | 3 Comments

The treaty includes no provisions for the economic rehabilitation of Europe — nothing to make the defeated Central Empires into good neighbours, nothing to stabilize the new states of Europe …

The Council of Four paid no attention to these issues … Reparation was their main excursion into the economic field, and they settled it as a problem of theology, of politics, of electoral chicane, from every point of view except that of the economic future of the states whose destiny they were handling …

The danger confronting us, therefore, is the rapid depression of the standard of life of the European populations to a point which will mean actual starvation for some … And these in their distress may overturn the remnants of organization, and submerge civilization itself in their attempts to satisfy desperately the overwhelming needs of the individual …

In a very short time, therefore, Germany will not be in a position to give bread and work to her numerous millions of inhabitants, who are prevented from earning their livelihood by navigation and trade … “We do not know, and indeed we doubt,” the Report concludes, “whether the delegates of the Allied and Associated Powers realize the inevitable consequences which will take place … Those who sign this treaty will sign the death sentence of many millions of German men, women, and children.”

Paul Krugman — a methodological critique

16 Jan, 2019 at 16:03 | Posted in Economics | 50 Comments

Alex Rosenberg — chair of the philosophy department at Duke University and renowned economic methodologist — has an interesting article on What’s Wrong with Paul Krugman’s Philosophy of Economics in 3:AM Magazine. Writes Rosenberg:

Krugman writes: “So how do you do useful economics? In general, what we really do is combine maximization-and-equilibrium as a first cut with a variety of ad hoc modifications reflecting what seem to be empirical regularities about how both individual behavior and markets depart from this idealized case.”

But if you ask the New Classical economists, they’ll say, this is exactly what we do—combine maximizing-and-equilibrium with empirical regularities …

One thing that’s missing from Krugman’s treatment of economics is the explicit recognition of what Keynes, and before him Frank Knight, emphasized: the persistent presence of enormous uncertainty in the economy … Why is uncertainty so important? Because the more of it there is in the economy, the less scope there is for successful maximizing, and the more unstable are the equilibria the economy exhibits, if it exhibits any at all …

Along with uncertainty, the economy exhibits pervasive reflexivity: expectations about the economic future tend to actually shift that future … When combined, uncertainty and reflexivity together greatly limit the power of maximizing and equilibrium to do predictively useful economics. Reflexive relations between future expectations and outcomes are constantly breaking down at times and in ways about which there is complete uncertainty.

I think Rosenberg is on to something important here regarding Krugman’s neglect of methodological reflection.

When Krugman responded to my critique of IS-LM, it hardly came as a surprise. As Rosenberg notes, Krugman works with a very simple modelling dichotomy: models are either complex or simple. For years now, the self-proclaimed “proud neoclassicist” Paul Krugman has been harping on the same old IS-LM string, telling us about the splendour of the Hicksian invention — so, of course, Krugman always prefers the simpler models.

In a post on his blog, Krugman has argued that ‘Keynesian’ macroeconomics more than anything else “made economics the model-oriented field it has become.” In Krugman’s eyes, Keynes was a “pretty klutzy modeler,” and it was only thanks to Samuelson’s famous 45-degree diagram and Hicks’s IS-LM that things fell into place. Although admitting that economists have a tendency to use “excessive math” and “equate hard math with quality,” he still vehemently defends — and always has — the mathematization of economics:

I’ve seen quite a lot of what economics without math and models looks like — and it’s not good.

Sure, ‘New Keynesian’ economists like Krugman — and their forerunners, ‘Keynesian’ economists like Paul Samuelson and (young) John Hicks — certainly have contributed to making economics more mathematical and “model-oriented.”

But if these math-is-the-message modellers cannot show that the mechanisms or causes they isolate and handle in their mathematically formalized macromodels are stable, in the sense that they do not change when we “export” them to our “target systems,” then these mathematical models hold only under ceteris paribus conditions and are consequently of limited value for understanding, explaining or predicting real economic systems.

When it comes to modelling philosophy, Paul Krugman has earlier defended his position in the following words (my italics):

I don’t mean that setting up and working out microfounded models is a waste of time. On the contrary, trying to embed your ideas in a microfounded model can be a very useful exercise — not because the microfounded model is right, or even better than an ad hoc model, but because it forces you to think harder about your assumptions, and sometimes leads to clearer thinking. In fact, I’ve had that experience several times.

The argument is hardly convincing. If people put the enormous amount of time and energy that they do into constructing macroeconomic models, those models really have to contribute substantially to our ability to understand, explain and grasp real macroeconomic processes. If they do not, they should, after perhaps having sharpened our thoughts, be thrown into the waste-paper basket (something Keynes, the father of macroeconomics, used to do), and not, as today, be allowed to overrun our economics journals and confer celestial academic prestige on their authors.

Krugman’s explications on this issue are interesting also because they shed light on a kind of inconsistency in his way of arguing. For years now, Krugman has repeatedly criticized mainstream economics for using too much (bad) mathematics and axiomatics in its model-building endeavours. But when it comes to defending his own position on various issues, he usually ends up falling back on the same kind of models himself. In his End This Depression Now — just to take one example — Paul Krugman maintains that although he doesn’t buy “the assumptions about rationality and markets that are embodied in many modern theoretical models, my own included,” he still finds them useful “as a way of thinking through some issues carefully.”

When it comes to methodology and assumptions, Krugman obviously has a lot in common with the kind of model-building he otherwise criticizes. And as Rosenberg rightly notes:

When he accepts maximizing and equilibrium as the (only?) way useful economics is done Krugman makes a concession so great it threatens to undercut the rest of his arguments against New Classical economics.

Is every Muslim a Koran on two legs?

15 Jan, 2019 at 18:39 | Posted in Politics & Society | Comments Off on Ist jeder Muslim ein Koran auf zwei Beinen?



10.0

15 Jan, 2019 at 18:26 | Posted in Varia | Comments Off on 10.0

Our youngest daughter was into gymnastics for many years, so I’ve seen how much practice and effort it takes to do things like this. Amazing performance!

On the emptiness of Bayesian probabilism

15 Jan, 2019 at 17:49 | Posted in Statistics & Econometrics | 1 Comment

A major attraction of the personalistic [Bayesian] view is that it aims to address uncertainty that is not directly based on statistical data, in the narrow sense of that term. Clearly much uncertainty is of this broader kind. Yet when we come to specific issues I believe that a snag in the theory emerges. To take an example that concerns me at the moment: what is the evidence that the signals from mobile telephones or transmission base stations are a major health hazard? Because such telephones are relatively new and the latency period for the development of, say, brain tumours is long the direct epidemiological evidence is slender; we rely largely on the interpretation of animal and cellular studies and to some extent on theoretical calculations about the energy levels that are needed to induce certain changes. What is the probability that conclusions drawn from such indirect studies have relevance for human health? Now I can elicit what my personal probability actually is at the moment, at least approximately. But that is not the issue. I want to know what my personal probability ought to be, partly because I want to behave sensibly and much more importantly because I am involved in the writing of a report which wants to be generally convincing. I come to the conclusion that my personal probability is of little interest to me and of no interest whatever to anyone else unless it is based on serious and so far as feasible explicit information. For example, how often have very broadly comparable laboratory studies been misleading as regards human health? How distant are the laboratory studies from a direct process affecting health? The issue is not to elicit how much weight I actually put on such considerations but how much I ought to put.
Now of course in the personalistic approach having (good) information is better than having none but the point is that in my view the personalistic probability is virtually worthless for reasoned discussion unless it is based on information, often directly or indirectly of a broadly frequentist kind. The personalistic approach as usually presented is in danger of putting the cart before the horse.

David Cox

The nodal point here is that although Bayes’ theorem is mathematically unquestionable, that doesn’t qualify it as indisputably applicable to scientific questions. Science is not reducible to betting, and scientific inference is not a branch of probability theory. It always transcends mathematics. The unfulfilled dream of constructing an inductive logic of probabilism — the Bayesian Holy Grail — will always remain unfulfilled.

Bayesian probability calculus is far from the automatic inference engine that its protagonists maintain it is. That probabilities may work for expressing uncertainty when we pick balls from an urn does not automatically make them relevant for making inferences in science. Where do the priors come from? Wouldn’t it be better in science if we did some scientific experimentation and observation when we are uncertain, rather than starting to make calculations based on often vague and subjective personal beliefs? People hold lots of beliefs, and when they are plainly wrong, we do not do any calculations on them at all. We simply reject them. Is it, from an epistemological point of view, really credible to think that the Bayesian probability calculus makes it possible to somehow fully assess people’s subjective beliefs? And are — as many Bayesians maintain — all scientific controversies and disagreements really possible to explain in terms of differences in prior probabilities? I’ll be dipped!
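Where do the priors come from, and how much do they matter? A minimal sketch in Python (my own illustration, not anything from the quoted authors; the Beta-Bernoulli model and the specific prior parameters are assumptions chosen purely for illustration) of how much the posterior can hinge on the prior when data are sparse:

```python
# Sketch: with sparse data, the posterior is driven largely by the prior.
# Model: x successes in n Bernoulli trials, Beta(a, b) prior
# -> Beta(a + x, b + n - x) posterior.

def posterior_mean(a, b, x, n):
    """Posterior mean of the success probability under a Beta(a, b) prior."""
    return (a + x) / (a + b + n)

x, n = 2, 5  # sparse data: 2 successes in 5 trials
flat = posterior_mean(1, 1, x, n)        # uniform prior
sceptic = posterior_mean(1, 20, x, n)    # prior strongly favouring small p
enthusiast = posterior_mean(20, 1, x, n) # prior strongly favouring large p

# With only five observations, three equally 'coherent' Bayesians end up
# with wildly different posteriors; with thousands of observations the
# data would swamp the priors and the three would nearly coincide.
print(flat, sceptic, enthusiast)
```

With these numbers the three posterior means are roughly 0.43, 0.12 and 0.85: coherence alone does nothing to adjudicate between them.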

Keynes on the limits of econometric methods

14 Jan, 2019 at 20:20 | Posted in Statistics & Econometrics | 5 Comments

Am I right in thinking that the method of multiple correlation analysis essentially depends on the economist having furnished, not merely a list of the significant causes, which is correct so far as it goes, but a complete list? For example, suppose three factors are taken into account, it is not enough that these should be in fact vera causa; there must be no other significant factor. If there is a further factor, not taken account of, then the method is not able to discover the relative quantitative importance of the first three. If so, this means that the method is only applicable where the economist is able to provide beforehand a correct and indubitably complete analysis of the significant factors. The method is one neither of discovery nor of criticism. It is a means of giving quantitative precision to what, in qualitative terms, we know already as the result of a complete theoretical analysis.

John Maynard Keynes
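Keynes's point is, in modern terms, omitted-variable bias. A small simulation (my own sketch, assuming numpy is available; all coefficients are made-up illustration values) shows how leaving out one significant factor distorts the estimated importance of the factors that are included:

```python
# Leave one significant cause (w) out of the regression and the estimated
# importance of the included causes is distorted.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
w = rng.normal(size=n)             # the omitted significant factor
x1 = 0.8 * w + rng.normal(size=n)  # x1 is correlated with w
x2 = rng.normal(size=n)
y = 1.0 * x1 + 1.0 * x2 + 2.0 * w + rng.normal(size=n)

X_full = np.column_stack([x1, x2, w])
X_short = np.column_stack([x1, x2])

b_full, *_ = np.linalg.lstsq(X_full, y, rcond=None)
b_short, *_ = np.linalg.lstsq(X_short, y, rcond=None)

print(b_full[:2])  # close to the true (1.0, 1.0)
print(b_short)     # the coefficient on x1 absorbs part of w's effect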

The emotive power of music

14 Jan, 2019 at 18:46 | Posted in Varia | Comments Off on The emotive power of music


Davida Scheffers lived her dream of winning a contest and the chance to play with the Dutch orchestra. Davida suffers from an extremely painful neuromuscular condition that derailed her career, and she thought she would never play in a professional orchestra again. The young blonde woman is her daughter, who turned 18 that day.


Bayesianism — a patently absurd approach to science

13 Jan, 2019 at 14:54 | Posted in Theory of Science & Methodology | 7 Comments

Back in 1991, when yours truly earned his first PhD with a dissertation on decision making and rationality in social choice theory and game theory, I concluded that “repeatedly it seems as though mathematical tractability and elegance — rather than realism and relevance — have been the most applied guidelines for the behavioural assumptions being made. On a political and social level, it is doubtful if the methodological individualism, ahistoricity and formalism they are advocating are especially valid.”

This, of course, was like swearing in church. My mainstream colleagues were — to say the least — not exactly overjoyed.

The decision-theoretical approach I was most critical of was the one building on the then-reawakened Bayesian subjectivist (personalistic) interpretation of probability.

One of my inspirations when working on the dissertation was Henry E. Kyburg, and I still think his critique is the ultimate take-down of Bayesian hubris:

From the point of view of the “logic of consistency”, no set of beliefs is more rational than any other, so long as they both satisfy the quantitative relationships expressed by the fundamental laws of probability. Thus I am free to assign the number 1/3 to the probability that the sun will rise tomorrow; or, more cheerfully, to take the probability to be 9/10 that I have a rich uncle in Australia who will send me a telegram tomorrow informing me that he has made me his sole heir. Neither Ramsey, nor Savage, nor de Finetti, to name three leading figures in the personalistic movement, can find it in his heart to detect any logical shortcomings in anyone, or to find anyone logically culpable, whose degrees of belief in various propositions satisfy the laws of the probability calculus, however odd those degrees of belief may otherwise be …

Now this seems patently absurd. It is to suppose that even the most simple statistical inferences have no logical weight where my beliefs are concerned. It is perfectly compatible with these laws that I should have a degree of belief equal to 1/4 that this coin will land heads when next I toss it; and that I should then perform a long series of tosses (say, 1000), of which 3/4 should result in heads; and then that on the 1001st toss, my belief in heads should be unchanged at 1/4 …

There is another argument against both subjectivistic and logical theories that depends on the fact that probabilities are represented by real numbers … The point can be brought out by considering an old fashioned urn containing black and white balls. Suppose that we are in an appropriate state of ignorance, so that, on the logical view, as well as on the subjectivistic view, the probability that the first ball drawn will be black, is a half … Now suppose that we draw a thousand balls from this urn, and that half of them are black. Relative to this information both the subjectivistic and the logical theories would lead to the assignment of a conditional probability of 1/2 to the statement that a black ball will be drawn on the 1001st draw …

Although it does seem perfectly plausible that our bets concerning black balls and white balls should be offered at the same odds before and after the extensive sample, it surely does not seem plausible to characterize our beliefs in precisely the same way in the two cases … This is a strong argument, I think, for considering the measure of rational belief to be two dimensional …

Henry E. Kyburg

Almost a hundred years after John Maynard Keynes wrote his seminal A Treatise on Probability (1921), it is still very difficult to find mainstream economists who seriously try to incorporate his far-reaching and incisive analysis of induction and evidential weight.

Variation, not replication, is at the core of induction. Finding that p(x|y) = p(x|y & w) doesn’t make w ‘irrelevant.’ Knowing that the probability is unchanged when w is present gives p(x|y & w) a different evidential weight. Running 10 replicative experiments does not make you as ‘sure’ of your inductions as running 10 000 varied experiments — even if the probability values happen to be the same.
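The distinction between probability and evidential weight can be sketched numerically. In this hedged example (my own, using the spread of a Beta-Bernoulli posterior as a stand-in for ‘weight’; nothing here is from Keynes’s own formalism), two samples yield the same point probability but carry very different weight:

```python
# Same point probability, very different evidential weight,
# here proxied by the posterior standard deviation.
import math

def beta_sd(a, b):
    """Standard deviation of a Beta(a, b) distribution."""
    return math.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))

# 5 heads in 10 tosses vs 5000 heads in 10000, uniform Beta(1, 1) prior
small_mean = (1 + 5) / (2 + 10)        # = 0.5
large_mean = (1 + 5000) / (2 + 10000)  # ≈ 0.5

print(small_mean, large_mean)              # same point 'probability' ...
print(beta_sd(6, 6), beta_sd(5001, 5001))  # ... very different spread
```

The point estimate is 0.5 in both cases, but the posterior spread shrinks by more than an order of magnitude with the larger sample: Peirce’s “two numbers” rather than one.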

According to Keynes we live in a world permeated by unmeasurable uncertainty – not quantifiable stochastic risk – which often forces us to make decisions based on anything but ‘rational expectations.’ Keynes rather thinks that we base our expectations on the confidence or ‘weight’ we put on different events and alternatives. To Keynes, expectations are a question of weighing probabilities by ‘degrees of belief,’ beliefs that often have preciously little to do with the kind of stochastic probabilistic calculations made by the rational agents as modelled by mainstream economists.

How strange that mainstream economists do not even touch upon these aspects of scientific methodology, which seem so fundamental and important for anyone trying to understand how we learn and orient ourselves in an uncertain world. An educated guess as to why would be that Keynes’s two-dimensional concepts of evidential weight and uncertainty cannot be squeezed into a single calculable numerical ‘probability’ (Peirce had a similar view: “to express the proper state of belief, not one number but two are requisite, the first depending on the inferred probability, the second on the amount of knowledge on which that probability is based”). In the quest for calculable risk, one turns a blind eye to genuine uncertainty and looks the other way.

Kieslowski’s masterpiece

13 Jan, 2019 at 11:56 | Posted in Varia | Comments Off on Kieslowski’s masterpiece


Though I speak with the tongues of angels,
If I have not love…
My words would resound with but a tinkling cymbal.
And though I have the gift of prophecy…
And understand all mysteries…
and all knowledge…
And though I have all faith
So that I could remove mountains,
If I have not love…
I am nothing.

Economics research on independent schools and school competition

13 Jan, 2019 at 11:43 | Posted in Education & School | Comments Off on Nationalekonomisk forskning om friskolor och skolkonkurrens

On DN's debate page a few years ago, apropos a PISA report, one could read the following:

Just because there is a statistical correlation, there does not have to be a causal relationship … One example of how wrong things can go concerns the effects of school choice and competition. In the PISA report we read that there is no relation between countries' results and the share of pupils in independent schools. The same conclusion is drawn by Andreas Schleicher … Swedish educationalists and debaters on the left have gone a step further and claimed that school choice lies behind the drop in knowledge in international surveys …

At the same time, both of these claims are contradicted by the economics research on schools … The research methods used are not entirely free from objections, but they are far better than those used in the OECD's own analyses.

Let me begin by making clear that I entirely share the debaters' view of our limited ability to draw causal conclusions from pure correlations.

So far I am with them.

But — once again we are basically served the same old self-congratulatory tune — economics research on schools "shows" (hedged by the non-committal statement that the research methods used are said to be not "entirely free from objections") that more independent schools lead to better results. The problem remains, because at bottom what is being said — despite the invoked "rigorous studies" — is just as questionable as the "left-wing" interpretations of the PISA results being criticized!

Let me explain why I consider what the invoked "economics research on schools" says about school competition and independent schools to be just as "wrong" as the "left-wing interpretations" — and at the same time try to sort out what research and data actually say about the effects of school competition and independent schools on school and pupil results.

Continue Reading Economics research on independent schools and school competition …

Is ‘modern’ macroeconomics for real?

12 Jan, 2019 at 10:59 | Posted in Economics | 1 Comment

Empirically, far from isolating a microeconomic core, real-business-cycle models, as with other representative-agent models, use macroeconomic aggregates for their testing and estimation. Thus, to the degree that such models are successful in explaining empirical phenomena, they point to the ontological centrality of macroeconomic, not microeconomic, entities … At the empirical level, even the new classical representative-agent models are fundamentally macroeconomic in content …

The nature of microeconomics and macroeconomics — as they are currently practised — undermines the prospects for a reduction of macroeconomics to microeconomics. Both microeconomics and macroeconomics must refer to irreducible macroeconomic entities.

Kevin Hoover

Kevin Hoover has been writing on microfoundations for more than 25 years and is beyond any doubt the economist/econometrician/methodologist who has thought most about the issue. It is always interesting to compare his qualified and methodologically grounded assessment of the representative-agent rational-expectations microfoundationalist program with the more or less apologetic views of freshwater economists like Robert Lucas:

Given what we know about representative-agent models, there is not the slightest reason for us to think that the conditions under which they should work are fulfilled. The claim that representative-agent models provide microfoundations succeeds only when we steadfastly avoid the fact that representative-agent models are just as aggregative as old-fashioned Keynesian macroeconometric models. They do not solve the problem of aggregation; rather they assume that it can be ignored. While they appear to use the mathematics of microeconomics, the subjects to which they apply that microeconomics are aggregates that do not belong to any agent. There is no agent who maximizes a utility function that represents the whole economy subject to a budget constraint that takes GDP as its limiting quantity. This is the simulacrum of microeconomics, not the genuine article …

[W]e should conclude that what happens to the microeconomy is relevant to the macroeconomy but that macroeconomics has its own modes of analysis … [I]t is almost certain that macroeconomics cannot be euthanized or eliminated. It shall remain necessary for the serious economist to switch back and forth between microeconomics and a relatively autonomous macroeconomics depending upon the problem in hand.

Instead of just methodologically sleepwalking into their models, modern followers of the Lucasian microfoundational program ought to do some reflection and at least try to come up with a sound methodological justification for their position. Just looking the other way won’t do. Writes Hoover:

The representative-agent program elevates the claims of microeconomics in some version or other to the utmost importance, while at the same time not acknowledging that the very microeconomic theory it privileges undermines, in the guise of the Sonnenschein-Debreu-Mantel theorem, the likelihood that the utility function of the representative agent will be any direct analogue of a plausible utility function for an individual agent … The new classicals treat [the difficulties posed by aggregation] as a non-issue, showing no appreciation of the theoretical work on aggregation and apparently unaware that earlier uses of the representative-agent model had achieved consistency with theory only at the price of empirical relevance.

Where ‘New Keynesian’ and New Classical economists think that they can rigorously deduce the aggregate effects of (representative) actors with their reductionist microfoundational methodology, they — as argued in my On the use and misuse of theories and models in economics — have to turn a blind eye to the emergent properties that characterize all open social and economic systems. The interaction between animal spirits, trust, confidence, institutions, etc., cannot be deduced from or reduced to a question answerable at the individual level. Macroeconomic structures and phenomena have to be analyzed on their own terms as well.
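A toy example (my own sketch; the Cobb-Douglas agents and all the numbers are invented purely for illustration) of why aggregation fails: with heterogeneous consumers, aggregate demand depends on the income distribution and not only on total income, so no single maximizer facing total income can stand in for the economy:

```python
# With heterogeneous Cobb-Douglas consumers, aggregate demand depends on
# the income distribution, not just total income -- so no 'representative'
# utility maximizer facing total income can reproduce it.

def demand_good1(alpha, income, p1):
    """Cobb-Douglas demand for good 1: share alpha of income spent on it."""
    return alpha * income / p1

p1 = 1.0
alphas = (0.2, 0.8)  # two agents with different expenditure shares

# Same total income (100), two different distributions of it
dist_a = (50, 50)
dist_b = (90, 10)

agg_a = sum(demand_good1(a, m, p1) for a, m in zip(alphas, dist_a))
agg_b = sum(demand_good1(a, m, p1) for a, m in zip(alphas, dist_b))

# Different aggregate demand at identical total income and prices
print(agg_a, agg_b)
```

Redistributing income while holding the total fixed changes aggregate demand (here from 50 to 26 units of good 1), which is exactly the kind of distribution dependence the Sonnenschein-Mantel-Debreu results formalize.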

Långsamma timmar

12 Jan, 2019 at 10:54 | Posted in Varia | Comments Off on Långsamma timmar


Insignificant ‘statistical significance’

11 Jan, 2019 at 10:03 | Posted in Statistics & Econometrics | Comments Off on Insignificant ‘statistical significance’

We recommend dropping the NHST [null hypothesis significance testing] paradigm — and the p-value thresholds associated with it — as the default statistical paradigm for research, publication, and discovery in the biomedical and social sciences. Specifically, rather than allowing statistical significance as determined by p < 0.05 (or some other statistical threshold) to serve as a lexicographic decision rule in scientific publication and statistical decision making more broadly as per the status quo, we propose that the p-value be demoted from its threshold screening role and instead, treated continuously, be considered along with the neglected factors [such factors as prior and related evidence, plausibility of mechanism, study design and data quality, real world costs and benefits, novelty of finding, and other factors that vary by research domain] as just one among many pieces of evidence.

We make this recommendation for three broad reasons. First, in the biomedical and social sciences, the sharp point null hypothesis of zero effect and zero systematic error used in the overwhelming majority of applications is generally not of interest because it is generally implausible. Second, the standard use of NHST — to take the rejection of this straw man sharp point null hypothesis as positive or even definitive evidence in favor of some preferred alternative hypothesis — is a logical fallacy that routinely results in erroneous scientific reasoning even by experienced scientists and statisticians. Third, p-value and other statistical thresholds encourage researchers to study and report single comparisons rather than focusing on the totality of their data and results.

Andrew Gelman et al.

As shown over and over again when significance tests are applied, people have a tendency to read ‘not disconfirmed’ as ‘probably confirmed.’ Standard scientific methodology tells us that when there is only, say, a 10 % probability that pure sampling error could account for the observed difference between the data and the null hypothesis, it would be more ‘reasonable’ to conclude that we have a case of disconfirmation. Especially if we perform many independent tests of our hypothesis and they all give about the same 10 % result as our reported one, I guess most researchers would count the hypothesis as even more disconfirmed.

We should never forget that the underlying parameters we use when performing significance tests are model constructions. Our p-values mean nothing if the model is wrong. And most importantly — statistical significance tests DO NOT validate models!
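The point that p-values inherit all of a model's assumptions is easy to demonstrate. In the toy simulation below (entirely my own construction: the AR(1) error structure, sample size and number of trials are arbitrary illustrative choices), the null hypothesis of a zero mean is true in every single trial, yet the usual p < 0.05 rule rejects far more often than 5 % of the time as soon as the test's independence assumption fails:

```python
import numpy as np

def false_positive_rate(rho, n=50, trials=2000, seed=0):
    """Share of 'significant' results (|t| > 1.96) when testing a true
    zero mean with a t-test that assumes independent observations,
    while the data are actually AR(1)-correlated with coefficient rho."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        e = rng.standard_normal(n)
        x = np.empty(n)
        x[0] = e[0]
        for t in range(1, n):
            x[t] = rho * x[t - 1] + e[t]  # autocorrelated observations
        t_stat = x.mean() / (x.std(ddof=1) / np.sqrt(n))
        hits += abs(t_stat) > 1.96        # naive '5 %' decision rule
    return hits / trials
```

With rho = 0 the rejection rate stays near the nominal 5 %. With rho = 0.7 the very same threshold rejects several times as often, even though the null is true in every trial: the model, not the hypothesis, is what failed.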

In journal articles a typical regression equation will have an intercept and several explanatory variables. The regression output will usually include an F-test, with p – 1 degrees of freedom in the numerator and n – p in the denominator. The null hypothesis will not be stated. The missing null hypothesis is that all the coefficients vanish, except the intercept.

If F is significant, that is often thought to validate the model. Mistake. The F-test takes the model as given. Significance only means this: if the model is right and the coefficients are 0, it is very unlikely to get such a big F-statistic. Logically, there are three possibilities on the table:
i) An unlikely event occurred.
ii) Or the model is right and some of the coefficients differ from 0.
iii) Or the model is wrong.

David Freedman
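How easily a ‘significant’ F can coexist with possibility (iii) is shown by the classic screening exercise. In the sketch below (illustrative numpy code; the sample sizes and the correlation cut-off are arbitrary assumptions of mine), the response and all fifty candidate regressors are independent pure noise, but because the regressors are screened on the same data before the refit, the F-test, which takes the model as given, comes out ‘significant’ as a matter of routine:

```python
import numpy as np

def ols_F(y, X):
    """F-statistic for regressing y on X plus an intercept; the implicit
    null is that every slope coefficient is zero."""
    n, k = X.shape
    Xc = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
    rss = ((y - Xc @ beta) ** 2).sum()
    tss = ((y - y.mean()) ** 2).sum()
    return ((tss - rss) / k) / (rss / (n - k - 1))

def screened_F(n=100, p=50, r_cut=0.2, seed=None):
    """y and all p candidate regressors are independent noise.  Keep the
    columns whose sample correlation with y clears r_cut, refit on the
    survivors, and return that regression's F-statistic (None if no
    column survives the screen)."""
    rng = np.random.default_rng(seed)
    y = rng.standard_normal(n)
    X = rng.standard_normal((n, p))
    r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(p)])
    keep = np.abs(r) > r_cut
    if not keep.any():
        return None
    return ols_F(y, X[:, keep])
```

Averaged over many seeds, the refit F-statistics comfortably exceed the conventional 5 % critical values (about 3.1 for F(2, 97)), even though there is nothing but noise in the data.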

Handy missing data methodologies

10 Jan, 2019 at 19:16 | Posted in Statistics & Econometrics | 2 Comments

On October 13, 2012, Manny Fernandez reported in The New York Times that former El Paso schools superintendent Lorenzo Garcia was sentenced to prison for his role in orchestrating a testing scandal. The Texas Assessment of Knowledge and Skills (TAKS) is a state-mandated test for high-school sophomores. The TAKS missing data algorithm was to treat missing data as missing-at-random, and hence the score for the entire school was based solely on those who showed up. Such a methodology is so easy to game that it was clearly a disaster waiting to happen. And it did. The missing data algorithm used by Texas was obviously understood by school administrators; all aspects of their scheme were to keep potentially low-scoring students out of the classroom so they would not take the test and possibly drag scores down. Students identified as likely low performing “were transferred to charter schools, discouraged from enrolling in school, or were visited at home by truant officers and told not to go to school on test day.”

But it didn’t stop there. Some students had credits deleted from transcripts or grades changed from passing to failing so they could be reclassified as freshmen and avoid testing. Sometimes, students who were intentionally held back were allowed to catch up before graduation with “turbo-mesters,” in which a student could acquire the necessary credits for graduation in a few hours in front of a computer.

Howard Wainer
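The arithmetic behind the scheme is brutally simple. Here is a toy simulation (the score distribution and the exclusion rate are invented for illustration) of what happens to a school's reported mean when the weakest students are kept away on test day and their scores are treated as missing-at-random:

```python
import numpy as np

def reported_mean(scores, exclude_frac):
    """School mean if the lowest-scoring fraction of students is kept
    away on test day and the absent are simply ignored, i.e. treated
    as missing-at-random."""
    scores = np.sort(np.asarray(scores, dtype=float))
    cut = int(len(scores) * exclude_frac)
    return scores[cut:].mean()

rng = np.random.default_rng(42)
cohort = rng.normal(70, 10, size=1000)   # hypothetical true scores
honest = cohort.mean()                   # what the school actually achieved
gamed = reported_mean(cohort, 0.15)      # bottom 15 % 'transferred out'
```

Excluding the bottom 15 % of a normal score distribution lifts the reported mean by roughly a quarter of a standard deviation, without anyone learning anything.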

Groundbreaking study shows parachutes do not reduce death when jumping from aircraft

10 Jan, 2019 at 16:00 | Posted in Statistics & Econometrics | 1 Comment

Parachute use compared with a backpack control did not reduce death or major traumatic injury when used by participants jumping from aircraft in this first randomized evaluation of the intervention. This largely resulted from our ability to only recruit participants jumping from stationary aircraft on the ground. When beliefs regarding the effectiveness of an intervention exist in the community, randomized trials evaluating their effectiveness could selectively enroll individuals with a lower likelihood of benefit, thereby diminishing the applicability of trial results to routine practice. Therefore, although we can confidently recommend that individuals jumping from small stationary aircraft on the ground do not require parachutes, individual judgment should be exercised when applying these findings at higher altitudes.

Robert W Yeh et al.

Yep — background knowledge sure is important when experimenting …

‘Ideally controlled experiments’ tell us with certainty what causes what effects — but only given the right ‘closures.’ Making appropriate extrapolations from (ideal, accidental, natural or quasi) experiments to different settings, populations or target systems, is not easy. ‘It works there’ is no evidence for ‘it will work here.’ Causes deduced in an experimental setting still have to show that they come with an export-warrant to the target population/system. The causal background assumptions made have to be justified, and without licenses to export, the value of ‘rigorous’ and ‘precise’ methods — and ‘on-average-knowledge’ — is despairingly small.

RCTs have very little reach beyond giving descriptions of what has happened in the past. From the perspective of the future and for policy purposes they are as a rule of limited value since they cannot tell us what background factors were held constant when the trial intervention was being made.

RCTs usually do not provide evidence that their results are exportable to other target systems. They cannot be taken for granted to give generalizable results. That something works somewhere for someone is no warrant for believing it will work for us here, or even that it works generally.
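The export-warrant problem can be made concrete with a toy simulation (everything here is invented for illustration: the injury probabilities, the altitude variable, the sample sizes). The treatment effect depends entirely on a background factor, altitude, that is held fixed, and goes unreported, within each trial:

```python
import numpy as np

def estimated_risk_reduction(altitudes, seed=0):
    """Randomize the 'parachute' within the enrolled group and return
    the difference in injury rates, control minus treated.  Injury is
    almost certain only for untreated jumps from altitude; otherwise
    it is rare."""
    rng = np.random.default_rng(seed)
    altitudes = np.asarray(altitudes)
    n = len(altitudes)
    treated = rng.permutation(n) < n // 2  # random half gets a parachute
    p_injury = np.where((altitudes > 0) & ~treated, 0.95, 0.01)
    injured = rng.random(n) < p_injury
    return injured[~treated].mean() - injured[treated].mean()

# The identical randomized protocol, run in two different settings:
ground_trial = estimated_risk_reduction(np.zeros(1000))         # on the ground
airborne_trial = estimated_risk_reduction(np.full(1000, 4000))  # from 4000 m
```

Both runs follow the same protocol. The ground trial estimates an effect near zero, the airborne one an effect near one, and nothing in the ground trial's own output signals that its result will not travel.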

What counts as evidence?

10 Jan, 2019 at 15:29 | Posted in Theory of Science & Methodology | Comments Off on What counts as evidence?

What counts as evidence? I suspect we tend to overweight some kinds of evidence, and underweight others.

Yeh’s paper is a lovely illustration of a general problem with randomized controlled trials – that they tell us how a treatment worked under particular circumstances, but are silent about its effects in other circumstances. They can lack external validity. Yeh shows that parachutes are useless for someone jumping from a plane when it is on the ground. But this tells us nothing about their value when the plane is in the air – which is an important omission.

We should place this problem with RCTs alongside two other Big Facts in the social sciences. One is the replicability crisis … The other (related) is the fetishization of statistical significance despite the fact that, as Deirdre McCloskey has said, it “has little to do with a defensible notion of scientific inference, error analysis, or rational decision making” and “is neither necessary nor sufficient for proving discovery of a scientific or commercially relevant result.”

If we take all this together, it suggests that a lot of conventional evidence isn’t as compelling as it seems. Which suggests that maybe the converse is true.

Stumbling and Mumbling
