Profit-free schools? Yes, please!

31 August, 2018 at 15:53 | Posted in Education & School | 1 Comment

As discussed earlier, the provider's incentives will play a significant role in the activities carried out, and it is difficult to prevent, via regulation, the provider's motives from pulling in an undesired direction …

For example, there is reason to question whether the incentives for cost minimization that the profit motive gives rise to are really appropriate in schools. The overall responsibility, and the per-pupil budget, that a school has for safeguarding its pupils' interests are in many ways hard to reconcile with the profit motive: there is, for instance, a risk that the provider does not choose the best-suited teaching materials and methods but instead prioritizes the cheaper ones. Even if competition for pupils has certain positive effects, there is also reason to question whether the regulatory framework can really handle the negative consequences that competition for pupils may give rise to, and whether strong competition is compatible with the school's role as a public authority.

When the system shift in the welfare sector began in the 1990s, a frequent argument for privatization was that it would do away with the costs of bureaucratic logic: regulations, controls, and follow-ups. Competition — that panacea of market fundamentalism — would make operations more efficient and raise the quality of services. Market logic would force out the 'bureaucratic' and unwieldy public providers, leaving only the good companies that 'freedom of choice' had made possible.

Now that the panglossian privatization wet dream has turned out to be a nightmare, our politicians unfortunately believe that precisely what they wanted to get rid of — regulations and 'bureaucratic' supervision and control — is the solution.

It makes you slap your forehead, and for many reasons!

Because if the proposed policy packages are to be implemented, one wonders what happens to that efficiency gain. Controls, contract specifications, inspections and so on cost money. How much surplus will the privatizations yield once these costs are also entered into the cost-benefit analysis? And how much is that 'freedom of choice' worth when we see, time and again, that it merely results in operations where profits are generated through cost-cutting and lowered quality?

The decision to let profit-driven companies into the welfare sector has been a costly mistake. Chile put a stop to this in the school sector a couple of years ago. Sweden is now the only country in the world that accepts the profit motive in tax-financed schools. But if Chile can correct its mistakes, so should we!

The fundamental question is not whether tax-financed private companies should be allowed to extract profits, or whether tougher measures in the form of control and inspection are needed. The fundamental question is whether it is the logic of the market and privatization that should govern our welfare institutions, or the logic of democracy and politics. The fundamental question is whether the common welfare sector is to be governed by democracy and politics or by the market.


After the crisis — business as usual

31 August, 2018 at 10:15 | Posted in Economics | Comments Off on After the crisis — business as usual

In contrast to the experience of the Great Depression, which led to the emergence and acceptance of novel theoretical concepts on a large scale, the financial crisis and its consequences have, by and large, been rationalized with reference to existing theoretical concepts. Although we do observe a slight shift away from the idea that financial markets are efficient by default and prices only follow random walks, the basic conceptualization of (financial) markets as being efficient and equilibrating in principle seems unquestioned. On the contrary, the rising prominence of the concept of “liquidity” – understood as the availability of funds to absorb financial assets to be sold – in the aftermath of the crisis indicates that the financial crisis is seen by economists as a major external shock, unforeseen because of the limits imposed on rational behavior by asymmetric information, and not as something intrinsic to the economic process. Similarly, our analysis of the reception of major crisis-related books shows an only temporary increase of interest in classic contributions dealing with financial and economic instability, which was even weaker for more distinguished journals. These observations signify a key difference in terms of the ‘lessons learned’ from past crises when compared to the Great Depression, which gave rise to a broad consensus that capitalist economies are not self-sustaining, a consensus that eventually helped to forge the mixed economies dominating the richer parts of the planet.

E Aigner, M Aistleitner, F Glötzl & J Kapeller

When did you last hear an economist say something like this?

30 August, 2018 at 22:22 | Posted in Economics | 1 Comment

If the observations of the red shift in the spectra of massive stars don’t come out quantitatively in accordance with the principles of general relativity, then my theory will be dust and ashes.

Albert Einstein (1920)

The Gambler’s Ruin (wonkish)

30 August, 2018 at 16:22 | Posted in Statistics & Econometrics | Comments Off on The Gambler’s Ruin (wonkish)

 

[In case you’re curious what happens if you start out with $25 but we change the probabilities — from 0.50, 0.50 into e.g. 0.49, 0.51 — you can check this out easily with e.g. Gretl:

matrix B = {1,0,0,0; 0.51,0,0.49,0; 0,0.51,0,0.49; 0,0,0,1}
matrix v25 = {0,1,0,0}
matrix X = v25*B^50
X

which gives X = 0.68 0.00 0.00 0.32]
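The same check can be replicated in e.g. Python/NumPy — a sketch of the Gretl snippet above, using the same four-state absorbing Markov chain (states $0, $25, $50, $75, win probability 0.49 per bet):

```python
import numpy as np

# Transition matrix over the states {ruined, $25, $50, won},
# with loss probability 0.51 and win probability 0.49 per bet
B = np.array([[1.00, 0.00, 0.00, 0.00],
              [0.51, 0.00, 0.49, 0.00],
              [0.00, 0.51, 0.00, 0.49],
              [0.00, 0.00, 0.00, 1.00]])
v25 = np.array([0.0, 1.0, 0.0, 0.0])      # start with $25

# Distribution over the states after 50 bets
X = v25 @ np.linalg.matrix_power(B, 50)
print(X.round(2))                          # ≈ [0.68 0.   0.   0.32]
```

After 50 bets virtually all probability mass sits in the two absorbing states, so the 0.68/0.32 split is effectively the ruin/win probability of the slightly unfavourable game.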

What good are economists?

30 August, 2018 at 14:29 | Posted in Varia | Comments Off on What good are economists?

 

Die Risikogesellschaft

30 August, 2018 at 13:06 | Posted in Theory of Science & Methodology | Comments Off on Die Risikogesellschaft

 

Bad Science

30 August, 2018 at 08:33 | Posted in Statistics & Econometrics | Comments Off on Bad Science

When scientists have found something out, when can you actually rely on it? One answer: when peers have reviewed the study. Another: when it has been published in a renowned journal. But sometimes even both together are not enough, as researchers have now shown. And they did so in the best and most laborious way possible: they repeated the underlying experiments and checked whether the same results came out again.

At issue were 21 social-science studies from the journals Nature and Science. It doesn't get more prestigious than that. And of course papers submitted there are reviewed by experts (peer review). Nevertheless, in almost 40 percent of the cases the same result did not come out again — mostly, nothing came out at all …

Even when repeating the experiments produced similar effects, these were noticeably smaller than in the original — on average only three-quarters as large. If the non-replicable studies are included, the average effect across all replications even shrinks to half. That is why research critic John Ioannidis says: "When you read an article about a social-science experiment in Nature or Science, you have to halve the effect right away."

Die Zeit

Who the hell owns what?

29 August, 2018 at 20:05 | Posted in Varia | Comments Off on Who the hell owns what?

 

Dedicated to Maggan in Horn, my and the children's kiosk favourite in Östergötland.

Jimmie Åkesson confirms what we have long suspected

29 August, 2018 at 15:59 | Posted in Politics & Society | Comments Off on Jimmie Åkesson confirms what we have long suspected

When Jimmie Åkesson was interviewed earlier today on P3's Morgonpasset, he began by trashing the entire channel. "Had I been in charge here, I would have shut down P3 immediately. I think it is left-liberal drivel," said Åkesson.

That P3 is a lousy channel many of us can probably agree on: a bunch of opinionated word-spewers playing easy-listening music between their torrents of verbiage. That this would be left-liberal is, however, rather harder to swallow. But the interview was interesting insofar as it confirmed what we have long suspected: Åkesson, like the American 'fake news' president Donald Trump, is prepared to use political power to shut down media he dislikes, the way dictators do. It is actually also a little reminiscent of what a certain German fellow was up to in the 1930s. And soon there is an election …

The calibration hoax

29 August, 2018 at 08:34 | Posted in Economics | 4 Comments

There are many kinds of useless economics held in high regard within the mainstream economics establishment today. Few — if any — are less deserving than the macroeconomic theory/method — mostly connected with Nobel laureates Finn Kydland, Robert Lucas, Edward Prescott and Thomas Sargent — called calibration.

In physics, it may possibly not strain credulity too much to model processes as ergodic – where time and history do not really matter – but in the social and historical sciences it is obviously ridiculous. If societies and economies were ergodic worlds, why do econometricians fervently discuss things such as structural breaks and regime shifts? That they do is an indication of how unrealistic it is to treat open systems as analyzable with ergodic concepts.
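What hangs on ergodicity can be seen in a toy illustration (hypothetical numbers, not a model of any actual economy): a multiplicative process whose ensemble average grows even though almost every individual trajectory shrinks, so that the ensemble average tells you nothing about what a single history experiences.

```python
import numpy as np

rng = np.random.default_rng(42)
n_paths, T = 100_000, 100

# Each period, wealth is multiplied by 1.5 or 0.6 with equal probability.
# The expected growth factor per period is 0.5*1.5 + 0.5*0.6 = 1.05 > 1,
# but the time-average growth factor is sqrt(1.5*0.6) ≈ 0.95 < 1.
factors = rng.choice([1.5, 0.6], size=(n_paths, T))
wealth = factors.prod(axis=1)       # terminal wealth, starting from 1

ensemble_mean = wealth.mean()       # in expectation grows like 1.05**T
typical_path = np.median(wealth)    # the typical path decays like 0.9**(T/2)
```

The ensemble mean ends up far above 1 while the median path ends up close to zero: averaging over imaginary parallel worlds and following one actual history through time give opposite answers, which is exactly what ergodicity would rule out.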

The future is not reducible to a known set of prospects. It is not like sitting at the roulette table and calculating what the future outcomes of spinning the wheel will be. Reading Sargent and other calibrationists one comes to think of Robert Clower’s apt remark that

much economics is so far removed from anything that remotely resembles the real world that it’s often difficult for economists to take their own subject seriously.

Instead of just assuming calibration and rational expectations to be right, one ought to confront the hypothesis with the available evidence. It is not enough to construct models. Anyone can construct models. To be seriously interesting, models have to come with an aim. They have to have an intended use. If the intention of calibration and rational expectations is to help us explain real economies, it has to be evaluated from that perspective. A model or hypothesis without a specific applicability is not really deserving of our interest.

To say, as Edward Prescott, that

one can only test if some theory, whether it incorporates rational expectations or, for that matter, irrational expectations, is or is not consistent with observations

is not enough. Without strong evidence, all kinds of absurd claims and nonsense may pretend to be science. We have to demand more of a justification than this rather watered-down version of ‘anything goes’ when it comes to rationality postulates. If one proposes rational expectations one also has to support its underlying assumptions. None is given, which makes it rather puzzling how rational expectations has become the standard modelling assumption made in much of modern macroeconomics. Perhaps the reason is, as Paul Krugman has it, that economists often mistake

beauty, clad in impressive-looking mathematics, for truth.

But I think Prescott’s view is also the reason why calibration economists are not particularly interested in empirical examinations of how real choices and decisions are made in real economies. In the hands of Lucas, Prescott and Sargent, rational expectations has been transformed from an – in principle – testable hypothesis into an irrefutable proposition. Irrefutable propositions may be comfortable — like religious convictions or ideological dogmas — but they are not science.

When you are old (personal)

28 August, 2018 at 19:45 | Posted in Varia | 1 Comment

 

Deutschland — Ungleichland

28 August, 2018 at 17:05 | Posted in Politics & Society | Comments Off on Deutschland — Ungleichland

 

Some common misunderstandings about randomization

28 August, 2018 at 14:22 | Posted in Statistics & Econometrics | Comments Off on Some common misunderstandings about randomization

Randomization is an alternative when we do not know enough to control, but is generally inferior to good control when we do. We suspect that at least some of the popular and professional enthusiasm for RCTs, as well as the belief that they are precise by construction, comes from misunderstandings about … random or realized confounding on the one hand and confounding in expectation on the other …

The RCT strategy is only successful if we are happy with estimates that are arbitrarily far from the truth, just so long as the errors cancel out over a series of imaginary experiments. In reality, the causality that is being attributed to the treatment might, in fact, be coming from an imbalance in some other cause in our particular trial; limiting this requires serious thought about possible covariates.

Angus Deaton & Nancy Cartwright

The point of a randomized experiment is often said to be that it ‘ensures’ that any correlation between a supposed cause and effect indicates a genuine causal relation. This is believed to hold since randomization (allegedly) ensures that a supposed causal variable does not correlate with other variables that may influence the effect.

The problem with that simplistic view of randomization is that the claims made for it are both exaggerated and false:

• Even if you manage to make the assignment to treatment and control groups ideally random, the sample selection certainly is — except in extremely rare cases — not random. Even if we make a proper randomized assignment, if we apply the results to a biased sample, there is always the risk that the experimental findings will not apply. What works ‘there’ does not necessarily work ‘here.’ Randomization hence does not ‘guarantee’ or ‘ensure’ making the right causal claim. Although randomization may help us rule out certain possible causal claims, randomization per se does not guarantee anything!

• Even if both sampling and assignment are made in an ideally random way, performing standard randomized experiments only gives you averages. The problem here is that although we may get an estimate of the ‘true’ average causal effect, this may ‘mask’ important heterogeneous effects of a causal nature. Even when we get the right answer that the average causal effect is 0, those who are ‘treated’ may have causal effects equal to -100 and those ‘not treated’ may have causal effects equal to 100. Contemplating being treated or not, most people would probably want to know about this underlying heterogeneity and would not consider the average effect particularly enlightening.

• There is almost always a trade-off between bias and precision. In real-world settings, a little bias is often a price worth paying for greater precision. And — most importantly — in case we have a population with sizeable heterogeneity, the average treatment effect of the sample may differ substantially from the average treatment effect in the population. If so, the value of any extrapolating inferences made from trial samples to other populations is highly questionable.

• Since most real-world experiments and trials are built on a single randomization, thinking about what would happen if you kept on randomizing forever does not help you to ‘ensure’ or ‘guarantee’ that you avoid false causal conclusions in the one particular randomized experiment you actually do perform. It is indeed difficult to see why thinking about what you know you will never do should reassure you about what you actually do.

Randomization is not a panacea. It is not the best method for all questions and circumstances. Proponents of randomization make claims about its ability to deliver causal knowledge that are simply wrong. There are good reasons to be sceptical of the now popular — and ill-informed — view that randomization is the only valid and best method on the market. It is not.
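The heterogeneity point in the bullet list above can be made concrete with a toy simulation (all numbers hypothetical): an ideally randomized trial recovers an average treatment effect near zero even when every single individual is strongly affected by the treatment.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000

# Hypothetical population: half are helped (+100), half are harmed (-100)
individual_effect = np.where(rng.random(n) < 0.5, 100.0, -100.0)
baseline = rng.normal(50.0, 5.0, size=n)

treated = rng.random(n) < 0.5                  # ideal random assignment
outcome = baseline + treated * individual_effect

# Difference in group means: the estimated average treatment effect
ate = outcome[treated].mean() - outcome[~treated].mean()
# ate comes out close to 0, masking individual effects of +/-100
```

The randomization here is flawless, and the estimate of the average effect is correct; the point is only that the average is not the quantity a person deciding whether to take the treatment would care about.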

Modern economics — a severe case of model Platonism

28 August, 2018 at 09:24 | Posted in Economics | 1 Comment

That is the great thing about abstraction. Working with what can be called ‘flex price’ models does not imply that you think price rigidity is unimportant, but instead that it can often be ignored if you want to focus on other processes.

Simon Wren-Lewis

When applying deductivist thinking to economics, mainstream economists like Wren-Lewis usually set up ‘as if’ models based on a set of tight axiomatic assumptions from which consistent and precise inferences are made. The beauty of this procedure is, of course, that if the axiomatic premises are true, the conclusions necessarily follow. The snag is that if the models are to be relevant, we also have to argue that their precision and rigour still hold when they are applied to real-world situations. They often do not. When addressing real economies, the idealizations necessary for the deductivist machinery to work — such as ‘flex price’ models — simply do not hold.

If the real world is fuzzy, vague and indeterminate, then why should our models build upon a desire to describe it as precise and predictable? The logic of idealization is a marvellous tool in mathematics and axiomatic-deductivist systems, but a poor guide for action in real-world systems, in which concepts and entities are without clear boundaries and continually interact and overlap.

The neoclassical style of thought – with its emphasis on thought experiments, reflection on the basis of illustrative examples and logically possible extreme cases, its use of model construction as the basis of plausible assumptions, as well as its tendency to decrease the level of abstraction, and similar procedures — appears to have had such a strong influence on economic methodology that even theoreticians who strongly value experience can only free themselves from this methodology with difficulty …

Clearly, it is possible to interpret the ‘presuppositions’ of a theoretical system not as hypotheses, but simply as limitations to the area of application of the system in question. Since a relationship to reality is usually ensured by the language used in economic statements, in this case the impression is generated that a content-laden statement about reality is being made, although the system is fully immunized and thus without content. In my view that is often a source of self-deception in pure economic thought …

A further possibility for immunizing theories consists in simply leaving open the area of application of the constructed model so that it is impossible to refute it with counter examples. This of course is usually done without a complete knowledge of the fatal consequences of such methodological strategies for the usefulness of the theoretical conception in question, but with the view that this is a characteristic of especially highly developed economic procedures: the thinking in models, which, however, among those theoreticians who cultivate neoclassical thought, in essence amounts to a new form of Platonism.

Hans Albert

Bad Sweden? Good Sweden?

26 August, 2018 at 23:36 | Posted in Politics & Society | 3 Comments

 

Prayer

26 August, 2018 at 20:22 | Posted in Varia | Comments Off on Prayer

 

Lill forever

26 August, 2018 at 20:16 | Posted in Varia | Comments Off on Lill forever

 

Technobabble economics

26 August, 2018 at 13:05 | Posted in Economics | Comments Off on Technobabble economics


Deductivist modeling endeavours and an overly simplistic use of statistical and econometric tools are sure signs of the explanatory hubris that still haunts mainstream economics.

In an interview Robert Lucas says

the evidence on postwar recessions … overwhelmingly supports the dominant importance of real shocks.

So, according to Lucas, changes in tastes and technologies should be able to explain the main fluctuations in e.g. the unemployment that we have seen during the last six or seven decades. But really — not even a Nobel laureate could in his wildest imagination come up with any warranted and justified explanation solely based on changes in tastes and technologies.

The Chicago übereconomist is simply wrong. But how do we protect ourselves from this kind of scientific nonsense? In The Scientific Illusion in Empirical Macroeconomics, Larry Summers has a suggestion well worth considering — not least since it makes it easier to understand how mainstream economics actively contributes to causing economic crises rather than to solving them:

A great deal of the theoretical macroeconomics done by those professing to strive for rigor and generality, neither starts from empirical observation nor concludes with empirically verifiable prediction …

The typical approach is to write down a set of assumptions that seem in some sense reasonable, but are not subject to empirical test … and then derive their implications and report them as a conclusion …

What then do these exercises teach us about the world? … If empirical testing is ruled out, and persuasion is not attempted, in the end I am not sure these theoretical exercises teach us anything at all about the world we live in …

Serious economists who respond to questions about how today’s policies will affect tomorrow’s economy by taking refuge in technobabble about how the question is meaningless in a dynamic games context abdicate the field to those who are less timid. No small part of our current economic difficulties can be traced to ignorant zealots who gained influence by providing answers to questions that others labeled as meaningless or difficult. Sound theory based on evidence is surely our best protection against such quackery.

Tractability, truth, and ignorability

25 August, 2018 at 15:37 | Posted in Statistics & Econometrics | 1 Comment

Most attempts at causal inference in observational studies are based on assumptions that treatment assignment is ignorable. Such assumptions are usually made casually, largely because they justify the use of available statistical methods and not because they are truly believed.

Marshall Joffe et al.

An interesting (but from a technical point of view rather demanding) article on a highly questionable assumption used in ‘potential outcome’ causal models. It made yours truly think of how tractability has come to override reality and truth also in modern mainstream economics.

Having a ‘tractable’ model is of course great, since it usually means that you can solve it. But using ‘simplifying’ tractability assumptions (rational expectations, common knowledge, representative agents, linearity, additivity, ergodicity, etc.) because the models otherwise cannot be ‘manipulated’ or made to yield ‘rigorous’ and ‘precise’ predictions and explanations does not exempt economists from having to justify their modelling choices. Being able to ‘manipulate’ things in models cannot per se be enough to warrant a methodological choice. If economists do not think their tractability assumptions make for good and realistic models, it is certainly fair to ask for clarification of the ultimate goal of the whole modelling endeavour.

Take for example the ongoing discussion on rational expectations as a modelling assumption. Those who want to build macroeconomics on microfoundations usually maintain that the only robust policies are those based on rational expectations and representative actors models. As yours truly has tried to show in On the use and misuse of theories and models in mainstream economics there is really no support for this conviction at all. If microfounded macroeconomics has nothing to say about the real world and the economic problems out there, why should we care about it? The final court of appeal for macroeconomic models is not whether we — once we have made our tractability assumptions — can ‘manipulate’ them, but the real world. And as long as no convincing justification is put forward for how the inferential bridging is de facto made, macroeconomic model building is little more than hand-waving that gives us little warrant for making inductive inferences from models to real-world target systems. If substantive questions about the real world are being posed, it is the formalistic-mathematical representations utilized to analyze them that have to match reality, not the other way around.

Berkson’s paradox or why attractive people you date tend to be jerks

24 August, 2018 at 15:16 | Posted in Statistics & Econometrics | 4 Comments

Have you ever noticed that, among the people you date, the attractive ones tend to be jerks? Instead of constructing elaborate psychosocial theories, consider a simpler explanation. Your choice of people to date depends on two factors, attractiveness and personality. You’ll take a chance on dating a mean attractive person or a nice unattractive person, and certainly a nice attractive person, but not a mean unattractive person … This creates a spurious negative correlation between attractiveness and personality. The sad truth is that unattractive people are just as mean as attractive people — but you’ll never realize it, because you’ll never date somebody who is both mean and unattractive.
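Pearl's selection story is easy to check numerically. A minimal sketch (hypothetical trait scores, independent by construction) that conditions on the dating rule described in the quote:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
attractive = rng.standard_normal(n)   # the two traits are independent
nice = rng.standard_normal(n)

# Unconditionally, the correlation is essentially zero
r_all = np.corrcoef(attractive, nice)[0, 1]

# But you only date people with at least one redeeming quality,
# i.e. you exclude the mean-and-unattractive quadrant
dated = (attractive > 0) | (nice > 0)
r_dated = np.corrcoef(attractive[dated], nice[dated])[0, 1]
# Conditioning induces a clearly negative correlation (~ -0.3)
```

Nothing causal connects the two traits; the negative correlation appears purely because conditioning on the union of two events makes them informative about each other.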

