What do we want schools for?

30 Nov, 2019 at 17:11 | Posted in Education & School

Reform impulses in school pedagogy should eventually stop relying on the motives, meanings and semantics of the seventies. Otherwise it will go as when an orchestra stubbornly keeps playing the old familiar tunes without caring that the audience left the hall long ago. It is time to play something new; one should dare a new beginning. Farewell to the seventies!
                                   Thomas Ziehe, ”Adjö till sjuttiotalet” (”Farewell to the seventies”), KRUT 2/1998

We live in an unequal society, and inequality is increasing in many areas, not least with regard to income and wealth. The differences in living conditions between groups along lines of class, ethnicity and gender are unacceptably large.

And in the world of education, family background evidently still matters greatly for pupils’ achievement. Worse still, it matters more the older the pupils get. It can, of course, only be regarded as a complete failure when a school system with compensatory aspirations displays a pattern in which parents’ educational background has ever greater impact the older the pupil becomes.
 
Contrary to all the promises of progressive education, it is above all children from homes without academic traditions who have lost out in the transformation of how schooling has been viewed over the past half-century. Today – with vouchers, free school choice and independent schools – the development has, contrary to all compensatory pledges, only further strengthened highly educated parents’ ability to steer their own children’s schooling and future. It is hard to see who, in today’s school system, will be able to make the kind of ‘class journey’ that so many of my generation made.


If only Trump could read Habermas!

30 Nov, 2019 at 16:33 | Posted in Politics & Society

Jürgen Habermas’s new book is also a history of philosophy. In the style of a genealogy, it gives an account of how the now dominant forms of Western post-metaphysical thought arose. His guiding thread is the discourse on faith and knowledge that emerged from two strong Axial Age traditions in the Roman Empire. Habermas traces how philosophy gradually detached itself from its symbiosis with religion and became secularized. From a systematic perspective, he works out the decisive conflicts, learning processes and ruptures, as well as the accompanying transformations in science, law, politics and society.

But Jürgen Habermas’s new book is not only a history of philosophy. It is also a reflection on the task of a philosophy that holds fast to the rational freedom of communicatively socialized subjects: it should clarify »what our growing scientific knowledge of the world means for us – for us as human beings, as modern contemporaries and as individual persons«.

Maurice Allais on empirics and theory

30 Nov, 2019 at 14:08 | Posted in Economics | 6 Comments

Submission to observed or experimental data is the golden rule which dominates any scientific discipline. Any theory whatever, if it is not verified by empirical evidence, has no scientific value and should be rejected.

 

Maurice Allais

Formalistic deductive “Glasperlenspiel” can be very impressive and seductive. But in the realm of science, it ought to be considered of little or no value to simply make claims about models while losing sight of reality.

Mainstream — neoclassical — economics has long since given up on the real world and contents itself with proving things about made-up worlds. Empirical evidence plays only a minor role in economic theory, where models largely function as a substitute for empirical evidence. Hopefully humbled by the manifest failure of its theoretical pretences, the one-sided, almost religious, insistence on axiomatic-deductivist modelling as the only scientific activity worth pursuing in economics will give way to methodological pluralism based on ontological considerations rather than formalistic tractability.

To have valid evidence is not enough. What economics needs is sound evidence. Why? Simply because the premises of a valid argument do not have to be true, whereas a sound argument is not only valid but also builds on premises that are true. Aiming only for validity, without soundness, sets economics’ aspiration level too low for the development of a realist and relevant science.

Spirit in the Sky (personal)

29 Nov, 2019 at 17:51 | Posted in Varia

 

Marcel Proust had his madeleine. I have music. For me, this song is forever tied to summer memories from Hästveda and Luhrsjön, where my best friend Johan (Ehrenberg) and I used to play pinball at the beach café while this song ran hot on the jukebox. Unforgettable memories that still warm me.

Markus Rosenberg — the player of my MFF heart

29 Nov, 2019 at 07:39 | Posted in Varia

 

I think it is time to replace a certain statue outside Malmö Stadion …

What is (wrong with) mainstream economics?

28 Nov, 2019 at 21:15 | Posted in Economics | 13 Comments

If you want to know what neoclassical economics is — or mainstream economics, as we call it nowadays — and turn to Wikipedia, you are told that

Neoclassical economics is a term variously used for approaches to economics focusing on the determination of prices, outputs, and income distributions in markets through supply and demand, often mediated through a hypothesized maximization of utility by income-constrained individuals and of profits by cost-constrained firms employing available information and factors of production, in accordance with rational choice theory.

The basic problem with this definition of neoclassical (mainstream) economics — arguing that its differentia specifica is its use of demand and supply, utility maximization and rational choice — is that it doesn’t get things quite right. As we all know, there is an endless list of mainstream models that more or less distance themselves from one or the other of these characteristics. So the heart of mainstream economic theory lies elsewhere.

The essence of mainstream economic theory is its almost exclusive use of a deductivist methodology — a methodology that is used more or less without any argument to justify its relevance.

The theories and models that mainstream economists construct describe imaginary worlds, using a combination of formal sign systems such as mathematics and ordinary language. The descriptions made are extremely thin and to a large degree disconnected from the specific contexts of the target systems that one (usually) wants to (partially) represent. This is not by chance. These closed formalistic-mathematical theories and models are constructed for the purpose of delivering purportedly rigorous deductions that may somehow be exportable to the target system. By analyzing a few causal factors in their “laboratories”, mainstream economists hope they can perform “thought experiments” and observe how these factors operate on their own, without impediments or confounders.

Unfortunately, this is not so. The reason is that economic causes never act in a socio-economic vacuum. Causes have to be set in a contextual structure to be able to operate, and this structure has to take some form or other. But instead of incorporating structures that are true to the target system, the settings made in economic models are based on formalistic mathematical tractability. In the models they appear as unrealistic assumptions, usually playing a decisive role in getting the deductive machinery to deliver “precise” and “rigorous” results. This, of course, makes exporting to real-world target systems problematic, since these models – as part of a deductivist covering-law tradition in economics – are thought to deliver general and far-reaching conclusions that are externally valid.

But how can we be sure the lessons learned in these theories and models have external validity, when they are based on highly specific unrealistic assumptions? As a rule, the more specific and concrete the structures, the less generalizable the results. Admitting that we can move from (partial) falsehoods in theories and models to truth in real-world target systems does not take us very far, unless a thorough explication of the relation between theory, model and real-world target system is made. If models assume representative actors, rational expectations, market clearing and equilibrium, and we know that real people and markets cannot be expected to obey these assumptions, the warrant for supposing that conclusions or hypotheses about causally relevant mechanisms or regularities can be transferred is obviously lacking. Having a deductive warrant for things happening in a closed model is no guarantee that they are preserved when applied to an open real-world target system.

Henry Louis Mencken once wrote that “there is always an easy solution to every human problem – neat, plausible and wrong.” And mainstream economics has indeed been wrong. Its main result, so far, has been to demonstrate the futility of trying to build a satisfactory bridge between formalistic-axiomatic deductivist models and real-world target systems. Assuming, for example, perfect knowledge, instant market clearing, and aggregate behaviour approximated by unrealistically heroic assumptions of representative actors just will not do. The assumptions made surreptitiously eliminate the very phenomena we want to study: uncertainty, disequilibrium, structural instability, and problems of aggregation and coordination between different individuals and groups.

The punch line is that most of the problems mainstream economics wrestles with stem from its attempts at formalistic modelling of social phenomena per se. Reducing microeconomics to refinements of hyper-rational Bayesian deductivist models is not a viable way forward. It will only sentence the most interesting real-world economic problems to irrelevance.

If the ultimate criterion of success of a deductivist system is to what extent it predicts and coheres with (parts of) reality, modern mainstream economics seems to be a hopeless misallocation of scientific resources. To focus scientific endeavours on proving things in models is a gross misapprehension of what an economic theory ought to be about. Deductivist models and methods disconnected from reality are not relevant to predict, explain or understand real-world economic target systems. These systems do not conform to the restricted closed-system structure the mainstream modelling strategy presupposes.

Mainstream economic theory still today consists mainly of investigating economic models. It has long since given up on the real world and contents itself with proving things about made-up worlds. Empirical evidence plays only a minor role in mainstream economic theory, where models largely function as substitutes for empirical evidence.

What is wrong with mainstream economics is not that it employs models per se, but that it employs poor models. They are poor because they do not bridge to the real-world target systems in which we live. Hopefully humbled by the manifest failure of its theoretical pretences, the one-sided, almost religious, insistence on mathematical deductivist modelling as the only scientific activity worthy of pursuing in economics will give way to methodological pluralism based on ontological considerations rather than formalistic tractability.

Feel I’m goin’ back to Massachusetts

28 Nov, 2019 at 20:26 | Posted in Varia

 

The Kings and Queens with their quiet dirty looks

28 Nov, 2019 at 20:13 | Posted in Varia

 

Make community great again

28 Nov, 2019 at 19:14 | Posted in Politics & Society

 

Mark Lilla and the identitarian left

28 Nov, 2019 at 14:32 | Posted in Politics & Society

 

Macroeconomic uncertainty

27 Nov, 2019 at 10:16 | Posted in Economics | 2 Comments

The financial crisis of 2007–08 took most laymen and economists by surprise. What was it that went wrong with our macroeconomic models, since they obviously did not foresee the collapse or even make it conceivable?

There are many who have ventured to answer this question. And they have come up with a variety of answers, ranging from the exaggerated mathematization of economics to irrational and corrupt politicians.

But the root of our problem goes much deeper. It ultimately goes back to how we look upon the data we are handling. In ‘modern’ macroeconomics — Dynamic Stochastic General Equilibrium, New Synthesis, New Classical and ‘New Keynesian’ — variables are treated as if drawn from a known ‘data-generating process’ that unfolds over time and of which we therefore have access to heaps of historical time-series. If we do not assume that we know the ‘data-generating process’ – if we do not have the ‘true’ model – the whole edifice collapses. And of course it has to. I mean, who really honestly believes that we should have access to this mythical Holy Grail, the data-generating process?

Modern macroeconomics obviously did not anticipate the enormity of the problems that unregulated ‘efficient’ financial markets created. Why? Because it builds on the myth of us knowing the ‘data-generating process’ and that we can describe the variables of our evolving economies as drawn from an urn containing stochastic probability functions with known means and variances.

This is like saying that you are going on a holiday trip, that you know the chance of the weather being sunny is at least 30%, and that this is enough for you to decide whether or not to bring your sunglasses. You are supposed to be able to calculate the expected utility based on the given probability of sunny weather and make a simple either-or decision. Uncertainty is reduced to risk.

But as Keynes convincingly argued in his monumental Treatise on Probability (1921), this is not always possible. Often we simply do not know. According to one model the chance of sunny weather is perhaps somewhere around 10% and according to another – equally good – model the chance is perhaps somewhere around 40%. We cannot put exact numbers on these assessments. We cannot calculate means and variances. There are no given probability distributions that we can appeal to.
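The difference between risk and Keynesian uncertainty can be made concrete with a small numerical sketch (all probabilities and utilities below are invented for illustration): under risk a single known probability makes the decision mechanical, while under uncertainty two equally good models disagree and no given meta-probability over the models is available.

```python
# Risk vs. Keynesian uncertainty: an illustrative sketch.
# All numbers (probabilities, utilities) are invented for the example.

def expected_utility(p_sun, u_sun, u_rain):
    """Expected utility of an action, given P(sunny) = p_sun."""
    return p_sun * u_sun + (1 - p_sun) * u_rain

# Under RISK: one known probability, so the decision is mechanical.
p_known = 0.30
eu_bring = expected_utility(p_known, u_sun=10, u_rain=-1)  # bring sunglasses
eu_leave = expected_utility(p_known, u_sun=-5, u_rain=0)   # leave them home
decision = "bring" if eu_bring > eu_leave else "leave"

# Under UNCERTAINTY: two equally good models disagree, and there is no
# given probability distribution over the models themselves to appeal to.
eu_model_a = expected_utility(0.10, u_sun=10, u_rain=-1)   # roughly 0.1
eu_model_b = expected_utility(0.40, u_sun=10, u_rain=-1)   # roughly 3.4
# Averaging the two answers would presuppose exactly the kind of known
# meta-probability that Keynesian uncertainty denies us.
```

The expected-utility calculus works only in the first case; in the second, no amount of computation tells you which model's answer to act on.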

In the end, this is what it all boils down to. We all know that many activities, relations, processes and events are of the Keynesian uncertainty-type. The data do not unequivocally single out one decision as the only ‘rational’ one. Neither the economist nor the deciding individual can fully pre-specify how people will decide when facing uncertainties and ambiguities that are ontological facts of the way the world works.

Some macroeconomists, however, still want to be able to use their hammer. So they pretend that the world looks like a nail and that uncertainty can be reduced to risk, and construct their mathematical models on that assumption. The result: financial crises and economic havoc.

How much better it would be – and how much greater the chance that we would not lull ourselves into the comforting belief that we know everything, that everything is measurable, and that we have everything under control – if we could instead simply admit that we often do not know, and that we have to live with that uncertainty as best we can.

Fooling people into believing that one can cope with an unknown economic future in a way similar to playing at the roulette wheel is a sure recipe for only one thing – economic catastrophe!

Tina Turner Turns 80

26 Nov, 2019 at 15:00 | Posted in Varia



Congratulations, Tina. You are quite simply incredible. Simply the best!

RCTs — assumptions, biases and limitations

25 Nov, 2019 at 15:01 | Posted in Theory of Science & Methodology

Randomised experiments require much more than just randomising an experiment to identify a treatment’s effectiveness. They involve many decisions and complex steps that bring their own assumptions and degree of bias before, during and after randomisation …

Some researchers may respond, “are RCTs not still more credible than these other methods even if they may have biases?” For most questions we are interested in, RCTs cannot be more credible because they cannot be applied (as outlined above). Other methods (such as observational studies) are needed for many questions not amenable to randomisation, but also at times to help design trials, interpret and validate their results, and provide further insight on the broader conditions under which treatments may work, among other reasons discussed earlier. Different methods are thus complements (not rivals) in improving understanding.

Finally, randomisation does not always even out everything well at the baseline and it cannot control for endline imbalances in background influencers. No researcher should thus just generate a single randomisation schedule and then use it to run an experiment. Instead researchers need to run a set of randomisation iterations before conducting a trial and select the one with the most balanced distribution of background influencers between trial groups, and then also control for changes in those background influencers during the trial by collecting endline data. Though if researchers hold onto the belief that flipping a coin brings us closer to scientific rigour and understanding than for example systematically ensuring participants are distributed well at baseline and endline, then scientific understanding will be undermined in the name of computer-based randomisation.

Alexander Krauss
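The rerandomization procedure Krauss describes (generating a set of candidate randomisation schedules and selecting the one with the best baseline balance) can be sketched roughly as follows; the covariate data and the simple mean-difference balance criterion here are invented for illustration:

```python
# Rerandomization sketch: draw many candidate assignments and keep the one
# with the smallest baseline imbalance in a background covariate.
# The data and the mean-difference balance criterion are illustrative only.
import random

def imbalance(covariate, assignment):
    """Absolute difference in covariate means between treatment and control."""
    treated = [x for x, a in zip(covariate, assignment) if a == 1]
    control = [x for x, a in zip(covariate, assignment) if a == 0]
    return abs(sum(treated) / len(treated) - sum(control) / len(control))

def best_assignment(covariate, n_iter=1000, seed=0):
    """Run n_iter randomisation iterations; return the best-balanced one."""
    rng = random.Random(seed)
    n = len(covariate)
    best, best_score = None, float("inf")
    for _ in range(n_iter):
        assignment = [1] * (n // 2) + [0] * (n - n // 2)
        rng.shuffle(assignment)
        score = imbalance(covariate, assignment)
        if score < best_score:
            best, best_score = assignment, score
    return best, best_score

ages = [23, 35, 41, 29, 52, 38, 61, 27, 45, 33]  # hypothetical participants
assignment, score = best_assignment(ages)
# The selected schedule is far better balanced on age than a typical
# single coin-flip randomisation would be.
```

In practice one would balance several covariates at once (for example with a Mahalanobis-distance criterion), but the selection logic is the same.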

The point of making a randomized experiment is often said to be that it ‘ensures’ that any correlation between a supposed cause and effect indicates a causal relation. This is believed to hold since randomization (allegedly) ensures that a supposed causal variable does not correlate with other variables that may influence the effect.

The problem with that simplistic view on randomization is that the claims made are both exaggerated and false:

• Even if you manage to make the assignment to treatment and control groups ideally random, the sample selection certainly is — except in extremely rare cases — not random. Even with a proper randomized assignment, if we apply the results to a biased sample there is always the risk that the experimental findings will not apply. What works ‘there’ does not work ‘here.’ Randomization hence does not ‘guarantee’ or ‘ensure’ that we make the right causal claim. Although randomization may help us rule out certain possible causal claims, randomization per se does not guarantee anything!

• Even if both sampling and assignment are made in an ideal random way, performing standard randomized experiments only gives you averages. The problem here is that although we may get an estimate of the ‘true’ average causal effect, this may ‘mask’ important heterogeneous effects of a causal nature. Even if we get the right answer that the average causal effect is 0, those who are ‘treated’ may have causal effects equal to -100 and those ‘not treated’ causal effects equal to 100. Contemplating whether to be treated or not, most people would probably want to know about this underlying heterogeneity and would not consider the average effect particularly enlightening.

• There is almost always a trade-off between bias and precision. In real-world settings, a little bias is often outweighed by a gain in precision. And — most importantly — if we have a population with sizeable heterogeneity, the average treatment effect of the sample may differ substantially from the average treatment effect in the population. If so, the value of any extrapolating inferences from trial samples to other populations is highly questionable.

• Since most real-world experiments and trials build on a single randomization, knowing what would happen if you kept on randomizing forever does not help you ‘ensure’ or ‘guarantee’ that you avoid false causal conclusions in the one particular randomized experiment you actually perform. It is indeed difficult to see why thinking about what you know you will never do would make you happy about what you actually do.
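The heterogeneity point in the second bullet above is easy to make concrete with a toy example (all numbers are invented): an average treatment effect of exactly zero can coexist with large positive and negative individual-level effects.

```python
# Toy illustration: the average treatment effect can be zero while
# individual-level causal effects are huge. All numbers are invented.

# Individual causal effects: half the population is badly harmed,
# half greatly helped.
effects = [-100] * 50 + [100] * 50

average_effect = sum(effects) / len(effects)   # what an RCT estimates
# average_effect == 0.0, yet no individual has an effect anywhere near zero

share_harmed = sum(1 for e in effects if e < 0) / len(effects)
# Half of those treated would be harmed: information the average masks.
```

The estimate of 0 is the ‘right answer’ to the average-effect question, but it is the wrong question for anyone deciding whether to be treated.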

Randomization is not a panacea. It is not the best method for all questions and circumstances. Proponents of randomization make claims about its ability to deliver causal knowledge that are simply wrong. There are good reasons to be sceptical of the now popular — and ill-informed — view that randomization is the only valid and best method on the market. It is not.

Serenity

24 Nov, 2019 at 21:04 | Posted in Varia

 

Sublime, breathtaking, and absolutely magnificent heavenly music!

Randomised controlled trials — a retreat from the bigger questions

24 Nov, 2019 at 19:13 | Posted in Economics | 1 Comment

Nobel prizes are usually given in recognition of ideas that are already more or less guaranteed a legacy. But occasionally they prompt as much debate as admiration. This year’s economics award, given to Abhijit Banerjee, Esther Duflo and Michael Kremer … recognised the laureates’ efforts to use randomised controlled trials (RCTs) to answer social-science questions … RCT evangelists sometimes argue that their technique is the “gold standard”, better able than other analytical approaches to establish what causes what. Not so, say some other economists … Results are contextually dependent in ways that are hard to discern; a finding from a study in Kenya might not reveal much about policy in Guatemala …

Advanced economies grew rich as a result of a broad transformation that affected everything from the aspirations of working people to the functioning of the state, not by making a series of small, technocratic changes, no matter how well-supported by evidence …

Indeed, some economists have a sneaking suspicion that the rise of RCTs represents a pivot not just to smaller questions but also to smaller ambitions … Researchers are still guided by theory, which shapes the empirical questions that get asked and whether results are interpreted as capturing some deeper aspect of an economy’s nature. But a world in which economists are mostly policy-tweakers—or “plumbers”, in Ms Duflo’s phrase—is very different from the one to which many economists once aspired.

The Economist

It is nowadays widely believed among mainstream economists that the scientific value of randomization — contrary to other methods — is totally uncontroversial and that randomized experiments are free from bias. When looked at carefully, however, there are in fact few real reasons to share this optimism about the alleged ’experimental turn’ in economics. Strictly speaking, randomization does not guarantee anything.

‘Ideally controlled experiments’ tell us with certainty what causes what effects — but only given the right ‘closures.’ Making appropriate extrapolations from (ideal, accidental, natural or quasi) experiments to different settings, populations or target systems, is not easy. Causes deduced in an experimental setting still have to show that they come with an export-warrant to their target populations.

The almost religious zeal with which its propagators — like Duflo, Banerjee and Kremer — promote it cannot hide the fact that RCTs cannot be taken for granted to give generalizable results. That something works somewhere is no warrant for believing that it will work for us here, or even that it works generally.

The present RCT idolatry is dangerous. Believing there is only one really good evidence-based method on the market — and that randomization is the only way to achieve scientific validity — blinds people to searching for and using other methods that in many contexts are better. RCTs are simply not the best method for all questions and in all circumstances. Insisting on using only one tool often means using the wrong tool.

This year’s ‘Nobel prize’ winners think that economics should be based on evidence from randomised experiments and field studies. Duflo et consortes want to give up on ‘big ideas’ like political economy and institutional reform and instead go for solving more manageable problems the way plumbers do. But that modern-day ‘marginalist’ approach surely cannot be the right way to move economics forward and make it a relevant and realist science. A plumber can fix minor leaks in your system, but if the whole system is rotten, something more than good old-fashioned plumbing is needed. The big social and economic problems we face today are not going to be solved by plumbers performing RCTs.

