Leontief and the sorry state of economics

30 Nov, 2020 at 10:53 | Posted in Economics | 10 Comments

Page after page of professional economic journals are filled with mathematical formulas leading the reader from sets of more or less plausible but entirely arbitrary assumptions to precisely stated but irrelevant theoretical conclusions …

Year after year economic theorists continue to produce scores of mathematical models and to explore in great detail their formal properties; and the econometricians fit algebraic functions of all possible shapes to essentially the same sets of data without being able to advance, in any perceptible way, a systematic understanding of the structure and the operations of a real economic system.

Wassily Leontief

Mainstream economics has indeed, as noted by Leontief, become increasingly irrelevant to the understanding of the real world, and the main reason for this irrelevance is the failure of economists to match their deductive-axiomatic methods with their subject of study. The fixation on constructing models showing the certainty of logical entailment has been detrimental to the development of a relevant and realist economics. Insisting on formalistic-mathematical modelling forces the economist to give up on realism and substitute axiomatics for real-world relevance.

It is — sad to say — a fact that within mainstream economics internal validity is everything and external validity next to nothing. Why anyone should be interested in theories and models of that kind — as long as no export licences to the real world in which we live come with them — is beyond comprehension. Stupid models are of no help at all in understanding the real world.

Why ontology?

29 Nov, 2020 at 15:52 | Posted in Economics | 2 Comments

With any phenomenon of interest, understanding its nature or essential properties allows us to relate to, or interact with, it in more knowledgeable and competent ways than would otherwise be the case …

A surprising number of social theorists, when embarking on substantive analyses, pay almost no attention at all to insights bearing on the nature of these (or any other) factors. Instead, the preferred option is to select the types of methods, procedures or tools to be employed before, and quite independently of, knowing the task to be undertaken, the nature of the phenomena involved, the context or any other specifics of the situation …

This is the case in modern academic economics. Economists do indeed widely suppose, prior to undertaking any analysis, that there is one specific way of proceeding that is appropriate for all occasions … This is to employ methods of mathematical modelling …

The discipline has failed to provide significant insight for the last 60 years or more … This persistent failure is indeed to a very large extent the result of sustained ontological neglect.

The kinds of laws and relations that economics has established are laws and relations about entities in models that presuppose causal mechanisms to be atomistic and additive. When causal mechanisms operate in the real world, they do so only in ever-changing and unstable combinations where the whole is more than a mechanical sum of parts. If economic regularities obtain, they do so (as a rule) only because we engineered them for that purpose. Outside man-made “nomological machines” they are rare, or even non-existent. Unfortunately, that also makes most of the achievements of contemporary economic theoretical modelling rather useless.

When mainstream economists think that they can rigorously deduce the aggregate effects of (representative) actors with their reductionist microfoundational methodology, they have to turn a blind eye to the emergent properties that characterise all open social systems – including the economic system. The interaction between animal spirits, trust, confidence, institutions etc. cannot be deduced from, or reduced to, a question answerable on the individual level. Macroeconomic structures and phenomena have to be analysed also on their own terms.

Mainstream macromodels describe imaginary worlds using a combination of formal sign systems such as mathematics and ordinary language. The descriptions made are extremely thin and to a large degree disconnected from the specific contexts of the target system that one (usually) wants to (partially) represent. This is not by chance. These closed formalistic-mathematical theories and models are constructed for the purpose of delivering purportedly rigorous deductions that may somehow be exportable to the target system. By analysing a few causal factors in their “macroeconomic laboratories,” economists hope to perform “thought experiments” and observe how these factors operate on their own, without impediments or confounders.

Unfortunately, this is not so. The reason — as underlined by Lawson — is that economic causes never act in a socio-economic vacuum. Causes have to be set in a contextual structure to be able to operate, and this structure has to take some form or other. But instead of incorporating structures that are true to the target system, the settings of these macroeconomic models are based on formalistic mathematical tractability. In the models they appear as unrealistic assumptions, usually playing a decisive role in getting the deductive machinery to deliver “precise” and “rigorous” results. This, of course, makes exporting to real-world target systems problematic, since these models — as part of a deductivist covering-law tradition in economics — are thought to deliver general and far-reaching conclusions that are externally valid.

But how can we be sure the lessons learned in these theories and models have external validity when they are based on highly specific unrealistic assumptions? As a rule, the more specific and concrete the structures, the less generalisable the results. Admitting that we can in principle move from (partial) falsehoods in theories and models to truth in real-world target systems does not take us very far, unless a thorough explication of the relation between theory, model and real-world target system is made. If models assume representative actors, rational expectations, market clearing and equilibrium, and we know that real people and markets cannot be expected to obey these assumptions, the warrants for supposing that conclusions or hypotheses about causally relevant mechanisms or regularities can be bridged are obviously non-justifiable. Having a deductive warrant for things happening in a closed model is no guarantee that they are preserved when applied to the real world.

Deutschland Querdenken …

28 Nov, 2020 at 11:14 | Posted in Politics & Society | 2 Comments


The Querdenker deny reality and evidently have a very simplified understanding of the world, and of the current pandemic in particular.

Diego Maradona (1960-2020)

27 Nov, 2020 at 19:28 | Posted in Economics | 2 Comments

Maradona was more than just an extraordinary footballer. He was also a complicated social icon. That further distinguishes him from other footballers, though Pelé also has some of that …

He was both rewarded by and terribly exploited by the system. The system treated him like a “race horse.” They wanted him to play at all costs and pumped him full of drugs. They did not care about the physical and psychological costs to him. That contributed to his addiction …

He came from great poverty, from a shanty town. He never hid that and insisted on keeping the connection. I’m told he had tattoos of Fidel Castro and Che Guevara. He also had a relationship with the Pope (Francis, not Benedict XVI or John Paul II). That politics speaks well of him, even if it was not carried through with the consistency of an intellectual or political activist …

Did you know that in Argentina, before inflation made them irrelevant, they used to call the 10 (diez) peso note a “Diego”? That is how much people loved him.

Thomas Palley

Logic and truth in economics

26 Nov, 2020 at 16:52 | Posted in Economics | 13 Comments

To be ‘analytical’ and ‘logical’ is something most people find commendable. These words have a positive connotation. Scientists supposedly think more deeply than other people because they use ‘logical’ and ‘analytical’ methods. In dictionaries, logic is often defined as “reasoning conducted or assessed according to strict principles of validity” and ‘analysis’ as having to do with “breaking something down.”

But that’s not the whole picture. As used in science, analysis usually means something more specific. It means to separate a problem into its constituent elements so as to reduce complex — and often complicated — wholes into smaller (simpler) and more manageable parts. You take the whole and break it down (decompose it) into its separate parts. Looking at the parts separately, one at a time, you are supposed to gain a better understanding of how they operate and work. Building on that more or less ‘atomistic’ knowledge, you are then supposed to be able to predict and explain the behaviour of the complex and complicated whole.

In economics, that means you take the economic system and divide it into its separate parts, analyse these parts one at a time, and then after analysing the parts separately, you put the pieces together.

The ‘analytical’ approach is typically used in economic modelling, where you start with a simple model with few isolated and idealized variables. By ‘successive approximations,’ you then add more and more variables and finally get a ‘true’ model of the whole.

This may sound like a convincing and good scientific approach.

But there is a snag!

The procedure only really works when you have a machine-like whole/system/economy where the parts appear in fixed and stable configurations. And if there is anything we know about reality, it is that it is not a machine! The world we live in is not a ‘closed’ system. On the contrary. It is an essentially ‘open’ system. Things are uncertain, relational, interdependent, complex, and ever-changing.

Without assuming that the underlying structure of the economy you are trying to analyze remains stable/invariant/constant, there is no chance the equations of the model will remain constant. That is the very rationale for economists’ (often only implicit) use of the ceteris paribus assumption. But — nota bene — this can only be a hypothesis. You have to argue the case. If you cannot supply any sustainable justifications or warrants for the adequacy of making that assumption, the whole analytical economic project becomes pointless, non-informative nonsense.

Not only do we have to assume that we can shield off variables from each other analytically (external closure). We also have to assume that each variable is itself amenable to being understood as a stable, regularity-producing machine (internal closure). And that, of course, we know is as a rule not possible. Some things, relations, and structures are simply not analytically graspable. Trying to analyse parenthood, marriage, employment, etc., piece by piece doesn’t make sense. To be a chieftain, a capital-owner, or a slave is not an individual property of an individual. It can come about only when individuals are integral parts of certain social structures and positions. Social relations and contexts cannot be reduced to individual phenomena. A cheque presupposes a banking system and being a tribe-member presupposes a tribe. By not taking account of this, the ‘analytical’ approach turns economic ‘analysis’ into uninformative nonsense.

Using ‘logical’ and ‘analytical’ methods in social sciences means that economists succumb to the fallacy of composition — the belief that the whole is nothing but the sum of its parts. In society and in the economy this is arguably not the case. An adequate analysis of society and economy a fortiori cannot proceed by just adding up the acts and decisions of individuals. The whole is more than a sum of its parts.

Mainstream economics is built on using the ‘analytical’ method. The models built with this method presuppose that social reality is ‘closed.’ Since social reality is known to be fundamentally ‘open,’ it is difficult to see how models of that kind can explain anything about what happens in such a universe. Postulating closed conditions to make models operational and then imputing these closed conditions to society’s real structure is an unwarranted procedure that does not take the necessary ontological considerations seriously.

In the face of the kind of methodological individualism and rational choice theory that dominates mainstream economics, we have to admit that even if knowing the aspirations and intentions of individuals is a necessary prerequisite for explaining social events, it is far from sufficient. Even the most elementary ‘rational’ actions in society presuppose the existence of social forms that cannot be reduced to the intentions of individuals. Here, the ‘analytical’ method fails again.

The overarching flaw of the ‘analytical’ economic approach, with its methodological individualism and rational choice theory, is that it reduces social explanations to purportedly individual characteristics. But many of the characteristics and actions of the individual originate in and are made possible only through society and its relations. Society is not a Wittgensteinian ‘Tractatus-world’ characterized by atomistic states of affairs. Society is not reducible to individuals, since the social characteristics, forces, and actions of the individual are determined by pre-existing social structures and positions. Even though society is not a volitional individual, and the individual is not an entity given outside of society, the individual (actor) and the society (structure) have to be kept analytically distinct. They are tied together through the individual’s reproduction and transformation of already given social structures.

Since at least the marginal revolution in economics in the 1870s, it has been an essential feature of economics to treat individuals ‘analytically’ as essentially independent and separate entities of action and decision. But in such a complex, organic and evolutionary system as an economy, that kind of independence is a deeply unrealistic assumption to make. Simply assuming that there is strict independence between the variables we try to analyze doesn’t help us in the least if that hypothesis turns out to be unwarranted.

To be able to apply the ‘analytical’ approach, economists basically have to assume that the universe consists of ‘atoms’ that exercise their own separate and invariable effects in such a way that the whole consists of nothing but an addition of these separate atoms and their changes. These simplistic assumptions of isolation, atomicity, and additivity are, however, at odds with reality. In real-world settings, we know that the ever-changing contexts make it futile to search for knowledge by making such reductionist assumptions. Real-world individuals are not reducible to contentless atoms and are not susceptible to atomistic analysis. The world is not reducible to a set of atomistic ‘individuals’ and ‘states.’ How variable X works and influences real-world economies in situation A cannot simply be assumed to be understood or explained by looking at how X works in situation B. Knowledge of X probably does not tell us much if we do not take into consideration how it depends on Y and Z. It can never be legitimate just to assume that the world is ‘atomistic.’ Assuming real-world additivity cannot be the right thing to do if the things we have around us, rather than being ‘atoms,’ are ‘organic’ entities.

If we want to develop a new and better economics, we have to give up the single-minded insistence on a deductivist straitjacket methodology and the ‘analytical’ method. To focus scientific endeavours on proving things in models is a gross misapprehension of the purpose of economic theory. Deductivist models and ‘analytical’ methods disconnected from reality are not relevant for predicting, explaining or understanding real-world economies.

To have ‘consistent’ models and ‘valid’ evidence is not enough. What economics needs are real-world relevant models and sound evidence. Aiming only for ‘consistency’ and ‘validity’ sets the aspiration level of economics too low for developing a realist and relevant science.

Economics is not mathematics or logic. It’s about society. The real world.

Models may help us think through problems. But we should never forget that the formalism we use in our models is not self-evidently transportable to a largely unknown and uncertain reality. The tragedy with mainstream economic theory is that it thinks that the logic and mathematics used are sufficient for dealing with our real-world problems. They are not! Model deductions based on questionable assumptions can never be anything but pure exercises in hypothetical reasoning.

The world in which we live is inherently uncertain and quantifiable probabilities are the exception rather than the rule. To every statement about it is attached a ‘weight of argument’ that makes it impossible to reduce our beliefs and expectations to a one-dimensional stochastic probability distribution. If “God does not play dice” as Einstein maintained, I would add “nor do people.” The world as we know it has limited scope for certainty and perfect knowledge. Its intrinsic and almost unlimited complexity and the interrelatedness of its organic parts prevent the possibility of treating it as constituted by ‘legal atoms’ with discretely distinct, separable and stable causal relations. Our knowledge accordingly has to be of a rather fallible kind.

If the real world is fuzzy, vague and indeterminate, then why should our models be built on a desire to describe it as precise and predictable? Even if there always has to be a trade-off between theory-internal validity and external validity, we have to ask ourselves whether our models are relevant.

‘Human logic’ has to supplant the classical — formal — logic of deductivism if we want to have anything of interest to say of the real world we inhabit. Logic is a marvellous tool in mathematics and axiomatic-deductivist systems, but a poor guide for action in real-world systems, in which concepts and entities are without clear boundaries and continually interact and overlap. In this world, I would say we are better served with a methodology that takes into account that the more we know, the more we know we do not know.

Pandemic depression antidote (XXV)

26 Nov, 2020 at 15:55 | Posted in Varia | Comments Off on Pandemic depression antidote (XXV)

Sydsvenskan — drivel on the editorial page

25 Nov, 2020 at 19:58 | Posted in Politics & Society | 2 Comments

In Sydsvenskan’s main editorial earlier this year, we could read the following:

Time and again, the [73-point] agreement also emphasises that the political governance of various welfare services should focus on quality, not on the form of operation.

From a liberal perspective, this is a given. Despite the shortcomings in certain operations, deregulation has on the whole given citizens better service, a wider range of options and a freedom of choice that few today would be prepared to give up.

Good grief! That this drivel is what we have to read in the year 2020.

When the system shift in the Swedish welfare sector was launched in the 1990s, one argument often advanced for the privatisations was that they would do away with the costs of bureaucratic logic in the form of regulations, controls and follow-ups.

Competition, that panacea of market fundamentalism, would make operations more efficient and raise their quality. Market logic would force out the ‘bureaucratic’ and unwieldy public providers, and only the good companies that ‘freedom of choice’ had made possible would remain.

Now that the panglossian privatisation wet dream has turned out to be a nightmare, politicians and editorial writers unfortunately believe that precisely what they wanted to get rid of, regulations and ‘bureaucratic’ supervision and control, is the solution.

One can only shake one’s head, and for many reasons!

For if the packages of measures now being proposed are to be implemented, one has to wonder what becomes of those efficiency gains. Controls, contract specifications, inspections and the like cost money, so how much of a surplus do the privatisations really yield once these costs are also entered into the cost-benefit analysis? And how much is that ‘freedom of choice’ worth when we see, time and again, that it results only in operations where profit is generated through cost-cutting and lowered quality?

Responsible Social Democratic politicians, not least Göran Persson, Kjell-Olof Feldt and all the others who have happily trotted along in their footsteps, have for decades ruthlessly and deliberately sacrificed the once so proud Swedish tradition of trying to build equitable schooling and health care for all!

Unlike virtually every other country in the world, the leadership of the Swedish Social Democrats has made it possible for private companies to profit from publicly funded education and health care. And when centre-right governments further stimulated the privatisation wave, the Social Democrats simply kept silent and remained passive. This despite the fact that there has all along been strong popular resistance to letting profit-seeking private companies into the welfare sector.

The Social Democrats today have historically low polling numbers. Against this background, that should surprise no one. When you pursue policies that hit hardest at the very group of people you claim to protect, it is hardly strange that voters flee.

That the Social Democrats continue to contribute to the erosion of schools and health care with their support for private care companies and free schools and their profit extraction is of course not the only reason. But it is surely one of the more important ones. A clearer own goal in politics is hard to find.

‘Mathiness’ in economics

25 Nov, 2020 at 16:35 | Posted in Economics | Comments Off on ‘Mathiness’ in economics

In practice, what math does is let macro-economists locate the FWUTVs [facts with unknown truth values] farther away from the discussion of identification … Relying on a micro-foundation lets an author say, “Assume A, assume B, … blah blah blah … And so we have proven that P is true. Then the model is identified.” …

Distributional assumptions about error terms are a good place to bury things because hardly anyone pays attention to them. Moreover, if a critic does see that this is the identifying assumption, how can she win an argument about the true expected value of the level of aether? If the author can make up an imaginary variable, “because I say so” seems like a pretty convincing answer to any question about its properties.

Paul Romer

Yes, indeed, modern mainstream economics — and especially its mathematical-statistical operationalization in the form of econometrics — fails miserably over and over again. ‘Modern’ mainstream economics is based on the belief that deductive-axiomatic modelling is a sufficient guide to truth. That belief is, however, totally unfounded as long as no grounds are supplied for believing in the assumptions on which the model-based deductions and conclusions are built. ‘Mathiness’ masquerading as science is often used by mainstream economists to hide the problematic character of the assumptions used in their theories and models. But without showing the model assumptions to be realistic and relevant, that kind of economics indeed, as Romer puts it, produces nothing but “blah blah blah.”

The belief that mathematical reasoning is more rigorous and precise than verbal reasoning, which is thought to be susceptible to vagueness and ambiguity, is pervasive in economics. In a celebrated attack on … Paul Krugman, the Chicago economist John Cochrane wrote, ‘Math in economics serves to keep the logic straight, to make sure that the “then” really does follow the “if,” which it so frequently does not if you just write prose.’ But there is a difficulty here which appears to be much more serious in economics than it is in natural sciences: that of relating variables which are written down and manipulated in mathematical models to things that can be identified and measured in the real world … Concepts such as ‘investment specific technology shocks’ and ‘wage markup’ are no more observable, or well defined, than toves or borogoves. They exist only within the model, which is rigorous only in the same sense as ‘Jabberwocky’ is rigorous; the meaning of each term is defined by the author, and the logic of the argument follows tautologically from these definitions.

Without strong evidence, all kinds of absurd claims and nonsense may pretend to be science. Using math can never be a substitute for thinking. Or as Romer has it in his showdown with ‘post-real’ economics:

Math cannot establish the truth value of a fact. Never has. Never will.

Pandemic depression antidote (XXIV)

25 Nov, 2020 at 14:25 | Posted in Varia | Comments Off on Pandemic depression antidote (XXIV)


Do RCTs in education really meet the ‘gold standard’?

23 Nov, 2020 at 17:56 | Posted in Education & School | Comments Off on Do RCTs in education really meet the ‘gold standard’?


Heterogeneity and interaction are not only an external validity problem when trying to ‘export’ regression results to different times or different target populations. They are also often an internal problem for the millions of regression estimates that economists produce every year.

‘Ideally controlled experiments’ tell us with certainty what causes what effects — but only given the right ‘closures.’ Making appropriate extrapolations from (ideal, accidental, natural or quasi) experiments to different settings, populations or target systems is not easy. And since trials usually are not repeated, unbiasedness and balance on average over repeated trials say nothing about any one trial. ‘It works there’ is no evidence for ‘it will work here.’ Causes deduced in an experimental setting still have to show that they come with an export warrant to the target population/system. The causal background assumptions made have to be justified, and without licences to export, the value of ‘rigorous’ and ‘precise’ methods — and ‘on-average knowledge’ — is despairingly small.

RCTs have very little reach beyond giving descriptions of what has happened in the past. From the perspective of the future and for policy purposes, they are as a rule of limited value, since they cannot tell us what background factors were held constant when the trial intervention was made.

RCTs usually do not provide evidence that their results are exportable to other target systems. They cannot be taken for granted to give generalizable results. That something works somewhere for someone is no warrant for believing that it will work for us here, or even that it works generally.
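The ‘it works there’ problem can be made concrete with a toy simulation (Python, standard library only). Everything here is invented for illustration: the hypothetical intervention, its +10/-5 effects, and the share of people carrying a supporting background factor. The point is only that an ideally randomized trial can estimate its own population’s average effect correctly and still mislead us about a target population with a different causal background:

```python
import random

random.seed(42)

def outcome(treated, has_support):
    # Invented example: the intervention helps (+10) only when a supporting
    # background factor is present; without it, it harms (-5).
    effect = (10.0 if has_support else -5.0) if treated else 0.0
    return effect + random.gauss(0.0, 1.0)

def trial_ate(n, p_support):
    # One ideally randomized trial: n treated, n controls, sampled from a
    # population where a fraction p_support carries the background factor.
    t = [outcome(True, random.random() < p_support) for _ in range(n)]
    c = [outcome(False, random.random() < p_support) for _ in range(n)]
    return sum(t) / n - sum(c) / n

ate_there = trial_ate(5000, 0.9)  # trial site: 90% have the factor
ate_here = trial_ate(5000, 0.2)   # target population: only 20% do

print(f"estimated ATE 'there': {ate_there:+.1f}")  # clearly positive
print(f"estimated ATE 'here':  {ate_here:+.1f}")   # clearly negative
```

Both trials are internally flawless; the export fails because the distribution of the background factor differs between the two systems, which is exactly the information an RCT report rarely carries.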

Econometrics in the 21st century

22 Nov, 2020 at 19:05 | Posted in Statistics & Econometrics | 6 Comments


If you don’t have time to listen to all of the presentations (a couple of them are actually quite uninformative) you should at least scroll forward to 1:39:25 and listen to what Angus Deaton has to say. As so often, he is spot on!

As Deaton notes, evidence-based theories and policies are highly valued nowadays. Randomization is supposed to control for bias from unknown confounders. The received opinion is that evidence based on randomized experiments, therefore, is the best.

More and more economists and econometricians have also lately come to advocate randomization as the principal method for ensuring valid causal inferences.

Yours truly would however rather argue that randomization, just as econometrics, promises more than it can deliver, basically because it requires assumptions that in practice are not possible to maintain. Just as econometrics, randomization is basically a deductive method. Given the assumptions (such as manipulability, transitivity, separability, additivity, linearity, etc.) these methods deliver deductive inferences. The problem, of course, is that we will never completely know when the assumptions are right. And although randomization may contribute to controlling for confounding, it does not guarantee it, since genuine randomness presupposes infinite experimentation and we know all real experimentation is finite. And even if randomization may help to establish average causal effects, it says nothing of individual effects unless homogeneity is added to the list of assumptions. Real target systems are seldom epistemically isomorphic to our axiomatic-deductive models/systems, and even if they were, we still have to argue for the external validity of the conclusions reached from within these epistemically convenient models/systems. Causal evidence generated by randomization procedures may be valid in ‘closed’ models, but what we usually are interested in, is causal evidence in the real target system we happen to live in.
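The point about finite experimentation can be illustrated with a small simulation (Python, standard library only; all quantities invented). Randomization is unbiased on average over many hypothetical repetitions, but any one finite randomization can leave a prognostic covariate imbalanced between the arms and so report a sizeable spurious ‘effect’ even when the true effect is exactly zero:

```python
import random

random.seed(7)

def one_trial(n=50):
    # Invented setup: each subject has a prognostic covariate x, the
    # outcome is 3*x plus noise, and the true treatment effect is zero.
    # Any estimated 'effect' is pure chance imbalance in x between arms.
    x = [random.gauss(0.0, 1.0) for _ in range(2 * n)]
    random.shuffle(x)                      # ideal random assignment
    treated, control = x[:n], x[n:]
    y_t = [3.0 * xi + random.gauss(0.0, 1.0) for xi in treated]
    y_c = [3.0 * xi + random.gauss(0.0, 1.0) for xi in control]
    return sum(y_t) / n - sum(y_c) / n

estimates = [one_trial() for _ in range(2000)]
mean_est = sum(estimates) / len(estimates)
worst = max(abs(e) for e in estimates)

print(f"average over 2000 trials:      {mean_est:+.3f}")  # near zero: unbiased
print(f"largest single-trial 'effect': {worst:.2f}")      # far from zero
```

The average over 2,000 hypothetical repetitions is close to zero, but a researcher only ever runs one of these trials, and the worst of them reports a substantial effect that does not exist.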

The point of making a randomized experiment is often said to be that it ‘ensures’ that any correlation between a supposed cause and effect indicates a causal relation. This is believed to hold since randomization (allegedly) ensures that a supposed causal variable does not correlate with other variables that may influence the effect.

The problem with that simplistic view on randomization is that the claims made are both exaggerated and false:

• Even if you manage to make the assignment to treatment and control groups ideally random, the sample selection certainly is — except in extremely rare cases — not random. Even if we make a proper randomized assignment, if we apply the results to a biased sample, there is always the risk that the experimental findings will not apply. What works ‘there’ does not work ‘here.’ Randomization hence does not ‘guarantee’ or ‘ensure’ that we make the right causal claim. Although randomization may help us rule out certain possible causal claims, randomization per se does not guarantee anything!

• Even if both sampling and assignment are made in an ideal random way, performing standard randomized experiments only gives you averages. The problem here is that although we may get an estimate of the ‘true’ average causal effect, this may ‘mask’ important heterogeneous effects of a causal nature. Even if we get the right answer that the average causal effect is 0, those who are ‘treated’ may have causal effects equal to -100, and those ‘not treated’ may have causal effects equal to 100. Contemplating being treated or not, most people would probably be interested in knowing about this underlying heterogeneity and would not consider the average effect particularly enlightening.

• There is almost always a trade-off between bias and precision. In real-world settings, accepting a little bias in exchange for greater precision is often the better bargain. And — most importantly — if we have a population with sizeable heterogeneity, the average treatment effect of the sample may differ substantially from the average treatment effect in the population. If so, the value of any extrapolating inferences made from trial samples to other populations is highly questionable.

• Since most real-world experiments and trials build on a single randomization, the fact that you would avoid bias if you kept on randomizing forever does not ‘ensure’ or ‘guarantee’ that you avoid false causal conclusions in the one particular randomized experiment you actually perform. It is indeed difficult to see why thinking about what you know you will never do would make you happy about what you actually do.
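The heterogeneity point in the second bullet is easy to demonstrate numerically. The sketch below (Python, standard library only) uses the invented ±100 effects from the text: an ideally randomized trial recovers an average effect near zero even though no individual in the population has an effect anywhere near zero.

```python
import random

random.seed(0)

n = 10_000
# Invented population with two equally common hidden types: individual
# treatment effects are +100 for one type and -100 for the other.
taus = [100.0] * (n // 2) + [-100.0] * (n // 2)
random.shuffle(taus)

y_t, y_c = [], []
for tau in taus:
    baseline = random.gauss(0.0, 1.0)
    if random.random() < 0.5:          # ideal coin-flip assignment
        y_t.append(baseline + tau)     # treated: full individual effect
    else:
        y_c.append(baseline)           # control: baseline only

ate_hat = sum(y_t) / len(y_t) - sum(y_c) / len(y_c)
print(f"estimated average effect: {ate_hat:+.1f}")  # close to zero
print("individual effects:       +100 or -100, for everyone")
```

For anyone deciding whether to be treated, the average of 0 is exactly the wrong summary: the whole story is in the hidden heterogeneity that the trial design averages away.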

Randomization is not a panacea. It is not the best method for all questions and circumstances. Proponents of randomization make claims about its ability to deliver causal knowledge that are simply wrong. There are good reasons to be skeptical of the now popular — and ill-informed — view that randomization is the only valid and best method on the market. It is not.

Coronavirus and educational inequality

22 Nov, 2020 at 16:04 | Posted in Education & School | Comments Off on Coronavirus and educational inequality


Independent schools and the lack of equity

22 Nov, 2020 at 15:34 | Posted in Education & School | 1 Comment

– I don’t know what we were thinking, but school choice went wrong. So says former Social Democratic finance minister Kjell-Olof Feldt to SVT.
– The problem is equity, says Kjell-Olof Feldt. The talented pupils seek out the same schools, which drains the mediocre schools of good pupils and drags them down into an ever-growing number of problem schools. Today’s politicians must dare to put a stop to this.

Kjell-Olof Feldt was perhaps the most important figure within Swedish social democracy in paving the way for private independent schools, competition, and diversity in the welfare sector. He provoked many party colleagues by becoming chairman of the national association of independent schools (Friskolornas riksförbund).

But when he saw how the big school corporations expanded at the expense of the small independent schools, he stepped down. And today he is openly remorseful about the policies he helped bring about.


In Sweden in 2020, we let independent-school corporations with substandard operations extract sky-high profits — profits that the Swedish state gladly lets these corporations take out of our tax-financed school vouchers.

Several studies in recent years have shown that the Swedish system of for-profit schools makes our schools less and less equitable — and that this in turn contributes to ever-worsening results. If we are to remedy this, we need a school system that is not built on market-style competitive thinking, where schools, instead of educating, devote themselves chiefly to hunting for pupils and voucher money, but one in which schools are run as non-profit enterprises with quality, a clear and distinct public mandate, and the pupils’ best interests in view.

We know today that independent schools drive various forms of ethnic and social segregation, strikingly often have low teacher density, and fundamentally fail disadvantaged pupils. That these operations should be rewarded with the right to extract profits from our tax money is deeply offensive.

In a society characterized by equality, solidarity, and democracy, it should be self-evident that tax-financed schools must not be allowed to operate with profit, segregation, or religious indoctrination as their principal business idea!

Many who work in education or the care sector have found it hard to understand the Social Democrats’ stance on privatization and profit extraction in the welfare sector. For some inscrutable reason, leading Social Democrats have for many years argued that profits should be allowed in schools and care companies. The argument has often been that the form of ownership makes no difference. That is not the case. The form of ownership, and allowing profit in welfare, does matter. And the effect is negative.

History’s judgment will fall hard on the responsible politicians — not least the Social Democrats’ Göran Persson, Kjell-Olof Feldt, and all the others who have cheerfully trotted along in their footsteps — who ruthlessly and with deliberate intent sacrificed the once so proud Swedish tradition of trying to build an equitable school for all!

Unlike in every other country in the world, the leadership of Swedish social democracy has made it possible for private companies to profit from publicly financed education. And when centre-right governments further stimulated the privatization wave, the Social Democrats merely kept silent and remained passive. This despite the fact that there has all along been strong popular resistance to letting profit-seeking private companies into the welfare sector.

Social democracy continues to contribute to the hollowing-out of the school system with its support for independent schools and their profit extraction. A clearer own goal in politics is hard to find. That the Löfven government has promised not to push for profit limits in schools and care only completes this, the greatest betrayal ever of its own voters.

So yes, it certainly was a mistake to introduce independent schools, Kjell-Olof Feldt. And not only that. It was one of the greatest mistakes ever made in Swedish school history!

The Best Intentions

20 Nov, 2020 at 23:09 | Posted in Varia | 1 Comment


Bille August’s and Ingmar Bergman’s masterpiece.
With breathtakingly beautiful music by Stefan Nilsson.

Has mainstream economics — really — gone through a pluralist and empirical revolution?

20 Nov, 2020 at 18:14 | Posted in Economics | 3 Comments

In an issue of the journal Fronesis yours truly and a couple of other academics (e.g. Julie Nelson, Tony Lawson, and Phil Mirowski) made an effort to introduce its readers to heterodox economics and its critique of mainstream economics. Rather unsurprisingly, this hasn’t pleased the Swedish economics establishment.

On the mainstream economics blog Ekonomistas, professor Daniel Waldenström rode out to defend the mainstream with the nowadays standard defence — heterodox critics haven’t understood that mainstream economics today has gone through a pluralist and empirical revolution. Since heterodox critics haven’t noticed that, their views on the mainstream project are more or less irrelevant.

Well, the problem with that defence is that it has pretty little to do with reality.

When mainstream economists today try to give a picture of modern economics as a pluralist enterprise, they silently ‘forget’ to mention that the change and diversity that gets their approval only takes place within the analytic-formalistic modelling strategy that makes up the core of mainstream economics. You’re free to take your analytical formalist models and apply them to whatever you want — as long as you do it with a modelling methodology that is acceptable to the mainstream. If you do not follow this particular mathematical-deductive analytical formalism you’re not even considered to be doing economics. If you haven’t modelled your thoughts, you’re not in the economics business. But this isn’t pluralism. It’s a methodological reductionist straitjacket.

To most mainstream economists, you only have knowledge of something when you can prove it, and so ‘proving’ theories via deductions within their models is considered the only certain way to acquire new knowledge. This is, however, a view for which there is no warranted epistemological foundation. Outside mathematics and logic, all human knowledge is conjectural and fallible.

Validly deducing things in closed analytical-formalist-mathematical models — built on atomistic-reductionist assumptions — doesn’t much help us understand or explain what is taking place in the real world we happen to live in. Validly deducing things from patently unreal assumptions — that we all know are purely fictional — makes most of the modelling exercises pursued by mainstream macroeconomists rather pointless. It’s simply not the stuff that real understanding and explanation in science is made of. Had mainstream economists not been so in love with their smorgasbord of models, they would have perceived this too. Telling us that the plethora of models that make up modern macroeconomics ‘are not right or wrong,’ but ‘just more or less applicable to different situations,’ is nothing short of hand waving.

Take macroeconomics as an example. Yes, there is a proliferation of macromodels nowadays — but it almost exclusively takes place as a kind of axiomatic variation within the standard DSGE modelling framework. And — no matter how many thousands of models mainstream economists come up with, as long as they are just axiomatic variations of the same old mathematical-deductive ilk, they will not take us one single inch closer to giving us relevant and usable means to further our understanding and explanation of real economies.

Most mainstream economists seem to have no problem with this lack of fundamental diversity — there is little on offer beyond path-dependent elaborations of the mainstream canon — or with the vanishingly small real-world relevance that characterises modern macroeconomics. To these economists there is nothing basically wrong with ‘standard theory.’ As long as policy makers and economists stick to ‘standard economic analysis’ — DSGE — everything is fine. Economics is just a common language and method that makes us think straight and reach correct answers.

Most mainstream neoclassical economists are not for pluralism. They are fanatics insisting on using an axiomatic-deductive economic modelling strategy. To yours truly, this attitude is nothing but a late confirmation of Alfred North Whitehead’s complaint that “the self-confidence of learned people is the comic tragedy of civilisation.”

Daniel Waldenström — like so many other mainstream economists today — seems to maintain that new imaginative empirical methods — such as natural experiments, field experiments, lab experiments, RCTs — help us to answer questions concerning the validity of economic theories and models.

Yours truly begs to differ. There are few real reasons to share his optimism about the alleged pluralist and empirical revolution in economics.

I am basically — though not without reservations — in favour of the increased use of experiments and field studies within economics. Not least as an alternative to completely barren ‘bridge-less’ axiomatic-deductive theory models. My criticism is more about aspiration levels and what we believe that we can achieve with our mediational epistemological tools and methods in the social sciences.

The increasing use of natural and quasi-natural experiments in economics during the last couple of decades has led several prominent economists to triumphantly declare it as a major step on a recent path toward empirics, where instead of being a deductive philosophy, economics is now increasingly becoming an inductive science.

In randomised trials the researchers try to find out the causal effects that different variables of interest may have by changing circumstances randomly — a procedure somewhat (‘on average’) equivalent to the usual ceteris paribus assumption.

Besides the fact that ‘on average’ is not always ‘good enough,’ it amounts to nothing but hand waving to simpliciter assume, without argumentation, that it is tenable to treat social agents and relations as homogeneous and interchangeable entities.

Randomisation is used basically to allow the econometrician to treat the population as consisting of interchangeable and homogeneous groups (‘treatment’ and ‘control’). The regression models one arrives at by using randomised trials tell us the average effect that variations in variable X have on the outcome variable Y, without our having to explicitly control for the effects of other explanatory variables R, S, T, etc. Everything is assumed to be essentially equal except the values taken by variable X.
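A minimal sketch of why random assignment lets one ignore the other explanatory variables ‘on average’. The data-generating process below (effect sizes, the confounder R, the noise terms) is entirely assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Assumed data-generating process: the outcome Y depends on the
# treatment X (true effect = 2) and on a background variable R
# that we never explicitly control for.
R = rng.normal(0.0, 1.0, n)

# Observational world: X is *chosen* partly because of R, so R
# confounds the X -> Y relation.
X_obs = (R + rng.normal(0.0, 1.0, n) > 0).astype(float)
Y_obs = 2.0 * X_obs + 3.0 * R + rng.normal(0.0, 1.0, n)
naive = Y_obs[X_obs == 1].mean() - Y_obs[X_obs == 0].mean()

# Randomized world: X is assigned by coin flip, so 'everything else'
# (here R) is equal *on average* across the two groups.
X_rct = (rng.random(n) < 0.5).astype(float)
Y_rct = 2.0 * X_rct + 3.0 * R + rng.normal(0.0, 1.0, n)
rct = Y_rct[X_rct == 1].mean() - Y_rct[X_rct == 0].mean()

print(f"naive observational difference: {naive:.2f}")  # biased by R
print(f"randomized difference:          {rct:.2f}")    # near true 2.0
```

Note what the sketch also shows: the randomized comparison only recovers the right answer because the coin flip balances R ‘on average’ — precisely the on-average equivalence, and its limits, discussed in the text.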

Just like econometrics, randomisation promises more than it can deliver, basically because it requires assumptions that are in practice impossible to maintain.

Like econometrics, randomisation is basically a deductive method. Real target systems are seldom epistemically isomorphic to our axiomatic-deductive models/systems, and even if they were, we still have to argue for the external validity of the conclusions reached from within these epistemically convenient models/systems. Causal evidence generated by randomisation procedures may be valid in ‘closed’ models, but what we usually are interested in, is causal evidence in the real target system we happen to live in.

‘Ideally controlled experiments’ tell us with certainty what causes what effects — but only given the right ‘closures.’ Making appropriate extrapolations from (ideal, accidental, natural or quasi) experiments to different settings, populations or target systems is not easy. ‘It works there’ is no evidence for ‘it will work here.’ Causes deduced in an experimental setting still have to show that they come with an export warrant to the target population/system. The causal background assumptions made have to be justified, and without licences to export, the value of ‘rigorous’ and ‘precise’ methods — and ‘on-average-knowledge’ — is despairingly small.
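The export-warrant problem can be made concrete with a stylised calculation. The effect sizes and type shares below are assumed purely for illustration:

```python
# Why an effect estimated 'there' need not travel 'here': the treatment
# effect differs between two types of units, and the trial sample and
# the target population contain the types in different proportions.
effect_type_a = 10.0   # assumed effect for type-A units
effect_type_b = -2.0   # assumed effect for type-B units

share_a_trial = 0.8    # trial sample: mostly type A
share_a_target = 0.2   # target population: mostly type B

ate_trial = share_a_trial * effect_type_a + (1 - share_a_trial) * effect_type_b
ate_target = share_a_target * effect_type_a + (1 - share_a_target) * effect_type_b

print(f"ATE in the trial sample:      {ate_trial:.1f}")   # 7.6
print(f"ATE in the target population: {ate_target:.1f}")  # 0.4
```

Both numbers are internally valid averages; only the composition of the two groups differs. That is exactly why extrapolation requires justified causal background assumptions rather than methodological rigour alone.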

So, no, I find it hard to share Waldenström’s and other mainstream economists’ enthusiasm and optimism on the value of the latest ’empirical’ trends in mainstream economics. I would argue that although different ’empirical’ approaches have been — more or less — integrated into mainstream economics, there is still a long way to go before economics has become a truly empirical science.

Heterodox critics are not ill-informed about the development of mainstream economics. Its methodology is still the same basic neoclassical one. It’s still non-pluralist. And although more and more economists work within the field of ’empirical’ economics, the foundation and ‘self-evident’ benchmark is still of the neoclassical deductive-axiomatic ilk.

Sad to say, but we still have to wait for the revolution that will make economics an empirical and truly pluralist and relevant science. Until then — if you’re familiar with the Swedish language — why not read Fronesis and get a glimpse of the future to come? Mainstream economics belongs to the past.
