Diego Maradona (1960-2020)

27 Nov, 2020 at 19:28 | Posted in Economics | 2 Comments

Maradona was more than just an extraordinary footballer. He was also a complicated social icon. That further distinguishes him from other footballers, though Pele also has some of that…

70 facts about Argentina legend Diego Maradona | Goal.comHe was both rewarded by and terribly exploited by the system. The system treated him like a “race horse”. They wanted him to play at all cost and pumped him with drugs. They did not care about the physical and psychological costs to him. That contributed to his addiction …

He came from great poverty, from a shanty town. He never hid that and insisted on keeping the connection. I’m told he had tattoos of Fidel Castro and Che Guevara. He also had a relationship with the Pope (Francisco, not Benedict II or John Paul II). That politics speaks well of him, even if it was not carried through with the consistency of an intellectual or political activist …

Did you know that in Argentina, before inflation made them irrelevant, they used to call the 10 (diez) peso note a “Diego”? That is how much people loved him.

Thomas Palley

Logic and truth in economics

26 Nov, 2020 at 16:52 | Posted in Economics | 6 Comments

Logic yields validity, not truth. - Post by Ziya on BoldomaticTo be ‘analytical’ and ‘logical’ is something most people find recommendable. These words have a positive connotation. Scientists think deeper than most other people because they use ‘logical’ and ‘analytical’ methods. In dictionaries, logic is often defined as “reasoning conducted or assessed according to strict principles of validity” and ‘analysis’ as having to do with “breaking something down.”

But that’s not the whole picture. As used in science, analysis usually means something more specific. It means to separate a problem into its constituent elements so to reduce complex — and often complicated — wholes into smaller (simpler) and more manageable parts. You take the whole and break it down (decompose) into its separate parts. Looking at the parts separately one at a time you are supposed to gain a better understanding of how these parts operate and work. Built on that more or less ‘atomistic’ knowledge you are then supposed to be able to predict and explain the behaviour of the complex and complicated whole.

In economics, that means you take the economic system and divide it into its separate parts, analyse these parts one at a time, and then after analysing the parts separately, you put the pieces together.

The ‘analytical’ approach is typically used in economic modelling, where you start with a simple model with few isolated and idealized variables. By ‘successive approximations,’ you then add more and more variables and finally get a ‘true’ model of the whole.

This may sound like a convincing and good scientific approach.

But there is a snag!

The procedure only really works when you have a machine-like whole/system/economy where the parts appear in fixed and stable configurations. And if there is anything we know about reality, it is that it is not a machine! The world we live in is not a ‘closed’ system. On the contrary. It is an essentially ‘open’ system. Things are uncertain, relational, interdependent, complex, and ever-changing.

Without assuming that the underlying structure of the economy that you try to analyze remains stable/invariant/constant, there is no chance the equations of the model remain constant. That’s the very rationale why economists use (often only implicitly) the assumption of ceteris paribus. But — nota bene — this can only be a hypothesis. You have to argue the case. If you cannot supply any sustainable justifications or warrants for the adequacy of making that assumption, then the whole analytical economic project becomes pointless non-informative nonsense. Not only have we to assume that we can shield off variables from each other analytically (external closure). We also have to assume that each and every variable themselves are amenable to be understood as stable and regularity producing machines (internal closure). Which, of course, we know is as a rule not possible. Some things, relations, and structures are not analytically graspable. Trying to analyse parenthood, marriage, employment, etc, piece by piece doesn’t make sense. To be a chieftain, a capital-owner, or a slave is not an individual property of an individual. It can come about only when individuals are integral parts of certain social structures and positions. Social relations and contexts cannot be reduced to individual phenomena. A cheque presupposes a banking system and being a tribe-member presupposes a tribe.  Not taking account of this in their ‘analytical’ approach, economic ‘analysis’ becomes uninformative nonsense.

Using ‘logical’ and ‘analytical’ methods in social sciences means that economists succumb to the fallacy of composition — the belief that the whole is nothing but the sum of its parts.  In society and in the economy this is arguably not the case. An adequate analysis of society and economy a fortiori cannot proceed by just adding up the acts and decisions of individuals. The whole is more than a sum of parts.

Mainstream economics is built on using the ‘analytical’ method. The models built with this method presuppose that social reality is ‘closed.’ Since social reality is known to be fundamentally ‘open,’ it is difficult to see how models of that kind can explain anything about what happens in such a universe. Postulating closed conditions to make models operational and then impute these closed conditions to society’s real structure is an unwarranted procedure that does not take necessary ontological considerations seriously.

In face of the kind of methodological individualism and rational choice theory that dominate mainstream economics we have to admit that even if knowing the aspirations and intentions of individuals are necessary prerequisites for giving explanations of social events, they are far from sufficient. Even the most elementary ‘rational’ actions in society presuppose the existence of social forms that it is not possible to reduce to the intentions of individuals. Here, the ‘analytical’ method fails again.

The overarching flaw with the ‘analytical’ economic approach using methodological individualism and rational choice theory is basically that they reduce social explanations to purportedly individual characteristics. But many of the characteristics and actions of the individual originate in and are made possible only through society and its relations. Society is not a Wittgensteinian ‘Tractatus-world’ characterized by atomistic states of affairs. Society is not reducible to individuals, since the social characteristics, forces, and actions of the individual are determined by pre-existing social structures and positions. Even though society is not a volitional individual, and the individual is not an entity given outside of society, the individual (actor) and the society (structure) have to be kept analytically distinct. They are tied together through the individual’s reproduction and transformation of already given social structures.

Since at least the marginal revolution in economics in the 1870s it has been an essential feature of economics to ‘analytically’ treat individuals as essentially independent and separate entities of action and decision. But, really, in such a complex, organic and evolutionary system as an economy, that kind of independence is a deeply unrealistic assumption to make. To simply assume that there is strict independence between the variables we try to analyze doesn’t help us the least if that hypothesis turns out to be unwarranted.

To be able to apply the ‘analytical’ approach, economists have to basically assume that the universe consists of ‘atoms’ that exercise their own separate and invariable effects in such a way that the whole consist of nothing but an addition of these separate atoms and their changes. These simplistic assumptions of isolation, atomicity, and additivity are, however, at odds with reality. In real-world settings, we know that the ever-changing contexts make it futile to search for knowledge by making such reductionist assumptions. Real-world individuals are not reducible to contentless atoms and so not susceptible to atomistic analysis. The world is not reducible to a set of atomistic ‘individuals’ and ‘states.’ How variable X works and influence real-world economies in situation A cannot simply be assumed to be understood or explained by looking at how X works in situation B. Knowledge of X probably does not tell us much if we do not take into consideration how it depends on Y and Z. It can never be legitimate just to assume that the world is ‘atomistic.’ Assuming real-world additivity cannot be the right thing to do if the things we have around us rather than being ‘atoms’ are ‘organic’ entities.

If we want to develop new and better economics we have to give up on the single-minded insistence on using a deductivist straitjacket methodology and the ‘analytical’ method. To focus scientific endeavours on proving things in models is a gross misapprehension of the purpose of economic theory. Deductivist models and ‘analytical’ methods disconnected from reality are not relevant to predict, explain or understand real-world economies

To have ‘consistent’ models and ‘valid’ evidence is not enough. What economics needs are real-world relevant models and sound evidence. Aiming only for ‘consistency’ and ‘validity’ is setting the economics aspirations level too low for developing a realist and relevant science.

Economics is not mathematics or logic. It’s about society. The real world.

Models may help us think through problems. But we should never forget that the formalism we use in our models is not self-evidently transportable to a largely unknown and uncertain reality. The tragedy with mainstream economic theory is that it thinks that the logic and mathematics used are sufficient for dealing with our real-world problems. They are not! Model deductions based on questionable assumptions can never be anything but pure exercises in hypothetical reasoning.

The world in which we live is inherently uncertain and quantifiable probabilities are the exception rather than the rule. To every statement about it is attached a ‘weight of argument’ that makes it impossible to reduce our beliefs and expectations to a one-dimensional stochastic probability distribution. If “God does not play dice” as Einstein maintained, I would add “nor do people.” The world as we know it has limited scope for certainty and perfect knowledge. Its intrinsic and almost unlimited complexity and the interrelatedness of its organic parts prevent the possibility of treating it as constituted by ‘legal atoms’ with discretely distinct, separable and stable causal relations. Our knowledge accordingly has to be of a rather fallible kind.

If the real world is fuzzy, vague and indeterminate, then why should our models build upon a desire to describe it as precise and predictable? Even if there always has to be a trade-off between theory-internal validity and external validity, we have to ask ourselves if our models are relevant.

‘Human logic’ has to supplant the classical — formal — logic of deductivism if we want to have anything of interest to say of the real world we inhabit. Logic is a marvellous tool in mathematics and axiomatic-deductivist systems, but a poor guide for action in real-world systems, in which concepts and entities are without clear boundaries and continually interact and overlap. In this world, I would say we are better served with a methodology that takes into account that the more we know, the more we know we do not know.

Pandemic depression antidote (XXV)

26 Nov, 2020 at 15:55 | Posted in Varia | Leave a comment

Sydsvenskan — grodors plums och ankors plask på ledarsidan

25 Nov, 2020 at 19:58 | Posted in Politics & Society | 2 Comments

I Sydsvenskans huvudledare kunde vi tidigare i år läsa följande:

ReineKissGång på gång framhåller [73-punkts]överenskommelsen också att fokus ska ligga på kvalitet — inte driftsform — i den politiska styrningen av olika välfärdstjänster.

Detta är givet ur liberalt perspektiv. Trots de brister som finns inom vissa verksamheter har avregleringarna på det hela taget gett medborgarna bättre service, utbud och valfrihet som få idag lär vara beredda att avstå från.

Herre du min milde! Och detta grodors plums och ankors plask ska man behöva läsa år 2020.

När man på 1990-talet påbörjade systemskiftet inom den svenska välfärdssektorn anfördes ofta som argument för privatiseringarna att man skulle slippa den byråkratiska logikens kostnader i form av regelverk, kontroller och uppföljningar.

Konkurrensen — denna marknadsfundamentalismens panacé — skulle göra driften effektivare och höja verksamheternas kvalitet. Marknadslogiken skulle tvinga bort de ‘byråkratiska’ och tungrodda offentliga verksamheterna och kvar skulle bara finnas de bra företagen som ‘valfriheten’ möjliggjort.

När den panglossianska privatiseringsvåtdrömmen visar sig vara en mardröm, tror tyvärr politiker och ledarskribenter att just det som man ville bli av med – regelverk och ‘byråkratisk’ tillsyn och kontroll – skulle vara lösningen.

Man tager sig för pannan – och det av många skäl!

För ska man genomföra de åtgärdspaket som förs fram undrar man ju hur det går med den där effektivitetsvinsten. Kontroller, uppdragsspecifikationer, inspektioner m m kostar ju pengar — och hur mycket överskott blir det då av privatiseringarna när dessa kostnader också ska räknas hem i kostnads- intäktsanalysen? Och hur mycket värd är den där ‘valfriheten’ när vi ser hur den gång på gång bara resulterar i verksamhet där vinst genereras genom kostnadsnedskärningar och sänkt kvalitet?

Ansvariga socialdemokratiska politiker — och inte minst Göran Persson, Kjell-Olof Feldt och alla andra som i deras fotspår glatt traskat patrull — har i decennier hänsynslöst och med berått mod låtit offra den en gång så stolta svenska traditionen av att försöka bygga en jämlik skola och vård för alla!

Till skillnad från i stort sett alla andra länder i världen har den svenska socialdemokratins ledning gjort det möjligt för privata företag att göra vinst på offentligt finansierad undervisning och vård. Och när borgerliga regeringar ytterligare stimulerat privatiseringsvågen har socialdemokraterna bara tigit och varit passiva. Och detta trots att det hela tiden funnits ett starkt folkligt motstånd mot att släppa in vinstsyftande privata företag i välfärdssektorn.

Socialdemokratin har idag har historiskt låga väljarsiffror. Mot den här bakgrunden får detta få att förvånas. När man för en politik som främst slår mot den grupp av människor man påstår sig värna om, är det inte så konstigt att väljarna flyr.

Att socialdemokratin fortsätter bidra till skolans och vårdens urholkning med sitt stöd för privata vårdföretag och friskolor och deras vinstuttag är så klart inte den enda anledningen. Men säkert en av de viktigare. Ett tydligare självmål inom politiken är svårt att hitta.

‘Mathiness’ in economics

25 Nov, 2020 at 16:35 | Posted in Economics | Leave a comment

 blah_blahIn practice, what math does is let macro-economists locate the FWUTVs [facts with unknown truth values] farther away from the discussion of identification … Relying on a micro-foundation lets an author say, “Assume A, assume B, …  blah blah blah … And so we have proven that P is true. Then the model is identified.” …

Distributional assumptions about error terms are a good place to bury things because hardly anyone pays attention to them. Moreover, if a critic does see that this is the identifying assumption, how can she win an argument about the true expected value the level of aether? If the author can make up an imaginary variable, “because I say so” seems like a pretty convincing answer to any question about its properties.

Paul Romer

Yes, indeed, modern mainstream economics — and especially its mathematical-statistical operationalization in the form of econometrics — fails miserably over and over again. ‘Modern’ mainstream economics is based on the belief that deductive-axiomatic modelling is a sufficient guide to truth. That belief is, however, totally unfounded as long as no proofs are supplied for us to believe in the assumptions on which the model-based deductions and conclusions build. ‘Mathiness’ masquerading as science is often used by mainstream economists to hide the problematic character of the assumptions used in their theories and models. But — without showing the model assumptions to be realistic and relevant, that kind of economics indeed, as Romer puts it, produces nothing but “blah blah blah.”

Radical Uncertainty' Review: The Dismal Overreachers - WSJ The belief that mathematical reasoning is more rigorous and precise than verbal reasoning, which is thought to be susceptible to vagueness and ambiguity, is pervasive in economics. In a celebrated attack on … Paul Krugman, the Chicago economist John Cochrane wrote, ‘Math in economics serves to keep the logic straight, to make sure that the “then” really does follow the “if,” which it so frequently does not if you just write prose.’ But there is a difficulty here which appears to be much more serious in economics than it is in natural sciences: that of relating variables which are written down and manipulated in mathematical models to things that can be identified and measured in the real world … Concepts such as ‘investment specific technology shocks’ and ‘wage markup’ which are no more observable, or well defined, than toves or borogoves. They exist only within the model, which is rigorous only in the same sense as ‘Jabberwocky’ is rigorous; the meaning of each term is defined by the author, and the logic of the argument follows tautologically from these definitions.

Without strong evidence, all kinds of absurd claims and nonsense may pretend to be science. Using math can never be a substitute for thinking. Or as Romer has it in his showdown with ‘post-real’ economics:

Math cannot establish the truth value of a fact. Never has. Never will.

Pandemic depression antidote (XXIV)

25 Nov, 2020 at 14:25 | Posted in Varia | Leave a comment


Do RCTs in education really meet the ‘gold standard’?

23 Nov, 2020 at 17:56 | Posted in Education & School | Leave a comment


Heterogeneity and interaction is not only an external validity problem when trying to ‘export’ regression results to different times or different target populations. It is also often an internal problem to the millions of regression estimates that economists produce every year.

‘Ideally controlled experiments’ tell us with certainty what causes what effects — but only given the right ‘closures.’ Making appropriate extrapolations from (ideal, accidental, natural or quasi) experiments to different settings, populations or target systems, is not easy. And since trials usually are not repeated, unbiasedness and balance on average over repeated trials says nothing about any one trial. ‘It works there’ is no evidence for ‘it will work here.’ Causes deduced in an experimental setting still have to show that they come with an export-warrant to the target population/system. The causal background assumptions made have to be justified, and without licenses to export, the value of ‘rigorous’ and ‘precise’ methods — and ‘on-average-knowledge’ — is despairingly small.

RCTs have very little reach beyond giving descriptions of what has happened in the past. From the perspective of the future and for policy purposes they are as a rule of limited value since they cannot tell us what background factors were held constant when the trial intervention was being made.

RCTs usually do not provide evidence that the results are exportable to other target systems. RCTs cannot be taken for granted to give generalizable results. That something works somewhere for someone is no warranty for us to believe it to work for us here or even that it works generally.

Econometrics in the 21st century

22 Nov, 2020 at 19:05 | Posted in Statistics & Econometrics | 6 Comments


If you don’t have time to listen to all of the presentations (a couple of them are actually quite uninformative) you should at least scroll forward to 1:39:25 and listen to what Angus Deaton has to say. As so often, he is spot on!

As Deaton notes, evidence-based theories and policies are highly valued nowadays. Randomization is supposed to control for bias from unknown confounders. The received opinion is that evidence based on randomized experiments, therefore, is the best.

More and more economists and econometricians have also lately come to advocate randomization as the principal method for ensuring being able to make valid causal inferences.

Yours truly would however rather argue that randomization, just as econometrics, promises more than it can deliver, basically because it requires assumptions that in practice are not possible to maintain. Just as econometrics, randomization is basically a deductive method. Given the assumptions (such as manipulability, transitivity, separability, additivity, linearity, etc.) these methods deliver deductive inferences. The problem, of course, is that we will never completely know when the assumptions are right. And although randomization may contribute to controlling for confounding, it does not guarantee it, since genuine randomness presupposes infinite experimentation and we know all real experimentation is finite. And even if randomization may help to establish average causal effects, it says nothing of individual effects unless homogeneity is added to the list of assumptions. Real target systems are seldom epistemically isomorphic to our axiomatic-deductive models/systems, and even if they were, we still have to argue for the external validity of the conclusions reached from within these epistemically convenient models/systems. Causal evidence generated by randomization procedures may be valid in ‘closed’ models, but what we usually are interested in, is causal evidence in the real target system we happen to live in.

The point of making a randomized experiment is often said to be that it ‘ensures’ that any correlation between a supposed cause and effect indicates a causal relation. This is believed to hold since randomization (allegedly) ensures that a supposed causal variable does not correlate with other variables that may influence the effect.

The problem with that simplistic view on randomization is that the claims made are both exaggerated and false:

• Even if you manage to do the assignment to treatment and control groups ideally random, the sample selection certainly is — except in extremely rare cases — not random. Even if we make a proper randomized assignment, if we apply the results to a biased sample, there is always the risk that the experimental findings will not apply. What works ‘there,’ does not work ‘here.’ Randomization hence does not ‘guarantee ‘ or ‘ensure’ making the right causal claim. Although randomization may help us rule out certain possible causal claims, randomization per se does not guarantee anything!

• Even if both sampling and assignment are made in an ideal random way, performing standard randomized experiments only give you averages. The problem here is that although we may get an estimate of the ‘true’ average causal effect, this may ‘mask’ important heterogeneous effects of a causal nature. Although we get the right answer of the average causal effect being 0, those who are ‘treated’ may have causal effects equal to -100, and those ‘not treated’ may have causal effects equal to 100. Contemplating being treated or not, most people would probably be interested in knowing about this underlying heterogeneity and would not consider the average effect particularly enlightening.

• There is almost always a trade-off between bias and precision. In real-world settings, a little bias often does not overtrump greater precision. And — most importantly — in case we have a population with sizeable heterogeneity, the average treatment effect of the sample may differ substantially from the average treatment effect in the population. If so, the value of any extrapolating inferences made from trial samples to other populations is highly questionable.

• Since most real-world experiments and trials build on performing one single randomization, what would happen if you kept on randomizing forever, does not help you to ‘ensure’ or ‘guarantee’ that you do not make false causal conclusions in the one particular randomized experiment you actually do perform. It is indeed difficult to see why thinking about what you know you will never do, would make you happy about what you actually do.

Randomization is not a panacea. It is not the best method for all questions and circumstances. Proponents of randomization make claims about its ability to deliver causal knowledge that is simply wrong. There are good reasons to be skeptical of the now popular — and ill-informed — view that randomization is the only valid and best method on the market. It is not.

Coronavirus and educational inequality

22 Nov, 2020 at 16:04 | Posted in Education & School | Leave a comment


Friskolorna och den bristande likvärdigheten

22 Nov, 2020 at 15:34 | Posted in Education & School | 1 Comment

– Jag vet inte hur vi tänkte, men det blev fel med valfriheten i skolan. Det säger nu tidigare S-finansministern Kjell-Olof Feldt till SVT.
– Problemet är likvärdigheten, menar Kjell-Ollof Feldt. De duktiga eleverna söker sig till samma skolor, det gör att de halvbra skolorna dräneras på bra elever, och dras ner till fler och fler problemskolor. Det måste dagens politiker våga sätta stopp för.

skolstartKjell Olof Feldt var den kanske viktigaste personen inom socialdemokratin som banade väg för privata friskolor, konkurrens och mångfald inom välfärden. Han provocerade många partivänner genom att bli ordförande i friskolornas riksförbund.

Men när han såg hur de stora koncernerna bredde ut sig på de små friskolornas bekostnad hoppade han av. Och i dag är han öppet ångerfull över den politik han varit med om att föra.


I Sverige låter vi år 2020 friskolekoncerner med undermålig verksamhet få plocka ut skyhöga vinster — vinster som den svenska staten gladeligen låter dessa koncerner ta av vår skattefinansierade skolpeng.

skolpengEtt flertal undersökningar har på senare år  visat att det system vi har i Sverige med vinstdrivande skolor leder till att våra skolor blir allt mindre likvärdiga — och att detta i sin tur bidrar till allt sämre resultat. Ska vi råda bot på detta måste vi ha ett skolsystem som inte bygger på ett marknadsmässigt konkurrenstänk där skolor istället för att utbilda främst ägnar sig åt att ragga elever och skolpeng, utan drivs som icke-vinstdrivna verksamheter med kvalitet och ett klart och tydligt samhällsuppdrag och elevernas bästa för ögonen.

Vi vet idag att friskolor driver på olika former av etnisk och social segregation, påfallande ofta har låg lärartäthet och i grund och botten sviker resurssvaga elever. Att dessa verksamheter ska premieras med att få plocka ut vinster på våra skattepengar är djupt stötande.

I ett samhälle präglat av jämlikhet, solidaritet och demokrati borde det vara självklart att skattefinansierade skolor inte ska få drivas med vinst, segregation eller religiös indoktrinering som främsta affärsidé!

Många som är verksamma inom skolvärlden eller vårdsektorn har haft svårt att förstå socialdemokratins inställning till privatiseringar och vinstuttag i välfärdssektorn. Av någon outgrundlig anledning har ledande socialdemokrater under många år pläderat för att vinster ska vara tillåtna i skolor och vårdföretag. Ofta har argumentet varit att driftsformen inte har någon betydelse. Så är inte fallet. Driftsform och att tillåta vinst i välfärden har visst betydelse. Och den är negativ.

Historiens dom ska falla hård på ansvariga politiker — och inte minst på socialdemokratins Göran Persson, Kjell-Olof Feldt och alla andra som i deras fotspår glatt traskat patrull — som hänsynslöst och med berått mod låtit offra den en gång så stolta svenska traditionen av att försöka bygga en jämlik skola för alla!

Till skillnad från i alla andra länder i världen har den svenska socialdemokratins ledning gjort det möjligt för privata företag att göra vinst på offentligt finansierad undervisning. Och när borgerliga regeringar ytterligare stimulerat privatiseringsvågen har socialdemokraterna bara tigit och varit passiva. Och detta trots att det hela tiden funnits ett starkt folkligt motstånd  mot att släppa in vinstsyftande privata företag i välfärdssektorn.

Socialdemokratin fortsätter bidra till skolans urholkning med sitt stöd för friskolor och deras vinstuttag. Ett tydligare självmål inom politiken är svårt att hitta. Att Löfvenregeringen lovat att man inte ska verka för vinstbegränsning inom skola och omvård fullbordar bara detta det största sveket någonsin mot de egna väljarna.

Så, ja visst visst var det fel att införa friskolor, Kjell-Olof Feldt. Och inte nog med det. Det var ett av de största fel som någonsin begåtts i svensk skolhistoria!

The Best Intentions

20 Nov, 2020 at 23:09 | Posted in Varia | 1 Comment


Bille August’s and Ingmar Bergman’s masterpiece.
With breathtakingly beautiful music by Stefan Nilsson.

Has mainstream economics — really — gone through a pluralist and empirical revolution?

20 Nov, 2020 at 18:14 | Posted in Economics | 3 Comments

1390045613 In an issue of the journal Fronesis yours truly and a couple of other academics (e.g. Julie Nelson, Tony Lawson, and Phil Mirowski) made an effort at introducing its readers to heterodox economics and its critique of mainstream economics. Rather unsurprisingly this hasn’t pleased the Swedish economics establishment.

On the mainstream economics blog Ekonomistas, professor Daniel Waldenström rode out to defend the mainstream with the nowadays standard defence — heterodox critics haven’t understood that mainstream economics today has gone through a pluralist and empirical revolution. Since heterodox critics haven’t noticed that, their views on the mainstream project is more or less irrelevant.

Well, the problem with that defence is that it has pretty little with reality to do.

When mainstream economists today try to give a picture of modern economics as a pluralist enterprise, they silently ‘forget’ to mention that the change and diversity that gets their approval only takes place within the analytic-formalistic modelling strategy that makes up the core of mainstream economics.logo_netzwerk_500px_transparent You’re free to take your analytical formalist models and apply it to whatever you want — as long as you do it with a modeling methodology that is acceptable to the mainstream. If you do not follow this particular mathematical-deductive analytical formalism you’re not even considered doing economics. If you haven’t modeled your thoughts, you’re not in the economics business. But this isn’t pluralism. It’s a methodological reductionist straightjacket.

To most mainstream economists you only have knowledge of something when you can prove it, and so ‘proving’ theories with their models via deductions is considered the only certain way to acquire new knowledge. This is, however, a view for which there is no warranted epistemological foundation. Outside mathematics and logics, all human knowledge is conjectural and fallible.

Validly deducing things in closed analytical-formalist-mathematical models — built on atomistic-reductionist assumptions — doesn’t much help us understand or explain what is taking place in the real world we happen to live in. Validly deducing things from patently unreal assumptions — that we all know are purely fictional — makes most of the modelling exercises pursued by mainstream macroeconomists rather pointless. It’s simply not the stuff that real understanding and explanation in science is made of. Had mainstream economists not been so in love with their smorgasbord of models, they would have perceived this too. Telling us that the plethora of models that make up modern macroeconomics ‘are not right or wrong,’ but ‘just more or less applicable to different situations,’ is nothing short of hand waving.

Take macroeconomics as an example. Yes, there is a proliferation of macromodels nowadays — but it almost exclusively takes place as a kind of axiomatic variation within the standard DSGE modelling framework. And — no matter how many thousands of models mainstream economists come up with, as long as they are just axiomatic variations of the same old mathematical-deductive ilk, they will not take us one single inch closer to giving us relevant and usable means to further our understanding and explanation of real economies.

Most mainstream economists seem to have no problem with this lack of fundamental diversity — not just path-dependent elaborations of the mainstream canon — and the vanishingly little real world relevance that characterise modern macroeconomics. To these economists there is nothing basically wrong with ‘standard theory.’ As long as policy makers and economists stick to ‘standard economic analysis’ — DSGE — everything is fine. Economics is just a common language and method that makes us think straight and reach correct answers.

Most mainstream neoclassical economists are not for pluralism. They are fanatics insisting on using an axiomatic-deductive economic modelling strategy. To yours truly, this attitude is nothing but a late confirmation of Alfred North Whitehead’s complaint that “the self-confidence of learned people is the comic tragedy of civilisation.”

Daniel Waldenström — like so many other mainstream economists today — seems to maintain that new imaginative empirical methods — such as natural experiments, field experiments, lab experiments, RCTs — help us to answer questions concerning the validity of economic theories and models.

Yours truly begs to differ. There are few real reasons to share his optimism about the alleged pluralist and empirical revolution in economics.

I am basically — though not without reservations — in favour of the increased use of experiments and field studies within economics. Not least as an alternative to completely barren ‘bridge-less’ axiomatic-deductive theory models. My criticism is more about aspiration levels and what we believe that we can achieve with our mediational epistemological tools and methods in the social sciences.

The increasing use of natural and quasi-natural experiments in economics during the last couple of decades has led several prominent economists to triumphantly declare it a major step on a recent path toward empirics, where instead of being a deductive philosophy, economics is now increasingly becoming an inductive science.

In randomised trials the researchers try to find out the causal effects that different variables of interest may have by changing circumstances randomly — a procedure somewhat (‘on average’) equivalent to the usual ceteris paribus assumption.

Besides the fact that ‘on average’ is not always ‘good enough,’ it amounts to nothing but hand waving to simpliciter assume, without argumentation, that it is tenable to treat social agents and relations as homogeneous and interchangeable entities.

Randomisation is used basically to allow the econometrician to treat the population as consisting of interchangeable and homogeneous groups (‘treatment’ and ‘control’). The regression models one arrives at by using randomised trials tell us the average effect that variations in variable X have on the outcome variable Y, without having to explicitly control for effects of other explanatory variables R, S, T, etc. Everything is assumed to be essentially equal except the values taken by variable X.
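The averaging logic described here can be illustrated with a small simulation — purely a sketch, with all numbers, names, and the outcome function invented for the illustration. Because treatment is assigned by a coin flip, the unobserved background characteristics are balanced between the two groups on average, so a simple difference in group means recovers the average effect of X on Y without any explicit controls:

```python
import random

random.seed(1)

# Hypothetical population: the outcome Y depends on treatment X and on an
# unobserved 'background' characteristic the researcher never measures.
def outcome(x, background):
    return 2.0 * x + background  # true average treatment effect = 2.0

population = [random.gauss(0, 1) for _ in range(100_000)]

# Randomisation: each agent is assigned to treatment (X=1) or control (X=0)
# by a coin flip, independently of their background characteristics.
treated, control = [], []
for b in population:
    if random.random() < 0.5:
        treated.append(outcome(1, b))
    else:
        control.append(outcome(0, b))

# The difference in group means estimates the *average* effect of X on Y
# without explicitly controlling for the background variable.
ate = sum(treated) / len(treated) - sum(control) / len(control)
print(round(ate, 2))  # close to the true average effect of 2.0
```

Note that the sketch also shows the limits of the method: what comes out is an ‘on average’ number for interchangeable agents, which is exactly the homogeneity assumption questioned above.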

Just like econometrics, randomisation promises more than it can deliver, basically because it requires assumptions that in practice are not possible to maintain.

Like econometrics, randomisation is basically a deductive method. Real target systems are seldom epistemically isomorphic to our axiomatic-deductive models/systems, and even if they were, we still have to argue for the external validity of the conclusions reached from within these epistemically convenient models/systems. Causal evidence generated by randomisation procedures may be valid in ‘closed’ models, but what we usually are interested in, is causal evidence in the real target system we happen to live in.

‘Ideally controlled experiments’ tell us with certainty what causes what effects — but only given the right ‘closures.’ Making appropriate extrapolations from (ideal, accidental, natural or quasi) experiments to different settings, populations or target systems, is not easy. ‘It works there’ is no evidence for ‘it will work here.’ Causes deduced in an experimental setting still have to show that they come with an export-warrant to the target population/system. The causal background assumptions made have to be justified, and without licenses to export, the value of ‘rigorous’ and ‘precise’ methods — and ‘on-average-knowledge’ — is despairingly small.

So, no, I find it hard to share Waldenström’s and other mainstream economists’ enthusiasm and optimism on the value of the latest ’empirical’ trends in mainstream economics. I would argue that although different ’empirical’ approaches have been — more or less — integrated into mainstream economics, there is still a long way to go before economics has become a truly empirical science.

Heterodox critics are not ill-informed about the development of mainstream economics. Its methodology is still the same basic neoclassical one. It’s still non-pluralist. And although more and more economists work within the field of ’empirical’ economics, the foundation and ‘self-evident’ benchmark is still of the neoclassical deductive-axiomatic ilk.

Sad to say, but we still have to wait for the revolution that will make economics an empirical and truly pluralist and relevant science. Until then — if you’re familiar with the Swedish language — why not read Fronesis and get a glimpse of the future to come? Mainstream economics belongs to the past.

Paul Samuelson and the ergodic hypothesis

20 Nov, 2020 at 10:57 | Posted in Economics | 7 Comments

Paul Samuelson claimed that the “ergodic hypothesis” is essential for advancing economics from the realm of history to the realm of science.

But is it really tenable to assume that ergodicity is essential to economics?

The answer can only be – as I have argued here – NO WAY!

Obviously yours truly is far from the only scientist being critical of Paul Samuelson. This is what Ole Peters writes in a highly interesting article on Samuelson’s stance on the ergodic hypothesis:

Samuelson said that we should accept the ergodic hypothesis because if a system is not ergodic you cannot treat it scientifically. First of all, that’s incorrect, although I think I understand how he ended up with this impression: ergodicity means that a system is very insensitive to initial conditions or perturbations and details of the dynamics, and that makes it easy to make universal statements about such systems …

Another problem with Samuelson’s statement is the logic: we should accept this hypothesis because then we can make universal statements. But before we make any hypothesis—even one that makes our lives easier—we should check whether we know it to be wrong. In this case, there’s nothing to hypothesize. Financial and economic systems are non-ergodic. And if that means we can’t say anything meaningful, then perhaps we shouldn’t try to make meaningful claims. Well, perhaps we can speak for entertainment, but we cannot claim that it’s meaningful.

In what sense would saying something that’s patently false be “meaningful,” or “scientific” rather than “historical”? You can see where I’m going with this. Important models that economists use are not ergodic, so what’s this debate about?…

Samuelson’s comment makes little sense. A hypothesis is about something we don’t know, but in the case of finance models this is something we do know. There’s no reason to hypothesize—the system is not ergodic. It’s like hypothesizing that 3 times 4 is 0 because it makes the mathematics simpler. But I can calculate that the product is 12. Of course, a formalism that’s based on the 3-times-4 hypothesis will run into trouble sooner or later. In economics, that happens with the ergodic hypothesis when we think about risk, or financial stability. Or inequality, as we’re just working out at the moment.

And this is Nassim Taleb’s verdict on Samuelson’s view on science:

However, if you believe in free will you can’t truly believe in social sci­ence and economic projection. You cannot predict how people will act. Except, of course, if there is a trick, and that trick is the cord on which neoclassical economics is suspended. You simply assume that individuals will be rational in the future and thus act predictably. There is a strong link between rationality, predictability, and mathematical tractability …

In orthodox economics, rationality became a straitjacket … This led to mathematical techniques such as “maximization,” or “optimization,” on which Paul Samuelson built much of his work … This optimization set back social science by reducing it from the intellectual and reflective discipline that it was becoming to an attempt at an “exact science.” By “exact science,” I mean a second-rate engineering problem for those who want to pretend that they are in the physics department— so-called physics envy. In other words, an intellectual fraud …

The tragedy is that Paul Samuelson, a quick mind, is said to be one of the most intelligent scholars of his generation. This was clearly a case of very badly invested intelli­gence. Characteristically, Samuelson intimidated those who questioned his techniques with the statement “Those who can, do science, others do methodology.” If you knew math, you could “do science” … Alas, it turns out that it was Samuelson and most of his followers who did not know much math, or did not know how to use what math they knew, how to apply it to reality. They only knew enough math to be blinded by it.

Tragically, before the proliferation of empirically blind idiot savants, interesting work had been begun by true thinkers, the likes of J. M. Keynes, Friedrich Hayek, and the great Benoît Mandelbrot, all of whom were displaced because they moved economics away from the precision of second-rate physics. Very sad.

Joke of the year

20 Nov, 2020 at 07:33 | Posted in Politics & Society | 1 Comment


Using ‘small-world’ models in a large world

19 Nov, 2020 at 10:14 | Posted in Economics | 3 Comments

Radical uncertainty arises when we know something, but not enough to enable us to act with confidence. And that is a situation we all too frequently encounter …

The language and mathematics of probability is a compelling way of analysing games of chance. And similar models have proved useful in some branches of physics. Probabilities can also be used to describe overall mortality risk just as they also form the basis of short-term weather forecasting and expectations about the likely incidence of motor accidents. But these uses of probability are possible because they are in the domain of stationary processes. The determinants of the motion of particles in liquids, or overall (as distinct from pandemic-driven) human mortality, do not change over time, or do so only slowly.

But most of the problems we face in politics, business (including finance) and society are not like that. We do not have, and never will have, the kind of understanding of human behaviour which emulates the understanding of physical behaviour which yields equations of planetary motion. Worse, human behaviour changes over time in a way that the equations of planetary motion do not …

Discourse about uncertainty has fallen victim to a pseudo-science. When no meaningful quantification is possible, algebra can provide only spurious precision, while at the same time the language becomes casual and sloppy. The terms risk, uncertainty and volatility are treated as equivalent; the words likelihood, confidence and probability are also used as if they had the same meaning. But risk is not the same as uncertainty, although it arises from it, and the confidence with which a statement is made is at best weakly related to the probability that it is true. 

The mistake that Viniar of Goldman Sachs exemplified as the credit crunch bit was to believe that a number derived from a “small world” model—a simplification based on a historic data set—is directly applicable to the “large world,” complex and constantly evolving, in which we live. We are both strongly committed to the construction and use of models—we have spent much of our careers in academia and in the financial and business world doing exactly those things. But that has left us aware of the limitations of models as well as their uses. 

John Kay & Mervyn King

Since yours truly thinks this is a great article — as is the authors’ book Radical Uncertainty (The Bridge Street Press, 2020) — it merits a couple of comments.

To understand real world “non-routine” decisions and unforeseeable changes in behaviour, ergodic probability distributions are of no avail. In a world full of genuine uncertainty – where real historical time rules the roost – the probabilities that ruled the past are not those that will rule the future.

Time is what prevents everything from happening at once. To simply assume that economic processes are ergodic and concentrate on ensemble averages – and a fortiori in any relevant sense timeless – is not a sensible way for dealing with the kind of genuine uncertainty that permeates open systems such as economies.

When you assume economic processes to be ergodic, ensemble and time averages are identical. Let me give an example: Assume we have a market with an asset priced at 100 €. Then imagine the price first goes up by 50% and then later falls by 50%. The ensemble average for this asset would be 100 € – because we here envision two parallel universes (markets), one where the asset price falls by 50% to 50 €, and another where it goes up by 50% to 150 €, giving an average of 100 € ((150+50)/2). The time average for this asset would be 75 € – because we here envision one universe (market) where the asset price first rises by 50% to 150 €, and then falls by 50% to 75 € (0.5 × 150).
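The arithmetic of the example can be checked in a few lines (a minimal sketch; the starting price and the percentages are just those from the example above):

```python
start = 100.0

# Ensemble average: two parallel markets, one price move each.
up = start * 1.5      # one market rises by 50% to 150.0
down = start * 0.5    # the other falls by 50% to 50.0
ensemble_average = (up + down) / 2
print(ensemble_average)  # 100.0 -- on average, nothing happens

# Time average: one market, two successive price moves.
price = start * 1.5   # first rises by 50% to 150.0
price = price * 0.5   # then falls by 50% to 75.0
print(price)          # 75.0 -- over time, a quarter of the value is gone
```

The two answers differ precisely because the process is multiplicative and hence nonergodic: the order of gains and losses matters for the single history, but washes out across the ensemble.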

From the ensemble perspective nothing really, on average, happens. From the time perspective lots of things really, on average, happen.

Assuming ergodicity there would have been no difference at all. What is important with the fact that real social and economic processes are nonergodic is the fact that uncertainty – not risk – rules the roost. That was something both Keynes and Knight basically said in their 1921 books. Thinking about uncertainty in terms of “rational expectations” and “ensemble averages” has had seriously bad repercussions on the financial system.

Knight’s uncertainty concept has an epistemological foundation, while Keynes’ is definitely ontological. Of course, this also has repercussions on the issue of ergodicity in a strict methodological and mathematical-statistical sense. I think Keynes’ view is the more warranted of the two.

The most interesting and far-reaching difference between the epistemological and the ontological view is that if one subscribes to the former, Knightian view – as Kay and King do – one opens the door to the mistaken belief that with better information and greater computing power we should somehow always be able to calculate probabilities and describe the world as an ergodic universe. As Keynes convincingly argued, that is ontologically just not possible.

If probability distributions do not exist for certain phenomena, those distributions are not only not knowable, but the whole question regarding whether they can or cannot be known is beside the point. Keynes essentially says this when he asserts that sometimes they are simply unknowable.

John Davis

To Keynes, the source of uncertainty was in the nature of the real — nonergodic — world. It had to do, not only — or primarily — with the epistemological fact of us not knowing the things that today are unknown, but rather with the much deeper and far-reaching ontological fact that there often is no firm basis on which we can form quantifiable probabilities and expectations at all.

Sometimes we do not know because we cannot know.
