Tony Lawson and the nature of heterodox economics

9 Apr, 2021 at 18:17 | Posted in Economics | 4 Comments

Lawson believes that there is a ‘coherent core’ of heterodox economists who employ methods that are consistent with the social ontology they implicitly advance. However, Lawson also acknowledges that many of them use mathematical modelling, a method that presupposes a social ontology in severe tension with the one they advance. Therefore, I repeat, Lawson proposes that heterodox economists in fact exist in two groups: those who use methods consistent with the social ontology they are committed to, and those who do not. But all are heterodox economists.

Lawson’s hope is that by making the kind of social ontology presupposed by mathematical modelling clear, heterodox economists will increasingly review the legitimacy of the modelling approach. However, Lawson still considers those who make such a methodological mistake to be heterodox economists. For they still, he argues, are committed to the social ontology he defends and always reveal it in some way in their analyses or pronouncements …

In recent years, Lawson has been increasingly frustrated by the continued use of mathematical modelling by heterodox economists, as well as by movements towards its increased usage. An argument made by such heterodox economists is that the problem identified by Lawson lies not with mathematical modelling per se but with the sort of mathematical methods used. They argue that poor mathematical modelling has been the problem and that better, more complex, models will be able to capture the reality of human existence.

Lawson clearly regards that methodological argument to be mistaken. For, as stated above, he finds that even complex mathematical models presuppose a closed system. However, he maintains that the social reality that such researchers reveal themselves to implicitly accept is at least quite similar to that which he defends. Their concern with being realistic, for one, speaks volumes. Therefore, these researchers should, he believes, still be distinguished from the mainstream …

Lawson does not argue for excluding mathematical models. Rather, as with all other methods, they should only be applied in conditions in which their use is appropriate, though admittedly Lawson does, as an empirical matter, assess the occurrence of the latter to be relatively rare. His stance is not anti-mathematical method but anti-mismatch of method and context of application … What Lawson does argue for regarding practice is an explicit, systematic and sustained ontological awareness, which he believes can only improve the methodological choices of heterodox economists.

Yannick Slade-Caffarel

If scientific progress in economics lies in our ability to tell ‘better and better stories’ one would, of course, expect economics journals to be filled with articles supporting the stories with empirical evidence confirming the predictions. However, the journals still show a striking and embarrassing paucity of empirical studies that (try to) substantiate these predictive claims. Equally amazing is how little economists have to say about the relationship between their models and real-world target systems. It is as though explicit discussion, argumentation and justification on the subject aren’t considered to be required.

In mathematics, the deductive-axiomatic method has worked just fine. But science is not mathematics. Conflating those two domains of knowledge has been one of the most fundamental mistakes made in modern — and, as Lawson argues, both mainstream and heterodox — economics. Applying the method to real-world open systems immediately shows it to be excessively narrow and hopelessly irrelevant. Both the confirmatory and the explanatory varieties of hypothetico-deductive reasoning fail, since there is no way you can relevantly analyse confirmation or explanation as a purely logical relation between hypothesis and evidence, or between law-like rules and explananda. In science, we argue and try to substantiate our beliefs and hypotheses with reliable evidence. Propositional and predicate deductive logic, on the other hand, is not about reliability, but about the validity of the conclusions given that the premises are true.

Reasoning in economics

9 Apr, 2021 at 10:22 | Posted in Economics | 3 Comments

Reasoning is the process whereby we get from old truths to new truths, from the known to the unknown, from the accepted to the debatable … If the reasoning starts on firm ground, and if it is itself sound, then it will lead to a conclusion which we must accept, though previously, perhaps, we had not thought we should. And those are the conditions that a good argument must meet: true premises and a good inference. If either of those conditions is not met, you can’t say whether you’ve got a true conclusion or not.

Mainstream economic theory today is in the story-telling business whereby economic theorists create make-believe analogue models of the target system – usually conceived as the real economic system. This modeling activity is considered useful and essential. Since fully-fledged experiments on a societal scale as a rule are prohibitively expensive, ethically indefensible or unmanageable, economic theorists have to substitute experimenting with something else. To understand and explain relations between different entities in the real economy the predominant strategy is to build models and make things happen in these ‘analogue-economy models’ rather than engineering things happening in real economies.

Mainstream economics has long since given up on the real world and contents itself with proving things about thought-up worlds. Empirical evidence plays only a minor role in economic theory, where models largely function as a substitute for empirical evidence. The one-sided, almost religious, insistence on axiomatic-deductivist modeling as the only scientific activity worthy of pursuing in economics is a scientific cul-de-sac. To have valid evidence is not enough. What economics needs is sound evidence — evidence based on arguments that are valid in form and have premises that are true.

Avoiding logical inconsistencies is crucial in all science. But it is not enough. Just as important is avoiding factual inconsistencies. And without showing — or at least presenting a warranted argument — that the assumptions and premises of their models are in fact true, mainstream economists aren’t really reasoning, but only playing games. Formalistic deductive ‘Glasperlenspiel’ can be very impressive and seductive. But in the realm of science it ought to be considered of little or no value to simply make claims about the model and lose sight of reality.

Dune Mosse

9 Apr, 2021 at 09:55 | Posted in Economics | Comments Off on Dune Mosse


Un viaggio in fondo ai tuoi occhi “dai d’illusi smammai” /

Un viaggio in fondo ai tuoi occhi solcherò / Dune Mosse …

Dentro una lacrima / E verso il sole / Voglio gridare amore  /

Uuh, non ne posso più  / Vieni t’imploderò /

A rallentatore, e … / E nell’immenso morirò!

This song is a work of art. Music of the very highest level. Marvellous!

Why ergodicity matters

2 Apr, 2021 at 11:04 | Posted in Economics | 8 Comments


Paul Samuelson once famously claimed that the ‘ergodic hypothesis’ is essential for advancing economics from the realm of history to the realm of science. But is it really tenable to assume — as Samuelson and most other mainstream economists do — that ergodicity is essential to economics?

In economics, ergodicity is often mistaken for stationarity. But the two properties are not equivalent: in particular, stationarity does not imply ergodicity. So, if nothing else, ergodicity is an important concept for understanding one of the deep fundamental flaws of mainstream economics.

Let’s say we have a stationary process. That does not — as Adamou shows in the video — guarantee that it is also ergodic. The long-run time average of a single output function of the stationary process may not converge to the expectation of the corresponding variables — and so the long-run time average may not equal the probabilistic (expectational) average.

Say we have two coins, where coin A has a probability of 1/2 of coming up heads and coin B has a probability of 1/4 of coming up heads. We pick one of these coins with a probability of 1/2 and then toss the chosen coin over and over again. Now let H1, H2, … be one or zero as the coin comes up heads or tails. This process is obviously stationary, but the time average — (H1 + … + Hn)/n — converges to 1/2 if coin A is chosen and to 1/4 if coin B is chosen. Each of these time averages occurs with a probability of 1/2, so their expectational average is 1/2 × 1/2 + 1/2 × 1/4 = 3/8, which obviously is not equal to 1/2 or 1/4. The time average depends on which coin you happen to choose, while the probabilistic (expectational) average is calculated for the whole “system” consisting of both coin A and coin B.
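The two-coin example is easy to simulate. A minimal sketch (my own illustration, not from the post): a single trajectory of the process follows the chosen coin forever, so its time average settles at 1/2 or 1/4 and never at the ensemble average 3/8:

```python
import random

def time_average_heads(p_heads, n_tosses, rng):
    """Long-run fraction of heads along ONE trajectory of the process."""
    return sum(rng.random() < p_heads for _ in range(n_tosses)) / n_tosses

rng = random.Random(42)

# One realisation: pick a coin once (A with P(heads)=1/2, B with P(heads)=1/4),
# then toss that same coin over and over again.
p_chosen = 0.5 if rng.random() < 0.5 else 0.25
time_avg = time_average_heads(p_chosen, 100_000, rng)

# The ensemble (expectational) average over the whole two-coin system:
ensemble_avg = 0.5 * 0.5 + 0.5 * 0.25  # = 3/8

print(f"time average     = {time_avg:.3f}")   # settles near 1/2 or 1/4
print(f"ensemble average = {ensemble_avg}")   # 0.375; no trajectory ever shows this
```

The process is stationary, yet the time average of every single trajectory differs from the ensemble average of 3/8, which is exactly the signature of non-ergodicity.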

Instead of arbitrarily assuming that people have a certain type of utility function — as in mainstream theory — time-average considerations show that we can obtain a less arbitrary and more accurate picture of real people’s decisions and actions by basically assuming that time is irreversible. When our assets are gone, they are gone. The fact that in a parallel universe they could conceivably have been replenished is of little comfort to those who live in the one and only possible world that we call the real world.

Time average considerations show that because we cannot go back in time, we should not take excessive risks. High leverage increases the risk of bankruptcy. This should also be a warning for the financial world, where the constant quest for greater and greater leverage — and risks — creates extensive and recurrent systemic crises.
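A standard way to make this concrete (an illustration in the spirit of ergodicity economics, with numbers chosen by me) is a repeated multiplicative gamble: each round, wealth grows 50% on heads and shrinks 40% on tails. The ensemble expectation grows 5% per round, but the time-average growth factor is √(1.5 × 0.6) ≈ 0.95 < 1, so almost every individual trajectory is eventually wiped out:

```python
import random

UP, DOWN = 1.5, 0.6  # heads: +50 % of wealth, tails: -40 % of wealth

# Ensemble perspective: the expected growth factor per round ...
expected_factor = 0.5 * UP + 0.5 * DOWN   # 1.05: looks like a great bet

# Time perspective: the growth factor one player actually compounds at
time_avg_factor = (UP * DOWN) ** 0.5      # sqrt(0.9), about 0.949: a losing bet

def play(rounds, rng):
    """Wealth of one player who repeatedly stakes everything on the gamble."""
    wealth = 1.0
    for _ in range(rounds):
        wealth *= UP if rng.random() < 0.5 else DOWN
    return wealth

rng = random.Random(1)
final = [play(1000, rng) for _ in range(100)]
ruined = sum(w < 1e-6 for w in final)

print(f"expected factor per round:  {expected_factor:.3f}")
print(f"time-average factor:        {time_avg_factor:.3f}")
print(f"players effectively ruined: {ruined} of 100")
```

Because wealth compounds multiplicatively and cannot be ‘refilled’ from a parallel universe, the time-average growth rate, not the ensemble expectation, is what an individual actually experiences.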

The methods economists bring to their research

31 Mar, 2021 at 18:40 | Posted in Economics | 2 Comments

There are other sleights of hand that cause economists problems. In their quest for statistical “identification” of a causal effect, economists often have to resort to techniques that answer either a narrower or a somewhat different version of the question that motivated the research.

Results from randomized social experiments carried out in particular regions of, say, India or Kenya may not apply to other regions or countries. A research design exploiting variation across space may not yield the correct answer to a question that is essentially about changes over time: what happens when a region is hit with a bad harvest. The particular exogenous shock used in the research may not be representative; for example, income shortfalls not caused by water scarcity can have different effects on conflict than rainfall-related shocks.

So, economists’ research can rarely substitute for more complete works of synthesis, which consider a multitude of causes, weigh likely effects, and address spatial and temporal variation of causal mechanisms. Work of this kind is more likely to be undertaken by historians and non-quantitatively oriented social scientists.

Dani Rodrik / Project Syndicate

Nowadays it is widely believed among mainstream economists that the scientific value of randomisation — contrary to other methods — is totally uncontroversial and that randomised experiments are free from bias. When looked at carefully, however, there are in fact few real reasons to share this optimism about the alleged ’experimental turn’ in economics. Strictly speaking, randomisation does not guarantee anything.

As Rodrik notes, ‘ideally controlled experiments’ tell us with certainty what causes what effects — but only given the right ‘closures.’ Making appropriate extrapolations from (ideal, accidental, natural or quasi) experiments to different settings, populations or target systems, is not easy. Causes deduced in an experimental setting still have to show that they come with an export-warrant to their target populations.

The almost religious belief with which its propagators — like 2019’s ‘Nobel prize’ winners Duflo, Banerjee and Kremer — promote it cannot hide the fact that randomized controlled trials (RCTs) cannot be taken for granted to give generalisable results. That something works somewhere is no warrant for believing it will work for us here, or that it works generally.

The present RCT idolatry is dangerous. Believing there is only one really good evidence-based method on the market — and that randomisation is the only way to achieve scientific validity — blinds people to searching for and using other methods that in many contexts are better. RCTs are simply not the best method for all questions and in all circumstances. Insisting on using only one tool often means using the wrong tool.

‘Nobel prize’ winners like Duflo et consortes think that economics should be based on evidence from randomised experiments and field studies. They want to give up on ‘big ideas’ like political economy and institutional reform and instead solve more manageable problems the way plumbers do. But that modern-day ‘marginalist’ approach surely can’t be the right way to move economics forward and make it a relevant and realist science. A plumber can fix minor leaks in your system, but if the whole system is rotten, something more than good old-fashioned plumbing is needed. The big social and economic problems we face today are not going to be solved by plumbers performing RCTs.

The point of making a randomized experiment is often said to be that it ‘ensures’ that any correlation between a supposed cause and effect indicates a causal relation. This is believed to hold since randomization (allegedly) ensures that a supposed causal variable does not correlate with other variables that may influence the effect.

The problem with that simplistic view on randomization is that the claims made are both exaggerated and false:

• Even if you manage to make the assignment to treatment and control groups ideally random, the sample selection certainly is — except in extremely rare cases — not random. Even if we make a proper randomized assignment, if we apply the results to a biased sample there is always the risk that the experimental findings will not hold. What works ‘there’ does not necessarily work ‘here.’ Randomization hence does not ‘guarantee’ or ‘ensure’ that we make the right causal claim. Although randomization may help us rule out certain possible causal claims, randomization per se does not guarantee anything!

• Even if both sampling and assignment are made in an ideal random way, standard randomized experiments only give you averages. The problem here is that although we may get an estimate of the ‘true’ average causal effect, this may ‘mask’ important heterogeneous effects of a causal nature. Even if we get the right answer that the average causal effect is 0, those who are ‘treated’ may have causal effects equal to -100 and those ‘not treated’ may have causal effects equal to 100. Contemplating whether to be treated or not, most people would probably be interested in knowing about this underlying heterogeneity and would not consider the average effect particularly enlightening.

• There is almost always a trade-off between bias and precision. In real-world settings, accepting a little bias is often a reasonable price to pay for greater precision. And — most importantly — when we have a population with sizeable heterogeneity, the average treatment effect of the sample may differ substantially from the average treatment effect in the population. If so, the value of any extrapolating inferences made from trial samples to other populations is highly questionable.

• Since most real-world experiments and trials build on a single randomization, contemplating what would happen if you kept on randomizing forever does not help you ‘ensure’ or ‘guarantee’ that you avoid false causal conclusions in the one particular randomized experiment you actually perform. It is indeed difficult to see why thinking about what you know you will never do should make you happy about what you actually do.
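The heterogeneity point in the second bullet can be made concrete with a toy potential-outcomes simulation (hypothetical numbers matching the ±100 example above): the randomized experiment recovers the average effect almost perfectly, yet that average describes no one in the population:

```python
import random

rng = random.Random(7)
N = 10_000

# Hypothetical potential outcomes: y0 if untreated, y1 if treated.
# Half the population is helped (+100), half is harmed (-100),
# so the true average treatment effect is exactly 0.
y0 = [0.0] * N
tau = [100.0] * (N // 2) + [-100.0] * (N // 2)
rng.shuffle(tau)
y1 = [a + t for a, t in zip(y0, tau)]

# Ideal randomized assignment: an independent fair coin for everyone.
treated = [rng.random() < 0.5 for _ in range(N)]
obs_t = [y1[i] for i in range(N) if treated[i]]
obs_c = [y0[i] for i in range(N) if not treated[i]]

# The difference in observed group means estimates the average effect ...
ate_hat = sum(obs_t) / len(obs_t) - sum(obs_c) / len(obs_c)

print(f"estimated average effect: {ate_hat:+.2f}")  # close to zero
print("individual effects: always +100 or -100, never 0")
```

The randomization works exactly as advertised, and the resulting average of roughly zero ‘masks’ the fact that treatment strongly helps half the population and strongly harms the other half.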

Randomization is not a panacea. It is not the best method for all questions and circumstances. Proponents of randomization make claims about its ability to deliver causal knowledge that are simply wrong. There are good reasons to be skeptical of the now popular — and ill-informed — view that randomization is the only valid and best method on the market. It is not. So, as Rodrik has it:

Economists’ research can rarely substitute for more complete works of synthesis, which consider a multitude of causes, weigh likely effects, and address spatial and temporal variation of causal mechanisms. Work of this kind is more likely to be undertaken by historians and non-quantitatively oriented social scientists.

Why do economists never mention power?

29 Mar, 2021 at 22:49 | Posted in Economics | 8 Comments

The intransigence of Econ 101 points to a dark side of economics — namely that the absence of power-speak is by design. Could it be that economics describes the world in a way that purposely keeps the workings of power opaque? History suggests that this idea is not so far-fetched …

The key to wielding power successfully is to make control appear legitimate. That requires ideology. Before capitalism, rulers legitimised their power by tying it to divine right. In modern secular societies, however, that’s no longer an option. So rather than brag of their God-like power, modern corporate rulers use a different tactic; they turn to economics — an ideology that simply ignores the realities of power. Safe in this ideological obscurity, corporate rulers wield power that rivals, or even surpasses, the kings of old.

Are economists cognisant of this game? Some may be. Most economists, however, are likely just clever people who are willing to delve into the intricacies of neoclassical theory without ever questioning its core tenets. Meanwhile, with every student who gets hoodwinked by Econ 101, the Rockefellers of the world happily reap the benefits.

Blair Fix

The vanity of deductivity

29 Mar, 2021 at 22:27 | Posted in Economics | Comments Off on The vanity of deductivity

Modelling by the construction of analogue economies is a widespread technique in economic theory nowadays … As Lucas urges, the important point about analogue economies is that everything is known about them … and within them the propositions we are interested in ‘can be formulated rigorously and shown to be valid’ … For these constructed economies, our views about what will happen are ‘statements of verifiable fact.’

The method of verification is deduction … We are, however, faced with a trade-off: we can have totally verifiable results but only about economies that are not real …

How then do these analogue economies relate to the real economies that we are supposed to be theorizing about? … My overall suspicion is that the way deductivity is achieved in economic models may undermine the possibility … to teach genuine truths about empirical reality.

Learning from econophysics’ mistakes

27 Mar, 2021 at 11:19 | Posted in Economics | 13 Comments

By appealing to statistical mechanics, econophysicists hypothesize that we can explain the workings of the economy from simple first principles. I think that is a mistake.

To see the mistake, I’ll return to Richard Feynman’s famous lecture on atomic theory. Towards the end of the talk, he observes that atomic theory is important because it is the basis for all other branches of science, including biology:

“The most important hypothesis in all of biology, for example, is that everything that animals do, atoms do. In other words, there is nothing that living things do that cannot be understood from the point of view that they are made of atoms acting according to the laws of physics.”

Richard Feynman, Lectures on Physics

I like this quote because it is profoundly correct. There is no fundamental difference (we believe) between animate and inanimate matter. It is all just atoms. That is an astonishing piece of knowledge.

It is also, in an important sense, astonishingly useless. Imagine that a behavioral biologist complains to you that baboon behavior is difficult to predict. You console her by saying, “Don’t worry, everything that animals do, atoms do.” You are perfectly correct … and completely unhelpful.

Your acerbic quip illustrates an important asymmetry in science. Reduction does not imply resynthesis. As a particle physicist, Richard Feynman was concerned with reduction — taking animals and reducing them to atoms. But to be useful to our behavioral biologist, this reduction must be reversed. We must take atoms and resynthesize animals.

The problem is that this resynthesis is over our heads … vastly so. We can take atoms and resynthesize large molecules. But the rest (DNA, cells, organs, animals) is out of reach. When large clumps of matter interact for billions of years, weird and unpredictable things happen. That is what physicist Philip Anderson meant when he said ‘more is different’ …

The ultimate goal of science is to understand all of this structure from the bottom up. It is a monumental task. The easy part (which is still difficult) is to reduce the complex to the simple. The harder part is to take the simple parts and resynthesize the system. Often when we resynthesize, we fail spectacularly.

Economics is a good example of this failure. To be sure, the human economy is a difficult thing to understand. So there is no shame when our models fail. Still, there is a philosophical problem that hampers economics. Economists want to reduce the economy to ‘micro-foundations’ — simple principles that describe how individuals behave. Then economists want to use these principles to resynthesize the economy. It is a fool’s errand. The system is far too complex, the interconnections too poorly understood.

I have picked on econophysics because its models have the advantage of being exceptionally clear. Whereas mainstream economists obscure their assumptions in obtuse language, econophysicists are admirably explicit: “we assume humans behave like gas particles”. I admire this boldness, because it makes the pitfalls easier to see.

By throwing away ordered connections between individuals, econophysicists make the mathematics tractable. The problem is that it is these ordered connections — the complex relations between people — that define the economy. Throw them away and what you gain in mathematical traction, you lose in relevance. That’s because you are no longer describing the economy. You are describing an inert gas.

Blair Fix

Interesting blog post. Building an analysis (mostly) on tractability assumptions is a dangerous thing to do. And that goes for both mainstream economics and econophysics. Why would anyone listen to policy proposals that are based on foundations that deliberately misrepresent actual behaviour?

Defenders of microfoundations, with their rational-expectations-equipped representative agent’s intertemporal optimisation, frequently argue as if sticking with simple representative-agent macroeconomic models doesn’t impart a bias to the analysis. They also often maintain that there are no methodologically coherent alternatives to microfoundations modelling. That allegation is, of course, difficult to evaluate, since it substantially hinges on how coherence is defined. But one thing I do know is that the kind of microfoundationalist macroeconomics that New Classical and ‘New Keynesian’ economists pursue is not methodologically coherent according to the standard definition of coherence (see e. g. here). And that ought to be rather embarrassing for those macroeconomists to whom axiomatics and deductive reasoning are the hallmark of science tout court.
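Fix’s ‘humans behave like gas particles’ is not a caricature: canonical econophysics models of wealth really are kinetic exchange models. A minimal sketch (a simplified random-exchange model in the style of Drăgulescu and Yakovenko, my construction for illustration): agents meet in random pairs and re-split their pooled money at random. Every ordered connection between people is thrown away, and the money distribution relaxes towards an unequal Boltzmann–Gibbs (exponential) shape, with the median well below the mean:

```python
import random

def kinetic_exchange(n_agents=1000, n_trades=200_000, seed=3):
    """Random pairwise 'gas particle' trading: pooled money re-split uniformly."""
    rng = random.Random(seed)
    money = [100.0] * n_agents                # everyone starts out equal
    for _ in range(n_trades):
        i, j = rng.randrange(n_agents), rng.randrange(n_agents)
        if i == j:
            continue
        pot = money[i] + money[j]             # total money is conserved,
        share = rng.random()                  # but re-split with no structure at all
        money[i], money[j] = share * pot, (1 - share) * pot
    return money

money = sorted(kinetic_exchange())
mean = sum(money) / len(money)
median = money[len(money) // 2]

print(f"mean money:   {mean:.1f}")    # stays at 100 (conservation)
print(f"median money: {median:.1f}")  # drifts well below the mean
```

The tractability is bought exactly as Fix says: by deleting the ordered connections between individuals, the model ends up describing an inert gas rather than an economy.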

How economic orthodoxy protects its dominant position

25 Mar, 2021 at 15:29 | Posted in Economics | 1 Comment

John Bryan Davis (2016) has offered a persuasive account of the way an economic orthodoxy protects its dominant position. Traditional ‘reflexive domains’ for judging research quality — the theory-evidence nexus, the history and philosophy of economics — are pushed aside. Instead, research quality is assessed through journal ranking systems. This is highly biased towards the status quo and reinforces stratification: top journals feature articles by top academics at top institutions; top academics and institutions are those who feature heavily in top journals.

Because departmental funding is so dependent on journal scores, career advancement is often made on the basis of these rankings — they are not to be taken lightly. It is not that competition is lacking, but it is confined to those who slavishly accept the paradigm, as defined by the gatekeepers — the journal editors. In this self-referential system it is faithful adherence to a preconceived notion of ‘good economics’ that pushes one ahead.

Robert Skidelsky


The only economic analysis that mainstream economists accept is the one that takes place within the analytic-formalistic modeling strategy that makes up the core of mainstream economics. All models and theories that do not live up to the precepts of the mainstream methodological canon are pruned. You’re free to take your models — not using (mathematical) models at all is considered totally unthinkable — and apply them to whatever you want — as long as you do it within the mainstream approach and its modeling strategy.

If you do not follow that particular mathematical-deductive analytical formalism you’re not even considered doing economics. ‘If it isn’t modeled, it isn’t economics.’

That isn’t pluralism.

That’s a methodological reductionist straitjacket.

The 150th anniversary of the Paris Commune

25 Mar, 2021 at 14:37 | Posted in Economics | Comments Off on The 150th anniversary of the Paris Commune


What’s wrong with economics?

24 Mar, 2021 at 18:22 | Posted in Economics | 2 Comments

This is an important and fundamentally correct critique of the core methodology of economics: individualistic; analytical; ahistorical; asocial; and apolitical. What economics understands is important. What it ignores is, alas, equally important. As Skidelsky, famous as the biographer of Keynes, notes, “to maintain that market competition is a self-sufficient ordering principle is wrong. Markets are embedded in political institutions and moral beliefs.” Economists need to be humbler about what they know and do not know.

Martin Wolf / FT

Mainstream economic theory today is still in the story-telling business whereby economic theorists create mathematical make-believe analogue models of the target system – usually conceived as the real economic system. This mathematical modelling activity is considered useful and essential. To understand and explain relations between different entities in the real economy the predominant strategy is to build mathematical models and make things happen in these ‘analogue-economy models’ rather than engineering things happening in real economies.

Without strong evidence, all kinds of absurd claims and nonsense may pretend to be science. As Paul Romer had it in his reckoning with ‘post-real’ economics a couple of years ago:

Math cannot establish the truth value of a fact. Never has. Never will.

We have to demand more of a justification than rather watered-down versions of ‘anything goes’ when it comes to the main postulates on which mainstream economics is founded. If one proposes ‘efficient markets’ or ‘rational expectations’ one also has to support their underlying assumptions. As a rule, none is given, which makes it rather puzzling how things like ‘efficient markets’ and ‘rational expectations’ have become standard modelling assumptions made in much of modern macroeconomics. The reason for this sad state of ‘modern’ economics is that economists often mistake mathematical beauty for truth. It would be far better if they instead made sure they keep their hands clean!

Model and reality in economic theory

24 Mar, 2021 at 15:41 | Posted in Economics | Comments Off on Model and reality in economic theory

If orthodox economics is at fault, the error is to be found not in the super-structure, which has been erected with great care for logical consistency, but in a lack of clearness and of generality in the premisses.

John Maynard Keynes

Economics students often ask what they are supposed to do with economics. What is the point of learning a mass of mathematical-statistical models when they evidently do not help us understand or explain what happens in real economies? Why should we spend months learning to master models and theories detached from reality that we then cannot use to propose policies that prevent economic crises and catastrophes?

Yours truly usually answers that the fault does not primarily lie in economists’ competence and knowledge. Admittedly, we sometimes see — not least in the media — flagrant examples of such shortcomings, but the decisive problem is not the varying competence of individual economists. The fundamental problem is how economists proceed when they construct their analyses and models. If we really want to understand in depth where ‘modern’ economics goes wrong, we have to study the methodological foundations of economics.

Economics — and here I mainly mean the dominant neoclassical ‘mainstream’ variety taught at our universities and colleges — is more model-oriented than any other social science. There are many reasons for this: the history of the discipline, ideals imported from the natural sciences, claims to universality, the ambition to explain as much as possible with as little as possible, rigour, precision, and so on.

The approach is fundamentally analytical — the whole is broken down into its constituent parts so that the aggregate (macro) can be explained (reduced) as the result of interaction between the parts (micro).

Mainstream economists typically base their models on a set of core assumptions (CA) — which basically describe agents as ‘rational’ — and a set of auxiliary assumptions (AA). Together, (CA) and (AA) make up what we might call the ‘base model’ (M) of all mainstream models. On the basis of these two sets of assumptions, one tries to explain and predict both individual (micro) and social (macro) phenomena.

The core assumptions typically consist of:
CA1 Completeness — the rational agent is always able to compare different alternatives and decide which one she prefers
CA2 Transitivity — if the agent prefers A to B, and B to C, she must prefer A to C
CA3 Non-satiation — more is always better than less
CA4 Maximization of expected utility — in situations characterized by risk, the agent always maximizes expected utility
CA5 Consistent economic equilibria — the actions of different agents are consistent, and the interaction between them results in an equilibrium
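The first two core assumptions are mechanical enough to check in code. A small sketch (with hypothetical preference data of my own) that tests whether a stated weak-preference relation satisfies completeness (CA1) and transitivity (CA2):

```python
def is_complete(alts, weakly_prefers):
    """CA1: every pair of alternatives can be compared one way or the other."""
    return all(weakly_prefers(a, b) or weakly_prefers(b, a)
               for a in alts for b in alts)

def is_transitive(alts, weakly_prefers):
    """CA2: if a is weakly preferred to b, and b to c, then a to c."""
    return all(weakly_prefers(a, c)
               for a in alts for b in alts for c in alts
               if weakly_prefers(a, b) and weakly_prefers(b, c))

# Hypothetical cyclic preferences: tea over coffee, coffee over juice, juice over tea.
cycle = {("tea", "coffee"), ("coffee", "juice"), ("juice", "tea")}
alts = ["tea", "coffee", "juice"]
prefers = lambda a, b: a == b or (a, b) in cycle

print(is_complete(alts, prefers))    # True: every pair is ranked ...
print(is_transitive(alts, prefers))  # False: ... but the ranking runs in a circle
```

An agent with these preferences is ‘irrational’ in the model’s sense: no utility function can represent a cyclic ranking, which is precisely why CA1 and CA2 must hold before utility maximization (CA4) is even well defined.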

When agents are described as rational in these models, what is meant is instrumental rationality: agents are assumed to choose the alternatives that have the best consequences given their preferences. How these given preferences have arisen is usually taken to lie outside the ‘scope’ of the rationality concept and therefore not to form part of economic theory proper.

The picture the core assumptions (‘rational choice’) give us is of a rational agent with strong cognitive capacities, who knows what she wants, carefully considers her alternatives and, given her preferences, chooses what she believes has the best consequences for her. Weighing the alternatives against each other, the agent makes a consistent, rational choice and acts accordingly.

The auxiliary assumptions (AA) specify, in space and time, what kind of interaction can take place between ‘rational’ agents. The assumptions often answer questions such as:
AA1 who the agents are, and where and when they interact
AA2 what their goals and aspirations are
AA3 what interests they have
AA4 what their expectations are
AA5 what room for action they have
AA6 what kinds of agreements they can enter into
AA7 how much and what kind of information they possess
AA8 how their actions interact with each other

So the 'base model' of all mainstream models consists of a general specification of what (axiomatically) constitutes optimizing rational agents (CA), and a more specific description (AA) of the kinds of situations in which these agents act (which means that AA functions as a restriction that determines the intended domain of application of CA and of the theorems deductively derived from them). The list of assumptions can never be complete, since there are always also unspecified 'background assumptions' and unstated omissions (such as transaction costs, closure conditions, and the like, often based on some kind of negligibility or applicability considerations). The hope is that this 'thin' set of assumptions will be sufficient to explain and predict 'thick' phenomena in the real, complex world.

Economics textbooks consistently present models with the basic structure

A1, A2, … An → Theorem,

where a set of undifferentiated assumptions is used to derive a theorem. This, however, is too vague and imprecise to be of any real value, and it also gives a rather untruthful picture of the standard modelling strategy in mainstream economics, which consistently differentiates between the set of law-like hypotheses and the auxiliary assumptions. This gives the more adequate structure

CA1, CA2, … CAn & AA1, AA2, … AAn → Theorem

or, equivalently,

CA1, CA2, … CAn
(AA1, AA2, … AAn) → Theorem,

which more clearly underlines the function of (AA) as a set of restrictions on the applicability of the deduced theorems. In the extreme case we get

CA1, CA2, … CAn → Theorem,

where the theorems are analytical entities with universal and completely unrestricted applicability. Or we get

AA1, AA2, … AAn → Theorem,

where the theorems have been turned into non-testable, tautological thought experiments with no empirical ambitions beyond telling a coherent, fictitious 'as-if' story.

Without a clear distinction between (CA) and (AA), this absolutely crucial interpretive distinction cannot be made, which opens the door to all manner of attempts to 'save' or 'immunize' models from virtually any criticism by improperly 'sliding' between interpreting the models as empirically empty deductive-axiomatic analytical 'systems' and interpreting them as models with explicit empirical aspirations. Normally, flexibility may be seen as something positive, but in a methodological context it is rather a sign of trouble. Models that are compatible with everything, or that come with unspecified domains of application, are worthless from a scientific point of view.

Economics, unlike logic and mathematics, ought to be an empirical science, and empirical testing of 'axioms' ought to be self-evidently relevant for such a discipline. For even if the mainstream economist herself (implicitly or explicitly) claims that her axioms are universally accepted as 'true' and in no need of proof, this obviously does not in itself give others any reason simply to accept them.

When the deductivist mindset of mainstream economists is put to work, the result is as a rule the construction of 'as-if' models based on some kind of idealization logic and a set of axiomatic assumptions from which consistent and precise inferences can be drawn. The beauty of this, of course, is that if the axiomatic premises are true, the conclusions follow by necessity. But although the procedure is successfully applied in mathematics and in axiomatic-deductive systems derived by mathematical logic, it is a poor guide for understanding and explaining real-world systems.

Most of the theoretical models mainstream economists work with are abstract and unrealistic constructions used to formulate non-testable hypotheses. How these could tell us anything relevant and interesting about the world we live in is hard to see.

Confronted with the massive empirical failures these models and theories have produced, many mainstream economists retreat and choose to present their models and theories as nothing more than thought experiments with no real aspiration to tell us anything about the real world. Instead of building a bridge between model and reality, they simply give up. This kind of scientific defeatism is utterly unacceptable. It can never be enough to prove or deduce things in a model world. If theories do not, directly or indirectly, tell us something about the world we live in, why should we waste time on them?

Mainstream economics — a severe case of Bourbaki perversion

22 Mar, 2021 at 17:24 | Posted in Economics | Comments Off on Mainstream economics — a severe case of Bourbaki perversion

There is a certain French tendency to blame the use of mathematics for the difficulties models have in explaining economic phenomena … To my mind, the problem lies not in the use of formal methods but rather in an obsession with improving and even perfecting models that appear to be completely detached from reality. As Robert Solow observed:

"Maybe there is in human nature a deep-seated perverse pleasure in adopting and defending a wholly counterintuitive doctrine that leaves the uninitiated peasant wondering what planet he or she is on." (Solow [2007])

I will suggest, however, that this tendency is merely the reflection of a long history that has led us into a dead end. As our models have become ever more rarefied, we have adopted the attitude of the mathematicians, well described by Bourbaki:

"Why do applications [of mathematics] ever succeed? Why is a certain amount of logical reasoning occasionally helpful in practical life? Why have some of the most intricate theories in mathematics become an indispensable tool to the modern physicist, to the engineer, and to the manufacturer of atom-bombs? Fortunately for us, the mathematician does not feel called upon to answer such questions." (Bourbaki, Journal of Symbolic Logic [1949])

One may reasonably ask how we ended up in this situation …

Here we can see the root of our problems. We try at all costs to impose an equilibrium model on a process that is fundamentally dynamic and evolutionary; we want to close the model of a process that is, almost by definition, not closed. As econometricians such as Hendry and Mizon [2010] explain, individuals who condition their expectations on information about a past process do not behave rationally when that process evolves over time. The weight of our inheritance, a model corresponding to a situation of static or stationary equilibrium disturbed from time to time by exogenous shocks, is heavy.

Alan Kirman
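Hendry and Mizon's point in the quote above, that conditioning expectations on information about a past process fails when the process itself evolves, can be illustrated with a toy simulation. This is only a sketch under assumed numbers, not their actual setup: an agent who forecasts with the historical mean keeps under-predicting once the underlying process shifts.

```python
import random

random.seed(1)

# A toy non-stationary process: mean 0 for the first 100 periods,
# then a structural break shifts the mean to 5. All numbers are
# assumed purely for illustration.
data = [random.gauss(0, 1) for _ in range(100)] + \
       [random.gauss(5, 1) for _ in range(100)]

# An agent who conditions expectations on the full observed past
# (the historical mean) makes one-step-ahead forecasts:
forecast_errors = []
for t in range(1, len(data)):
    forecast = sum(data[:t]) / t          # expectation formed from past data
    forecast_errors.append(data[t] - forecast)

pre_break = sum(forecast_errors[:99]) / 99
post_break = sum(forecast_errors[99:]) / len(forecast_errors[99:])
print(round(pre_break, 2), round(post_break, 2))
# Errors are roughly centred on zero before the break, but remain
# persistently positive after it: the 'rational' forecast rule fails
# because the process it conditions on has evolved.
```

The closed-system assumption is exactly what breaks here: within either regime the forecast rule works, but the economy-like shift between regimes makes past information systematically misleading.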

Mainstream theoretical economics is still under the spell of the Bourbaki tradition in mathematics. Theoretical rigour is everything; studying real-world economies and empirical corroboration/falsification of theories and models, nothing. Separating questions of logic from questions of empirical validity may, of course, help economists focus on producing rigorous and elegant mathematical theorems of the kind that people like Lucas and Sargent consider "progress in economic thinking." To most other people, a disregard for empirical evidence and model validation is a sign of social science becoming utterly useless and irrelevant. Economic theories built on assumptions known to be ridiculously artificial, with no explicit relationship to the real world, are a dead end. That is probably also why Neo-Walrasian general equilibrium analysis is today (at least outside Chicago) considered a total waste of time. In the trade-off between relevance and rigour, priority in social science should always go to the former. The only thing followers of the Bourbaki tradition within economics, such as Karl Menger, John von Neumann, Gerard Debreu, Robert Lucas and Thomas Sargent, have given us are irrelevant model abstractions with no bridges to real-world economies. It is difficult to find a more poignant example of a total waste of time in science.

The Deficit Myth

22 Mar, 2021 at 12:15 | Posted in Economics | 6 Comments

Soon after joining the Budget Committee, Kelton the deficit owl played a game with the staffers. She would first ask if they would wave a magic wand that had the power to eliminate the national debt. They all said yes. Then Kelton would ask, “Suppose that wand had the power to rid the world of US Treasuries. Would you wave it?” This question—even though it was equivalent to asking to wipe out the national debt—“drew puzzled looks, furrowed brows, and pensive expressions. Eventually, everyone would decide against waving the wand.”

Such is the spirit of Kelton’s book, The Deficit Myth. She takes the reader down trains of thought that turn conventional wisdom about federal budget deficits on its head. Kelton makes absurd claims that the reader will think surely can’t be true…but then she seems to justify them by appealing to accounting tautologies. And because she uses apt analogies and relevant anecdotes, Kelton is able to keep the book moving despite its dry subject matter. She promises the reader that MMT opens up grand new possibilities for the federal government to help the unemployed, the uninsured, and even the planet itself…if we would only open our minds to a paradigm shift …

Precisely because Kelton’s book is so unexpectedly impressive, I would urge longstanding critics of MMT to resist the urge to dismiss it with ridicule. Although it’s fun to lambaste “magical monetary theory” on social media and to ask, “Why don’t you move to Zimbabwe?” such moves will only serve to enhance the credibility of MMT in the eyes of those who are receptive to it.

Robert P. Murphy / Mises Institute

Can a government go bankrupt?
No. You cannot be indebted to yourself.

Can a central bank go bankrupt?
No. A central bank can in principle always ‘print’ more money.

Do taxpayers have to repay government debts?
No, at least not as long as the debt is incurred in the country's own currency.

Do increased public debts burden future generations?
No, not necessarily. It depends on what the debt is used for.

Does maintaining full employment mean the government has to increase its debt?

As the national debt increases, and with it the sum of private wealth, there will be an increasing yield from taxes on higher incomes and inheritances, even if the tax rates are unchanged. These higher tax payments do not represent reductions of spending by the taxpayers. Therefore the government does not have to use these proceeds to maintain the requisite rate of spending, and can devote them to paying the interest on the national debt …

The greater the national debt the greater is the quantity of private wealth. The reason for this is simply that for every dollar of debt owed by the government there is a private creditor who owns the government obligations (possibly through a corporation in which he has shares), and who regards these obligations as part of his private fortune. The greater the private fortunes the less is the incentive to add to them by saving out of current income …

If for any reason the government does not wish to see private property grow too much … it can check this by taxing the rich instead of borrowing from them, in its program of financing government spending to maintain full employment.

Abba Lerner
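Lerner's accounting point, that every dollar of government debt shows up one-for-one as a dollar of private wealth, can be sketched with a few invented numbers (all figures and names below are hypothetical):

```python
# A hypothetical two-sector balance sheet illustrating Lerner's point:
# government bonds are a liability of the state and, at the same time,
# an asset of the private sector. All figures are invented.

government = {"bonds_issued": 0}
private_sector = {"bonds_held": 0, "other_wealth": 500}

def deficit_spend(amount):
    """Government spends by issuing bonds that the private sector buys."""
    government["bonds_issued"] += amount
    private_sector["bonds_held"] += amount

deficit_spend(100)
deficit_spend(50)

national_debt = government["bonds_issued"]
private_wealth = private_sector["bonds_held"] + private_sector["other_wealth"]

print(national_debt, private_wealth)  # prints: 150 650
# The debt (150) appears one-for-one as additional private wealth.
```

This is nothing more than double-entry bookkeeping, which is precisely why the "debt burden" framing that Lerner attacks cannot be settled by the accounting alone: it depends, as he says, on what the debt is used for.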

How the far right prepared the insurrection at the Capitol

21 Mar, 2021 at 19:01 | Posted in Economics | Comments Off on How the far right prepared the insurrection at the Capitol

