The methods economists bring to their research

31 Mar, 2021 at 18:40 | Posted in Economics | 2 Comments

There are other sleights of hand that cause economists problems. In their quest for statistical “identification” of a causal effect, economists often have to resort to techniques that answer either a narrower or a somewhat different version of the question that motivated the research.

Results from randomized social experiments carried out in particular regions of, say, India or Kenya may not apply to other regions or countries. A research design exploiting variation across space may not yield the correct answer to a question that is essentially about changes over time: what happens when a region is hit with a bad harvest. The particular exogenous shock used in the research may not be representative; for example, income shortfalls not caused by water scarcity can have different effects on conflict than rainfall-related shocks.

So, economists’ research can rarely substitute for more complete works of synthesis, which consider a multitude of causes, weigh likely effects, and address spatial and temporal variation of causal mechanisms. Work of this kind is more likely to be undertaken by historians and non-quantitatively oriented social scientists.

Dani Rodrik / Project Syndicate

Nowadays it is widely believed among mainstream economists that the scientific value of randomisation — contrary to other methods — is totally uncontroversial and that randomised experiments are free from bias. When looked at carefully, however, there are in fact few real reasons to share this optimism about the alleged ‘experimental turn’ in economics. Strictly seen, randomisation does not guarantee anything.

As Rodrik notes, ‘ideally controlled experiments’ tell us with certainty what causes what effects — but only given the right ‘closures.’ Making appropriate extrapolations from (ideal, accidental, natural or quasi) experiments to different settings, populations or target systems, is not easy. Causes deduced in an experimental setting still have to show that they come with an export-warrant to their target populations.

The almost religious fervour with which its propagators — like 2019’s ‘Nobel prize’ winners Duflo, Banerjee and Kremer — promote it cannot hide the fact that randomised controlled trials (RCTs) cannot be taken for granted to give generalisable results. That something works somewhere is no warrant for believing that it will work for us here, or that it works generally.

The present RCT idolatry is dangerous. Believing there is only one really good evidence-based method on the market — and that randomisation is the only way to achieve scientific validity — blinds people to searching for and using other methods that in many contexts are better. RCTs are simply not the best method for all questions and in all circumstances. Insisting on using only one tool often means using the wrong tool.

‘Nobel prize’ winners like Duflo et consortes think that economics should be based on evidence from randomised experiments and field studies. They want to give up on ‘big ideas’ like political economy and institutional reform and instead go for solving more manageable problems the way plumbers do. But that modern-day ‘marginalist’ approach surely can’t be the right way to move economics forward and make it a relevant and realist science. A plumber can fix minor leaks in your system, but if the whole system is rotten, something more than good old-fashioned plumbing is needed. The big social and economic problems we face today are not going to be solved by plumbers performing RCTs.

The point of making a randomized experiment is often said to be that it ‘ensures’ that any correlation between a supposed cause and effect indicates a causal relation. This is believed to hold since randomization (allegedly) ensures that a supposed causal variable does not correlate with other variables that may influence the effect.

The problem with that simplistic view of randomization is that the claims made for it are both exaggerated and false:

• Even if you manage to make the assignment to treatment and control groups perfectly random, the sample selection certainly is — except in extremely rare cases — not random. Even with a proper randomized assignment, if we apply the results to a biased sample, there is always the risk that the experimental findings will not apply. What works ‘there’ does not work ‘here.’ Randomization hence does not ‘guarantee’ or ‘ensure’ that we make the right causal claim. Although randomization may help us rule out certain possible causal claims, randomization per se does not guarantee anything!

• Even if both sampling and assignment are made in an ideal random way, performing standard randomized experiments only gives you averages. The problem here is that although we may get an estimate of the ‘true’ average causal effect, this may ‘mask’ important heterogeneous effects of a causal nature. Even if we get the right answer that the average causal effect is 0, those who are ‘treated’ may have causal effects equal to -100 and those ‘not treated’ may have causal effects equal to 100 (see the sketch after this list). Contemplating being treated or not, most people would probably be interested in knowing about this underlying heterogeneity and would not consider the average effect particularly enlightening.

• There is almost always a trade-off between bias and precision. In real-world settings, a little bias is often a price worth paying for greater precision. And — most importantly — in case we have a population with sizeable heterogeneity, the average treatment effect of the sample may differ substantially from the average treatment effect in the population. If so, the value of any extrapolating inferences made from trial samples to other populations is highly questionable.

• Since most real-world experiments and trials are built on a single randomization, knowing what would happen if you kept on randomizing forever does not help you ‘ensure’ or ‘guarantee’ that you avoid false causal conclusions in the one particular randomized experiment you actually do perform. It is indeed difficult to see why thinking about what you know you will never do would make you happy about what you actually do.
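
The heterogeneity and sample-selection points above are easy to demonstrate numerically. Below is a minimal simulation sketch (all numbers hypothetical, using only numpy): a population in which every individual causal effect is -100 or +100, so the population average effect is exactly 0, together with a self-selected trial sample whose estimate says next to nothing about that population average.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical population: half react to treatment with -100, half with +100.
# The population average treatment effect (ATE) is therefore exactly 0.
n = 100_000
effect = np.where(rng.random(n) < 0.5, -100.0, 100.0)
baseline = rng.normal(50, 10, n)

# An ideally randomized trial on a random sample recovers the average ~0 ...
sample = rng.choice(n, 2_000, replace=False)
treated = rng.random(sample.size) < 0.5            # random assignment
outcome = baseline[sample] + treated * effect[sample]
ate_hat = outcome[treated].mean() - outcome[~treated].mean()
print(f"ATE estimate, random sample:    {ate_hat:7.1f}")   # close to 0

# ... even though not a single individual has an effect anywhere near 0.
print(f"Share of individuals with effect near 0: {(np.abs(effect) < 1).mean():.0%}")

# If the trial sample is biased (say, only the +100 reactors volunteer),
# the trial ATE says little about the population ATE.
volunteers = np.flatnonzero(effect > 0)[:2_000]
treated_v = rng.random(volunteers.size) < 0.5
outcome_v = baseline[volunteers] + treated_v * effect[volunteers]
ate_v = outcome_v[treated_v].mean() - outcome_v[~treated_v].mean()
print(f"ATE estimate, volunteer sample: {ate_v:7.1f}")     # close to +100
```

Note that the randomized assignment inside each sample is flawless; it is the step from trial sample to target population that randomization cannot underwrite.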

Randomization is not a panacea. It is not the best method for all questions and circumstances. Proponents of randomization make claims about its ability to deliver causal knowledge that are simply wrong. There are good reasons to be skeptical of the now popular — and ill-informed — view that randomization is the only valid and best method on the market. It is not. So, as Rodrik has it:

Economists’ research can rarely substitute for more complete works of synthesis, which consider a multitude of causes, weigh likely effects, and address spatial and temporal variation of causal mechanisms. Work of this kind is more likely to be undertaken by historians and non-quantitatively oriented social scientists.

A grief observed (personal)

30 Mar, 2021 at 12:59 | Posted in Varia | Comments Off on A grief observed (personal)


We were promised sufferings. They were part of the program. We were even told, ‘Blessed are they that mourn,’ and I accept it. I’ve got nothing that I hadn’t bargained for. Of course it is different when the thing happens to oneself, not to others, and in reality, not imagination …

But those two circles, above all the point at which they touched, are the very thing I am mourning for, homesick for, famished for. You tell me ‘she goes on.’ But my heart and body are crying out, come back, come back. Be a circle, touching my circle on the plane of Nature. But I know this is impossible. I know that the thing I want is exactly the thing I can never get.

People say time heals all wounds.

I wish that was true.

But some wounds never heal. Even after twenty-eight years you just have to learn to live with the scars.

In memory of Kristina Syll — beloved wife and mother of David and Tora.

Why do economists never mention power?

29 Mar, 2021 at 22:49 | Posted in Economics | 8 Comments

The intransigence of Econ 101 points to a dark side of economics — namely that the absence of power-speak is by design. Could it be that economics describes the world in a way that purposely keeps the workings of power opaque? History suggests that this idea is not so far-fetched …

The key to wielding power successfully is to make control appear legitimate. That requires ideology. Before capitalism, rulers legitimised their power by tying it to divine right. In modern secular societies, however, that’s no longer an option. So rather than brag of their God-like power, modern corporate rulers use a different tactic; they turn to economics — an ideology that simply ignores the realities of power. Safe in this ideological obscurity, corporate rulers wield power that rivals, or even surpasses, the kings of old.

Are economists cognisant of this game? Some may be. Most economists, however, are likely just clever people who are willing to delve into the intricacies of neoclassical theory without ever questioning its core tenets. Meanwhile, with every student who gets hoodwinked by Econ 101, the Rockefellers of the world happily reap the benefits.

Blair Fix

The vanity of deductivity

29 Mar, 2021 at 22:27 | Posted in Economics | Comments Off on The vanity of deductivity

Modelling by the construction of analogue economies is a widespread technique in economic theory nowadays … As Lucas urges, the important point about analogue economies is that everything is known about them … and within them the propositions we are interested in ‘can be formulated rigorously and shown to be valid’ … For these constructed economies, our views about what will happen are ‘statements of verifiable fact.’

The method of verification is deduction … We are however, faced with a trade-off: we can have totally verifiable results but only about economies that are not real …

How then do these analogue economies relate to the real economies that we are supposed to be theorizing about? … My overall suspicion is that the way deductivity is achieved in economic models may undermine the possibility … to teach genuine truths about empirical reality.

My sinful soul

28 Mar, 2021 at 12:33 | Posted in Varia | Comments Off on My sinful soul


Learning from econophysics’ mistakes

27 Mar, 2021 at 11:19 | Posted in Economics | 13 Comments

By appealing to statistical mechanics, econophysicists hypothesize that we can explain the workings of the economy from simple first principles. I think that is a mistake.

To see the mistake, I’ll return to Richard Feynman’s famous lecture on atomic theory. Towards the end of the talk, he observes that atomic theory is important because it is the basis for all other branches of science, including biology:

“The most important hypothesis in all of biology, for example, is that everything that animals do, atoms do. In other words, there is nothing that living things do that cannot be understood from the point of view that they are made of atoms acting according to the laws of physics.”

Richard Feynman, Lectures on Physics

I like this quote because it is profoundly correct. There is no fundamental difference (we believe) between animate and inanimate matter. It is all just atoms. That is an astonishing piece of knowledge.

It is also, in an important sense, astonishingly useless. Imagine that a behavioral biologist complains to you that baboon behavior is difficult to predict. You console her by saying, “Don’t worry, everything that animals do, atoms do.” You are perfectly correct … and completely unhelpful.

Your acerbic quip illustrates an important asymmetry in science. Reduction does not imply resynthesis. As a particle physicist, Richard Feynman was concerned with reduction — taking animals and reducing them to atoms. But to be useful to our behavioral biologist, this reduction must be reversed. We must take atoms and resynthesize animals.

The problem is that this resynthesis is over our heads … vastly so. We can take atoms and resynthesize large molecules. But the rest (DNA, cells, organs, animals) is out of reach. When large clumps of matter interact for billions of years, weird and unpredictable things happen. That is what physicist Philip Anderson meant when he said ‘more is different’ …

The ultimate goal of science is to understand all of this structure from the bottom up. It is a monumental task. The easy part (which is still difficult) is to reduce the complex to the simple. The harder part is to take the simple parts and resynthesize the system. Often when we resynthesize, we fail spectacularly.

Economics is a good example of this failure. To be sure, the human economy is a difficult thing to understand. So there is no shame when our models fail. Still, there is a philosophical problem that hampers economics. Economists want to reduce the economy to ‘micro-foundations’ — simple principles that describe how individuals behave. Then economists want to use these principles to resynthesize the economy. It is a fool’s errand. The system is far too complex, the interconnections too poorly understood.

I have picked on econophysics because its models have the advantage of being exceptionally clear. Whereas mainstream economists obscure their assumptions in obtuse language, econophysicists are admirably explicit: “we assume humans behave like gas particles”. I admire this boldness, because it makes the pitfalls easier to see.

By throwing away ordered connections between individuals, econophysicists make the mathematics tractable. The problem is that it is these ordered connections — the complex relations between people — that define the economy. Throw them away and what you gain in mathematical traction, you lose in relevance. That’s because you are no longer describing the economy. You are describing an inert gas.

Blair Fix

Interesting blog post. Building an analysis (mostly) on tractability assumptions is a dangerous thing to do. And that goes for both mainstream economics and econophysics. Why would anyone listen to policy proposals that are based on foundations that deliberately misrepresent actual behaviour?
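
To see what such tractability assumptions look like in practice, here is a minimal sketch (my own construction, not code from Fix) of the kind of ‘kinetic exchange’ toy economy econophysicists study: agents meet in random pairs and split their pooled money at random, much as colliding molecules exchange energy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stylized 'kinetic exchange' economy: agents meet in random pairs and
# split their pooled money at random. Total money is conserved, and all
# ordered connections between individuals are thrown away -- this is the
# 'humans as gas particles' assumption made explicit.
n_agents, n_rounds = 10_000, 400
money = np.full(n_agents, 100.0)

for _ in range(n_rounds):
    pairs = rng.permutation(n_agents).reshape(-1, 2)
    pool = money[pairs].sum(axis=1)
    share = rng.random(len(pairs))        # random split of each pooled sum
    money[pairs[:, 0]] = share * pool
    money[pairs[:, 1]] = (1.0 - share) * pool

# The stationary distribution is approximately exponential -- the analogue
# of the Boltzmann-Gibbs distribution of molecular energies in an ideal gas
# (for an exponential distribution, std equals the mean).
print(f"mean money: {money.mean():.1f}, std: {money.std():.1f}")
```

All ordered connections between individuals are discarded by construction, which is precisely the complaint made above.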

Defenders of microfoundations — and of the intertemporal optimisation of its rational-expectations-equipped representative agent — frequently argue as if sticking with simple representative-agent macroeconomic models doesn’t impart a bias to the analysis. They also often maintain that there are no methodologically coherent alternatives to microfoundations modelling. That allegation is, of course, difficult to evaluate, substantially hinging on how coherence is defined. But one thing I do know is that the kind of microfoundationalist macroeconomics that New Classical and ‘New Keynesian’ economists are pursuing is not methodologically coherent according to the standard coherence definition (see e.g. here). And that ought to be rather embarrassing for those macroeconomists to whom axiomatics and deductive reasoning are the hallmark of science tout court.

John von Neumann on mathematics

27 Mar, 2021 at 10:22 | Posted in Theory of Science & Methodology | 2 Comments


Wissenschaftler irren (Scientists err)

27 Mar, 2021 at 10:16 | Posted in Theory of Science & Methodology | Comments Off on Wissenschaftler irren


Berkson’s paradox (student stuff)

25 Mar, 2021 at 17:09 | Posted in Statistics & Econometrics | Comments Off on Berkson’s paradox (student stuff)

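Since the embedded clip does not reproduce here, a minimal simulation sketch (hypothetical numbers, numpy only) captures the paradox: two traits that are independent in the population become negatively correlated once we condition on a sample selected for having at least one of them.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two independent binary traits (say, two unrelated diseases), hypothetical rates.
n = 1_000_000
a = rng.random(n) < 0.10
b = rng.random(n) < 0.10

# Selection into the sample (say, hospital admission) requires at least one
# of the two conditions. Conditioning on this 'collider' induces a spurious
# negative association between otherwise independent traits.
selected = a | b

def corr(x, y):
    return np.corrcoef(x, y)[0, 1]

print(f"correlation in the population:  {corr(a, b):+.3f}")                      # ~ 0.000
print(f"correlation among the selected: {corr(a[selected], b[selected]):+.3f}")  # clearly negative
```

This is why, for example, two unrelated diseases can look negatively associated in hospital data: admission itself is the collider.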

How economic orthodoxy protects its dominant position

25 Mar, 2021 at 15:29 | Posted in Economics | 1 Comment

John Bryan Davis (2016) has offered a persuasive account of the way an economic orthodoxy protects its dominant position. Traditional ‘reflexive domains’ for judging research quality — the theory-evidence nexus, the history and philosophy of economics — are pushed aside. Instead, research quality is assessed through journal ranking systems. This is highly biased towards the status quo and reinforces stratification: top journals feature articles by top academics at top institutions; top academics and institutions are those who feature heavily in top journals.

Because departmental funding is so dependent on journal scores, career advancement is often made on the basis of these rankings — they are not to be taken lightly. It is not that competition is lacking, but it is confined to those who slavishly accept the paradigm, as defined by the gatekeepers — the journal editors. In this self-referential system it is faithful adherence to a preconceived notion of ‘good economics’ that pushes one ahead.

Robert Skidelsky


The only economic analysis that mainstream economists accept is the one that takes place within the analytic-formalistic modeling strategy that makes up the core of mainstream economics. All models and theories that do not live up to the precepts of the mainstream methodological canon are pruned. You’re free to take your models — not using (mathematical) models at all is considered totally unthinkable — and apply them to whatever you want — as long as you do it within the mainstream approach and its modeling strategy.

If you do not follow that particular mathematical-deductive analytical formalism you’re not even considered doing economics. ‘If it isn’t modeled, it isn’t economics.’

That isn’t pluralism.

That’s a methodological reductionist straitjacket.

Les 150 ans de la Commune de Paris (150 years of the Paris Commune)

25 Mar, 2021 at 14:37 | Posted in Economics | Comments Off on Les 150 ans de la Commune de Paris


What’s wrong with economics?

24 Mar, 2021 at 18:22 | Posted in Economics | 2 Comments

This is an important and fundamentally correct critique of the core methodology of economics: individualistic; analytical; ahistorical; asocial; and apolitical. What economics understands is important. What it ignores is, alas, equally important. As Skidelsky, famous as the biographer of Keynes, notes, “to maintain that market competition is a self-sufficient ordering principle is wrong. Markets are embedded in political institutions and moral beliefs.” Economists need to be humbler about what they know and do not know.

Martin Wolf / FT

Mainstream economic theory today is still in the story-telling business whereby economic theorists create mathematical make-believe analogue models of the target system – usually conceived as the real economic system. This mathematical modelling activity is considered useful and essential. To understand and explain relations between different entities in the real economy the predominant strategy is to build mathematical models and make things happen in these ‘analogue-economy models’ rather than engineering things happening in real economies.

Without strong evidence, all kinds of absurd claims and nonsense may pretend to be science. As Paul Romer had it in his reckoning with ‘post-real’ economics a couple of years ago:

Math cannot establish the truth value of a fact. Never has. Never will.

We have to demand more of a justification than rather watered-down versions of ‘anything goes’ when it comes to the main postulates on which mainstream economics is founded. If one proposes ‘efficient markets’ or ‘rational expectations’ one also has to support their underlying assumptions. As a rule, none is given, which makes it rather puzzling how things like ‘efficient markets’ and ‘rational expectations’ have become standard modelling assumptions in much of modern macroeconomics. The reason for this sad state of ‘modern’ economics is that economists often mistake mathematical beauty for truth. It would be far better if they instead made sure they kept their hands clean!

Modell och verklighet i ekonomisk teori (Model and reality in economic theory)

24 Mar, 2021 at 15:41 | Posted in Economics | Comments Off on Modell och verklighet i ekonomisk teori

If orthodox economics is at fault, the error is to be found not in the super-structure, which has been erected with great care for logical consistency, but in a lack of clearness and of generality in the premisses.

John Maynard Keynes

Economics students often ask what they are supposed to do with all the economics they learn. What is the point of mastering a mass of mathematical-statistical models when these models evidently do not help us understand or explain what happens in real economies? Why spend months learning to handle models and theories detached from reality that we then cannot use to come up with proposals and policies that prevent economic crises and catastrophes?

Yours truly usually answers that the fault does not lie primarily in economists’ competence and knowledge. Admittedly we sometimes see — not least in the media — flagrant examples of such shortcomings, but the decisive problem is not the varying competence of individual economists. The fundamental problem is how economists proceed when they construct their analyses and models. If we really want to understand where ‘modern’ economics goes wrong, we have to study the methodological foundations of economics.

Economics — and here I mainly mean the dominant neoclassical ‘mainstream’ variant taught at our universities — is more model-oriented than any other social science. There are many reasons for this — the history of the discipline, ideals imported from the natural sciences, claims to universality, the ambition to explain as much as possible with as little as possible, rigour, precision, and so on.

The approach is fundamentally analytical — the whole is broken down into its component parts so that it becomes possible to explain (reduce) the aggregate (macro) as the result of interaction between the parts (micro).

Mainstream economists generally base their models on a set of core assumptions (CA) — which basically describe agents as ‘rational’ — and a set of auxiliary assumptions (AA). Together, (CA) and (AA) make up what we might call the ‘base model’ (M) of all mainstream models. On the basis of these two sets of assumptions, they try to explain and predict both individual (micro) and social phenomena (macro).

The core assumptions typically consist of the following (the first two are illustrated in the sketch after the list):
CA1 Completeness — the rational agent is always able to compare different alternatives and decide which one she prefers
CA2 Transitivity — if the agent prefers A to B, and B to C, she must prefer A to C
CA3 Non-satiation — more is always better than less
CA4 Maximisation of expected utility — in situations characterised by risk, the agent always maximises expected utility
CA5 Consistent economic equilibria — the actions of different agents are consistent, and the interaction between them results in an equilibrium
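
To make the first two axioms concrete, here is a minimal sketch (hypothetical data, plain Python) that checks completeness and transitivity for a finite preference relation:

```python
from itertools import product

# A hypothetical finite preference relation over three alternatives, given as
# ordered pairs: (x, y) means 'x is weakly preferred to y'. Illustrative only.
alternatives = {"A", "B", "C"}
prefers = {("A", "B"), ("B", "C"), ("A", "C"),
           ("A", "A"), ("B", "B"), ("C", "C")}

def complete(rel, alts):
    """CA1 Completeness: for every pair, at least one direction holds."""
    return all((x, y) in rel or (y, x) in rel
               for x, y in product(alts, repeat=2))

def transitive(rel, alts):
    """CA2 Transitivity: x >= y and y >= z imply x >= z."""
    return all((x, z) in rel
               for x, y, z in product(alts, repeat=3)
               if (x, y) in rel and (y, z) in rel)

print(complete(prefers, alternatives))    # True
print(transitive(prefers, alternatives))  # True

# Remove ('A', 'C') and transitivity fails: the agent is no longer 'rational'
# in the axiomatic sense, however sensible her behaviour might otherwise be.
print(transitive(prefers - {("A", "C")}, alternatives))  # False
```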

When agents are described as rational in these models, what is meant is instrumental rationality: agents are assumed to choose the alternative with the best consequences given their preferences. How these given preferences have arisen is generally regarded as lying outside the ‘scope’ of the concept of rationality, and therefore not part of economic theory proper.

The picture the core assumptions (‘rational choice’) give us is of a rational agent with strong cognitive capacities, who knows what she wants, carefully weighs her alternatives and, given her preferences, chooses what she believes has the best consequences for her. Weighing the alternatives against each other, the agent makes a consistent, rational choice and acts upon it.

The auxiliary assumptions (AA) specify, in space and time, what kind of interaction can take place between ‘rational’ agents. The assumptions typically answer questions such as:
AA1 who the agents are, and where and when they interact
AA2 what their goals and aspirations are
AA3 what interests they have
AA4 what their expectations are
AA5 what kind of action space they have
AA6 what kinds of agreements they can enter into
AA7 how much, and what kind of, information they possess
AA8 how their actions interact with each other

So the ‘base model’ of all mainstream models consists of a general determination of what (axiomatically) constitutes optimising rational agents (CA), together with a more specific description (AA) of the kinds of situations in which these agents act (which means that AA functions as a restriction that determines the intended domain of application of CA and of the theorems deductively derived from them). The list of assumptions can never be complete, since there are always unspecified ‘background assumptions’ and unmentioned omissions (such as transaction costs, closures and the like, often based on some kind of negligibility or applicability considerations). The hope is that this ‘thin’ set of assumptions will be sufficient to explain and predict ‘thick’ phenomena in the real, complex world.

Economics textbooks throughout present models with the basic structure

A1, A2, … An
———————-
Theorem,

where a set of undifferentiated assumptions is used to derive a theorem. This, however, is too vague and imprecise to be of any real value, and it does not give a particularly truthful picture of the standard modelling strategy of mainstream economics either. There, a distinction is consistently drawn between the set of law-like hypotheses and the auxiliary assumptions, which yields the more adequate structure

CA1, CA2, … CAn & AA1, AA2, … AAn
———————————————–
Theorem,

or

CA1, CA2, … CAn
———————-
(AA1, AA2, … AAn) → Theorem,

which more clearly underlines the function of (AA) as a set of restrictions on the applicability of the deduced theorems. In the extreme case we get

CA1, CA2, … CAn
———————
Theorem,

where the theorems are analytical entities with universal and completely unrestricted applicability. Or we get

AA1, AA2, … AAn
———————-
Theorem,

where the theorems have been transformed into non-testable tautological thought experiments with no empirical ambitions other than telling a coherent fictitious ‘as-if’ story.

If one does not clearly distinguish between (CA) and (AA), this absolutely crucial interpretive distinction cannot be made, which opens the door to all kinds of attempts to ‘save’ or ‘immunise’ models from virtually any criticism by improperly ‘sliding’ between interpreting the models as empirically empty deductive-axiomatic analytical ‘systems’ and interpreting them as models with explicit empirical aspirations. Ordinarily, flexibility may perhaps be regarded as something positive, but in a methodological context it is rather a sign of trouble. Models that are compatible with everything, or that come with unspecified domains of application, are worthless from a scientific point of view.

Economics — unlike logic and mathematics — ought to be an empirical science, and empirical testing of ‘axioms’ ought to be self-evidently relevant for such a discipline. For even if the mainstream economist herself (implicitly or explicitly) claims that her axioms are universally accepted as ‘true’ and in no need of proof, that is obviously not in itself a reason for others simply to accept them.

When mainstream economists’ deductivist ‘thinking’ is put to use, the result is as a rule the construction of ‘as if’ models based on some kind of idealisation logic and a set of axiomatic assumptions from which consistent and precise inferences can be made. The beauty of all this is of course that if the axiomatic premises are true, the conclusions follow by necessity. But — even if the procedure is successfully used in mathematics and in axiomatic-deductive systems derived by mathematical logic — it is a poor guide for understanding and explaining real-world systems.

Most of the theoretical models that mainstream economists work with are abstract and unrealistic constructions used to generate non-testable hypotheses. How this is supposed to tell us anything relevant and interesting about the world we live in is hard to see.

Confronted with the massive empirical failures these models and theories have given rise to, many mainstream economists retreat and choose to present their models and theories as nothing more than a kind of thought experiment with no real aspirations to tell us anything about the real world. Instead of building a bridge between model and reality, they simply give up. This kind of scientific defeatism, however, is utterly unacceptable. It can never be enough to prove or deduce things in a model world. If theories do not — directly or indirectly — tell us something about the world we live in, why should we waste time on them?

Bayesianism — a wrong-headed pseudoscience

24 Mar, 2021 at 15:00 | Posted in Theory of Science & Methodology | Comments Off on Bayesianism — a wrong-headed pseudoscience

The occurrence of unknown prior probabilities, that must be stipulated arbitrarily, does not worry the Bayesian any more than God’s inscrutable designs worry the theologian. Thus Lindley (1976), one of the leaders of the Bayesian school, holds that this difficulty has been ‘grossly exaggerated’. And he adds: ‘I am often asked if the [Bayesian] method gives the right answer: or, more particularly, how do you know if you have got the right prior [probability]. My reply is that I don’t know what is meant by ‘right’ in this context. The Bayesian theory is about coherence, not about right or wrong.’ Thus the Bayesian, along with the philosopher who only cares about the cogency of arguments, fits in with the reasoning madman …

One should not confuse the objective probabilities of random events with mere intuitive likelihoods of such events or the plausibility (or verisimilitude) of the corresponding hypotheses in the light of background knowledge. As Peirce (1935: p. 363) put it, this confusion ‘is a fertile source of waste of time and energy’. A clear case of such waste is the current proliferation of rational-choice theories in the social sciences, to model processes that are far from random, from marriage to crime to business transactions to political struggles.

Mario Bunge
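
Bunge’s and Lindley’s point is easy to make concrete. In the sketch below (hypothetical numbers, using scipy), the same evidence is updated under two different and equally arbitrary priors; both posteriors are perfectly ‘coherent,’ and Bayesian theory itself offers no verdict on which prior was ‘right.’

```python
from scipy import stats

# The same (hypothetical) evidence, 7 successes in 10 trials, updated under
# two different Beta priors. Both updates are internally coherent; the theory
# gives no criterion for choosing between the priors.
successes, trials = 7, 10

for a, b, label in [(1, 1, "flat prior Beta(1,1)"),
                    (10, 40, "skeptical prior Beta(10,40)")]:
    posterior = stats.beta(a + successes, b + trials - successes)
    print(f"{label:28s} -> posterior mean {posterior.mean():.2f}")
```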

On the poverty of deductivism

23 Mar, 2021 at 09:34 | Posted in Theory of Science & Methodology | 1 Comment

In mainstream macroeconomics, there has long been an insistence on formalistic (mathematical) modelling, and to some economic methodologists (e.g. Lawson 2015, Syll 2016) this has forced economists to give up on realism and substitute axiomatics for real-world relevance. According to this critique, the deductivist orientation has been the main reason behind the difficulty that mainstream economics has had in understanding, explaining and predicting what takes place in modern economies. But it has also given mainstream economics much of its discursive power — at least as long as no one starts asking tough questions about the veracity of, and justification for, the assumptions on which the deductivist foundation is erected.

The kind of formal-analytical and axiomatic-deductive mathematical modelling that makes up the core of mainstream economics is hard to make compatible with a real-world ontology. It is also the reason why so many critics find mainstream economic analysis patently and utterly unrealistic and irrelevant.

Although there has been a clearly discernible increase and focus on ‘empirical’ economics in recent decades, the results in these research fields have not fundamentally challenged the main deductivist direction of mainstream economics. They are still mainly framed and interpreted within the core ‘axiomatic’ assumptions of individualism, instrumentalism and equilibrium that make up even the ‘new’ mainstream economics. Although, perhaps, a sign of an increasing – but highly path-dependent — theoretical pluralism, mainstream economics is still, from a methodological point of view, mainly a deductive project erected on a formalist foundation.

If macroeconomic theories and models are to confront reality there are obvious limits to what can be said ‘rigorously’ in economics. For although it is generally a good aspiration to search for scientific claims that are both rigorous and precise, the chosen level of precision and rigour must be relative to the subject matter studied. An economics that is relevant to the world in which we live can never achieve the same degree of rigour and precision as in logic, mathematics or the natural sciences.

An example of a logically valid deductive inference (whenever ‘logic’ is used here it refers to deductive/analytical logic) may look like this:

Premise 1: All Chicago economists believe in the rational expectations hypothesis (REH)
Premise 2: Bob is a Chicago economist
—————————————————————–
Conclusion: Bob believes in REH
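
In standard predicate-logic notation (a formal aside, my rendering, with C(x) for ‘is a Chicago economist,’ R(x) for ‘believes in REH,’ and b for Bob), this is just universal instantiation followed by modus ponens:

```latex
\frac{\forall x\,\bigl(C(x) \rightarrow R(x)\bigr) \qquad C(b)}{R(b)}
```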

In hypothetico-deductive reasoning — hypothetico-deductive confirmation in this case — we would use the conclusion to test the law-like hypothesis in premise 1 (according to the hypothetico-deductive model, a hypothesis is confirmed by evidence if the evidence is deducible from the hypothesis). If Bob does not believe in REH, we have gained some warranted reason for non-acceptance of the hypothesis (an obvious shortcoming here being that further information beyond that given in the explicit premises might have yielded another conclusion).

The hypothetico-deductive method (in case we treat the hypothesis as absolutely sure/true, we should rather talk of an axiomatic-deductive method) basically means that we

• Posit a hypothesis
• Infer empirically testable propositions (consequences) from it
• Test the propositions through observation or experiment
• Depending on the testing results, either find the hypothesis corroborated or falsified.

However, in science we regularly use a kind of ‘practical’ argumentation where there is little room for applying the restricted logical ‘formal transformations’ view of validity and inference. Most people would probably accept the following argument as a ‘valid’ reasoning even though from a strictly logical point of view it is non-valid:

Premise 1: Bob is a Chicago economist
Premise 2: The recorded proportion of Keynesian Chicago economists is zero
————————————————————————–
Conclusion: So, certainly, Bob is not a Keynesian economist

In science, contrary to what you find in most logic textbooks, only a few argumentations are settled by showing that ‘All Xs are Ys.’ In scientific practice we instead present other-than-analytical explicit warrants and backings — data, experience, evidence, theories, models — for our inferences. As long as we can show that our ‘deductions’ or ‘inferences’ are justifiable and have well-backed warrants, other scientists will listen to us. That our scientific ‘deductions’ or ‘inferences’ are logical non-entailments simply is not a problem. To think otherwise is to commit the fallacy of misapplying formal-analytical logic categories to areas where they are irrelevant or simply beside the point.

Scientific arguments are not analytical arguments, where validity is solely a question of formal properties. Scientific arguments are substantial arguments. Whether Bob is a Keynesian or not, is not something we can decide on formal properties of statements/propositions. We have to check out what he has actually been writing and saying to see if the hypothesis that he is a Keynesian is true or not.

In a deductive-nomological explanation — also known as a covering-law explanation — we would try to explain why Bob believes in REH with the help of the two premises (in this case actually giving an explanation with only little explanatory value). These kinds of explanations — both in their deterministic and statistical/probabilistic versions — rely heavily on deductive entailment from premises that are assumed to be true. But they have precious little to say on where these assumed-to-be-true premises come from.

The deductive logic of confirmation and explanation may work well — given that they are used in deterministic closed models. In mathematics, the deductive-axiomatic method has worked just fine. But science is not mathematics. Conflating those two domains of knowledge has been one of the most fundamental mistakes made in the science of economics. Applying the deductive-axiomatic method to real-world systems immediately proves it to be excessively narrow and irrelevant. Both the confirmatory and explanatory ilk of hypothetico-deductive reasoning fail, since there is no way you can relevantly analyse confirmation or explanation as a purely logical relation between hypothesis and evidence, or between law-like rules and explananda. In science we argue and try to substantiate our beliefs and hypotheses with reliable evidence — propositional and predicate deductive logic, on the other hand, is not about reliability, but about the validity of the conclusions given that the premises are true.

Deduction — and the inferences that go with it — is an example of ‘explicative reasoning,’ where the conclusions we make are already included in the premises. Deductive inferences are purely analytical and it is this truth-preserving nature of deduction that makes it different from all other kinds of reasoning. But it is also its limitation, since truth in the deductive context does not refer to a real world ontology (only relating propositions as true or false within a formal-logic system) and as an argument scheme, deduction is totally non-ampliative: the output of the analysis is nothing else than the input.

Just to give an economics example, consider the following rather typical, but also uninformative and tautological, deductive inference:

Premise 1: The firm seeks to maximise its profits
Premise 2: The firm maximises its profits when marginal cost equals marginal income
——————————————————
Conclusion: The firm will operate its business at the equilibrium where marginal cost equals marginal income
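
For completeness, the derivation compressed into premise 2 is just the familiar first-order condition for profit maximisation (writing R for revenue, C for cost, and q* for the chosen output):

```latex
\pi(q) = R(q) - C(q), \qquad \pi'(q^{*}) = 0 \;\Rightarrow\; R'(q^{*}) = C'(q^{*})
```

which is exactly why the conclusion adds nothing beyond what the premises already contain.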

This is as empty as deductive-nomological explanations of singular facts building on simple generalizations:

Premise 1: All humans are less than 20 feet tall
Premise 2: Bob is a human
——————————————————–
Conclusion: Bob is less than 20 feet tall

Although a logically valid inference, this is not much of an explanation (since we would still probably want to know why all humans are less than 20 feet tall).

Deductive-nomological explanations also often suffer from a kind of emptiness that emanates from a lack of real (causal) connection between premises and conclusions:

Premise 1: All humans that take birth control pills do not get pregnant
Premise 2: Bob took birth control pills
——————————————————–
Conclusion: Bob did not get pregnant

Most people would probably not consider this much of a real explanation.

Learning new things about reality demands something else than a reasoning where the knowledge is already embedded in the premises. These other kinds of reasoning — induction and abduction — may give good, but not conclusive, reasons. That is the price we have to pay if we want to have something substantial and interesting to say about the real world.


References

Lawson, Tony (2015): Essays on the nature and state of modern economics. Routledge.

Syll, Lars (2016): On the use and misuse of theories and models in economics. WEA Books.
