Positivism

31 July, 2013 at 16:59 | Posted in Theory of Science & Methodology | Comments Off on Positivism

In my article the other day on the flawed ontological foundations of econometrics — Econometrics – still lacking a valid ontological foundation — I put forward some critical views on the nominalist-positivist conception of science that dominates econometrics. A few readers have been in touch and asked whether I could elaborate a little on what this conception of science stands for and why, from my critical-realist point of view, it is untenable.

As a theorist of science, it is interesting to note that many economists and other social scientists appeal to the requirement that, for an explanation to count as scientific, the individual case must be possible to "subsume under a general law". The basic principle often invoked is a general law of the form "if A then B", so that if one can show in the individual case that both A and B are present, one has "explained" B.

This positivist-inductive conception of science is, however, fundamentally untenable. Let me explain why.

On a positivist-inductivist view of science, the knowledge science possesses is proven knowledge. Starting from entirely presuppositionless observations, an "unprejudiced scientific observer" can formulate observation statements from which scientific theories and laws can be derived. With the help of the principle of induction it becomes possible to move from these singular observation statements to universal statements, laws and theories referring to properties that hold always and everywhere. From these laws and theories science can then derive various consequences with which to explain and predict what happens. Through logical deduction, statements can be derived from other statements. The logic of research follows the scheme observation – induction – deduction.

In all but the most straightforward cases, the scientist has to carry out experiments in order to justify the inductions by means of which he establishes his scientific theories and laws. Experimenting means, as Francis Bacon so vividly put it, putting nature on the rack and forcing her to answer our questions. With the help of a set of statements that carefully describe the circumstances of the experiment (the initial conditions) and the scientific laws, the scientist can deduce statements that explain or predict the phenomenon under investigation.

The hypothetico-deductive method for scientific explanations and predictions can be described in general terms as follows:

1 Laws and theories

2 Initial conditions

——————

3 Explanations and predictions

According to one of the foremost proponents of the hypothetico-deductive method, Carl Hempel, all scientific explanations have this form, which can also be expressed by the scheme below:

All A are B                    Premise 1

a is A                     Premise 2

——————————

a is B                     Conclusion

As an example, take the following everyday occurrence:

Water heated to 100 degrees Celsius boils

This pot of water is heated to 100 degrees Celsius

———————————————————————–

This pot of water boils

The problem with the hypothetico-deductive method lies not so much in premise 2 or in the conclusion as in the hypothesis itself, premise 1. It is this premise that has to be proven correct, and this is where the inductive procedure comes in.

The most obvious weakness of the hypothetico-deductive method is the principle of induction itself. The most common justification of it runs as follows:

The principle of induction worked on occasion 1

The principle of induction worked on occasion 2

The principle of induction worked on occasion n

—————————————————–

The principle of induction always works

This, however, is dubious, since the "proof" uses induction to justify induction. One cannot use singular statements about the validity of the principle of induction to derive a universal statement about its validity.

Induction is supposed to play two roles: it is meant to make generalisation possible, and it is assumed to provide proof that the conclusions are correct. As the problem of induction shows, induction cannot perform both of these tasks. It can strengthen the probability of the conclusions (provided the principle of induction is correct, which, however, cannot be proven without ending up in circular reasoning), but it does not establish that they are necessarily true.

Another frequently noted weakness of the hypothetico-deductive method is that theories always precede observation statements and experiments, and that it is therefore wrong to claim that science starts with observations and experiments. In addition, observation statements and experiments cannot be assumed to be unproblematically reliable; testing their validity requires appeal to theory. And the fact that the theories themselves may in turn be unreliable is not primarily remedied by more observations and experiments, but by other and better theories. One can also object that induction in no way gives us knowledge of the deeper structures and mechanisms of reality, but only of empirical generalisations and regularities. In science, the explanation of events at one level is usually to be found in causes at another, deeper, level. The inductivist view of science leads to the main task of science being described as stating how something takes place, whereas other theories of science hold that the cardinal task of science must be to explain why it takes place.

As a consequence of the problems outlined above, more moderate empiricists have reasoned that since there is, as a rule, no logical procedure for discovering a law or theory, one simply starts with laws and theories from which one deduces a series of statements that function as explanations or predictions. Instead of investigating how the laws and theories of science were arrived at, one tries to explain what a scientific explanation and prediction is, what role theories and models play in them, and how they are to be evaluated.

In the positivist (hypothetico-deductive, deductive-nomological) model of explanation, explanation means the subsumption or derivation of specific phenomena from universal regularities. To explain a phenomenon (the explanandum) is to deduce a description of it from a set of premises and universal laws of the form "If A, then B" (the explanans). To explain simply means being able to subsume something under a definite law-like regularity, which is why the approach is also sometimes called the "covering law model". The theories, however, are not to be used to explain specific individual phenomena but to explain the universal regularities that figure in a hypothetico-deductive explanation. [There are problems with this view even within the natural sciences. Many of the laws of natural science do not really say anything about what things do, but about what they tend to do. This is largely because the laws describe the behaviour of the various parts rather than the whole phenomenon as such (except, possibly, in experimental situations). And many of the laws of natural science do not really apply to real entities, but only to fictional ones. This is often a consequence of the use of mathematics within the particular science, and it means that its laws can only be exemplified in models (and not in reality).] The positivist model of explanation also comes in a weaker variant: the probabilistic variant, according to which to explain is, in principle, to show that the probability of an event B is very high if event A occurs. In the social sciences this variant dominates. From a methodological point of view, this probabilistic relativisation of the positivist approach to explanation makes no great difference.

Accepting the hypothetico-deductive model of explanation usually also means accepting the so-called symmetry thesis. According to this thesis, the only difference between prediction and explanation is that in the former the explanans is assumed to be known and one tries to make a prediction, whereas in the latter the explanandum is assumed to be known and one tries to find initial conditions and laws from which the phenomenon under investigation can be derived.

One problem with the symmetry thesis, however, is that it does not take into account that causes can be confused with correlations. That the stork turns up at the same time as human babies is no explanation of how children come about.

Nor does the symmetry thesis take into account that causes can be sufficient but not necessary. If an individual with cancer is run over by a car, the cancer is not the cause of death. The cancer could have been the correct explanation of the individual's death. But even if we could construct a medical law, in accordance with the deductivist model, saying that individuals with this particular type of cancer will die of it, the law still does not explain this individual's death. The thesis is therefore simply not correct.

Finding a pattern is not the same as explaining something. Being told, in answer to the question of why the bus is late, that it usually is, does not constitute an acceptable explanation. Ontology and natural necessity must be part of a relevant answer, at least if one is looking for something more in an explanation than "constant conjunctions of events".

The original idea behind the positivist model of explanation was that it would provide a complete clarification of what an explanation is, show that an explanation not meeting its requirements was in fact a pseudo-explanation, provide a method for testing explanations, and show that explanations in accordance with the model were the goal of science. All of these claims can obviously be questioned on good grounds.

An important reason why this model has had such an impact in science is that it gave the appearance of being able to explain things without having to use "metaphysical" concepts of causality. Many scientists regard causality as a problematic concept, best avoided. Simple, observable quantities are supposed to suffice. The trouble is that specifying these quantities and their possible correlations does not explain anything at all. That union representatives often appear in grey jackets and employer representatives in pinstriped suits does not explain why youth unemployment in Sweden is so high today. What is missing in these "explanations" is the necessary adequacy, relevance and causal depth without which science risks becoming empty science fiction and model-play for the play's own sake.

Many social scientists seem convinced that for research to count as science it must apply some variant of the hypothetico-deductive method. Out of reality's complicated swirl of facts and events, one is supposed to sift out a few shared law-like correlations that can serve as explanations. In parts of social science, this ambition to reduce explanations of social phenomena to a few general principles or laws has been an important driving force. With the help of a few general assumptions, one wants to explain what the whole macro phenomenon we call a society amounts to. Unfortunately, no really sustainable arguments are given for why the fact that a theory can explain different phenomena in a unified way should be a decisive reason for accepting or preferring it. Unification and adequacy are not the same thing.

On the history of potential outcome models (wonkish)

31 July, 2013 at 10:24 | Posted in Statistics & Econometrics | 1 Comment


My understanding of the history is as follows. The potential outcome framework became popular in the econometrics literature on causality around 1990. See Heckman (1990, American Economic Review, Papers and Proceedings, “Varieties of Selection Bias,” 313-318) and Manski (1990, American Economic Review, Papers and Proceedings, “Nonparametric Bounds on Treatment Effects,” 319-323). Both those papers read very differently from the classic paper in the econometric literature on program evaluation and causality, published five years earlier (Heckman and Robb, 1985, “Alternative Methods for Evaluating the Impact of Interventions,” in Heckman and Singer (eds.), Longitudinal Analysis of Labor Market Data, Cambridge, Cambridge University Press), which did not use the potential outcome framework. When the potential outcome framework became popular, there was little credit given to Rubin’s work, but there were also no references to Neyman (1923), Roy (1951) or Quandt (1958) in the Heckman and Manski papers. It appears that at the time the notational shift was not viewed as sufficiently important to attribute to anyone.

Heckman’s later work has attempted to place the potential outcome framework in a historical perspective. Here are two quotes somewhat clarifying his views on the relation to Rubin’s work. In 1996 he wrote:

“The “Rubin Model” is a version of the widely used econometric switching regression model (Maddalla 1983; Quandt, 1958, 1972, 1988). The Rubin model shares many features in common with the Roy model (Heckman and Honore, 1990, Roy 1951) and the model of competing risks (Cox, 1962). It is a tribute to the value of the framework that it has been independently invented by different disciplines and subfields within statistics at different times.” p. 459
(Heckman (1996), Comment on “Identification of Causal Effects Using Instrumental Variables,” Journal of the American Statistical Association.)

More recently, in 2008, he wrote:

“4.3 The Econometric Model vs. the Neyman-Rubin Model
Many statisticians and social scientists use a model of counterfactuals and causality attributed to Donald Rubin by Paul Holland (1986). The framework was developed in statistics by Neyman (1923), Cox (1958) and others. Parallel frameworks were independently developed in psychometrics (Thurstone, 1927) and economics (Haavelmo, 1943; Quandt, 1958, 1972; Roy, 1951). The statistical treatment effect literature originates in the statistical literature on the design of experiments. It draws on hypothetical experiments to define causality and thereby creates the impression in the minds of many of its users that random assignment is the most convincing way to identify causal models.” p. 19
(Heckman, “Econometric Causality,” International Economic Review, 2008, 1-27.)

(I include the last sentence of the quote mainly because it is an interesting thought, although it is not really germane to the current discussion.)

In the end I agree with Andrew’s blog post that the attribution to Roy or Quandt is tenuous, and I would caution the readers of this blog not to interpret Heckman’s views on this as reflecting a consensus in the economics profession. The Haavelmo reference is interesting. Haavelmo is certainly thinking of potential outcomes in his 1943 paper, and I view Haavelmo’s paper (and a related paper by Tinbergen) as the closest to a precursor of the Rubin Causal Model in economics. However, Haavelmo’s notation did not catch on, and soon econometricians wrote their models in terms of realized, not potential, outcomes, not returning to the explicit potential outcome notation till 1990.

Relatedly, I recently met Paul Holland at a conference, and I asked him about the reasons for attaching the label “Rubin Causal Model” to the potential outcome framework in 1986 (now you often see the phrase “called the Rubin Causal Model by Paul Holland”). Paul responded that he felt that Don’s work on this went so far beyond what was done before by, among others, Neyman (1923), by putting the potential outcomes front and center in a discussion on causality, as in the 1974 paper, that his contributions merited this label. Personally I agree with that.

Guido Imbens

Spirited debate on Fox News? I’ll be dipped!

30 July, 2013 at 22:26 | Posted in Politics & Society | Comments Off on Spirited debate on Fox News? I’ll be dipped!


(h/t barnilsson)

Some Other Time

29 July, 2013 at 10:37 | Posted in Varia | Comments Off on Some Other Time

 

What Kind Of Fool Am I?

28 July, 2013 at 19:35 | Posted in Varia | Comments Off on What Kind Of Fool Am I?

 

Why average assumptions on average are wrong in finance and economics

28 July, 2013 at 12:40 | Posted in Statistics & Econometrics | 1 Comment

 
In The Flaw of Averages, Sam Savage explains in more detail why plans based on average assumptions are, on average, wrong …
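A minimal numerical sketch of the point (my own illustration with made-up numbers, not an example taken from Savage's book): when an outcome depends non-linearly on an uncertain quantity, the outcome at the average input is not the average outcome. Here profit is capped by capacity, so planning on average demand overstates average profit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical numbers: uncertain demand, sales capped by capacity.
capacity = 100
price, unit_cost = 10.0, 6.0

def profit(demand):
    sold = np.minimum(demand, capacity)          # cannot sell more than capacity
    return price * sold - unit_cost * capacity   # full capacity cost is always incurred

demand = rng.normal(loc=100, scale=30, size=100_000)   # average demand = 100

print(profit(np.mean(demand)))   # plan based on average demand: about 400
print(np.mean(profit(demand)))   # average profit over the uncertainty: about 280
```

The single-number plan looks fine; the calculation that respects the whole distribution does not, which is exactly the flaw of averages.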

Spurious statistical significance – when science gets it all wrong

28 July, 2013 at 11:20 | Posted in Statistics & Econometrics | 2 Comments

A non-trivial part of teaching statistics is made up of teaching students to perform significance testing. A problem I have noticed repeatedly over the years, however, is that no matter how carefully you try to explain what the probabilities generated by these statistical tests – p-values – really are, most students still misinterpret them. And a lot of researchers obviously also fall prey to the same mistakes:

Are women three times more likely to wear red or pink when they are most fertile? No, probably not. But here’s how hardworking researchers, prestigious scientific journals, and gullible journalists have been fooled into believing so.

The paper I’ll be talking about appeared online this month in Psychological Science, the flagship journal of the Association for Psychological Science, which represents the serious, research-focused (as opposed to therapeutic) end of the psychology profession.

“Women Are More Likely to Wear Red or Pink at Peak Fertility,” by Alec Beall and Jessica Tracy, is based on two samples: a self-selected sample of 100 women from the Internet, and 24 undergraduates at the University of British Columbia. Here’s the claim: “Building on evidence that men are sexually attracted to women wearing or surrounded by red, we tested whether women show a behavioral tendency toward wearing reddish clothing when at peak fertility. … Women at high conception risk were more than three times more likely to wear a red or pink shirt than were women at low conception risk. … Our results thus suggest that red and pink adornment in women is reliably associated with fertility and that female ovulation, long assumed to be hidden, is associated with a salient visual cue.”

Pretty exciting, huh? It’s (literally) sexy as well as being statistically significant. And the difference is by a factor of three—that seems like a big deal.

Really, though, this paper provides essentially no evidence about the researchers’ hypotheses …

The way these studies fool people is that they are reduced to sound bites: Fertile women are three times more likely to wear red! But when you look more closely, you see that there were many, many possible comparisons in the study that could have been reported, with each of these having a plausible-sounding scientific explanation had it appeared as statistically significant in the data.

The standard in research practice is to report a result as “statistically significant” if its p-value is less than 0.05; that is, if there is less than a 1-in-20 chance that the observed pattern in the data would have occurred if there were really nothing going on in the population. But of course if you are running 20 or more comparisons (perhaps implicitly, via choices involved in including or excluding data, setting thresholds, and so on), it is not a surprise at all if some of them happen to reach this threshold.

The headline result, that women were three times as likely to be wearing red or pink during peak fertility, occurred in two different samples, which looks impressive. But it’s not really impressive at all! Rather, it’s exactly the sort of thing you should expect to see if you have a small data set and virtually unlimited freedom to play around with the data, and with the additional selection effect that you submit your results to the journal only if you see some catchy pattern. …

Statistics textbooks do warn against multiple comparisons, but there is a tendency for researchers to consider any given comparison alone without considering it as one of an ensemble of potentially relevant responses to a research question. And then it is natural for sympathetic journal editors to publish a striking result without getting hung up on what might be viewed as nitpicking technicalities. Each person in this research chain is making a decision that seems scientifically reasonable, but the result is a sort of machine for producing and publicizing random patterns.

There’s a larger statistical point to be made here, which is that as long as studies are conducted as fishing expeditions, with a willingness to look hard for patterns and report any comparisons that happen to be statistically significant, we will see lots of dramatic claims based on data patterns that don’t represent anything real in the general population. Again, this fishing can be done implicitly, without the researchers even realizing that they are making a series of choices enabling them to over-interpret patterns in their data.

Andrew Gelman

Indeed. If anything, this underlines how important it is not to equate science with statistical calculation. All science entails human judgement, and using statistical models doesn’t relieve us of that necessity. When you are working with misspecified models, the scientific value of significance testing is actually zero – even though you may be making valid statistical inferences! Statistical models and concomitant significance tests are no substitute for doing real science. Or as a noted German philosopher once famously wrote:

There is no royal road to science, and only those who do not dread the fatiguing climb of its steep paths have a chance of gaining its luminous summits.

Statistical significance doesn’t say that something is important or true. Since there are already far better and more relevant tests that can be done (see e.g. here and here), it is high time to reconsider what the proper function should be of what has now really become a statistical fetish. Given that it is in any case very unlikely that any population parameter is exactly zero, and that, contrary to assumption, most samples in social science and economics are neither random nor of the right distributional shape, why continue to press students and researchers to do null hypothesis significance testing, testing that relies on a weird backward logic that students and researchers usually don’t understand?
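To see how little protection the 0.05 threshold offers once many comparisons are in play, here is a small simulation sketch (my own illustration, not taken from Gelman's article; it assumes Python with numpy and scipy): twenty comparisons on pure noise yield at least one "significant" result almost two times out of three.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_studies = 2_000       # simulated "studies"
n_comparisons = 20      # comparisons tried within each study
n_per_group = 50        # observations per group and comparison

fished_something = 0
for _ in range(n_studies):
    p_values = []
    for _ in range(n_comparisons):
        # Both groups come from the same population: there is no real effect.
        a = rng.normal(size=n_per_group)
        b = rng.normal(size=n_per_group)
        p_values.append(stats.ttest_ind(a, b).pvalue)
    if min(p_values) < 0.05:        # "report" the study if anything looks significant
        fished_something += 1

print(fished_something / n_studies)   # close to 1 - 0.95**20, i.e. about 0.64
```

Each individual test is perfectly valid; it is the freedom to choose among twenty of them that manufactures the "finding".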

Added 31/7: Beall and Tracy have a comment on the critique here.

What’s the use of economics?

28 July, 2013 at 09:23 | Posted in Economics | 1 Comment

The simple question that was raised during a recent conference … was to what extent has – or should – the teaching of economics be modified in the light of the current economic crisis? The simple answer is that the economics profession is unlikely to change. Why would economists be willing to give up much of their human capital, painstakingly nurtured for over two centuries? For macroeconomists in particular, the reaction has been to suggest that modifications of existing models to take account of ‘frictions’ or ‘imperfections’ will be enough to account for the current evolution of the world economy. The idea is that once students have understood the basics, they can be introduced to these modifications.

However, other economists such as myself feel that we have finally reached the turning point in economics where we have to radically change the way we conceive of and model the economy. The crisis is an opportune occasion to carefully investigate new approaches. Paul Seabright hit the nail on the head; economists tend to inaccurately portray their work as a steady and relentless improvement of their models whereas, actually, economists tend to chase an empirical reality that is changing just as fast as their modelling. I would go further; rather than making steady progress towards explaining economic phenomena professional economists have been locked into a narrow vision of the economy. We constantly make more and more sophisticated models within that vision until, as Bob Solow put it, “the uninitiated peasant is left wondering what planet he or she is on” (Solow 2006) …

Entomologists (those who study insects) of old with more simple models came to the conclusion that bumble bees should not be able to fly. Their reaction was to later rethink their models in light of irrefutable evidence. Yet, the economist’s instinct is to attempt to modify reality in order to fit a model that has been built on longstanding theory. Unfortunately, that very theory is itself based on shaky foundations …

Every student in economics is faced with the model of the isolated optimising individual who makes his choices within the constraints imposed by the market. Somehow, the axioms of rationality imposed on this individual are not very convincing, particularly to first time students. But the student is told that the aim of the exercise is to show that there is an equilibrium, there can be prices that will clear all markets simultaneously. And, furthermore, the student is taught that such an equilibrium has desirable welfare properties. Importantly, the student is told that since the 1970s it has been known that whilst such a system of equilibrium prices may exist, we cannot show that the economy would ever reach an equilibrium nor that such an equilibrium is unique.

The student then moves on to macroeconomics and is told that the aggregate economy or market behaves just like the average individual she has just studied. She is not told that these general models in fact poorly reflect reality. For the macroeconomist, this is a boon since he can now analyse the aggregate allocations in an economy as though they were the result of the rational choices made by one individual. The student may find this even more difficult to swallow when she is aware that peoples’ preferences, choices and forecasts are often influenced by those of the other participants in the economy. Students take a long time to accept the idea that the economy’s choices can be assimilated to those of one individual.

Alan Kirman What’s the use of economics?

Simply the best – scientific realism and inference to the best explanation

27 July, 2013 at 15:42 | Posted in Theory of Science & Methodology | 1 Comment

In a time when scientific relativism is on the rise, it is important to insist that science cannot be reduced to a purely discursive level. We have to maintain the Enlightenment tradition of thinking of reality as principally independent of our views of it, and of the main task of science as studying the structure of this reality. Perhaps the most important contribution a researcher can make is to reveal what this reality, the object of science, actually looks like.

Science is made possible by the fact that there are structures that are durable and are independent of our knowledge or beliefs about them. There exists a reality beyond our theories and concepts of it. It is this independent reality that our theories in some way deal with. Contrary to positivism, I would as a critical realist argue that the main task of science is not to detect event-regularities between observed facts. Rather, that task must be conceived as identifying the underlying structure and forces that produce the observed events.

In a truly wonderful essay – chapter three of Error and Inference (Cambridge University Press, 2010, eds. Deborah Mayo and Aris Spanos) – Alan Musgrave gives strong arguments why scientific realism and inference to the best explanation are the best alternatives for explaining what’s going on in the world we live in:

For realists, the name of the scientific game is explaining phenomena, not just saving them. Realists typically invoke ‘inference to the best explanation’ or IBE …

IBE is a pattern of argument that is ubiquitous in science and in everyday life as well. van Fraassen has a homely example:
“I hear scratching in the wall, the patter of little feet at midnight, my cheese disappears – and I infer that a mouse has come to live with me. Not merely that these apparent signs of mousely presence will continue, not merely that all the observable phenomena will be as if there is a mouse, but that there really is a mouse.” (1980: 19-20)
Here, the mouse hypothesis is supposed to be the best explanation of the phenomena, the scratching in the wall, the patter of little feet, and the disappearing cheese.
What exactly is the inference in IBE, what are the premises, and what the conclusion? van Fraassen says “I infer that a mouse has come to live with me”. This suggests that the conclusion is “A mouse has come to live with me” and that the premises are statements about the scratching in the wall, etc. Generally, the premises are the things to be explained (the explanandum) and the conclusion is the thing that does the explaining (the explanans). But this suggestion is odd. Explanations are many and various, and it will be impossible to extract any general pattern of inference taking us from explanandum to explanans. Moreover, it is clear that inferences of this kind cannot be deductively valid ones, in which the truth of the premises guarantees the truth of the conclusion. For the conclusion, the explanans, goes beyond the premises, the explanandum. In the standard deductive model of explanation, we infer the explanandum from the explanans, not the other way around – we do not deduce the explanatory hypothesis from the phenomena, rather we deduce the phenomena from the explanatory hypothesis …

The intellectual ancestor of IBE is Peirce’s abduction, and here we find a different pattern:

The surprising fact, C, is observed.
But if A were true, C would be a matter of course.
Hence, … A is true.
(C. S. Peirce, 1931-58, Vol. 5: 189)

Here the second premise is a fancy way of saying “A explains C”. Notice that the explanatory hypothesis A figures in this second premise as well as in the conclusion. The argument as a whole does not generate the explanans out of the explanandum. Rather, it seeks to justify the explanatory hypothesis …

Abduction is deductively invalid … IBE attempts to improve upon abduction by requiring that the explanation is the best explanation that we have. It goes like this:

F is a fact.
Hypothesis H explains F.
No available competing hypothesis explains F as well as H does.
Therefore, H is true
(William Lycan, 1985: 138)

This is better than abduction, but not much better. It is also deductively invalid …

There is a way to rescue abduction and IBE. We can validate them without adding missing premises that are obviously false, so that we merely trade obvious invalidity for equally obvious unsoundness. Peirce provided the clue to this. Peirce’s original abductive scheme was not quite what we have considered so far. Peirce’s original scheme went like this:

The surprising fact, C, is observed.
But if A were true, C would be a matter of course.
Hence, there is reason to suspect that A is true.
(C. S. Peirce, 1931-58, Vol. 5: 189)

This is obviously invalid, but to repair it we need the missing premise “There is reason to suspect that any explanation of a surprising fact is true”. This missing premise is, I suggest, true. After all, the epistemic modifier “There is reason to suspect that …” weakens the claims considerably. In particular, “There is reason to suspect that A is true” can be true even though A is false. If the missing premise is true, then instances of the abductive scheme may be both deductively valid and sound.

IBE can be rescued in a similar way. I even suggest a stronger epistemic modifier, not “There is reason to suspect that …” but rather “There is reason to believe (tentatively) that …” or, equivalently, “It is reasonable to believe (tentatively) that …” What results, with the missing premise spelled out, is:

It is reasonable to believe that the best available explanation of any fact is true.
F is a fact.
Hypothesis H explains F.
No available competing hypothesis explains F as well as H does.
Therefore, it is reasonable to believe that H is true.

This scheme is valid and instances of it might well be sound. Inferences of this kind are employed in the common affairs of life, in detective stories, and in the sciences.

Of course, to establish that any such inference is sound, the ‘explanationist’ owes us an account of when a hypothesis explains a fact, and of when one hypothesis explains a fact better than another hypothesis does. If one hypothesis yields only a circular explanation and another does not, the latter is better than the former. If one hypothesis has been tested and refuted and another has not, the latter is better than the former. These are controversial issues, to which I shall return. But they are not the most controversial issue – that concerns the major premise. Most philosophers think that the scheme is unsound because this major premise is false, whatever account we can give of explanation and of when one explanation is better than another. So let me assume that the explanationist can deliver on the promises just mentioned, and focus on this major objection.

People object that the best available explanation might be false. Quite so – and so what? It goes without saying that any explanation might be false, in the sense that it is not necessarily true. It is absurd to suppose that the only things we can reasonably believe are necessary truths.

What if the best explanation not only might be false, but actually is false. Can it ever be reasonable to believe a falsehood? Of course it can. Suppose van Fraassen’s mouse explanation is false, that a mouse is not responsible for the scratching, the patter of little feet, and the disappearing cheese. Still, it is reasonable to believe it, given that it is our best explanation of those phenomena. Of course, if we find out that the mouse explanation is false, it is no longer reasonable to believe it. But what we find out is that what we believed was wrong, not that it was wrong or unreasonable for us to have believed it.

People object that being the best available explanation of a fact does not prove something to be true or even probable. Quite so – and again, so what? The explanationist principle – “It is reasonable to believe that the best available explanation of any fact is true” – means that it is reasonable to believe or think true things that have not been shown to be true or probable, more likely true than not.

Why economics needs economic history

27 July, 2013 at 14:15 | Posted in Economics | Comments Off on Why economics needs economic history

Knowledge of economic and financial history is crucial in thinking about the economy in several ways.
Most obviously, it forces students to recognise that major discontinuities in economic performance and economic policy regimes have occurred many times in the past, and may therefore occur again in the future. These discontinuities have often coincided with economic and financial crises, which therefore cannot be assumed away as theoretically impossible. A historical training would immunise students from the complacency that characterised the “Great Moderation”. Zoom out, and that swan may not seem so black after all.

A second, related point is that economic history teaches students the importance of context …

Third, economic history is an unapologetically empirical field, exclusively dedicated to understanding the real world.
Doing economic history forces students to add to the technical rigor of their programs an extra dimension of rigor: asking whether their explanations for historical events actually fit the facts or not. Which emphatically does not mean cherry-picking selected facts that fit your thesis and ignoring all the ones that don’t: the world is a complicated place, and economists should be trained to recognise this. An exposure to economic history leads to an empirical frame of mind, and a willingness to admit that one’s particular theoretical framework may not always work in explaining the real world. These are essential mental habits for young economists wishing to apply their skills in the work environment, and, one hopes, in academia as well.

Fourth, economic history is a rich source of informal theorising about the real world, which can help motivate more formal theoretical work later on …

Fifth, even once the current economic and financial crisis has passed, the major long run challenges facing the world will still remain …
Apart from issues such as the rise of Asia and the relative decline of the West, other long run issues that would benefit from being framed in a long-term perspective include global warming, the future of globalisation, and the question of how rapidly we can expect the technological frontier to advance in the decades ahead.

Sixth, economic theory itself has been emphasising – for well over 20 years now – that path dependence is ubiquitous …

Finally, and perhaps most importantly from the perspective of an undergraduate economics instructor, economic history is a great way of convincing undergraduates that the theory they are learning in their micro and macro classes is useful in helping them make sense of the real world.

Far from being seen as a ‘soft’ alternative to theory, economic history should be seen as an essential pedagogical complement. There is nothing as satisfying as seeing undergraduates realise that a little bit of simple theory can help them understand complicated real world phenomena. Think of … the Domar thesis, referred to in Temin (2013), which is a great way to talk to students about what drives diminishing returns to labour. Economic history is replete with such opportunities for instructors trying to motivate their students.

Kevin O’Rourke

[h/t Jan Milch]
