Stylized facts are close kin of ceteris paribus laws. They are ‘broad generalizations true in essence, though perhaps not in detail’. They play a major role in economics, constituting explananda that economic models are required to explain. Models of economic growth, for example, are supposed to explain the (stylized) fact that the profit rate is constant. The unvarnished fact of course is that profit rates are not constant. All sorts of non-economic factors — e.g., war, pestilence, drought, political chicanery — interfere. Manifestly, stylized facts are not (what philosophers would call) facts, for the simple reason that they do not actually obtain. It might seem then that economics takes itself to be required to explain why known falsehoods are true. (Voodoo economics, indeed!) This can’t be correct. Rather, economics is committed to the view that the claims it recognizes as stylized facts are in the right neighborhood, and that their being in the right neighborhood is something economic models should account for. The models may show them to be good approximations in all cases, or where deviations from the economically ideal are small, or where economic factors dominate non-economic ones. Or they might afford some other account of their often being nearly right. The models may diverge as to what is actually true, or as to where, to what degree, and why the stylized facts are as good as they are. But to fail to acknowledge the stylized facts would be to lose valuable economic information (for example, the fact that if we control for the effects of such non-economic interference as war, disease, and the president for life absconding with the national treasury, the profit rate is constant).

Stylized facts figure in other social sciences as well. I suspect that, under a less alarming description, they occur in the natural sciences too. The standard characterization of the pendulum, for example, strikes me as a stylized fact of physics.
The motion of the pendulum which physics is supposed to explain is a motion that no actual pendulum exhibits. What such cases point to is this: The fact that a strictly false description is in the right neighborhood sometimes advances understanding of a domain.
Catherine Elgin thinks we should accept model claims when we consider them to be ‘true enough,’ and Uskali Mäki has argued in a similar vein, maintaining that it could be warranted — based on diverse pragmatic considerations — to accept model claims that are negligibly false.
When critics point out that the basic (DSGE) workhorse model cannot explain involuntary unemployment, its defenders maintain that later elaborations — especially newer search models — manage to do just that. One of the more conspicuous problems with those “solutions,” however, is that they — e.g. Pissarides’ “Loss of Skill during Unemployment and the Persistence of Unemployment Shocks” (QJE, 1992) — are as a rule constructed without seriously trying to warrant that the model-immanent assumptions and results are applicable in the real world. External validity is more or less a non-existent problematique, sacrificed on the altar of model derivations. This is not by chance. For how could one even imagine empirically testing assumptions such as those of Pissarides’ “model 1” — that reality is adequately represented by “two overlapping generations of fixed size,” “wages determined by Nash bargaining,” “actors maximizing expected utility,” “endogenous job openings,” and “job matching describable by a probability distribution” — without coming to the conclusion that this is, in terms of realism and relevance, far from ‘negligibly false’ or ‘true enough’?
Suck on that — and tell me if those typical mainstream — neoclassical — modeling assumptions can, in any possibly relevant way — with or without due pragmatic considerations — be considered anything else but imagined model-world assumptions that have nothing at all to do with the real world we happen to live in!
Realism and relativism stand opposed. This much is apparent if we consider no more than the realist aim for science. The aim of science, realists tell us, is to have true theories about the world, where ‘true’ is understood in the classical correspondence sense. And this seems immediately to presuppose that at least some forms of relativism are mistaken … If realism is correct, then relativism (or some versions of it) is incorrect …
Whether or not realism is correct depends crucially upon what we take realism to assert, over and above the minimal claim about the aim of science.
My way into these issues is through what has come to be called the ‘Ultimate Argument for Scientific Realism’. The slogan is Hilary Putnam’s: “Realism is the only philosophy that does not make the success of science a miracle” …
We can at last be clear about what the Ultimate Argument actually is. It is an example of a so-called inference to the best explanation. How, in general, do such inferences work?
The intellectual ancestor of inference to the best explanation is Peirce’s abduction. Abduction goes something like this:
F is a surprising fact.
If T were true, F would be a matter of course.
Hence, T is true.
The argument is patently invalid: it is the fallacy of affirming the consequent …
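The invalidity can even be checked mechanically. A minimal sketch (plain propositional logic, purely illustrative): brute-force the truth table for the premises ‘if T then F’ and ‘F’ against the conclusion ‘T’, and look for an assignment where both premises hold but the conclusion fails.

```python
from itertools import product

# Abduction's bare propositional skeleton: premises "T -> F" and "F",
# conclusion "T". The form is valid only if no truth assignment makes
# both premises true while the conclusion is false.
counterexamples = [
    (T, F)
    for T, F in product([True, False], repeat=2)
    if ((not T) or F) and F and not T   # premises true, conclusion false
]

print(counterexamples)  # [(False, True)]: F holds, T -> F holds, yet T is false
```

The assignment T = false, F = true satisfies both premises while falsifying the conclusion, which is exactly the fallacy of affirming the consequent.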
What we need is a principle to the effect that it is reasonable to accept a satisfactory explanation which is the best we have as true. And we need to amend the inference-scheme accordingly. What we finish up with goes like this:
It is reasonable to accept a satisfactory explanation of any fact, which is also the best available explanation of that fact, as true.
F is a fact.
Hypothesis H explains F.
Hypothesis H satisfactorily explains F.
No available competing hypothesis explains F as well as H does.
Therefore, it is reasonable to accept H as true …
To return to the Ultimate Argument for scientific realism. It is, I suggest, an inference to the best explanation. The fact to be explained is the (novel) predictive success of science. And the claim is that realism (more precisely, the conjecture that the realist aim for science has actually been achieved) explains this fact, explains it satisfactorily, and explains it better than any non-realist philosophy of science. And the conclusion is that it is reasonable to accept scientific realism (more precisely, the conjecture that the realist aim for science has actually been achieved) as true.
Inference to the Best Explanation can be seen as an extension of the idea of ‘self-evidencing’ explanations, where the phenomenon that is explained in turn provides an essential part of the reason for believing the explanation is correct. For example, a star’s speed of recession explains why its characteristic spectrum is red-shifted by a specified amount, but the observed red-shift may be an essential part of the reason the astronomer has for believing that the star is receding at that speed. Self-evidencing explanations exhibit a curious circularity, but this circularity is benign.
The recession is used to explain the red-shift and the red-shift is used to confirm the recession, yet the recession hypothesis may be both explanatory and well-supported. According to Inference to the Best Explanation, this is a common situation in science: hypotheses are supported by the very observations they are supposed to explain. Moreover, on this model, the observations support the hypothesis precisely because it would explain them. Inference to the Best Explanation thus partially inverts an otherwise natural view of the relationship between inference and explanation. According to that natural view, inference is prior to explanation. First the scientist must decide which hypotheses to accept; then, when called upon to explain some observation, she will draw from her pool of accepted hypotheses. According to Inference to the Best Explanation, by contrast, it is only by asking how well various hypotheses would explain the available evidence that she can determine which hypotheses merit acceptance. In this sense, Inference to the Best Explanation has it that explanation is prior to inference.
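The benign circle can be made concrete with a toy calculation. A minimal sketch, assuming the non-relativistic Doppler approximation z = v/c (the function names and numbers are illustrative, not from the original text): the same relation runs in both directions, from hypothesized velocity to predicted red-shift, and from observed red-shift back to the inferred velocity.

```python
# Toy illustration of a self-evidencing explanation (illustrative numbers;
# non-relativistic Doppler approximation z = v / c, valid only for v << c).

C = 299_792.458  # speed of light in km/s

def redshift_from_velocity(v_km_s: float) -> float:
    """Explanatory direction: the recession speed predicts the red-shift."""
    return v_km_s / C

def velocity_from_redshift(z: float) -> float:
    """Evidential direction: the observed red-shift supports the speed."""
    return z * C

v = 600.0                               # hypothesized recession speed, km/s
z = redshift_from_velocity(v)           # the red-shift the hypothesis explains
v_inferred = velocity_from_redshift(z)  # ... which in turn confirms the speed
assert abs(v_inferred - v) < 1e-9       # the circle is exact, but benign
```

In practice the astronomer runs only the evidential direction (red-shift observed, speed inferred), yet the inference is licensed precisely because the explanatory direction would account for the observation.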
Either (1) the Holocaust happened as a real, objective event in the past, or (2) as an objective fact, it did not. And any left-wing person who denies objective truth has no business opposing, criticising and condemning the disgusting, shameful and ignorant fringe of Holocaust deniers we see today.
Rather, any Postmodernists who really believe their truth relativism and Foucault’s view of truth should be saying that “all truth is made by power,” no objective truths exist, and our “truths” are invented and not determined by some objective reality – not even the Holocaust.
But then the Postmodernist would face these questions:
(1) Is the proposition that “the Holocaust happened” just a truth made by power? If “yes,” what power system “made” it and why?
(2) If you think it is not an objective truth that “the Holocaust happened,” then explain why we have overwhelming evidence that it did.
(3) If you accept the overwhelming evidence that the Holocaust happened, then explain why you would persist in denying the reality of objective truth.
Whatever choice the Postmodernist makes, every path they could take here leads to exactly the same end: an intellectually and morally broken and bankrupt world-view.
(h/t Jan Milch)
“Kritiska undersökningar i Weibull-mytens historia” … is at once a polemic, a feat of detective work, and a piece of — hardly edifying — intellectual history. The Arvidsson–Aarsleff theory of the origins of Weibullianism is far more plausible than L W’s own account, and it is infinitely more plausible than the self-contradictory special pleading performed by his squires. Arvidsson and Aarsleff argue elegantly and stringently for their thesis, but their case is of course not conclusive. It is theoretically possible that L W, without having received any impulses at all, and without having hitherto shown any interest in source-critical problems, produced “Kritiska undersökningar …” out of nothing. There are examples of such isolated tiger leaps, especially in mathematics and poetry. What is utterly impossible is that L W should have been ignorant of Bédier. Nor do we need to speculate about that: Arvidsson has documented, via the university library’s loan records, that L W studied Bédier’s celebrated dissertation on les fabliaux, the satirical French folk tales, on two separate occasions, and that in connection with this he also borrowed the relevant source editions. After 1905 the records are missing, and Weibull’s use of Bédier’s later works — all of which were acquired — cannot be concretely established.
The source-critical spirit was sound, even if the method gained an undeserved reputation as the philosopher’s stone. Source criticism is no algorithm, no box of techniques that can be derived from some particular theory. No one can say in advance what will bear on the dating and understanding of a piece of writing. It may be an ingenious literary analysis, an archaeological find, or something about the chemical properties of the ink. In any case, history is far more than the art of scrutinizing archival documents. Many Weibullian findings are solid and unassailable, but the Weibull myth should now, thanks to Rolf Arvidsson’s persistent work against the current, have been laid to rest for good.
After Rolf Arvidsson, in an article in Scandia (1971), had questioned the received Weibull mythology and shown that Weibull got the decisive ideas for “his” source-critical method from the French literary historian Joseph Bédier, the loyal Weibullian and Lund professor Birgitta Odén tried, in Lauritz Weibull och forskarsamhället (1976), to vindicate the honour of the revered master. Hans Aarsleff, professor at Princeton University, gives the following assessment of that attempt in Kritiska undersökningar i Weibull-mytens historia:
Lauritz Weibull och forskarsamhället (LWF) constantly shouts from the rooftops about the unique and solid importance of the theory of science. To this I can say at once that the dreadful theory of science celebrated here seems to have taken on a particularly virulent and banal form in Lund, as if, armed with it, one could not only manage without intelligence, understanding, reason, imagination, and common sense, but would in fact be much better off than with such unreliable things as reason and imagination. A peculiar detail is the frequent talk about Karl Popper and the use of falsification. Popper was not a positivist, and … seen from a Popperian perspective, all the Weibullian talk of secure facts, of sorting out what is merely probable, and so on, is quite simply nonsense.
As an old Lund historian, one cannot of course be anything but grateful that Arvidsson and Aarsleff, with their thorough source-critical research on the Weibull myth, have finally shown “wie es eigentlich gewesen” with Weibull and the misleading historiography of his followers.
If only mainstream economists also understood these basics …
But they don’t!
Because in mainstream economics it’s not inference to the best explanation that rules the methodological-inferential roost, but deductive reasoning based on logical inference from a set of axioms. Although — under specific and restrictive assumptions — deductive methods may be usable tools, insisting that economic theories and models ultimately have to be built on a deductive-axiomatic foundation to count as economic theories and models will only make economics irrelevant for solving real-world economic problems. Modern deductive-axiomatic mainstream economics is surely very rigorous — but if it’s rigorously wrong, who cares?
Instead of making formal logical argumentation based on deductive-axiomatic models the message, I think we are better served by economists who more than anything else try to contribute to solving real problems — and in that endeavour inference to the best explanation is much more relevant than formal logic.
In a time when scientific relativism is expanding, it is important to keep up the claim that science must not be reduced to a purely discursive level. We have to maintain the Enlightenment tradition of thinking of reality as principally independent of our views of it, and of the main task of science as studying the structure of this reality. Perhaps the most important contribution a researcher can make is to reveal what this reality that is the object of science actually looks like.
Science is made possible by the fact that there are structures that are durable and are independent of our knowledge or beliefs about them. There exists a reality beyond our theories and concepts of it. It is this independent reality that our theories in some way deal with. Contrary to positivism, I would as a critical realist argue that the main task of science is not to detect event-regularities between observed facts. Rather, that task must be conceived as identifying the underlying structure and forces that produce the observed events.
Instead of building models based on logic-axiomatic, topic-neutral, context-insensitive and non-ampliative deductive reasoning — as in mainstream economic theory — it would be so much more fruitful and relevant to apply inference to the best explanation, given that what we are looking for is to be able to explain what’s going on in the world we live in.
Traditionally, philosophers have focused mostly on the logical template of inference. The paradigm-case has been deductive inference, which is topic-neutral and context-insensitive. The study of deductive rules has engendered the search for the Holy Grail: syntactic and topic-neutral accounts of all prima facie reasonable inferential rules. The search has hoped to find rules that are transparent and algorithmic, and whose following will just be a matter of grasping their logical form. Part of the search for the Holy Grail has been to show that the so-called scientific method can be formalised in a topic-neutral way. We are all familiar with Carnap’s inductive logic, or Popper’s deductivism or the Bayesian account of scientific method.
There is no Holy Grail to be found. There are many reasons for this pessimistic conclusion. First, it is questionable that deductive rules are rules of inference. Second, deductive logic is about updating one’s belief corpus in a consistent manner and not about what one has reasons to believe simpliciter. Third, as Duhem was the first to note, the so-called scientific method is far from algorithmic and logically transparent. Fourth, all attempts to advance coherent and counterexample-free abstract accounts of scientific method have failed. All competing accounts seem to capture some facets of scientific method, but none can tell the full story. Fifth, though the new Dogma, Bayesianism, aims to offer a logical template (Bayes’s theorem plus conditionalisation on the evidence) that captures the essential features of non-deductive inference, it is betrayed by its topic-neutrality. It supplements deductive coherence with the logical demand for probabilistic coherence among one’s degrees of belief. But this extended sense of coherence is (almost) silent on what an agent must infer or believe.
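The point about topic-neutrality can be illustrated with a few lines of arithmetic. A minimal sketch (the priors and likelihoods are invented for illustration): two agents with different but individually coherent priors conditionalise on the same evidence via Bayes’s theorem; both remain probabilistically coherent, yet the formal rule alone does not settle what either of them ought to believe.

```python
# Bayesian conditionalisation as a purely formal update rule:
#   posterior(H) = P(E|H) * prior(H) / P(E),
# with P(E) expanded by the law of total probability. Two agents apply the
# same rule to the same evidence but start from different coherent priors.
# (All numbers are illustrative.)

def conditionalise(prior_h: float, likelihood_h: float, likelihood_not_h: float) -> float:
    p_e = likelihood_h * prior_h + likelihood_not_h * (1 - prior_h)
    return likelihood_h * prior_h / p_e

# Same evidence (same likelihoods), different priors:
post_a = conditionalise(prior_h=0.9, likelihood_h=0.8, likelihood_not_h=0.3)
post_b = conditionalise(prior_h=0.1, likelihood_h=0.8, likelihood_not_h=0.3)

print(round(post_a, 3), round(post_b, 3))  # → 0.96 0.229
```

Both posteriors satisfy the probability axioms; coherence constrains how beliefs hang together, not which of the two agents is right, which is the sense in which the extended coherence requirement is ‘(almost) silent’ on what an agent must infer.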
One obvious interpretation is that Model-Platonism implies that economic models are Platonic, if they take the form of thought-experiments, which use idealized conceptions of certain objects or entities (Platonic archetypes). This clearly sounds like a pretty familiar procedure, although the rationale for the thought-experimental character of economic models (if they are conceived as such) has been transformed over the years from apriorism to story-telling ‘without empirical commitment’ …
When interpreting economic models as thought-experiments, that is, as mainly speculative endeavours, these models operate without any obligation to accommodate empirical results. However, if the idealized assumptions employed in these models are interpreted as true forms in a Platonic sense they gain strong empirical relevance, since they are assumed to provide us with a form of knowledge far superior to observational data …
Model-Platonism as an epistemological concept can be understood as the combination of the following two routines: the reliance on thought-experimental style of theorizing as well as the introduction of idealized, metaphysical and, hence, ‘Platonic’ arguments in the form of basic assumptions … We still find resemblances of the Platonic idea of ‘superior insights’ through ‘true forms’ in economic models. Thereby, these insights are sometimes even believed to be generally immune to conflicting empirical evidence …
Taking assumptions like utility maximization or market equilibrium as a matter of course leads to the ‘standing presumption in economics that, if an empirical statement is deduced from standard assumptions then that statement is reliable’ …
The ongoing importance of these assumptions is especially evident in those areas of economic research, where empirical results are challenging standard views on economic behaviour like experimental economics or behavioural finance … From the perspective of Model-Platonism, these research-areas are still framed by the ‘superior insights’ associated with early 20th century concepts, essentially because almost all of their results are framed in terms of rational individuals, who engage in optimizing behaviour and, thereby, attain equilibrium. For instance, the attitude to explain cooperation or fair behaviour in experiments by assuming an ‘inequality aversion’ integrated in (a fraction of) the subjects’ preferences is strictly in accordance with the assumption of rational individuals, a feature which the authors are keen to report …
So, while the mere emergence of research areas like experimental economics is sometimes deemed a clear sign for the advent of a new era … a closer look at these fields allows us to illustrate the enduring relevance of the Model-Platonism-topos and, thereby, shows the pervasion of these fields with a traditional neoclassical style of thought.