Deduction — induction — abduction

25 January, 2016 at 14:42 | Posted in Theory of Science & Methodology | 3 Comments



In science – and economics – one could argue that there are basically three kinds of argumentation patterns/schemes/methods/strategies available:


Premise 1: All Chicago economists believe in REH
Premise 2: Robert Lucas is a Chicago economist
Conclusion: Robert Lucas believes in REH

Here we have an example of a logically valid deductive inference (and, following Quine, whenever logic is used in this essay, ‘logic’ refers to deductive/analytical logic).

In a hypothetico-deductive reasoning — hypothetico-deductive confirmation in this case — we would use the conclusion to test the law-like hypothesis in premise 1 (according to the hypothetico-deductive model, a hypothesis is confirmed by evidence if the evidence is deducible from the hypothesis). If Robert Lucas does not believe in REH we have gained some warranted reason for non-acceptance of the hypothesis (an obvious shortcoming here being that further information beyond that given in the explicit premises might have given another conclusion).

The hypothetico-deductive method (in case we treat the hypothesis as absolutely sure/true, we rather speak of an axiomatic-deductive method) basically means that we

• Posit a hypothesis
• Infer empirically testable propositions (consequences) from it
• Test the propositions through observation or experiment
• Depending on the test results, either find the hypothesis corroborated or falsified.

However, in science we regularly use a kind of ‘practical’ argumentation where there is little room for applying the restricted logical ‘formal transformations’ view of validity and inference. Most people would probably accept the following argument as ‘valid’ reasoning even though, from a strictly logical point of view, it is invalid:

Premise 1: Robert Lucas is a Chicago economist
Premise 2: The recorded proportion of Keynesian Chicago economists is zero
Conclusion: So, certainly, Robert Lucas is not a Keynesian economist

How come? Well, I guess one reason is that in science, contrary to what you find in most logic textbooks, not many arguments are settled by showing that ‘All Xs are Ys.’ In scientific practice we instead present other-than-analytical explicit warrants and backings — data, experience, evidence, theories, models — for our inferences. As long as we can show that our ‘deductions’ or ‘inferences’ are justifiable and have well-backed warrants, our colleagues listen to us. That our scientific ‘deductions’ or ‘inferences’ are logical non-entailments simply is not a problem. To think otherwise is to commit the fallacy of misapplying formal-analytical logic categories to areas where they are pretty much irrelevant or simply beside the point.

Scientific arguments are not analytical arguments, where validity is solely a question of formal properties. Scientific arguments are substantial arguments. Whether or not Robert Lucas is a Keynesian is not something we can decide on the formal properties of statements/propositions. We have to check what the guy has actually been writing and saying to find out if the hypothesis that he is a Keynesian is true or not.

In a deductive-nomological explanation — also known as a covering law explanation — we would try to explain why Robert Lucas believes in REH with the help of the two premises (in this case actually giving an explanation with very little explanatory value). These kinds of explanations — both in their deterministic and statistical/probabilistic versions — rely heavily on deductive entailment from premises assumed to be true. But they have preciously little to say on where these premises come from.

The deductive logic of confirmation and explanation may work well — given that it is used in deterministic closed models! In mathematics, the deductive-axiomatic method has worked just fine. But science is not mathematics, and conflating those two domains of knowledge has been one of the most fundamental mistakes made in the science of economics. Applying the deductive-axiomatic method to real-world systems immediately proves it to be excessively narrow and hopelessly irrelevant. Both the confirmatory and the explanatory ilk of hypothetico-deductive reasoning fail, since there is no way you can relevantly analyze confirmation or explanation as a purely logical relation between hypothesis and evidence, or between law-like rules and explananda. In science we argue and try to substantiate our beliefs and hypotheses with reliable evidence — propositional and predicate deductive logic, on the other hand, is not about reliability, but about the validity of the conclusions given that the premises are true.

Deduction — and the inferences that go with it — is an example of ‘explicative reasoning,’ where the conclusions we make are already included in the premises. Deductive inferences are purely analytical, and it is this truth-preserving nature of deduction that makes it different from all other kinds of reasoning. But it is also its limitation, since truth in the deductive context does not refer to a real-world ontology (it only relates propositions as true or false within a formal-logic system) and as an argument scheme it is totally non-ampliative — the output of the analysis is nothing other than the input.

Just to give an economics example, consider the following rather typical, but also uninformative and tautological, deductive inference:

Premise 1: The firm seeks to maximize its profits
Premise 2: The firm maximizes its profits when MC = MR
Conclusion: The firm will operate its business at the equilibrium MC = MR
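The emptiness is easy to see if we code the inference up: the ‘conclusion’ merely replays the premises. A minimal sketch, where the cost and revenue schedules are invented purely for illustration:

```python
# A deduction-in-code sketch of the profit-maximization syllogism.
# The marginal cost and revenue schedules are invented for illustration.
def marginal_cost(q):
    return 2 + 0.5 * q        # MC rises with output

def marginal_revenue(q):
    return 10 - 0.5 * q       # MR falls with output

# "Conclusion": the firm operates where marginal cost equals marginal
# revenue — but this merely restates the premise; nothing new about
# the firm has been learned.
best_q = min(range(21), key=lambda q: abs(marginal_cost(q) - marginal_revenue(q)))
print(best_q)  # 8: MC(8) = MR(8) = 6.0
```

The output was already guaranteed by the way the premises were set up — which is exactly the point about non-ampliative reasoning.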

This is as empty as deductive-nomological explanations of singular facts building on simple generalizations:

Premise 1: All humans are less than 20 feet tall
Premise 2: Robert Lucas is a human
Conclusion: Robert Lucas is less than 20 feet tall

Although a logically valid inference, this is not much of an explanation (since we would still probably want to know why all humans are less than 20 feet tall).

Deductive-nomological explanations also often suffer from a kind of emptiness that emanates from a lack of real (causal) connection between premises and conclusions:

Premise 1: All humans that take birth control pills do not get pregnant
Premise 2: Lars Syll took birth control pills
Conclusion: Lars Syll did not get pregnant

I guess most people would agree that this is not much of a real explanation.

Learning new things about reality demands something other than reasoning where the knowledge is already embedded in the premises. These other kinds of reasoning may give good — but not conclusive — reasons. That is the price we have to pay if we want to have something substantial and interesting to say about the real world.


Premise 1: This is a randomly selected large set of economists from Chicago
Premise 2: These randomly selected economists all believe in REH
Conclusion: All Chicago economists believe in REH

In this inductive inference we have an example of a logically non-valid inference that we would have to supply with strong empirical evidence to really warrant. And that is no simple matter at all:

In my judgment, the practical usefulness of those modes of inference, here termed Universal and Statistical Induction, on the validity of which the boasted knowledge of modern science depends, can only exist—and I do not now pause to inquire again whether such an argument must be circular—if the universe of phenomena does in fact present those peculiar characteristics of atomism and limited variety which appear more and more clearly as the ultimate result to which material science is tending …

The physicists of the nineteenth century have reduced matter to the collisions and arrangements of particles, between which the ultimate qualitative differences are very few …

The validity of some current modes of inference may depend on the assumption that it is to material of this kind that we are applying them … Professors of probability have been often and justly derided for arguing as if nature were an urn containing black and white balls in fixed proportions. Quetelet once declared in so many words—“l’urne que nous interrogeons, c’est la nature.” But again in the history of science the methods of astrology may prove useful to the astronomer; and it may turn out to be true—reversing Quetelet’s expression—that “La nature que nous interrogeons, c’est une urne”.

John Maynard Keynes

Justified inductions presuppose a resemblance of sorts between what we have experienced and know, and what we have not yet experienced and do not yet know. Just to exemplify this problem of induction, let me take two examples.

Let’s start with this one. Assume you’re a Bayesian turkey and hold a nonzero probability belief in the hypothesis H that ‘people are nice vegetarians that do not eat turkeys,’ a belief that every day you survive seems to confirm. For every day you survive, you update your belief according to Bayes’ Rule

P(H|e) = [P(e|H)P(H)]/P(e),

where evidence e stands for “not being eaten” and P(e|H) = 1. Given that there do exist other hypotheses than H, P(e) is less than 1 and a fortiori P(H|e) is greater than P(H). Every day you survive increases your probability belief that you will not be eaten. This is totally rational according to the Bayesian definition of rationality. Unfortunately — as Bertrand Russell famously noticed — for every day that goes by, the traditional Christmas dinner also gets closer and closer …
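The turkey’s updating can be sketched numerically. A toy simulation — the prior and the survival likelihood under the alternative hypothesis are assumed values, chosen only for illustration:

```python
# Toy Bayesian turkey: update P(H) each day on the evidence e = "not eaten."
# H: "people are nice vegetarians that do not eat turkeys," so P(e|H) = 1.
# Under the alternative ~H, assume (purely for illustration) that the
# daily chance of not being eaten is 0.99.
p_h = 0.5                # prior P(H) — an assumed starting belief
p_e_given_h = 1.0        # survival is certain if H is true
p_e_given_not_h = 0.99   # survival is merely likely if H is false

for day in range(300):
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)  # total probability
    p_h = p_e_given_h * p_h / p_e                          # Bayes' Rule: P(H|e)

print(round(p_h, 3))  # ≈ 0.95 — belief keeps rising, yet Christmas still comes
```

Each surviving day multiplies the odds in favour of H, so the posterior creeps toward 1 — right up until the day the hypothesis is catastrophically falsified.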

Or take the case of macroeconomic forecasting, which perhaps better than anything else illustrates the problem of induction. As a rule macroeconomic forecasts tend to be little better than intelligent guesswork. Or in other words — macroeconomic mathematical-statistical forecasting models, and the inductive logic upon which they ultimately build, are as a rule far from successful. The empirical and theoretical evidence is clear. Predictions and forecasts are inherently difficult to make in a socio-economic domain where genuine uncertainty and unknown unknowns often rule the roost. The real processes that underlie the time series that economists use to make their predictions and forecasts do not conform to the inductive assumptions made in the applied statistical and econometric models. The forecasting models fail to a large extent because the kind of uncertainty that faces humans and societies makes the models, strictly speaking, inapplicable. The future is inherently unknowable — and using statistics and econometrics does not in the least overcome this ontological fact. The economic future is not something that we normally can predict in advance. Better then to accept that as a rule “we simply do not know.”

Induction is sometimes a good guide for evaluating hypotheses. But for the creative generation of plausible and relevant hypotheses it is conspicuously silent. For that we need another — non-algorithmic and ampliative — kind of reasoning.


Premise 1: All Chicago economists believe in REH
Premise 2: These economists believe in REH
Conclusion: These economists are from Chicago

In this case, again, we have an example of a logically non-valid inference — the fallacy of affirming the consequent:

p -> q
q
∴ p

or, in instantiated form

∀x (Cx -> Rx)
Ra
∴ Ca

where Cx stands for ‘x is a Chicago economist’ and Rx for ‘x believes in REH.’

But it is nonetheless an inference that may be strongly warranted and truth-producing — in contradistinction to truth-preserving deductions — following the general pattern Evidence -> Explanation -> Inference.
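That the scheme is deductively invalid can be verified mechanically. A minimal sketch: brute-force every truth assignment and look for a countermodel in which both premises hold while the conclusion fails:

```python
from itertools import product

# Brute-force check that "p -> q, q, therefore p" is invalid:
# search all truth assignments for a countermodel in which both
# premises are true but the conclusion is false.
countermodels = [
    (p, q)
    for p, q in product([True, False], repeat=2)
    if ((not p) or q)     # premise 1: p -> q
    and q                 # premise 2: q
    and not p             # conclusion p fails
]
print(countermodels)  # [(False, True)] — the lone countermodel
```

The single countermodel (p false, q true) is exactly the possibility the abductive reasoner must rule out on substantive, not formal, grounds.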

Here we infer something based on what would be the best explanation given the law-like rule (premise 1) and an observation (premise 2). The truth of the conclusion (explanation) is not logically given, but something we have to justify, argue for, and test in different ways in order to establish it with any degree of certainty. And as always when we deal with explanations, what is considered best is relative to what we know of the world. In the real world all evidence has an irreducible holistic aspect. We never conclude that evidence follows from a hypothesis simpliciter, but always given some more or less explicitly stated contextual background assumptions. All non-deductive inferences and explanations are a fortiori context-dependent.

If we extend the abductive scheme to incorporate the demand that the explanation has to be the best among a set of plausible competing/rival/contrasting potential and satisfactory explanations, we have what is nowadays usually referred to as inference to the best explanation (IBE). In this way IBE is a refinement of the original (Peircean) concept of abduction, making the background-knowledge requirement more explicit.

In abduction we start with a body of (purported) data/facts/evidence and search for explanations that can account for these data/facts/evidence. Having the best explanation means that you, given the context-dependent background assumptions, have a satisfactory explanation that can explain the fact/evidence better than any other competing explanation — and so it is reasonable to consider/believe the hypothesis to be true. Even if we do not (inevitably) have deductive certainty, our abductive reasoning gives us a license to consider our belief in the hypothesis as reasonable.

The model of Inference to the Best Explanation is designed to give a partial account of many inductive inferences, both in science and in ordinary life … Its governing idea is that explanatory considerations are a guide to inference, that scientists infer from the available evidence to the hypothesis which would, if correct, best explain that evidence. Many inferences are naturally described in this way … When a detective infers that it was Moriarty who committed the crime, he does so because this hypothesis would best explain the fingerprints, blood stains and other forensic evidence. Sherlock Holmes to the contrary, this is not a matter of deduction. The evidence will not entail that Moriarty is to blame, since it always remains possible that someone else was the perpetrator. Nevertheless, Holmes is right to make his inference, since Moriarty’s guilt would provide a better explanation of the evidence than would anyone else’s.

Inference to the Best Explanation can be seen as an extension of the idea of ‘self-evidencing’ explanations, where the phenomenon that is explained in turn provides an essential part of the reason for believing the explanation is correct … According to Inference to the Best Explanation, this is a common situation in science: hypotheses are supported by the very observations they are supposed to explain. Moreover, on this model, the observations support the hypothesis precisely because it would explain them.

Peter Lipton

Accepting a hypothesis means that you consider it to explain the available evidence better than any other competing hypothesis. The acceptability warrant comes from the explanatory power of the hypothesis, and the conscious act of trying to rule out the possible competing potential explanations in itself increases the plausibility of the preferred explanation. Knowing that we — after having earnestly considered and analysed the other available potential explanations — have been able to eliminate the competing potential explanations warrants and enhances the confidence we have that our preferred explanation is the best — ‘loveliest’ — explanation, i.e., the explanation that provides us with the greatest understanding (given that it is correct). As Sherlock Holmes had it (in ‘The Sign of Four’): ‘Eliminate the impossible, and whatever remains, however improbable, must be the truth.’ Subsequent confirmation of our hypothesis — by observations, experiments or other future evidence — makes it even better confirmed (and underlines that all explanations are incomplete, and that the models and theories we as scientists use cannot be assessed only by the extent of their fit with experimental or observational data, but also need to be judged by their explanatory power).

This, of course, does not in any way mean that we cannot be wrong. Of course we can. Abductions are fallible inferences — since the premises do not logically entail the conclusion — so from a logical point of view, abduction is a weak mode of inference. But if the abductive arguments put forward are strong enough, they can be warranted and give us justified true belief, and hence knowledge, even though they are fallible inferences. As scientists we sometimes — much like Sherlock Holmes and other detectives who use abductive reasoning — experience disillusion. We thought that we had reached a strong abductive conclusion by ruling out the alternatives in the set of contrasting explanations. But what we thought was true turned out to be false. That does not necessarily mean that we had no good reasons for believing what we believed. If we cannot live with that contingency and uncertainty, well, then we’re in the wrong business. If it is deductive certainty you are after — rather than the ampliative and defeasible reasoning of abduction — then go into maths or logic, not science.

People object that the best available explanation might be false. Quite so – and so what? It goes without saying that any explanation might be false, in the sense that it is not necessarily true. It is absurd to suppose that the only things we can reasonably believe are necessary truths.

What if the best explanation not only might be false, but actually is false. Can it ever be reasonable to believe a falsehood? Of course it can … What we find out is that what we believed was wrong, not that it was wrong or unreasonable for us to have believed it.

People object that being the best available explanation of a fact does not prove something to be true or even probable. Quite so – and again, so what? The explanationist principle – “It is reasonable to believe that the best available explanation of any fact is true” – means that it is reasonable to believe or think true things that have not been shown to be true or probable, more likely true than not.

Alan Musgrave

What makes the works of people like Galileo, Newton, Marx, or Keynes truly interesting is not that they described new empirical facts. No, the truly seminal and pioneering aspect of their works is that they managed to find out and analyse what makes empirical phenomena possible. What are the fundamental physical forces that make heavy objects fall the way they do? Why do people become unemployed? Why are market societies haunted by economic crises? Starting from well-known facts, these scientists discovered the mechanisms and structures that made these empirical facts possible.

The works of these scientists are good illustrations of the fact that in science we are usually not only interested in observable facts and phenomena. Since structures, powers, institutions, relations, etc., are not directly observable, we need to use theories and models to indirectly obtain knowledge of them (and to be able to recontextualize and redescribe observables so as to discover new and (perhaps) hitherto unknown dimensions of the world around us). Deduction and induction do not give us access to these kinds of entities. They are things that to a large extent have to be discovered. Discovery processes presuppose creativity and imagination, virtues that are not very prominent in inductive analysis (statistics and econometrics) or deductive-logical reasoning. We need another mode of inference.

Inference to the best explanation is a (non-demonstrative) ampliative method of reasoning that makes it possible for us to gain new insights and come up with — and evaluate — theories and hypotheses that — in contradistinction to the entailments that deduction provides us with — transcend the epistemological content of the evidence that brought them about. And instead of only delivering inductive generalizations from the evidence at hand — as the inductive scheme does — it typically opens up conceptual novelties and retroduction, where from analysis of empirical data and observation we reconstruct the ontological conditions for their being what they are. As scientists we do not only want to be able to deal with observables. We try to make the world more intelligible by finding ways to understand the fundamental processes and structures that rule the world we live in. Science should help us penetrate to these processes and structures behind the facts and events we observe. We should look out for causal relations, processes and structures, but models — mathematical, econometric, or what have you — can never be more than a starting point in that endeavour. There is always the possibility that there are other (non-quantifiable) variables — of vital importance, and, although perhaps unobservable and non-additive, not necessarily epistemologically inaccessible — that were not considered in the formalized mathematical model. The content-enhancing aspect of inference to the best explanation gives us the possibility of acquiring new and warranted knowledge and understanding of things beyond empirical sense data — such as structures, relations, capacities, powers, etc. Arguably, realism in its different guises ultimately rests on inference to the best explanation to ground the existence of such unobservable entities.

Outside mathematics and logic, scientific methods do not deliver absolute certainty or prove things. However, many economists are still in pursuit of the Holy Grail of absolute certainty. But there will always be a great number of theories and models that are compatible/consistent with the facts, and no logic makes it possible to select one as the right one. The search for absolute certainty can never be anything but disappointing, since all scientific knowledge is more or less uncertain. That is a fact of the way the world is, and we just have to learn to live with that inescapable limitation of scientific knowledge.

Traditionally, philosophers have focused mostly on the logical template of inference. The paradigm-case has been deductive inference, which is topic-neutral and context-insensitive. The study of deductive rules has engendered the search for the Holy Grail: syntactic and topic-neutral accounts of all prima facie reasonable inferential rules. The search has hoped to find rules that are transparent and algorithmic, and whose following will just be a matter of grasping their logical form. Part of the search for the Holy Grail has been to show that the so-called scientific method can be formalised in a topic-neutral way. We are all familiar with Carnap’s inductive logic, or Popper’s deductivism or the Bayesian account of scientific method.

There is no Holy Grail to be found. There are many reasons for this pessimistic conclusion. First, it is questionable that deductive rules are rules of inference. Second, deductive logic is about updating one’s belief corpus in a consistent manner and not about what one has reasons to believe simpliciter. Third, as Duhem was the first to note, the so-called scientific method is far from algorithmic and logically transparent. Fourth, all attempts to advance coherent and counterexample-free abstract accounts of scientific method have failed. All competing accounts seem to capture some facets of scientific method, but none can tell the full story. Fifth, though the new Dogma, Bayesianism, aims to offer a logical template (Bayes’s theorem plus conditionalisation on the evidence) that captures the essential features of non-deductive inference, it is betrayed by its topic-neutrality. It supplements deductive coherence with the logical demand for probabilistic coherence among one’s degrees of belief. But this extended sense of coherence is (almost) silent on what an agent must infer or believe.

Stathis Psillos

Explanations are per se not deductive proofs. And deductive proofs often do not explain at all, since validly deducing X from Y does not per se explain why X is a fact, because it does not say anything at all about how being Y is connected to being X. Explanations do not necessarily have to entail the things they explain, but they can nevertheless confer warrants for the conclusions we reach using inference to the best explanation. The evidential force of inference to the best explanation is consistent with having less than certain belief.

Explanation is prior to inference. Inferring means that you come to believe something and have (evidential) reasons for believing so. As economists we entertain different hypotheses on inflation, unemployment, growth, wealth inequality, and so on. From the available evidence and our context-dependent background knowledge, we evaluate how well the different hypotheses would explain this evidence and which of them qualifies as the best accepted hypothesis. Given the information available, we base our inferences on explanatory considerations (noting this, of course, does not exclude that there exist other, non-explanatory, factors that may influence our choices and rankings of explanations and hypotheses).

If only mainstream economists also understood these basics …

But most of them do not!


Because in mainstream economics it is not inference to the best explanation that rules the methodological-inferential roost, but deductive reasoning based on logical inference from a set of axioms. Although — under specific and restrictive assumptions — deductive methods may be usable tools, insisting that economic theories and models ultimately have to be built on a deductive-axiomatic foundation to count as economic theories and models will only make economics irrelevant for solving real-world economic problems. Modern deductive-axiomatic mainstream economics is surely very rigorous — but if it’s rigorously wrong, who cares?

Instead of making formal logical argumentation based on deductive-axiomatic models the message, I think we are better served by economists who more than anything else try to contribute to solving real problems — and in that endeavour inference to the best explanation is much more relevant than formal logic.

The weaknesses of social-scientific normativism are obvious. The basic assumptions refer to idealized action under pure maxims; no empirically substantive lawlike hypotheses can be derived from them. Either it is a question of analytic statements recast in deductive form or the conditions under which the hypotheses derived could be definitively falsified are excluded under ceteris paribus stipulations. Despite their reference to reality, the laws stated by pure economics have little, if any, information content. To the extent that theories of rational choice lay claim to empirical-analytic knowledge, they are open to the charge of Platonism (Modellplatonismus). Hans Albert has summarized these arguments: The central point is the confusion of logical presuppositions with empirical conditions. The maxims of action introduced are treated not as verifiable hypotheses but as assumptions about actions by economic subjects that are in principle possible. The theorist limits himself to formal deductions of implications in the unfounded expectation that he will nevertheless arrive at propositions with empirical content. Albert’s critique is directed primarily against tautological procedures and the immunizing role of qualifying or “alibi” formulas. This critique of normative-analytic methods argues that general theories of rational action are achieved at too great a cost when they sacrifice empirically verifiable and descriptively meaningful information.

Jürgen Habermas

Science is made possible by the fact that there are structures that are durable and are independent of our knowledge or beliefs about them. There exists a reality beyond our theories and concepts of it. It is this independent reality that our theories in some way deal with. Contrary to positivism, I would argue that the main task of science is not to detect event-regularities between observed facts. Rather, that task must be conceived as identifying the underlying structure and forces that produce the observed events.

From that point of view, it could be argued that the generalizations we look for (often with statistical and econometric methods) when using inductive methods (to say anything about a population based on a given sample) are abductions. From the premise ‘All observed real-world markets are non-perfect’ we conclude ‘All real-world markets are non-perfect.’ If we have tested all the other potential hypotheses and found, e.g., that there is no reason to believe that the sampling process has been biased and that we are dealing with a non-representative, non-random sample, we could, given relevant background beliefs/assumptions, say that we have justified belief in treating our conclusion as warranted. Being able to eliminate/refute contesting/contrastive hypotheses — using both observational and non-observational evidence — confers an increased certainty on the hypothesis I believe to be ‘the loveliest.’

Instead of building models based on logic-axiomatic, topic-neutral, context-insensitive and non-ampliative deductive reasoning — as in mainstream economic theory — it would be more fruitful and relevant to apply inference to the best explanation, given that what we are looking for is to be able to explain what’s going on in the world we live in. The world in which we live is — as argued by, e.g., Keynes and Shackle — genuinely uncertain. By using abductive inferences we can nonetheless gain knowledge about it. Although inevitably defeasible, abduction is also our only source of scientific discovery.

To achieve explanatory success, a theory should, minimally, satisfy two criteria: it should have determinate implications for behavior, and the implied behavior should be what we actually observe. These are necessary conditions, not sufficient ones. Rational-choice theory often fails on both counts. The theory may be indeterminate, and people may be irrational.

In what was perhaps the first sustained criticism of the theory, Keynes emphasized indeterminacy, notably because of the pervasive presence of uncertainty …

Disregarding some more technical sources of indeterminacy, the most basic one is embarrassingly simple: how can one impute to the social agents the capacity to make the calculations that occupy many pages of mathematical appendixes in the leading journals of economics and political science and that can be acquired only through years of professional training?

I believe that much work in economics and political science that is inspired by rational-choice theory is devoid of any explanatory, aesthetic or mathematical interest, which means that it has no value at all. I cannot make a quantitative assessment of the proportion of work in leading journals that fall in this category, but I am confident that it represents waste on a staggering scale.

Jon Elster

Most mainstream economic models are abstract, unrealistic, and present mostly non-testable hypotheses. One important rationale behind this kind of model building is the quest for rigour, and more precisely, logical rigour. Formalization of economics has been going on for more than a century, and with time it has become obvious that the preferred kind of formalization is the one that rigorously follows the rules of formal logic. As in mathematics, this has gone hand in hand with a growing emphasis on axiomatics. Instead of basically trying to establish a connection between empirical data and assumptions, ‘truth’ has come to be reduced to a question of fulfilling internal consistency demands between conclusions and premises, rather than showing a ‘congruence’ between model assumptions and reality. This has, of course, severely restricted the applicability of economic theory and models.

In their search for the Holy Grail of deductivism — an idea originating in physics and maintaining the feasibility and relevance of describing an entire science as a (more or less) self-contained axiomatic-deductive system — mainstream economists are forced to make assumptions with, as a rule, preciously little resemblance to reality. When applying this deductivist thinking to economics, mainstream economists usually set up “as if” models based on a set of tight axiomatic assumptions from which consistent and precise inferences are made. The beauty of this procedure is of course that if the axiomatic premises are true, the conclusions necessarily follow. The snag is that if the models are to be relevant, we also have to argue that their precision and rigour still hold when they are applied to real-world situations. They (almost) never do. In the positivist (Hempelian, deductive-nomological) tradition, explanation is basically seen as deduction from general laws. In the social sciences these laws are non-existent, and so, a fortiori, are the deductivist explanations. When addressing real economies, the idealizations necessary for the deductivist machinery to work simply don’t hold.

The thrust of this realist rhetoric is the same both at the scientific and at the meta-scientific levels. It is that explanatory virtues need not be evidential virtues. It is that you should feel cheated by “The world is as if T were true”, in the same way as you should feel cheated by “The stars move as if they were fixed on a rotating sphere”. Realists do feel cheated in both cases.

Alan Musgrave

The one-eyed focus on validity and consistency makes much of mainstream economics irrelevant, since its insistence on deductive-axiomatic foundations does not earnestly consider the fact that its formal logical reasoning, inferences and arguments show an amazingly weak relationship to their everyday real-world equivalents. Searching in vain for absolute and deductive knowledge and ‘truth,’ these economists forgo the opportunity of getting more relevant and better (defeasible) knowledge. For although the formal logic focus may deepen our insights into the notion of validity, the rigour and precision have a devastatingly important trade-off: the higher the level of rigour and precision, the smaller the range of real-world applications. Consistency does not take us very far. As scientists we cannot only be concerned with the consistency of our universe of discourse. We also have to investigate how consistent our models and theories are with the universe in which we happen to live.

Mainstream economic theory today is in the story-telling business, whereby economic theorists create make-believe analogue models of the targeted real economic system. This modeling activity is considered useful and essential. To understand and explain relations between different entities in the real economy, the predominant strategy is to build models and make things happen in these “analogue-economy models” rather than engineering things happening in real economies. This formalistic-deductive modeling strategy certainly impresses some people, but the one-sided, almost religious, insistence on axiomatic-deductivist modeling as the only scientific activity worthy of pursuing in economics forgets that in the realm of science it ought to be considered of little or no value to simply make claims about the model and lose sight of reality. Although the formalistic tractability of the deductivist mathematical modeling method makes conclusions follow with certainty from given assumptions, that should be of little interest to scientists, since what happens with certainty in a model world is no warrant for the same to hold in real-world economies.


Mathematics, especially through the work of David Hilbert, became increasingly viewed as a discipline properly concerned with providing a pool of frameworks for possible realities …

This emergence of the axiomatic method removed at a stroke various hitherto insurmountable constraints facing those who would mathematise the discipline of economics. Researchers involved with mathematical projects in economics could, for the time being at least, postpone the day of interpreting their preferred axioms and assumptions. There was no longer any need to seek the blessing of mathematicians and physicists or of other economists who might insist that the relevance of metaphors and analogies be established at the outset. In particular it was no longer regarded as necessary, or even relevant, to economic model construction to consider the nature of social reality, at least for the time being …

The result was that in due course deductivism in economics, through morphing into mathematical deductivism on the back of developments within the discipline of mathematics, came to acquire a new lease of life, with practitioners (once more) potentially oblivious to any inconsistency between the ontological presuppositions of adopting a mathematical modelling emphasis and the nature of social reality. The consequent rise of mathematical deductivism has culminated in the situation we find today.

Tony Lawson

Theories and models being ‘coherent’ or ‘consistent’ with data does not make them success stories. To have valid evidence is not enough. What economics needs is sound evidence. The premises of a valid argument do not have to be true; a sound argument, on the other hand, is not only valid but builds on premises that are true. Aiming only for validity, without soundness, is setting economics’ aspiration level too low for developing a realist and relevant science.
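The validity/soundness distinction can be made concrete with a small sketch (the argument forms and variable names here are illustrative, not taken from the post): an argument form is deductively valid when the conclusion holds in every truth assignment that makes all the premises true, and nothing in that purely formal check touches whether the premises are actually true of the world.

```python
from itertools import product

def is_valid(premises, conclusion, n_vars):
    """A form is deductively valid iff no truth assignment makes all
    the premises true while the conclusion is false."""
    for vals in product([False, True], repeat=n_vars):
        if all(p(*vals) for p in premises) and not conclusion(*vals):
            return False
    return True

# Modus ponens over p, q: from (p -> q) and p, infer q.
premises = [lambda p, q: (not p) or q,  # p -> q
            lambda p, q: p]
conclusion = lambda p, q: q
print(is_valid(premises, conclusion, 2))  # True: the *form* is valid

# Affirming the consequent: from (p -> q) and q, infer p.
bad_premises = [lambda p, q: (not p) or q,
                lambda p, q: q]
bad_conclusion = lambda p, q: p
print(is_valid(bad_premises, bad_conclusion, 2))  # False: invalid form
```

Soundness adds a further requirement that no such formal check can supply: the premises must in fact be true of the world, which is an empirical matter, not a logical one.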

In science, nothing of substance has ever been decided by just putting things in the right logical form. Those scientific matters that can be dealt with in a purely formal-analytical manner are only of second-order interest. The absurdity of trying to analyse and explain (necessarily ‘non-Laplacian’) real-world systems equipped with analytical rather than substantial scientific arguments becomes clear as soon as we become aware that this is fundamentally a denial of the field-dependent character of all science. What counts as a justified inference in economics is not necessarily equivalent to what counts in sociology, physics, or biology. They address different problems and questions, and — a fortiori — what is considered absolutely necessary in one field may be considered totally irrelevant in another.

In the case of substantial arguments there is no question of data and backing taken together entailing the conclusion, or failing to entail it: just because the steps involved are substantial ones, it is no use either looking for entailments or being disappointed if we do not find them. Their absence does not spring from a lamentable weakness in the arguments, but from the nature of the problems with which they are designed to deal. When we have to set about assessing the real merits of any substantial argument, analytical criteria such as entailment are, accordingly, simply irrelevant … ‘Strictly speaking’ means, to them, analytically speaking; although in the case of substantial arguments to appeal to analytic criteria is not so much strict as beside the point … There is no justification for applying analytic criteria in all fields of argument indiscriminately, and doing so consistently will lead one (as Hume found) into a state of philosophical delirium.

Abduction and inference to the best explanation show the inherent limits of formal logical reasoning in science. No new ideas or hypotheses in science originate by deduction or induction. In order to come up with new ideas or hypotheses and explain what happens in our world, scientists have to use inference to the best explanation. All scientific explanations inescapably rely on reasoning that is, from a logical point of view, fallacious. Thus, in order to explain what happens in our world, we have to use a reasoning that logically is a fallacy. There is no way around this — unless you want to follow the barren way that mainstream economics has been following for more than half a century now, retreating into the world of thought-experimental ‘as if’ axiomatic-deductive-mathematical models.
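As a rough sketch (the hypotheses, evidence and scoring rule below are invented purely for illustration), inference to the best explanation runs backwards from observed evidence to the hypothesis that would best account for it, which as a deductive form is the fallacy of affirming the consequent:

```python
observations = {"streets_wet", "grass_wet"}

# What each candidate hypothesis, if true, would lead us to expect.
# (Toy data, purely illustrative.)
expectations = {
    "rain":            {"streets_wet", "grass_wet", "sky_overcast"},
    "sprinkler":       {"grass_wet"},
    "street_cleaning": {"streets_wet"},
}

def best_explanation(observed, hypotheses):
    # Reward evidence a hypothesis accounts for; mildly penalise
    # expected evidence that was not observed. None of this is
    # deduction: the evidence does not *entail* any hypothesis.
    def score(h):
        expected = hypotheses[h]
        return len(observed & expected) - 0.5 * len(expected - observed)
    return max(hypotheses, key=score)

print(best_explanation(observations, expectations))  # prints 'rain'
```

The conclusion is defeasible in exactly the sense the text describes: a new observation can overturn it, and the inference is justified by explanatory merit, not by logical form.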

The more mainstream economists insist on formal logical validity, the less they have to say about the real world. And real progress in economics, as in all sciences, presupposes real-world involvement, not only self-referential deductive reasoning within formal-analytical mathematical models.



  1. As a mathematician I have less objection to the insistence on formal logical validity and deductive-axiomatic foundations. What I have difficulty with is the insistence that the set of axioms should be complete, so that the time-development is precisely determined, at least stochastically. This surely is not the case except possibly locally and for a short period?

    It thus seems to me that your argument is not so much against the use of logic as such, but against the kind of abuse that Keynes pointed out.

  2. “Science is made possible by the fact that there are structures that are durable and are independent of our knowledge or beliefs about them. There exists a reality beyond our theories and concepts of it. It is this independent reality that our theories in some way deal with. Contrary to positivism, I would argue that the main task of science is not to detect event-regularities between observed facts. Rather, that task must be conceived as identifying the underlying structure and forces that produce the observed events.”


  3. There’s a lot here to which I might offer demurral regarding the account of how we come to learn and know things about the world. How many of my objections would be just quibbles over a pedagogy for the philosophy of science and how much is relevant to identifying what is wrong with mainstream economics is hard to sort out.
    I would never say “inductive logic” or “abductive logic”. That’s using “logic” as a metaphor of dubious value to label a loosely outlined procedure or approach. There’s no such thing as “inductive logic”, at least not in the sense that there is deductive logic or Boolean logic. Logic is logic, and we make use of it in reasoning about the world, because we presume before we know anything else that the world is a logical place, that (the singular) reality is bound by what is logically possible, and we can therefore usefully test our conjectures about mechanisms at work for logical consistency.
    “Deductive logic”, in my understanding, is also not in the same category as induction or abduction as an approach or procedure for intellectually confronting the world. “Deductive logic” might be a tool for either induction or abduction for the reason stated in the previous paragraph, but it is not, in isolation, an adequate approach to apprehension. Philosophers have been unduly fascinated by the subjective certainty we associate with the validity of a logical argument: we feel psychologically compelled to accept the logical validity of an argument whose form is valid. The philosophers might have done us the courtesy of noticing more conspicuously and consistently that nothing about the form of a syllogism or a theorem makes the logical argument factually true. Valid arguments can be factually absurd. That’s an essential aspect of the distinction between valid and true. So, where induction or abduction may denote approaches to determining or arguing factual truth, and deductive reasoning (or logic or math in general) may be a tool employed at various points in the course of induction or abduction, deductive reasoning in isolation is not such an approach. Deductive logic, employed in the isolation of the form that makes it valid — say, a syllogism or a theorem — never makes a descriptive statement.
    If you want to put something into silos, I recommend three domains of knowing: 1.) the possible, logical, speculative; 2.) positive fact and measurement; and 3.) the normative, ethical or aesthetic. It helps to notice that there’s a gap, such that no purely logical argument is ever factually true by reason of being valid, and no true factual statement, no observation however careful or measurement so precise, is ever sufficiently true to establish by itself what is good or beautiful or desirable. I will leave it to the poets to argue whether there can be meaningful sense in something impossible being true or something false being beautiful, though I think the scientist has to hold that an invalid argument about a functional relationship cannot be true and a factually false argument cannot accurately support judgement of what is good or wise.
    A good pragmatist would say that no actual statement or proposition is ever purely logical and a priori, and I would agree, but the silos are still useful in impressing the important distinctions between theory and fact, fact and value. The pragmatist is right that human beings psychologically cannot construct statements, abstract from fact or value. Still, the counterfactuals of theory are never factual evidence, nor can facts however firmly established or abundant substitute for wise judgment. Philosophy is needed to teach skeptical method to passion and the human propensity to detect patterns and to wilfully believe.
    Economists go wrong when they do not acknowledge that a valid argument is not true just because it is valid. Samuelson’s formula was to argue that if the premises of a valid theorem fit a case, then the argument was necessarily a true statement about the case. He was wrong.
    I am not sure how to convince economists that he was wrong, but it seems to me that’s a key problem.
    As long as economists continue to mistake their “geometry” for a map, continue to imagine that an axiomatic-deductive analysis could be sufficiently descriptive as to substitute for an experiment, they will continue to feel justified in holding on to an ideologically laden and vague superstition about a self-correcting market economy, and the conceit that it makes them expert in a well-founded theory.
