Rethinking philosophy of economics

8 October, 2016 at 11:13 | Posted in Theory of Science & Methodology | Leave a comment


In this book Gustavo Marqués, one of our discipline’s most dexterous and acute minds, calmly investigates in depth economics’ most persistent methodological enigmas. Chapter Three alone is sufficient reason for owning this book.
Edward Fullbrook, University of the West of England

Is ‘mainstream philosophy of economics’ only about models and imaginary worlds created to represent economic theories? Marqués questions this epistemic focus and calls for the ontological examination of real world economic processes. This book is a serious challenge to standard thinking and an alternative program for a pluralist philosophy of economics.
John Davis, Marquette University

Exposing the ungrounded pretensions of the mainstream philosophy of economics, Marqués’ carefully argued book is a major contribution to the ongoing debate on contemporary mainstream economics and its methodological and philosophical underpinnings. Even those who disagree with his conclusions will benefit from his thorough and deep critique of the modeling strategies used in modern economics.
Lars P Syll, Malmö University

The main problem with mainstream economics

7 October, 2016 at 17:38 | Posted in Theory of Science & Methodology | Leave a comment

Many economists have over time tried to diagnose the problem behind the ‘intellectual poverty’ that characterizes modern mainstream economics. Rationality postulates, rational expectations, market fundamentalism, general equilibrium, atomism and over-mathematisation are some of the things critics have pointed at. But although these assumptions/axioms/practices are deeply problematic, they are mainly reflections of a deeper and more fundamental problem.

The main problem with mainstream economics is its methodology.

The fixation on constructing models showing the certainty of logical entailment has been detrimental to the development of a relevant and realist economics. Insisting on formalistic (mathematical) modeling forces the economist to give up on realism and substitute axiomatics for real world relevance. The price for rigour and precision is far too high for anyone who is ultimately interested in using economics to pose and (hopefully) answer real world questions and problems.

This deductivist orientation is the main reason behind the difficulty that mainstream economics has in terms of understanding, explaining and predicting what takes place in our societies. But it has also given mainstream economics much of its discursive power – at least as long as no one starts asking tough questions on the veracity of – and justification for – the assumptions on which the deductivist foundation is erected. Asking these questions is an important ingredient in a sustained critical effort at showing how nonsensical it is to embellish a smorgasbord of models founded on wanting (often hidden) methodological foundations.

The mathematical-deductivist straitjacket used in mainstream economics presupposes atomistic closed systems – i.e., something that we find very little of in the real world, a world significantly at odds with the (implicitly) assumed logical world where deductive entailment rules the roost. Ultimately, then, the failings of modern mainstream economics have their roots in a deficient ontology. The kind of formal-analytical and axiomatic-deductive mathematical modeling that makes up the core of mainstream economics is hard to make compatible with a real-world ontology. It is also the reason why so many critics find mainstream economic analysis patently and utterly unrealistic and irrelevant.

Although there has been a clearly discernible increase in the focus on “empirical” economics in recent decades, the results in these research fields have not fundamentally challenged the main deductivist direction of mainstream economics. They are still mainly framed and interpreted within the core “axiomatic” assumptions of individualism, instrumentalism and equilibrium that make up even the “new” mainstream economics. Although, perhaps, a sign of an increasing – but highly path-dependent – theoretical pluralism, mainstream economics is still, from a methodological point of view, mainly a deductive project erected on a foundation of empty formalism.

If we want theories and models to confront reality there are obvious limits to what can be said “rigorously” in economics. For although it is generally a good aspiration to search for scientific claims that are both rigorous and precise, we have to accept that the chosen level of precision and rigour must be relative to the subject matter studied. An economics that is relevant to the world in which we live can never achieve the same degree of rigour and precision as in logic, mathematics or the natural sciences. Collapsing the gap between model and reality in that way will never give us anything other than empty formalist economics.

In mainstream economics, with its addiction to the deductivist approach of formal-mathematical modeling, model consistency trumps coherence with the real world. That is surely getting the priorities wrong. Creating models for their own sake is not an acceptable scientific aspiration – impressive-looking formal-deductive models should never be mistaken for truth.

For many people, deductive reasoning is the mark of science: induction – in which the argument is derived from the subject matter – is the characteristic method of history or literary criticism. But this is an artificial, exaggerated distinction. Scientific progress … is frequently the result of observation that something does work, which runs far ahead of any understanding of why it works.

Not within the economics profession. There, deductive reasoning based on logical inference from a specific set of a priori deductions is “exactly the right way to do things”. What is absurd is not the use of the deductive method but the claim to exclusivity made for it. This debate is not simply about mathematics versus poetry. Deductive reasoning necessarily draws on mathematics and formal logic: inductive reasoning, based on experience and above all careful observation, will often make use of statistics and mathematics …

The belief that models are not just useful tools but are capable of yielding comprehensive and universal descriptions of the world blinded proponents to realities that had been staring them in the face. That blindness made a big contribution to our present crisis, and conditions our confused responses to it.

John Kay

It is still a fact that within mainstream economics internal validity is everything and external validity nothing. Why anyone should be interested in those kinds of theories and models is beyond my imagination. As long as mainstream economists do not come up with any export licenses for their theories and models to the real world in which we live, they really should not be surprised if people say that this is not science, but autism!

Studying mathematics and logic is interesting and fun. It sharpens the mind. In pure mathematics and logic we do not have to worry about external validity. But economics is not pure mathematics or logic. It’s about society. The real world. Forgetting that, economics is really in dire straits.


When applying deductivist thinking to economics, economists usually set up “as if” models based on a set of tight axiomatic assumptions from which consistent and precise inferences are made. The beauty of this procedure is of course that if the axiomatic premises are true, the conclusions necessarily follow. The snag is that if the models are to be relevant, we also have to argue that their precision and rigour still holds when they are applied to real-world situations. They often don’t. When addressing real economies, the idealizations necessary for the deductivist machinery to work simply don’t hold.
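A toy sketch of what this deductivist procedure looks like in practice may be useful (my own illustration, with made-up linear demand and supply schedules, not an example from the post): given the “as if” assumptions, the equilibrium follows with complete logical precision.

# Toy deductivist model (illustration only): assume a single market that clears,
# with linear demand q = a - b*p and linear supply q = c + d*p. Given these
# axiomatic premises, the equilibrium is logically entailed and exactly computable.

def equilibrium(a: float, b: float, c: float, d: float) -> tuple[float, float]:
    """Return (price, quantity) where demand a - b*p equals supply c + d*p."""
    price = (a - c) / (b + d)          # market-clearing condition
    return price, a - b * price

price, quantity = equilibrium(a=100.0, b=2.0, c=10.0, d=1.0)
print(f"Equilibrium price: {price:.2f}, quantity: {quantity:.2f}")   # 30.00, 40.00

The inference is watertight, but only because the premises do all the work. Whether stable linear schedules and a frictionless clearing mechanism actually exist in any real-world market is precisely the question the deductive machinery itself cannot answer.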

So how should we evaluate the search for ever greater precision and the concomitant arsenal of mathematical and formalist models? To a large extent, the answer hinges on what we want our models to accomplish and how we basically understand the world.

For Keynes the world in which we live is inherently uncertain and quantifiable probabilities are the exception rather than the rule. To every statement about it is attached a “weight of argument” that makes it impossible to reduce our beliefs and expectations to a one-dimensional stochastic probability distribution. If “God does not play dice” as Einstein maintained, Keynes would add “nor do people”. The world as we know it has limited scope for certainty and perfect knowledge. Its intrinsic and almost unlimited complexity and the interrelatedness of its organic parts prevent the possibility of treating it as constituted by “legal atoms” with discretely distinct, separable and stable causal relations. Our knowledge accordingly has to be of a rather fallible kind.

To search for precision and rigour in such a world is self-defeating, at least if precision and rigour are supposed to assure external validity. The only way to defend such an endeavour is to turn a blind eye to ontology and restrict oneself to proving things in closed model-worlds. Why we should care about these, and not ask questions of relevance, is hard to see. We have to at least justify our disregard for the gap between the nature of the real world and our theories and models of it.

Keynes once wrote that economics “is a science of thinking in terms of models joined to the art of choosing models which are relevant to the contemporary world.” Now, if the real world is fuzzy, vague and indeterminate, then why should our models be built on a desire to describe it as precise and predictable? Even if there always has to be a trade-off between theory-internal validity and external validity, we have to ask ourselves whether our models are relevant.

Models preferably ought to somehow reflect/express/correspond to reality. I’m not saying that the answers are self-evident, but at least you have to do some philosophical under-labouring to rest your case. Too often that is wanting in modern economics, just as it was when Keynes in the 1930s complained about Tinbergen’s and other econometricians’ lack of justification for their chosen models and methods.

“Human logic” has to supplant the classical, formal, logic of deductivism if we want to have anything of interest to say of the real world we inhabit. Logic is a marvellous tool in mathematics and axiomatic-deductivist systems, but a poor guide for action in real-world systems, in which concepts and entities are without clear boundaries and continually interact and overlap. In this world I would say we are better served with a methodology that takes into account that “the more we know the more we know we don’t know”.

The models and methods we choose to work with have to be congruent with the economy as it is situated and structured. Epistemology has to be founded on ontology. Deductivist closed-system theories, such as all the varieties of the Walrasian general equilibrium kind, could perhaps adequately represent an economy showing closed-system characteristics. But since the economy clearly has more in common with an open-system ontology, we ought to look out for other theories – theories that are rigorous and precise in the sense that they can be deployed to enable us to detect important causal mechanisms, capacities and tendencies pertaining to deep layers of the real world.

Rigour, coherence and consistency have to be defined relative to the entities to which they are supposed to apply. Too often they have been restricted to questions internal to the theory or model. But clearly the nodal point has to concern external questions, such as how our theories and models relate to real-world structures and relations. Applicability rather than internal validity ought to be the arbiter of taste.

So – if we want to develop a new and better economics we have to give up on the deductivist straitjacket methodology. To focus scientific endeavours on proving things in models is a gross misapprehension of what an economic theory ought to be about. Deductivist models and methods disconnected from reality are not relevant to predict, explain or understand real world economies.


If economics is going to be useful, it has to change its methodology. Economists have to get out of their deductivist theoretical ivory towers and start asking questions about the real world. A relevant economic science presupposes adopting methods suitable to the object it is supposed to predict, explain or understand.

The wisdom of crowds

3 October, 2016 at 20:50 | Posted in Theory of Science & Methodology | 1 Comment


Bayesian rationality — nothing but a probabilistic version of irrationalism

19 August, 2016 at 09:26 | Posted in Economics, Theory of Science & Methodology | 9 Comments

The initial choice of a prior probability distribution is not regulated in any way. The probabilities, called subjective or personal probabilities, reflect personal degrees of belief. From a Bayesian philosopher’s point of view, any prior distribution is as good as any other. Of course, from a Bayesian decision maker’s point of view, his own beliefs, as expressed in his prior distribution, may be better than any other beliefs, but Bayesianism provides no means of justifying this position. Bayesian rationality rests in the recipe alone, and the choice of the prior probability distribution is arbitrary as far as the issue of rationality is concerned. Thus, two rational persons with the same goals may adopt prior distributions that are wildly different …

Bayesian learning is completely inflexible after the initial choice of probabilities: all beliefs that result from new observations have been fixed in advance. This holds because the new probabilities are just equal to certain old conditional probabilities …

According to the Bayesian recipe, the initial choice of a prior probability distribution is arbitrary. But the probability calculus might still rule out some sequences of beliefs and thus prevent complete arbitrariness.

Actually, however, this is not the case: nothing is ruled out by the probability calculus …

Thus, anything goes … By adopting a suitable prior probability distribution, we can fix the consequences of any observations for our beliefs in any way we want. This result, which will be referred to as the anything-goes theorem, holds for arbitrarily complicated cases and any number of observations. It implies, among other consequences, that two rational persons with the same goals and experiences can, in all eternity, differ arbitrarily in their beliefs about future events …

From a Bayesian point of view, any beliefs and, consequently, any decisions are as rational or irrational as any other, no matter what our goals and experiences are. Bayesian rationality is just a probabilistic version of irrationalism. Bayesians might say that somebody is rational only if he actually rationalizes his actions in the Bayesian way. However, given that such a rationalization always exists, it seems a bit pedantic to insist that a decision maker should actually provide it.

Max Albert
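To see Albert’s point about prior-dependence in the simplest possible setting, here is a minimal sketch in Python (my own illustration, assuming a toy Beta-Binomial setup, not an example from Albert’s text). Two agents update on exactly the same ten coin tosses, and each posterior is indeed just an old conditional probability fixed in advance, yet because the choice of prior is unconstrained their resulting beliefs end up wildly apart. Both updates are impeccably Bayesian.

# Illustration only: same evidence, different (but equally admissible) priors.
# Conjugate Beta-Binomial update: posterior mean = (a + heads) / (a + b + heads + tails).

def beta_posterior_mean(a: float, b: float, heads: int, tails: int) -> float:
    """Posterior mean of theta ~ Beta(a, b) after observing the given tosses."""
    return (a + heads) / (a + b + heads + tails)

heads, tails = 7, 3                                          # the shared evidence

agent_1 = beta_posterior_mean(1.0, 1.0, heads, tails)        # near-uniform prior
agent_2 = beta_posterior_mean(0.5, 500.0, heads, tails)      # extreme, yet admissible, prior

print(f"Agent 1: P(heads) is believed to be about {agent_1:.3f}")   # ~0.667
print(f"Agent 2: P(heads) is believed to be about {agent_2:.3f}")   # ~0.015

Nothing in the probability calculus singles out one of these priors as the rational one, which is precisely the ‘anything goes’ point.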

Uskali Mäki and Tony Lawson — different varieties of realism

18 August, 2016 at 11:05 | Posted in Theory of Science & Methodology | Leave a comment

We are all realists and we all—Mäki, Cartwright, and I—self-consciously present ourselves as such. The most obvious research-guiding commonality, perhaps, is that we do all look at the ontological presuppositions of economics or economists.

Where we part company, I believe, is that I want to go much further. I guess I would see their work as primarily analytical and my own as more critically constructive or dialectical. My goal is less the clarification of what economists are doing and presupposing as seeking to change the orientation of modern economics … Specifically, I have been much more prepared than the other two to criticise the ontological presuppositions of economists—at least publically. I think Mäki is probably the most guarded. I think too he is the least critical, at least of the state of modern economics …

One feature of Mäki’s work that I am not overly convinced by, but which he seems to value, is his method of theoretical isolation (Mäki 1992). If he is advocating it as a method for social scientific research, I doubt it will be found to have much relevance—for reasons I discuss in Economics and reality (Lawson 1997). But if he is just saying that the most charitable way of interpreting mainstream economists is that they are acting on this method, then fine. Sometimes, though, he seems to imply more …

I cannot get enthused by Mäki’s concern to see what can be justified in contemporary formalistic modelling endeavours. The insights, where they exist, seem so obvious, circumscribed, and tagged on anyway …

As I view things, anyway, a real difference between Mäki and me is that he is far less, or less openly, critical of the state and practices of modern economics … Mäki seems more inclined to accept mainstream economic contributions as largely successful, or anyway uncritically. I certainly do not think we can accept mainstream contributions as successful, and so I proceed somewhat differently …

So if there is a difference here it is that Mäki more often starts out from mainstream academic economic analyses accepted rather uncritically, whilst I prefer to start from those everyday practices widely regarded as successful.

Tony Lawson

Truth and validity

15 August, 2016 at 14:01 | Posted in Theory of Science & Methodology | 1 Comment


Mainstream economics has become increasingly irrelevant to the understanding of the real world. The main reason for this irrelevance is the failure of economists to match their deductive-axiomatic methods with their subject.

It is — sad to say — a fact that within mainstream economics internal validity is everything and external validity and truth nothing. Why anyone should be interested in those kinds of theories and models — as long as mainstream economists do not come up with any export licenses for their theories and models to the real world in which we live — is beyond comprehension. Stupid models are of little or no help in understanding the real world.

Friedman’s ‘as if’ methodology — a total disaster

30 July, 2016 at 20:12 | Posted in Economics, Theory of Science & Methodology | 1 Comment

The explicit and implicit acceptance of Friedman’s as if methodology by mainstream economists has proved to be disastrous. The fundamental paradigm of economics that emerged from this methodology not only failed to anticipate the Crash of 2008 and its devastating effects, this paradigm has proved incapable of producing a consensus within the discipline of economics as to the nature and cause of the economic stagnation we find ourselves in the midst of today. In attempting to understand why this is so it is instructive to examine the nature of Friedman’s arguments within the context in which he formulated them, especially his argument that the truth of a theory’s assumptions is irrelevant so long as the inaccuracy of a theory’s predictions are cataloged and we argue as if those assumptions are true …

A scientific theory is, in fact, the embodiment of its assumptions. There can be no theory without assumptions since it is the assumptions embodied in a theory that provide, by way of reason and logic, the implications by which the subject matter of a scientific discipline can be understood and explained. These same assumptions provide, again, by way of reason and logic, the predictions that can be compared with empirical evidence to test the validity of a theory. It is a theory’s assumptions that are the premises in the logical arguments that give a theory’s explanations meaning, and to the extent those assumptions are false, the explanations the theory provides are meaningless no matter how logically powerful or mathematically sophisticated those explanations based on false assumptions may seem to be.

George Blackford

If scientific progress in economics – as Robert Lucas and other latter-day followers of Milton Friedman seem to think – lies in our ability to tell ‘better and better stories’, one would of course expect economics journals to be filled with articles supporting the stories with empirical evidence confirming the predictions. However, I would argue that the journals still show a striking and embarrassing paucity of empirical studies that (try to) substantiate these predictive claims. Equally amazing is how little is said about the relationship between the models and their real-world target systems. It is as though explicit discussion, argumentation and justification on the subject isn’t considered necessary.

If the ultimate criterion of success of a deductivist system is the extent to which it predicts and coheres with (parts of) reality, modern mainstream economics seems to be a hopeless misallocation of scientific resources. To focus scientific endeavours on proving things in models is a gross misapprehension of what an economic theory ought to be about. Deductivist models and methods disconnected from reality are not relevant to predict, explain or understand real world economies.

Mainstream economics — going for the wrong kind of certainty

6 July, 2016 at 15:24 | Posted in Economics, Theory of Science & Methodology | 1 Comment

In science we standardly use a logically non-valid inference — the fallacy of affirming the consequent — of the following form:

(1) p => q
(2) q
----------
p

or, in instantiated form

(1) ∀x (Gx => Px)
(2) Pa
----------
Ga

Although logically invalid, it is nonetheless a kind of inference — abduction — that may be factually strongly warranted and truth-producing.

Following the general pattern ‘Evidence => Explanation => Inference’ we infer something based on what would be the best explanation given the law-like rule (premise 1) and an observation (premise 2). The truth of the conclusion (explanation) is nothing that is logically given, but something we have to justify, argue for, and test in different ways in order to establish it with any degree of certainty. And as always when we deal with explanations, what is considered best is relative to what we know of the world. In the real world all evidence has an irreducible holistic aspect. We never conclude that evidence follows from a hypothesis simpliciter, but always given some more or less explicitly stated contextual background assumptions. All non-deductive inferences and explanations are necessarily context-dependent.
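A worked instance may help (my own illustration, not from the post), written in LaTeX using the same G/P notation as the schema above:

\begin{align*}
&\text{(1)}\quad \forall x\,(Gx \Rightarrow Px) && \text{``everyone with influenza runs a fever''}\\
&\text{(2)}\quad Pa && \text{``patient $a$ runs a fever''}\\
&\;\therefore\quad Ga && \text{``patient $a$ has influenza'' (abductive, defeasible)}
\end{align*}

The conclusion is not logically entailed, since a fever has many possible causes, but given suitable background assumptions it may still be the best available explanation of the evidence.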

If we extend the abductive scheme to incorporate the demand that the explanation has to be the best among a set of plausible competing/rival/contrasting potential and satisfactory explanations, we have what is nowadays usually referred to as inference to the best explanation.

In inference to the best explanation we start with a body of (purported) data/facts/evidence and search for explanations that can account for these data/facts/evidence. Having the best explanation means that you, given the context-dependent background assumptions, have a satisfactory explanation that can explain the fact/evidence better than any other competing explanation — and so it is reasonable to consider/believe the hypothesis to be true. Even if we (inevitably) do not have deductive certainty, our reasoning gives us a license to consider our belief in the hypothesis as reasonable.

Accepting a hypothesis means that you believe it does explain the available evidence better than any other competing hypothesis. Knowing that we — after having earnestly considered and analysed the other available potential explanations — have been able to eliminate the competing potential explanations, warrants and enhances the confidence we have that our preferred explanation is the best explanation, i.e., the explanation that provides us (given it is true) with the greatest understanding.

This, of course, does not in any way mean that we cannot be wrong. Of course we can. Inferences to the best explanation are fallible inferences — since the premises do not logically entail the conclusion — so from a logical point of view, inference to the best explanation is a weak mode of inference. But if the arguments put forward are strong enough, they can be warranted and give us justified true belief, and hence knowledge, even though they are fallible inferences. As scientists we sometimes — much like Sherlock Holmes and other detectives who use inference-to-the-best-explanation reasoning — experience disillusionment. We thought that we had reached a strong conclusion by ruling out the alternatives in the set of contrasting explanations. But what we thought was true turned out to be false.

That does not necessarily mean that we had no good reasons for believing what we believed. If we cannot live with that contingency and uncertainty, well, then we are in the wrong business. If it is deductive certainty you are after, rather than the ampliative and defeasible reasoning of inference to the best explanation — well, then get into math or logic, not science.

Keynes’ critique of scientific atomism

7 April, 2016 at 19:12 | Posted in Theory of Science & Methodology | Leave a comment

The kind of fundamental assumption about the character of material laws, on which scientists appear commonly to act, seems to me to be much less simple than the bare principle of uniformity. They appear to assume something much more like what mathematicians call the principle of the superposition of small effects, or, as I prefer to call it, in this connection, the atomic character of natural law. The system of the material universe must consist, if this kind of assumption is warranted, of bodies which we may term (without any implication as to their size being conveyed thereby) legal atoms, such that each of them exercises its own separate, independent, and invariable effect, a change of the total state being compounded of a number of separate changes each of which is solely due to a separate portion of the preceding state. We do not have an invariable relation between particular bodies, but nevertheless each has on the others its own separate and invariable effect, which does not change with changing circumstances, although, of course, the total effect may be changed to almost any extent if all the other accompanying causes are different. Each atom can, according to this theory, be treated as a separate cause and does not enter into different organic combinations in each of which it is regulated by different laws …

The scientist wishes, in fact, to assume that the occurrence of a phenomenon which has appeared as part of a more complex phenomenon, may be some reason for expecting it to be associated on another occasion with part of the same complex. Yet if different wholes were subject to laws qua wholes and not simply on account of and in proportion to the differences of their parts, knowledge of a part could not lead, it would seem, even to presumptive or probable knowledge as to its association with other parts. Given, on the other hand, a number of legally atomic units and the laws connecting them, it would be possible to deduce their effects pro tanto without an exhaustive knowledge of all the coexisting circumstances.

Keynes’ incisive critique is of course of interest in general for all sciences, but I think it is also of special interest in economics as a background to much of Keynes’ doubts about inferential statistics and econometrics.

Since econometrics doesn’t content itself with only making ‘optimal predictions’ but also aspires to explain things in terms of causes and effects, econometricians need loads of assumptions. Most important of these are the ‘atomistic’ assumptions of additivity and linearity.
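To see what these atomistic assumptions amount to in practice, consider the workhorse linear-additive regression specification (a standard textbook form, given here as my own gloss rather than Keynes’ notation):

\[
  y_t = \beta_0 + \beta_1 x_{1t} + \beta_2 x_{2t} + \dots + \beta_k x_{kt} + \varepsilon_t,
  \qquad
  \frac{\partial y_t}{\partial x_{jt}} = \beta_j \quad \text{(constant across contexts)}
\]

Each regressor is assumed to exercise its ‘own separate, independent, and invariable effect’, and the total change in y is simply the sum of these separate effects: exactly the superposition of small effects, the ‘atomic character of natural law’, that Keynes questioned.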

These assumptions — as underlined by Keynes — are of paramount importance and ought to be much more argued for — on both epistemological and ontological grounds — if they are to be used at all.

Limiting model assumptions in economic science always have to be closely examined: if we want to show that the mechanisms or causes we isolate and handle in our models are stable, in the sense that they do not change when we ‘export’ them to our ‘target systems,’ we have to show that they hold not only under ceteris paribus conditions, since otherwise they are a fortiori of limited value for our understanding, explanation or prediction of real economic systems.

Econometrics may be an informative tool for research. But if its practitioners do not investigate and make an effort to provide a justification for the credibility of the assumptions on which they erect their building, it will not fulfill its tasks. There is a gap between its aspirations and its accomplishments, and without more supportive evidence to substantiate its claims, critics like Keynes — and yours truly — will continue to consider its ultimate argument as a mixture of rather unhelpful metaphors and metaphysics.

The marginal return on its ever-higher technical sophistication in no way makes up for the lack of serious under-labouring of its deeper philosophical and methodological foundations that Keynes already complained about. Firmly stuck in an empiricist tradition, econometrics is only concerned with the measurable aspects of reality, and a rigorous application of econometric methods in economics really presupposes that the phenomena of our real world economies are ruled by stable causal relations.

But real world social systems are not governed by stable causal mechanisms or capacities. The kinds of ‘laws’ and relations that econometrics has established are laws and relations about entities in models that presuppose causal mechanisms to be atomistic and additive. As Keynes argued, when causal mechanisms operate in the real world they only do so in ever-changing and unstable combinations where the whole is more than a mechanical sum of parts. If economic regularities obtain, they do so as a rule only because we engineered them for that purpose. Outside man-made ‘nomological machines’ they are rare, or even non-existent.

Science and truth

21 March, 2016 at 13:07 | Posted in Theory of Science & Methodology | 1 Comment

In my view, scientific theories are not to be considered ‘true’ or ‘false.’ In constructing such a theory, we are not trying to get at the truth, or even to approximate to it: rather, we are trying to organize our thoughts and observations in a useful manner.

Robert Aumann


What a handy view of science.

How reassuring for all of you who have always thought that believing in the tooth fairy makes you understand what happens to kids’ teeth. Now a ‘Nobel prize’ winning economist tells you that whether or not there are such things as tooth fairies doesn’t really matter. Scientific theories are not about what is true or false, but whether ‘they enable us to organize and understand our observations’ …

Mirabile dictu!

What Aumann and other defenders of scientific storytelling ‘forget’ is that potential explanatory power achieved in thought experimental models is not enough for attaining real explanations. Model explanations are at best conjectures, and whether they do or do not explain things in the real world is something we have to test. To just believe that you understand or explain things better with thought experiments is not enough. Without a warranted export certificate to the real world, model explanations are pretty worthless. Proving things in models is not enough. Truth is an important concept in real science.
