Friedman enters this scene arguing that all we need to do is predict successfully, that this can be done even without realistic theories, and that unrealistic theories are to be preferred to realistic ones, essentially because they can usually be more parsimonious.
The first thing to note about this response is that Friedman is attempting to turn inevitable failure into a virtue. In the context of economic modelling, the need to produce formulations in terms of systems of isolated atoms, where these are not characteristic of social reality, means that unrealistic formulations are more or less unavoidable. Arguing that they are to be preferred to realistic ones in this context belies the fact that there is not a choice …
My own response to Friedman’s intervention is that it was mostly an irrelevancy, but one that has been opportunistically grasped by some as a supposed defence of the profusion of unrealistic assumptions in economics. This would work if successful prediction were possible. But usually it is not.
If scientific progress in economics – as Robert Lucas and other latter-day followers of Milton Friedman seem to think – lies in our ability to tell ‘better and better stories’, one would of course expect economics journals to be filled with articles supporting the stories with empirical evidence confirming the predictions. However, the journals still show a striking and embarrassing paucity of empirical studies that (try to) substantiate these predictive claims. Equally amazing is how little is said about the relationship between the model and real-world target systems. It is as though explicit discussion, argumentation and justification on the subject were not considered to be required.
If the ultimate criterion of success of a model is to what extent it predicts and coheres with (parts of) reality, modern mainstream economics seems to be a hopeless misallocation of scientific resources. To focus scientific endeavours on proving things in models is a gross misapprehension of what an economic theory ought to be about. Deductivist models and methods disconnected from reality are not relevant to predicting, explaining or understanding real-world economies.
There can be no theory without assumptions since it is the assumptions embodied in a theory that provide, by way of reason and logic, the implications by which the subject matter of a scientific discipline can be understood and explained. These same assumptions provide, again, by way of reason and logic, the predictions that can be compared with empirical evidence to test the validity of a theory. It is a theory’s assumptions that are the premises in the logical arguments that give a theory’s explanations meaning, and to the extent those assumptions are false, the explanations the theory provides are meaningless no matter how logically powerful or mathematically sophisticated those explanations based on false assumptions may seem to be.
As yours truly has repeatedly argued on this blog (e.g. here, here and here), RCTs usually do not provide evidence that their results are exportable to other target systems. The almost religious belief with which many of their propagators portray them cannot hide the fact that RCTs cannot be taken for granted to give generalizable results. That something works somewhere is no warrant for it to work for us, or even that it works generally.
An extremely interesting systematic review article on the grand claims to external validity often raised by advocates of RCTs now confirms this view and shows that the RCT is not at all the “gold standard” it is portrayed as:
In theory there seems to be a consensus among empirical researchers that establishing external validity of a policy evaluation study is as important as establishing its internal validity. Against this background, this paper has systematically reviewed the existing RCT literature in order to examine the extent to which external validity concerns are addressed in the practice of conducting and publishing RCTs for policy evaluation purposes. We have identified all 92 papers based on RCTs that evaluate a policy intervention and that are published in the leading economic journals between 2009 and 2014. We reviewed them with respect to whether the published papers address the different hazards of external validity that we developed …
Many published RCTs do not provide a comprehensive presentation of how the experiment was implemented. More than half of the papers do not even provide the reader with information on whether the participants in the experiment are aware of being part of an experiment – which is crucial to assess whether Hawthorne or John Henry effects could have codetermined the outcomes in the RCT …
Further, potential general equilibrium effects are only rarely addressed. This is above all worrisome in case outcomes involve price changes (e.g. labor market outcomes) with straightforward repercussions when the program is brought to scale …
In many of the studies we reviewed, the assumptions that the authors make in generalizing their results, as well as respective limitations to the inferences we can draw, are left behind a veil …
A more transparent reporting would also lead to a situation in which RCTs that properly accounted for the potential hazards to external validity receive more attention than those that did not … We therefore call for dedicating the same devotion to establishing external validity as is done to establish internal validity. It would be desirable if the peer review process at economics journals explicitly scrutinized design features of RCTs that are relevant for extrapolating the findings to other settings and the respective assumptions made by the authors … Given the trade-offs we all face during the laborious implementation of studies it is almost certain that external validity will often be sacrificed for other features to which the peer-review process currently pays more attention.
Blinding is rarely possible in economics or social science trials, and this is one of the major differences from most (although not all) RCTs in medicine, where blinding is standard, both for those receiving the treatment and those administering it … Subjects in social RCTs usually know whether they are receiving the treatment or not and so can react to their assignment in ways that can affect the outcome other than through the operation of the treatment; in econometric language, this is akin to a violation of exclusion restrictions, or a failure of exogeneity …
Note also that knowledge of their assignment may cause people to want to cross over from treatment to control, or vice versa, to drop out of the program, or to change their behavior in the trial depending on their assignment. In extreme cases, only those members of the trial sample who expect to benefit from the treatment will accept treatment. Consider, for example, a trial in which children are randomly allocated to two schools that teach in different languages, Russian or English, as happened during the breakup of the former Yugoslavia. The children (and their parents) know their allocation, and the more educated, wealthier, and less-ideologically committed parents whose children are assigned to the Russian-medium schools can (and did) remove their children to private English-medium schools. In a comparison of those who accepted their assignments, the effects of the language of instruction will be distorted in favor of the English schools by differences in family characteristics. This is a case where, even if the random number generator is fully functional, a later balance test will show systematic differences in observable background characteristics between the treatment and control groups; even if the balance test is passed, there may still be selection on unobservables for which we cannot test …
Various statistical corrections are available for a few of the selection problems non- blinding presents, but all rely on the kind of assumptions that, while common in observational studies, RCTs are designed to avoid. Our own view is that assumptions and the use of prior knowledge are what we need to make progress in any kind of analysis, including RCTs whose promise of assumption-free learning is always likely to be illusory …
This only confirms that ‘ideally controlled experiments’ tell us with certainty what causes what effects — but only given the right ‘closures.’ Making appropriate extrapolations from (ideal, accidental, natural or quasi) experiments to different settings, populations or target systems, is not easy. “It works there” is no evidence for “it will work here”. Causes deduced in an experimental setting still have to show that they come with an export-warrant to the target system. The causal background assumptions made have to be justified, and without licenses to export, the value of ‘rigorous’ methods and ‘on-average-knowledge’ is despairingly small.
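The schools example quoted above is easy to sketch in code. The following toy simulation (all numbers and functional forms are invented for illustration) sets the true effect of the language of instruction to zero, yet selective non-compliance by wealthier families still produces an apparent ‘effect’ in a comparison of those who accepted their assignments:

```python
import random
import statistics

random.seed(42)

# Hypothetical illustration: children are randomised to a "Russian" or an
# "English" school, but unblinded parents can react to the assignment.
N = 10_000
children = []
for _ in range(N):
    family_ses = random.gauss(0, 1)           # socioeconomic status (confounder)
    assigned_russian = random.random() < 0.5  # the random number generator works fine
    # Wealthier families assigned to the Russian school opt out to private
    # English-medium schools: selective non-compliance.
    stays = (not assigned_russian) or (random.random() > max(0.0, family_ses))
    # True effect of language of instruction is zero in this toy world;
    # test scores depend only on family background.
    score = 50 + 10 * family_ses + random.gauss(0, 5)
    children.append((assigned_russian, stays, score))

# Comparison of those who accepted their assignments:
russian = [s for a, stay, s in children if a and stay]
english = [s for a, stay, s in children if not a]
naive_gap = statistics.mean(english) - statistics.mean(russian)
print(f"apparent 'effect' of English schooling: {naive_gap:.1f} points")
```

Since the true effect is zero by construction, any sizeable gap in this comparison is pure selection bias, precisely the failure of exogeneity described in the quote.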
Yours truly is extremely fond of science philosophers and economists like Nancy Cartwright and Angus Deaton. With razor-sharp intellects they immediately go for the essentials. They have no time for bullshit. And neither should we:
Randomised controlled trials (RCTs) have been sporadically used in economic research since the negative income tax experiments between 1968 and 1980 … and have been regularly used since then to evaluate labour market and welfare programmes … In recent years, they have spread widely in economics …
In a recent paper, we argue that some of the popularity of RCTs, among the public as well as some practitioners, rests on misunderstandings about what they are capable of accomplishing (Deaton and Cartwright 2016). Well-conducted RCTs could provide unbiased estimates of the average treatment effect (ATE) in the study population, provided no relevant differences between treatment and control are introduced post randomisation, which blinding of subjects, investigators, data collectors, and analysts serves to diminish. Unbiasedness says that, if we were to repeat the trial many times, we would be right on average. Yet we are almost never in such a situation, and with only one trial (as is virtually always the case) unbiasedness does nothing to prevent our single estimate from being very far away from the truth. If, as is often believed, randomisation were to guarantee that the treatment and control groups are identical except for the treatment, then indeed, we would have a precise – indeed exact – estimate of the ATE. But randomisation does nothing of the kind …
A well-conducted RCT can yield a credible estimate of an ATE in one specific population, namely the ‘study population’ from which the treatments and controls were selected. Sometimes this is enough; if we are doing a post hoc program evaluation, if we are testing a hypothesis that is supposed to be generally true, if we want to demonstrate that the treatment can work somewhere, or if the study population is a randomly drawn sample from the population of interest whose ATE we are trying to measure. Yet the study population is often not the population that we are interested in, especially if subjects must volunteer to be in the experiment and have their own reasons for participating or not …
More generally, demonstrating that a treatment works in one situation is exceedingly weak evidence that it will work in the same way elsewhere; this is the ‘transportation’ problem: what does it take to allow us to use the results in new contexts, whether policy contexts or in the development of theory? … No matter how rigorous or careful the RCT, if the bridge is built by a hand-waving simile that the policy context is somehow similar to the experimental context, the rigor in the trial does nothing to support a policy … Causal effects depend on the settings in which they are derived, and often depend on factors that might be constant within the experimental setting but different elsewhere … Without knowing why things happen and why people do things, we run the risk of worthless casual (‘fairy story’) causal theorising, and we have given up on one of the central tasks of economics.
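Deaton and Cartwright’s point that unbiasedness offers little comfort in a single trial can be illustrated with a toy simulation (the trial size, outcome distribution and effect size are all invented): with a heavily skewed outcome, the estimates are right on average across thousands of hypothetical replications, while any single trial’s estimate can be wildly off.

```python
import random
import statistics

random.seed(1)

# Toy illustration: a small RCT with a heavily skewed outcome (e.g. earnings,
# where a few subjects earn very much). The true ATE is exactly 1.0.
def one_trial(n=50):
    control = [random.lognormvariate(0, 2) for _ in range(n)]
    treated = [random.lognormvariate(0, 2) + 1.0 for _ in range(n)]
    return statistics.mean(treated) - statistics.mean(control)

estimates = [one_trial() for _ in range(5_000)]
print(f"mean over 5000 hypothetical trials: {statistics.mean(estimates):.2f}")
print(f"spread (std dev) of a single trial: {statistics.stdev(estimates):.2f}")
```

Unbiasedness shows up in the first number being close to the true effect of 1.0; the second number shows that the one trial we actually get to run can easily miss by an order of magnitude.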
Nowadays many mainstream economists maintain that ‘imaginative empirical methods’ — such as natural experiments, field experiments, lab experiments, RCTs — can help us to answer questions concerning the external validity of economic models. In their view they are more or less tests of ‘an underlying economic model’ and enable economists to make the right selection from the ever-expanding ‘collection of potentially applicable models.’
When looked at carefully, however, there are in fact few real reasons to share this optimism about the alleged ‘empirical turn’ in economics.
If we see experiments or field studies as theory tests or models that ultimately aspire to say something about the real ‘target system,’ then the problem of external validity is central (and was for a long time also a key reason why behavioural economists had trouble getting their research results published).
Assume that you have examined how the work performance of Chinese workers (A) is affected by some ‘treatment’ (B). How can we extrapolate/generalize to new samples outside the original population (e.g. to the US)? How do we know that any replication attempt ‘succeeds’? How do we know when these replicated experimental results can be said to justify inferences made in samples from the original population? If, for example, P(A|B) is the conditional density function for the original sample, and we are interested in making an extrapolative prediction of E[P(A|B)], how can we know that the new sample’s density function is identical with the original? Unless we can give some really good argument for this being the case, inferences built on P(A|B) say nothing about the target system’s P'(A|B).
External validity/extrapolation/generalization is founded on the assumption that we can make inferences based on P(A|B) that are exportable to other populations for which P'(A|B) applies. Sure, if one can convincingly show that P and P' are similar enough, the problems are perhaps surmountable. But arbitrarily introducing functional specification restrictions of the type invariance/stability/homogeneity is, at least for an epistemological realist, far from satisfactory. And often it is – unfortunately – exactly this that I see when I look at mainstream economists’ RCTs and ‘experiments.’
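A deliberately simple numerical sketch (all labels and numbers are hypothetical) shows what is at stake in assuming P and P' are similar enough: if the treatment effect depends on a background characteristic whose distribution differs between the study and target populations, the exported average effect is simply wrong.

```python
# Hypothetical numbers: the effect of the 'treatment' B on performance A
# depends on a background characteristic (say, prior training) whose
# distribution differs between study and target populations.
effect_by_type = {"trained": 2.0, "untrained": 0.0}

study_population  = {"trained": 0.8, "untrained": 0.2}  # e.g. the Chinese sample
target_population = {"trained": 0.2, "untrained": 0.8}  # e.g. the US target

def average_effect(population):
    # The average effect implied by a population's background distribution.
    return sum(share * effect_by_type[t] for t, share in population.items())

ate_study = average_effect(study_population)    # what the experiment delivers
ate_target = average_effect(target_population)  # what naive export gets wrong
print(ate_study, ate_target)  # 1.6 vs 0.4
```

Exporting the study estimate of 1.6 to the target, where the true average effect is 0.4, is exactly the kind of unlicensed extrapolation at issue: nothing in the experiment itself tells us the two background distributions coincide.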
Many ‘experimentalists’ claim that it is easy to replicate experiments under different conditions and therefore a fortiori easy to test the robustness of experimental results. But is it really that easy? Population selection is almost never simple. Had the problem of external validity only been about inference from sample to population, this would be no critical problem. But the really interesting inferences are those we try to make from specific labs/experiments/fields to the specific real-world situations/institutions/structures that we are interested in understanding or (causally) explaining. And then the population problem is more difficult to tackle.
In randomized trials the researchers try to find out the causal effects that different variables of interest may have by changing circumstances randomly — a procedure somewhat (‘on average’) equivalent to the usual ceteris paribus assumption.
Besides the fact that ‘on average’ is not always ‘good enough,’ it amounts to nothing but hand waving to simpliciter assume, without argumentation, that it is tenable to treat social agents and relations as homogeneous and interchangeable entities.
Randomization is used to basically allow the econometrician to treat the population as consisting of interchangeable and homogeneous groups (‘treatment’ and ‘control’). The regression models one arrives at by using randomized trials tell us the average effect that variations in variable X has on the outcome variable Y, without having to explicitly control for effects of other explanatory variables R, S, T, etc., etc. Everything is assumed to be essentially equal except the values taken by variable X.
In a usual regression context one would apply an ordinary least squares estimator (OLS) in trying to get an unbiased and consistent estimate:
Y = α + βX + ε,
where α is a constant intercept, β a constant ‘structural’ causal effect and ε an error term.
The problem here is that although we may get an estimate of the ‘true’ average causal effect, this may ‘mask’ important heterogeneous effects of a causal nature. Even if we get the right answer, an average causal effect of 0, those who are ‘treated’ (X=1) may have causal effects equal to −100 and those ‘not treated’ (X=0) may have causal effects equal to +100. Contemplating being treated or not, most people would probably be interested in knowing about this underlying heterogeneity and would not consider the OLS average effect particularly enlightening.
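One way to make the heterogeneity point concrete is a small simulation (all numbers invented) with two equally large, unobserved subgroups whose causal effects are +100 and −100: the OLS estimate dutifully reports an average effect near zero while saying nothing about either subgroup.

```python
import random
import statistics

random.seed(7)

# Toy simulation: the average causal effect is 0 by construction,
# but it masks two opposite subgroup effects.
N = 20_000
X, Y, group = [], [], []
for i in range(N):
    g = i % 2                        # two equally large, unobserved subgroups
    effect = 100 if g == 0 else -100
    treated = random.random() < 0.5  # randomised 'treatment'
    y = 50 + effect * treated + random.gauss(0, 1)
    X.append(1.0 if treated else 0.0)
    Y.append(y)
    group.append(g)

# OLS slope: beta = cov(X, Y) / var(X)
mx, my = statistics.mean(X), statistics.mean(Y)
beta = sum((x - mx) * (y - my) for x, y in zip(X, Y)) / sum((x - mx) ** 2 for x in X)
print(f"OLS average causal effect: {beta:.2f}")  # close to 0

# The subgroup effects that the average masks:
def subgroup_effect(g):
    t = [y for x, y, gg in zip(X, Y, group) if gg == g and x == 1.0]
    c = [y for x, y, gg in zip(X, Y, group) if gg == g and x == 0.0]
    return statistics.mean(t) - statistics.mean(c)

print(subgroup_effect(0), subgroup_effect(1))  # roughly +100 and -100
```

The OLS estimate is not wrong about the average; it is silent about everything else, which is exactly what someone contemplating treatment would want to know.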
Limiting model assumptions in economic science always have to be closely examined. If we are to show that the mechanisms or causes we isolate and handle in our models are stable, in the sense that they do not change when we ‘export’ them to our ‘target systems,’ we have to be able to show that they hold not only under ceteris paribus conditions. Otherwise they are a fortiori only of limited value to our understanding, explanations or predictions of real economic systems.
Most ‘randomistas’ underestimate the heterogeneity problem. It does not just turn up as an external validity problem when trying to ‘export’ regression results to different times or different target populations. It is also often an internal problem to the millions of regression estimates that economists produce every year.
Just like econometrics, randomization promises more than it can deliver, basically because it requires assumptions that in practice are not possible to maintain. And just like econometrics, randomization is basically a deductive method. Given the assumptions, these methods deliver deductive inferences. The problem, of course, is that we will never completely know when the assumptions are right. And although randomization may contribute to controlling for confounding, it does not guarantee it, since genuine randomness presupposes infinite experimentation and we know all real experimentation is finite. And even if randomization may help to establish average causal effects, it says nothing of individual effects unless homogeneity is added to the list of assumptions. Causal evidence generated by randomization procedures may be valid in ‘closed’ models, but what we usually are interested in is causal evidence in the real target system we happen to live in.
When does a conclusion established in population X hold for target population Y? Only under very restrictive conditions!
‘Ideally controlled experiments’ tell us with certainty what causes what effects — but only given the right ‘closures.’ Making appropriate extrapolations from (ideal, accidental, natural or quasi) experiments to different settings, populations or target systems, is not easy. “It works there” is no evidence for “it will work here”. Causes deduced in an experimental setting still have to show that they come with an export-warrant to the target population/system. The causal background assumptions made have to be justified, and without licenses to export, the value of ‘rigorous’ and ‘precise’ methods — and ‘on-average-knowledge’ — is despairingly small.
A common idea among mainstream — neoclassical — economists is that science advances through the use of ‘as if’ modeling assumptions and ‘successive approximations’. But is this really a feasible methodology? I think not.
Most models in science are representations of something else. Models ‘stand for’ or ‘depict’ specific parts of a ‘target system’ (usually the real world). All theories and models have to use sign vehicles to convey some kind of content that may be used for saying something of the target system. But purpose-built assumptions — like ‘rational expectations’ or ‘representative actors’ — made solely to secure a way of reaching deductively validated results in mathematical models, are of little value if they cannot be validated outside of the model.
All empirical sciences use simplifying or unrealistic assumptions in their modeling activities. That is not the issue – as long as the assumptions made are not unrealistic in the wrong way or for the wrong reasons.
The implications that follow from the kind of models that mainstream economists construct are always conditional on the simplifying assumptions used — assumptions predominantly of a rather far-reaching and non-empirical character with little resemblance to features of the real world. From a descriptive point of view there is usually very little resemblance between the models used and the empirical world. ‘As if’ explanations building on such foundations are not really any explanations at all, since they always conditionally build on hypothesized law-like theorems and situation-specific restrictive assumptions. The empirical-descriptive inaccuracy of the models makes it more or less miraculous if they should — in any substantive way — be able to be considered explanative at all. If the assumptions that are made are known to be descriptively totally unrealistic — think of e.g. ‘rational expectations’ — they are of course likewise totally worthless for making empirical inductions. Assuming that people behave ‘as if’ they were rational computers doesn’t take us far when we know that the ‘if’ is false.
To give up the quest for truth and to merely study the internal logic of credible worlds is not compatible with scientific realism. And assessing the adequacy of a theory or model solely in terms of “the interests of the theorist” or “on purely aesthetic grounds” does not seem to be a warranted scientific position. That would be lowering one’s standards of fidelity beyond reasonable limits. Theories and models must be justified on more grounds than their intended scope. Scientific theories and models must have ontological constraints, and the most non-negotiable of these is – at least from a realist point of view – that they have to be coherent with the way the world is.
Even though we might say that models are devised “to account for stylized facts or data”, as philosopher of science Tarja Knuuttila — in ‘Isolating Representations Versus Credible Constructions? Economic Modelling in Theory and Practice,’ Erkenntnis 2009 — has been arguing, using models as ‘stylised facts’ somehow ‘approximating’ reality is a rather unimpressive attempt at legitimizing fictitious idealizations, for reasons that have more to do with model tractability than with a genuine interest in understanding and explaining features of real economies.
Knuuttila notices that most economic models fall short of representing real systems. I agree. Mainstream neoclassical economic theory employs very few principles, and among those used, bridge principles are as a rule missing. But instead of criticising this, she rather apologetically concludes that “the connections between the models and the data, or what is known about economies more generally, are just looser than what is traditionally assumed.” To my ears this sounds like trying to turn failure into virtue. Why should we be concerned with economic models that are “purely hypothetical constructions”?
A consistent autonomist vision … maintains that it is possible to learn from models just by constructing and manipulating them, without making any reference to their capacity to refer to any extra-model reality. In several of her works, Tarja Knuuttila claims that models are constructed freely, and that valuable information is obtained from them just by constructing, examining and, particularly, manipulating them. The problem that philosophy of economics must solve, however, is to show how models can offer relevant and reliable knowledge about our world, and in this regard Knuuttila’s solution seems to be merely verbal and is based on a semantic displacement. Nobody doubts that through constructing and manipulating “objects” something is learned. Mathematics does it all the time in reference to some kind of abstract entity. Learning is also possible through manipulating material objects (for example, the magic cube). But the philosophical problem that demands solution is whether the construction and manipulation of theoretical economic models allows us to learn something about our world, the empirical world that surrounds us, particularly about real market economies, and whether what is thus learned can somehow be used to design and to implement successful economic policies. The arguments of Knuuttila avoid approaching these issues. In my opinion it is far more enlightening to interpret these models as the accomplishment of mere intellectual exercises …
If … economic processes are open ended, intervenible, based on expectations and pervaded by unavoidable conflicting interests, I do not see how theoretical models could provide significantly better knowledge than what is offered by practical knowledge and common sense.
Constructing minimal macroeconomic ‘as if’ models — or using microfounded macroeconomic models as ‘stylised facts’ somehow ‘successively approximating’ macroeconomic reality — is a rather unimpressive attempt at legitimizing the use of fictitious idealizations for reasons that have more to do with model tractability than with a genuine interest in understanding and explaining features of real economies. Many of the model assumptions standardly made in mainstream economics are restrictive rather than harmless and can therefore not in any sensible sense be considered approximations at all.
Economics building on such a modeling strategy does not produce science. As long as mainstream economists cannot come up with anything even approaching a reasonable account of what the practical results of their theoretical modeling endeavours are, they have to live with the fact that most science theoreticians, philosophers, and methodologists dismiss economics as an empirically relevant science.
In this book Gustavo Marqués, one of our discipline’s most dexterous and acute minds, calmly investigates in depth economics’ most persistent methodological enigmas. Chapter Three alone is sufficient reason for owning this book.
Edward Fullbrook, University of the West of England
Is ‘mainstream philosophy of economics’ only about models and imaginary worlds created to represent economic theories? Marqués questions this epistemic focus and calls for the ontological examination of real world economic processes. This book is a serious challenge to standard thinking and an alternative program for a pluralist philosophy of economics.
John Davis, Marquette University
Exposing the ungrounded pretensions of the mainstream philosophy of economics, Marqués’ carefully argued book is a major contribution to the ongoing debate on contemporary mainstream economics and its methodological and philosophical underpinnings. Even those who disagree with his conclusions will benefit from his thorough and deep critique of the modeling strategies used in modern economics.
Lars P Syll, Malmö University
Many economists have over time tried to diagnose the problem behind the ‘intellectual poverty’ that characterizes modern mainstream economics. Rationality postulates, rational expectations, market fundamentalism, general equilibrium, atomism and over-mathematisation are some of the things that have been pointed at. But although these assumptions/axioms/practices are deeply problematic, they are mainly reflections of a deeper and more fundamental problem.
The main problem with mainstream economics is its methodology.
The fixation on constructing models showing the certainty of logical entailment has been detrimental to the development of a relevant and realist economics. Insisting on formalistic (mathematical) modeling forces the economist to give up on realism and substitute axiomatics for real-world relevance. The price for rigour and precision is far too high for anyone who is ultimately interested in using economics to pose and (hopefully) answer real-world questions and problems.
This deductivist orientation is the main reason behind the difficulty that mainstream economics has in understanding, explaining and predicting what takes place in our societies. But it has also given mainstream economics much of its discursive power – at least as long as no one starts asking tough questions about the veracity of – and justification for – the assumptions on which the deductivist foundation is erected. Asking these questions is an important ingredient in a sustained critical effort at showing how nonsensical the embellishing of a smorgasbord of models founded on wanting (often hidden) methodological foundations really is.
The mathematical-deductivist straitjacket used in mainstream economics presupposes atomistic closed systems – i.e., something that we find very little of in the real world, a world significantly at odds with the (implicitly) assumed logic world where deductive entailment rules the roost. Ultimately, then, the failings of modern mainstream economics have their root in a deficient ontology. The kind of formal-analytical and axiomatic-deductive mathematical modeling that makes up the core of mainstream economics is hard to make compatible with a real-world ontology. It is also the reason why so many critics find mainstream economic analysis patently and utterly unrealistic and irrelevant.
Although there has been a clearly discernible increase and focus on “empirical” economics in recent decades, the results in these research fields have not fundamentally challenged the main deductivist direction of mainstream economics. They are still mainly framed and interpreted within the core “axiomatic” assumptions of individualism, instrumentalism and equilibrium that make up even the “new” mainstream economics. Although, perhaps, a sign of an increasing – but highly path-dependent – theoretical pluralism, mainstream economics is still, from a methodological point of view, mainly a deductive project erected on a foundation of empty formalism.
If we want theories and models to confront reality there are obvious limits to what can be said “rigorously” in economics. For although it is generally a good aspiration to search for scientific claims that are both rigorous and precise, we have to accept that the chosen level of precision and rigour must be relative to the subject matter studied. An economics that is relevant to the world in which we live can never achieve the same degree of rigour and precision as logic, mathematics or the natural sciences. Collapsing the gap between model and reality in that way will never give anything other than empty formalist economics.
In mainstream economics, with its addiction to the deductivist approach of formal-mathematical modeling, model consistency trumps coherence with the real world. That is surely getting the priorities wrong. Creating models for their own sake is not an acceptable scientific aspiration – impressive-looking formal-deductive models should never be mistaken for truth.
For many people, deductive reasoning is the mark of science: induction – in which the argument is derived from the subject matter – is the characteristic method of history or literary criticism. But this is an artificial, exaggerated distinction. Scientific progress … is frequently the result of observation that something does work, which runs far ahead of any understanding of why it works.
Not within the economics profession. There, deductive reasoning based on logical inference from a specific set of a priori deductions is “exactly the right way to do things”. What is absurd is not the use of the deductive method but the claim to exclusivity made for it. This debate is not simply about mathematics versus poetry. Deductive reasoning necessarily draws on mathematics and formal logic: inductive reasoning, based on experience and above all careful observation, will often make use of statistics and mathematics …
The belief that models are not just useful tools but are capable of yielding comprehensive and universal descriptions of the world blinded proponents to realities that had been staring them in the face. That blindness made a big contribution to our present crisis, and conditions our confused responses to it.
It is still a fact that within mainstream economics internal validity is everything and external validity nothing. Why anyone should be interested in those kinds of theories and models is beyond my imagination. As long as mainstream economists do not come up with any export licenses for their theories and models to the real world in which we live, they really should not be surprised if people say that this is not science, but autism!
Studying mathematics and logic is interesting and fun. It sharpens the mind. In pure mathematics and logic we do not have to worry about external validity. But economics is not pure mathematics or logic. It’s about society. The real world. When it forgets that, economics is really in dire straits.
When applying deductivist thinking to economics, economists usually set up “as if” models based on a set of tight axiomatic assumptions from which consistent and precise inferences are made. The beauty of this procedure is of course that if the axiomatic premises are true, the conclusions necessarily follow. The snag is that if the models are to be relevant, we also have to argue that their precision and rigour still hold when they are applied to real-world situations. They often don’t. When addressing real economies, the idealizations necessary for the deductivist machinery to work simply don’t hold.
So how should we evaluate the search for ever greater precision and the concomitant arsenal of mathematical and formalist models? To a large extent, the answer hinges on what we want our models to do and how we basically understand the world.
For Keynes the world in which we live is inherently uncertain and quantifiable probabilities are the exception rather than the rule. To every statement about it is attached a “weight of argument” that makes it impossible to reduce our beliefs and expectations to a one-dimensional stochastic probability distribution. If “God does not play dice” as Einstein maintained, Keynes would add “nor do people”. The world as we know it has limited scope for certainty and perfect knowledge. Its intrinsic and almost unlimited complexity and the interrelatedness of its organic parts prevent the possibility of treating it as constituted by “legal atoms” with discretely distinct, separable and stable causal relations. Our knowledge accordingly has to be of a rather fallible kind.
To search for precision and rigour in such a world is self-defeating, at least if precision and rigour are supposed to assure external validity. The only way to defend such an endeavour is to turn a blind eye to ontology and restrict oneself to proving things in closed model-worlds. Why we should care about these, and not ask questions of relevance, is hard to see. We have to at least justify our disregard for the gap between the nature of the real world and our theories and models of it.
Keynes once wrote that economics “is a science of thinking in terms of models joined to the art of choosing models which are relevant to the contemporary world.” Now, if the real world is fuzzy, vague and indeterminate, then why should our models be built on a desire to describe it as precise and predictable? Even if there always has to be a trade-off between theory-internal validity and external validity, we have to ask ourselves whether our models are relevant.
Models preferably ought to somehow reflect/express/correspond to reality. I’m not saying that the answers are self-evident, but at least you have to do some philosophical under-labouring to rest your case. Too often that is wanting in modern economics, just as it was when Keynes in the 1930s complained about Tinbergen’s and other econometricians’ lack of justification for their chosen models and methods.
“Human logic” has to supplant the classical, formal, logic of deductivism if we want to have anything of interest to say about the real world we inhabit. Logic is a marvellous tool in mathematics and axiomatic-deductivist systems, but a poor guide for action in real-world systems, in which concepts and entities are without clear boundaries and continually interact and overlap. In this world I would say we are better served by a methodology that takes into account that “the more we know the more we know we don’t know”.
The models and methods we choose to work with have to be in conformity with the economy as it is situated and structured. Epistemology has to be founded on ontology. Deductivist closed-system theories, such as all the varieties of the Walrasian general-equilibrium kind, could perhaps adequately represent an economy showing closed-system characteristics. But since the economy clearly has more in common with an open-system ontology, we ought to look out for other theories – theories that are rigorous and precise in the sense that they can be deployed to enable us to detect important causal mechanisms, capacities and tendencies pertaining to deep layers of the real world.
Rigour, coherence and consistency have to be defined relative to the entities to which they are supposed to apply. Too often they have been restricted to questions internal to the theory or model. But clearly the nodal point has to concern external questions, such as how our theories and models relate to real-world structures and relations. Applicability rather than internal validity ought to be the arbiter of taste.
So – if we want to develop a new and better economics we have to give up on the deductivist straitjacket methodology. To focus scientific endeavours on proving things in models is a gross misapprehension of what an economic theory ought to be about. Deductivist models and methods disconnected from reality are not relevant for predicting, explaining or understanding real-world economies.
If economics is going to be useful, it has to change its methodology. Economists have to get out of their deductivist theoretical ivory towers and start asking questions about the real world. A relevant economic science presupposes adopting methods suitable to the object it is supposed to predict, explain or understand.
The initial choice of a prior probability distribution is not regulated in any way. The probabilities, called subjective or personal probabilities, reflect personal degrees of belief. From a Bayesian philosopher’s point of view, any prior distribution is as good as any other. Of course, from a Bayesian decision maker’s point of view, his own beliefs, as expressed in his prior distribution, may be better than any other beliefs, but Bayesianism provides no means of justifying this position. Bayesian rationality rests in the recipe alone, and the choice of the prior probability distribution is arbitrary as far as the issue of rationality is concerned. Thus, two rational persons with the same goals may adopt prior distributions that are wildly different …
Bayesian learning is completely inflexible after the initial choice of probabilities: all beliefs that result from new observations have been fixed in advance. This holds because the new probabilities are just equal to certain old conditional probabilities …
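The point that Bayesian “learning” is fixed in advance can be made concrete with a toy calculation. The following sketch uses entirely made-up numbers and hypothetical labels (“h1”, “e”, and so on); its only purpose is to show that the posterior reached after observing a piece of evidence is nothing but a conditional probability already encoded in the prior joint distribution, computable before anything has been observed at all.

```python
# A toy joint prior over a hypothesis H in {h1, h2} and possible evidence
# E in {e, not_e}. All numbers are illustrative, chosen for a clean example.
prior_joint = {
    ("h1", "e"): 0.40, ("h1", "not_e"): 0.10,
    ("h2", "e"): 0.10, ("h2", "not_e"): 0.40,
}

def posterior(hypothesis, evidence):
    """Bayesian updating: P(H | E) = P(H, E) / P(E)."""
    p_evidence = sum(p for (h, e), p in prior_joint.items() if e == evidence)
    return prior_joint[(hypothesis, evidence)] / p_evidence

# The "updated" belief after seeing e is just the old conditional probability
# P(h1 | e) = 0.40 / 0.50 = 0.8 - fixed in advance by the prior alone.
print(posterior("h1", "e"))
```

Nothing in the observation changes the functional form of the belief revision: every possible posterior was already tabulated, implicitly, in the prior.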
According to the Bayesian recipe, the initial choice of a prior probability distribution is arbitrary. But the probability calculus might still rule out some sequences of beliefs and thus prevent complete arbitrariness.
Actually, however, this is not the case: nothing is ruled out by the probability calculus …
Thus, anything goes … By adopting a suitable prior probability distribution, we can fix the consequences of any observations for our beliefs in any way we want. This result, which will be referred to as the anything-goes theorem, holds for arbitrarily complicated cases and any number of observations. It implies, among other consequences, that two rational persons with the same goals and experiences can, in all eternity, differ arbitrarily in their beliefs about future events …
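A minimal numerical illustration of the anything-goes point (a sketch with purely hypothetical priors and data, using standard conjugate Beta-Binomial updating): two agents both apply Bayes’ rule correctly to the very same coin-flip evidence, yet because one starts from a sufficiently dogmatic prior, their posterior beliefs about the coin’s bias remain arbitrarily far apart.

```python
# Conjugate Beta-Binomial updating for two agents with different priors.
# All numbers are illustrative. Both agents update on identical data, and
# both updates are impeccably "rational" by Bayesian lights - yet the
# probability calculus rules out neither resulting belief.

def beta_posterior_mean(alpha, beta, heads, tails):
    """Posterior mean of the coin's heads-probability under a Beta prior."""
    return (alpha + heads) / (alpha + beta + heads + tails)

data = (10, 50)  # both agents observe 10 heads and 50 tails

agent_a = beta_posterior_mean(1, 1, *data)        # flat, "open-minded" prior
agent_b = beta_posterior_mean(1000, 10, *data)    # dogmatic "mostly heads" prior

print(round(agent_a, 3))  # 11/62  ~ 0.177: the coin looks tails-biased
print(round(agent_b, 3))  # 1010/1070 ~ 0.944: still convinced it favours heads
```

By picking the prior strengths appropriately, agent B’s posterior can be held as close to any value as one likes, for any finite amount of evidence: this is the anything-goes theorem in miniature.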
From a Bayesian point of view, any beliefs and, consequently, any decisions are as rational or irrational as any other, no matter what our goals and experiences are. Bayesian rationality is just a probabilistic version of irrationalism. Bayesians might say that somebody is rational only if he actually rationalizes his actions in the Bayesian way. However, given that such a rationalization always exists, it seems a bit pedantic to insist that a decision maker should actually provide it.