If we had begun our reform efforts with a focus on how to make our economy more efficient and more stable, there are other questions we would have naturally asked; other questions we would have posed. Interestingly, there is some correspondence between these deficiencies in our reform efforts and the deficiencies in the models that we as economists often use in macroeconomics.
•First, the importance of credit
We would, for instance, have asked what the fundamental roles of the financial sector are, and how we can get it to perform those roles better. Clearly, one of the key roles is the allocation of capital and the provision of credit, especially to small and medium-sized enterprises, a function which it did not perform well before the crisis, and which arguably it is still not fulfilling well.
This might seem obvious. But a focus on the provision of credit has been at the centre neither of policy discourse nor of the standard macro-models. We have to shift our focus from money to credit. In any balance sheet, the two sides are usually going to be very highly correlated. But that is not always the case, particularly in the context of large economic perturbations. In these, we ought to be focusing on credit. I find it remarkable the extent to which there has been an inadequate examination in standard macro models of the nature of the credit mechanism. There is, of course, a large microeconomic literature on banking and credit, but for the most part, the insights of this literature have not been taken on board in standard macro-models …
As I have already noted, in the conventional models (and in the conventional wisdom) market economies were stable. And so it was perhaps not a surprise that fundamental questions about how to design more stable economic systems were seldom asked. We have already touched on several aspects of this: how to design economic systems that are less exposed to risk or that generate less volatility on their own.
One of the necessary reforms, but one not emphasised enough, is the need for more automatic stabilisers and fewer automatic destabilisers – not only in the financial sector, but throughout the economy. For instance, the movement from defined benefit to defined contribution systems may have led to a less stable economy …
Distribution matters as well – distribution among individuals, between households and firms, among households, and among firms. Traditionally, macroeconomics focused on certain aggregates, such as the average ratio of leverage to GDP. But that and other average numbers often don’t give a picture of the vulnerability of the economy.
In the case of the financial crisis, such numbers didn’t give us warning signs. Yet it was the fact that a large number of people at the bottom couldn’t make their debt payments that should have tipped us off that something was wrong …
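To see why aggregates can mislead in this way, here is a small, purely illustrative Python sketch (all numbers are hypothetical, not drawn from any actual data, and the 10% debt-service rate and 40% income threshold are arbitrary cut-offs): two economies with exactly the same aggregate debt-to-income ratio, one of which is far more fragile because the debt is concentrated on households with little income to service it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical household incomes (identical distribution in both economies)
income = rng.lognormal(mean=10.0, sigma=0.8, size=n)
total_debt = 0.6 * income.sum()          # both economies: aggregate debt/income = 0.6

# Economy A: debt spread in proportion to income
debt_a = 0.6 * income

# Economy B: the same total debt, but carried entirely by the poorest half
debt_b = np.zeros(n)
poorest_half = np.argsort(income)[: n // 2]
debt_b[poorest_half] = total_debt / len(poorest_half)

for name, debt in [("A (debt proportional to income)", debt_a),
                   ("B (debt concentrated at the bottom)", debt_b)]:
    aggregate_ratio = debt.sum() / income.sum()
    # Crude vulnerability measure: households whose debt service
    # (assumed to be 10% of the debt stock per year) exceeds 40% of income.
    vulnerable = np.mean(0.10 * debt > 0.40 * income)
    print(f"Economy {name}: debt/income = {aggregate_ratio:.2f}, "
          f"vulnerable households = {vulnerable:.1%}")
```

Both economies report the same headline ratio, but only the second has a large group of households one shock away from default – which is precisely the kind of information the aggregate throws away.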
•Fourth, policy frameworks
Flawed models not only lead to flawed policies, but also to flawed policy frameworks.
Should monetary policy focus just on short-term interest rates? In monetary policy, there is a tendency to think that the central bank should only intervene in the setting of the short-term interest rate, the belief being that ‘one intervention’ is better than many. But we have known at least since the work of Frank Ramsey, some eighty years ago, that focusing on a single instrument is not generally the best approach.
The advocates of the ‘single intervention’ approach argue that it is best, because it least distorts the economy. Of course, the reason we have monetary policy in the first place – the reason why government acts to intervene in the economy – is that we don’t believe that markets on their own will set the right short-term interest rate. If we did, we would just let free markets determine that interest rate. The odd thing is that while just about every central banker would agree we should intervene in the determination of that price, not everyone is so convinced that we should strategically intervene in others, even though we know from the general theory of taxation and the general theory of market intervention that intervening in just one price is not optimal.
Once we shift the focus of our analysis to credit, and explicitly introduce risk into the analysis, we become aware that we need to use multiple instruments. Indeed, in general, we want to use all the instruments at our disposal. Monetary economists often draw a division between macro-prudential, micro-prudential, and conventional monetary policy instruments. In our book Towards a New Paradigm in Monetary Economics, Bruce Greenwald and I argue that this distinction is artificial. The government needs to draw upon all of these instruments, in a coordinated way …
Of course, we cannot ‘correct’ every market failure. The very large ones, however – the macroeconomic failures – will always require our intervention. Bruce Greenwald and I have pointed out that markets are never Pareto efficient if information is imperfect, if there are asymmetries of information, or if risk markets are imperfect. And since these conditions are always satisfied, markets are never Pareto efficient. Recent research has highlighted the importance of these and other related constraints for macroeconomics – though again, the insights of this important work have yet to be adequately integrated either into mainstream macroeconomic models or into mainstream policy discussions.
•Fifth, price versus quantitative interventions
These theoretical insights also help us to understand why the old presumption among some economists that price interventions are preferable to quantity interventions is wrong. There are many circumstances in which quantity interventions lead to better economic performance.
A policy framework that has become popular in some circles argues that so long as there are as many instruments as there are objectives, the economic system is controllable, and the best way of managing the economy in such circumstances is to have an institution responsible for one target and one instrument. (In this view, central banks have one instrument – the interest rate – and one objective – inflation. We have already explained why limiting monetary policy to one instrument is wrong.)
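The ‘as many instruments as objectives’ idea criticized here is essentially Tinbergen’s classic targets-and-instruments result. A minimal sketch of that logic (my illustration, not Stiglitz’s own formalization): if the targets y depend linearly on the instruments x, say y = Ax + u, where A is the matrix of policy multipliers and u collects everything outside the policymaker’s control, and there are as many independent instruments as targets so that A is square and invertible, then any target vector y* can be hit exactly by setting x = A⁻¹(y* − u) – provided A is known and stable and u is fully observed.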
Drawing such a division may have advantages from an agency or bureaucratic perspective, but from the point of view of managing macroeconomic policy – focusing on growth, stability and distribution, in a world of uncertainty – it makes no sense. There has to be coordination across all the issues and among all the instruments that are at our disposal. There needs to be close coordination between monetary and fiscal policy. The natural equilibrium that would arise out of having different people controlling different instruments and focusing on different objectives is, in general, not anywhere near what is optimal in achieving overall societal objectives. Better coordination – and the use of more instruments – can, for instance, enhance economic stability.
In the economics of information, Hayek’s ‘The Use of Knowledge in Society’ (American Economic Review 1945) and Grossman & Stiglitz’s ‘On the Impossibility of Informationally Efficient Markets’ (American Economic Review 1980) are two classics. But whereas Hayek’s article is frequently invoked by economists of a new-Austrian persuasion, neoclassical economists rarely have anything to say about Grossman & Stiglitz’s article. I do not think that is a coincidence.
One of the most crucial assumptions made in orthodox economic theory is that the agents of the economy possess complete information at no cost. This is an assumption that heterodox economists have long called into question.
Neoclassical economists are, of course, aware that the assumption of perfect information is unrealistic in most contexts. They nevertheless defend its use in their formal models on the grounds that real economies, where information is not quite so perfect, do not differ in any essential way from their models. What the economics of information has shown, however, is that the results obtained in models built on perfect information are not robust. Even a small degree of informational imperfection has a decisive effect on the equilibrium of the economy. This had, admittedly, been demonstrated before, for instance by transaction-cost theory. Its representatives, however, tended to conclude that as long as these costs are taken into account in the analysis, the standard results remain largely intact. The economics of information shows convincingly that this is not the case. In area after area it has been shown that economic analysis becomes seriously misleading if one disregards asymmetries of information and the costs of acquiring it. The picture of markets (and of the possible need for public intervention) becomes fundamentally different from that of models built on the standard assumption of complete information.
The Grossman-Stiglitz paradox shows that if the market were efficient – if prices fully reflected the available information – no agent would have any incentive to acquire the information on which those prices are supposed to be based. If, on the other hand, no agent is informed, it would pay for an agent to acquire information. Consequently, market prices cannot incorporate all relevant information about the goods traded on the market. This is, of course, deeply disturbing to (as a rule market-apologetic) neoclassical economists. Hence the ‘silence’ surrounding the article and its paradox!
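The logic of the paradox can be put in compact form (a stylized rendering of the argument, not the full 1980 model): let c > 0 be the cost of becoming informed, and let π_I and π_U denote the gross trading profits of informed and uninformed traders. If prices fully reveal the information, then π_I = π_U, so net of the cost the informed earn π_I − c < π_U and nobody pays to become informed. But if nobody is informed, prices carry no information, and a single trader could earn π_I − c > π_U by acquiring it. Neither situation can be an equilibrium: informationally efficient prices destroy the very incentive to gather the information they are supposed to reflect.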
Grossman & Stiglitz – just as Frydman & Goldberg later do in Imperfect Knowledge Economics (Princeton University Press 2007) – take Lucas et consortes as their point of departure and point of reference. Their assessment of the informational paradigm on which rational expectations rest also coincides, as far as I can judge, entirely with my own. From the standpoint of relevance and realism it is nonsense on stilts. The Grossman-Stiglitz paradox is as powerful as an axe-blow at the neoclassical root. That is why neoclassical economists are so fond of ‘forgetting’ it.
Hayek argued – see for instance chapters 1 and 3 of Kunskap, konkurrens och rättvisa (Ratio 2003) – that markets and their price mechanisms only have a crucial role to play when information is not costless. This was one of the main ingredients of his critique of the idea of central planning, since costless information would in principle make planning and the market equivalent (as so often, the new-Austrian picture of markets is considerably more relevant and realistic than the various neoclassical Bourbaki constructions à la Debreu et consortes).
The crux of ‘efficient markets’ is – as Grossman & Stiglitz show so brilliantly – that, strictly theoretically, they can only exist when information is costless. When information is not free, prices cannot perfectly reflect the available information (nota bene – this holds whether or not asymmetries are present).
To my mind, the most interesting question raised by Grossman & Stiglitz is what use we have for theories built on ‘noise-free’ markets, when they are not only hopelessly unrealistic but also turn out to be theoretically inconsistent.
Rubbish is rubbish, even when it comes in pretty boxes.
A complete modeling system which yields definitive predictions (or at least multiple equilibria) requires the following conditions: given structures with fixed (or at least predictably random) interrelations between separable parts (e.g., economic agents) and predictable (or at least predictably random) outside influences. Such a system is … a ‘closed’ system. Such a system, correctly applied, promotes internal consistency but risks inconsistency with the nature of the economic system unless that too is closed …
An open system is not the opposite of a closed system, since there is a range of possibilities for openness, depending on which conditions are not met and to what degree … Deviating from a closed system, and thus from certainty or certainty equivalence, does not mean abandoning theory or formal models. On the contrary, Keynes was concerned to identify the logical grounds on which we habitually form beliefs, make judgments and take decisions (both as economists and as economic agents) in spite of uncertainty. The question was what view on probability would be logically justified, in relation to the evidence, within an open system …
Any formal model is a closed system. Variables are specified and identified as endogenous or exogenous, and relations are specified between them. This is a mechanism for separating off some aspect of an open-system reality for analysis. But, for consistency with the subject matter, any analytical closure needs to be justified on the grounds that, for the purposes of the analysis, it is not unreasonable to treat the variables as having a stable identity, for them to have stable interrelations and not to be subject to unanticipated influences from outside … But in applying such an analysis it is important then to consider what has been assumed away …
Keynes’s argument is that any formal model is bound to be an incomplete representation of an open-system reality … Models are inevitably partial representations, invoking closures which are both porous and provisional. They can only be approximated in reality and even then cannot be presumed to persist …
This approach is not the equivalent of rational expectations theory’s assumption that the economist’s expectations are formed with full knowledge of all interactions. The inconsistency of such an assumption with the real world is at the heart of Keynes’s philosophy …
This methodology explains why Keynes’s general theory did not take the form of a single large model, including formal microfoundations … It was not that Keynes lacked a microeconomic analysis, but rather that his study of individual behavior concluded that it was organic rather than atomistic.
What does concern me about my discipline … is that its current core — by which I mainly mean the so-called dynamic stochastic general equilibrium approach — has become so mesmerized with its own internal logic that it has begun to confuse the precision it has achieved about its own world with the precision that it has about the real one.
While it often makes sense to assume rational expectations for a limited application to isolate a particular mechanism that is distinct from the role of expectations formation, this assumption no longer makes sense once we assemble the whole model. Agents could be fully rational with respect to their local environments and everyday activities, but they are most probably nearly clueless with respect to the statistics about which current macroeconomic models expect them to have full information and rational information.
This issue is not one that can be addressed by adding a parameter capturing a little bit more risk aversion about macroeconomic, rather than local, phenomena. The reaction of human beings to the truly unknown is fundamentally different from the way they deal with the risks associated with a known situation and environment … In realistic, real-time settings, both economic agents and researchers have a very limited understanding of the mechanisms at work. This is an order-of-magnitude less knowledge than our core macroeconomic models currently assume, and hence it is highly likely that the optimal approximation paradigm is quite different from current workhorses, both for academic and policy work. In trying to add a degree of complexity to the current core models, by bringing in aspects of the periphery, we are simultaneously making the rationality assumptions behind that core approach less plausible.
The challenges are big, but macroeconomists can no longer continue playing internal games. The alternative of leaving all the important stuff to the “policy” types and informal commentators cannot be the right approach. I do not have the answer. But I suspect that whatever the solution ultimately is, we will accelerate our convergence to it, and reduce the damage we do along the transition, if we focus on reducing the extent of our pretense-of-knowledge syndrome.
Economics is a discipline with the avowed ambition to produce theory for the real world. But it fails in this ambition, Lars Pålsson Syll asserts in Chapter 12, at least as far as the dominant mainstream neoclassical economic theory is concerned. Overly confident in deductivistic Euclidian methodology, neoclassical economic theory lines up a series of mathematical models that display elaborate internal consistency but lack clear counterparts in the real world. Such models are at best unhelpful, if not outright harmful, and it is time for economic theory to take a critical realist perspective and explain economic life in depth rather than merely modeling it axiomatically.
The state of economic theory is not as bad as Pålsson Syll describes, Fredrik Hansen retorts in Chapter 13. Looking outside the mainstream neoclassical tradition, one can find numerous economic perspectives that are open to other disciplines and manifest growing interest in methodological matters. He is confident that theoretical and methodological pluralism will be able to refresh the debate on economic theory, particularly concerning the nature of realism in economic theory, a matter about which Pålsson Syll and Hansen clearly disagree.
In a truly wonderful essay – chapter three of Error and Inference (Cambridge University Press, 2010, eds. Deborah Mayo and Aris Spanos) – Alan Musgrave gives a forceful plaidoyer for scientific realism and inference to the best explanation:
For realists, the name of the scientific game is explaining phenomena, not just saving them. Realists typically invoke ‘inference to the best explanation’ or IBE …
IBE is a pattern of argument that is ubiquitous in science and in everyday life as well. van Fraassen has a homely example:
“I hear scratching in the wall, the patter of little feet at midnight, my cheese disappears – and I infer that a mouse has come to live with me. Not merely that these apparent signs of mousely presence will continue, not merely that all the observable phenomena will be as if there is a mouse, but that there really is a mouse.” (1980: 19-20)
Here, the mouse hypothesis is supposed to be the best explanation of the phenomena, the scratching in the wall, the patter of little feet, and the disappearing cheese.
What exactly is the inference in IBE, what are the premises, and what the conclusion? van Fraassen says “I infer that a mouse has come to live with me”. This suggests that the conclusion is “A mouse has come to live with me” and that the premises are statements about the scratching in the wall, etc. Generally, the premises are the things to be explained (the explanandum) and the conclusion is the thing that does the explaining (the explanans). But this suggestion is odd. Explanations are many and various, and it will be impossible to extract any general pattern of inference taking us from explanandum to explanans. Moreover, it is clear that inferences of this kind cannot be deductively valid ones, in which the truth of the premises guarantees the truth of the conclusion. For the conclusion, the explanans, goes beyond the premises, the explanandum. In the standard deductive model of explanation, we infer the explanandum from the explanans, not the other way around – we do not deduce the explanatory hypothesis from the phenomena, rather we deduce the phenomena from the explanatory hypothesis …
The intellectual ancestor of IBE is Peirce’s abduction, and here we find a different pattern:
The surprising fact, C, is observed.
But if A were true, C would be a matter of course.
Hence, … A is true.
(C. S. Peirce, 1931-58, Vol. 5: 189)
Here the second premise is a fancy way of saying “A explains C”. Notice that the explanatory hypothesis A figures in this second premise as well as in the conclusion. The argument as a whole does not generate the explanans out of the explanandum. Rather, it seeks to justify the explanatory hypothesis …
Abduction is deductively invalid … IBE attempts to improve upon abduction by requiring that the explanation is the best explanation that we have. It goes like this:
F is a fact.
Hypothesis H explains F.
No available competing hypothesis explains F as well as H does.
Therefore, H is true
(William Lycan, 1985: 138)
This is better than abduction, but not much better. It is also deductively invalid …
There is a way to rescue abduction and IBE. We can validate them without adding missing premises that are obviously false – premises that would merely trade obvious invalidity for equally obvious unsoundness. Peirce provided the clue to this. Peirce’s original abductive scheme was not quite what we have considered so far. Peirce’s original scheme went like this:
The surprising fact, C, is observed.
But if A were true, C would be a matter of course.
Hence, there is reason to suspect that A is true.
(C. S. Peirce, 1931-58, Vol. 5: 189)
This is obviously invalid, but to repair it we need the missing premise “There is reason to suspect that any explanation of a surprising fact is true”. This missing premise is, I suggest, true. After all, the epistemic modifier “There is reason to suspect that …” weakens the claims considerably. In particular, “There is reason to suspect that A is true” can be true even though A is false. If the missing premise is true, then instances of the abductive scheme may be both deductively valid and sound.
IBE can be rescued in a similar way. I even suggest a stronger epistemic modifier, not “There is reason to suspect that …” but rather “There is reason to believe (tentatively) that …” or, equivalently, “It is reasonable to believe (tentatively) that …” What results, with the missing premise spelled out, is:
It is reasonable to believe that the best available explanation of any fact is true.
F is a fact.
Hypothesis H explains F.
No available competing hypothesis explains F as well as H does.
Therefore, it is reasonable to believe that H is true.
This scheme is valid and instances of it might well be sound. Inferences of this kind are employed in the common affairs of life, in detective stories, and in the sciences.
Of course, to establish that any such inference is sound, the ‘explanationist’ owes us an account of when a hypothesis explains a fact, and of when one hypothesis explains a fact better than another hypothesis does. If one hypothesis yields only a circular explanation and another does not, the latter is better than the former. If one hypothesis has been tested and refuted and another has not, the latter is better than the former. These are controversial issues, to which I shall return. But they are not the most controversial issue – that concerns the major premise. Most philosophers think that the scheme is unsound because this major premise is false, whatever account we can give of explanation and of when one explanation is better than another. So let me assume that the explanationist can deliver on the promises just mentioned, and focus on this major objection.
People object that the best available explanation might be false. Quite so – and so what? It goes without saying that any explanation might be false, in the sense that it is not necessarily true. It is absurd to suppose that the only things we can reasonably believe are necessary truths.
What if the best explanation not only might be false, but actually is false. Can it ever be reasonable to believe a falsehood? Of course it can. Suppose van Fraassen’s mouse explanation is false, that a mouse is not responsible for the scratching, the patter of little feet, and the disappearing cheese. Still, it is reasonable to believe it, given that it is our best explanation of those phenomena. Of course, if we find out that the mouse explanation is false, it is no longer reasonable to believe it. But what we find out is that what we believed was wrong, not that it was wrong or unreasonable for us to have believed it.
People object that being the best available explanation of a fact does not prove something to be true or even probable. Quite so – and again, so what? The explanationist principle – “It is reasonable to believe that the best available explanation of any fact is true” – means that it is reasonable to believe or think true things that have not been shown to be true or probable (that is, more likely true than not).
Oxford professor John Kay has a very interesting article on why economists have tended to go astray in their – as my old mentor Erik Dahmén used to say – tool sheds:
Consistency and rigour are features of a deductive approach, which draws conclusions from a group of axioms – and whose empirical relevance depends entirely on the universal validity of the axioms. The only descriptions that fully meet the requirements of consistency and rigour are completely artificial worlds, such as the “plug-and-play” environments of DSGE – or the Grand Theft Auto computer game.
For many people, deductive reasoning is the mark of science: induction – in which the argument is derived from the subject matter – is the characteristic method of history or literary criticism. But this is an artificial, exaggerated distinction. Scientific progress – not just in applied subjects such as engineering and medicine but also in more theoretical subjects including physics – is frequently the result of observation that something does work, which runs far ahead of any understanding of why it works.
Not within the economics profession. There, deductive reasoning based on logical inference from a specific set of a priori deductions is “exactly the right way to do things”. What is absurd is not the use of the deductive method but the claim to exclusivity made for it. This debate is not simply about mathematics versus poetry. Deductive reasoning necessarily draws on mathematics and formal logic: inductive reasoning, based on experience and above all careful observation, will often make use of statistics and mathematics.
Economics is not a technique in search of problems but a set of problems in need of solution. Such problems are varied and the solutions will inevitably be eclectic. Such pragmatic thinking requires not just deductive logic but an understanding of the processes of belief formation, of anthropology, psychology and organisational behaviour, and meticulous observation of what people, businesses and governments do.
The belief that models are not just useful tools but are capable of yielding comprehensive and universal descriptions of the world blinded proponents to realities that had been staring them in the face. That blindness made a big contribution to our present crisis, and conditions our confused responses to it. Economists – in government agencies as well as universities – were obsessively playing Grand Theft Auto while the world around them was falling apart.
The article is essential reading for all those who want to understand why mainstream – neoclassical – economists have actively contributed to causing today’s economic crisis rather than to solving it.
Perhaps this becomes easier to grasp when one considers what one of its main proponents today – Robert Lucas – maintained as far back as 2003:
My thesis in this lecture is that macroeconomics in this original sense has succeeded: its central problem of depression-prevention has been solved, for all practical purposes, and has in fact been solved for many decades.
And this comes from an economist who has built his whole career on the assumption that people are hyper-rational “robot imitations” with rational expectations and a next-to-perfect ability to process information. Mirabile dictu!
One of the most important tasks of social sciences is to explain the events, processes, and structures that take place and act in society. But the researcher cannot stop at this. As a consequence of the relations and connections that the researcher finds, a will and demand arise for critical reflection on the findings. To show, for instance, that unemployment depends on rigid social institutions or on adaptation to European aspirations of economic integration constitutes at the same time a critique of those conditions. It also entails an implicit critique of other explanations that one can show to be built on false beliefs. The researcher can never be satisfied with establishing that false beliefs exist but must go on to seek an explanation for why they exist. What is it that maintains and reproduces them? To show that something causes false beliefs – and to explain why – constitutes at the same time a critique of that thing.
This I think is something particular to the humanities and social sciences. There is no full equivalent in the natural sciences, since the objects of their study are not fundamentally created by human beings in the same sense as the objects of study in social sciences. We do not criticize apples for falling to earth in accordance with the law of gravitation.
The explanatory critique that constitutes all good social science thus has repercussions on the reflective person in society. To digest the explanations and understandings that social sciences can provide means a simultaneous questioning and critique of one’s self-understanding and the actions and attitudes it gives rise to. Science can play an important emancipating role in this way. Human beings can fulfill and develop themselves only if they do not base their thoughts and actions on false beliefs about reality. Fulfillment may also require changing fundamental structures of society. Understanding of the need for this change may issue from various sources like everyday praxis and reflection as well as from science.
Explanations of societal phenomena must be subject to criticism, and this criticism must be an essential part of the task of social science. Social science has to be an explanatory critique. The researcher’s explanations have to constitute a critical attitude toward the very object of research, society. Hopefully, the critique may result in proposals for how the institutions and structures of society can be constructed. The social scientist has a responsibility to try to elucidate possible alternatives to existing institutions and structures.
In a time when scientific relativism is expanding, it is important to insist that science not be reduced to a purely discursive level. We have to maintain the Enlightenment tradition of thinking of reality as in principle independent of our views of it, and of the main task of science as studying the structure of this reality. Perhaps the most important contribution a researcher can make is to reveal what this reality that is the object of science actually looks like.
Science is made possible by the fact that there are structures that are durable and are independent of our knowledge or beliefs about them. There exists a reality beyond our theories and concepts of it. It is this independent reality that our theories in some way deal with. Contrary to positivism, I cannot see that the main task of science is to detect event-regularities between observed facts. Rather, that task must be conceived as identifying the underlying structure and forces that produce the observed events.
The problem with positivist social science is not that it gives the wrong answers, but rather that in a strict sense it does not give answers at all. Its explanatory models presuppose that the social reality is “closed,” and since social reality is fundamentally “open,” models of that kind cannot explain anything of what happens in such a universe. Positivist social science has to postulate closed conditions to make its models operational and then – totally unrealistically – impute these closed conditions to society’s real structure.
In the face of the kind of methodological individualism and rational choice theory that dominate positivist social science, we have to admit that even if knowledge of the aspirations and intentions of individuals is a necessary prerequisite for explaining social events, it is far from sufficient. Even the most elementary “rational” actions in society presuppose the existence of social forms that it is not possible to reduce to the intentions of individuals.
The overarching flaw with methodological individualism and rational choice theory is basically that they reduce social explanations to purportedly individual characteristics. But many of the characteristics and actions of the individual originate in and are made possible only through society and its relations. Society is not reducible to individuals, since the social characteristics, forces, and actions of the individual are determined by pre-existing social structures and positions. Even though society is not a volitional individual, and the individual is not an entity given outside of society, the individual (actor) and the society (structure) have to be kept analytically distinct. They are tied together through the individual’s reproduction and transformation of already given social structures.
With a non-reductionist approach we avoid both determinism and voluntarism. For although the individual in society is formed and influenced by social structures that he does not construct himself, he can as an individual influence and change the given structures in another direction through his own actions. In society the individual is situated in roles or social positions that give limited freedom of action (through conventions, norms, material restrictions, etc.), but at the same time there is no necessity in principle that we must blindly follow or accept these limitations. However, as long as social structures and positions are reproduced (rather than transformed), the actions of the individual will have a tendency to go in a certain direction.
What makes knowledge in social sciences possible is the fact that society consists of social structures and positions that influence the individuals of society, partly through their being the necessary prerequisite for the actions of individuals but also because they dispose individuals to act (within a given structure) in a certain way. These structures constitute the “deep structure” of society.
We have to acknowledge the ontological fact that the world is mind-independent. This does not in any way reduce the epistemological fact that we can only know what the world is like from within our languages, theories, or discourses. But that the world is epistemologically mediated by theories does not mean that it is the product of them.
Our observations and theories are concept-dependent without therefore necessarily being concept-determined. There is a reality existing independently of our knowledge and theories of it. Although we cannot apprehend it without using our concepts and theories, these are not the same as reality itself. Reality and our concepts of it are not identical. Social science is made possible by existing structures and relations in society that are continually reproduced and transformed by different actors.
Social science is relational. It studies and uncovers the social structures in which individuals participate and position themselves. It is these relations that have enough continuity, autonomy, and causal power to endure in society and be the real object of knowledge in social science. It is also only in their capacity as social relations and positions that individuals can be given power or resources (or the lack of them). To be a chieftain, a capital-owner, or a slave is not an individual property of an individual, but can come about only when individuals are integral parts of certain social structures and positions. Social relations and contexts cannot be reduced to individual phenomena – just as a cheque presupposes a banking system and tribe-members presuppose a tribe.
Explanations and predictions of social phenomena require theory constructions. Just looking for correlations between events is not enough. One has to get under the surface and see the deeper underlying structures and mechanisms that essentially constitute the social system.
Contrary to the well-known symmetry hypothesis, I would also maintain that explanation and prediction are not the same. To explain something is to uncover the generative mechanisms behind an event, while prediction only concerns actual events and does not have to say anything about the underlying causes of the events in question. The barometer may be used for predicting today’s weather changes. But these predictions are not explanatory, since they say nothing of the underlying causes.
Methodologically, this implies that the basic question to pose when studying social relations and events is what the fundamental relations are without which they would cease to exist. The answer will point to causal mechanisms and tendencies that act in the concrete contexts we study. Whether these mechanisms are activated, and what effects they then have, cannot be predicted, since this depends on accidental and variable relations. Every social phenomenon is determined by a host of both necessary and contingent relations, and it is impossible in practice to have complete knowledge of these constantly changing relations. That is also why we can never confidently predict them. What we can do, through learning about the mechanisms of the structures of society, is to identify the driving forces behind them, thereby making it possible to indicate the direction in which things tend to develop.
If we want the knowledge we produce to have practical relevance, the knowledge we aspire to and the methods we use have to be adapted to our object of study. In social sciences – such as economics, history, or anthropology – we will never reach complete explanations. Instead we have to aim for satisfactory and adequate explanations.
As is well known, there is no unequivocal criterion for what should be considered a satisfactory explanation. All explanations (with the possible exception of those in mathematics and logic) are fragmentary and incomplete; self-evident relations and conditions are often left out so that one can concentrate on the nodal points. Explanations must, however, be real in the sense that they “correspond” to reality and are capable of being used.
The relevance of an explanation can be judged only by reference to a given aspect of a problem. An explanation is then relevant if, for example, it can point out the generative mechanisms that rule a phenomenon or if it can illuminate the aspect one is concerned with. To be relevant from the explanatory viewpoint, the adduced theory has to provide a good basis for believing that the phenomenon to be explained really does or did take place. One has to be able to say: “That’s right! That explains it. Now I understand why it happened.”
While positivism tries to develop a general a priori criterion for evaluation of scientific explanations, it would be better to realize that all we can try for is adequate explanations, which it is not possible to disconnect from the specific, contingent circumstances that are always incident to what is to be explained. I think we have to be modest and acknowledge that our models and theories are time-space relative.
Besides being an aspect of the situation in which the event takes place, an explanatory factor ought also to be causally effective; that is, one has to consider whether the event would have taken place even if the factor did not exist. And it also has to be causally deep. If event e would have happened without factor f, then this factor is not deep enough. Triggering factors, for instance, often do not have this depth. And by contrasting different factors with each other we may find that some are irrelevant (without causal depth).
Without the requirement of depth, explanations most often do not have practical significance. This requirement leads us to the nodal point against which we have to take measures to obtain changes. If we search for and find fundamental structural causes for unemployment, we can hopefully also take effective measures to remedy it.
Scientific theories (ought to) do more than just describe event-regularities. They also analyze and describe the mechanisms, structures, and processes that exist. They try to establish what relations exist between these different phenomena and the systematic forces that operate within the different realms of reality.
Explanations are important within science, since the choice between different theories hinges in large part on their explanatory powers. The most reasonable explanation for one theory’s having greater explanatory power than others is that the mechanisms, causal forces, structures, and processes it talks of, really do exist.
When studying the relation between different factors, a social scientist is usually prepared to admit the existence of a reciprocal interdependence between them. One is seldom prepared, on the other hand, to investigate whether this interdependence might follow from the existence of an underlying causal structure. This is really strange. The actual configurations of a river, for instance, depend of course on many factors. But one cannot escape the fact that it flows downhill and that this fundamental fact influences and regulates the other causal factors. Not to come to grips with the underlying causal power that the direction of the current constitutes can only be misleading and confusing.
All explanations of a phenomenon have preconditions that limit the number of alternative explanations. These preconditions significantly influence the ability of the different potential explanations to really explain anything. If we have a system where underlying structural factors control the functional relations between the parts of the system, a satisfactory explanation can never disregard this precondition. Explanations that take the parts (micro-explanations) as their point of departure may well describe how and through which mechanisms something takes place, but without the structure we cannot explain why it happens.
But could one not just say that different explanations – such as individual and structural – are different, without a need to grade them as better or worse? I think not. That would be too relativistic. For although we are dealing with two different kinds of explanations that answer totally different questions, I would say that the structural most often answers the more relevant questions. In social sciences we often search for explanations of events because we want to be able to avoid or change certain outcomes. Giving individualistic explanations does not make this possible, since they only state sufficient but not necessary conditions. Without knowing the latter we cannot prevent or avoid these undesirable social phenomena.
All kinds of explanations in empirical sciences are pragmatic. We cannot just say that one type is false and another is true. Explanations have a function to fulfill, and some are better and others worse at this. Even if individual explanations can show the existence of a pattern, the pattern as such does not constitute an explanation. We want to be able to explain the pattern per se, and for that we usually require a structural explanation. By studying labor-market statistics, for example, we may establish that not everyone who is at the disposal of the labor market has a job. We might even notice a pattern: people in rural areas, old people, and women are more often jobless. But with these data we cannot explain why this is so, nor that a certain amount of unemployment may even be a functional requisite for the market economy. The individualistic frame of explanation gives a false picture of what kind of causal relations are at hand, and a fortiori a false picture of what needs to be done to enable a change. For that, a structural explanation of the kind mentioned above is required.
The dogma that macroeconomics must have ‘rigorous microfoundations’ requires us to adopt a micro-reduction strategy that has so often failed when attempted in other areas of scientific research. If it were to be made compulsory for economists it would bring about the euthanasia of macroeconomic theory …
I begin the book by taking issue with the spatial metaphor on which the case for microfoundations rests, and argue that in economics the relationship between macro and micro should be seen as a horizontal, not a vertical, one. The photograph on the cover makes the point very nicely: a bridge is NOT a foundation …
In the book I draw on an extensive literature in the philosophy of science to show how micro-reduction has failed, over and over again, in other areas of scientific thought. I make special reference to the unsuccessful ‘methodological individualism’ project in the social sciences in the 1950s and 1960s, and to Richard Dawkins’s more recent ‘hierarchical reductionism’, an abortive attempt to reduce the life sciences to propositions about the ‘selfish gene’. (I have stolen Dawkins’s title, but definitely not his ideas). I then explore the emergence of support for microfoundations in the literature of macroeconomics since the 1930s, distinguishing authors who supported the dogma without using the term from those who used the term without intending to support the dogma. I have a lot to say about the opponents of microfoundations, inside and (especially) outside the mainstream: Post Keynesians, institutionalists, Old Keynesians and even some Austrians. On this issue there are some surprises: Robert Solow is on the side of the angels, but so too is Milton Friedman, while many heterodox economists who really should have known better have instead given inadvertent comfort to the mainstream enemy. I show that most economic methodologists have been critical of microfoundations, and for very good reasons, but they have been very largely ignored by economic theorists of all persuasions. I conclude the book by reiterating the case for a (semi-)autonomous science of macroeconomics, whose practitioners will cooperate with their microeconomist colleagues without being subservient to them.
From the standpoint of the philosophy of science, it is interesting to note that many economists and other social scientists appeal to a requirement that, for explanations to count as scientific, the individual case must be capable of being “subsumed under a general law”. The basic principle invoked is often a general law of the form “if A then B”, together with the claim that if, in the individual case, one can show that both A and B are present, then B has been “explained”.
This positivist-inductivist view of science is, however, fundamentally untenable. Let me explain why.
According to a positivist-inductivist view of science, the knowledge that science possesses is proven knowledge. By starting from completely presuppositionless observations, an “unprejudiced scientific observer” can formulate observation statements from which scientific theories and laws can be derived. With the help of the principle of induction it becomes possible to move from these singular observation statements to universal statements in the form of laws and theories referring to properties that hold always and everywhere. From these laws and theories science can then derive various consequences with whose help it can explain and predict what happens. Through logical deduction, statements can be derived from other statements. The logic of research follows the schema observation – induction – deduction.
Even in fairly uncomplicated cases the scientist has to carry out experiments in order to justify the inductions by means of which he establishes his scientific theories and laws. Experiments mean – as Francis Bacon so vividly put it – putting nature on the rack and forcing it to answer our questions. With the help of a set of statements carefully describing the circumstances of the experiment – the initial conditions – and the scientific laws, the scientist can deduce statements that explain or predict the phenomenon under investigation.
The hypothetico-deductive method of scientific explanation and prediction can be described in general terms as follows:
1 Laws and theories
2 Initial conditions
3 Explanations and predictions
According to one of the foremost advocates of the hypothetico-deductive method – Carl Hempel – all scientific explanations have this form, which can also be expressed according to the schema below:
All A are B   Premise 1
a is A   Premise 2
a is B   Conclusion
As an example we can take the following everyday phenomenon:
Water heated to 100 degrees Celsius boils
This pot of water is heated to 100 degrees Celsius
This pot of water boils
The problem with the hypothetico-deductive method lies not so much in premise 2 or in the conclusion, but in the hypothesis itself, premise 1. It is this that has to be proved correct, and this is where the inductive procedure comes in.
The most obvious weakness of the hypothetico-deductive method is the principle of induction itself. The most common justification of it runs as follows:
The principle of induction worked on occasion 1
The principle of induction worked on occasion 2
The principle of induction worked on occasion n
The principle of induction always works
This is dubious, however, since the “proof” uses induction to justify induction. One cannot use singular statements about the validity of the principle of induction to derive a universal statement about its validity.
Induction is supposed to play two roles. It is meant both to make generalization possible and to provide proof that the conclusions are correct. As the problem of induction shows, induction cannot manage both of these tasks. It can strengthen the probability of the conclusions (provided the principle of induction is correct, which, however, cannot be proved without ending up in circular reasoning), but it does not show that they are necessarily true.
Another frequently noted weakness of the hypothetico-deductive method is that theories always precede observation statements and experiments, and that it is therefore wrong to claim that science begins with observations and experiments. To this should be added that observation statements and experiments cannot be assumed to be unproblematically reliable, and that testing their validity requires appeal to theory. That the theories in turn may be unreliable is not primarily remedied by more observations and experiments, but by other and better theories. One can also object that induction in no way enables us to gain knowledge of the deeper structures and mechanisms of reality, but only of empirical generalizations and regularities. In science it is usually the case that the explanation of events at one level is to be found in causes at another, deeper, level. The inductivist view of science ends up describing the main task of science as stating how something takes place, whereas other philosophies of science hold that the cardinal task of science must be to explain why it takes place.
As a result of the problems set out above, more moderate empiricists have come to reason that, since there is generally no logical procedure for discovering a law or theory, one simply starts from laws and theories and deduces from them a series of statements that serve as explanations or predictions. Instead of investigating how the laws and theories of science have been arrived at, one tries to clarify what a scientific explanation and a scientific prediction are, what role theories and models play in them, and how they are to be evaluated.
In the positivist (hypothetico-deductive, deductive-nomological) model of explanation, explanation means the subsumption or derivation of specific phenomena from universal regularities. To explain a phenomenon (the explanandum) is the same as deducing a description of it from a set of premises and universal laws of the type “If A, then B” (the explanans). To explain simply means being able to subsume something under a determinate law-like regularity, which is why the approach is also sometimes called the “covering-law model”. The theories, however, are not meant to be used to explain specific individual phenomena, but to explain the universal regularities that enter into a hypothetico-deductive explanation. [There are problems with this conception even within the natural sciences. Many of the laws of natural science do not really say anything about what things do, but about what they tend to do. This is largely because the laws describe the behaviour of the various parts rather than the whole phenomenon as such (except possibly in experimental situations). And many of the laws of natural science do not really apply to real entities, but only to fictional ones. Often this is a consequence of the use of mathematics within the particular science, and it means that its laws can only be exemplified in models (and not in reality).] The positivist model of explanation also exists in a weaker variant: the probabilistic variant, according to which to explain means, in principle, to show that the probability of an event B is very high if event A occurs. In the social sciences this variant dominates. From a methodological point of view, this probabilistic relativization of the positivist approach to explanation makes little difference.
A consequence of accepting the hypothetico-deductive model of explanation is usually that one also accepts the so-called symmetry thesis. According to this thesis, the only difference between prediction and explanation is that in the former the explanans is assumed to be known and one tries to make a prediction, while in the latter the explanandum is assumed to be known and one tries to find initial conditions and laws from which the phenomenon under investigation can be derived.
One problem with the symmetry thesis, however, is that it does not take into account that causes can be confused with correlations. That the stork turns up at the same time as the human babies does not constitute an explanation of how children come about.
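The stork example is easy to reproduce in a toy simulation. Below is a purely hypothetical Python sketch in which a common factor – how rural a region is – drives both the number of storks and the number of births, producing a strong correlation although, by construction, neither variable causes the other.

```python
import numpy as np

rng = np.random.default_rng(42)
n_regions = 500

# Hypothetical confounder: degree of rurality (0 = urban, 1 = rural)
rurality = rng.uniform(0.0, 1.0, n_regions)

# Storks and births both depend on rurality, but not on each other
storks = 20.0 * rurality + rng.normal(0.0, 2.0, n_regions)
births = 10.0 + 15.0 * rurality + rng.normal(0.0, 2.0, n_regions)

correlation = np.corrcoef(storks, births)[0, 1]
print(f"Correlation between storks and births: {correlation:.2f}")
# Prints a strong positive correlation (around 0.85 with this seed),
# which would let us 'predict' births from storks while explaining
# nothing whatsoever about how children come about.
```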
Nor does the symmetry thesis take into account that causes can be sufficient but not necessary. That an individual with cancer gets run over does not make the cancer the cause of death. The cancer could have been the true explanation of the individual’s death. But even if we could construct a medical law – in accordance with the deductivist model – saying that individuals with the type of cancer in question will die from it, the law still does not explain this individual’s death. The thesis is therefore simply not correct.
Finding a pattern is not the same as explaining something. If you ask why the bus is late and are told that it usually is, you have not been given an acceptable explanation. Ontology and natural necessity have to be part of a relevant answer, at least if what one seeks in an explanation is something more than “constant conjunctions of events”.
The original idea behind the positivist model of explanation was that it would provide a complete clarification of what an explanation is, show that an explanation failing to meet its requirements was in fact a pseudo-explanation, provide a method for testing explanations, and show that explanations in accordance with the model are the goal of science. All of these claims can obviously be questioned on good grounds.
One important reason why this model has had such an impact in science is that it seemed to make it possible to explain things without having to use “metaphysical” causal concepts. Many scientists regard causality as a problematic concept that is best avoided. Simple, observable quantities are supposed to suffice. The problem is that specifying these quantities and their possible correlations does not explain anything at all. That union representatives often appear in grey jackets and employers’ representatives in pinstriped suits does not explain why youth unemployment in Sweden is so high today. What is missing in such “explanations” is the necessary adequacy, relevance and causal depth without which science risks degenerating into empty science fiction and model-play for the play’s own sake.
Many social scientists seem convinced that, in order to count as science, research has to apply some variant of the hypothetico-deductive method. Out of reality’s complicated swirl of facts and events, a few shared law-like correlations are to be extracted and made to serve as explanations. Within parts of social science, this ambition to reduce explanations of social phenomena to a few general principles or laws has been an important driving force. With the help of a few general assumptions the aim is to explain what the whole macro-phenomenon we call society amounts to. Unfortunately, no really tenable arguments are given for why the fact that a theory can explain different phenomena in a unified way should be a decisive reason for accepting or preferring it. Unification and adequacy are not the same thing.
The claimed strength of a social experiment, relative to non-experimental methods, is that few assumptions are required to establish its internal validity in identifying a project’s impact. The identification is not assumption-free. People are (typically and thankfully) free agents who make purposive choices about whether or not they should take up an assigned intervention. As is well understood by the randomistas, one needs to correct for such selective compliance … The randomized assignment is assumed to only affect outcomes through treatment status (the “exclusion restriction”).
There is another, more troubling, assumption just under the surface. Inferences are muddied by the presence of some latent factor—unobserved by the evaluator but known to the participant—that influences the individual-specific impact of the program in question … Then the standard instrumental variable method for identifying [the average treatment effect on the treated] is no longer valid, even when the instrumental variable is a randomized assignment … Most social experiments in practice make the implicit and implausible assumption that the program has the same impact for everyone.
While internal validity … is the claimed strength of an experiment, its acknowledged weakness is external validity—the ability to learn from an evaluation about how the specific intervention will work in other settings and at larger scales. The randomistas see themselves as the guys with the lab coats—the scientists—while other types, the “policy analysts,” worry about things like external validity. Yet it is hard to argue that external validity is less important than internal validity when trying to enhance development effectiveness against poverty; nor is external validity any less legitimate as a topic for scientific inquiry.
In this video, science philosopher Nancy Cartwright explains why Randomized Controlled Trials (RCTs) are not at all the “gold standard” they have lately often been portrayed as. As yours truly has repeatedly argued on this blog (e.g. here and here), RCTs usually do not provide evidence that their results are exportable to other target systems. The almost religious conviction with which their proponents embrace them cannot hide the fact that RCTs cannot be taken for granted to give generalizable results. That something works somewhere is no guarantee that it will work for us, or even that it works generally.
When it comes to questions of causality, randomized controlled trials (RCTs) are nowadays considered some kind of “gold standard” in the social sciences and in policy-making. Everything has to be “evidence-based,” and the evidence preferably has to come from randomized experiments.
But randomization is basically – just as e.g. econometrics – a deductive method. Given warranted assumptions (manipulability, transitivity, separability, additivity, linearity, etc.), this method delivers deductive inferences. The problem, of course, is that we can never completely know when the assumptions are warranted, and a fortiori we can never fully justify our causal conclusions. Although randomization may contribute to controlling for “confounding,” it does not guarantee it, since genuine randomness presupposes infinite experimentation and we know all real experimentation is finite. Even if randomization may help to establish average causal effects, it says nothing of individual effects unless homogeneity is added to the list of assumptions. Real target systems are seldom epistemically isomorphic to our axiomatic-deductive models/systems, and even if they were, we would still have to argue for the external validity of conclusions reached from within these epistemically convenient models/systems. Causal evidence generated by randomization procedures may be valid in “closed” models, but what we usually are interested in is causal evidence in the real target system we happen to live in.
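To illustrate the last point about average versus individual effects, here is a minimal simulation sketch (Python, with stipulated numbers, not any actual trial): the randomized difference in means recovers the average treatment effect quite well, while the individual effects it is silent about range from clearly harmful to strongly beneficial.

```python
# Illustration only: randomization identifies an *average* effect even when
# individual treatment effects are strongly heterogeneous.
import numpy as np

rng = np.random.default_rng(7)
n = 10_000

baseline = rng.normal(50, 10, n)              # outcome each person would have untreated
tau = rng.normal(2, 8, n)                     # individual treatment effects (heterogeneous)
treated = rng.integers(0, 2, n).astype(bool)  # randomized assignment

outcome = baseline + np.where(treated, tau, 0.0)

ate_hat = outcome[treated].mean() - outcome[~treated].mean()
print(f"estimated average effect: {ate_hat:5.2f} (true mean effect = {tau.mean():.2f})")
print(f"share made worse off    : {(tau < 0).mean():.0%}")
print(f"individual effects, 5th-95th percentile: "
      f"{np.percentile(tau, 5):.1f} to {np.percentile(tau, 95):.1f}")
```

The difference in means comes out close to the true average effect of about 2, but roughly four in ten simulated individuals are made worse off by the treatment – information the experiment by itself does not deliver unless homogeneity is simply assumed.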
So RCTs are not at all the “gold standard” they have lately often been portrayed as. They usually do not provide evidence that their results are exportable to other target systems, and they cannot be taken for granted to give generalizable results. That something works somewhere is no guarantee that it will work for us, or even that it works generally.
Even though I can present evidence for being able to sharpen my pencils with Rube Goldberg’s ingenious construction – mainly because flying kites in my windy hometown (Lund, Sweden) is no great challenge – it does not come with a warranted export license. Most people would probably find ordinary pencil sharpeners more efficacious.
Probabilistic reasoning in science – especially Bayesianism – reduces questions of rationality to questions of internal consistency (coherence) of beliefs. But even granted this questionable reductionism, it is not self-evident that rational agents really have to be probabilistically consistent. There is no strong warrant for believing so. Rather, there is strong evidence that we run into huge problems if we let probabilistic reasoning become the dominant method for doing research in the social sciences on problems that involve risk and uncertainty.
In many of the situations that are relevant to economics, one could argue that there is simply not enough adequate and relevant information to ground beliefs of a probabilistic kind, and that in those situations it is not really possible, in any relevant way, to represent an individual’s beliefs in a single probability measure.
Say you have come to learn (based on your own experience and tons of data) that the probability of you becoming unemployed in Sweden is 10%. Having moved to another country (where you have no experience of your own and no data), you have no information on unemployment and a fortiori nothing on which to base any probability estimate. A Bayesian would, however, argue that you would have to assign probabilities to the mutually exclusive alternative outcomes, and that these have to add up to 1 if you are rational. That is, in this case – and based on symmetry – a rational individual would have to assign probability 50% to becoming unemployed and 50% to becoming employed.
That feels intuitively wrong, though, and I guess most people would agree. Bayesianism cannot distinguish symmetry-based probabilities grounded in information from symmetry-based probabilities grounded in an absence of information. In these kinds of situations most of us would rather say that it is simply irrational to be a Bayesian, and better instead to admit that we “simply do not know,” or that we feel ambiguous and undecided. Arbitrary and ungrounded probability claims are more irrational than being undecided in the face of genuine uncertainty, so if there is not sufficient information to ground a probability distribution it is better to acknowledge that simpliciter, rather than pretending to possess a certitude that we simply do not possess.
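A small numerical sketch (Python, with stipulated data) of the distinction that coherence alone cannot register: two probability assessments may be equally “consistent” as single numbers, yet one rests on a mountain of evidence and the other on none at all, and fresh information consequently moves them very differently.

```python
# Illustration only: the same kind of single-number probability can rest on
# very different amounts of evidence (simple Beta-Bernoulli updating).
from scipy.stats import beta

# Grounded in data: say 500 observed unemployment spells in 5,000 person-years.
informed = beta(1 + 500, 1 + 4500)
# Grounded in nothing: a uniform prior, i.e. "symmetry" from sheer ignorance.
ignorant = beta(1, 1)

for name, dist in [("informed", informed), ("ignorant", ignorant)]:
    print(f"{name}: mean = {dist.mean():.3f}, sd = {dist.std():.3f}")

# One additional observed unemployment spell barely moves the first estimate
# but swings the second one wildly.
print(f"after one more observation -> informed: {beta(1 + 501, 1 + 4500).mean():.3f}, "
      f"ignorant: {beta(1 + 1, 1).mean():.3f}")
```

Both deliver a definite number, but only the first carries any evidential weight; the second is symmetry dressed up as knowledge, which is exactly what makes mechanically assigning it seem so irrational.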
I think this critique of Bayesianism is in accordance with the views of Keynes’ A Treatise on Probability (1921) and General Theory (1937). According to Keynes, we live in a world permeated by unmeasurable uncertainty – not quantifiable stochastic risk – which often forces us to make decisions based on anything but rational expectations. Sometimes we “simply do not know.” Keynes would not have accepted the view of Bayesian economists, according to whom expectations “tend to be distributed, for the same information set, about the prediction of the theory.” Keynes, rather, thinks that we base our expectations on the confidence or “weight” we put on different events and alternatives. To Keynes, expectations are a question of weighing probabilities by “degrees of belief”, beliefs that have precious little to do with the kind of stochastic probabilistic calculations made by the rational agents modeled by probabilistically reasoning Bayesian economists.
In an interesting article on his blog, John Kay shows that these strictures on probabilistic-reductionist reasoning apply not only to everyday life and science, but also to the law:
English law recognises two principal standards of proof. The criminal test is that a charge must be “beyond reasonable doubt”, while civil cases are decided on “the balance of probabilities”.
The meaning of these terms would seem obvious to anyone trained in basic statistics. Scientists think in terms of confidence intervals – they are inclined to accept a hypothesis if the probability that it is true exceeds 95 per cent. “Beyond reasonable doubt” appears to be a claim that there is a high probability that the hypothesis – the defendant’s guilt – is true. Perhaps criminal conviction requires a higher standard than the scientific norm – 99 per cent or even 99.9 per cent confidence is required to throw you in jail. “On the balance of probabilities” must surely mean that the probability the claim is well founded exceeds 50 per cent.
And yet a brief conversation with experienced lawyers establishes that they do not interpret the terms in these ways. One famous illustration supposes you are knocked down by a bus, which you did not see (that is why it knocked you down). Say Company A operates more than half the buses in the town. Absent other evidence, the probability that your injuries were caused by a bus belonging to Company A is more than one half. But no court would determine that Company A was liable on that basis.
A court approaches the issue in a different way. You must tell a story about yourself and the bus. Legal reasoning uses a narrative rather than a probabilistic approach, and when the courts are faced with probabilistic reasoning the result is often a damaging muddle …
When I have raised these issues with people with scientific training, they tend to reply that lawyers are mostly innumerate and with better education would learn to think in the same way as statisticians. Probabilistic reasoning has become the dominant method of structured thinking about problems involving risk and uncertainty – to such an extent that people who do not think this way are derided as incompetent and irrational …
It is possible – common, even – to believe something is true without being confident in that belief. Or to be sure that, say, a housing bubble will burst without being able to attach a high probability to any specific event, such as “house prices will fall 20 per cent in the next year”. A court is concerned to establish the degree of confidence in a narrative, not to measure a probability in a model.
Such narrative reasoning is the most effective means humans have developed of handling complex and ill-defined problems … Probabilistic thinking … often fails when we try to apply it to idiosyncratic events and open-ended problems. We cope with these situations by telling stories, and we base decisions on their persuasiveness. Not because we are stupid, but because experience has told us it is the best way to cope. That is why novels sell better than statistics texts.
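Kay’s bus example is easy to put into numbers. Here is a minimal sketch (Python, with the market share and the strength of the extra evidence stipulated purely for illustration) of why a bare base rate, although formally above the 50 per cent line, is such a fragile basis for a verdict:

```python
# Illustration only: "balance of probabilities" read as a bare base rate,
# and the same question once one piece of case-specific evidence is added.
share_a = 0.60                 # stipulated: Company A runs 60% of the town's buses
print(f"P(Company A's bus | base rate only) = {share_a:.2f}")

# Stipulated specific evidence: buses on that route at that hour are, say,
# three times as likely to be Company B's.
likelihood_ratio_b = 3.0       # P(evidence | Company B) / P(evidence | Company A)
prior_odds_a = share_a / (1 - share_a)
posterior_a = prior_odds_a / (prior_odds_a + likelihood_ratio_b)
print(f"P(Company A's bus | base rate + specific evidence) = {posterior_a:.2f}")
```

The base rate alone clears the 50 per cent hurdle; a single piece of case-specific evidence pulls it well below. What the court is really being asked to weigh is the credibility of a whole narrative about this bus on this night, not an isolated frequency.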