Radical uncertainty — a question of economic methodology

10 Apr, 2019 at 13:13 | Posted in Economics | 9 Comments

Between 1920 and 1950, a debate took place which defined the future of economics in the second half of the 20th century. On one side were John Maynard Keynes and Frank Knight; on the other, Frank Ramsey and Jimmie Savage.

Knight and Keynes believed in the ubiquity of “radical uncertainty”. Not only did we not know what was going to happen, we had a very limited ability to even describe the things that might happen. They distinguished risk, which could be described with the aid of probabilities, from real uncertainty—which could not. In Knight’s world, such uncertainties gave rise to the profit opportunities which were the dynamic of a capitalist economy. Keynes saw these uncertainties as at the root of the inevitable instability in such economies.

Their opponents insisted instead that all uncertainties could be described probabilistically. And their opponents won, not least because their probabilistic world was convenient: it could be described axiomatically and mathematically.

It is difficult to exaggerate the practical consequence of the outcome of that technical argument. To acknowledge the role of radical uncertainty is to knock away the foundations of finance theory and much modern macroeconomics. But the reigning consensus is beset with glaring weaknesses. Keynes and Knight were right, and their opponents wrong. And recognition of that is a necessary preliminary to the rebuilding of a more relevant economic theory.

John Kay

Many economists have over time tried to diagnose the problem behind the ‘intellectual poverty’ that characterizes modern mainstream economics. Kay points to the questionable reduction of uncertainty into probabilistic risk. Rationality postulates, rational expectations, market fundamentalism, general equilibrium, atomism, and over-mathematisation are some of the other things critics have pointed at. But although these assumptions/axioms/practices are deeply problematic, they are mainly reflections of a deeper and more fundamental problem.

The main problem with mainstream economics is its methodology.

The fixation on constructing models showing the certainty of logical entailment has been detrimental to the development of a relevant and realist economics. Insisting on formalistic (mathematical) modelling forces the economist to give up on realism and substitute axiomatics for real-world relevance. The price for rigour and precision is far too high for anyone who is ultimately interested in using economics to pose and (hopefully) answer real-world questions and problems.

This deductivist orientation is the main reason behind the difficulty that mainstream economics has in terms of understanding, explaining and predicting what takes place in our societies. But it has also given mainstream economics much of its discursive power – at least as long as no one starts asking tough questions about the veracity of – and justification for – the assumptions on which the deductivist foundation is erected. Asking these questions is an important ingredient in a sustained critical effort at showing how nonsensical it is to embellish a smorgasbord of models founded on wanting (often hidden) methodological foundations.

The mathematical-deductivist straitjacket used in mainstream economics presupposes atomistic closed systems – i.e., something that we find very little of in the real world, a world significantly at odds with the (implicitly) assumed logical world where deductive entailment rules the roost. Ultimately, then, the failings of modern mainstream economics have their roots in a deficient ontology. The kind of formal-analytical and axiomatic-deductive mathematical modelling that makes up the core of mainstream economics is hard to make compatible with a real-world ontology. It is also the reason why so many critics find mainstream economic analysis patently and utterly unrealistic and irrelevant.

Although there has been a clearly discernible increase in, and focus on, “empirical” economics in recent decades, the results in these research fields have not fundamentally challenged the main deductivist direction of mainstream economics. They are still mainly framed and interpreted within the core “axiomatic” assumptions of individualism, instrumentalism and equilibrium that make up even the “new” mainstream economics. Although, perhaps, a sign of an increasing – but highly path-dependent – theoretical pluralism, mainstream economics is still, from a methodological point of view, mainly a deductive project erected on a foundation of empty formalism.

If we want theories and models to confront reality, there are obvious limits to what can be said “rigorously” in economics. For although it is generally a good aspiration to search for scientific claims that are both rigorous and precise, we have to accept that the chosen level of precision and rigour must be relative to the subject matter studied. An economics that is relevant to the world in which we live can never achieve the same degree of rigour and precision as logic, mathematics or the natural sciences. Collapsing the gap between model and reality in that way will never yield anything but empty formalist economics.

In mainstream economics, with its addiction to the deductivist approach of formal-mathematical modelling, model consistency trumps coherence with the real world. That is surely getting the priorities wrong. Creating models for their own sake is not an acceptable scientific aspiration – impressive-looking formal-deductive models should never be mistaken for truth. It is still a fact that within mainstream economics internal validity is everything and external validity nothing. Why anyone should be interested in those kinds of theories and models is beyond my imagination. As long as mainstream economists do not come up with any export licenses for their theories and models to the real world in which we live, they really should not be surprised if people say that this is not science, but autism!


When applying deductivist thinking to economics, economists usually set up “as if” models based on a set of tight axiomatic assumptions from which consistent and precise inferences are made. The beauty of this procedure is of course that if the axiomatic premises are true, the conclusions necessarily follow. The snag is that if the models are to be relevant, we also have to argue that their precision and rigour still holds when they are applied to real-world situations. They often don’t. When addressing real economies, the idealizations necessary for the deductivist machinery to work, simply don’t hold.

So how should we evaluate the search for ever greater precision and the concomitant arsenal of mathematical and formalist models? To a large extent, the answer hinges on what we want our models to do and how we basically understand the world.

For Keynes, the world in which we live is inherently uncertain and quantifiable probabilities are the exception rather than the rule. To every statement about it is attached a “weight of argument” that makes it impossible to reduce our beliefs and expectations to a one-dimensional stochastic probability distribution. If “God does not play dice” as Einstein maintained, Keynes would add “nor do people”. The world as we know it has limited scope for certainty and perfect knowledge. Its intrinsic and almost unlimited complexity and the interrelatedness of its organic parts prevent the possibility of treating it as constituted by “legal atoms” with discretely distinct, separable and stable causal relations. Our knowledge accordingly has to be of a rather fallible kind.

To search for precision and rigour in such a world is self-defeating, at least if precision and rigour are supposed to assure external validity. The only way to defend such an endeavour is to turn a blind eye to ontology and restrict oneself to proving things in closed model-worlds. Why we should care about these, and not ask questions of relevance, is hard to see. We have to at least justify our disregard for the gap between the nature of the real world and our theories and models of it.

Keynes once wrote that economics “is a science of thinking in terms of models joined to the art of choosing models which are relevant to the contemporary world.” Now, if the real world is fuzzy, vague and indeterminate, then why should our models build upon a desire to describe it as precise and predictable? Even if there always has to be a trade-off between theory-internal validity and external validity, we have to ask ourselves if our models are relevant.

Models preferably ought to somehow reflect/express/correspond to reality. I’m not saying that the answers are self-evident, but at least you have to do some philosophical under-labouring to rest your case. Too often that is wanting in modern economics, just as it was when Keynes in the 1930s complained about Tinbergen’s and other econometricians’ lack of justification for the chosen models and methods.

“Human logic” has to supplant the classical, formal, logic of deductivism if we want to have anything of interest to say of the real world we inhabit. Logic is a marvellous tool in mathematics and axiomatic-deductivist systems, but a poor guide for action in real-world systems, in which concepts and entities are without clear boundaries and continually interact and overlap. In this world, I would say we are better served with a methodology that takes into account that “the more we know the more we know we don’t know”.

The models and methods we choose to work with have to be consonant with the economy as it is situated and structured. Epistemology has to be founded on ontology. Deductivist closed-system theories, such as all the varieties of the Walrasian general-equilibrium kind, could perhaps adequately represent an economy showing closed-system characteristics. But since the economy clearly has more in common with an open-system ontology, we ought to look out for other theories – theories that are rigorous and precise in the sense that they can be deployed to enable us to detect important causal mechanisms, capacities and tendencies pertaining to deep layers of the real world.

Rigour, coherence and consistency have to be defined relative to the entities to which they are supposed to apply. Too often they have been restricted to questions internal to the theory or model. But clearly, the nodal point has to concern external questions, such as how our theories and models relate to real-world structures and relations. Applicability rather than internal validity ought to be the arbiter of taste.

So — if we want to develop a new and better economics we have to give up on the deductivist straitjacket methodology. To focus scientific endeavours on proving things in models is a gross misapprehension of what an economic theory ought to be about. Deductivist models and methods disconnected from reality are not relevant to predict, explain or understand real-world economies.

If economics is going to be useful, it has to change its methodology. Economists have to get out of their deductivist theoretical ivory towers and start asking questions about the real world. A relevant economic science presupposes adopting methods suitable to the object it is supposed to predict, explain or understand.


  1. “root of the inevitable instability…”.
    Having worked in the areas of hydrodynamic stability as well as loop stability criteria for servo mechanisms and controls I just want to point out that word, instability, is bandied about much in macroeconomics but the implied meaning in that context seems to be poorly defined.
    In math physics and in applied math stability refers to a question:
    Given that a steady solution to the governing equations exists, is that solution unique? And if not, will an infinitesimal perturbation result in a return to the first solution, or will it evolve away from the first and go to a second solution, or perhaps evolve to an unbounded state (blow-up)?
    One example of a steady solution is static equilibrium. That’s about as steady as it gets. For example, consider a rod pendulum. This system has two solutions that both satisfy the governing equation for a pendulum in static equilibrium. One has the bob hanging vertically down and the other has the bob pointing straight up. Both are steady solutions for all time, but if we perturb these with a slight puff, only the bob-down solution is stable to small perturbations. The bob-up solution is not. One puff and it goes to the bob-down configuration.
    This is a non-parametric system with two solutions, one stable and one unstable.
    A parametric or conditional instability is given by the Euler buckling column. Consider an elastic material forming a slender column loaded precisely axially, like a piece of coat hanger squeezed axially by an adjustable spring-loaded vice. Experiment shows that, very reproducibly, the beam will buckle at a precise loading determined by the beam geometry, its material properties (modulus) and the load, forming a single dimensionless parameter. Equally precisely, you can model this system and show there are two solutions, straight and buckled, and you can test the stability of the straight solution by perturbing it slightly and seeing if the perturbation grows exponentially in time or returns to the unperturbed solution. The agreement between experiment and theory is highly precise, and this design is used for resettable pressure relief valves.
    Another example has a steady solution that is not static. Consider flow of a fluid in a round tube. In this case the basic-state solution is a uniaxial parallel steady flow with a parabolic velocity profile. The stability parameter is the dimensionless centerline velocity (the Reynolds number), and the experimental result is that small perturbations do nothing up to a critical value of the Reynolds number (N), but above about N = 2000 a permanent transition to turbulent or chaotic flow is seen. The corresponding analysis remains somewhat intractable because both experiment and theory suggest other parameters, like wall roughness and details of any perturbation, are needed to capture the full behavior.
    With this in mind, systems like the predator/prey population oscillations Steve Keen is deriving for macro modeling are not unstable, either parametrically or otherwise. They simply evolve forward in time given some initial state and system parameters. There is no steady basic solution to test by the perturbation method. The word instability has little meaning here, although one can exhibit strange-attractor-type behaviors in the phase plane, where solutions orbit one locus for a while but then evolve away and go into orbit around another attractor. No perturbation is required, because the first attractor is not a steady solution for all time. The system is doomed from the start to end up orbiting the second attractor.
    My point is there is a rich literature for stability of dynamical systems. Key words are: perturbation theory, bifurcation theory, nonlinear behavior of dynamical systems.
    When those of us steeped in traditional stability theory see economists bandy about terms like the instability of market systems, it’s often hard to imagine what they mean.
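    The linearization test sketched in this comment can be made concrete. A minimal illustration (Python; the gravity, length and damping values are assumptions chosen for the sketch, not taken from the comment): classify each pendulum equilibrium by the eigenvalues of the Jacobian of the damped pendulum written as a first-order system.

```python
import numpy as np

def pendulum_jacobian(theta_eq, g=9.81, L=1.0, c=0.5):
    # Damped pendulum: theta'' + c*theta' + (g/L)*sin(theta) = 0,
    # rewritten as a first-order system in (theta, theta'),
    # linearized at the equilibrium (theta_eq, 0).
    return np.array([[0.0, 1.0],
                     [-(g / L) * np.cos(theta_eq), -c]])

def is_stable(theta_eq):
    # Linearly stable iff every eigenvalue has a negative real part.
    eigs = np.linalg.eigvals(pendulum_jacobian(theta_eq))
    return bool(np.all(eigs.real < 0))

print(is_stable(0.0))    # bob hanging down: perturbations decay
print(is_stable(np.pi))  # bob pointing up: perturbations grow
```

    The bob-down equilibrium gives eigenvalues with negative real parts, while the bob-up equilibrium gives one positive eigenvalue, matching the one-puff observation above.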

    • When economists bandy about terms like “market system” it is always hard to know to what they refer, because there is manifestly no such thing. There is a lot of hand-waving in economics; the pseudo-physics math is purely ornamental.

  2. Uncertainty is not absolute. It is relative, a function of our filters of observation and analysis, and of the rules and structure of the system under consideration.

    What the poverty level will be at a given future date is radically uncertain in a laissez-faire economy where such an outcome fluctuates widely, depending on the confluence of a vast number of unrestricted individual free decisions. But it is much less uncertain in a more controlled socialist economy with financial safety nets and social welfare systems in place.

    How much of ‘radical uncertainty’ is a universal property of economics, and how much is merely a feature of present systems? Until we rule out alternative system designs that organize economic behavior in ways that confer reasonable certainty to the system components and outcomes of interest to us, the most we can say is that economies now *seem* to be characterized by radical uncertainty.

    • The terms laissez-faire economy, market system, and free market are in my view all possessed of the same illusionary thinking embodied in the magical “invisible hand” rhetoric. The illusion of consumers making “unrestricted individual free decisions”, while ignoring how the markets are rigged (monopolies, monopsonies, etc.), frequently to the detriment of the consumer, is documented by Akerlof et al. (Phishing for Phools), David Weil (The Fissured Workplace) and many others. Entire global supply chains have used technology smoke screens to hide such manipulation of entire sectors of the so-called “free market”, engaging in manipulation and deception so that consumers/workers can be brainwashed into believing they are making “unrestricted individual free decisions” about such important things as employment and wage rates, when in reality they are being manipulated via deception into positions where power, and its ability to control asymmetric information to disadvantage them, is all that really matters.

      I encountered such a global valueless chain once a few years back. Having run my own technology/software company prior to the rise of online technology platforms (LinkedIn; job boards like Monster, Indeed, CareerBuilder; relay server services like JobDiva that provide technology services to hide X-Originating-IPs and spoof telephone numbers via virtual PBXs) that enabled such manipulation and deception on a global scale, I knew something was not right. So I applied my own technology skills to disarticulate the deceptive supply chain and to apply reverse-deception techniques to trap the liars and get them to admit what they were doing.

      What I uncovered was a global supply chain stretching from India to Microsoft, Facebook, Bank of America, Wells Fargo, and any other major corporation engaging in the Fissured Workplace model, built from many middle layers designed to deceive workers and to manipulate them into working for a transnational network of “third party vendors” (aka body-shops, sweat-shops), the sole purpose of which is to wage-scalp highly educated and highly skilled workers in the same way that low-skilled and low-educated workers were used during the rise of outsourcing. The difference is that they don’t need to export the jobs anymore, because the sweat-shops are located in the same country where the highly skilled and highly educated workers already live.

      One day you can be working as a FT employee of Microsoft with a decent wage and benefits, and the next be subject to “employee shedding” and end up in a “third party vendor” with its office sitting right next to the Redmond campus, but with 60% of your base pay “extracted” by the “third party vendor” and with little or no benefits and practically no labor law protections.

      The workers had no “free choice” in this so-called “free market,” and yet an entire global valueless chain is created to deceive them into thinking they are making a free choice.

      After disarticulating this manipulative and deceptive supply chain, I turned to an in-depth study of economics to place this into some context. It was the economist David Weil who, in his book The Fissured Workplace, exposes what is going on in this corrupt and predatory so-called “free market.”

      To wit:

  3. A change in methodology really is the only way forward in economics. I am deeply grateful that you argue that way tirelessly.
    One question that has been nagging at me is this: suppose that economists are rational, and suppose further that they therefore understand that radical uncertainty rules instead of probability; why is it then that they do not switch methodology?
    After thinking about this problem for some time I came up with this answer. Mainstream economists do indeed understand the issues related to uncertainty v. probability, but since they cannot handle the former they rationally use the latter in order to “resolve” economic problems. In other words, the use of the wrong methodology simply is a means of coping with something that has no solution. What most economists ignore along the way, however, is that their “solution” has more or less the same ontological justification as religion, the pretence of knowledge, or similar approaches.
    To cut a long story short, we must apply the insights into the role of radical uncertainty to the analysis of economists’ behaviour in order to bring about a change in economic methodology.

  4. Finance adapts to uncertainty. The Fed has proven it can supply unlimited liquidity to deal with the uncertainty that finance firms can’t fully hedge away.

  5. I am not sure I understand why pervasive or radical uncertainty is particularly a “methodological” problem. Is the argument that an axiomatic assumption of pervasive uncertainty makes it impossible to reach a conclusion or complete a definite proof, so that considerations of “tractability” in the use of axiomatic/deductive methods lead theorists to focus their analysis on problem framing where pervasive uncertainty and its implications are excluded from consideration?
    Strictly speaking, that’s an argument against an improper use of axiomatic/deductive modeling in analysis of economic problems, and not really an argument against axiomatic/deductive methods per se. And, it begs a question you seem reluctant to answer: what is your alternative? If the problem of methods comes down to taking the easy way out and ignoring the implications of uncertainty in order to get definite but wrong answers, then the alternative is obviously taking the hard path, because the hard path is the true path.
    The important question would be: what does analytic reasoning tell us are the implications of pervasive and radical uncertainty for the organization of the economy as an institutionalized political system?
    I actually think informal analytic reasoning coupled with acute observation of actually existing institutions can tell us a great deal about the implications of radical and pervasive uncertainty, because social adaptations to uncertainty are ubiquitous in the institutional structures of the economy: it is simply necessary to be realistic and look in order to begin learning. Getting economists to look, and having looked, to see — that’s a big problem! But, is it particularly a problem of methodology? Are you arguing that the reluctance of economists to look and to see reality is properly a problem of methodology? I suppose I agree that it could be characterized in that way, and I would call it an overreliance on analysis without synthesis, analytic modeling without operational modeling, the neglect of effective methods of systematic observation and measurement.
    If analytic reasoning of the axiomatic/deductive type truly tells us nothing about the implications of pervasive and radical uncertainty, then I suppose we would be right to discard it altogether, though what modes of analytic reasoning we could apply in substitution leaves me puzzled. What types of analysis are not more or less axiomatic/deductive in character? Or is it your contention that we could dispose of analysis altogether? That does not seem plausible (or consistent with your thinking on the problems of reasoning about probability and statistical correlation).
    I guess I think the problem of axiomatic/deductive methods in economics is not that those methods are inherently wrong (indeed, I am not sure they are not epistemically indispensable), but that economists use them both wrongly and exclusively – a bad combination indeed. For all the claims by economists of rigor and precision in reasoning, all the critical junctures in the narrative logic chain of mainstream economics (aka Econ 101) consist of hand-waving over obvious errors and omissions. The smorgasbord is overladen because no model is ever discarded as wrong and disproven, though most of the ones customarily served up are seriously defective. (And those defects may well be clearly exposed in a proper application of axiomatic/deductive methods – so why concede false claims of rigor?)
    Anyway, those are my thoughts this morning. Very much enjoy visiting your blog. Thanks for your tireless effort.

  6. The point is that the ‘principles’ by which a society or a group lives in tolerable harmony are essentially religious. The essential nature of a religious principle is that not merely is it immoral to oppose it, but to ask what it is, is morally identical with denial and attack.

    There must be ultimates, and they must be religious, in economics as anywhere else, if one has anything to say touching conduct or social policy in a practical way. Man is a believing animal and to few, if any, is it given to criticize the foundations of belief ‘intelligently’.

    To inquire into the ultimates behind accepted group values is obscene and sacrilegious: objective inquiry is an attempt to uncover the nakedness of man, his soul as well as his body, his deeds, his culture, and his very gods. (Knight, 1932, pp. 448-9).

    Certainly the large general [economics] courses should be prevented from raising any question about objectivity, but should assume the objectivity of the slogans they inculcate, as a sacred feature of the system. (Knight, 1932, p. 455).

    ~ Knight, Frank H. (1932) “The Newer Economics and the Control of Economic Activity.” Journal of Political Economy, 40(4), pp. 433-76. Emphasis added.

    I find that last sentence chilling. That is sick and very twisted. A cult like any other, blind faith is demanded of its followers. This was cited in Norgaard.

    Euler, Lagrange, Stokes: Economics could learn a lot from these guys. You can build a physics of fluids by starting with a model for atoms and working from there. It turns out this will get you almost nowhere because, as a great mathematical physicist once told me, grinning, “It’s not where the action is.” It turns out you need to replace the atomistic view of fluids with a so-called “continuum approximation”. Only this way can you obtain continuously differentiable field equations which, believe me, are complicated enough without including the atomistic reality.
    Fundamentally, I think this failure to make the correct approximations in deriving the model framework for mainstream macroeconomics is like getting carried away doing statistical mechanics to derive emergent properties like viscosity or compressibility. It turns out you can bring such material properties into your field equations empirically, as coefficients defined by constitutive equations like the ideal gas law or the definition of Newtonian viscosity. It’s easier, more precise, and this is not where the real rubber-hits-the-road action is anyway. The governing field equations are highly nonlinear, especially in certain parametric regimes like high Reynolds number. Nonlinearity means multiple non-unique solutions exist, so which solution do we get in practice? This opens a whole new field: hydrodynamic stability theory. That’s where the action is! Chaos theory, bifurcation theory, perturbation methods, phase space, turbulent transition, lift, drag, Lorenz strange attractors and so on.
    Steve Keen keeps making this point for economics but it’s like watching a jet pilot explain to a primitive cargo cult that the dashboard instruments aren’t used to summon the gods.
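    The point in comment 1 about Keen-style predator/prey models can be shown numerically. A minimal sketch (Python; the parameter values, initial state and fixed RK4 step are assumptions chosen for illustration): the Lotka-Volterra system simply orbits its interior fixed point from whatever initial state it is given; no perturbation of a steady basic state is involved.

```python
import numpy as np

def lotka_volterra(x0, y0, a=1.0, b=0.5, c=0.75, d=0.25, dt=0.001, steps=20000):
    # Classic predator/prey system:
    #   dx/dt = x*(a - b*y)  (prey),  dy/dt = y*(-c + d*x)  (predator),
    # integrated with a fixed-step RK4 scheme.
    def f(s):
        x, y = s
        return np.array([x * (a - b * y), y * (-c + d * x)])
    s = np.array([x0, y0], dtype=float)
    traj = [s.copy()]
    for _ in range(steps):
        k1 = f(s)
        k2 = f(s + 0.5 * dt * k1)
        k3 = f(s + 0.5 * dt * k2)
        k4 = f(s + dt * k3)
        s = s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        traj.append(s.copy())
    return np.array(traj)

traj = lotka_volterra(4.0, 1.0)
# Populations oscillate but stay positive and bounded: an orbit, not an instability.
print(traj[:, 0].min(), traj[:, 0].max())
```

    With these parameters the trajectory cycles around the fixed point (c/d, a/b) = (3, 2); nothing blows up and nothing decays to a steady state, which is exactly why "stable vs. unstable" is the wrong vocabulary for such models.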
