Why Bayesianism has not resolved a single fundamental scientific dispute

3 Jan, 2019 at 23:10 | Posted in Economics, Theory of Science & Methodology | 3 Comments

Bayesian reasoning works, undeniably, where we know (or are ready to assume) that the process studied fits certain special though abstract causal structures, often called ‘statistical models’ … However, when we choose among hypotheses in important scientific controversies, we usually lack such prior knowledge of causal structures, or it is irrelevant to the choice. As a consequence, such Bayesian inference to the preferred alternative has not resolved, even temporarily, a single fundamental scientific dispute.

Mainstream economics nowadays usually assumes that agents that have to make choices under conditions of uncertainty behave according to Bayesian rules (preferably the ones axiomatized by Ramsey (1931), de Finetti (1937) or Savage (1954)) — that is, they maximize expected utility with respect to some subjective probability measure that is continually updated according to Bayes theorem. If not, they are supposed to be irrational, and ultimately — via some “Dutch book” or “money pump” argument — susceptible to being ruined by some clever “bookie”.
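The updating rule invoked here is easy to state; below is a minimal sketch in Python of Bayes' theorem applied to a subjective probability. The recession scenario and all numbers are invented for illustration.

```python
# A minimal sketch of the Bayesian updating rule the paragraph describes.
# The scenario and every number here are made up for illustration.

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Return P(H | E) from P(H) and the likelihoods P(E | H), P(E | not H)."""
    evidence = likelihood_if_true * prior + likelihood_if_false * (1 - prior)
    return likelihood_if_true * prior / evidence

# Subjective prior that a recession hits next year: 20%.
prior = 0.20
# The agent observes a leading indicator that (she believes) appears
# in 70% of pre-recession years but only 10% of normal years.
posterior = bayes_update(prior, 0.70, 0.10)
print(round(posterior, 3))  # 0.636
```

The dispute in the text is not about this arithmetic, which is trivial, but about whether the prior it starts from can be rationally grounded at all.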

Bayesianism reduces questions of rationality to questions of internal consistency (coherence) of beliefs, but — even granted this questionable reductionism — do rational agents really have to be Bayesian? There are no strong warrants for believing so.

In many of the situations that are relevant to economics, one could argue that there is simply not enough of adequate and relevant information to ground beliefs of a probabilistic kind, and that in those situations it is not really possible, in any relevant way, to represent an individual’s beliefs in a single probability measure.

Say you have come to learn (based on your own experience and tons of data) that the probability of you becoming unemployed in the US is 10%. Having moved to another country (where you have no experience of your own and no data) you have no information on unemployment and a fortiori nothing on which to ground any probability estimate. A Bayesian would, however, argue that you would have to assign probabilities to the mutually exclusive alternative outcomes and that these have to add up to 1 if you are rational. That is, in this case — and based on symmetry — a rational individual would have to assign probability 50% to becoming unemployed and 50% to becoming employed.

That feels intuitively wrong though, and I guess most people would agree. Bayesianism cannot distinguish between symmetry-based probabilities grounded in information and symmetry-based probabilities that merely reflect an absence of information. In these kinds of situations, most of us would rather say that it is simply irrational to be a Bayesian and better instead to admit that we “simply do not know” or that we feel ambiguous and undecided. Arbitrary and ungrounded probability claims are more irrational than being undecided in the face of genuine uncertainty, so if there is not sufficient information to ground a probability distribution it is better to acknowledge that simpliciter, rather than pretending to a certitude we do not possess.
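One way to make the indistinguishability point concrete (a sketch with an assumed Beta/Bernoulli setup, not anything in the original argument): an agent with a mass of perfectly balanced data and an agent with no data at all end up reporting the identical probability number.

```python
# Two agents, one number. Agent 1 has seen 1000 balanced observations;
# agent 2 has seen nothing and falls back on indifference. Under a
# uniform (Beta(1, 1)) prior both report exactly the same probability.
# The counts are invented for illustration.

def posterior_mean(successes, failures):
    """Posterior mean of a Bernoulli parameter under a Beta(1, 1) prior."""
    return (1 + successes) / (2 + successes + failures)

p_informed = posterior_mean(500, 500)  # 501/1002 = 0.5, from 1000 data points
p_ignorant = posterior_mean(0, 0)      # 1/2 = 0.5, from no data at all

assert p_informed == p_ignorant == 0.5
```

The two posteriors do differ in spread, but that is exactly the ‘weight of evidence’ information that a single probability number throws away.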

We live in a world permeated by unmeasurable uncertainty – not quantifiable stochastic risk – which often forces us to make decisions based on anything but rational expectations. Sometimes we ‘simply do not know.’ There are no strong reasons why we should accept the Bayesian view of modern mainstream economists, according to whom expectations “tend to be distributed, for the same information set, about the prediction of the theory.” As argued by Keynes, we rather base our expectations on the confidence or “weight” we put on different events and alternatives. Expectations are a question of weighing probabilities by ‘degrees of belief,’ beliefs that standardly have precious little to do with the kind of stochastic probabilistic calculations made by the rational agents modelled by mainstream economists.


  1. “We live in a world permeated by unmeasurable uncertainty – not quantifiable stochastic risk – which often forces us to make decisions based on anything but rational expectations.”
    You can hedge risk. Inflation swaps essentially take inflation out of the equation for the duration of the contract. A party that benefits from inflation gives some of its inflation-driven profit to a counterparty that wants to lock in the maximum inflation rate it is willing to tolerate. Both parties benefit: if inflation goes up, the inflation winner gives some back to the inflation loser, but the swap asset increases, so the funding ratio remains the same. If inflation goes down, the inflation protection buyer pays the fixed rate it planned for, and the inflation protection seller receives a payment higher than actual inflation. Both sides win in an inflation swap because both sides’ funding ratio remains the same.
    I think that’s mostly correct. Economists should be studying derivatives because they allow you to strip out risk, so that inflation risk can be hedged away. There might still be counterparty risk, but collateral and credit default swap derivatives eliminate that risk …
    Rather than decry Knightian uncertainty, figure out how finance has devised ways to hedge it.
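The swap mechanics the comment describes can be sketched as a zero-coupon inflation swap settlement; the function name, notional, and rates below are invented for illustration.

```python
# Hedged sketch of a zero-coupon inflation swap settlement: one leg pays
# a pre-agreed fixed rate, the other pays realised inflation, and only
# the net difference changes hands. All figures are illustrative.

def inflation_swap_settlement(notional, fixed_rate, realised_inflation, years):
    """Net payment to the fixed-rate payer (negative if it owes the other side)."""
    fixed_leg = notional * ((1 + fixed_rate) ** years - 1)
    inflation_leg = notional * ((1 + realised_inflation) ** years - 1)
    return inflation_leg - fixed_leg

# A fund locks in 2.5% inflation on a 10m notional for 5 years.
# If inflation turns out to be 4%, the fund is compensated for the excess:
print(round(inflation_swap_settlement(10_000_000, 0.025, 0.04, 5), 2))
```

Note the hedge works only for the quantifiable-risk part of the picture: the contract transfers a measurable exposure, which is precisely the risk/uncertainty distinction the post draws.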

  2. I read the Dynamic Choice article, linked to under “money pump”.
    The article argues that unranked choice preferences lead to vulnerabilities. Someone can exploit your lack of transitive ordering of preferences to extract money from you.
    Such arguments assume money is necessarily limited. Budget constraints are assumed. Then public policy creates a self-fulfilling prophecy by arbitrarily limiting budgets. Private financial practice, however, allows vast credit creation, which circulates as money for long periods, with much of the privately created credit ending up as bona fide claims on the central bank.
    But the article ignores the question of who gets to decide that your budget is limited.
    Music strikes me as a place where intransitive choices abound, creating pleasure. You can improvise; you can express through your playing the idea that one note is too many and a thousand is not enough (you can lay out, playing no notes at all, just when the time series predicts you should be playing a thousand).
    The last example is like what a recovering addict feels: one is too many, and a thousand is not enough. The article uses the self-torturer story to illustrate this same concept of zero being less than one, but greater than one thousand. Such irrationality is said to be punished in markets. I say, do away with markets then. Give me the freedom to be irrational; let me self-provision on unenclosed land, away from markets …
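The money-pump argument the comment refers to can be simulated directly; the goods, the preference cycle, and the fee below are invented for illustration.

```python
# Sketch of a 'money pump': an agent whose preferences cycle
# (A preferred to B, B to C, C to A) accepts every 'upgrade' for a
# small fee, cycles forever, and steadily loses money.

PREFERS = {("A", "B"), ("B", "C"), ("C", "A")}  # (x, y): x strictly preferred to y

def run_pump(start_good, cash, fee, rounds):
    """Offer the agent the good it prefers to its current one; it always pays up."""
    upgrade = {"B": "A", "C": "B", "A": "C"}  # preferred alternative to each holding
    good = start_good
    for _ in range(rounds):
        offered = upgrade[good]
        assert (offered, good) in PREFERS     # the agent strictly prefers the offer
        good, cash = offered, cash - fee      # ...so it trades and pays the fee
    return good, cash

good, cash = run_pump("A", 100.0, 1.0, 30)
print(good, cash)  # back to holding "A", but 30 units poorer
```

The simulation makes the commenter's objection visible too: the pump only ruins the agent if its cash is finite and cannot be replenished.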

  3. The usual application of the Bayesian principle of ignorance is to probability values for outcomes, i.e., if we know nothing (Bayesians argue), then we should assign equal probability to each possible outcome. But this approach implicitly assigns a uniform distribution to the unknown variable. Surely, in a situation of complete ignorance, one should not assume a specific distribution. Rather, it would be more rational to assume no knowledge of the distribution, and to treat all possible distributions, not all possible outcomes, as equally likely. Bayesianism fails its own criterion of reasonableness under ignorance.
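The commenter's distinction can be made concrete in a binary case (a sketch; the two-draws example is my own illustration): uniform-over-outcomes and uniform-over-distributions agree on a single draw but already disagree on a sequence of two.

```python
from fractions import Fraction

# Probability of two successes in a row under two readings of 'ignorance'.

# (a) Uniform over *outcomes*: a fixed chance p = 1/2 for each draw.
p_fixed = Fraction(1, 2) ** 2   # (1/2)^2 = 1/4

# (b) Uniform over *distributions*: p ~ Uniform(0, 1), draws exchangeable,
#     so P(two successes) = E[p^2] = integral of p^2 over [0, 1] = 1/3.
p_mixed = Fraction(1, 3)

print(p_fixed, p_mixed)  # 1/4 vs 1/3: the two 'ignorance' priors differ
```

So "equal probability to each outcome" and "equal probability to each distribution" are genuinely different priors, which is the commenter's point.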

