Bayesianism — a patently absurd approach to science

13 January, 2019 at 14:54 | Posted in Theory of Science & Methodology | 7 Comments

Back in 1991, when yours truly earned his first PhD with a dissertation on decision making and rationality in social choice theory and game theory, I concluded that “repeatedly it seems as though mathematical tractability and elegance — rather than realism and relevance — have been the most applied guidelines for the behavioural assumptions being made. On a political and social level, it is doubtful if the methodological individualism, ahistoricity and formalism they are advocating are especially valid.”

This, of course, was like swearing in church. My mainstream colleagues were — to say the least — not exactly überjoyed.

The decision-theoretical approach I was most critical of was the one built on the then-reawakened Bayesian subjectivist (personalistic) interpretation of probability.

One of my inspirations when working on the dissertation was Henry E. Kyburg, and I still think his critique is the ultimate take-down of Bayesian hubris:

From the point of view of the “logic of consistency”, no set of beliefs is more rational than any other, so long as they both satisfy the quantitative relationships expressed by the fundamental laws of probability. Thus I am free to assign the number 1/3 to the probability that the sun will rise tomorrow; or, more cheerfully, to take the probability to be 9/10 that I have a rich uncle in Australia who will send me a telegram tomorrow informing me that he has made me his sole heir. Neither Ramsey, nor Savage, nor de Finetti, to name three leading figures in the personalistic movement, can find it in his heart to detect any logical shortcomings in anyone, or to find anyone logically culpable, whose degrees of belief in various propositions satisfy the laws of the probability calculus, however odd those degrees of belief may otherwise be …

Now this seems patently absurd. It is to suppose that even the most simple statistical inferences have no logical weight where my beliefs are concerned. It is perfectly compatible with these laws that I should have a degree of belief equal to 1/4 that this coin will land heads when next I toss it; and that I should then perform a long series of tosses (say, 1000), of which 3/4 should result in heads; and then that on the 1001st toss, my belief in heads should be unchanged at 1/4 …

There is another argument against both subjectivistic and logical theories that depends on the fact that probabilities are represented by real numbers … The point can be brought out by considering an old-fashioned urn containing black and white balls. Suppose that we are in an appropriate state of ignorance, so that, on the logical view, as well as on the subjectivistic view, the probability that the first ball drawn will be black, is a half … Now suppose that we draw a thousand balls from this urn, and that half of them are black. Relative to this information both the subjectivistic and the logical theories would lead to the assignment of a conditional probability of 1/2 to the statement that a black ball will be drawn on the 1001st draw …

Although it does seem perfectly plausible that our bets concerning black balls and white balls should be offered at the same odds before and after the extensive sample, it surely does not seem plausible to characterize our beliefs in precisely the same way in the two cases … This is a strong argument, I think, for considering the measure of rational belief to be two dimensional …

Henry E. Kyburg
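
To see what Kyburg is driving at, here is a minimal sketch of my own (the choice of a point-mass prior and a flat Beta(1, 1) prior is an illustrative assumption, not Kyburg's): two agents whose beliefs both satisfy the probability calculus watch 750 of 1000 tosses land heads, and only one of them is moved by the evidence.

```python
# Minimal sketch of Kyburg's coin example; the two priors are my own
# illustrative choices. Both agents are perfectly coherent, yet the
# probability calculus alone cannot fault the one who refuses to learn.

heads, tosses = 750, 1000

# Agent A: all prior mass on bias = 1/4, so conditioning on the data
# leaves the belief in heads on toss 1001 exactly where it started.
belief_a = 0.25

# Agent B: flat Beta(1, 1) prior; the posterior is Beta(1 + heads, 1 + (tosses - heads)),
# whose mean is the belief in heads on toss 1001.
belief_b = (1 + heads) / (2 + tosses)

print(f"Agent A after {tosses} tosses: {belief_a:.3f}")  # 0.250
print(f"Agent B after {tosses} tosses: {belief_b:.3f}")  # ~0.749
```

Coherence rules out Dutch books, but it has nothing to say against Agent A's dogmatic prior, which is precisely the 'logical shortcoming' Kyburg says the personalists cannot bring themselves to detect.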

Almost a hundred years after John Maynard Keynes wrote his seminal A Treatise on Probability (1921), it is still very difficult to find mainstream economists who seriously try to incorporate his far-reaching and incisive analysis of induction and evidential weight.

Variation, not replication, is at the core of induction. Finding that p(x|y) = p(x|y & w) doesn’t make w ‘irrelevant.’ Knowing that the probability is unchanged when w is present gives p(x|y & w) another evidential weight. Running 10 replicative experiments does not make you as ‘sure’ of your inductions as when running 10 000 varied experiments — even if the probability values happen to be the same.
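
A crude way to put numbers on that point, as a sketch of my own rather than anything in Keynes (the flat Beta(1, 1) prior is an assumption): the posterior probability of drawing a black ball can be exactly 1/2 both after 10 draws and after 1000 draws, while the spread of the posterior, one rough stand-in for evidential weight, shrinks dramatically.

```python
import math

# Posterior mean and standard deviation of the black-ball proportion under an
# assumed Beta(a0, b0) prior; the sd is used here as a crude proxy for 'weight'.
def beta_mean_sd(black, draws, a0=1.0, b0=1.0):
    a, b = a0 + black, b0 + (draws - black)
    mean = a / (a + b)
    sd = math.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))
    return mean, sd

for black, draws in [(5, 10), (500, 1000)]:
    mean, sd = beta_mean_sd(black, draws)
    print(f"{draws:4d} draws: P(black) = {mean:.2f}, posterior sd = {sd:.3f}")
#   10 draws: P(black) = 0.50, posterior sd = 0.139
# 1000 draws: P(black) = 0.50, posterior sd = 0.016
```

The single number 1/2 is identical in the two cases; it is the second dimension that separates a hunch from a well-grounded induction.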

According to Keynes we live in a world permeated by unmeasurable uncertainty – not quantifiable stochastic risk – which often forces us to make decisions based on anything but ‘rational expectations.’ Keynes rather thinks that we base our expectations on the confidence or ‘weight’ we put on different events and alternatives. To Keynes, expectations are a question of weighing probabilities by ‘degrees of belief,’ beliefs that often have precious little to do with the kind of stochastic probabilistic calculations made by the rational agents as modelled by mainstream economists.

How strange that mainstream economists do not even touch upon these aspects of scientific methodology, which seem so fundamental and important for anyone trying to understand how we learn and orient ourselves in an uncertain world. An educated guess as to why would be that Keynes's two-dimensional concepts of evidential weight and uncertainty cannot be squeezed into a single calculable numerical ‘probability’ (Peirce had a similar view — “to express the proper state of belief, not one number but two are requisite, the first depending on the inferred probability, the second on the amount of knowledge on which that probability is based”). In the quest for calculable risk, one turns a blind eye to genuine uncertainty and looks the other way.
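
One way to take Peirce's 'two numbers' seriously in practice, offered purely as an illustrative sketch (the pairing and the use of the number of observations as a weight proxy are my assumptions, not Peirce's or Keynes's), is to refuse to collapse a belief into a single figure and instead carry the probability and the evidence behind it together:

```python
from dataclasses import dataclass

# Illustrative sketch of Peirce's 'two numbers': a belief carried as a pair.
# Using the number of observations as the 'weight' is a crude proxy of my own.
@dataclass(frozen=True)
class Belief:
    probability: float  # first number: the inferred probability
    weight: int         # second number: the amount of evidence it rests on

thin = Belief(probability=0.5, weight=10)      # e.g. 5 black balls in 10 draws
thick = Belief(probability=0.5, weight=1000)   # e.g. 500 black in 1000 draws

# Equal as one-dimensional probabilities, but not interchangeable as beliefs:
print(thin.probability == thick.probability)   # True
print(thin == thick)                           # False: the weight differs
```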

7 Comments »


  1. Lol. I finished mine on rationality and formal logic, making parallel arguments (only touching briefly on problems with Bayesian approaches), in 1990.

  2. I don’t understand how Professor Syll can be against the rational expectations hypothesis yet assume GDP growth is good. Making consumption the goal of public policy assumes more is better, which is the core of rational expectations …
    .
    Not that there’s anything wrong with inconsistency. Just curious whether Lars is aware of it.

    • assumes more is better, which is the core of rational expectations

      Huh?

      • From Investopedia:
        .
        “The rational expectations theory is an economic concept whereby people make choices based on their rational outlook, available information and past experiences.”
        .
        The choices must be ranked. More is necessarily better.
        .
        “The rational expectations theory also explains how producers and suppliers use past events to predict future business operations. If a company believes that the price for its product will be higher in the future, for example, it will stop or slow production until the price rises.”
        .
        The company wants more dollars, because more is better.
        .
        Prioritizing GDP growth as a goal of public policy does the same.
        .
        The “more is better” assumption is shared by both GDP fetishists and rational expectation theorists.

    • That does not make any sense. Rational expectations is a way of modeling how people act in response to events, calculating from those events probable futures and behaving in anticipation of consequences thought of as evolving thru time from stochastic processes.

      • The higher the GDP, the better. Rational expectations, when you get beyond the mumbo-jumbo, reduces to prescribing behavior that increases your utility, or amount of things for sale. The goal of higher GDP is only valid if you assume the rational expectation of all economic agents is to spend more. The goal of increasing aggregate demand depends on the value judgment that more income is necessarily better. Rational expectations assumes the same …

  3. I do not see how Keynes’s “degrees of belief” or “weight of evidence” help us out of the conundrum introduced by the failure to confront the concept of probability with a concept of control. For Bayesian subjectivity, we are asked to substitute Keynesian subjectivity — where is there any gain?
    .
    Many commonly invoked probability concepts are predicated on having established a very sharp boundary between what we know and what we do not know: confidence that a process is producing, say, a normal distribution and will continue to do so in the foreseeable future, implies knowing exactly what we do not know, and, therefore, the path thru which we may smoothly learn. It is a crazy notion.
    .
    Probability concepts apply fruitfully in highly engineered contexts, whether engineered by nature or by the deliberate hands of men. No one should be willy-nilly applying common probability concepts while ignoring the highly engineered context that creates a controlled evolution of outcomes along a fully anticipated path.
    .
    If you want to know the bias of a coin used in coin-flipping, it would make a lot more sense to carefully examine the coin (and the means of flipping the coin and the environment where the coin is to be flipped) than it would be to engage in flipping the coin a thousand times, while naively hoping that the coin, the process of flipping and the environment are not changing.

