## The limits of probabilistic reasoning

12 February, 2018 at 09:48 | Posted in Statistics & Econometrics

Probabilistic reasoning in science — especially Bayesianism — reduces questions of rationality to questions of internal consistency (coherence) of beliefs. But even granted this questionable reductionism, it is not self-evident that rational agents really have to be probabilistically consistent. There is no strong warrant for believing so. Rather, there is strong evidence that we run into huge problems if we let probabilistic reasoning become the dominant method for doing research in the social sciences on problems that involve risk and uncertainty.

In many of the situations that are relevant to economics, one could argue that there is simply not enough adequate and relevant information to ground beliefs of a probabilistic kind, and that in those situations it is not possible, in any relevant way, to represent an individual’s beliefs in a single probability measure.

Say you have come to learn (based on your own experience and tons of data) that the probability of your becoming unemployed in Sweden is 10%. Having moved to another country (where you have no experience and no data), you have no information on unemployment and a fortiori nothing on which to base any probability estimate. A Bayesian would, however, argue that you would have to assign probabilities to the mutually exclusive alternative outcomes, and that these have to add up to 1 if you are rational. That is, in this case, and based on symmetry, a rational individual would have to assign probability 50% to becoming unemployed and 50% to becoming employed.

That feels intuitively wrong, though, and I guess most people would agree. Bayesianism cannot distinguish symmetry-based probabilities grounded in information from symmetry-based probabilities grounded in an absence of information. In these kinds of situations, most of us would rather say that it is simply irrational to be a Bayesian, and better instead to admit that we “simply do not know” or that we feel ambiguous and undecided. Arbitrary and ungrounded probability claims are more irrational than being undecided in the face of genuine uncertainty. So if there is not sufficient information to ground a probability distribution, it is better to acknowledge that simpliciter, rather than pretending to possess a certitude that we simply do not possess.
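The complaint that a single probability number collapses very different epistemic states can be made concrete. In the following minimal sketch (hypothetical observation counts; SciPy assumed available), an “informed” belief backed by a hundred observations and an “ignorant” flat-prior belief assign exactly the same point probability, and only the spread of the full distribution registers the difference:

```python
from scipy.stats import beta

# An 'informed' belief: flat prior updated on 50 unemployment spells
# observed in 100 hypothetical cases.
informed = beta(1 + 50, 1 + 50)

# An 'ignorant' belief: the flat prior Beta(1, 1), based on no data at all.
ignorant = beta(1, 1)

print(informed.mean(), ignorant.mean())  # both exactly 0.5
print(informed.std(), ignorant.std())    # very different spreads
```

The single number 0.5 is identical in both cases; what differs is something like the ‘weight’ Keynes insisted on, which shows up only in the concentration of the distribution, not in the point probability itself.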

I think this critique of Bayesianism is in accordance with the views of John Maynard Keynes’ *A Treatise on Probability* (1921) and *General Theory* (1937). According to Keynes we live in a world permeated by unmeasurable uncertainty – not quantifiable stochastic risk – which often forces us to make decisions based on anything but rational expectations. Sometimes we “simply do not know.” Keynes would not have accepted the view of Bayesian economists, according to whom expectations “tend to be distributed, for the same information set, about the prediction of the theory.” Keynes, rather, thinks that we base our expectations on the confidence or ‘weight’ we put on different events and alternatives. To Keynes expectations are a question of weighing probabilities by ‘degrees of belief,’ beliefs that have precious little to do with the kind of stochastic probabilistic calculations made by the rational agents modelled by probabilistically reasoning Bayesian economists.

We always have to remember that economics and statistics are two quite different things, and as long as economists cannot identify their statistical theories with real-world phenomena there is no real warrant for taking their statistical inferences seriously.

Just as there is no such thing as a ‘free lunch,’ there is no such thing as a ‘free probability.’ To be able to talk about probabilities at all, you have to specify a model. In statistics, any process you observe or measure is referred to as an experiment (rolling a die), and the results obtained are the outcomes or events of the experiment (the number of points rolled with the die, e.g. 3 or 5). If there is no chance set-up or model that generates the probabilistic outcomes or events, then, strictly speaking, there is no event at all.
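The die example can be spelled out in code: it is only because a chance set-up is specified in advance (here, a model of a fair six-sided die) that the observed relative frequency of an event has a model probability to converge to at all. A minimal sketch:

```python
import random

random.seed(0)

# The 'chance set-up': a model of a fair six-sided die.
rolls = [random.randint(1, 6) for _ in range(60_000)]

# Relative frequency of the event 'rolled a 3' under this model:
freq = rolls.count(3) / len(rolls)
print(freq)  # close to the model probability 1/6
```

Without the model (the fair die and its six equiprobable faces), the number `freq` is just a ratio of counts; it is the specified chance set-up that licenses calling it an estimate of a probability.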

Probability is a relational element. It always must come with a specification of the model from which it is calculated. And then, to be of any empirical scientific value, it has to be shown to coincide with (or at least converge to) real data generating processes or structures — something seldom or never done in economics.

And this is the basic problem!

If you have a fair roulette-wheel, you can arguably specify probabilities and probability density distributions. But how do you conceive of the analogous ‘nomological machines’ for prices, gross domestic product, income distribution, etc.? Only by a leap of faith. And that does not suffice in science. You have to come up with some really good arguments if you want to persuade people to believe in the existence of socio-economic structures that generate data with characteristics conceivable as stochastic events portrayed by probabilistic density distributions! Failing that, you simply conflate statistical and economic inferences.

The present ‘machine learning’ and ‘big data’ hype shows that many social scientists — falsely — think that they can get away with analysing real-world phenomena without any (commitment to) theory. But data never speak for themselves. Without a prior statistical set-up, there actually are no data at all to process. And a machine learning algorithm will only produce what you are looking for. Theory matters.

Causality in the social sciences — and economics — can never solely be a question of statistical inference. Causality entails more than predictability, and really explaining social phenomena in depth requires theory. Analysis of variation — the foundation of all econometrics — can never in itself reveal how these variations are brought about. Only when we are able to tie actions, processes or structures to the statistical relations detected can we say that we are getting at relevant explanations of causation.

Most facts have many different possible alternative explanations, but we want to find the best of all contrastive explanations (since all real explanation takes place relative to a set of alternatives). So which is the best explanation? Many scientists, influenced by statistical reasoning, think that the likeliest explanation is the best explanation. But the likelihood of x is not in itself a strong argument for thinking that it explains y. I would rather argue that what makes one explanation better than another are things like aiming for, and finding, powerful, deep causal features and mechanisms that we have warranted and justified reasons to believe in. Statistical reasoning — especially the variety based on a Bayesian epistemology — generally has no room for these kinds of explanatory considerations. The only thing that matters is the probabilistic relation between evidence and hypothesis. That is also one of the main reasons I find abduction — inference to the best explanation — a better description and account of what constitutes actual scientific reasoning and inference.

And even worse — some economists using statistical methods think that algorithmic formalisms somehow give them access to causality. That is, however, simply not true. Assuming ‘convenient’ things like ‘faithfulness’ or ‘stability’ is to assume what has to be proven. Deductive-axiomatic methods used in statistics do not produce evidence for causal inferences. The real causality we are searching for is the one existing in the real world around us. If there is no warranted connection between axiomatically derived statistical theorems and the real world, well, then we haven’t really obtained the causation we are looking for.

## 8 Comments »





Prof. Syll claims that he has a superior methodology for doing applied economics:

“I would rather argue that what makes one explanation better than another are things like aiming for and finding powerful, deep, causal, features and mechanisms that we have warranted and justified reasons to believe in.”

But:

1. All applied economists are just as earnest as Prof. Syll in aiming for and trying to find “powerful, deep, causal, features and mechanisms”.

2. Prof. Syll rejects all conventional empirical economics, but he provides no alternative criteria or methodology for deciding when belief is “warranted and justified”.

3. In the past Prof. Syll has declined all challenges to provide examples which might illustrate his purported superior methodology.

Comment by Kingsley Lewis— 12 February, 2018 #

Kingsley, I think you’re, actually, wrong on all three points:

1 No, unfortunately, not all applied economists go for “powerful, deep, causal features and mechanisms.” Lacking a sustainable ontology, most of them are content with rather superficial measurable relations of a non-causal nature. I’ve had many posts up on this critique vis-a-vis econometric procedures.

2 In the post I, e.g., explicitly mention IBE as a far better “methodology” than the deductive-axiomatic one standardly used in mainstream economics.

3 Cf. point 2 !

Comment by Lars Syll— 12 February, 2018 #

Prof Syll, many thanks for the response.

Yes, you do mention IBE. This approach judges the “best” explanation according to various aspects of its “loveliness”, instead of according to its likelihood.

Unfortunately, the “loveliness” of competing explanations is usually a very weak and subjective criterion/methodology for deciding when belief is “warranted and justified”.

IBE may appear to work for some scientific theories (e.g. Darwin’s theory of evolution), but these are exceptional cases when all competing explanations are very weak or plainly conflict with the evidence.

Regarding examples, please reference an example of IBE applied to economics.

Comment by Kingsley Lewis— 13 February, 2018 #

Re the weakness of the IBE criteria, I think — since we are discussing real-world open systems and not closed deductive-axiomatic systems — Keynes’ dictum “It is better to be vaguely right than precisely wrong” says it all.

Comment by Lars Syll— 13 February, 2018 #

Kingsley, as I see it, Lars’s main point is not to avoid the use of statistical methods or mathematics per se, but to see the limitations of using those methods in a social science like economics.

Remember that the first one to use Bayesian theory more thoroughly was Laplace, in his studies of mechanics and planetary astronomy.

I think Laplace would be rather stunned if he lived today and saw the heavy misuse of Bayesian methods in all sorts of ‘sciences’, from political science, ethnology and sociology to, foremost, macroeconomics.

The main point for every scientist in choosing methods, as I see it, should be first to ask oneself: does my method apply? If not, I should of course choose another one.

As C. Wright Mills once stated: as a social scientist you in many cases have to be your own methodologist, simply because in most areas that are of most interest there are no developed methodologies that apply.

For a critique of Bayesianism, see Andrew Gelman, ‘Objections to Bayesian statistics’, Bayesian Analysis 2008, Number 3, pp. 445–450:

“Bayesian inference is one of the more controversial approaches to statistics. The fundamental objections to Bayesian methods are twofold: on one hand, Bayesian methods are presented as an automatic inference engine, and this raises suspicion in anyone with applied experience. The second objection to Bayes comes from the opposite direction and addresses the subjective strand of Bayesian inference. This article presents a series of objections to Bayesian inference, written in the voice of a hypothetical anti-Bayesian statistician. The article is intended to elicit elaborations and extensions of these and other arguments from non-Bayesians and responses from Bayesians who might have different perspectives on these issues.

The fundamental objections to Bayesian methods are twofold: on one hand, Bayesian methods are presented as an automatic inference engine, and this raises suspicion in anyone with applied experience, who realizes that different methods work well in different settings. Bayesians promote the idea that a multiplicity of parameters can be handled via hierarchical, typically exchangeable, models, but it seems implausible that this could really work automatically. In contrast, much of the work in modern non-Bayesian statistics is focused on developing methods that give reasonable answers using minimal assumptions.

The second objection to Bayes comes from the opposite direction and addresses the subjective strand of Bayesian inference: the idea that prior and posterior distributions represent subjective states of knowledge. Here the concern from outsiders is, first, that as scientists we should be concerned with objective knowledge rather than subjective belief, and second, that it’s not clear how to assess subjective knowledge in any case.

Beyond these objections is a general impression of the shoddiness of some Bayesian analyses, combined with a feeling that Bayesian methods are being oversold as an all-purpose statistical solution to genuinely hard problems. Compared to classical inference, which focuses on how to extract the information available in data, Bayesian methods seem to quickly move to elaborate computation. It does not seem like a good thing for a generation of statistics to be ignorant of experimental design and analysis of variance, instead becoming experts on the convergence of the Gibbs sampler. In the short term this represents a dead end, and in the long term it represents a withdrawal of statisticians from the deeper questions of inference and an invitation for econometricians, computer scientists, and others to move in and fill in the gap.

I find it clearest to present the objections to Bayesian statistics in the voice of a hypothetical anti-Bayesian statistician. I am imagining someone with experience in theoretical and applied statistics, who understands Bayes’ theorem but might not be aware of recent developments in the field. In presenting such a persona, I am not trying to mock or parody anyone but rather to present a strong, firm statement of attitudes that deserve serious consideration.

Here follows the list of objections from a hypothetical or paradigmatic non-Bayesian:

Bayesian inference is a coherent mathematical theory but I don’t trust it in scientific applications. Subjective prior distributions don’t transfer well from person to person, and there’s no good objective principle for choosing a noninformative prior (even if that concept were mathematically defined, which it’s not). Where do prior distributions come from, anyway? I don’t trust them and I see no reason to recommend that other people do, just so that I can have the warm feeling of philosophical coherence.

To put it another way, why should I believe your subjective prior? If I really believed it, then I could just feed you some data and ask you for your subjective posterior. That would save me a lot of effort!

As Brad Efron wrote in 1986, Bayesian theory requires a great deal of thought about the given situation to apply sensibly, and recommending that scientists use Bayes’ theorem is like giving the neighborhood kids the key to your F-16. I’d rather start with tried and true methods, and then generalize using something I can trust, such as statistical theory and minimax principles, that don’t depend on your subjective beliefs. Especially when the priors I see in practice are typically just convenient conjugate forms. What a coincidence that, of all the infinite variety of priors that could be chosen, it always seems to be the normal, gamma, beta, etc., that turn out to be the right choices?

To restate these concerns mathematically: I like unbiased estimates and I like confidence intervals that really have their advertised confidence coverage. I know that these aren’t always going to be possible, but I think the right way forward is to get as close to these goals as possible and to develop robust methods that work with minimal assumptions. The Bayesian approach — to give up even trying to approximate unbiasedness and to instead rely on stronger and stronger assumptions — seems like the wrong way to go.

In the old days, Bayesian methods at least had the virtue of being mathematically clean. Nowadays, they all seem to be computed using Markov chain Monte Carlo, which means that, not only can you not realistically evaluate the statistical properties of the method, you can’t even be sure it’s converged, just adding one more item to the list of unverifiable (and unverified) assumptions. Computations for classical methods aren’t easy — running from nested bootstraps at one extreme to asymptotic theory on the other — but there is a clear goal of designing procedures with proper coverage, in contrast to Bayesian simulation, which seems stuck in an infinite regress of inferential uncertainty.

People tend to believe results that support their preconceptions and disbelieve results that surprise them. Bayesian methods encourage this undisciplined mode of thinking.

I’m sure that many individual Bayesian statisticians are acting in good faith, but they’re providing encouragement to sloppy and unethical scientists everywhere.

And, probably worse, Bayesian techniques motivate even the best-intentioned researchers to get stuck in the rut of prior beliefs.

As the applied statistician Andrew Ehrenberg wrote in 1986, Bayesianism assumes:

(a) either a weak or uniform prior, in which case why bother?; (b) or a strong prior, in which case why collect new data?; (c) or, more realistically, something in between, in which case Bayesianism always seems to duck the issue.

Nowadays people use a lot of empirical Bayes methods. I applaud the Bayesians’ newfound commitment to empiricism but am skeptical of this particular approach, which always seems to rely on an assumption of exchangeability.

In political science, people are embracing Bayesian statistics as the latest methodological fad. Well, let me tell you something.

The 50 states aren’t exchangeable. I’ve lived in a few of them and visited nearly all the others, and calling them exchangeable is just silly.

Calling it a hierarchical or a multilevel model doesn’t change things — it’s an additional level of modeling that I’d rather not do. Call me old-fashioned, but I’d rather let the data speak without applying a probability distribution to something like the 50 states, which are neither random nor a sample.

So, don’t these empirical and hierarchical Bayes methods use the data twice? If you’re going to be Bayesian, then be Bayesian: it seems like a cop-out and contradictory to the Bayesian philosophy to estimate the prior from the data. If you want to do multilevel modeling, I prefer a method such as generalized estimating equations that makes minimal assumptions.

And don’t even get me started on what Bayesians say about data collection. The mathematics of Bayesian decision theory lead inexorably to the idea that random sampling and random treatment allocation are inefficient, that the best designs are deterministic. I have no quarrel with the mathematics here — the mistake lies deeper, in the philosophical foundations, the idea that the goal of statistics is to make an optimal decision. A Bayes estimator is a statistical estimator that minimizes the average risk, but when we do statistics, we’re not trying to ‘minimize the average risk,’ we’re trying to do estimation and hypothesis testing. If the Bayesian philosophy of axiomatic reasoning implies that we shouldn’t be doing random sampling, then that’s a strike against the theory right there.

Bayesians also believe in the irrelevance of stopping times — that, if you stop an experiment based on the data, it doesn’t change your inference. Unfortunately for the Bayesian theory, the p-value does change when you alter the stopping rule, and no amount of philosophical reasoning will get you around that point.

I can’t keep track of what all those Bayesians are doing nowadays. Unfortunately, all sorts of people are being seduced by the promises of automatic inference through the magic of MCMC, but I wish they would all just stop already and get back to doing statistics the way it should be done, back in the old days when a p-value stood for something, when a confidence interval meant what it said, and statistical bias was something to eliminate, not something to embrace.”

Comment by Jan Milch— 16 February, 2018 #

“But how do you conceive of the analogous ‘nomological machines’ for prices”

Use linear algebra to represent all possible future market states in a matrix A. List your minimum desired payouts in a vector b. Use constraint relaxation techniques to solve Ax <= b. x is your optimal portfolio.

x can include insurance and other hedges. You can select subsets of markets to make the math easier. The only risk is how much you will profit, not whether you make a profit …
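A minimal, entirely hypothetical sketch of the idea (invented payout matrix and prices, using SciPy's `linprog`; note that for *minimum* desired payouts the constraint runs `A x >= b`, as the follow-up comment points out):

```python
import numpy as np
from scipy.optimize import linprog

# Rows: three hypothetical future market states; columns: two assets.
A = np.array([[1.0, 0.2],   # payout per unit of each asset in state 1
              [0.5, 1.5],   # state 2
              [0.1, 2.0]])  # state 3
b = np.array([1.0, 1.0, 1.0])  # minimum desired payout in every state
cost = np.array([1.0, 1.2])    # price per unit of each asset

# Minimize purchase cost subject to A @ x >= b and x >= 0.
# linprog expects A_ub @ x <= b_ub, so negate both sides of A @ x >= b.
res = linprog(c=cost, A_ub=-A, b_ub=-b, bounds=[(0, None)] * 2)

print(res.x)       # optimal holdings of the two assets
print(A @ res.x)   # resulting payout in each state (each meets the floor)
```

This is only the textbook linear-programming core; the claim that real portfolios can enumerate "all possible future market states" in such a matrix is exactly what the post's 'nomological machine' objection disputes.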

Practitioners in the private sector are writing programs to do this. I propose we hold challenges to open source the process and use perfect self-funding hedges to fund public spending. The Fed backstops everything if there is a crisis. (In 2008 the insurance piece broke because AIG for example was using MBS to guarantee payout on MBS defaults; supposedly, that is fixed now. In any case the Fed proved it can supply arbitrary liquidity to make mistakes go away.)

Comment by Robert Mitchell— 13 February, 2018 #

Ax >= b if b is your minimum desired profit …

Comment by Robert Mitchell— 13 February, 2018 #

[…] The limits of probabilistic reasoning, larspsyll.wordpress.com […]

Pingback by Reading List (Feb 21, 2018) | Bespoke Quantitative Solutions— 22 February, 2018 #