## Bayesian absurdities

20 Jun, 2022 at 15:23 | Posted in Statistics & Econometrics | 1 Comment

> In other words, if a decision-maker thinks something cannot be true and interprets this to mean it has zero probability, he will never be influenced by any data, which is surely absurd.
>
> So leave a little probability for the moon being made of green cheese; it can be as small as 1 in a million, but have it there since otherwise an army of astronauts returning with samples of the said cheese will leave you unmoved.
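The point in the quoted passage is a direct consequence of Bayes' rule: a prior of exactly zero stays zero under any evidence, while even a tiny positive prior can be revised upward. A minimal numerical sketch (the likelihood values are invented purely for illustration):

```python
def posterior(prior, like_h, like_not_h):
    """P(H | D) by Bayes' rule for a binary hypothesis H."""
    evidence = like_h * prior + like_not_h * (1 - prior)
    return like_h * prior / evidence

# Evidence a million times more likely if H is true than if it is false.
like_h, like_not_h = 1.0, 1e-6

print(posterior(0.0, like_h, like_not_h))   # 0.0 -- a zero prior is immovable
print(posterior(1e-6, like_h, like_not_h))  # ~0.5 -- a tiny prior can be revised
```

With a prior of 0, the numerator is zero and no data can move the posterior; with a prior of one in a million, the same evidence lifts the posterior to roughly one half.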

To get the Bayesian probability calculus going, you sometimes have to assume strange things — so strange that you should perhaps start wondering if maybe there is something wrong with your theory …

**Added:** For those interested in these questions concerning the reach and application of statistical theories, do read Sander Greenland’s insightful comment:

> My take is that the quoted passage is a poster child for what’s wrong with statistical foundations for applications. Mathematics only provides contextually void templates for what might be theories if some sensible mapping can be found between the math and the application context. Just as with frequentist and all other statistical “theories”, Bayesian mathematical theory (template) works fine as a tool when the problem can be defined in a very small world of an application in which the axioms make contextual sense under the mapping and the background information is not questioned. There is no need for leaving any probability on “green cheese” if you aren’t using Bayes as a philosophy, for if green cheese is really found, the entire contextual knowledge base is undermined and all well-informed statistical analyses sink with it.
>
> The problems often pointed out for econometrics are general ones of statistical theories, which can quickly degenerate into math gaming and are usually misrepresented as scientific theories about the world. Of course, with a professional sales job to do, statistics has encouraged such reification through use of deceptive labels like “significance”, “confidence”, “power”, “severity” etc. for what are only properties of objects in mathematical spaces (much like identifying social group dynamics with algebraic group theory or crop fields with vector field theory). Those stat-theory objects require extraordinary physical control of unit selection and experimental conditions to even begin to connect to the real-world meaning of those conventional labels. Such tight controls are often possible with inanimate materials (although even then they can cost billions of dollars to achieve, as with large particle colliders). But they are infrequently possible with humans, and I’ve never seen them approached when whole societies are the real-world target, as in macroeconomics, sociology, and social medicine. In those settings, at best our analyses only provide educated guesses about what will happen as a consequence of our decisions.

## 1 Comment



“you should perhaps start wondering if maybe there is something wrong with your theory” … What theory?


[In fairness, I think Lindley understood these limitations of theory in practice, but was at times given to misleading hyperbole or oversimplification when selling Bayes. DeFinetti, his idol, definitely warned against reification, and operationalized the educated guessing in the guise of betting.]

Comment by lesdomes — 20 Jun, 2022