Why I am not a Bayesian

17 February, 2019 at 18:20 | Posted in Theory of Science & Methodology | 7 Comments

What I do not believe is that the relation that matters is simply the entailment relation between the theory, on the one hand, and the evidence on the other. The reasons that the relation cannot be simply that of entailment are exactly the reasons why the hypothetico-deductive account … is inaccurate; but the suggestion is at least correct in sensing that our judgment of the relevance of evidence to theory depends on the perception of a structural connection between the two, and that degree of belief is, at best, epiphenomenal. In the determination of the bearing of evidence on theory there seem to be mechanisms and stratagems that have no apparent connection with degrees of belief, which are shared alike by people advocating different theories. Save for the most radical innovations, scientists seem to be in close agreement regarding what would or would not be evidence relevant to a novel theory; claims as to the relevance to some hypothesis of some observation or experiment are frequently buttressed by detailed calculations and arguments. All of these features of the determination of evidential relevance suggest that that relation depends somehow on structural, objective features connecting statements of evidence and statements of theory. But if that is correct, what is really important and really interesting is what these structural features may be. The condition of positive relevance, even if it were correct, would simply be the least interesting part of what makes evidence relevant to theory.

None of these arguments is decisive against the Bayesian scheme of things … But taken together, I think they do at least strongly suggest that there must be relations between evidence and hypotheses that are important to scientific argument and to confirmation but to which the Bayesian scheme has not yet penetrated.

Clark Glymour

The vain search for The Holy Grail of Science

13 February, 2019 at 17:40 | Posted in Theory of Science & Methodology | Leave a comment

Traditionally, philosophers have focused mostly on the logical template of inference. The paradigm-case has been deductive inference, which is topic-neutral and context-insensitive. The study of deductive rules has engendered the search for the Holy Grail: syntactic and topic-neutral accounts of all prima facie reasonable inferential rules. The search has hoped to find rules that are transparent and algorithmic, and whose following will just be a matter of grasping their logical form. Part of the search for the Holy Grail has been to show that the so-called scientific method can be formalised in a topic-neutral way. We are all familiar with Carnap’s inductive logic, or Popper’s deductivism or the Bayesian account of scientific method.

There is no Holy Grail to be found. There are many reasons for this pessimistic conclusion. First, it is questionable that deductive rules are rules of inference. Second, deductive logic is about updating one’s belief corpus in a consistent manner and not about what one has reasons to believe simpliciter. Third, as Duhem was the first to note, the so-called scientific method is far from algorithmic and logically transparent. Fourth, all attempts to advance coherent and counterexample-free abstract accounts of scientific method have failed. All competing accounts seem to capture some facets of scientific method, but none can tell the full story. Fifth, though the new Dogma, Bayesianism, aims to offer a logical template (Bayes’s theorem plus conditionalisation on the evidence) that captures the essential features of non-deductive inference, it is betrayed by its topic-neutrality. It supplements deductive coherence with the logical demand for probabilistic coherence among one’s degrees of belief. But this extended sense of coherence is (almost) silent on what an agent must infer or believe.

Stathis Psillos

The quest for certainty — a new substitute for religion

10 February, 2019 at 16:13 | Posted in Theory of Science & Methodology | 6 Comments

In this post-rationalist age of ours, more and more books are written in symbolic languages, and it becomes more and more difficult to see why: what it is all about, and why it should be necessary, or advantageous, to allow oneself to be bored by volumes of symbolic trivialities. It almost seems as if the symbolism were becoming a value in itself, to be revered for its sublime ‘exactness’: a new expression of the old quest for certainty, a new symbolic ritual, a new substitute for religion.

As a critic of mainstream economics’ mathematical-formalist Glasperlenspiel, it is easy to share the feeling of despair …

Bayesian ‘old evidence’ problems

9 February, 2019 at 21:08 | Posted in Theory of Science & Methodology | Leave a comment

Why is the subjective Bayesian supposed to have an old evidence problem?

The allegation … goes like this: If probability is a measure of degree of belief, then if an agent already knows that e has occurred, the agent must assign P(e) the value 1. Hence P(e|H) is assigned a value of 1. But this means no Bayesian support accrues from e. For if P(e) = P(e|H) = 1, then P(H|e) = P(H). The Bayesian condition for support is not met …

How do subjective Bayesians respond to the charge that they have an old evidence problem? The standard subjective Bayesian response is …

“The Bayesian interprets P(e|¬H) as how likely you think e would be were H to be false” …

But many people — Bayesians included — are not too clear about how this “would be” probability is supposed to work.

Yes indeed — how is such a “would be” probability to be interpreted? The only feasible solution is arguably to restrict the Bayesian calculus to problems where well-specified nomological machines are operating. Throwing a die or pulling balls from an urn is fine, but then the Bayesian calculus would of course not have much to say about science …
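As a minimal illustration of where the calculus is unproblematic, here is a sketch (the urns, priors and draws are my own assumptions, invented for the example) of Bayesian conditionalisation on a well-specified nomological machine of the urn type:

```python
from math import prod

# A well-specified 'nomological machine': an urn known to be one of two
# kinds. H1: 70% of its balls are black; H2: 30% are black.
priors = {"H1": 0.5, "H2": 0.5}
p_black = {"H1": 0.7, "H2": 0.3}

def posterior(priors, p_black, draws):
    """Bayes' theorem over a fully specified chance setup.
    draws is a sequence of 'b' (black) and 'w' (white) outcomes."""
    likelihood = {
        h: prod(p_black[h] if d == "b" else 1 - p_black[h] for d in draws)
        for h in priors
    }
    evidence = sum(priors[h] * likelihood[h] for h in priors)
    return {h: priors[h] * likelihood[h] / evidence for h in priors}

print(posterior(priors, p_black, "bbwbb"))
# -> roughly {'H1': 0.93, 'H2': 0.07}. With the chance setup fixed in
# advance, conditionalisation is mechanical and unproblematic.
```

Everything here hinges on the machine being given in advance; it is precisely that prior specification that science usually lacks.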

The postmodern slippery slope

4 February, 2019 at 17:20 | Posted in Theory of Science & Methodology | Leave a comment

These theories of the social can then be turned on the social institution we call knowledge. Knowledge thereby becomes, for example, internalized, objectivated and legitimated externalizations of human behaviour … This is in itself unproblematic, provided that one can maintain a distinction between what we in society call ‘knowledge’ and knowledge in its philosophical and epistemological sense, as knowledge of reality (‘justified true belief’) … But it is precisely here that the problems begin. For the very instance that was supposed to maintain and make good this distinction between what we take to be knowledge and real — that is, true — knowledge is science itself … The classical relativism problem of the sociology of knowledge thus reappears. Scientific knowledge is not specially warranted, and in the end has no epistemological privilege over, say, pseudo-scientific knowledge claims.

One starts out as a good critical social scientist and ends up a relativist. That is what I call the social constructivist slippery slope. There are no natural places to jump off along the way. At least not once science can no longer distinguish between societal knowledge and real (scientific) knowledge. Indeed, the slope can extend even further. One can — also in light of the latest developments in the understanding of science — have such great difficulty distinguishing between knowledge and reality that one ends up in an ontological idealism, in which reality itself is socially constructed …

Because of the many different social constructivisms, many of social constructivism’s adherents fail to see the distinction between the unproblematic starting point and the radical epistemological relativism in which they end up, and they therefore cheerfully declare their social constructivist standpoint.

Søren Barlebo Wenneberg

The ‘slippery slope’ of postmodern social constructivism is also discussed in this PowerPoint presentation that yours truly gave a few years ago.

Postmodernism — an anti-intellectual abyss

3 February, 2019 at 11:28 | Posted in Theory of Science & Methodology | Leave a comment

The anti-intellectual abyss is near when postmodern truth relativism infects public discourse at every level, including academia.

In Sweden the discipline of education seems worst infected. A few years ago an associate professor of education was commissioned by the Swedish National Agency for Education to write a report on physics teaching in Swedish schools, and to propose how it might attract more girls.

From the report:

“The notion of the self-evident supremacy of scientific thinking sits ill with the ideals of gender equality and democracy. […] Certain ways of thinking and reasoning are rewarded more than others in natural-scientific contexts. […] If one does not pay attention to this, one risks making misleading assessments. For example, by unreflectively assuming that scientific thinking is more rational and therefore ought to replace everyday thinking” …

The education researcher goes on to write: “A gender-aware and gender-sensitive physics presupposes a relational approach to physics, and that a good deal of the traditional scientific knowledge content of physics be removed.”

The scientific knowledge content of physics is thus to be “removed” in order to “make things easier” for girls. Not only is this an appalling view of knowledge; it is also insulting to regard girls as incapable of, or worse at, absorbing knowledge in physics.

The author of the report is Moira von Wright, nowadays professor of education and vice-chancellor of Södertörn University. When such an epistemological outlook has taken root in our institutions of higher education, we have a problem.

Martin Ingvar, Christer Sturmark & Åsa Wikforss

Postmodern mumbo jumbo at our universities

2 February, 2019 at 13:08 | Posted in Theory of Science & Methodology | 1 Comment

Four important traits are common to the various movements:

1 Central ideas are not explained.

2 The grounds for a conviction are not stated.

3 The exposition of the doctrine is linguistically stereotyped … the same formulations recur time and again, never nuanced and never developed.

4 The same stereotypy governs the invocation of doctrinal authorities — a limited number of names occur. Heidegger, Foucault and Derrida come back, again and again …

To these four points I would, however, quite polemically, add a fifth:

5 The person in question has nothing essentially new to say …

The emptiness of content must be concealed, and strained and contrived formulations then enter the game …

And nowhere in Swedish academia does a discipline stand more obviously on the edge of this anti-intellectual postmodern abyss than education. In no other academic subject have postmodern truth relativism and quasi-scientific mumbo jumbo gained such a prominent position.

Clear evidence of the deplorable state of affairs in Swedish so-called ‘educational research’ can be had, for example, by reading the article “En pedagogisk relation mellan människa och häst — på väg mot en pedagogisk filosofisk utforskning av mellanrummet” (“A pedagogical relation between human and horse — towards a pedagogical-philosophical exploration of the in-between”) in Pedagogisk Forskning i Sverige:

With a posthumanist approach, I illuminate and reflect on how both human and horse transcend their beings and how this opens up an in-between space with dimensions of subjectivity, corporeality and mutuality.

Or, if you prefer, by sampling the contributions to a recently published ‘research anthology’ in education:

Posthumanist pedagogy challenges us to produce new realities in which the human being no longer places itself at the centre. Body, matter, animals and nature become active participants when knowledge comes into being. This makes it possible to see learning and knowledge in a new and different way.

Instead of focusing on boundaries, posthumanist pedagogy shows how fruitful it can be to follow undirected and asymmetric movements in unpredictable and uncontrollable directions …

The book offers various entry points to posthumanist pedagogy, such as philosophy, ethics, feminism, poetry, visual knowledge and documentation, but also reading dogs, schoolbooks, pencil cases, computer screens, mounted animals and ultrasound images.

And then they say the discipline of education is in crisis. I wonder why …

Why science is not a game of chance

29 January, 2019 at 15:54 | Posted in Theory of Science & Methodology | Leave a comment

If human scientists could be supposed to play a system of analogous games of chance … the evidential support available for successful scientific hypotheses could be measured by a Pascalian probability-function … But unfortunately the analogy breaks down at several points. The number of co-ordinate alternative outcomes that are possible in any one trial of the issue investigated may be infinite, indeterminate, or at least unknowable … And even more importantly, the trial outcomes may not be independent of one another … In short, science is not a game of chance with Nature, and we can grade enumerative induction by an indifference-type Pascalian probability only when we are generalizing about outcomes over a selected finite set of trials in a supposedly genuine game of chance.

Critical rationalism

26 January, 2019 at 16:24 | Posted in Theory of Science & Methodology | 3 Comments

For realists, the name of the scientific game is explaining phenomena, not just saving them. Realists typically invoke ‘inference to the best explanation’ [IBE] …

What exactly is the inference in IBE, what are the premises, and what the conclusion? 

The intellectual ancestor of IBE is Peirce’s abduction:

The surprising fact, C, is observed.
But if A were true, C would be a matter of course.
Hence, … A is true.

Here the second premise is a fancy way of saying “A explains C”. Notice that the explanatory hypothesis A figures in this second premise as well as in the conclusion. The argument as a whole does not generate the explanans out of the explanandum. Rather, it seeks to justify the explanatory hypothesis …

Abduction is deductively invalid … [but] there is a way to rescue abduction and IBE … What results, with the missing premise spelled out, is:

It is reasonable to believe that the best available explanation of any fact is true.
F is a fact.
Hypothesis H explains F.
No available competing hypothesis explains F as well as H does.
Therefore, it is reasonable to believe that H is true.

This scheme is valid and instances of it might well be sound. Inferences of this kind are employed in the common affairs of life, in detective stories, and in the sciences …

People object that the best available explanation might be false. Quite so – and so what? It goes without saying that any explanation might be false, in the sense that it is not necessarily true. It is absurd to suppose that the only things we can reasonably believe are necessary truths …

People object that being the best available explanation of a fact does not prove something to be true or even probable. Quite so – and again, so what? The explanationist principle – “It is reasonable to believe that the best available explanation of any fact is true” – means that it is reasonable to believe or think true things that have not been shown to be true or probable (more likely true than not).

Alan Musgrave

Bayesianism — a patently absurd approach to science

13 January, 2019 at 14:54 | Posted in Theory of Science & Methodology | 7 Comments

Back in 1991, when yours truly earned his first PhD with a dissertation on decision making and rationality in social choice theory and game theory, I concluded that “repeatedly it seems as though mathematical tractability and elegance — rather than realism and relevance — have been the most applied guidelines for the behavioural assumptions being made. On a political and social level, it is doubtful if the methodological individualism, ahistoricity and formalism they are advocating are especially valid.”

This, of course, was like swearing in church. My mainstream colleagues were — to say the least — not exactly überjoyed.

The decision-theoretical approach I was most critical of was the one building on the then newly reawakened Bayesian subjectivist (personalistic) interpretation of probability.

One of my inspirations when working on the dissertation was Henry E. Kyburg, and I still think his critique is the ultimate take-down of Bayesian hubris:

From the point of view of the “logic of consistency”, no set of beliefs is more rational than any other, so long as they both satisfy the quantitative relationships expressed by the fundamental laws of probability. Thus I am free to assign the number 1/3 to the probability that the sun will rise tomorrow; or, more cheerfully, to take the probability to be 9/10 that I have a rich uncle in Australia who will send me a telegram tomorrow informing me that he has made me his sole heir. Neither Ramsey, nor Savage, nor de Finetti, to name three leading figures in the personalistic movement, can find it in his heart to detect any logical shortcomings in anyone, or to find anyone logically culpable, whose degrees of belief in various propositions satisfy the laws of the probability calculus, however odd those degrees of belief may otherwise be …

Now this seems patently absurd. It is to suppose that even the most simple statistical inferences have no logical weight where my beliefs are concerned. It is perfectly compatible with these laws that I should have a degree of belief equal to 1/4 that this coin will land heads when next I toss it; and that I should then perform a long series of tosses (say, 1000), of which 3/4 should result in heads; and then that on the 1001st toss, my belief in heads should be unchanged at 1/4 …

There is another argument against both subjectivistic and logical theories that depends on the fact that probabilities are represented by real numbers … The point can be brought out by considering an old fashioned urn containing black and white balls. Suppose that we are in an appropriate state of ignorance, so that, on the logical view, as well as on the subjectivistic view, the probability that the first ball drawn will be black, is a half … Now suppose that we draw a thousand balls from this urn, and that half of them are black. Relative to this information both the subjectivistic and the logical theories would lead to the assignment of a conditional probability of 1/2 to the statement that a black ball will be drawn on the 1001st draw …

Although it does seem perfectly plausible that our bets concerning black balls and white balls should be offered at the same odds before and after the extensive sample, it surely does not seem plausible to characterize our beliefs in precisely the same way in the two cases … This is a strong argument, I think, for considering the measure of rational belief to be two dimensional …

Henry E. Kyburg
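Kyburg’s coin example is easy to make concrete. Here is a minimal sketch (my own illustration, not Kyburg’s formalism; the agents, priors and numbers are assumed for the example): probabilistic coherence alone leaves a dogmatic agent’s belief untouched by a thousand tosses, and nothing in the axioms singles out either agent as the more rational.

```python
# Agent A is certain the coin's bias is 1/4; agent B spreads belief
# over all possible biases with a uniform Beta(1, 1) prior.
heads, tosses = 750, 1000

# Agent A: a point-mass prior on bias 1/4 is perfectly coherent, and
# conditionalising on any run of tosses leaves it untouched.
belief_A = 0.25

# Agent B: standard Beta-Binomial conjugate update.
alpha, beta = 1 + heads, 1 + (tosses - heads)
belief_B = alpha / (alpha + beta)  # posterior predictive P(heads)

print(belief_A)            # 0.25 -- unchanged, yet violates no axiom
print(round(belief_B, 3))  # ~0.75
```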

Almost a hundred years after John Maynard Keynes wrote his seminal A Treatise on Probability (1921), it is still very difficult to find mainstream economists that seriously try to incorporate his far-reaching and incisive analysis of induction and evidential weight.

Variation, not replication, is at the core of induction. Finding that p(x|y) = p(x|y & w) doesn’t make w ‘irrelevant.’ Knowing that the probability is unchanged when w is present gives p(x|y & w) another evidential weight. Running 10 replicative experiments does not make you as ‘sure’ of your inductions as running 10 000 varied experiments — even if the probability values happen to be the same.
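A rough numerical proxy for the point (mine, and only a proxy: Keynes’s ‘weight of argument’ is not reducible to a posterior variance, and the sketch ignores the crucial role of variation) compares two Beta posteriors with the same mean:

```python
# Same probability value, very different amounts of evidence behind it.
def beta_mean_sd(successes, failures, a0=1.0, b0=1.0):
    a, b = a0 + successes, b0 + failures
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, var ** 0.5

print(beta_mean_sd(5, 5))        # after 10 trials:     mean 0.5, sd ~0.14
print(beta_mean_sd(5000, 5000))  # after 10 000 trials: mean 0.5, sd ~0.005
# The single number 0.5 is identical; everything that distinguishes the
# two epistemic situations lives in a second dimension.
```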

According to Keynes we live in a world permeated by unmeasurable uncertainty – not quantifiable stochastic risk – which often forces us to make decisions based on anything but ‘rational expectations.’ Keynes rather thinks that we base our expectations on the confidence or ‘weight’ we put on different events and alternatives. To Keynes, expectations are a question of weighing probabilities by ‘degrees of belief,’ beliefs that often have preciously little to do with the kind of stochastic probabilistic calculations made by the rational agents as modelled by mainstream economists.

How strange that mainstream economists do not even touch upon these aspects of scientific methodology that seem to be so fundamental and important for anyone trying to understand how we learn and orient ourselves in an uncertain world. An educated guess as to why this is so would be that Keynes’s two-dimensional concepts of evidential weight and uncertainty are not possible to squeeze into a single calculable numerical ‘probability’ (Peirce had a similar view — ”to express the proper state of belief, not one number but two are requisite, the first depending on the inferred probability, the second on the amount of knowledge on which that probability is based”). In the quest for calculable risk, one turns a blind eye to genuine uncertainty and looks the other way.

What counts as evidence?

10 January, 2019 at 15:29 | Posted in Theory of Science & Methodology | Comments Off on What counts as evidence?

What counts as evidence? I suspect we tend to overweight some kinds of evidence, and underweight others.

Yeh’s paper is a lovely illustration of a general problem with randomized controlled trials – that they tell us how a treatment worked under particular circumstances, but are silent about its effects in other circumstances. They can lack external validity. Yeh shows that parachutes are useless for someone jumping from a plane when it is on the ground. But this tells us nothing about their value when the plane is in the air – which is an important omission.

We should place this problem with RCTs alongside two other Big Facts in the social sciences. One is the replicability crisis … The other (related) is the fetishization of statistical significance despite the fact that, as Deirdre McCloskey has said, it “has little to do with a defensible notion of scientific inference, error analysis, or rational decision making” and “is neither necessary nor sufficient for proving discovery of a scientific or commercially relevant result.”

If we take all this together, it suggests that a lot of conventional evidence isn’t as compelling as it seems. Which suggests that maybe the converse is true.

Stumbling and Mumbling

Why I am not a Bayesian

6 January, 2019 at 19:30 | Posted in Theory of Science & Methodology | 2 Comments

No matter how atheoretical their inclination, scientists are interested in relations between properties of phenomena, not in lists of readings from dials of instruments that detect those properties …

Here as elsewhere, Bayesian philosophy of science obscures a difference between scientists’ problems of hypothesis choice and the problems of prediction that are the standard illustrations and applications of probability theory. In the latter situations, such as the standard guessing games about coins and urns, investigators know an enormous amount about the reality they are examining, including the effects of different values of the unknown factor. Scientists can rarely take that much knowledge for granted. It should not be surprising if an apparatus developed to measure degrees of belief in situations of isolated and precisely regimented uncertainty turns out to be inaccurate, irrelevant or incoherent in the face of the latter, much more radical uncertainty.

Richard W. Miller

Although Bayesians think otherwise, to me there’s nothing magical about Bayes’ theorem. The important thing in science is for you to have strong evidence. If your evidence is strong, then applying Bayesian probability calculus is rather unproblematic. Otherwise — garbage in, garbage out. Applying Bayesian probability calculus to subjective beliefs founded on weak evidence is not a recipe for scientific progress. It is important not to equate science with statistical calculation or applied probability theory. All science entails human judgement, and using statistical models doesn’t relieve us of that necessity. Statistical models are no substitutes for doing real science. Although Bayesianism has tried to extend formal deductive logic into real-world settings via probability theory, this is not a viable scientific way forward. Choosing between theories and hypotheses can never be a question of inner coherence and consistency. Bayesian probabilism says absolutely nothing about reality.

Rejecting probabilism, Popper not only rejects Carnap-style logic of confirmation, he denies scientists are interested in highly probable hypotheses … They seek bold, informative, interesting conjectures and ingenious and severe attempts to refute them.

Deborah Mayo

Why Bayesianism has not resolved a single fundamental scientific dispute

3 January, 2019 at 23:10 | Posted in Economics, Theory of Science & Methodology | 3 Comments

Bayesian reasoning works, undeniably, where we know (or are ready to assume) that the process studied fits certain special though abstract causal structures, often called ‘statistical models’ … However, when we choose among hypotheses in important scientific controversies, we usually lack such prior knowledge of causal structures, or it is irrelevant to the choice. As a consequence, such Bayesian inference to the preferred alternative has not resolved, even temporarily, a single fundamental scientific dispute.

Mainstream economics nowadays usually assumes that agents that have to make choices under conditions of uncertainty behave according to Bayesian rules (preferably the ones axiomatized by Ramsey (1931), de Finetti (1937) or Savage (1954)) — that is, they maximize expected utility with respect to some subjective probability measure that is continually updated according to Bayes’ theorem. If not, they are supposed to be irrational, and ultimately — via some “Dutch book” or “money pump” argument — susceptible to being ruined by some clever “bookie”.
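For readers unfamiliar with the ‘Dutch book’ argument invoked here, a toy version (my own construction, with invented numbers) shows how incoherent degrees of belief expose an agent to a guaranteed loss:

```python
# Incoherent beliefs: the agent's probabilities for an event and its
# complement sum to 1.2 instead of 1.
belief_rain = 0.6
belief_no_rain = 0.6

# A bet on X at degree of belief p: the agent pays p for a ticket
# worth 1 if X occurs. The bookie sells the agent both tickets.
stake = belief_rain + belief_no_rain  # agent pays 1.2 in total
payout = 1.0  # exactly one of the two events occurs, paying 1

print(stake - payout)  # 0.2 -> sure profit for the bookie, rain or shine
```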

Bayesianism reduces questions of rationality to questions of internal consistency (coherence) of beliefs. But — even granted this questionable reductionism — do rational agents really have to be Bayesian? There are no strong warrants for believing so.

In many of the situations that are relevant to economics, one could argue that there is simply not enough adequate and relevant information to ground beliefs of a probabilistic kind, and that in those situations it is not really possible, in any relevant way, to represent an individual’s beliefs in a single probability measure.

Say you have come to learn (based on your own experience and tons of data) that the probability of you becoming unemployed in the US is 10%. Having moved to another country (where you have no experience of your own and no data) you have no information on unemployment and a fortiori nothing on which to ground any probability estimate. A Bayesian would, however, argue that you have to assign probabilities to the mutually exclusive alternative outcomes and that these have to add up to 1 if you are rational. That is, in this case — and based on symmetry — a rational individual would have to assign a probability of 50% to becoming unemployed and 50% to becoming employed.

That feels intuitively wrong though, and I guess most people would agree. Bayesianism cannot distinguish symmetry-based probabilities grounded in information from symmetry-based probabilities grounded in an absence of information. In these kinds of situations, most of us would rather say that it is simply irrational to be a Bayesian and better instead to admit that we “simply do not know” or that we feel ambiguous and undecided. Arbitrary and ungrounded probability claims are more irrational than being undecided in the face of genuine uncertainty, so if there is not sufficient information to ground a probability distribution it is better to acknowledge that simpliciter, rather than pretending to possess a certitude that we simply do not possess.

We live in a world permeated by unmeasurable uncertainty – not quantifiable stochastic risk – which often forces us to make decisions based on anything but rational expectations. Sometimes we ‘simply do not know.’ There are no strong reasons why we should accept the Bayesian view of modern mainstream economists, according to whom expectations “tend to be distributed, for the same information set, about the prediction of the theory.” As argued by Keynes, we rather base our expectations on the confidence or “weight” we put on different events and alternatives. Expectations are a question of weighing probabilities by ‘degrees of belief,’ beliefs that standardly have preciously little to do with the kind of stochastic probabilistic calculations made by the rational agents modelled by mainstream economists.

The essence of scientific reasoning

26 December, 2018 at 16:54 | Posted in Theory of Science & Methodology | 5 Comments

In deductive reasoning all knowledge obtainable is already latent in the postulates. Rigour is needed to prevent the successive inferences growing less and less accurate as we proceed. The conclusions are never more accurate than the data. In inductive reasoning we are performing part of the process by which new knowledge is created. The conclusions normally grow more and more accurate as more data are included. It should never be true, though it is still often said, that the conclusions are no more accurate than the data on which they are based.

R. A. Fisher

 
In science we standardly use a logically non-valid inference — the fallacy of affirming the consequent — of the following form:

(1) p => q
(2) q
————-
p

or, in instantiated form

(1) ∀x (Gx => Px)
(2) Pa
————
Ga

Although logically invalid, it is nonetheless a kind of inference — abduction — that may be factually strongly warranted and truth-producing.

Following the general pattern ‘Evidence => Explanation => Inference’ we infer something based on what would be the best explanation given the law-like rule (premise 1) and an observation (premise 2). The truth of the conclusion (explanation) is nothing that is logically given, but something we have to justify, argue for, and test in different ways in order to establish it with any degree of certainty. And as always when we deal with explanations, what is considered best is relative to what we know of the world. In the real world, all evidence is relational (evidence only counts as evidence in relation to a specific hypothesis) and has an irreducible holistic aspect. We never conclude that evidence follows from a hypothesis simpliciter, but always given some more or less explicitly stated contextual background assumptions. All non-deductive inferences and explanations are necessarily context-dependent.

If we extend the abductive scheme to incorporate the demand that the explanation has to be the best among a set of plausible competing potential and satisfactory explanations, we have what is nowadays usually referred to as inference to the best explanation.

In inference to the best explanation we start with a body of (purported) data/facts/evidence and search for explanations that can account for these data/facts/evidence. Having the best explanation means that you, given the context-dependent background assumptions, have a satisfactory explanation that can explain the evidence better than any other competing explanation — and so it is reasonable to consider the hypothesis to be true. Even if we (inevitably) do not have deductive certainty, our reasoning gives us a license to consider our belief in the hypothesis as reasonable.
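Schematically, and only as a sketch (the hypotheses, evidence and scores below are invented for illustration; no serious account of IBE reduces to a one-line scoring rule), the procedure can be pictured like this:

```python
# Rank candidate explanations by how well each accounts for the
# evidence, given background assumptions, and accept the best.
evidence = ["muddy boots", "wet umbrella"]

# How well each hypothesis explains each piece of evidence (0..1);
# these are context-dependent judgements, not outputs of an algorithm.
explains = {
    "it rained":        {"muddy boots": 0.8, "wet umbrella": 0.9},
    "sprinkler was on": {"muddy boots": 0.6, "wet umbrella": 0.1},
}

def explanatory_score(hypothesis):
    # Crude aggregation; real IBE also weighs plausibility, scope,
    # simplicity etc. in ways no single number captures.
    return sum(explains[hypothesis][e] for e in evidence)

best = max(explains, key=explanatory_score)
print(best)  # 'it rained' -- accepted as reasonable, not proven
```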

Accepting a hypothesis means that you believe it does explain the available evidence better than any other competing hypothesis. Knowing that we — after having earnestly considered and analysed the other available potential explanations — have been able to eliminate the competing potential explanations, warrants and enhances the confidence we have that our preferred explanation is the best explanation, i.e., the explanation that provides us (given it is true) with the greatest understanding.

This, of course, does not in any way mean that we cannot be wrong. Of course, we can. Inferences to the best explanation are fallible inferences — since the premises do not logically entail the conclusion — so from a logical point of view, inference to the best explanation is a weak mode of inference. But if the arguments put forward are strong enough, they can be warranted and give us justified true belief, and hence, knowledge, even though they are fallible inferences. As scientists we sometimes — much like Sherlock Holmes and other detectives that use inference to the best explanation reasoning — experience disillusion. We thought that we had reached a strong conclusion by ruling out the alternatives in the set of contrasting explanations. But — what we thought was true turned out to be false.

That does not necessarily mean that we had no good reasons for believing what we believed. If we cannot live with that contingency and uncertainty, well, then we are in the wrong business. If it is deductive certainty you are after, rather than the ampliative and defeasible reasoning in inference to the best explanation — well, then get into math or logic, not science.

Inference to the best explanation

21 December, 2018 at 15:41 | Posted in Theory of Science & Methodology | 15 Comments


In a time when scientific relativism is expanding, it is important to insist that science not be reduced to a purely discursive level. We have to maintain the Enlightenment tradition in which the main task of science is studying the structure of reality.

Science is made possible by the fact that there are structures that are durable and independent of our knowledge or beliefs about them. There exists a reality beyond our theories and concepts of it. Contrary to positivism, yours truly would as a critical realist argue that the main task of science is not to detect event-regularities between observed facts, but rather to identify and explain the underlying structure and forces that produce the observed events.

Given that what we are looking for is to be able to explain what is going on in the world we live in, it would — instead of building models based on logic-axiomatic, topic-neutral, context-insensitive and non-ampliative deductive reasoning, as in mainstream economic theory — be so much more fruitful and relevant to apply inference to the best explanation.

Why all models are wrong

13 December, 2018 at 17:53 | Posted in Economics, Theory of Science & Methodology | 11 Comments

Models share three common characteristics: First, they simplify, stripping away unnecessary details, abstracting from reality, or creating anew from whole cloth. Second, they formalize, making precise definitions. Models use mathematics, not words … Models create structures within which we can think logically … But the logic comes at a cost, which leads to their third characteristic: all models are wrong … Models are wrong because they simplify. They omit details. By considering many models, we can overcome the narrowing of rigor by crisscrossing the landscape of the possible.

To rely on a single model is hubris. It invites disaster … We need many models to make sense of complex systems.

Yes indeed. To rely on a single mainstream economic theory and its models is hubris. It certainly does invite disaster. To make sense of complex economic phenomena we need many theories and models. We need pluralism, both in theories and in methods.

Using ‘simplifying’ mathematical tractability assumptions — rational expectations, common knowledge, representative agents, linearity, additivity, ergodicity, etc. — because otherwise they cannot ‘manipulate’ their models or come up with ‘rigorous’ and ‘precise’ predictions and explanations, does not exempt economists from having to justify their modelling choices. Being able to ‘manipulate’ things in models cannot per se be enough to warrant a methodological choice. If economists do not think their tractability assumptions make for good and realist models, it is certainly a just question to ask for clarification of the ultimate goal of the whole modelling endeavour.

The final court of appeal for models is not whether we — once we have made our tractability assumptions — can ‘manipulate’ them, but the real world. And as long as no convincing justification is put forward for how the inferential bridging de facto is made, model building is little more than hand-waving that gives us rather little warrant for making inductive inferences from models to the real world.

Mainstream economists construct closed formalistic-mathematical theories and models for the purpose of being able to deliver purportedly rigorous deductions that may somehow be exportable to the target system. By analyzing a few causal factors in their ‘laboratories’ they hope they can perform ‘thought experiments’ and observe how these factors operate on their own and without impediments or confounders.

Unfortunately, this is not so. The reason is that economic causes never act in a socio-economic vacuum. Causes have to be set in a contextual structure to be able to operate. This structure has to take some form or other, but instead of incorporating structures that are true to the target system, the settings made in mainstream economic models are rather based on formalistic mathematical tractability. In the models they often appear as unrealistic ‘tractability’ assumptions, usually playing a decisive role in getting the deductive machinery to deliver ‘precise’ and ‘rigorous’ results. This, of course, makes exporting to real-world target systems problematic, since these models – as part of a deductivist covering-law tradition in economics – are thought to deliver general and far-reaching conclusions that are externally valid. But how can we be sure the lessons learned in these theories and models have external validity when they are based on highly specific unrealistic assumptions? As a rule, the more specific and concrete the structures, the less generalizable the results. Admitting that we in principle can move from (partial) falsehoods in theories and models to truth in real-world target systems does not take us very far unless a thorough explication of the relation between theory, model and real-world target system is made. To have a deductive warrant for things happening in a closed model is no guarantee that they are preserved when applied to an open real-world target system.

If the ultimate criterion of success for a deductivist system is the extent to which it predicts and coheres with (parts of) reality, modern mainstream economics seems to be a hopeless misallocation of scientific resources. To focus scientific endeavours on proving things in models is a gross misapprehension of what an economic theory ought to be about. Real-world economic systems do not conform to the restricted closed-system structure the mainstream modelling strategy presupposes.

What is wrong with mainstream economics is not that it employs models per se. What is wrong is that it employs poor models. They — and the tractability assumptions on which they to a large extent build — are poor because they do not bridge to the real world in which we live. And — as Page writes — “if a model cannot explain, predict, or help us reason, we must set it aside.”

Ten theory of science books that should be on every economist’s reading list

22 November, 2018 at 17:28 | Posted in Theory of Science & Methodology | Comments Off on Ten theory of science books that should be on every economist’s reading list


• Archer, Margaret (1995). Realist social theory: the morphogenetic approach. Cambridge: Cambridge University Press

• Bhaskar, Roy (1978). A realist theory of science. Hassocks: Harvester

• Cartwright, Nancy (2007). Hunting causes and using them. Cambridge: Cambridge University Press

• Chalmers, Alan (2013). What is this thing called science? 4th ed. Buckingham: Open University Press

• Garfinkel, Alan (1981). Forms of explanation: rethinking the questions in social theory. New Haven: Yale University Press

• Harré, Rom (1960). An introduction to the logic of the sciences. London: Macmillan

• Lawson, Tony (1997). Economics and reality. London: Routledge

• Lieberson, Stanley (1987). Making it count: the improvement of social research and theory. Berkeley: University of California Press

• Lipton, Peter (2004). Inference to the best explanation. 2nd ed. London: Routledge

• Miller, Richard (1987). Fact and method: explanation, confirmation and reality in the natural and the social sciences. Princeton, N.J.: Princeton University Press

Superficial ‘precision’

22 November, 2018 at 17:12 | Posted in Theory of Science & Methodology | Comments Off on Superficial ‘precision’

One problem among social researchers is the tendency to view specificity and concreteness as equal to Science and Rigorous Thinking … But this ‘precision’ is often achieved only by analyzing surface causes because some of them are readily measured and operationalized. This is hardly sufficient reason for turning away from broad generalizations and causal principles.

Truth and probability

11 November, 2018 at 13:05 | Posted in Theory of Science & Methodology | 1 Comment

Truth exists, and so does uncertainty. Uncertainty acknowledges the existence of an underlying truth: you cannot be uncertain of nothing: nothing is the complete absence of anything. You are uncertain of something, and if there is some thing, there must be truth. At the very least, it is that this thing exists. Probability, which is the science of uncertainty, therefore aims at truth. Probability presupposes truth; it is a measure or characterization of truth. Probability is not necessarily the quantification of the uncertainty of truth, because not all uncertainty is quantifiable. Probability explains the limitations of our knowledge of truth, it never denies it. Probability is purely epistemological, a matter solely of individual understanding. Probability does not exist in things; it is not a substance. Without truth, there could be no probability.

William Briggs’ approach is — as he acknowledges in the preface of his interesting and thought-provoking book — “closely aligned to Keynes’s.”

Almost a hundred years after John Maynard Keynes wrote his seminal A Treatise on Probability (1921), it is still very difficult to find statistics textbooks that seriously try to incorporate his far-reaching and incisive analysis of induction and evidential weight.

The standard view in statistics — and the axiomatic probability theory underlying it — is to a large extent based on the rather simplistic idea that ‘more is better.’ But as Keynes argues – ‘more of the same’ is not what is important when making inductive inferences. It’s rather a question of ‘more but different.’

Variation, not replication, is at the core of induction. Finding that p(x|y) = p(x|y & w) doesn’t make w ‘irrelevant.’ Knowing that the probability is unchanged when w is present gives p(x|y & w) another evidential weight (‘weight of argument’). Running 10 replicative experiments does not make you as ‘sure’ of your inductions as running 10 000 varied experiments – even if the probability values happen to be the same.

According to Keynes we live in a world permeated by unmeasurable uncertainty – not quantifiable stochastic risk – which often forces us to make decisions based on anything but ‘rational expectations.’ Keynes rather thinks that we base our expectations on the confidence or ‘weight’ we put on different events and alternatives. To Keynes, expectations are a question of weighing probabilities by ‘degrees of belief,’ beliefs that often have preciously little to do with the kind of stochastic probabilistic calculations made by the rational agents as modelled by “modern” social sciences. And often we ‘simply do not know.’ As Keynes writes in Treatise:

The kind of fundamental assumption about the character of material laws, on which scientists appear commonly to act, seems to me to be [that] the system of the material universe must consist of bodies … such that each of them exercises its own separate, independent, and invariable effect, a change of the total state being compounded of a number of separate changes each of which is solely due to a separate portion of the preceding state …  In my judgment, the practical usefulness of those modes of inference … on which the boasted knowledge of modern science depends, can only exist … if the universe of phenomena does in fact present those peculiar characteristics of atomism and limited variety which appears more and more clearly as the ultimate result to which material science is tending.

Science according to Keynes should help us penetrate to “the true process of causation lying behind current events” and disclose “the causal forces behind the apparent facts.” Models can never be more than a starting point in that endeavour. He further argued that it was inadmissible to project history onto the future. Consequently, we cannot presuppose that what has worked before, will continue to do so in the future. That statistical models can get hold of correlations between different ‘variables’ is not enough. If they cannot get at the causal structure that generated the data, they are not really ‘identified.’

How strange that writers of statistics textbooks, as a rule, do not even touch upon these aspects of scientific methodology that seem to be so fundamental and important for anyone trying to understand how we learn and orient ourselves in an uncertain world. An educated guess as to why this is so would be that Keynes’s concepts are not possible to squeeze into a single calculable numerical ‘probability.’ In the quest for quantities one turns a blind eye to qualities and looks the other way – but Keynes’s ideas keep creeping out from under the statistics carpet.

It’s high time that statistics textbooks give Keynes his due.

Richard Feynman on mathematics

7 November, 2018 at 20:18 | Posted in Economics, Theory of Science & Methodology | 1 Comment

In a comment on one of yours truly’s posts last week, Jorge Buzaglo wrote this truly interesting comment:

Nobel Prize winner Richard Feynman on the use of mathematics:

Mathematicians, or people who have very mathematical minds, are often led astray when “studying” economics because they lose sight of the economics. They say: ‘Look, these equations … are all there is to economics; it is admitted by the economists that there is nothing which is not contained in the equations.


The equations are complicated, but after all they are only mathematical equations and if I understand them mathematically inside out, I will understand the economics inside out.’ Only it doesn’t work that way. Mathematicians who study economics with that point of view — and there have been many of them — usually make little contribution to economics and, in fact, little to mathematics. They fail because the actual economic situations in the real world are so complicated that it is necessary to have a much broader understanding of the equations.

I have replaced the word “physics” (and similar) by the word “economics” (and similar) in this quote from page 2-1 of R. Feynman, R. Leighton and M. Sands, The Feynman Lectures on Physics, Volume II, Addison-Wesley Publishing, Reading, 1964.

