Theory-ladenness

22 May, 2022 at 23:38 | Posted in Theory of Science & Methodology | 1 Comment

It is now widely recognised that observation is not theory-neutral but theory-laden, and that theory does not merely ‘order facts’ but makes claims about the nature of its object. So, in evaluating observations we are also assessing particular theoretical concepts and existential claims. A common response to this shattering of innocent beliefs in the certainty and neutrality of observation has been the development of idealist (especially conventionalist and rationalist) philosophies which assume that if observation is theory-laden, it must necessarily be theory-determined, such that it is no longer possible to speak of criteria of ‘truth’ or ‘objectivity’ which are not entirely internal to ‘theoretical discourse’. However, this is a non-sequitur for at least two reasons. First, theory-laden observation need not be theory-determined. Even the arch-conventionalist Feyerabend (1970) acknowledges that ‘it is possible to refute a theory by an experience that is entirely interpreted within its own terms’. If I ask how many leaves there are on a tree, my empirical observation will be controlled by concepts regarding the nature of trees, leaves and the operation of counting, but to give an answer I’d still have to go and look! In arguing that there are no extra-discursive criteria of truth, recent idealists such as Hindess and Hirst echo Wittgenstein’s identification of the limits of our world with the limits of language, and share the confusion of questions of What exists? with What can be known to exist? The truism that extra-discursive controls on knowledge can only be referred to in discourse does not mean that what is referred to is purely internal to discourse. Secondly, and more simply, it does not follow from the fact that all knowledge is fallible, that it is all equally fallible.

Andrew Sayer

Gödel and the limits of mathematics

10 May, 2022 at 16:50 | Posted in Theory of Science & Methodology | 1 Comment


Gödel’s incompleteness theorems raise important questions about the foundations of mathematics.

The most important concern is the question of how to select the specific systems of axioms that mathematics is supposed to be founded on. Gödel’s theorems irrevocably show that no matter what system is chosen, there will always be true statements it cannot prove, so new axioms will always be needed to prove previously unproven truths.

This, of course, ought to be of paramount interest for those mainstream economists who still adhere to the dream of constructing deductive-axiomatic economics with analytic truths that do not require empirical verification. Since Gödel showed that any sufficiently rich consistent axiomatic system is incomplete, any such deductive-axiomatic economics will always contain undecidable statements. If not even mathematics can be given a complete and consistent axiomatic foundation, it is totally incomprehensible that some people still think this could be achieved for economics.

Separating questions of logic and empirical validity may — of course — help economists to focus on producing rigorous and elegant mathematical theorems that people like Lucas and Sargent consider “progress in economic thinking.” To most other people, not being concerned with empirical evidence and model validation is a sign of social science becoming totally useless and irrelevant. Economic theories built on assumptions known to be ridiculously artificial, with no explicit relationship to the real world, are a dead end. That’s probably also the reason why general equilibrium analysis today (at least outside Chicago) is considered a total waste of time. In the trade-off between relevance and rigour, priority should always be given to the former when it comes to social science. The only thing followers of the Bourbaki tradition within economics — like Karl Menger, John von Neumann, Gerard Debreu, Robert Lucas, and Thomas Sargent — have given us are irrelevant model abstractions with no bridges to real-world economies. It’s difficult to find a more poignant example of intellectual resource waste in science.

Enlightenment and mathematics

9 May, 2022 at 09:27 | Posted in Theory of Science & Methodology | 6 Comments

When in mathematics the unknown becomes the unknown quantity in an equation, it is made into something long familiar before any value has been assigned. Nature, before and after quantum theory, is what can be registered mathematically; even what cannot be assimilated, the insoluble and irrational, is fenced in by mathematical theorems. In the preemptive identification of the thoroughly mathematized world with truth, enlightenment believes itself safe from the return of the mythical. It equates thought with mathematics. The latter is thereby cut loose, as it were, turned into an absolute authority …

The reduction of thought to a mathematical apparatus condemns the world to be its own measure. What appears as the triumph of subjectivity, the subjection of all existing things to logical formalism, is bought with the obedient subordination of reason to what is immediately at hand. To grasp existing things as such, not merely to note their abstract spatial-temporal relationships, by which they can then be seized, but, on the contrary, to think of them as surface, as mediated conceptual moments which are only fulfilled by revealing their social, historical, and human meaning—this whole aspiration of knowledge is abandoned. Knowledge does not consist in mere perception, classification, and calculation but precisely in the determining negation of whatever is directly at hand. Instead of such negation, mathematical formalism, whose medium, number, is the most abstract form of the immediate, arrests thought at mere immediacy. The actual is validated, knowledge confines itself to repeating it, thought makes itself mere tautology. The more completely the machinery of thought subjugates existence, the more blindly it is satisfied with reproducing it. Enlightenment thereby regresses to the mythology it has never been able to escape …

The subsumption of the actual, whether under mythical prehistory or under mathematical formalism, the symbolic relating of the present to the mythical event in the rite or to the abstract category in science, makes the new appear as something predetermined which therefore is really the old. It is not existence that is without hope, but knowledge which appropriates and perpetuates existence as a schema in the pictorial or mathematical symbol.

Foucault’s cryptonormative approach — a critique

2 May, 2022 at 18:38 | Posted in Theory of Science & Methodology | 1 Comment

I always found Foucault’s work frustrating to read. His empirical accounts are interesting and some of his concepts fruitful – disciplinary power, capillary power, surveillance, technologies of the self, the entrepreneur of the self, for example – and he was prescient about neoliberalism, but his theoretical reasoning is often confused. His attempts to define power, and his unacknowledged slippage between different concepts of truth in The History of Sexuality Part I are examples …

Foucault’s accounts of the social world have a generally ominous tone, but they fail to identify what is bad and why, so one is left wanting to write ‘so what?’ in the margin. Thus, sociologists of health sciences inspired by him would often describe certain practices as involving the ‘medicalization’ of certain conditions without saying whether this was appropriate or inappropriate, or good or bad, and why. If we don’t know whether people are harmed or benefitted by a practice, then we don’t know much about it; cryptonormative accounts of social life are also deficient as descriptions.

Actually, the problem goes beyond Foucault: self-styled critical social science has often failed to explore in any depth the normative issues concerning what is bad about the objects of its critiques, as if it could rely on readers reading between the lines in the desired way. This was an effect of the unhappy divorce of positive and normative thought in social science. Tellingly, Foucault invoked the is-ought framework to defend his refusal of normativity, saying that it was not his job to tell people what to do, as if normativity were primarily about instructions rather than evaluations. While post-structuralism did provide novel insights, the combination of its resistance to normativity (as reducible to the limitation of possibilities through ‘normalizing’) and its anti-humanism (‘humanist’ became another sneer term) also made social science less critical.

Andrew Sayer (interviewed by Jamie Morgan)

Was ist Kausalität?

31 Jan, 2022 at 16:30 | Posted in Theory of Science & Methodology | Comments Off on Was ist Kausalität?


The Holy Grail of Science

28 Jan, 2022 at 14:01 | Posted in Theory of Science & Methodology | 1 Comment

Traditionally, philosophers have focused mostly on the logical template of inference. The paradigm-case has been deductive inference, which is topic-neutral and context-insensitive. The study of deductive rules has engendered the search for the Holy Grail: syntactic and topic-neutral accounts of all prima facie reasonable inferential rules. The search has hoped to find rules that are transparent and algorithmic, and whose following will just be a matter of grasping their logical form. Part of the search for the Holy Grail has been to show that the so-called scientific method can be formalised in a topic-neutral way. We are all familiar with Carnap’s inductive logic, or Popper’s deductivism or the Bayesian account of scientific method.

There is no Holy Grail to be found. There are many reasons for this pessimistic conclusion. First, it is questionable that deductive rules are rules of inference. Second, deductive logic is about updating one’s belief corpus in a consistent manner and not about what one has reasons to believe simpliciter. Third, as Duhem was the first to note, the so-called scientific method is far from algorithmic and logically transparent. Fourth, all attempts to advance coherent and counterexample-free abstract accounts of scientific method have failed. All competing accounts seem to capture some facets of scientific method, but none can tell the full story. Fifth, though the new Dogma, Bayesianism, aims to offer a logical template (Bayes’s theorem plus conditionalisation on the evidence) that captures the essential features of non-deductive inference, it is betrayed by its topic-neutrality. It supplements deductive coherence with the logical demand for probabilistic coherence among one’s degrees of belief. But this extended sense of coherence is (almost) silent on what an agent must infer or believe.

Stathis Psillos

Statistical philosophies and idealizations

18 Jan, 2022 at 18:16 | Posted in Theory of Science & Methodology | 2 Comments

As has been long and widely emphasized in various terms … frequentism and Bayesianism are incomplete both as learning theories and as philosophies of statistics, in the pragmatic sense that each alone are insufficient for all sound applications. Notably, causal justifications are the foundation for classical frequentism, which demands that all model constraints be deduced from real mechanical constraints on the physical data-generating process. Nonetheless, it seems modeling analyses in health, medical, and social sciences rarely have such physical justification …

The deficiency of strict coherent (operational subjective) Bayesianism is its assumption that all aspects of this uncertainty have been captured by the prior and likelihood, thus excluding the possibility of model misspecification. DeFinetti himself was aware of this limitation:

“…everything is based on distinctions which are themselves uncertain and vague, and which we conventionally translate into terms of certainty only because of the logical formulation…In the mathematical formulation of any problem it is necessary to base oneself on some appropriate idealizations and simplification. This is, however, a disadvantage; it is a distorting factor which one should always try to keep in check, and to approach circumspectly. It is unfortunate that the reverse often happens. One loses sight of the original nature of the problem, falls in love with the idealization, and then blames reality for not conforming to it.” [DeFinetti 1975, p. 279]

By asking for physically causal justifications of the data distributions employed in statistical analyses (whether those analyses are labeled frequentist or Bayesian), we may minimize the excessive certainty imposed by simply assuming a probability model and proceeding as if that idealization were a known fact.

Sander Greenland

Bayesian superficiality

15 Jan, 2022 at 19:06 | Posted in Theory of Science & Methodology | Comments Off on Bayesian superficiality

The bias toward the superficial and the response to extraneous influences on research are both examples of real harm done in contemporary social science by a roughly Bayesian paradigm of statistical inference as the epitome of empirical argument. For instance the dominant attitude toward the sources of black-white differential in United States unemployment rates (routinely the rates are in a two to one ratio) is “phenomenological.” The employment differences are traced to correlates in education, locale, occupational structure, and family background. The attitude toward further, underlying causes of those correlations is agnostic … Yet on reflection, common sense dictates that racist attitudes and institutional racism must play an important causal role. People do have beliefs that blacks are inferior in intelligence and morality, and they are surely influenced by these beliefs in hiring decisions … Thus, an overemphasis on Bayesian success in statistical inference discourages the elaboration of a type of account of racial disadvantages that almost certainly provides a large part of their explanation.

Richard W. Miller

Scientific realism and inference to the best explanation

15 Jan, 2022 at 16:28 | Posted in Theory of Science & Methodology | 2 Comments

In inference to the best explanation we start with a body of (purported) data/facts/evidence and search for explanations that can account for these data/facts/evidence. Having the best explanation means that you, given the context-dependent background assumptions, have a satisfactory explanation that can explain the fact/evidence better than any other competing explanation — and so it is reasonable to consider/believe the hypothesis to be true. Even if we (inevitably) do not have deductive certainty, our reasoning gives us a license to consider our belief in the hypothesis as reasonable.

Accepting a hypothesis means that you believe it does explain the available evidence better than any other competing hypothesis. Knowing that we — after having earnestly considered and analysed the other available potential explanations — have been able to eliminate the competing potential explanations, warrants and enhances the confidence we have that our preferred explanation is the best explanation, i.e., the explanation that provides us (given it is true) with the greatest understanding.

This, of course, does not in any way mean that we cannot be wrong. Of course we can. Inferences to the best explanation are fallible inferences — since the premises do not logically entail the conclusion — so from a logical point of view, inference to the best explanation is a weak mode of inference. But if the arguments put forward are strong enough, they can be warranted and give us justified true belief, and hence, knowledge, even though they are fallible inferences. As scientists we sometimes — much like Sherlock Holmes and other detectives that use inference to the best explanation reasoning — experience disillusion. We thought that we had reached a strong conclusion by ruling out the alternatives in the set of contrasting explanations. But — what we thought was true turned out to be false.

That does not necessarily mean that we had no good reasons for believing what we believed. If we cannot live with that contingency and uncertainty, well, then we are in the wrong business. If it is deductive certainty you are after, rather than the ampliative and defeasible reasoning in inference to the best explanation — well, then get into math or logic, not science.

What exactly is the inference in ‘inference to the best explanation’, what are the premises, and what the conclusion? …

It is reasonable to believe that the best available explanation of any fact is true.
F is a fact.
Hypothesis H explains F.
No available competing hypothesis explains F as well as H does.
Therefore, it is reasonable to believe that H is true.

This scheme is valid and instances of it might well be sound. Inferences of this kind are employed in the common affairs of life, in detective stories, and in the sciences …

People object that the best available explanation might be false. Quite so – and so what? It goes without saying that any explanation might be false, in the sense that it is not necessarily true. It is absurd to suppose that the only things we can reasonably believe are necessary truths …

People object that being the best available explanation of a fact does not prove something to be true or even probable. Quite so – and again, so what? The explanationist principle – “It is reasonable to believe that the best available explanation of any fact is true” – means that it is reasonable to believe or think true things that have not been shown to be true or probable, more likely true than not.

Alan Musgrave

Why Bayesianism doesn’t resolve scientific disputes

15 Jan, 2022 at 10:41 | Posted in Theory of Science & Methodology | Comments Off on Why Bayesianism doesn’t resolve scientific disputes

The occurrence of unknown prior probabilities, that must be stipulated arbitrarily, does not worry the Bayesian any more than God’s inscrutable designs worry the theologian. Thus Lindley (1976), one of the leaders of the Bayesian school, holds that this difficulty has been ‘grossly exaggerated’. And he adds: ‘I am often asked if the [Bayesian] method gives the right answer: or, more particularly, how do you know if you have got the right prior [probability]. My reply is that I don’t know what is meant by ‘right’ in this context. The Bayesian theory is about coherence, not about right or wrong.’ Thus the Bayesian, along with the philosopher who only cares about the cogency of arguments, fits in with the reasoning madman …

One should not confuse the objective probabilities of random events with mere intuitive likelihoods of such events or the plausibility (or verisimilitude) of the corresponding hypotheses in the light of background knowledge. As Peirce (1935: p. 363) put it, this confusion ‘is a fertile source of waste of time and energy’. A clear case of such waste is the current proliferation of rational-choice theories in the social sciences, to model processes that are far from random, from marriage to crime to business transactions to political struggles.

Mario Bunge

Beyond Bayesian probabilism

13 Jan, 2022 at 23:44 | Posted in Theory of Science & Methodology | 7 Comments

Although Bayes’ theorem is mathematically unquestionable, that doesn’t qualify it as indisputably applicable to scientific questions. Bayesian statistics is one thing, and Bayesian epistemology something else. Science is not reducible to betting, and scientific inference is not a branch of probability theory. It always transcends mathematics. The unfulfilled dream of constructing an inductive logic of probabilism — the Bayesian Holy Grail — will always remain unfulfilled.

Bayesian probability calculus is far from the automatic inference engine that its protagonists maintain it is. That probabilities may work for expressing uncertainty when we pick balls from an urn does not automatically make them relevant for making inferences in science. Where do the priors come from? Wouldn’t it be better in science if we did some scientific experimentation and observation when we are uncertain, rather than starting to make calculations based on often vague and subjective personal beliefs? People have a lot of beliefs, and when they are plainly wrong, we should not do any calculations whatsoever on them. We simply reject them. Is it, from an epistemological point of view, really credible to think that the Bayesian probability calculus makes it possible to somehow fully assess people’s subjective beliefs? And are — as many Bayesians maintain — all scientific controversies and disagreements really possible to explain in terms of differences in prior probabilities? I strongly doubt it.

I want to know what my personal probability ought to be, partly because I want to behave sensibly and much more importantly because I am involved in the writing of a report which wants to be generally convincing. I come to the conclusion that my personal probability is of little interest to me and of no interest whatever to anyone else unless it is based on serious and so far as feasible explicit information. For example, how often have very broadly comparable laboratory studies been misleading as regards human health? How distant are the laboratory studies from a direct process affecting health? The issue is not to elicit how much weight I actually put on such considerations but how much I ought to put. Now of course in the personalistic [Bayesian] approach having (good) information is better than having none but the point is that in my view the personalistic probability is virtually worthless for reasoned discussion unless it is based on information, often directly or indirectly of a broadly frequentist kind. The personalistic approach as usually presented is in danger of putting the cart before the horse.

David Cox

[Added 21.15: Those interested in these questions, do read Sander Greenland’s insightful comment.]

Bayesianism — the new positivism

12 Jan, 2022 at 23:21 | Posted in Theory of Science & Methodology | 6 Comments

No matter how atheoretical their inclination, scientists are interested in relations between properties of phenomena, not in lists of readings from dials of instruments that detect those properties …

Here as elsewhere, Bayesian philosophy of science obscures a difference between scientists’ problems of hypothesis choice and the problems of prediction that are the standard illustrations and applications of probability theory. In the latter situations, such as the standard guessing games about coins and urns, investigators know an enormous amount about the reality they are examining, including the effects of different values of the unknown factor. Scientists can rarely take that much knowledge for granted. It should not be surprising if an apparatus developed to measure degrees of belief in situations of isolated and precisely regimented uncertainty turns out to be inaccurate, irrelevant or incoherent in the face of the latter, much more radical uncertainty.

For all scholars seriously interested in what makes a good scientific explanation, Richard Miller’s Fact and Method is a must-read. His incisive critique of Bayesianism is still unsurpassed.

Given that we study processes that are adequately captured by our statistical models (think of urns, cards, coins, etc), Bayesian reasoning works. The problem, however, is that when we choose among scientific hypotheses, we standardly lack that kind of knowledge. As a consequence — as Miller puts it — “Bayesian inference to the preferred alternative has not resolved, even temporarily, a single fundamental scientific dispute.”

Assume you’re a Bayesian turkey/chicken and hold a nonzero probability belief in the hypothesis H that “people are nice vegetarians who do not eat turkeys/chickens” — a belief that every sunrise you live to see seems to confirm. For every day you survive, you update your belief according to Bayes’ Rule

P(H|e) = [P(e|H)P(H)]/P(e),

where evidence e stands for “not being eaten” and P(e|H) = 1. Given that there do exist other hypotheses than H, P(e) is less than 1 and a fortiori P(H|e) is greater than P(H). Every day you survive increases your probability belief that you will not be eaten. This is totally rational according to the Bayesian definition of rationality. Unfortunately — as Bertrand Russell famously noticed — for every day that goes by, the traditional Christmas dinner also gets closer and closer …
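A minimal numerical sketch of this update (the prior and the alternative-hypothesis likelihood are illustrative assumptions of mine, not anything given above): with P(e|H) = 1 and survival slightly less than certain under the alternative, the turkey’s confidence in H grows a little every day it is not eaten.

# Bayesian turkey: updating P(H) on the daily evidence e = "not eaten today".
# All numbers are illustrative assumptions, not taken from the post.
p_H = 0.5               # prior belief in H: "people are nice vegetarians"
p_e_given_H = 1.0       # if H is true, survival is certain
p_e_given_notH = 0.99   # if H is false, most days still pass without slaughter

for day in range(1, 301):
    p_e = p_e_given_H * p_H + p_e_given_notH * (1 - p_H)  # law of total probability
    p_H = p_e_given_H * p_H / p_e                          # Bayes' rule: P(H|e)
    if day % 100 == 0:
        print(f"day {day}: P(H) = {p_H:.3f}")

# P(H) climbs steadily towards 1 -- right up until Christmas Eve.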

Bayes and the ‘old evidence’ problem

10 Jan, 2022 at 23:32 | Posted in Theory of Science & Methodology | 1 Comment

Among the many achievements of Newton’s theory of gravitation was its prediction of the tides and their relation to the lunar orbit. Presumably the success of this prediction confirmed Newton’s theory, or in Bayesian terms, the observable facts about the tides e raised the probability of Newton’s theory h.

But the Bayesian, it turns out, can make no such claim. Because the facts about the tides were already known when Newton’s theory was formulated, the probability for e was equal to one. It follows immediately that both C(e) and C(e|h) are equal to one (the latter for any choice of h). But then the Bayesian multiplier is also one, so Newton’s theory does not receive any probability boost from its prediction of the tides. As either a description of actual scientific practice, or a prescription for ideal scientific practice, this is surely wrong.

The problem generalizes to any case of “old evidence”: If the evidence e is received before a hypothesis h is formulated then e is incapable of boosting the probability of h by way of conditionalization. As is often remarked, the problem of old evidence might just as well be called the problem of new theories, since there would be no difficulty if there were no new theories, that is, if all theories were on the table before the evidence began to arrive. Whatever you call it, the problem is now considered by most Bayesians to be in urgent need of a solution. A number of approaches have been suggested, none of them entirely satisfactory.

A recap of the problem: If a new theory is discovered midway through an inquiry, a prior must be assigned to that theory. You would think that, having assigned a prior on non-empirical grounds, you would then proceed to conditionalize on all the evidence received up until that point. But because old evidence has probability one, such conditionalization will have no effect. The Bayesian machinery is silent on the significance of the old evidence for the new theory.

Michael Strevens
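To see the arithmetic behind Strevens’s point, here is a back-of-the-envelope sketch (the prior and the counterfactual value of C(e) are my own illustrative assumptions): when the evidence already has probability one, the Bayesian multiplier C(e|h)/C(e) equals one and conditionalization leaves the prior untouched.

# Old evidence: conditionalizing on evidence with probability one does nothing.
# The prior (0.3) and the counterfactual C(e) = 0.6 are illustrative assumptions.
def posterior(prior_h, c_e_given_h, c_e):
    """Bayes' rule: C(h|e) = C(e|h) * C(h) / C(e)."""
    return c_e_given_h * prior_h / c_e

prior_newton = 0.3
print(posterior(prior_newton, 1.0, 1.0))  # old evidence: C(e) = C(e|h) = 1 -> 0.3, no boost
print(posterior(prior_newton, 1.0, 0.6))  # had e been uncertain, h would jump to 0.5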

The fatal flaw of mathematics

21 Nov, 2021 at 18:16 | Posted in Theory of Science & Methodology | 7 Comments


Gödel’s incompleteness theorems raise important questions about the foundations of mathematics.

The most important concern is the question of how to select the specific systems of axioms that mathematics is supposed to be founded on. Gödel’s theorems irrevocably show that no matter what system is chosen, there will always be true statements it cannot prove, so new axioms will always be needed to prove previously unproved truths.

This, of course, ought to be of paramount interest for those mainstream economists who still adhere to the dream of constructing a deductive-axiomatic economics with analytic truths that do not require empirical verification. Since Gödel showed that any sufficiently rich consistent axiomatic system is incomplete, any such deductive-axiomatic economics will always contain undecidable statements. If not even mathematics can be given a complete and consistent axiomatic foundation, it is totally incomprehensible that some people still think this could be achieved for economics.

Separating questions of logic and empirical validity may — of course — help economists to focus on producing rigorous and elegant mathematical theorems that people like Lucas and Sargent consider “progress in economic thinking.” To most other people, not being concerned with empirical evidence and model validation is a sign of social science becoming totally useless and irrelevant. Economic theories built on assumptions known to be ridiculously artificial, with no explicit relationship to the real world, are a dead end. That’s probably also the reason why general equilibrium analysis today (at least outside Chicago) is considered a total waste of time. In the trade-off between relevance and rigour, priority should always be given to the former when it comes to social science. The only thing followers of the Bourbaki tradition within economics — like Karl Menger, John von Neumann, Gerard Debreu, Robert Lucas, and Thomas Sargent — have given us are irrelevant model abstractions with no bridges to real-world economies. It’s difficult to find a more poignant example of intellectual resource waste in science.

Social mechanisms and inference to the best explanation

5 Nov, 2021 at 14:44 | Posted in Theory of Science & Methodology | 1 Comment

Epistemologically speaking, all theory is a representation of reality, an intellectual construct, and it is always abstract: it can never catch the full-bodied reality … But if we accept the theory, we accept that the generative mechanism is real. That is, not only could it have produced the outcome, but having ruled out alternative explanations, we believe that it did produce the outcome …

But we must resist an instrumentalist interpretation of social mechanisms, typical of mainstream economics … On this view, the assumptions of the mechanism need not be realistic at all. That is, not only need there be no real persons with all the attributes of the construction, but the assumptions can be contrary to facts known about them.

Peter Manicas is certainly right in emphasizing the need for non-instrumentalist interpretations of social mechanisms — but one could perhaps still wonder how we rule out “alternative explanations” and believe that the generative mechanism we have chosen really “did produce the outcome.”

In science we standardly use a logically non-valid inference — the fallacy of affirming the consequent — of the following form:

(1) p => q
(2) q
————-
p

or, in instantiated form

(1) ∀x (Gx => Px)
(2) Pa
————-
Ga

Although logically invalid, it is nonetheless a kind of inference — abduction — that may be factually strongly warranted and truth-producing.
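To make the logical point concrete, a small truth-table check (my own illustration, not from the post) exhibits the row where the premises are true and the conclusion false, which is exactly why affirming the consequent is deductively invalid even when it is abductively useful.

# Affirming the consequent: from (p => q) and q, infer p.
# The single counterexample row shows the schema is not deductively valid.
from itertools import product

for p, q in product([True, False], repeat=2):
    implies = (not p) or q       # material conditional p => q
    if implies and q and not p:  # premises true, conclusion false
        print(f"counterexample: p={p}, q={q}")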


Following the general pattern ‘Evidence => Explanation => Inference’, we infer something based on what would be the best explanation given the law-like rule (premise 1) and an observation (premise 2). The truth of the conclusion (explanation) is not logically given, but something we have to justify, argue for, and test in different ways before it can be established with any degree of certainty. And as always when we deal with explanations, what is considered best is relative to what we know of the world. In the real world, all evidence is relational (e only counts as evidence in relation to a specific hypothesis H) and has an irreducible holistic aspect. We never conclude that evidence follows from a hypothesis simpliciter, but always given some more or less explicitly stated contextual background assumptions. All non-deductive inferences and explanations are necessarily context-dependent.

If we extend the abductive scheme to incorporate the demand that the explanation has to be the best among a set of plausible competing potential and satisfactory explanations, we have what is nowadays usually referred to as inference to the best explanation.

In inference to the best explanation we start with a body of (purported) data/facts/evidence and search for explanations that can account for these data/facts/evidence. Having the best explanation means that you, given the context-dependent background assumptions, have a satisfactory explanation that can explain the evidence better than any other competing explanation — and so it is reasonable to consider the hypothesis to be true. Even if we (inevitably) do not have deductive certainty, our reasoning gives us a license to consider our belief in the hypothesis as reasonable.

Accepting a hypothesis means that you believe it does explain the available evidence better than any other competing hypothesis. Knowing that we — after having earnestly considered and analysed the other available potential explanations — have been able to eliminate the competing potential explanations, warrants and enhances the confidence we have that our preferred explanation is the best explanation, i.e., the explanation that provides us (given it is true) with the greatest understanding.

This, of course, does not in any way mean that we cannot be wrong. Of course, we can. Inferences to the best explanation are fallible inferences — since the premises do not logically entail the conclusion — so from a logical point of view, inference to the best explanation is a weak mode of inference. But if the arguments put forward are strong enough, they can be warranted and give us justified true belief, and hence, knowledge, even though they are fallible inferences. As scientists we sometimes — much like Sherlock Holmes and other detectives that use inference to the best explanation reasoning — experience disillusion. We thought that we had reached a strong conclusion by ruling out the alternatives in the set of contrasting explanations. But — what we thought was true turned out to be false.

That does not necessarily mean that we had no good reasons for believing what we believed. If we cannot live with that contingency and uncertainty, well, then we are in the wrong business. If it is deductive certainty you are after, rather than the ampliative and defeasible reasoning in inference to the best explanation — well, then get into math or logic, not science.

