Time — a question of life and death

7 February, 2016 at 15:48 | Posted in Theory of Science & Methodology | Leave a comment

It is thus seen that the time-dimension is completely ignored in spite of its crucial role in problems of this sort. For let us assume, for example, that the probability of some event E in a stochastic structure of outcomes is 1/10 000. Small though this probability is, there is a very high probability, 1-1/10000000000, that E should occur at least once in 230 000 outcomes. Now, if one outcome occurs every second, we need be willing to wait only three days in order to be fairly certain to witness E. In this case, we can hardly say that E is a rare event in time. But let the velocity of outcomes be one outcome per century, and the same E would be an extraordinary event even in the life of our planet.

Nicholas Georgescu-Roegen
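
The arithmetic in the quotation is easy to check. A minimal sketch in Python, using only the numbers given in the passage above:

p = 1 / 10_000                      # probability of E on any single outcome
n = 230_000                         # number of outcomes considered
p_at_least_once = 1 - (1 - p) ** n  # probability that E occurs at least once

print(f"P(at least one E in {n:,} outcomes) = {p_at_least_once:.10f}")  # ~ 1 - 1/10^10
print(f"{n:,} outcomes at one per second  = {n / 86_400:.1f} days")     # ~ 2.7 days
print(f"{n:,} outcomes at one per century = {n * 100:,} years")         # 23,000,000 years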


LOGIC of science vs. METHODS of science

29 January, 2016 at 17:19 | Posted in Theory of Science & Methodology | Leave a comment

 

Deduction — induction — abduction

25 January, 2016 at 14:42 | Posted in Theory of Science & Methodology | 3 Comments

 


In science – and economics – one could argue that there are basically three kinds of argumentation patterns/schemes/methods/strategies available:

Deduction

Premise 1: All Chicago economists believe in REH
Premise 2: Robert Lucas is a Chicago economist
—————————————————————–
Conclusion: Robert Lucas believes in REH

Here we have an example of a logically valid deductive inference (and, following Quine, whenever logic is used in this essay, ‘logic’ refers to deductive/analytical logic).
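
For readers who want the formal validity spelled out, here is a minimal sketch in Lean (the type, predicate and constant names are my own labels for the example, nothing more):

-- A valid deductive inference: the conclusion follows from the premises alone.
variable (Person : Type)
variable (ChicagoEconomist BelievesInREH : Person → Prop)
variable (lucas : Person)

example
    (premise1 : ∀ p, ChicagoEconomist p → BelievesInREH p)  -- Premise 1
    (premise2 : ChicagoEconomist lucas)                      -- Premise 2
    : BelievesInREH lucas :=                                 -- Conclusion
  premise1 lucas premise2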

In a hypothetico-deductive reasoning — hypothetico-deductive confirmation in this case — we would use the conclusion to test the law-like hypothesis in premise 1 (according to the hypothetico-deductive model, a hypothesis is confirmed by evidence if the evidence is deducible from the hypothesis). If Robert Lucas does not believe in REH we have gained some warranted reason for non-acceptance of the hypothesis (an obvious shortcoming here being that further information beyond that given in the explicit premises might have given another conclusion).

The hypothetico-deductive method (if we treat the hypothesis as absolutely certain/true, we speak instead of an axiomatic-deductive method) basically means that we

• Posit a hypothesis
• Infer empirically testable propositions (consequences) from it
• Test the propositions through observation or experiment
• Depending on the test results, find the hypothesis either corroborated or falsified.

However, in science we regularly use a kind of ‘practical’ argumentation where there is little room for applying the restricted logical ‘formal transformations’ view of validity and inference. Most people would probably accept the following argument as ‘valid’ reasoning even though, from a strictly logical point of view, it is invalid:

Premise 1: Robert Lucas is a Chicago economist
Premise 2: The recorded proportion of Keynesian Chicago economists is zero
————————————————————————–
Conclusion: So, certainly, Robert Lucas is not a Keynesian economist

How come? Well, I guess one reason is that in science, contrary to what you find in most logic textbooks, not very many arguments are settled by showing that ‘All Xs are Ys.’ In scientific practice we instead present other-than-analytical explicit warrants and backings — data, experience, evidence, theories, models — for our inferences. As long as we can show that our ‘deductions’ or ‘inferences’ are justifiable and have well-backed warrants, our colleagues listen to us. That our scientific ‘deductions’ or ‘inferences’ are logical non-entailments simply is not a problem. To think otherwise is to commit the fallacy of misapplying formal-analytical logic categories to areas where they are pretty much irrelevant or simply beside the point.

Scientific arguments are not analytical arguments, where validity is solely a question of formal properties. Scientific arguments are substantial arguments. Whether or not Robert Lucas is a Keynesian is not something we can decide on the formal properties of statements/propositions. We have to check what the guy has actually been writing and saying to see whether the hypothesis that he is a Keynesian is true or not.

In a deductive-nomological explanation — also known as a covering-law explanation — we would try to explain why Robert Lucas believes in REH with the help of the two premises (in this case actually giving an explanation with very little explanatory value). These kinds of explanations — in both their deterministic and statistical/probabilistic versions — rely heavily on deductive entailment from premises assumed to be true. But they have preciously little to say on where those assumed-to-be-true premises come from.

Deductive logic of confirmation and explanation may work well — given that it is used in deterministic, closed models. In mathematics, the deductive-axiomatic method has worked just fine. But science is not mathematics, and conflating those two domains of knowledge has been one of the most fundamental mistakes made in the science of economics. Applied to real-world systems, the deductive-axiomatic method immediately proves to be excessively narrow and hopelessly irrelevant. Both the confirmatory and the explanatory ilk of hypothetico-deductive reasoning fail, since there is no way you can relevantly analyze confirmation or explanation as a purely logical relation between hypothesis and evidence, or between law-like rules and explananda. In science we argue and try to substantiate our beliefs and hypotheses with reliable evidence — propositional and predicate deductive logic, on the other hand, is not about reliability, but about the validity of the conclusions given that the premises are true.

Deduction — and the inferences that go with it — is an example of ‘explicative reasoning,’ where the conclusions we make are already included in the premises. Deductive inferences are purely analytical, and it is this truth-preserving nature of deduction that makes it different from all other kinds of reasoning. But that is also its limitation, since truth in the deductive context does not refer to a real-world ontology (it only relates propositions as true or false within a formal-logic system) and as an argument scheme deduction is totally non-ampliative — the output of the analysis is nothing other than the input.

Axiomatics — the economics fetish

18 January, 2016 at 20:40 | Posted in Theory of Science & Methodology | 4 Comments

Mainstream — neoclassical — economics has become increasingly irrelevant to the understanding of the real world. The main reason for this irrelevance is the failure of economists to match their deductive-axiomatic methods with their subject.

The idea that a good scientific theory must be derived from a formal axiomatic system has little if any foundation in the methodology or history of science. Nevertheless, it has become almost an article of faith in modern economics. I am not aware, but would be interested to know, whether, and if so how widely, this misunderstanding has been propagated in other (purportedly) empirical disciplines. The requirement of the axiomatic method in economics betrays a kind of snobbishness and (I use this word advisedly, see below) pedantry, resulting, it seems, from a misunderstanding of good scientific practice …

This doesn’t mean that trying to achieve a reduction of a higher-level discipline to another, deeper discipline is not a worthy objective, but it certainly does mean that one cannot just dismiss, out of hand, a discipline simply because all of its propositions are not deducible from some set of fundamental propositions. Insisting on reduction as a prerequisite for scientific legitimacy is not a scientific attitude; it is merely a form of obscurantism …

The fetish for axiomatization in economics can largely be traced to Gerard Debreu’s great work, The Theory of Value: An Axiomatic Analysis of Economic Equilibrium … The subsequent work was then brilliantly summarized and extended in another great work, General Competitive Analysis by Arrow and Frank Hahn. Unfortunately, those two books, paragons of the axiomatic method, set a bad example for the future development of economic theory, which embarked on a needless and counterproductive quest for increasing logical rigor instead of empirical relevance …

I think that it is important to understand that there is simply no scientific justification for the highly formalistic manner in which much modern economics is now carried out. Of course, other far more authoritative critics than I, like Mark Blaug and Richard Lipsey, have complained about the insistence of modern macroeconomics on microfounded, axiomatized models regardless of whether those models generate better predictions than competing models. Their complaints have regrettably been ignored for the most part. I simply want to point out that a recent, and in many ways admirable, introduction to modern macroeconomics failed to provide a coherent justification for insisting on axiomatized models. It really wasn’t the author’s fault; a coherent justification doesn’t exist.

David Glasner

 
It is — sad to say — a fact that within mainstream economics internal validity is everything and external validity nothing. Why anyone should be interested in those kinds of theories and models — as long as mainstream economists do not come up with any export licenses for their theories and models to the real world in which we live — is beyond my imagination. Sure, the simplicity that axiomatics and analytical arguments bring to economics is attractive to many economists. But …

Simplicity, however, has its perils. It is one thing to choose as one’s first object of theoretical study the type of arguments open to analysis in the simplest terms. But it is quite another to treat this type of argument as a paradigm and to demand that arguments in other fields should conform to its standards regardless, or to build up from a study of the simplest forms of argument alone a set of categories intended for application to arguments of all sorts: one must at any rate begin by inquiring carefully how far the artificial simplicity of one’s chosen model results in these logical categories also being artificially simple. The sorts of risks one runs otherwise are obvious enough. Distinctions which all happen to cut along the same line for the simplest arguments may need to be handled quite separately in the general case; if we forget this, and our new-found logical categories yield paradoxical results when applied to more complex arguments, we may be tempted to put these results down to defects in the arguments instead of in our categories; and we may end up by thinking that, for some regrettable reason hidden deep in the nature of things, only our original, peculiarly simple arguments are capable of attaining to the ideal of validity.

Bayes theorem — what’s the big deal?

7 January, 2016 at 20:04 | Posted in Theory of Science & Methodology | 2 Comments

The plausibility of your belief depends on the degree to which your belief–and only your belief–explains the evidence for it. The more alternative explanations there are for the evidence, the less plausible your belief is. That, to me, is the essence of Bayes’ theorem.


“Alternative explanations” can encompass many things. Your evidence might be erroneous, skewed by a malfunctioning instrument, faulty analysis, confirmation bias, even fraud. Your evidence might be sound but explicable by many beliefs, or hypotheses, other than yours.

In other words, there’s nothing magical about Bayes’ theorem. It boils down to the truism that your belief is only as valid as its evidence. If you have good evidence, Bayes’ theorem can yield good results. If your evidence is flimsy, Bayes’ theorem won’t be of much use. Garbage in, garbage out.

The potential for Bayes abuse begins with your initial estimate of the probability of your belief, often called the “prior.” …

In many cases, estimating the prior is just guesswork, allowing subjective factors to creep into your calculations. You might be guessing the probability of something that — unlike cancer — does not even exist, such as strings, multiverses, inflation or God. You might then cite dubious evidence to support your dubious belief. In this way, Bayes’ theorem can promote pseudoscience and superstition as well as reason.

Embedded in Bayes’ theorem is a moral message: If you aren’t scrupulous in seeking alternative explanations for your evidence, the evidence will just confirm what you already believe. Scientists often fail to heed this dictum, which helps explain why so many scientific claims turn out to be erroneous. Bayesians claim that their methods can help scientists overcome confirmation bias and produce more reliable results, but I have my doubts.

And as I mentioned above, some string and multiverse enthusiasts are embracing Bayesian analysis. Why? Because the enthusiasts are tired of hearing that string and multiverse theories are unfalsifiable and hence unscientific, and Bayes’ theorem allows them to present the theories in a more favorable light. In this case, Bayes’ theorem, far from counteracting confirmation bias, enables it.

John Horgan

One of yours truly’s favourite ‘problem situating lecture arguments’ against Bayesianism goes something like this: Assume you’re a Bayesian turkey and hold a nonzero probability belief in the hypothesis H that “people are nice vegetarians that do not eat turkeys,” a belief that every new sunrise seems to confirm. For every day you survive, you update your belief according to Bayes’ Rule

P(H|e) = [P(e|H)P(H)]/P(e),

where evidence e stands for “not being eaten” and P(e|H) = 1. Given that there do exist other hypotheses than H, P(e) is less than 1 and a fortiori P(H|e) is greater than P(H). Every day you survive increases your probability belief that you will not be eaten. This is totally rational according to the Bayesian definition of rationality. Unfortunately — as Bertrand Russell famously noticed — for every day that goes by, the traditional Christmas dinner also gets closer and closer …
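
To make the updating concrete, here is a minimal sketch in Python. The prior and the likelihood under the catch-all alternative are made-up illustrative numbers, not anything dictated by the argument itself:

# Bayesian turkey: update P(H) each day on the evidence e = "not eaten today".
p_h = 0.5               # prior belief in H: "people are nice vegetarians"
p_e_given_h = 1.0       # under H, surviving the day is certain
p_e_given_not_h = 0.99  # under the alternatives, survival is also very likely -- for now

for day in range(100):
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)  # total probability of e
    p_h = p_e_given_h * p_h / p_e                           # Bayes' rule: P(H|e)

print(f"Belief in H after 100 uneventful days: {p_h:.3f}")
# The posterior rises every day and tends towards 1 -- right up until Christmas Eve.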

Neoclassical economics nowadays usually assumes that agents who have to make choices under conditions of uncertainty behave according to Bayesian rules (preferably the ones axiomatized by Ramsey (1931), de Finetti (1937) or Savage (1954)) – that is, they maximize expected utility with respect to some subjective probability measure that is continually updated according to Bayes’ theorem. If not, they are supposed to be irrational, and ultimately – via some “Dutch book” or “money pump” argument – susceptible to being ruined by some clever “bookie”.

Bayesianism reduces questions of rationality to questions of internal consistency (coherence) of beliefs, but – even granted this questionable reductionism – do rational agents really have to be Bayesian? As I have been arguing repeatedly over the years, there is no strong warrant for believing so.

In many of the situations that are relevant to economics, one could argue that there is simply not enough adequate and relevant information to ground beliefs of a probabilistic kind, and that in those situations it is not really possible, in any relevant way, to represent an individual’s beliefs in a single probability measure.

Say you have come to learn (based on your own experience and tons of data) that the probability of you becoming unemployed in the US is 10%. Having moved to another country (where you have no experience of your own and no data), you have no information on unemployment and a fortiori nothing on which to ground any probability estimate. A Bayesian would, however, argue that, if you are rational, you would have to assign probabilities to the mutually exclusive alternative outcomes and that these have to add up to 1. That is, in this case – and based on symmetry – a rational individual would have to assign probability 10% to becoming unemployed and 90% to becoming employed.

That feels intuitively wrong, though, and I guess most people would agree. Bayesianism cannot distinguish between symmetry-based probabilities derived from information and symmetry-based probabilities derived from an absence of information. In these kinds of situations most of us would rather say that it is simply irrational to be a Bayesian, and better instead to admit that we “simply do not know” or that we feel ambiguous and undecided. Arbitrary and ungrounded probability claims are more irrational than being undecided in the face of genuine uncertainty, so if there is not sufficient information to ground a probability distribution it is better to acknowledge that simpliciter, rather than pretending to possess a certitude that we simply do not possess.

I think this critique of Bayesianism is in accordance with the views of Keynes’s A Treatise on Probability (1921) and General Theory (1937). According to Keynes we live in a world permeated by unmeasurable uncertainty – not quantifiable stochastic risk – which often forces us to make decisions based on anything but rational expectations. Sometimes we “simply do not know.” Keynes would not have accepted the view of Bayesian economists, according to whom expectations “tend to be distributed, for the same information set, about the prediction of the theory.” Keynes, rather, thinks that we base our expectations on the confidence or “weight” we put on different events and alternatives. To Keynes expectations are a question of weighing probabilities by “degrees of belief”, beliefs that have preciously little to do with the kind of stochastic probabilistic calculations made by the rational agents modeled by Bayesian economists.
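
Keynes’s point about ‘weight’ can be given a small numerical illustration (my own sketch, using a conjugate Beta-prior setup that is not in the post): two agents can report exactly the same probability number while the evidential weight behind it differs enormously.

# Two agents both report P(unemployment) = 0.10, encoded as Beta(a, b) beliefs
# with the same mean but very different evidential weight behind them.
informed = (100.0, 900.0)  # grounded in roughly a thousand observations
guess    = (0.1, 0.9)      # same point probability, grounded in almost nothing

def mean(a, b):
    return a / (a + b)

print(mean(*informed), mean(*guess))  # 0.1 and 0.1 -- indistinguishable as point beliefs

# After observing 5 unemployment spells in 10 new cases, the weakly grounded
# belief swings wildly while the well-grounded one barely moves.
unemployed, employed = 5, 5
for a, b in (informed, guess):
    print(round(mean(a + unemployed, b + employed), 3))  # ~0.104 vs ~0.464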

The bias toward the superficial and the response to extraneous influences on research are both examples of real harm done in contemporary social science by a roughly Bayesian paradigm of statistical inference as the epitome of empirical argument. For instance, the dominant attitude toward the sources of the black-white differential in United States unemployment rates (routinely the rates are in a two-to-one ratio) is “phenomenological.” The employment differences are traced to correlates in education, locale, occupational structure, and family background. The attitude toward further, underlying causes of those correlations is agnostic … Yet on reflection, common sense dictates that racist attitudes and institutional racism must play an important causal role. People do have beliefs that blacks are inferior in intelligence and morality, and they are surely influenced by these beliefs in hiring decisions … Thus, an overemphasis on Bayesian success in statistical inference discourages the elaboration of a type of account of racial disadvantages that almost certainly provides a large part of their explanation.

The best advice I ever got as a scientist

4 January, 2016 at 17:34 | Posted in Theory of Science & Methodology | 1 Comment

Be as clear as you can about the various theories you hold, and be aware that we all hold theories unconsciously, or take them for granted, although most of them are almost certain to be false …

And try to construct alternative theories — alternatives even to those theories which appear to you inescapable; for only in this way will you understand the theories you hold. Whenever a theory appears to you as the only possible one, take this as a sign that you have neither understood the theory nor the problem which it was intended to solve.
 

Dani Rodrik’s blind spot (II)

16 December, 2015 at 18:53 | Posted in Economics, Theory of Science & Methodology | 1 Comment

As I argued in a previous post, Dani Rodrik’s Economics Rules describes economics as a more or less problem-free smorgasbord collection of models. Economics is portrayed as advancing through a judicious selection from a continually expanding library of models, models that are presented as “partial maps” or “simplifications designed to show how specific mechanisms work.”

But one of the things that’s missing in Rodrik’s view of economic models is the all-important distinction between core and auxiliary assumptions. Although Rodrik repeatedly speaks of ‘unrealistic’ or ‘critical’ assumptions, he basically just lumps them all together without differentiating between different types of assumptions, axioms or theorems. In a typical passage, Rodrik writes (p. 25):
 

Consumers are hyperrational, they are selfish, they always prefer more consumption to less, and they have a long time horizon, stretching into infinity. Economic models are typically assembled out of many such unrealistic assumptions. To be sure, many models are more realistic in one or more of these dimensions. But even in these more layered guises, other unrealistic assumptions can creep in somewhere else.

Modern mainstream (neoclassical) economists ground their models on a set of core assumptions (CA) — basically describing the agents as ‘rational’ actors — and a set of auxiliary assumptions (AA). Together CA and AA make up what I will call the ur-model (M) of all mainstream neoclassical economic models. Based on these two sets of assumptions, they try to explain and predict both individual (micro) and — most importantly — social phenomena (macro).

The core assumptions typically consist of:

CA1 Completeness — rational actors are able to compare different alternatives and decide which one(s) they prefer

CA2 Transitivity — if the actor prefers A to B, and B to C, he must also prefer A to C.

CA3 Non-satiation — more is preferred to less.

CA4 Maximizing expected utility — in choice situations under risk (calculable uncertainty) the actor maximizes expected utility.

CA5 Consistent efficiency equilibria — the actions of different individuals are consistent, and the interaction between them results in an equilibrium.

When describing the actors as rational in these models, the concept of rationality used is instrumental rationality – consistently choosing the preferred alternative, which is judged to have the best consequences for the actor given his wishes/interests/goals, these being exogenously given in the model. How these preferences/wishes/interests/goals are formed is typically not considered to be within the realm of rationality, and a fortiori not to constitute part of economics proper.

The picture given by this set of core assumptions (rational choice) is a rational agent with strong cognitive capacity who knows what alternatives he is facing, evaluates them carefully, calculates the consequences, and chooses the one that — given his preferences — he believes has the best consequences.

Weighing the different alternatives against each other, the actor makes a consistent optimizing (typically described as maximizing some kind of utility function) choice, and acts accordingly.

Beside the core assumptions (CA) the model also typically has a set of auxiliary assumptions (AA) spatio-temporally specifying the kind of social interaction between ‘rational actors’ that take place in the model. These assumptions can be seen as giving answers to questions such as

AA1 who are the actors and where and when do they act

AA2 which specific goals do they have

AA3 what are their interests

AA4 what kind of expectations do they have

AA5 what are their feasible actions

AA6 what kind of agreements (contracts) can they enter into

AA7 how much and what kind of information do they possess

AA8 how do the actions of the different individuals/agents interact with each other.

So, the ur-model of all economic models basically consists of a general specification of what (axiomatically) constitutes optimizing rational agents and a more specific description of the kind of situations in which these rational actors act (making AA serve as a kind of specification/restriction of the intended domain of application for CA and its deductively derived theorems). The list of assumptions can never be complete, since there will always be unspecified background assumptions and some (often silent) omissions (like closure, transaction costs, etc., regularly based on negligibility and applicability considerations). The hope, however, is that this ‘thin’ list of assumptions will be sufficient to explain and predict ‘thick’ phenomena in the real, complex world.

But in Rodrik’s model depiction we are essentially given the following structure,

A1, A2, … An
———————-
Theorem,

where a set of undifferentiated assumptions are used to infer a theorem.

This is, however, too vague and imprecise to be helpful, and it does not give a true picture of the usual mainstream modeling strategy, where — as I’ve argued in a previous post — there is a differentiation between a set of law-like hypotheses (CA) and a set of auxiliary assumptions (AA), giving the more adequate structure

CA1, CA2, … CAn & AA1, AA2, … AAn
———————————————–
Theorem

or,

CA1, CA2, … CAn
———————-
(AA1, AA2, … AAn) → Theorem,

more clearly underlining the function of AA as a set of (empirical, spatio-temporal) restrictions on the applicability of the deduced theorems.

In the extreme cases we get

CA1, CA2, … CAn
———————
Theorem,

where the deduced theorems are analytical entities with universal and totally unrestricted applicability, or

AA1, AA2, … AAn
———————-
Theorem,

where the deduced theorem is transformed into an untestable tautological thought-experiment without any empirical commitment whatsoever beyond telling a coherent fictitious as-if story.

Not clearly differentiating between CA and AA means that Rodrik can’t make this all-important interpretative distinction, which opens the door to unwarrantedly “saving” or “immunizing” models from almost any kind of critique by simple equivocation: interpreting models now as empirically empty, purely deductive-axiomatic analytical systems, now as models with explicit empirical aspirations. Flexibility is usually something people deem positive, but in this methodological context it is more a sign of trouble than of real strength. Models that are compatible with everything, or come with unspecified domains of application, are worthless from a scientific point of view.

Are economic models ‘true enough’?

13 November, 2015 at 19:39 | Posted in Theory of Science & Methodology | 2 Comments

Stylized facts are close kin of ceteris paribus laws. They are ‘broad generalizations true in essence, though perhaps not in detail’. They play a major role in economics, constituting explananda that economic models are required to explain. Models of economic growth, for example, are supposed to explain the (stylized) fact that the profit rate is constant. The unvarnished fact of course is that profit rates are not constant. All sorts of non-economic factors — e.g., war, pestilence, drought, political chicanery — interfere. Manifestly, stylized facts are not (what philosophers would call) facts, for the simple reason that they do not actually obtain. It might seem then that economics takes itself to be required to explain why known falsehoods are true. (Voodoo economics, indeed!) This can’t be correct. Rather, economics is committed to the view that the claims it recognizes as stylized facts are in the right neighborhood, and that their being in the right neighborhood is something economic models should account for. The models may show them to be good approximations in all cases, or where deviations from the economically ideal are small, or where economic factors dominate non-economic ones. Or they might afford some other account of their often being nearly right. The models may diverge as to what is actually true, or as to where, to what degree, and why the stylized facts are as good as they are. But to fail to acknowledge the stylized facts would be to lose valuable economic information (for example, the fact that if we control for the effects of such non-economic interference as war, disease, and the president for life absconding with the national treasury, the profit rate is constant.) Stylized facts figure in other social sciences as well. I suspect that under a less alarming description, they occur in the natural sciences too. The standard characterization of the pendulum, for example, strikes me as a stylized fact of physics. The motion of the pendulum which physics is supposed to explain is a motion that no actual pendulum exhibits. What such cases point to is this: The fact that a strictly false description is in the right neighborhood sometimes advances understanding of a domain.

Catherine Elgin

Catherine Elgin thinks we should accept model claims when we consider them to be ‘true enough,’ and Uskali Mäki has argued in a similar vein, maintaining that it could be warranted — based on diverse pragmatic considerations — to accept model claims that are negligibly false.

Hmm …

When criticizing the basic (DSGE) workhorse model for its inability to explain involuntary unemployment, its defenders maintain that later elaborations — especially newer search models — manage to do just that. However, one of the more conspicuous problems with those “solutions” is that they — as e.g. Pissarides’ ”Loss of Skill during Unemployment and the Persistence of Unemployment Shocks” (QJE, 1992) — are as a rule constructed without seriously trying to warrant that the model-immanent assumptions and results are applicable in the real world. External validity is more or less a non-existent problematique, sacrificed on the altar of model derivations. This is not by chance. For how could one even imagine empirically testing assumptions such as Pissarides’ ”model 1″ assumptions of reality being adequately represented by ”two overlapping generations of fixed size”, ”wages determined by Nash bargaining”, ”actors maximizing expected utility”, ”endogenous job openings” and ”job matching describable by a probability distribution”, without coming to the conclusion that this is — in terms of realism and relevance — far from ‘negligibly false’ or ‘true enough’?

Suck on that — and tell me if those typical mainstream — neoclassical — modeling assumptions, with or without due pragmatic considerations, can in any possibly relevant way be considered anything other than imagined model-world assumptions that have nothing at all to do with the real world we happen to live in!

The ultimate argument for scientific realism

10 November, 2015 at 15:03 | Posted in Theory of Science & Methodology | 2 Comments

Realism and relativism stand opposed. This much is apparent if we consider no more than the realist aim for science. The aim of science, realists tell us, is to have true theories about the world, where ‘true’ is understood in the classical correspondence sense. And this seems immediately to presuppose that at least some forms of relativism are mistaken … If realism is correct, then relativism (or some versions of it) is incorrect …

Whether or not realism is correct depends crucially upon what we take realism to assert, over and above the minimal claim about the aim of science.

My way into these issues is through what has come to be called the ‘Ultimate Argument for Scientific Realism’. The slogan is Hilary Putnam’s: “Realism is the only philosophy that does not make the success of science a miracle” …

We can at last be clear about what the Ultimate Argument actually is. It is an example of a so-called inference to the best explanation. How, in general, do such inferences work?

The intellectual ancestor of inference to the best explanation is Peirce’s abduction. Abduction goes something like this:

F is a surprising fact.
If T were true, F would be a matter of course.
Hence, T is true.

The argument is patently invalid: it is the fallacy of affirming the consequent …

What we need is a principle to the effect that it is reasonable to accept a satisfactory explanation which is the best we have as true. And we need to amend the inference-scheme accordingly. What we finish up with goes like this:

It is reasonable to accept a satisfactory explanation of any fact, which is also the best available explanation of that fact, as true.
F is a fact.
Hypothesis H explains F.
Hypothesis H satisfactorily explains F.
No available competing hypothesis explains F as well as H does.
Therefore, it is reasonable to accept H as true …

To return to the Ultimate Argument for scientific realism. It is, I suggest, an inference to the best explanation. The fact to be explained is the (novel) predictive success of science. And the claim is that realism (more precisely, the conjecture that the realist aim for science has actually been achieved) explains this fact, explains it satisfactorily, and explains it better than any non-realist philosophy of science. And the conclusion is that it is reasonable to accept scientific realism (more precisely, the conjecture that the realist aim for science has actually been achieved) as true.

Alan Musgrave

One of my absolute favourites

6 November, 2015 at 19:42 | Posted in Theory of Science & Methodology | 5 Comments

Inference to the Best Explanation can be seen as an extension of the idea of ‘self-evidencing’ explanations, where the phenomenon that is explained in turn provides an essential part of the reason for believing the explanation is correct. For example, a star’s speed of recession explains why its characteristic spectrum is red-shifted by a specified amount, but the observed red-shift may be an essential part of the reason the astronomer has for believing that the star is receding at that speed. Self-evidencing explanations exhibit a curious circularity, but this circularity is benign.

The recession is used to explain the red-shift and the red-shift is used to confirm the recession, yet the recession hypothesis may be both explanatory and well-supported. According to Inference to the Best Explanation, this is a common situation in science: hypotheses are supported by the very observations they are supposed to explain. Moreover, on this model, the observations support the hypothesis precisely because it would explain them. Inference to the Best Explanation thus partially inverts an otherwise natural view of the relationship between inference and explanation. According to that natural view, inference is prior to explanation. First the scientist must decide which hypotheses to accept; then, when called upon to explain some observation, she will draw from her pool of accepted hypotheses. According to Inference to the Best Explanation, by contrast, it is only by asking how well various hypotheses would explain the available evidence that she can determine which hypotheses merit acceptance. In this sense, Inference to the Best Explanation has it that explanation is prior to inference.

