Roy Bhaskar

5 Aug, 2019 at 10:58 | Posted in Theory of Science & Methodology | 13 Comments

What properties do societies possess that might make them possible objects of knowledge for us? My strategy in developing an answer to this question will be effectively based on a pincer movement. But in deploying the pincer I shall concentrate first on the ontological question of the properties that societies possess, before shifting to the epistemological question of how these properties make them possible objects of knowledge for us. This is not an arbitrary order of development. It reflects the condition that, for transcendental realism, it is the nature of objects that determines their cognitive possibilities for us; that, in nature, it is humanity that is contingent and knowledge, so to speak, accidental. Thus it is because sticks and stones are solid that they can be picked up and thrown, not because they can be picked up and thrown that they are solid (though that they can be handled in this sort of way may be a contingently necessary condition for our knowledge of their solidity).

No philosopher of science has influenced yours truly’s thinking more than Roy did, and in a time when scientific relativism is still on the march, it is important to uphold his insistence that science not be reduced to a purely discursive level.

Science is made possible by the fact that there exists a reality beyond our theories and concepts of it. It is this reality that our theories in some way deal with. Contrary to positivism, I cannot see that the main task of science is to detect event-regularities between observed facts. Rather, the task must be conceived as identifying the underlying structure and forces that produce the observed events.

The problem with positivist social science is not that it gives the wrong answers, but rather that in a strict sense it does not give answers at all. Its explanatory models presuppose that the social reality is ‘closed,’ and since social reality is fundamentally ‘open,’ models of that kind cannot explain anything about what happens in such a universe. Positivist social science has to postulate closed conditions to make its models operational and then – totally unrealistically – impute these closed conditions to society’s real structure.

What makes knowledge in social sciences possible is the fact that society consists of social structures and positions that influence the individuals of society, partly through their being the necessary prerequisite for the actions of individuals but also because they dispose individuals to act (within a given structure) in a certain way. These structures constitute the ‘deep structure’ of society.

Our observations and theories are concept-dependent without therefore necessarily being concept-determined. There is a reality existing independently of our knowledge and theories of it. Although we cannot apprehend it without using our concepts and theories, these are not the same as reality itself. Reality and our concepts of it are not identical. Social science is made possible by existing structures and relations in society that are continually reproduced and transformed by different actors.

Explanations and predictions of social phenomena require theory constructions. Just looking for correlations between events is not enough. One has to get under the surface and see the deeper underlying structures and mechanisms that essentially constitute the social system.

The basic question one has to pose when studying social relations and events is: what are the fundamental relations without which they would cease to exist? The answer will point to causal mechanisms and tendencies that act in the concrete contexts we study. Whether these mechanisms are activated, and what effects they will have in that case, is not possible to predict, since this depends on accidental and variable relations. Every social phenomenon is determined by a host of both necessary and contingent relations, and it is impossible in practice to have complete knowledge of these constantly changing relations. That is also why we can never confidently predict them. What we can do, through learning about the mechanisms of the structures of society, is to identify the driving forces behind them, thereby making it possible to indicate the direction in which things tend to develop.

The world itself should never be conflated with the knowledge we have of it. Science can only produce meaningful, relevant and realist knowledge if it acknowledges its dependence on the world out there. Ultimately that also means that the critique yours truly levels against mainstream economics is that it doesn’t take that ontological requirement seriously.

Minimal realism — much ado about nothing

8 Jul, 2019 at 17:40 | Posted in Theory of Science & Methodology | 3 Comments

To generalise Mäki’s distinction between realism and realisticness, someone who believes that economic theories must or should include unrealistic assumptions is not necessarily a non-realist in the broader sense of philosophical realism: “A realist economist is permitted, indeed required, to use unrealistic assumptions in order to isolate what are believed to be the most essential features in a complex situation … To count as a minimal realist, an economist is required to believe that economic reality is unconstituted by his or her representations of it and that whatever truth value those representations have is independent of his or her or anybody else’s opinions of it” (Mäki 1994: 248).

Although Lawson would presumably not deny that orthodox economic theorists count as minimal realists in this sense, his concern is that orthodox economic theory is unrealistic in not representing the way things really are in that it does not refer factually and does not latch onto what is essential in the social domain … Lawson’s standpoint is that economic theory should strive for true explanations of social phenomena, hence Lawson is a methodological realist in this respect.

Duncan Hodge

The explanation paradox in economics

2 Jul, 2019 at 15:28 | Posted in Economics, Theory of Science & Methodology | 7 Comments

Hotelling’s model, then, is false in all relevant senses … And yet, it is considered explanatory. Moreover, and perhaps more importantly, it feels explanatory. If we have not thought much about Hotelling’s kind of cases, it seems that we have genuinely learned something. We begin to see Hotelling situations all over the place. Why do electronics shops in London concentrate in Tottenham Court Road and music shops in Denmark Street? Why do art galleries in Paris cluster around Rue de Seine? Why have so many hi-fi-related retailers set up business in Calle Barquillo in Madrid such that it has come to be known as ‘Calle del Sonido’ (Street of Sound)? And why the heck are most political parties practically indistinguishable? But we do not only come to see that, we also intuitively feel that Hotelling’s model must capture something that is right.

We have now reached an impasse of the kind philosophers call a paradox: a set of statements, all of which seem individually acceptable or even unquestionable but which, when taken together, are jointly contradictory. These are the statements:

(1) Economic models are false.
(2) Economic models are nevertheless explanatory.
(3) Only true accounts can explain.

When facing a paradox, one may respond by either giving up one or more of the jointly contradictory statements or else challenging our logic. I have not found anyone writing on economic models who has explicitly challenged logic (though their writings sometimes suggest otherwise).

Julian Reiss
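The clustering logic in the Hotelling passage above can be sketched in a few lines of code. This is a hypothetical toy (not code from Reiss): two vendors pick positions on the unit interval, consumers are uniformly distributed and buy from the nearest vendor, and each vendor in turn relocates to the grid position that maximizes its market share given the other’s position.

```python
# Toy Hotelling line market (hypothetical illustration, not from Reiss).
# Two vendors on [0, 1]; consumers uniform; each buys from the nearest vendor.

def share(x, y):
    """Market share of a vendor at x against a rival at y (uniform consumers)."""
    if x == y:
        return 0.5
    mid = (x + y) / 2
    return mid if x < y else 1 - mid

def best_response(y, grid):
    """Grid position maximizing market share against a rival at y."""
    return max(grid, key=lambda x: share(x, y))

grid = [i / 100 for i in range(101)]
a, b = 0.1, 0.9                      # start far apart
for _ in range(30):                  # alternate best responses until stable
    a = best_response(b, grid)
    b = best_response(a, grid)

print(a, b)                          # → 0.5 0.5: minimal differentiation
```

The vendors leapfrog toward each other and settle back-to-back at the centre, which is the same ‘median’ logic the model suggests for practically indistinguishable political parties.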

The logic of economic models

1 Jul, 2019 at 17:28 | Posted in Economics, Theory of Science & Methodology | 2 Comments

Analogue-economy models may picture Galilean thought experiments or they may describe credible worlds. In either case we have a problem in taking lessons from the model to the world. The problem is the venerable one of unrealistic assumptions, exacerbated in economics by the fact that the paucity of economic principles with serious empirical content makes it difficult to do without detailed structural assumptions. But the worry is not just that the assumptions are unrealistic; rather, they are unrealistic in just the wrong way.

Nancy Cartwright

One of the limitations of economics is the restricted possibility of performing experiments, which forces it to rely mainly on observational studies for knowledge of real-world economies.

But still — the idea of performing laboratory experiments holds a firm grip on our wish to discover (causal) relationships between economic ‘variables.’ If only we could isolate and manipulate variables in controlled environments, we would probably find ourselves in a situation where we, with greater ‘rigour’ and ‘precision,’ could describe, predict, or explain economic happenings in terms of ‘structural’ causes, ‘parameter’ values of relevant variables, and economic ‘laws.’

Galileo Galilei’s experiments are often held up as exemplary of how to perform experiments to learn something about the real world. Galileo’s experiments were, according to Nancy Cartwright (Hunting Causes and Using Them, p. 223),

designed to find out what contribution the motion due to the pull of the earth will make, with the assumption that the contribution is stable across all the different kinds of situations falling bodies will get into … He eliminated (as far as possible) all other causes of motion on the bodies in his experiment so that he could see how they move when only the earth affects them. That is the contribution that the earth’s pull makes to their motion.

Galileo’s heavy balls dropping from the tower of Pisa confirmed that the distance an object falls is proportional to the square of the elapsed time, and that this law (empirical regularity) of falling bodies is applicable outside a vacuum tube when, e.g., air resistance is negligible.

The big problem is to decide or find out exactly for which objects air resistance (and other potentially ‘confounding’ factors) is ‘negligible.’ In the case of heavy balls, air resistance is obviously negligible, but what about feathers or plastic bags?
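The negligibility question can be put in numbers with a rough sketch. All parameter values below are assumed purely for illustration: we integrate a fall subject to quadratic air drag, dv/dt = g − (k/m)·v², and compare the distance fallen after two seconds with the idealized law d = gt²/2.

```python
# Rough numerical sketch (assumed parameter values, purely illustrative):
# integrate a fall with quadratic air drag and compare with d = g*t**2/2.

def fall_distance(mass, drag_coeff, t_end, dt=1e-4, g=9.81):
    v = d = t = 0.0
    while t < t_end:
        v += (g - (drag_coeff / mass) * v * v) * dt  # dv/dt = g - (k/m) v^2
        d += v * dt
        t += dt
    return d

t_end = 2.0
ideal = 0.5 * 9.81 * t_end ** 2                                   # vacuum: ~19.6 m

ball = fall_distance(mass=5.0, drag_coeff=0.01, t_end=t_end)      # heavy ball
feather = fall_distance(mass=0.005, drag_coeff=0.01, t_end=t_end) # feather-like

print(round(ideal, 1), round(ball, 1), round(feather, 1))
```

For the heavy ball the drag correction is on the order of a percent, so the t² law is a good approximation; for the feather-like object the fall is several times shorter than the law predicts. ‘Negligibility’ is a property of the object and the medium, not of the law itself.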

One possibility is to take the all-encompassing-theory road and find out everything about the possible disturbing/confounding factors — not only air resistance — influencing the fall, and build that into one great model delivering accurate predictions of what happens when the falling object is not a heavy ball but a feather or a plastic bag. This usually amounts to ultimately stating some kind of ceteris paribus interpretation of the ‘law.’

Another road to take would be to concentrate on the negligibility assumption and to restrict the domain of applicability to heavy compact bodies only. The price you have to pay for this is that (1) ‘negligibility’ may be hard to establish in open real-world systems, (2) the generalisation you can make from ‘sample’ to ‘population’ is heavily restricted, and (3) you actually have to use some ‘shoe leather’ and empirically find out how large the ‘reach’ of the ‘law’ is.

In mainstream economics, one has usually settled for the ‘theoretical’ road (and in case you think the present ‘natural experiments’ hype has changed anything, remember that to mimic real experiments, exceedingly stringent special conditions have to obtain).

In the end, it all boils down to one question — are there any Galilean ‘heavy balls’ to be found in economics, so that we can indisputably establish the existence of economic laws operating in real-world economies?

As far as I can see there are some heavy balls out there, but not a single real economic law.

Economic factors/variables are more like feathers than heavy balls — non-negligible factors (like air resistance and chaotic turbulence) are hard to rule out as having no influence on the object studied.

Galilean experiments are hard to carry out in economics, and the theoretical ‘analogue’ models economists construct and in which they perform their ‘thought experiments’ build on assumptions far removed from the kind of idealized conditions under which Galileo performed his experiments. The ‘nomological machines’ that Galileo and other scientists have been able to construct have no real analogues in economics. The stability, autonomy, modularity, and interventional invariance that we may find between entities in nature simply are not there in real-world economies. That is a real-world fact, and contrary to the beliefs of most mainstream economists, it won’t go away simply by applying deductive-axiomatic economic theory with tons of more or less unsubstantiated assumptions.

By this, I do not mean to say that we have to discard all (causal) theories/laws building on modularity, stability, invariance, etc. But we have to acknowledge the fact that outside the systems that possibly fulfil these requirements/assumptions, they are of little substantial value. Running paper-and-pen experiments on artificial ‘analogue’ model economies is a sure way of ‘establishing’ (causal) economic laws or solving intricate econometric problems of autonomy, identification, invariance and structural stability — in the model world. But they are mere substitutes for the real thing and they don’t have much bearing on what goes on in real-world open social systems. Setting up convenient circumstances for conducting Galilean experiments may tell us a lot about what happens under those kinds of circumstances. But few, if any, real-world social systems are ‘convenient.’ So most of those systems, theories, and models are irrelevant for letting us know what we really want to know.

To solve, understand, or explain real-world problems you actually have to know something about them — logic, pure mathematics, data simulations or deductive axiomatics don’t take you very far. Most econometrics and economic theories/models are splendid logic machines. But applying them to the real world is a totally hopeless undertaking! The assumptions one has to make in order to successfully apply these deductive-axiomatic theories/models/machines are devastatingly restrictive and mostly empirically untestable — and hence make their real-world scope ridiculously narrow. To fruitfully analyse real-world phenomena with models and theories, you cannot build on assumptions that are patently absurd and known to be so. No matter how much you would like the world to consist entirely of heavy balls, the world is not like that. The world also has its fair share of feathers and plastic bags.

The problem articulated by Cartwright (in the quote at the top of this post) is that most of the ‘idealizations’ we find in mainstream economic models are not ‘core’ assumptions, but rather structural ‘auxiliary’ assumptions. Without those supplementary assumptions, the core assumptions deliver next to nothing of interest. So to come up with interesting conclusions you have to rely heavily on those other — ‘structural’ — assumptions.

Let me just take one example to show that as a result of this the Galilean virtue is totally lost — there is no way the results achieved within the model can be exported to other circumstances.

When Pissarides — in his ‘Loss of Skill during Unemployment and the Persistence of Unemployment Shocks’ (QJE 1992) — tries to explain involuntary unemployment, he does so by constructing a model using assumptions such as ‘two overlapping generations of fixed size,’ ‘wages determined by Nash bargaining,’ ‘actors maximizing expected utility,’ ‘endogenous job openings,’ and ‘job matching describable by a probability distribution.’ The core assumption of expected-utility-maximizing agents doesn’t take the model anywhere, so to get some results Pissarides has to load his model with all these constraining auxiliary assumptions. Without those assumptions, the model would deliver nothing; the auxiliary assumptions matter crucially. So, what’s the problem? There is no way the results we get in that model would happen in reality! Not even extreme idealizations in the form of invoking non-existent entities such as ‘actors maximizing expected utility’ deliver. The model is not a Galilean thought experiment. Given the set of constraining assumptions, this happens. But change only one of these assumptions and something completely different may happen.

Whenever model-based causal claims are made, experimentalists quickly find that these claims do not hold under disturbances that were not written into the model. Our own stock example is from auction design – models say that open auctions are supposed to foster better information exchange leading to more efficient allocation. Do they do that in general? Or at least under any real world conditions that we actually know about? Maybe. But we know that introducing the smallest unmodelled detail into the setup, for instance complementarities between different items for sale, unleashes a cascade of interactive effects. Careful mechanism designers do not trust models in the way they would trust genuine Galilean thought experiments. Nor should they.

A. Alexandrova & R. Northcott

The lack of ‘robustness’ with respect to variation of the model assumptions underscores that this is not the kind of knowledge we are looking for. We want to know what happens to unemployment in general in the real world, not what might possibly happen in a model given a constraining set of assumptions known to be false. This should come as no surprise. How that model, with all its more or less outlandish-looking assumptions, should ever be able to connect with the real world is, to say the least, somewhat unclear. The total absence of strong empirical evidence and the lack of similarity between the heavily constrained model and the real world make it even more difficult to see how there could ever be any inductive bridging between them. As Cartwright has it, the assumptions are not only unrealistic, they are unrealistic “in just the wrong way.”

In physics, we have theories and centuries of experience and experiments that show how gravity makes bodies move. In economics, we know there is nothing equivalent. So instead mainstream economists necessarily have to load their theories and models with sets of auxiliary structural assumptions to get any results at all in their models.

So why do mainstream economists keep on pursuing this modelling project?


My philosophy of economics

18 Jun, 2019 at 13:45 | Posted in Economics, Theory of Science & Methodology | 4 Comments

A critique yours truly sometimes encounters is that as long as I cannot come up with an alternative of my own to the failing mainstream theory, I shouldn’t expect people to pay attention.

This is, however, to totally and utterly misunderstand the role of philosophy and methodology of economics!

As John Locke wrote in An Essay Concerning Human Understanding:

The Commonwealth of Learning is not at this time without Master-Builders, whose mighty Designs, in advancing the Sciences, will leave lasting Monuments to the Admiration of Posterity; But every one must not hope to be a Boyle, or a Sydenham; and in an Age that produces such Masters, as the Great-Huygenius, and the incomparable Mr. Newton, with some other of that Strain; ’tis Ambition enough to be employed as an Under-Labourer in clearing Ground a little, and removing some of the Rubbish, that lies in the way to Knowledge.

That’s what philosophy and methodology can contribute to economics — clearing obstacles to science by clarifying limits and consequences of choosing specific modelling strategies, assumptions, and ontologies.

Every now and then I also get some upset comments from people wondering why I’m not always ‘respectful’ of people like Eugene Fama, Robert Lucas, Greg Mankiw, Paul Krugman, Simon Wren-Lewis, and others of the same ilk.

But sometimes it might actually, from a Lockean perspective, be quite appropriate to be disrespectful.

New Classical and ‘New Keynesian’ macroeconomics is rubbish that ‘lies in the way to Knowledge.’

And when New Classical and ‘New Keynesian’ economists resurrect fallacious ideas and theories that were proven wrong already in the 1930s, then I think a less respectful and more colourful language is called for.

The LOGIC of science vs the METHODS of science

10 Jun, 2019 at 10:02 | Posted in Theory of Science & Methodology | Comments Off on The LOGIC of science vs the METHODS of science


Postmodern mumbo jumbo

30 May, 2019 at 13:18 | Posted in Theory of Science & Methodology | 2 Comments

Four important features are common to the various movements:

    1. Central ideas are not explained.
    2. The grounds for a conviction are not stated.
    3. The exposition of the doctrine displays a linguistic stereotypy …
    4. The same stereotypy holds for the invocation of doctrinal authorities — a limited number of names recur. Heidegger, Foucault, and Derrida come back, again and again …

To these four points, however, I would … add a fifth:

5. The person in question has nothing essentially new to put forward.

Exaggerated? Spiteful? Well, tastes differ. But taste this soup and then try to say there is nothing to the old Lund professor’s characterization …

The move from a structuralist account in which capital is understood to structure social relations in relatively homologous ways to a view of hegemony in which power relations are subject to repetition, convergence, and rearticulation brought the question of temporality into the thinking of structure, and marked a shift from a form of Althusserian theory that takes structural totalities as theoretical objects to one in which the insights into the contingent possibility of structure inaugurate a renewed conception of hegemony as bound up with the contingent sites and strategies of the rearticulation of power.

Judith Butler

RCTs — gold standard or monster?

8 May, 2019 at 16:11 | Posted in Theory of Science & Methodology | Comments Off on RCTs — gold standard or monster?

One important comment, repeated — but not unanimously — can perhaps be summarized as ‘All that said and done, RCTs are still generally the best that can be done in estimating average treatment effects and in warranting causal conclusions.’ It is this claim that is the monster that seemingly can never be killed, no matter how many stakes are driven through its heart. We strongly endorse Robert Sampson’s statement “That experiments have no special place in the hierarchy of scientific evidence seems to me to be clear.” Experiments are sometimes the best that can be done, but they are often not. Hierarchies that privilege RCTs over any other evidence irrespective of context or quality are indefensible and can lead to harmful policies. Different methods have different relative advantages and disadvantages.

Angus Deaton & Nancy Cartwright

Revisiting the foundations of randomness and probability

30 Apr, 2019 at 14:17 | Posted in Statistics & Econometrics, Theory of Science & Methodology | 5 Comments

Regarding models as metaphors leads to a radically different view regarding the interpretation of probability. This view has substantial advantages over conventional interpretations …

Probability does not exist in the real world. We must search for her in the Platonic world of ideals. We have shown that the interpretation of probability as a metaphor leads to several substantial changes in interpretations and justifications for conventional frequentist procedures. These changes remove several standard objections which have been made to these procedures. Thus our model seems to offer a good foundation for re-building our understanding of how probability should be interpreted in real world applications. More generally, we have also shown that regarding scientific models as metaphors resolves several puzzles in the philosophy of science.

Asad Zaman

Although yours truly has to confess to not being totally convinced that redefining probability as a metaphor is the right way forward on these foundational issues, Zaman’s article certainly raises some very interesting questions about the way the concepts of randomness and probability are used in economics.

Modern mainstream economics relies to a large degree on the notion of probability. To be at all amenable to applied economic analysis, economic observations have to be conceived as random events that are analyzable within a probabilistic framework. But is it really necessary to model the economic system as one where randomness can only be analyzed and understood on the basis of an a priori notion of probability?

When attempting to convince us of the necessity of founding empirical economic analysis on probability models, mainstream economics actually forces us to (implicitly) interpret events as random variables generated by an underlying probability density function.

This is at odds with reality. Randomness obviously is a fact of the real world (although I’m not sure Zaman agrees but rather puts also randomness in ‘the Platonic world of ideals’). Probability, on the other hand, attaches (if at all) to the world via intellectually constructed models, and a fortiori is only a fact of a probability generating (nomological) machine or a well constructed experimental arrangement or ‘chance set-up.’

Just as there is no such thing as a ‘free lunch,’ there is no such thing as a ‘free probability.’

To be able to talk about probabilities at all, you have to specify a model. In statistics, any process you observe or measure is referred to as an experiment (e.g. rolling a die), and the results obtained are the outcomes or events of the experiment (the number of points rolled, e.g. 3 or 5). If there is no chance set-up or model that generates the probabilistic outcomes or events, then, strictly speaking, there is no event at all.
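A minimal illustration of such a chance set-up (hypothetical code, not from any of the quoted authors): the probabilities below are facts about the specified model of a fair die, and the simulated rolls only count as ‘events’ relative to that model.

```python
import random

# A minimal 'chance set-up' (hypothetical illustration): a specified model
# of a fair die. The probabilities are facts about this model, not about
# the world, until the model is shown to fit a real data generating process.
model = {face: 1 / 6 for face in range(1, 7)}
assert abs(sum(model.values()) - 1) < 1e-12

random.seed(1)
rolls = [random.randint(1, 6) for _ in range(60_000)]
freq = {face: rolls.count(face) / len(rolls) for face in model}

# Only relative to the specified model do observed frequencies estimate
# anything; without a chance set-up there is nothing for them to estimate.
print(all(abs(freq[f] - model[f]) < 0.01 for f in model))   # → True
```

The empirical frequencies match the model here precisely because the generator was built to the model’s specification — which is the whole point: the probabilities come with the set-up, not ‘for free.’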

Probability is a relational element. It always must come with a specification of the model from which it is calculated. And then to be of any empirical scientific value it has to be shown to coincide with (or at least converge to) real data generating processes or structures — something seldom or never done.

And this is the basic problem with economic data. If you have a fair roulette wheel, you can arguably specify probabilities and probability density distributions. But how do you conceive of the analogous nomological machines for prices, gross domestic product, income distribution, etc.? Only by a leap of faith. And that does not suffice. You have to come up with some really good arguments if you want to persuade people into believing in the existence of socio-economic structures that generate data with characteristics conceivable as stochastic events portrayed by probabilistic density distributions.

We simply have to admit that the socio-economic states of nature that we talk of in most social sciences — and certainly in economics — are not amenable to analysis in terms of probabilities, simply because in real-world open systems there are no probabilities to be had!

The processes that generate socio-economic data in the real world cannot just be assumed to always be adequately captured by a probability measure. And, so, it cannot be maintained that it even should be mandatory to treat observations and data — whether cross-section, time series or panel data — as events generated by some probability model. The important activities of most economic agents do not usually include throwing dice or spinning roulette-wheels. Data generating processes — at least outside of nomological machines like dice and roulette-wheels — are not self-evidently best modelled with probability measures.
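The contrast can be sketched with a deliberately crude toy (all numbers assumed for illustration): a die has a stable generating mechanism, so its running relative frequencies settle down; a process whose generating mechanism itself changes over time yields running frequencies that keep drifting, with no stable probability to converge to.

```python
import random

random.seed(7)

# A die is a stable chance set-up: the running relative frequency of a six
# settles down as observations accumulate.
die = [random.randint(1, 6) for _ in range(20_000)]

# A crude sketch of an 'open system' (purely illustrative): the success
# probability behind the data itself drifts from 0.1 up to 0.9 over time.
open_sys = [random.random() < 0.1 + 0.8 * t / 20_000 for t in range(20_000)]

def running_freq(xs, value):
    """Relative frequency of `value` among the first i observations, for each i."""
    hits, out = 0, []
    for i, x in enumerate(xs, start=1):
        hits += (x == value)
        out.append(hits / i)
    return out

die_freq = running_freq(die, 6)
sys_freq = running_freq(open_sys, True)

# Spread of the running frequency over the second half of the sample:
print(round(max(die_freq[10_000:]) - min(die_freq[10_000:]), 3))   # small
print(round(max(sys_freq[10_000:]) - min(sys_freq[10_000:]), 3))   # large
```

For the die the running frequency has essentially stopped moving; for the drifting process it keeps trending, so there is no fixed probability that the data could be said to estimate.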

If we agree on this, we also have to admit that much of modern neoclassical economics lacks sound foundations.

When economists and econometricians — often uncritically and without arguments — simply assume that one can apply probability distributions from statistical theory on their own area of research, they are really skating on thin ice.

This importantly also means that if you cannot show that data satisfies all the conditions of the probabilistic nomological machine, then the statistical inferences made in mainstream economics lack sound foundations!

On the impossibility of objectivity in science

25 Apr, 2019 at 14:16 | Posted in Theory of Science & Methodology | Comments Off on On the impossibility of objectivity in science

Operations Research does not incorporate the arts and humanities largely because of its distorted belief that doing so would reduce its objectivity, a misconception it shares with much of science. The meaning of objectivity is less clear than that of optimality. Nevertheless, most scientists believe it is a good thing. They also believe that objectivity in research requires the exclusion of any ethical-moral values held by the researchers. We need not argue the desirability of objectivity so conceived; it is not possible.

Most, if not all, scientific inquiries involve either testing hypotheses or estimating the values of variables. Both of these procedures necessarily involve balancing two types of error. Hypotheses-testing procedures require use of a significance level, the significance of which appears to escape most scientists. Their choice of such a level is usually made unconsciously, dictated by convention. This level, as many of you know, is a probability of rejecting a hypothesis when it is true. Naturally, we would like to make this probability as small as possible. Unfortunately, however, the lower we set this probability, the higher is the probability of accepting a hypothesis when it is false. Therefore, choice of a significance level involves a value judgment by the scientist about the relative seriousness of these two types of error. The fact that he usually makes this value judgment unconsciously does not attest to his objectivity, but to his ignorance.

There is a significance level at which any hypothesis is acceptable, and a level at which it is not. Therefore, statistical significance is not a property of data or a hypothesis but is a consequence of an implicit or explicit value judgment applied to them.

The choice of an estimating procedure can also be shown to require the evaluation of the relative importance of negative and positive errors of estimation. The most commonly used procedures are “unbiased”; therefore, they provide best estimates only when errors of equal magnitude but of opposite sign are equally serious — a condition I have never found in the real world.

Russell L. Ackoff
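Ackoff’s point about the two error types can be put in numbers with a toy one-sided z-test (all values below are assumed purely for illustration): lowering the probability of rejecting a true hypothesis (α) necessarily raises the probability of accepting a false one (β).

```python
from statistics import NormalDist

# Toy illustration of Ackoff's trade-off (assumed values): a one-sided
# z-test of H0: mu = 0 against the alternative mu = 0.5, with n = 25
# observations and known sigma = 1.
std = NormalDist()
n, sigma, mu_alt = 25, 1.0, 0.5
se = sigma / n ** 0.5                      # standard error of the sample mean

betas = {}
for alpha in (0.10, 0.05, 0.01):
    cutoff = std.inv_cdf(1 - alpha) * se   # reject H0 if the sample mean exceeds this
    betas[alpha] = std.cdf((cutoff - mu_alt) / se)  # P(fail to reject | mu = mu_alt)
    print(f"alpha = {alpha:.2f}  ->  beta = {betas[alpha]:.3f}")

# Lowering alpha (fewer false rejections of a true H0) raises beta (more
# false acceptances of a false H0) -- weighing the two is a value judgment.
```

Which pair (α, β) is ‘right’ depends entirely on the relative seriousness of the two errors in the application at hand, which is exactly Ackoff’s point: the choice is a value judgment, usually made unconsciously by convention.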

RCTs — a method in search of ontological foundations

29 Mar, 2019 at 19:41 | Posted in Theory of Science & Methodology | 3 Comments

RCTs treat social reality as though some simulacrum of laboratory conditions was a feasible and appropriate scientific method to apply, but in development research, unlike laboratory condition treatments, interventions are not manipulations of individuated and additive or simply combinable material components … but rather intervention into material social relations. While for the former, assuming away or stripping away everything other than a given effect focus can reveal the underlying invariant mechanics of that effect, in the latter one cannot take it as given that there is an underlying invariant mechanics that will continue to apply and one is just as liable to be assuming or stripping away what is important to the constitution of the material social relations … As such, RCTs may make for poor social science, because the approach is based on a mismatch between the RCT procedure and the constitution of reality under investigation—including the treatment of humans as deliberative centers of ultimate concern. In any case, technical sophistication is no guarantor of appropriately conceived “rigour” if the orientation of methods is inappropriate …

Jamie Morgan

Morgan’s reasoning confirms what yours truly has repeatedly argued on this blog and in On the use and misuse of theories and models in mainstream economics — RCTs usually do not provide evidence that the results are exportable to other target systems. The almost religious conviction with which its propagators promote the method cannot hide the fact that RCTs cannot be taken for granted to yield generalizable results. That something works somewhere is no guarantee that it will work for us, or even that it works generally.

What makes collective action more likely?

25 Feb, 2019 at 18:09 | Posted in Theory of Science & Methodology | Comments Off on What makes collective action more likely?

Why I am not a Bayesian

17 Feb, 2019 at 18:20 | Posted in Theory of Science & Methodology | 16 Comments

What I do not believe is that the relation that matters is simply the entailment relation between the theory, on the one hand, and the evidence on the other. The reasons that the relation cannot be simply that of entailment are exactly the reasons why the hypothetico-deductive account … is inaccurate; but the suggestion is at least correct in sensing that our judgment of the relevance of evidence to theory depends on the perception of a structural connection between the two, and that degree of belief is, at best, epiphenomenal. In the determination of the bearing of evidence on theory there seem to be mechanisms and stratagems that have no apparent connection with degrees of belief, which are shared alike by people advocating different theories. Save for the most radical innovations, scientists seem to be in close agreement regarding what would or would not be evidence relevant to a novel theory; claims as to the relevance to some hypothesis of some observation or experiment are frequently buttressed by detailed calculations and arguments. All of these features of the determination of evidential relevance suggest that that relation depends somehow on structural, objective features connecting statements of evidence and statements of theory. But if that is correct, what is really important and really interesting is what these structural features may be. The condition of positive relevance, even if it were correct, would simply be the least interesting part of what makes evidence relevant to theory.

None of these arguments is decisive against the Bayesian scheme of things … But taken together, I think they do at least strongly suggest that there must be relations between evidence and hypotheses that are important to scientific argument and to confirmation but to which the Bayesian scheme has not yet penetrated.

Clark Glymour
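The "condition of positive relevance" Glymour dismisses as the least interesting part of confirmation can be stated in a few lines (a minimal sketch of the standard Bayesian condition, not anything from Glymour's own text): evidence E confirms hypothesis H just in case P(H|E) > P(H). Note how little the condition asks for — any likelihood asymmetry suffices, with no reference to a structural connection between evidence and theory.

```python
def posterior(p_h, p_e_given_h, p_e_given_not_h):
    """P(H|E) by Bayes's theorem, with P(E) from total probability."""
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1.0 - p_h)
    return p_e_given_h * p_h / p_e

def positively_relevant(p_h, p_e_given_h, p_e_given_not_h):
    """Positive relevance: E 'confirms' H iff P(H|E) > P(H).
    The test turns only on whether P(E|H) exceeds P(E|not-H);
    nothing in it concerns how E and H are structurally connected."""
    return posterior(p_h, p_e_given_h, p_e_given_not_h) > p_h

print(positively_relevant(0.3, 0.8, 0.2))  # E more likely under H: confirms
print(positively_relevant(0.3, 0.2, 0.8))  # E less likely under H: disconfirms
```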

The vain search for The Holy Grail of Science

13 Feb, 2019 at 17:40 | Posted in Theory of Science & Methodology | Comments Off on The vain search for The Holy Grail of Science

Traditionally, philosophers have focused mostly on the logical template of inference. The paradigm-case has been deductive inference, which is topic-neutral and context-insensitive. The study of deductive rules has engendered the search for the Holy Grail: syntactic and topic-neutral accounts of all prima facie reasonable inferential rules. The search has hoped to find rules that are transparent and algorithmic, and whose following will just be a matter of grasping their logical form. Part of the search for the Holy Grail has been to show that the so-called scientific method can be formalised in a topic-neutral way. We are all familiar with Carnap’s inductive logic, or Popper’s deductivism or the Bayesian account of scientific method.

There is no Holy Grail to be found. There are many reasons for this pessimistic conclusion. First, it is questionable that deductive rules are rules of inference. Second, deductive logic is about updating one’s belief corpus in a consistent manner and not about what one has reasons to believe simpliciter. Third, as Duhem was the first to note, the so-called scientific method is far from algorithmic and logically transparent. Fourth, all attempts to advance coherent and counterexample-free abstract accounts of scientific method have failed. All competing accounts seem to capture some facets of scientific method, but none can tell the full story. Fifth, though the new Dogma, Bayesianism, aims to offer a logical template (Bayes’s theorem plus conditionalisation on the evidence) that captures the essential features of non-deductive inference, it is betrayed by its topic-neutrality. It supplements deductive coherence with the logical demand for probabilistic coherence among one’s degrees of belief. But this extended sense of coherence is (almost) silent on what an agent must infer or believe.

Stathis Psillos
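Psillos's point about topic-neutrality is easy to see once the Bayesian template he names — Bayes's theorem plus conditionalisation on the evidence — is written out (my sketch, with arbitrary illustrative numbers): probabilistic coherence constrains how degrees of belief are updated, not what they should be, so agents starting from very different priors all satisfy the template perfectly while continuing to disagree.

```python
def conditionalise(prior, p_e_given_h, p_e_given_not_h):
    """One step of Bayesian conditionalisation: the agent's new
    degree of belief in H is the old conditional probability P(H|E)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1.0 - prior)
    return p_e_given_h * prior / p_e

# Three coherent agents condition on the same evidence
# (P(E|H)=0.9, P(E|not-H)=0.2). Each update is impeccable by the
# template's lights, yet the template is silent on which prior --
# and hence which posterior -- anyone has reason to hold.
for prior in (0.1, 0.5, 0.9):
    print(f"prior={prior:.1f}  posterior={conditionalise(prior, 0.9, 0.2):.3f}")
```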

The quest for certainty — a new substitute for religion

10 Feb, 2019 at 16:13 | Posted in Theory of Science & Methodology | 6 Comments

In this post-rationalist age of ours, more and more books are written in symbolic languages, and it becomes more and more difficult to see why: what it is all about, and why it should be necessary, or advantageous, to allow oneself to be bored by volumes of symbolic trivialities. It almost seems as if the symbolism were becoming a value in itself, to be revered for its sublime ‘exactness’: a new expression of the old quest for certainty, a new symbolic ritual, a new substitute for religion.

For a critic of mainstream economics’ mathematical-formalist Glasperlenspiel, it is easy to share the feeling of despair …
