Science — the need for causal explanation

22 Sep, 2022 at 16:29 | Posted in Theory of Science & Methodology | Leave a comment

Many journal editors request authors to avoid causal language, and many observational researchers, trained in a scientific environment that frowns upon causality claims, spontaneously refrain from mentioning the C-word (“causal”) in their work …

The proscription against the C-word is harmful to science because causal inference is a core task of science, regardless of whether the study is randomized or nonrandomized. Without being able to make explicit references to causal effects, the goals of many observational studies can only be expressed in a roundabout way. The resulting ambiguity impedes a frank discussion about methodology because the methods used to estimate causal effects are not the same as those used to estimate associations. Confusion then ensues at the most basic levels of the scientific process and, inevitably, errors are made …

We all agree: confounding is always a possibility and therefore association is not necessarily causation. One possible reaction is to completely ditch causal language in observational studies. This reaction, however, does not solve the tension between causation and association; it just sweeps it under the rug …

Without causally explicit language, the means and ends of much of observational research get hopelessly conflated … Carefully distinguishing between causal aims and associational methods is not just a matter of enhancing scientific communication and transparency. Eliminating the causal–associational ambiguity has practical implications for the quality of observational research too.

Miguel Hernán

Highly recommended reading!

Causality in social sciences — and economics — can never solely be a question of statistical inference. Causality entails more than predictability, and really explaining social phenomena requires theory. Analysis of variation — the foundation of all econometrics — can never in itself reveal how these variations are brought about. Only when we are able to tie actions, processes or structures to the statistical relations detected can we say that we are getting at relevant explanations of causation.

Most facts admit of many different possible explanations. Still, we want to find the best of all contrastive explanations (since all real explanation takes place relative to a set of alternatives). So which is the best explanation? Many scientists, influenced by statistical reasoning, think the likeliest explanation is the best. But the likelihood of x is not in itself a strong argument for thinking it explains y. I would rather argue that what makes one explanation better than another are things like aiming for and finding powerful, deep, causal features and mechanisms that we have warranted and justified reasons to believe in. Statistical reasoning — especially the variety based on a Bayesian epistemology — generally has no room for these explanatory considerations. The only thing that matters is the probabilistic relation between evidence and hypothesis. That is also one of the main reasons I find abduction — inference to the best explanation — a better description and account of what constitutes actual scientific reasoning and inference.
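A toy calculation (with invented numbers, not anything taken from the quoted authors) may make the contrast concrete. An ad hoc hypothesis tailored to the evidence can have maximal likelihood and still be a poor explanation:

```python
# Toy illustration (invented numbers): the hypothesis with the highest
# likelihood P(evidence | H) need not be the best explanation.

hypotheses = {
    # P(evidence | H), where the evidence is "ticket #4711 won a 1-in-1,000,000 lottery"
    "fair lottery; ticket #4711 just happened to win": 1 / 1_000_000,
    "lottery rigged specifically in favour of ticket #4711": 1.0,
}

likeliest = max(hypotheses, key=hypotheses.get)
print("Likeliest hypothesis:", likeliest)
# The rigging hypothesis wins on likelihood alone, since it makes the evidence
# certain. Yet without any independent reason to suspect rigging, most of us
# would still judge the fair-lottery hypothesis the better explanation:
# assessing explanations draws on background knowledge and explanatory depth,
# not just the probabilistic relation between evidence and hypothesis.
```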

Some statisticians and data scientists think that algorithmic formalisms somehow give them access to causality. That is, however, simply not true. Assuming ‘convenient’ things like faithfulness or stability is not to give proof. It’s to assume what has to be proven. Deductive-axiomatic methods used in statistics do not produce evidence for causal inferences. The real causality we are searching for is the one existing in the real world around us. If there is no warranted connection between axiomatically derived theorems and the real world, well, then we haven’t really obtained the causation we are looking for.
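To see concretely why assuming faithfulness is to assume what has to be proven, consider the following simulation sketch (the causal structure and parameter values are my own invented illustration, not any particular algorithm's output). A direct effect is tuned to cancel an indirect path, so the observable correlation vanishes, and any procedure that reads 'no correlation' as 'no causal link' gets the structure wrong:

```python
import numpy as np

# Hypothetical data-generating process (parameters chosen for illustration):
#   X -> Y, Y -> Z, and a direct edge X -> Z whose effect exactly cancels
#   the indirect X -> Y -> Z path, violating the 'faithfulness' assumption.
rng = np.random.default_rng(0)
n = 200_000
a, b = 0.8, 0.5          # X -> Y and Y -> Z effects
c = -a * b               # direct X -> Z effect, tuned to cancel the indirect path

x = rng.normal(size=n)
y = a * x + rng.normal(size=n)
z = b * y + c * x + rng.normal(size=n)

print("true direct effect of X on Z:", c)
print("sample corr(X, Z):", round(np.corrcoef(x, z)[0, 1], 3))
# The correlation is ~0 even though X directly causes Z. A constraint-based
# discovery algorithm that assumes faithfulness would drop the X -> Z edge:
# the 'convenient' assumption is doing the causal work, not the data.
```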

Mill’s methods of causal inference (student stuff)

29 Aug, 2022 at 10:40 | Posted in Theory of Science & Methodology | Comments Off on Mill’s methods of causal inference (student stuff)

.

As we all know, R. A. Fisher was not too happy about Mill’s method of difference, since, according to him, it rests on the impossible requirement of being able to compare identical units under different circumstances. Fisher instead favoured the experimental method of randomized treatment assignment. But when you cannot assign treatment randomly — as in most observational studies — there is much that supports using Mill’s method of difference for making causal inferences.
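A minimal sketch of what the method of difference asks for — and of Fisher’s worry — using invented numbers:

```python
# Mill's method of difference, in toy form (all numbers invented).
# Two cases agree on every recorded circumstance except the putative cause.

case_with_cause    = {"fertilizer": True,  "soil": "clay", "rain_mm": 40, "yield": 62}
case_without_cause = {"fertilizer": False, "soil": "clay", "rain_mm": 40, "yield": 50}

background = ["soil", "rain_mm"]
if all(case_with_cause[k] == case_without_cause[k] for k in background):
    effect = case_with_cause["yield"] - case_without_cause["yield"]
    print("attributed effect of fertilizer:", effect)   # -> 12

# Fisher's objection: real units are never identical. If the two plots also
# differed in, say, unrecorded soil depth, the 12-unit difference mixes the
# effect of the fertilizer with the effect of that hidden difference —
# which is why he preferred randomized assignment.
```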

Science and crossword solving

20 Aug, 2022 at 16:03 | Posted in Theory of Science & Methodology | 1 Comment

The model is not . . . how one determines the soundness or otherwise of a mathematical proof; it is, rather, how one determines the reasonableness or otherwise of entries in a crossword puzzle. . . . The crossword model permits pervasive mutual support, rather than, like the model of a mathematical proof, encouraging an essentially one-directional conception. . . . How reasonable one’s confidence is that a certain entry in a crossword is correct depends on: how much support is given to this entry by the clue and any intersecting entries that have already been filled in; how reasonable, independently of the entry in question, one’s confidence is that those other already filled-in entries are correct; and how many of the intersecting entries have been filled in.

Susan Haack

Yours truly — an avid crossword solver himself — can’t but agree.

In inference to the best explanation, we start with a body of (purported) data/facts/evidence and search for explanations that can account for these data/facts/evidence. Having the best explanation means that you, given the context-dependent background assumptions, have a satisfactory explanation that can explain the evidence better than any other competing explanation — and so it is reasonable to consider the hypothesis to be true. Even if we (inevitably) do not have deductive certainty, our reasoning gives us a license to consider our belief in the hypothesis as reasonable.

Accepting a hypothesis means that you believe it does explain the available evidence better than any other competing hypothesis. Knowing that we — after having earnestly considered and analyzed the other available potential explanations — have been able to eliminate the competing potential explanations, warrants and enhances the confidence we have that our preferred explanation is the best explanation, i.e., the explanation that provides us (given it is true) with the greatest understanding.

This, of course, does not in any way mean that we cannot be wrong. Of course, we can. Inferences to the best explanation are fallible inferences — since the premises do not logically entail the conclusion — so from a logical point of view, inference to the best explanation is a weak mode of inference. But if the arguments put forward are strong enough, they can be warranted and give us justified true belief, and hence, knowledge, even though they are fallible inferences. As scientists we sometimes — much like crossword solvers and detectives — experience disillusion. We thought that we had reached a strong conclusion by ruling out the alternatives in the set of contrasting explanations. But — what we thought was true turned out to be false.

That does not necessarily mean that we had no good reasons for believing what we believed. If we cannot live with that contingency and uncertainty, well, then we are in the wrong business. If it is deductive certainty you are after — rather than the ampliative and defeasible reasoning in inference to the best explanation — well, then get into math or logic, not science.

What I do not believe — and this has been suggested — is that we can usefully lay down some hard-and-fast rules of evidence that must be obeyed before we accept cause and effect. None of my viewpoints can bring indisputable evidence for or against the cause-and-effect hypothesis and none can be required as a sine qua non. What they can do, with greater or less strength, is to help us to make up our minds on the fundamental question — is there any other way of explaining the set of facts before us, is there any other answer equally, or more, likely than cause and effect?

Austin Bradford Hill

Frank Ramsey — a portrait and a critique

4 Aug, 2022 at 17:10 | Posted in Theory of Science & Methodology | 2 Comments

.

Mainstream economics nowadays usually assumes that agents who have to make choices under conditions of uncertainty behave according to Bayesian rules, axiomatized by Ramsey (1931) and Savage (1954) — that is, they maximize expected utility with respect to some subjective probability measure that is continually updated according to Bayes’ theorem. If not, they are supposed to be irrational, and ultimately — via some “Dutch book” or “money pump” argument — susceptible to being ruined by some clever “bookie”.

Bayesianism reduces questions of rationality to questions of internal consistency (coherence) of beliefs, but — even granted this questionable reductionism — do rational agents really have to be Bayesian? As I have been arguing elsewhere (e.g. here, here and here), there is no strong warrant for believing so.

In many of the situations that are relevant to economics, one could argue that there is simply not enough adequate and relevant information to ground beliefs of a probabilistic kind and that in those situations it is not really possible, in any relevant way, to represent an individual’s beliefs in a single probability measure.

Say you have come to learn (based on your own experience and tons of data) that the probability of you becoming unemployed in Sweden is 10%. Having moved to another country (where you have no experience of your own and no data), you have no information on unemployment and a fortiori nothing to help you construct any probability estimate. A Bayesian would, however, argue that you would have to assign probabilities to the mutually exclusive alternative outcomes and that these have to add up to 1 if you are rational. That is, in this case — and based on symmetry — a rational individual would have to assign a probability of 50% to becoming unemployed and 50% to becoming employed.

That feels intuitively wrong, though, and I guess most people would agree. Bayesianism cannot distinguish between symmetry-based probabilities grounded in information and symmetry-based probabilities grounded in an absence of information. In these kinds of situations, most of us would rather say that it is simply irrational to be a Bayesian and better instead to admit that we “simply do not know” or that we feel ambiguous and undecided. Arbitrary and ungrounded probability claims are more irrational than being undecided in the face of genuine uncertainty, so if there is not sufficient information to ground a probability distribution it is better to acknowledge that simpliciter, rather than pretending to possess a certitude that we simply do not possess.

I think this critique of Bayesianism is in accordance with the views of John Maynard Keynes’s A Treatise on Probability (1921) and General Theory (1937). According to Keynes, we live in a world permeated by unmeasurable uncertainty — not quantifiable stochastic risk — which often forces us to make decisions based on anything but rational expectations. Sometimes we “simply do not know.” Keynes would not have accepted the view of Bayesian economists, according to whom expectations “tend to be distributed, for the same information set, about the prediction of the theory.” Keynes, rather, thinks that we base our expectations on the confidence or “weight” we put on different events and alternatives. To Keynes, expectations are a question of weighing probabilities by “degrees of belief”, beliefs that have precious little to do with the kind of stochastic probabilistic calculations made by the rational agents modelled by Bayesian economists.
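A small numerical sketch of the point about symmetry and Keynes’s ‘weight’ (the numbers and the use of Beta distributions are my own illustrative device, not Keynes’s): an agent with plenty of data and an agent with none can both end up reporting “50%”, and the reported probability alone cannot register the difference.

```python
# Two agents who both report "50%" -- one from plenty of evidence, one from none.
# Beta(a, b) summarises beliefs about an unknown unemployment risk p.
def beta_mean_sd(a, b):
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, var ** 0.5

informed = beta_mean_sd(500, 500)   # long experience: ~1000 observations near 50/50
ignorant = beta_mean_sd(1, 1)       # no information at all: flat prior

print("informed agent: mean %.2f, sd %.3f" % informed)   # 0.50, sd ~0.016
print("ignorant agent: mean %.2f, sd %.3f" % ignorant)   # 0.50, sd ~0.289
# The point estimate is identical; only the spread (roughly, the 'weight' of
# the evidence) tells the two situations apart -- and a single probability
# number reports none of it.
```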

In economics, it’s an indubitable fact that few mainstream neoclassical economists work within the Keynesian paradigm. All more or less subscribe to some variant of Bayesianism. And some even say that Keynes acknowledged he was wrong when presented with Ramsey’s theory. This is a view that has unfortunately also been promulgated by Robert Skidelsky in his otherwise masterly biography of Keynes. But I think it’s fundamentally wrong. Let me elaborate on this point (the argumentation is more fully presented in my book John Maynard Keynes (SNS, 2007)).

It’s a debated issue in recent research on Keynes whether he, as some researchers maintain, fundamentally changed his view on probability after the critique levelled against his A Treatise on Probability by Frank Ramsey. It has, however, proved exceedingly difficult to present evidence that this is the case.

Ramsey’s critique was mainly that the kind of probability relations Keynes was speaking of in the Treatise do not actually exist, and that Ramsey’s own procedure (betting) made it much easier to find out the “degrees of belief” people actually hold. I question this from both a descriptive and a normative point of view.

What Keynes is saying in his response to Ramsey is only that Ramsey “is right” in that people’s “degrees of belief” basically emanate from human nature rather than from formal logic.

Patrick Maher, former professor of philosophy at the University of Illinois, even suggests that Ramsey’s critique of Keynes’s probability theory in some regards is invalid:

Keynes’s book was sharply criticized by Ramsey. In a passage that continues to be quoted approvingly, Ramsey wrote:

“But let us now return to a more fundamental criticism of Mr. Keynes’ views, which is the obvious one that there really do not seem to be any such things as the probability relations he describes. He supposes that, at any rate in certain cases, they can be perceived; but speaking for myself I feel confident that this is not true. I do not perceive them, and if I am to be persuaded that they exist it must be by argument; moreover, I shrewdly suspect that others do not perceive them either, because they are able to come to so very little agreement as to which of them relates any two given propositions.” (Ramsey 1926, 161)

I agree with Keynes that inductive probabilities exist and we sometimes know their values. The passage I have just quoted from Ramsey suggests the following argument against the existence of inductive probabilities. (Here P is a premise and C is the conclusion.)

P: People are able to come to very little agreement about inductive probabilities.
C: Inductive probabilities do not exist.

P is vague (what counts as “very little agreement”?) but its truth is still questionable. Ramsey himself acknowledged that “about some particular cases there is agreement” (28) … In any case, whether complicated or not, there is more agreement about inductive probabilities than P suggests …

I have been evaluating Ramsey’s apparent argument from P to C. So far I have been arguing that P is false and responding to Ramsey’s objections to unmeasurable probabilities. Now I want to note that the argument is also invalid. Even if P were true, it could be that inductive probabilities exist in the (few) cases that people generally agree about. It could also be that the disagreement is due to some people misapplying the concept of inductive probability in cases where inductive probabilities do exist. Hence it is possible for P to be true and C false …

I conclude that Ramsey gave no good reason to doubt that inductive probabilities exist.

Ramsey’s critique made Keynes put more emphasis on individuals’ own views as the basis for probability judgements, and less stress on those beliefs being rational. But Keynes’s theory doesn’t stand or fall with his view of the logical basis of our “degrees of belief”. The core of his theory — when and how we are able to measure and compare different probabilities — he doesn’t change. Unlike Ramsey, he wasn’t at all sure that probabilities always were one-dimensional, measurable, quantifiable or even comparable entities.

Big Data — a critical realist perspective

1 Jul, 2022 at 09:28 | Posted in Theory of Science & Methodology | 1 Comment

.

Hegel in 60 minuten

22 Jun, 2022 at 18:14 | Posted in Theory of Science & Methodology | Comments Off on Hegel in 60 minuten

.

Abduction — beyond deduction and induction

6 Jun, 2022 at 08:54 | Posted in Theory of Science & Methodology | 7 Comments

Science is made possible by the fact that there are structures that are durable and independent of our knowledge or beliefs about them. There exists a reality beyond our theories and concepts of it. Contrary to positivism, yours truly would as a critical realist argue that the main task of science is not to detect event-regularities between observed facts, but rather to identify and explain the underlying structures/forces/powers/mechanisms that produce the observed events.

Given that what we are looking for is to be able to explain what is going on in the world we live in, it would — instead of building models based on logic-axiomatic, topic-neutral, context-insensitive, and non-ampliative deductive reasoning, as in mainstream economic theory — be so much more fruitful and relevant to apply inference to the best explanation.

In science we standardly use a logically non-valid inference — the fallacy of affirming the consequent — of the following form:

(1) p => q
(2) q
————
p

or, in instantiated form

(1) ∀x (Gx => Px)
(2) Pa
————
Ga

Although logically invalid, it is nonetheless a kind of inference — abduction — that may be factually strongly warranted and truth-producing.

Following the general pattern ‘Evidence => Explanation => Inference’ we infer something based on what would be the best explanation given the law-like rule (premise 1) and an observation (premise 2). The truth of the conclusion (explanation) is nothing that is logically given, but something we have to justify, argue for, and test in different ways in order to establish it with any degree of certainty. And as always when we deal with explanations, what is considered best is relative to what we know of the world. In the real world, all evidence is relational (e only counts as evidence in relation to a specific hypothesis H) and has an irreducible holistic aspect. We never conclude that evidence follows from a hypothesis simpliciter, but always given some more or less explicitly stated contextual background assumptions. All non-deductive inferences and explanations are necessarily context-dependent.

If we extend the abductive scheme to incorporate the demand that the explanation has to be the best among a set of plausible competing potential and satisfactory explanations, we have what is nowadays usually referred to as inference to the best explanation.

In inference to the best explanation, we start with a body of (purported) data/facts/evidence and search for explanations that can account for these data/facts/evidence. Having the best explanation means that you, given the context-dependent background assumptions, have a satisfactory explanation that can explain the evidence better than any other competing explanation — and so it is reasonable to consider the hypothesis to be true. Even if we (inevitably) do not have deductive certainty, our reasoning gives us a license to consider our belief in the hypothesis as reasonable.

Accepting a hypothesis means that you believe it does explain the available evidence better than any other competing hypothesis. Knowing that we — after having earnestly considered and analyzed the other available potential explanations — have been able to eliminate the competing potential explanations, warrants and enhances the confidence we have that our preferred explanation is the best explanation, i.e., the explanation that provides us (given it is true) with the greatest understanding.

This, of course, does not in any way mean that we cannot be wrong. Of course, we can. Inferences to the best explanation are fallible inferences — since the premises do not logically entail the conclusion — so from a logical point of view, inference to the best explanation is a weak mode of inference. But if the arguments put forward are strong enough, they can be warranted and give us justified true belief, and hence, knowledge, even though they are fallible inferences. As scientists we sometimes — much like Sherlock Holmes and other detectives who use inference-to-the-best-explanation reasoning — experience disillusion. We thought that we had reached a strong conclusion by ruling out the alternatives in the set of contrasting explanations. But — what we thought was true turned out to be false.

That does not necessarily mean that we had no good reasons for believing what we believed. If we cannot live with that contingency and uncertainty, well, then we are in the wrong business. If it is deductive certainty you are after — rather than the ampliative and defeasible reasoning in inference to the best explanation — well, then get into math or logic, not science.

What I do not believe — and this has been suggested — is that we can usefully lay down some hard-and-fast rules of evidence that must be obeyed before we accept cause and effect. None of my viewpoints can bring indisputable evidence for or against the cause-and-effect hypothesis and none can be required as a sine qua non. What they can do, with greater or less strength, is to help us to make up our minds on the fundamental question — is there any other way of explaining the set of facts before us, is there any other answer equally, or more, likely than cause and effect?

Austin Bradford Hill

Causal mediation

1 Jun, 2022 at 18:29 | Posted in Theory of Science & Methodology | 1 Comment

In the real world, it’s my impression that almost all the mediation analyses that people actually fit in the social and medical sciences are misguided: lots of examples where the assumptions aren’t clear and where, in any case, coefficient estimates are hopelessly noisy and where confused people will over-interpret statistical significance …

So how to do it? I don’t think traditional path analysis or other multivariate methods of the throw-all-the-data-in-the-blender-and-let-God-sort-em-out variety will do the job. Instead we need some structure and some prior information.

Andrew Gelman

Most facts admit of many different possible explanations, but we usually want to find — since all real explanation takes place relative to a set of alternatives — the best of all contrastive explanations.

So which is the best explanation?

Many scientists, influenced by statistical reasoning, think that the likeliest explanation is the best explanation. But the likelihood of X is not in itself a strong argument for thinking it explains Y. I would rather argue that what makes one explanation better than another are things like aiming for and finding powerful, deep, causal features and mechanisms that we have warranted and justified reasons to believe in. Statistical reasoning — especially the variety based on a Bayesian epistemology — generally has no room for these kinds of explanatory considerations. The only thing that matters is the probabilistic relation between evidence and hypothesis. That is also one of the main reasons yours truly finds abduction — inference to the best explanation — a better description and account of what constitutes actual scientific reasoning and inference.

In the social sciences … regression is used to discover relationships or to disentangle cause and effect. However, investigators have only vague ideas as to the relevant variables and their causal order; functional forms are chosen on the basis of convenience or familiarity; serious problems of measurement are often encountered.

Regression may offer useful ways of summarizing the data and making predictions … However, I see no cases in which regression equations, let alone the more complex methods, have succeeded as engines for discovering causal relationships.

David Freedman

Some statisticians and data scientists think that algorithmic formalisms somehow give them access to causality. That is, however, simply not true. Assuming ‘convenient’ things like faithfulness or stability is not to give proof. It’s to assume what has to be proven. Deductive-axiomatic methods used in statistics do not produce evidence for causal inferences. The real causality we are searching for is the one existing in the real world around us. If there is no warranted connection between axiomatically derived theorems and the real world, well, then we haven’t really obtained the causation we are looking for.
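Freedman’s point can be illustrated with a simulation (parameter values invented for the purpose): when a common cause is left out, the regression of Y on X delivers a clean, ‘significant’-looking coefficient even though X has no effect on Y at all.

```python
import numpy as np

# Hypothetical setup (invented parameters): Z is a common cause of X and Y,
# and X has NO effect on Y at all.
rng = np.random.default_rng(1)
n = 100_000
z = rng.normal(size=n)                 # unobserved confounder
x = 0.9 * z + rng.normal(size=n)
y = 0.9 * z + rng.normal(size=n)       # depends on z only, not on x

c = np.cov(x, y)
print("OLS slope of Y on X (Z omitted):", round(c[0, 1] / c[0, 0], 2))  # ~0.45, not 0
# The regression summarises a real association, but it 'discovers' a causal
# effect that is not there. Only knowledge of the data-generating process --
# here available because we simulated it -- tells us to adjust for Z.
```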

If contributions made by statisticians to the understanding of causation are to be taken over with advantage in any specific field of inquiry, then what is crucial is that the right relationship should exist between statistical and subject-matter concerns …

The idea of causation as consequential manipulation is apt to research that can be undertaken primarily through experimental methods and, especially, to ‘practical science’ where the central concern is indeed with ‘the consequences of performing particular acts’. The development of this idea in the context of medical and agricultural research is as understandable as the development of that of causation as robust dependence within applied econometrics. However, the extension of the manipulative approach into sociology would not appear promising, other than in rather special circumstances … The more fundamental difficulty is that under the — highly anthropocentric — principle of ‘no causation without manipulation’, the recognition that can be given to the action of individuals as having causal force is in fact peculiarly limited.

John H. Goldthorpe

Theory-ladenness

22 May, 2022 at 23:38 | Posted in Theory of Science & Methodology | 1 Comment

It is now widely recognised that observation is not theory-neutral but theory-laden, and that theory does not merely ‘order facts’ but makes claims about the nature of its object. So, in evaluating observations we are also assessing particular theoretical concepts and existential claims. A common response to this shattering of innocent beliefs in the certainty and neutrality of observation has been the development of idealist (especially conventionalist and rationalist) philosophies which assume that if observation is theory-laden, it must necessarily be theory-determined, such that it is no longer possible to speak of criteria of ‘truth’ or ‘objectivity’ which are not entirely internal to ‘theoretical discourse’. However, this is a non-sequitur for at least two reasons. First, theory-laden observation need not be theory-determined. Even the arch-conventionalist Feyerabend (1970) acknowledges that ‘it is possible to refute a theory by an experience that is entirely interpreted within its own terms’. If I ask how many leaves there are on a tree, my empirical observation will be controlled by concepts regarding the nature of trees, leaves and the operation of counting, but to give an answer I’d still have to go and look! In arguing that there are no extra-discursive criteria of truth, recent idealists such as Hindess and Hirst echo Wittgenstein’s identification of the limits of our world with the limits of language, and share the confusion of questions of What exists? with What can be known to exist? The truism that extra-discursive controls on knowledge can only be referred to in discourse does not mean that what is referred to is purely internal to discourse. Secondly, and more simply, it does not follow from the fact that all knowledge is fallible, that it is all equally fallible.

Andrew Sayer

Gödel and the limits of mathematics

10 May, 2022 at 16:50 | Posted in Theory of Science & Methodology | 1 Comment

.

Gödel’s incompleteness theorems raise important questions about the foundations of mathematics.

The most important concern is the question of how to select the specific systems of axioms that mathematics is supposed to be founded on. Gödel’s theorems irrevocably show that no matter which system is chosen, there will always be true statements it cannot prove; establishing those truths requires adding further axioms, which in turn leave new truths unproven.

This, of course, ought to be of paramount interest for those mainstream economists who still adhere to the dream of constructing deductive-axiomatic economics with analytic truths that do not require empirical verification. Since Gödel showed that any consistent axiomatic system rich enough to express arithmetic is incomplete, any such deductive-axiomatic economics will always contain undecidable statements. If the dream of a complete and consistent axiomatic foundation cannot be fulfilled even for mathematics, it is hard to comprehend why some people still think it could be achieved for economics.

Separating questions of logic and empirical validity may — of course — help economists to focus on producing rigorous and elegant mathematical theorems that people like Lucas and Sargent consider “progress in economic thinking.” To most other people, not being concerned with empirical evidence and model validation is a sign of social science becoming totally useless and irrelevant. Economic theories built on assumptions known to be ridiculously artificial, with no explicit relationship to the real world, are a dead end. That’s probably also the reason why general equilibrium analysis today (at least outside Chicago) is considered a total waste of time. In the trade-off between relevance and rigour, priority should always be given to the former when it comes to social science. The only thing followers of the Bourbaki tradition within economics — like Karl Menger, John von Neumann, Gerard Debreu, Robert Lucas, and Thomas Sargent — have given us are irrelevant model abstractions with no bridges to real-world economies. It’s difficult to find a more poignant example of intellectual resource waste in science.

Enlightenment and mathematics

9 May, 2022 at 09:27 | Posted in Theory of Science & Methodology | 6 Comments

When in mathematics the unknown becomes the unknown quantity in an equation, it is made into something long familiar before any value has been assigned. Nature, before and after quantum theory, is what can be registered mathematically; even what cannot be assimilated, the insoluble and irrational, is fenced in by mathematical theorems. In the preemptive identification of the thoroughly mathematized world with truth, enlightenment believes itself safe from the return of the mythical. It equates thought with mathematics. The latter is thereby cut loose, as it were, turned into an absolute authority …

The reduction of thought to a mathematical apparatus condemns the world to be its own measure. What appears as the triumph of subjectivity, the subjection of all existing things to logical formalism, is bought with the obedient subordination of reason to what is immediately at hand. To grasp existing things as such, not merely to note their abstract spatial-temporal relationships, by which they can then be seized, but, on the contrary, to think of them as surface, as mediated conceptual moments which are only fulfilled by revealing their social, historical, and human meaning—this whole aspiration of knowledge is abandoned. Knowledge does not consist in mere perception, classification, and calculation but precisely in the determining negation of whatever is directly at hand. Instead of such negation, mathematical formalism, whose medium, number, is the most abstract form of the immediate, arrests thought at mere immediacy. The actual is validated, knowledge confines itself to repeating it, thought makes itself mere tautology. The more completely the machinery of thought subjugates existence, the more blindly it is satisfied with reproducing it. Enlightenment thereby regresses to the mythology it has never been able to escape …

The subsumption of the actual, whether under mythical prehistory or under mathematical formalism, the symbolic relating of the present to the mythical event in the rite or to the abstract category in science, makes the new appear as something predetermined which therefore is really the old. It is not existence that is without hope, but knowledge which appropriates and perpetuates existence as a schema in the pictorial or mathematical symbol.

Max Horkheimer & Theodor W. Adorno

Foucault’s cryptonormative approach — a critique

2 May, 2022 at 18:38 | Posted in Theory of Science & Methodology | 1 Comment

I always found Foucault’s work frustrating to read. His empirical accounts are interesting and some of his concepts fruitful – disciplinary power, capillary power, surveillance, technologies of the self, the entrepreneur of the self, for example – and he was prescient about neoliberalism, but his theoretical reasoning is often confused. His attempts to define power, and his unacknowledged slippage between different concepts of truth in The History of Sexuality Part I are examples …

Foucault’s accounts of the social world have a generally ominous tone, but they fail to identify what is bad and why, so one is left wanting to write ‘so what?’ in the margin. Thus, sociologists of health sciences inspired by him would often describe certain practices as involving the ‘medicalization’ of certain conditions without saying whether this was appropriate or inappropriate, or good or bad, and why. If we don’t know whether people are harmed or benefitted by a practice, then we don’t know much about it; cryptonormative accounts of social life are also deficient as descriptions.

Actually, the problem goes beyond Foucault: self-styled critical social science has often failed to explore in any depth the normative issues concerning what is bad about the objects of its critiques, as if it could rely on readers reading between the lines in the desired way. This was an effect of the unhappy divorce of positive and normative thought in social science. Tellingly, Foucault invoked the is-ought framework to defend his refusal of normativity, saying that it was not his job to tell people what to do, as if normativity were primarily about instructions rather than evaluations. While post-structuralism did provide novel insights, the combination of its resistance to normativity (as reducible to the limitation of possibilities through ‘normalizing’) and its anti-humanism (‘humanist’ became another sneer term) also made social science less critical.

Andrew Sayer (interviewed by Jamie Morgan)

Was ist Kausalität?

31 Jan, 2022 at 16:30 | Posted in Theory of Science & Methodology | Comments Off on Was ist Kausalität?

.

The Holy Grail of Science

28 Jan, 2022 at 14:01 | Posted in Theory of Science & Methodology | 1 Comment

Traditionally, philosophers have focused mostly on the logical template of inference. The paradigm-case has been deductive inference, which is topic-neutral and context-insensitive. The study of deductive rules has engendered the search for the Holy Grail: syntactic and topic-neutral accounts of all prima facie reasonable inferential rules. The search has hoped to find rules that are transparent and algorithmic, and whose following will just be a matter of grasping their logical form. Part of the search for the Holy Grail has been to show that the so-called scientific method can be formalised in a topic-neutral way. We are all familiar with Carnap’s inductive logic, or Popper’s deductivism or the Bayesian account of scientific method.

There is no Holy Grail to be found. There are many reasons for this pessimistic conclusion. First, it is questionable that deductive rules are rules of inference. Second, deductive logic is about updating one’s belief corpus in a consistent manner and not about what one has reasons to believe simpliciter. Third, as Duhem was the first to note, the so-called scientific method is far from algorithmic and logically transparent. Fourth, all attempts to advance coherent and counterexample-free abstract accounts of scientific method have failed. All competing accounts seem to capture some facets of scientific method, but none can tell the full story. Fifth, though the new Dogma, Bayesianism, aims to offer a logical template (Bayes’s theorem plus conditionalisation on the evidence) that captures the essential features of non-deductive inference, it is betrayed by its topic-neutrality. It supplements deductive coherence with the logical demand for probabilistic coherence among one’s degrees of belief. But this extended sense of coherence is (almost) silent on what an agent must infer or believe.

Stathis Psillos

Statistical philosophies and idealizations

18 Jan, 2022 at 18:16 | Posted in Theory of Science & Methodology | 2 Comments

As has been long and widely emphasized in various terms … frequentism and Bayesianism are incomplete both as learning theories and as philosophies of statistics, in the pragmatic sense that each alone are insufficient for all sound applications. Notably, causal justifications are the foundation for classical frequentism, which demands that all model constraints be deduced from real mechanical constraints on the physical data-generating process. Nonetheless, it seems modeling analyses in health, medical, and social sciences rarely have such physical justification …

The deficiency of strict coherent (operational subjective) Bayesianism is its assumption that all aspects of this uncertainty have been captured by the prior and likelihood, thus excluding the possibility of model misspecification. DeFinetti himself was aware of this limitation:

“…everything is based on distinctions which are themselves uncertain and vague, and which we conventionally translate into terms of certainty only because of the logical formulation…In the mathematical formulation of any problem it is necessary to base oneself on some appropriate idealizations and simplification. This is, however, a disadvantage; it is a distorting factor which one should always try to keep in check, and to approach circumspectly. It is unfortunate that the reverse often happens. One loses sight of the original nature of the problem, falls in love with the idealization, and then blames reality for not conforming to it.” [DeFinetti 1975, p. 279]

By asking for physically causal justifications of the data distributions employed in statistical analyses (whether those analyses are labeled frequentist or Bayesian), we may minimize the excessive certainty imposed by simply assuming a probability model and proceeding as if that idealization were a known fact.

Sander Greenland
