How to do philosophy

16 Aug, 2021 at 19:25 | Posted in Theory of Science & Methodology | Comments Off on How to do philosophy

A contest was announced to see who could do the best job of carving up a side of beef. The judge was a famous chef who had earned two Michelin stars. Attracted by the prize money, a butcher and an analytic philosopher entered the contest.

The Analytic Philosopher went first.  A fresh side of beef was placed on a large wooden table, and he approached to begin.  He was dressed in freshly pressed chinos and a button-down shirt.  The Analytic Philosopher laid a leather case on one corner of the table and opened it, revealing a gleaming set of perfectly matched scalpels, newly sharpened.  He selected one scalpel carefully and addressed the side of beef.  After inspecting its surface carefully, he raised his hand and made the first cut, a precise slice in a perfectly straight line.  Working steadily, but with meticulous care, he proceeded to make slices and cross slices until he had completed the carving of the beef, a task that took him the better part of an hour.  When he had finished, he stepped back, wiped the scalpel clean on a piece of paper toweling, replaced it in the case, and with a bow to the judge, withdrew.

The butcher was next up.  Her side of beef was on a table next to that on which the Analytic Philosopher had been working.  She was dressed in overalls and a butcher’s apron, on which one could see spots of blood and stains from her work.  She took out a cleaver, a saw, and a sharp butcher’s knife, and went to work on her side of beef, wasting no time.  Bits of fat and gristle flew here and there, some ending up on her apron and even in her hair, which she had covered with a net.  She whistled as she worked at the table, until with a flourish, she put down her saw, bowed to the judge, and stepped back.

The judge examined each table for no more than a moment, and then without the slightest hesitation, handed the prize to the butcher.  The Analytic Philosopher was stunned.  “But,” he protested, “there is simply no comparison between the results on the two tables.  The butcher’s table is a shambles, a heap of pieces of meat, with fat and bits of bone and drops of blood all over the place.  My table is pristine — a careful display of perfectly carved cubes of meat, all with parallel sides and exactly the same size.  Why on earth have you given the prize to the butcher?”

The Judge explained.  “The butcher has turned her side of beef into a usable array of porterhouse steaks, T-bone steaks, sirloin steaks, beef roasts, and a small pile of beef scraps ready to be ground up for chop meat.  She clearly knew where the joints were in the beef, how to cut against the grain with the tough parts, where to apply her saw.  You, on the other hand, have reduced a perfectly good grade-A side of beef to stew meat.”

Moral:  When butchering a side of beef, it is best to know something about what lies beneath its surface.

Observation:  This is also not a bad idea when doing Philosophy.

The problem with postmodernism

10 Aug, 2021 at 15:54 | Posted in Theory of Science & Methodology | 2 Comments

Postmodernism is a flag flown by a diverse congeries, motley because lack of unity is their credo and they feel no need to be consistent. Part of the problem in coming to grips with postmodernism is that, pretending to be profound while being merely obscure (many are fooled), slathering subjects with words, its self-proclaimed practitioners fairly often don’t say much of anything …

Postmodernism as practiced often comes across as style — petulant, joyriding, more posture than position. But it has a method, making metaphysics far from dead. Its approach and its position, its posture toward the world and its view of what is real, is that it’s all mental. Postmodernism imagines that society happens in your head. Back in the modern period, this position was called idealism.

Catharine MacKinnon

What does an RCT tell us?

9 Aug, 2021 at 17:05 | Posted in Theory of Science & Methodology | 2 Comments

Parachute use compared with a backpack control did not reduce death or major traumatic injury when used by participants jumping from aircraft in this first randomized evaluation of the intervention. This largely resulted from our ability to only recruit participants jumping from stationary aircraft on the ground. When beliefs regarding the effectiveness of an intervention exist in the community, randomized trials evaluating their effectiveness could selectively enroll individuals with a lower likelihood of benefit, thereby diminishing the applicability of trial results to routine practice. Therefore, although we can confidently recommend that individuals jumping from small stationary aircraft on the ground do not require parachutes, individual judgment should be exercised when applying these findings at higher altitudes.

Robert W Yeh et al.

Yep — background knowledge sure is important when experimenting …

‘Ideally controlled experiments’ tell us with certainty what causes what effects — but only given the right ‘closures.’ Making appropriate extrapolations from (ideal, accidental, natural or quasi) experiments to different settings, populations or target systems, is not easy. ‘It works there’ is no evidence for ‘it will work here.’ Causes deduced in an experimental setting still have to show that they come with an export-warrant to the target population/system. The causal background assumptions made have to be justified, and without licenses to export, the value of ‘rigorous’ and ‘precise’ methods — and ‘on-average-knowledge’ — is despairingly small.

RCTs have very little reach beyond giving descriptions of what has happened in the past. From the perspective of the future and for policy purposes they are as a rule of limited value since they cannot tell us what background factors were held constant when the trial intervention was being made.

RCTs usually do not provide evidence that the results are exportable to other target systems. RCTs cannot be taken for granted to give generalizable results. That something works somewhere for someone is no warrant for believing it will work for us here, or even that it works generally.
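
To make the export problem concrete, here is a minimal simulation sketch in the spirit of the parachute trial quoted above. Everything in it (the 'altitude' variable, the threshold, the effect sizes) is invented purely for illustration and is not taken from the Yeh et al. study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: the benefit of the 'treatment' depends on a
# background factor (here labelled 'altitude') that the trial never varies.
# All numbers are invented for illustration.
n = 100_000
altitude = rng.uniform(0, 4000, n)            # metres
effect = np.where(altitude > 100, 1.0, 0.0)   # treatment only helps above 100 m

# The trial selectively enrols people for whom the background factor is
# (almost) absent -- e.g. only jumps from stationary aircraft on the ground.
trial = altitude < 10

print(f"Average effect in the trial sample:      {effect[trial].mean():.2f}")
print(f"Average effect in the target population: {effect.mean():.2f}")
# The trial estimate (close to 0) says little about the population (close to 1):
# 'it works (or not) there' is no evidence for 'it will work here'.
```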

Why there is no relationship between truth and logic

5 Aug, 2021 at 11:58 | Posted in Theory of Science & Methodology | 7 Comments


To be ‘analytical’ and ‘logical’ is something most people find commendable. These words have a positive connotation. Scientists are supposed to think more deeply than most other people because they use ‘logical’ and ‘analytical’ methods. In dictionaries, logic is often defined as “reasoning conducted or assessed according to strict principles of validity” and ‘analysis’ as having to do with “breaking something down.”

But that’s not the whole picture. As used in science, analysis usually means something more specific: separating a problem into its constituent elements so as to reduce complex — and often complicated — wholes into smaller (simpler) and more manageable parts. You take the whole and break it down (decompose it) into its separate parts. Looking at the parts separately, one at a time, you are supposed to gain a better understanding of how they operate and work. Built on that more or less ‘atomistic’ knowledge, you are then supposed to be able to predict and explain the behaviour of the complex and complicated whole.

In economics, that means you take the economic system and divide it into its separate parts, analyse these parts one at a time, and then after analysing the parts separately, you put the pieces together.

The ‘analytical’ approach is typically used in economic modelling, where you start with a simple model with a few isolated and idealized variables. By ‘successive approximations,’ you then add more and more variables and finally arrive at a ‘true’ model of the whole.

This may sound like a convincing and good scientific approach.

But there is a snag!

The procedure only really works when you have a machine-like whole/system/economy where the parts appear in fixed and stable configurations. And if there is anything we know about reality, it is that it is not a machine! The world we live in is not a ‘closed’ system. On the contrary. It is an essentially ‘open’ system. Things are uncertain, relational, interdependent, complex, and ever-changing.

Without assuming that the underlying structure of the economy that you try to analyze remains stable/invariant/constant, there is no chance the equations of the model will remain constant. That is the very rationale for economists’ (often only implicit) use of the ceteris paribus assumption. But — nota bene — this can only be a hypothesis. You have to argue the case. If you cannot supply any sustainable justification or warrant for the adequacy of making that assumption, the whole analytical economic project becomes pointless, non-informative nonsense. Not only do we have to assume that we can shield off variables from each other analytically (external closure). We also have to assume that each and every variable is itself amenable to being understood as a stable, regularity-producing machine (internal closure). This, of course, we know is as a rule not possible.

Some things, relations, and structures are not analytically graspable. Trying to analyse parenthood, marriage, employment, etc. piece by piece doesn’t make sense. To be a chieftain, a capital-owner, or a slave is not an individual property of an individual. It can come about only when individuals are integral parts of certain social structures and positions. Social relations and contexts cannot be reduced to individual phenomena. A cheque presupposes a banking system, and being a tribe-member presupposes a tribe. By not taking account of this in its ‘analytical’ approach, economic ‘analysis’ becomes uninformative nonsense.

Using ‘logical’ and ‘analytical’ methods in the social sciences means that economists succumb to the fallacy of composition — the belief that the whole is nothing but the sum of its parts. In society and in the economy this is arguably not the case. An adequate analysis of society and economy a fortiori cannot proceed by just adding up the acts and decisions of individuals. The whole is more than the sum of its parts.
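
A toy illustration of the point: when parts interact, summing separately analysed effects misses what the whole does. The 'economy' below is just an invented function with an interaction term, not a model of anything real:

```python
# A toy 'economy' whose outcome depends on two parts that interact.
# The functional form is invented purely for illustration.
def whole(x1, x2):
    return x1 + x2 + 5 * x1 * x2   # the last term is the interaction

baseline = (0.0, 0.0)
shock = 1.0

# 'Analytical' procedure: study each part in isolation, holding the other fixed.
effect_of_x1_alone = whole(shock, 0.0) - whole(*baseline)
effect_of_x2_alone = whole(0.0, shock) - whole(*baseline)
additive_prediction = effect_of_x1_alone + effect_of_x2_alone

# What actually happens when both parts change together.
actual_change = whole(shock, shock) - whole(*baseline)

print(f"Sum of isolated effects: {additive_prediction:.1f}")   # 2.0
print(f"Actual change of whole:  {actual_change:.1f}")         # 7.0
# The decomposition misses the interaction entirely: the whole is more
# than the sum of its separately analysed parts.
```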

Mainstream economics is built on using the ‘analytical’ method. The models built with this method presuppose that social reality is ‘closed.’ Since social reality is known to be fundamentally ‘open,’ it is difficult to see how models of that kind can explain anything about what happens in such a universe. Postulating closed conditions to make models operational and then imputing these closed conditions to society’s real structure is an unwarranted procedure that does not take the necessary ontological considerations seriously.

In the face of the kind of methodological individualism and rational choice theory that dominate mainstream economics, we have to admit that even if knowing the aspirations and intentions of individuals is a necessary prerequisite for explaining social events, it is far from sufficient. Even the most elementary ‘rational’ actions in society presuppose the existence of social forms that cannot be reduced to the intentions of individuals. Here, the ‘analytical’ method fails again.

The overarching flaw of the ‘analytical’ economic approach, with its methodological individualism and rational choice theory, is that it reduces social explanations to purportedly individual characteristics. But many of the characteristics and actions of the individual originate in and are made possible only through society and its relations. Society is not a Wittgensteinian ‘Tractatus-world’ characterized by atomistic states of affairs. Society is not reducible to individuals, since the social characteristics, forces, and actions of the individual are determined by pre-existing social structures and positions. Even though society is not a volitional individual, and the individual is not an entity given outside of society, the individual (actor) and the society (structure) have to be kept analytically distinct. They are tied together through the individual’s reproduction and transformation of already given social structures.

Since at least the marginal revolution in economics in the 1870s, it has been an essential feature of economics to ‘analytically’ treat individuals as essentially independent and separate entities of action and decision. But in such a complex, organic and evolutionary system as an economy, that kind of independence is a deeply unrealistic assumption to make. Simply assuming that there is strict independence between the variables we try to analyze doesn’t help us in the least if that hypothesis turns out to be unwarranted.

To be able to apply the ‘analytical’ approach, economists basically have to assume that the universe consists of ‘atoms’ that exercise their own separate and invariable effects in such a way that the whole consists of nothing but an addition of these separate atoms and their changes. These simplistic assumptions of isolation, atomicity, and additivity are, however, at odds with reality. In real-world settings, we know that the ever-changing contexts make it futile to search for knowledge by making such reductionist assumptions. Real-world individuals are not reducible to contentless atoms and so are not susceptible to atomistic analysis. The world is not reducible to a set of atomistic ‘individuals’ and ‘states.’ How variable X works and influences real-world economies in situation A cannot simply be assumed to be understood or explained by looking at how X works in situation B. Knowledge of X probably does not tell us much if we do not take into consideration how it depends on Y and Z. It can never be legitimate just to assume that the world is ‘atomistic.’ Assuming real-world additivity cannot be the right thing to do if the things we have around us, rather than being ‘atoms,’ are ‘organic’ entities.

If we want to develop new and better economics, we have to give up the single-minded insistence on using a deductivist straitjacket methodology and the ‘analytical’ method. To focus scientific endeavours on proving things in models is a gross misapprehension of the purpose of economic theory. Deductivist models and ‘analytical’ methods disconnected from reality are not relevant for predicting, explaining or understanding real-world economies.

To have ‘consistent’ models and ‘valid’ evidence is not enough. What economics needs are real-world relevant models and sound evidence. Aiming only for ‘consistency’ and ‘validity’ sets the aspiration level of economics too low for developing a realist and relevant science.

Economics is not mathematics or logic. It’s about society. The real world.

Models may help us think through problems. But we should never forget that the formalism we use in our models is not self-evidently transportable to a largely unknown and uncertain reality. The tragedy with mainstream economic theory is that it thinks that the logic and mathematics used are sufficient for dealing with our real-world problems. They are not! Model deductions based on questionable assumptions can never be anything but pure exercises in hypothetical reasoning.

The world in which we live is inherently uncertain and quantifiable probabilities are the exception rather than the rule. To every statement about it is attached a ‘weight of argument’ that makes it impossible to reduce our beliefs and expectations to a one-dimensional stochastic probability distribution. If “God does not play dice” as Einstein maintained, I would add “nor do people.” The world as we know it has limited scope for certainty and perfect knowledge. Its intrinsic and almost unlimited complexity and the interrelatedness of its organic parts prevent the possibility of treating it as constituted by ‘legal atoms’ with discretely distinct, separable and stable causal relations. Our knowledge accordingly has to be of a rather fallible kind.

If the real world is fuzzy, vague and indeterminate, then why should our models be built on a desire to describe it as precise and predictable? Even if there always has to be a trade-off between theory-internal validity and external validity, we have to ask ourselves if our models are relevant.

‘Human logic’ has to supplant the classical — formal — logic of deductivism if we want to have anything of interest to say of the real world we inhabit. Logic is a marvellous tool in mathematics and axiomatic-deductivist systems, but a poor guide for action in real-world systems, in which concepts and entities are without clear boundaries and continually interact and overlap. In this world, I would say we are better served with a methodology that takes into account that the more we know, the more we know we do not know.

Mathematics and logic cannot establish the truth value of facts. Never have. Never will.

Hegel

3 Aug, 2021 at 10:11 | Posted in Theory of Science & Methodology | Comments Off on Hegel


Ontological emergence

10 Jul, 2021 at 10:37 | Posted in Theory of Science & Methodology | 4 Comments


Wittgenstein's philosophy of language — showing the fly the way out of the fly-bottle

4 Jul, 2021 at 16:43 | Posted in Theory of Science & Methodology | Comments Off on Wittgenstein's philosophy of language — showing the fly the way out of the fly-bottle


David Graeber on the importance of Roy Bhaskar’s work

3 Jul, 2021 at 11:09 | Posted in Theory of Science & Methodology | Comments Off on David Graeber on the importance of Roy Bhaskar’s work


No philosopher of science has influenced yours truly’s thinking more than Roy Bhaskar did. Roy always emphasised that the world itself should never be conflated with the knowledge we have of it. Science can only produce meaningful, relevant and realist knowledge if it acknowledges its dependence on the world out there. Ultimately that also means that the critique yours truly directs at mainstream economics is that it doesn’t take that ontological requirement seriously.

On the limits of formal methods in causal inference

2 Jul, 2021 at 22:08 | Posted in Theory of Science & Methodology | Comments Off on On the limits of formal methods in causal inference

Our problem is … with the temptation to think that by stating some of our assumptions more clearly, we have successfully formalized the entire inferential process … Science may indeed seek objectivity, and for this reason a deductive method for causal inference is indeed highly desirable. But this does not mean that it is possible: we cannot have one just because we decide we need one. Causal conclusions do not follow deductively from data without a strong set of auxiliary assumptions, and … these assumptions are themselves not deductive consequences of the data. A formal method may indeed be extremely helpful, provided that its significance is not misunderstood and its dependence on supporting assumptions not forgotten …

If it is claimed that causal inference has been formalized and it is not explained that the formalism, powerful as it may be, is only as good as the assumptions that support it, then causal conclusions will look surer (‘more objective’) than they really are …

Estimations either of counterfactual contrasts or of interventions are interesting and important, but are often local effects in a particular time, place and population. And even these are not pure empirical findings, but are heavily theory-laden. They are not read or calculated from data, but inferred from it, and the inference depends upon a huge network of background hypotheses and scientific knowledge … Thus, causality is not a statistical concept whose presence or absence can be determined by statistical analysis of a set of data. It is a theoretical concept, even when invoked in quantitative estimates for particular populations. As  with any scientific theoretical finding, we infer causal conclusions (including estimations of causal effect) as the result of an inductive inference, considering all the available evidence.

Alex Broadbent, Jan P Vandenbroucke, Neil Pearce

Moving beyond induction and deduction

2 Jul, 2021 at 10:08 | Posted in Theory of Science & Methodology | 8 Comments


In a time when scientific relativism is expanding, it is important to keep up the claim for not reducing science to a pure discursive level. We have to maintain the Enlightenment tradition in which the main task of science is studying the structure of reality.

Science is made possible by the fact that there are structures that are durable and independent of our knowledge or beliefs about them. There exists a reality beyond our theories and concepts of it. Contrary to positivism, yours truly would as a critical realist argue that the main task of science is not to detect event-regularities between observed facts, but rather to identify and explain the underlying structures/forces/powers/mechanisms that produce the observed events.

Given that what we are looking for is to be able to explain what is going on in the world we live in, it would — instead of building models based on logic-axiomatic, topic-neutral, context-insensitive and non-ampliative deductive reasoning, as in mainstream economic theory — be so much more fruitful and relevant to apply inference to the best explanation.

In science we standardly use a logically non-valid inference — the fallacy of affirming the consequent — of the following form:

(1) p => q
(2) q
————-
p

or, in instantiated form

(1) ∀x (Gx => Px)
(2) Pa
————-
Ga

Although logically invalid, it is nonetheless a kind of inference — abduction — that may be factually strongly warranted and truth-producing.

Following the general pattern ‘Evidence => Explanation => Inference’ we infer something based on what would be the best explanation given the law-like rule (premise 1) and an observation (premise 2). The truth of the conclusion (explanation) is not logically given, but something we have to justify, argue for, and test in different ways in order to establish it with any degree of certainty. And as always when we deal with explanations, what is considered best is relative to what we know of the world. In the real world, all evidence is relational (e only counts as evidence in relation to a specific hypothesis H) and has an irreducible holistic aspect. We never conclude that evidence follows from a hypothesis simpliciter, but always given some more or less explicitly stated contextual background assumptions. All non-deductive inferences and explanations are necessarily context-dependent.

If we extend the abductive scheme to incorporate the demand that the explanation has to be the best among a set of plausible competing potential and satisfactory explanations, we have what is nowadays usually referred to as inference to the best explanation.

In inference to the best explanation we start with a body of (purported) data/facts/evidence and search for explanations that can account for these data/facts/evidence. Having the best explanation means that you, given the context-dependent background assumptions, have a satisfactory explanation that can explain the evidence better than any other competing explanation — and so it is reasonable to consider the hypothesis to be true. Even if we (inevitably) do not have deductive certainty, our reasoning gives us a license to consider our belief in the hypothesis as reasonable.

Accepting a hypothesis means that you believe it explains the available evidence better than any other competing hypothesis. Knowing that we — after having earnestly considered and analysed the other available potential explanations — have been able to eliminate the competing potential explanations, warrants and enhances our confidence that our preferred explanation is the best explanation, i.e., the explanation that (given it is true) provides us with the greatest understanding.
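
As a rough caricature (not a formal algorithm endorsed by anyone quoted here), inference to the best explanation can be pictured as scoring how well each candidate hypothesis accounts for the evidence, given background assumptions, and tentatively accepting the best scorer. The hypotheses and 'likelihoods' below are invented for illustration:

```python
# A toy sketch of the inference-to-the-best-explanation pattern: score how
# well each candidate hypothesis accounts for the evidence and pick the best.
# Hypotheses, evidence and the numerical 'likelihoods' are all invented;
# real IBE is a matter of judgment, not of a mechanical score.

evidence = ["wet streets", "wet garden", "dry car under carport"]

# P(piece of evidence | hypothesis), elicited by hand for illustration.
likelihood = {
    "it rained":             {"wet streets": 0.9, "wet garden": 0.9, "dry car under carport": 0.8},
    "sprinkler was on":      {"wet streets": 0.1, "wet garden": 0.9, "dry car under carport": 0.9},
    "street-cleaning truck": {"wet streets": 0.8, "wet garden": 0.1, "dry car under carport": 0.9},
}

def score(hypothesis):
    s = 1.0
    for e in evidence:            # naive: treat pieces of evidence as independent
        s *= likelihood[hypothesis][e]
    return s

best = max(likelihood, key=score)
for h in likelihood:
    print(f"{h:25s} score = {score(h):.3f}")
print(f"Best explanation (given these assumptions): {best}")
# The conclusion is defeasible: a new piece of evidence, or a better rival
# hypothesis, can overturn it.
```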

This, of course, does not in any way mean that we cannot be wrong. Of course, we can. Inferences to the best explanation are fallible inferences — since the premises do not logically entail the conclusion — so from a logical point of view, inference to the best explanation is a weak mode of inference. But if the arguments put forward are strong enough, they can be warranted and give us justified true belief, and hence knowledge, even though they are fallible inferences. As scientists we sometimes — much like Sherlock Holmes and other detectives who use inference-to-the-best-explanation reasoning — experience disillusion. We thought that we had reached a strong conclusion by ruling out the alternatives in the set of contrasting explanations. But what we thought was true turned out to be false.

That does not necessarily mean that we had no good reasons for believing what we believed. If we cannot live with that contingency and uncertainty, well, then we are in the wrong business. If it is deductive certainty you are after — rather than the ampliative and defeasible reasoning in inference to the best explanation — well, then get into math or logic, not science.

What I do not believe — and this has been suggested — is that we can usefully lay down some hard-and-fast rules of evidence that must be obeyed before we accept cause and effect. None of my nine viewpoints can bring indisputable evidence for or against the cause-and-effect hypothesis and none can be required as a sine qua non. What they can do, with greater or less strength, is to help us to make up our minds on the fundamental question – is there any other way of explaining the set of facts before us, is there any other answer equally, or more, likely than cause and effect?

Austin Bradford Hill

The man who stopped smoking and saved millions of lives

1 Jul, 2021 at 16:53 | Posted in Theory of Science & Methodology | 3 Comments


Do RCTs really carry special epistemic weight?

30 Jun, 2021 at 17:09 | Posted in Theory of Science & Methodology | Comments Off on Do RCTs really carry special epistemic weight?

Mike Clarke, the Director of the Cochrane Centre in the UK, for example, states on the Centre’s Web site: ‘In a randomized trial, the only difference between the two groups being compared is that of most interest: the intervention under investigation’.

This seems clearly to constitute a categorical assertion that by randomizing, all other factors — both known and unknown — are equalized between the experimental and control groups; hence the only remaining difference is exactly that one group has been given the treatment under test, while the other has been given either a placebo or conventional therapy; and hence any observed difference in outcome between the two groups in a randomized trial (but only in a randomized trial) must be the effect of the treatment under test.

Clarke’s claim is repeated many times elsewhere and is widely believed. It is admirably clear and sharp, but it is clearly unsustainable … Clearly the claim taken literally is quite trivially false: the experimental group contains Mrs Brown and not Mr Smith, whereas the control group contains Mr Smith and not Mrs Brown, etc. Some restriction on the range of differences being considered is obviously implicit here; and presumably the real claim is something like that the two groups have the same means and distributions of all the [causally?] relevant factors. Although this sounds like a meaningful claim, I am not sure whether it would remain so under analysis … And certainly, even with respect to a given (finite) list of potentially relevant factors, no one can really believe that it automatically holds in the case of any particular randomized division of the subjects involved in the study. Although many commentators often seem to make the claim … no one seriously thinking about the issues can hold that randomization is a sufficient condition for there to be no difference between the two groups that may turn out to be relevant …

In sum, despite what is often said and written, no one can seriously believe that having randomized is a sufficient condition for a trial result to be reasonably supposed to reflect the true effect of some treatment. Is randomizing a necessary condition for this? That is, is it true that we cannot have real evidence that a treatment is genuinely effective unless it has been validated in a properly randomized trial? Again, some people in medicine sometimes talk as if this were the case, but again no one can seriously believe it. Indeed, as pointed out earlier, modern medicine would be in a terrible state if it were true. As already noted, the overwhelming majority of all treatments regarded as unambiguously effective by modern medicine today — from aspirin for mild headache through diuretics in heart failure and on to many surgical procedures — were never (and now, let us hope, never will be) ‘validated’ in an RCT.

John Worrall
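
A small toy simulation makes Worrall's point vivid. The sample size and the covariate ('age') are invented; the only point is that the one random allocation you actually draw can easily be unbalanced on a known prognostic factor, let alone on unknown ones:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy illustration: in any *single* randomized allocation, known (and
# unknown) prognostic factors can still end up unbalanced between groups.
# Sample size and covariate distribution are invented.
n = 40                                  # a small trial
age = rng.normal(60, 12, n)             # a prognostic factor

imbalances = []
for _ in range(10_000):                 # many hypothetical allocations ...
    treated = rng.permutation(n) < n // 2
    imbalances.append(age[treated].mean() - age[~treated].mean())

imbalances = np.abs(imbalances)
print(f"Mean |age difference| between groups: {imbalances.mean():.1f} years")
print(f"Share of allocations off by > 5 years: {(imbalances > 5).mean():.1%}")
# On average over repetitions the groups balance, but the one allocation
# you actually get can easily be skewed -- and that one is all you ever have.
```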

Testing causal claims

27 Jun, 2021 at 14:03 | Posted in Theory of Science & Methodology | Comments Off on Testing causal claims


What does randomisation guarantee? Nothing!

26 Jun, 2021 at 17:43 | Posted in Theory of Science & Methodology | 3 Comments

Does not randomization somehow or other guarantee (or perhaps, much more plausibly, provide the nearest thing that we can have to a guarantee) that any possible links to … outcome, aside from the link to treatment …, are broken?

Although he does not explicitly make this claim, and although there are issues about how well it sits with his own technical programme, this seems to me the only way in which Pearl could, in the end, ground his argument for randomizing. Notice, first, however, that even if the claim works then it would provide a justification, on the basis of his account of cause, only for randomizing after we have deliberately matched for known possible confounders … Once it is accepted that for any real randomized allocation known factors might be unbalanced — and more sensible defenders of randomization do accept this (though curiously, as we saw earlier, they recommend rerandomizing until the known factors are balanced rather than deliberately balancing them!) — then it seems difficult to deny that a properly matched experimental and control group is better, so far as preventing known confounders from producing a misleading outcome, than leaving it to the happenstance of the tosses …

The random allocation may ‘sever the link’ with this unknown factor or it may not (since we are talking about an unknown factor, then, by definition, we will not and cannot know which). Pearl’s claim that Fisher’s method ‘guarantees’ that the link with the possible confounders is broken is then, in practical terms, pure bluster. 

John Worrall

The point of making a randomized experiment is often said to be that it ‘ensures’ that any correlation between a supposed cause and effect indicates a causal relation. This is believed to hold since randomization (allegedly) ensures that a supposed causal variable does not correlate with other variables that may influence the effect.

The problem with that (rather simplistic) view of randomization is that the claims made are both exaggerated and, strictly speaking, false:

• Even if you manage to assign individuals to treatment and control groups in an ideally random way, the sample selection certainly is — except in extremely rare cases — not random. Even if we make a proper randomized assignment, if we apply the results to a biased sample, there is always the risk that the experimental findings will not apply. What works ‘there’ does not work ‘here.’ Randomization hence does not ‘guarantee’ or ‘ensure’ that we make the right causal claim. Although randomization may help us rule out certain possible causal claims, randomization per se does not guarantee anything!

• Even if both sampling and assignment are made in an ideally random way, performing standard randomized experiments only gives you averages. The problem here is that although we may get an estimate of the ‘true’ average causal effect, this may ‘mask’ important heterogeneous effects of a causal nature. Although we get the right answer that the average causal effect is 0, those who are ‘treated’ may have causal effects equal to -100 and those ‘not treated’ may have causal effects equal to 100 (a small simulation after this list illustrates the point). Contemplating being treated or not, most people would probably be interested in knowing about this underlying heterogeneity and would not consider the average effect particularly enlightening.

• There is almost always a trade-off between bias and precision. In real-world settings, a little bias is often outweighed by greater precision. And — most importantly — in case we have a population with sizeable heterogeneity, the average treatment effect of the sample may differ substantially from the average treatment effect in the population. If so, the value of any extrapolating inferences made from trial samples to other populations is highly questionable.

• Since most real-world experiments and trials build on performing a single randomization, what would happen if you kept on randomizing forever does not help you to ‘ensure’ or ‘guarantee’ that you do not draw false causal conclusions in the one particular randomized experiment you actually do perform. It is indeed difficult to see why thinking about what you know you will never do would make you happy about what you actually do.
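
A minimal sketch of the heterogeneity point in the second bullet above. All numbers (the ±100 individual effects, the sample size) are invented; the sketch only shows that an ideally randomized trial can return the ‘right’ average effect of roughly zero while masking large, opposite individual-level effects:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy illustration of the heterogeneity point: individual treatment effects
# of +100 for one half of the population and -100 for the other half average
# out to (roughly) zero. All numbers are invented.
n = 10_000
effect = np.where(rng.random(n) < 0.5, 100.0, -100.0)   # individual effects
baseline = rng.normal(0, 1, n)

treated = rng.random(n) < 0.5                           # ideal randomization
outcome = baseline + treated * effect

ate_estimate = outcome[treated].mean() - outcome[~treated].mean()
print(f"Estimated average treatment effect: {ate_estimate:.2f}")   # roughly 0
print(f"Individual effects range from {effect.min():.0f} to {effect.max():.0f}")
# The 'right' average answer (about 0) masks the fact that the treatment
# helps half the population enormously and harms the other half just as much.
```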

The problem many ‘randomistas’ end up with when underestimating heterogeneity and interaction is not only an external validity problem when trying to ‘export’ regression results to different times or different target populations. It is also often an internal problem for the millions of regression estimates that economists produce every year.

‘Ideally controlled experiments’ tell us with certainty what causes what effects — but only given the right ‘closures.’ Making appropriate extrapolations from (ideal, accidental, natural, or quasi) experiments to different settings, populations, or target systems, is not easy. And since trials usually are not repeated, unbiasedness and balance on average over repeated trials say nothing about any one trial. ‘It works there’ is no evidence for ‘it will work here.’ Causes deduced in an experimental setting still have to show that they come with an export-warrant to the target population/system. The causal background assumptions made have to be justified, and without licenses to export, the value of ‘rigorous’ and ‘precise’ methods — and ‘on-average-knowledge’ — is despairingly small.

RCTs have very little reach beyond giving descriptions of what has happened in the past. From the perspective of the future and for policy purposes they are as a rule of limited value since they cannot tell us what background factors were held constant when the trial intervention was being made.

RCTs usually do not provide evidence that the results are exportable to other target systems. RCTs cannot be taken for granted to give generalisable results. That something works somewhere for someone is no warrant for believing it will work for us here, or even that it works generally.

Randomisation may often — in the right contexts — help us to draw causal conclusions. But it certainly is not necessary to secure scientific validity or establish causality. Randomisation guarantees nothing. Just as observational studies may be subject to different biases, so are randomised studies and trials.

The epistemic fallacy

19 Jun, 2021 at 15:48 | Posted in Theory of Science & Methodology | 15 Comments

It is not the fact that science occurs that gives the world a structure such that it can be known by men. Rather, it is the fact that the world has such a structure that makes science, whether or not it actually occurs, possible. That is to say, it is not the character of science that imposes a determinate pattern or order on the world; but the order of the world that, under certain determinate conditions, makes possible the cluster of activities we call ‘science’. It does not follow from the fact that the nature of the world can only be known from (a study of) science, that its nature is determined by (the structure of) science. Propositions in ontology, i.e. about being, can only be established by reference to science. But this does not mean that they are disguised, veiled or otherwise elliptical propositions about science … The ‘epistemic fallacy’ consists in assuming that, or arguing as if, they are.

No philosopher of science has influenced yours truly’s thinking more than Roy did, and in a time when scientific relativism is still on the march, it is important to keep up his claim for not reducing science to a pure discursive level.


Science is made possible by the fact that there exists a reality beyond our theories and concepts of it. It is this reality that our theories in some way deal with. Contrary to positivism, I cannot see that the main task of science is to detect event-regularities between observed facts. Rather, the task must be conceived as identifying the underlying structure and forces that produce the observed events.

The problem with positivist social science is not that it gives the wrong answers, but rather that in a strict sense it does not give answers at all. Its explanatory models presuppose that the social reality is ‘closed,’ and since social reality is fundamentally ‘open,’ models of that kind cannot explain anything about​ what happens in such a universe. Positivist social science has to postulate closed conditions to make its models operational and then – totally unrealistically – impute these closed conditions to society’s real structure.

What makes knowledge in social sciences possible is the fact that society consists of social structures and positions that influence the individuals of society, partly through their being the necessary prerequisite for the actions of individuals but also because they dispose individuals to act (within a given structure) in a certain way. These structures constitute the ‘deep structure’ of society.

Our observations and theories are concept-dependent without therefore necessarily being concept-determined. There is a reality existing independently of our knowledge and theories of it. Although we cannot apprehend it without using our concepts and theories, these are not the same as reality itself. Reality and our concepts of it are not identical. Social science is made possible by existing structures and relations in society that are continually reproduced and transformed by different actors.

Explanations and predictions of social phenomena require theory constructions. Just looking for correlations between events is not enough. One has to get under the surface and see the deeper underlying structures and mechanisms that essentially constitute the social system.

The basic question one has to pose when studying social relations and events is: what are the fundamental relations without which they would cease to exist? The answer will point to causal mechanisms and tendencies that act in the concrete contexts we study. Whether these mechanisms are activated, and what effects they will have if they are, is not possible to predict, since that depends on accidental and variable relations. Every social phenomenon is determined by a host of both necessary and contingent relations, and it is impossible in practice to have complete knowledge of these constantly changing relations. That is also why we can never confidently predict them. What we can do, through learning about the mechanisms of the structures of society, is to identify the driving forces behind them, thereby making it possible to indicate the direction in which things tend to develop.

The world itself should never be conflated with the knowledge we have of it. Science can only produce meaningful, relevant and realist knowledge if it acknowledges its dependence on the world out there. Ultimately that also means that the critique yours truly directs at mainstream economics is that it doesn’t take that ontological requirement seriously.
