From association to causation

11 May, 2021 at 18:15 | Posted in Theory of Science & Methodology | Leave a comment


Evidence-based policy

10 May, 2021 at 13:15 | Posted in Theory of Science & Methodology | Leave a comment

‘Ideally controlled experiments’ tell us with certainty what causes what effects — but only given the right closures. Making appropriate extrapolations from (ideal, accidental, natural or quasi) experiments to different settings, populations or target systems, is not easy. “It works there” is no evidence for “it will work here”. Causes deduced in an experimental setting still have to show that they come with a transportability warrant to the target population/system. The causal background assumptions made have to be justified, and without licenses to transport, the value of “rigorous” and “precise” methods — and ‘on-average-knowledge’ — is despairingly small.

Like us, you want evidence that a policy will work here, where you are. Randomized controlled trials (RCTs) do not tell you that. They do not even tell you that a policy works. What they tell you is that a policy worked there, where the trial was carried out, in that population. Our argument is that the changes in tense – from “worked” to “work” – are not just a matter of grammatical detail. To move from one to the other requires hard intellectual and practical effort. The fact that it worked there is indeed fact. But for that fact to be evidence that it will work here, it needs to be relevant to that conclusion. To make RCTs relevant you need a lot more information and of a very different kind.

Nancy Cartwright & Jeremy Hardie

So, no, I find it hard to share the enthusiasm and optimism about the value of (quasi)natural experiments and all the statistical-econometric machinery that comes with them. Guess I’m still waiting for the transportability warrant …

The role of manipulation and intervention in theories of causality

8 May, 2021 at 11:52 | Posted in Theory of Science & Methodology | 2 Comments

As X’s effect on some other variable in the system S depends on there being a possible intervention on X, and the possibility of an intervention in turn depends on the modularity of S, it is a necessary condition for something to be a cause that the system in which it is a cause is modular with respect to that factor. The requirement that all systems are modular with respect to their causes can, in a way, be regarded as an interventionist addition to the unmanipulable causes problem … This implication has also been criticized in particular by Nancy Cartwright. She has proposed that many causal systems are not modular … Pearl has responded to this in 2009 (sect. 11.4.7), where he proposes, on the one hand, that it is in general sufficient that a symbolic intervention can be performed on the causal model, for the determination of causal effects, and on the other hand that we nevertheless could isolate the individual causal contributions …

It is tempting—to philosophers at least—to equate claims in this literature, about the meaning of causal claims being given by claims about what would happen under a hypothetical intervention—or an explicit definition of causation to the same effect—with that same claim as it would be interpreted in a philosophical context. That is to say, such a claim would normally be understood there as giving the truth conditions of said causal claims. It is generally hard to know whether any such beliefs are involved in the scientific context. However, Pearl in particular has denied, in increasingly explicit terms, that this is what is intended … He has recently liked to describe a factor Y, that is causally dependent on another factor X, as “listening” to X and determining “its value in response to what it hears” … This formulation suggests to me that it is the fact that Y is “listening” to X that explains why and how Y changes under an intervention on X. That is, what a possible intervention does, is to isolate the influence that X has on Y, in virtue of Y’s “listening” to X. Thus, Pearl’s theory does not imply an interventionist theory of causation, as we understand that concept in this monograph. This, moreover, suggests that the intervention that is always available, for any cause that is represented by a variable in a causal model, is a formal operation. I take this to be supported by the way he responds to Nancy Cartwright’s objection that modularity does not hold of all causal systems: it is sufficient that a symbolic intervention can be performed. Thus, the operation alluded to in Pearl’s operationalization of causation is a formal operation, always available, regardless of whether it corresponds to any possible intervention event or not.

An interesting dissertation, well worth reading for anyone interested in the ongoing debate on the reach of interventionist causal theories.

Framing all causal questions as questions of manipulation and intervention runs into many problems, especially when we open up for “hypothetical” and “symbolic” interventions. Humans have few barriers to imagining things, but that often also makes it difficult to assess the relevance of the proposed thought experiments. Performing “well-defined” interventions is one thing, but if we do not want to give up searching for answers to the questions we are actually interested in, and settle instead for merely answerable questions, interventionist studies are of limited applicability and value. Intervention effects in thought experiments are not self-evidently the causal effects we are looking for. Identifying causes (reverse causality) and measuring the effects of causes (forward causality) are not the same thing. In social sciences like economics, we standardly first try to identify the problem and why it occurred, and only afterwards look at the effects of the causes.

Leaning on the interventionist approach often means that instead of posing interesting questions on a social level, the focus is on individuals. Instead of asking about structural socio-economic factors behind, e.g., gender or racial discrimination, the focus is on the choices individuals make (which — as I maintain in my book Ekonomisk teori och metod — also tends to make the explanations presented inadequately ‘deep’). A typical example of the dangers of this limiting approach is ‘Nobel prize’ winner Esther Duflo, who thinks that economics should be based on evidence from randomised experiments and field studies. Duflo et consortes want to give up on ‘big ideas’ like political economy and institutional reform and instead go for solving more manageable problems the way plumbers do. Yours truly is far from sure that is the right way to move economics forward and make it a relevant and realist science. A plumber can fix minor leaks in your system, but if the whole system is rotten, something more than good old-fashioned plumbing is needed. The big social and economic problems we face today are not going to be solved by plumbers performing interventions or manipulations in the form of RCTs.

The RCT controversy

7 May, 2021 at 09:00 | Posted in Theory of Science & Methodology | Leave a comment

In Social Science and Medicine (December 2017), Angus Deaton & Nancy Cartwright argue that RCTs do not have any warranted special status. They are, simply, far from being the ‘gold standard’ they are usually portrayed as:

Randomized Controlled Trials (RCTs) are increasingly popular in the social sciences, not only in medicine. We argue that the lay public, and sometimes researchers, put too much trust in RCTs over other methods of investigation. Contrary to frequent claims in the applied literature, randomization does not equalize everything other than the treatment in the treatment and control groups, it does not automatically deliver a precise estimate of the average treatment effect (ATE), and it does not relieve us of the need to think about (observed or unobserved) covariates. Finding out whether an estimate was generated by chance is more difficult than commonly believed. At best, an RCT yields an unbiased estimate, but this property is of limited practical value. Even then, estimates apply only to the sample selected for the trial, often no more than a convenience sample, and justification is required to extend the results to other groups, including any population to which the trial sample belongs, or to any individual, including an individual in the trial. Demanding ‘external validity’ is unhelpful because it expects too much of an RCT while undervaluing its potential contribution. RCTs do indeed require minimal assumptions and can operate with little prior knowledge. This is an advantage when persuading distrustful audiences, but it is a disadvantage for cumulative scientific progress, where prior knowledge should be built upon, not discarded. RCTs can play a role in building scientific knowledge and useful predictions but they can only do so as part of a cumulative program, combining with other methods, including conceptual and theoretical development, to discover not ‘what works’, but ‘why things work’.

In a comment on Deaton & Cartwright, statistician Stephen Senn argues that on several issues concerning randomization Deaton & Cartwright “simply confuse the issue,” that their views are “simply misleading and unhelpful” and that they make “irrelevant” simulations:

My view is that randomisation should not be used as an excuse for ignoring what is known and observed but that it does deal validly with hidden confounders. It does not do this by delivering answers that are guaranteed to be correct; nothing can deliver that. It delivers answers about which valid probability statements can be made and, in an imperfect world, this has to be good enough. Another way I sometimes put it is like this: show me how you will analyse something and I will tell you what allocations are exchangeable. If you refuse to choose one at random I will say, “why? Do you have some magical thinking you’d like to share?”

Contrary to Senn, Andrew Gelman shares Deaton’s and Cartwright’s view that randomized trials are often overrated:

There is a strange form of reasoning we often see in science, which is the idea that a chain of reasoning is as strong as its strongest link. The social science and medical research literature is full of papers in which a randomized experiment is performed, a statistically significant comparison is found, and then story time begins, and continues, and continues—as if the rigor from the randomized experiment somehow suffuses through the entire analysis …

One way to get a sense of the limitations of controlled trials is to consider the conditions under which they can yield meaningful, repeatable inferences. The measurement needs to be relevant to the question being asked; missing data must be appropriately modeled; any relevant variables that differ between the sample and population must be included as potential treatment interactions; and the underlying effect should be large. It is difficult to expect these conditions to be satisfied without good substantive understanding. As Deaton and Cartwright put it, “when little prior knowledge is available, no method is likely to yield well-supported conclusions.” Much of the literature in statistics, econometrics, and epidemiology on causal identification misses this point, by focusing on the procedures of scientific investigation—in particular, tools such as randomization and p-values which are intended to enforce rigor—without recognizing that rigor is empty without something to be rigorous about.

Many social scientists nowadays maintain that ‘imaginative empirical methods’ — such as natural experiments, field experiments, lab experiments and RCTs — can help us to answer questions concerning the external validity of models used in social sciences. In their view these methods are more or less tests of ‘an underlying model’ that enable them to make the right selection from the ever-expanding ‘collection of potentially applicable models.’ When looked at carefully, however, there are in fact few real reasons to share this optimism.

Many ‘experimentalists’ claim that it is easy to replicate experiments under different conditions and therefore a fortiori easy to test the robustness of experimental results. But is it really that easy? Population selection is almost never simple. Had the problem of external validity only been about inference from sample to population, this would be no critical problem. But the really interesting inferences are those we try to make from specific labs/experiments/fields to the specific real-world situations/institutions/structures that we are interested in understanding or (causally) explaining. And there the population problem is much harder to tackle.

In randomized trials the researchers try to find out the causal effects that different variables of interest may have by changing circumstances randomly — a procedure somewhat (‘on average’) equivalent to the usual ceteris paribus assumption.

Besides the fact that ‘on average’ is not always ‘good enough,’ it amounts to nothing but hand waving to simpliciter assume, without argumentation, that it is tenable to treat social agents and relations as homogeneous and interchangeable entities.

Randomization is basically used to allow the econometrician to treat the population as consisting of interchangeable and homogeneous groups (‘treatment’ and ‘control’). The regression models one arrives at by using randomized trials tell us the average effect that variations in variable X have on the outcome variable Y, without having to explicitly control for the effects of other explanatory variables R, S, T, etc. Everything is assumed to be essentially equal except the values taken by variable X.

In a usual regression context one would apply an ordinary least squares (OLS) estimator in trying to get an unbiased and consistent estimate:

Y = α + βX + ε,

where α is a constant intercept, β a constant ‘structural’ causal effect and ε an error term.

The problem here is that although we may get an estimate of the ‘true’ average causal effect, this may ‘mask’ important heterogeneous effects of a causal nature. Even if we get the right answer that the average causal effect is 0, those who are ‘treated’ (X = 1) may have causal effects equal to –100 and those ‘not treated’ (X = 0) may have causal effects equal to +100. Contemplating being treated or not, most people would probably be interested in knowing about this underlying heterogeneity and would not consider the OLS average effect particularly enlightening.
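To see the point in numbers, here is a minimal simulation sketch (all numbers and the fifty-fifty split are hypothetical, chosen only for illustration): half the population has an individual causal effect of +100 and half of -100, so the average effect is zero, and that single average is all the OLS/difference-in-means estimate can report.

# Minimal illustration (hypothetical numbers): an average causal effect of zero
# can mask large, opposite-signed individual effects.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Two equally large sub-populations with opposite individual causal effects.
individual_effect = np.where(rng.random(n) < 0.5, 100.0, -100.0)

# Randomized 'treatment' assignment X, independent of the individual effect.
X = rng.integers(0, 2, size=n)

# Outcome: baseline + individual effect if treated + noise.
Y = 50.0 + individual_effect * X + rng.normal(0.0, 1.0, size=n)

# OLS slope of Y on X (the difference in group means under randomization).
beta_hat = np.cov(X, Y, bias=True)[0, 1] / np.var(X)

print(f"Estimated average effect: {beta_hat:.2f}")                     # approx. 0
print(f"Individual effects present: {np.unique(individual_effect)}")   # [-100. 100.]

The sketch illustrates the general masking point rather than the exact scenario above, where the heterogeneity is moreover tied to treatment status itself; either way, the single average number says nothing about the spread of the individual effects.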

Limiting model assumptions in science always have to be closely examined. If the mechanisms or causes that we isolate and handle in our models are to be stable when we ‘export’ them to our ‘target systems,’ we have to be able to show that they hold not only under ceteris paribus conditions; if they hold only there, they are, a fortiori, of limited value for our understanding, explanations or predictions of real-world systems.

Most ‘randomistas’ underestimate the heterogeneity problem. It does not just turn up as an external validity problem when trying to ‘export’ regression results to different times or different target populations. It is also often an internal problem in the millions of regression estimates that are produced every year.

Just like econometrics, randomization promises more than it can deliver, basically because it requires assumptions that in practice are not possible to maintain. And just like econometrics, randomization is basically a deductive method. Given the assumptions, these methods deliver deductive inferences. The problem, of course, is that we will never completely know when the assumptions are right. And although randomization may contribute to controlling for confounding, it does not guarantee it, since genuine randomness presupposes infinite experimentation and we know all real experimentation is finite. And even if randomization may help to establish average causal effects, it says nothing about individual effects unless homogeneity is added to the list of assumptions. Causal evidence generated by randomization procedures may be valid in ‘closed’ models, but what we usually are interested in is causal evidence about the real-world target systems we happen to live in.

‘Ideally controlled experiments’ tell us with certainty what causes what effects — but only given the right ‘closures.’ Making appropriate extrapolations from (ideal, accidental, natural or quasi) experiments to different settings, populations or target systems, is not easy. “It works there” is no evidence for “it will work here”. Causes deduced in an experimental setting still have to show that they come with an export-warrant to the target population/system. The causal background assumptions made have to be justified, and without licenses to export, the value of ‘rigorous’ and ‘precise’ methods — and ‘on-average-knowledge’ — is despairingly small.

In our days, serious arguments have been made from data. Beautiful, delicate theorems have been proved, although the connection with data analysis often remains to be established. And an enormous amount of fiction has been produced, masquerading as rigorous science …

Indeed, far-reaching claims have been made for the superiority of a quantitative template that depends on modeling — by those who manage to ignore the far-reaching assumptions behind the models. However, the assumptions often turn out to be unsupported by data. If so, the rigor of advanced quantitative methods is a matter of appearance rather than substance …

 David A. Freedman

What kind of evidence do RCTs provide?

6 May, 2021 at 14:01 | Posted in Theory of Science & Methodology | 3 Comments

Perhaps it is supposed that the assumptions for an RCT are generally more often met (or meetable) than those for other methods. What justifies that? Especially given that the easiest assumption to feel secure about for RCTs—that the assignment is done “randomly”—is far from enough to support orthogonality, which is itself only one among the assumptions that need support. I sometimes hear, “Only the RCT can control for unknown unknowns.” But nothing can control for unknowns that we know nothing about. There is no reason to suppose that, for a given conclusion, the causal knowledge that it takes to stop post‐randomization correlations in an RCT is always, or generally, more available or more reliable than the knowledge required for one or another of the other methods to be reliable.

It is also essential to be clear what the conclusion is. As with any study method, RCTs can only draw conclusions about the objects studied—for the RCT, the population enrolled in the trial, which is seldom the one we are interested in. The RCT method can be expanded of course to include among its assumptions that the trial population is a representative sample of the target. Then it follows deductively that the difference in mean outcomes between treatment and control groups is an unbiased estimate of the ATE of the target population. How often are we warranted in assuming that, though, and on what grounds? Without this assumption, an RCT is just a voucher for claims about any population except the trial population. What then justifies placing it above methods that are clinchers for claims we are really interested in—about target populations?

Nancy Cartwright

Sex and the problem with interventionist definitions of causation

5 May, 2021 at 12:28 | Posted in Theory of Science & Methodology | Leave a comment

We suggest that “causation” is not univocal. There is a counterfactual/interventionist notion of causation—of use when one is designing a public policy to intervene and solve a problem—and an historical, or more exactly, etiological notion—often of use when one is identifying a problem to solve …

Consider sex: Susan did not get the job she applied for because the prejudiced employer took her to be a woman; she presented as a woman because she was raised as a girl; she was raised as a girl because she was biologically female; and so on. The causation is palpable—Susan’s sex caused her not to get the job she applied for. The counterfactual, if Susan were male and had applied for the job, she would have gotten it, suggests a vague, miraculous transformation of Susan into some unspecified male (maybe one with the same qualifications, provided Susan did not attend any all-female schools)— but it makes no literal sense as a practical intervention. Suppose, however, a past intervention to make Susan male, say one of her X chromosomes was to be changed to some Y in utero.

To make the counterfactual come out true, the intervention must be expanded to also bring it about that in the course of life as an adult male she applies for the job. Pretty much all of the world history that would interact with her in the course of her male life would have to be intervened upon to bring it about that she, as a male, applied for the job. That would be a remarkably prescient intervention indeed and certainly not a reasonable one. The counterfactual, if Susan had been made a male in utero, Susan would have gotten the job, is almost certainly not true. Etiological causation does not direct us to practical interventions—for that, we need to focus on other causes that are feasibly and ethically manipulable. But it can provide us with a rationale for wanting to change outcomes: Susan did not get the job because of a biological fact about her that is irrelevant to her qualifications, and we think that is unjust.

Glymour & Glymour

Hunting for causes (wonkish)

30 Apr, 2021 at 11:37 | Posted in Theory of Science & Methodology | Leave a comment

There are three fundamental differences between statistical and causal assumptions. First, statistical assumptions, even untested, are testable in principle, given sufficiently large sample and sufficiently fine measurements. Causal assumptions, in contrast, cannot be verified even in principle, unless one resorts to experimental control. This difference is especially accentuated in Bayesian analysis. Though the priors that Bayesians commonly assign to statistical parameters are untested quantities, the sensitivity to these priors tends to diminish with increasing sample size. In contrast, sensitivity to priors of causal parameters … remains non-zero regardless of (non-experimental) sample size.

Second, statistical assumptions can be expressed in the familiar language of probability calculus, and thus assume an aura of scholarship and scientific respectability. Causal assumptions, as we have seen before, are deprived of that honor, and thus become immediate suspect of informal, anecdotal or metaphysical thinking. Again, this difference becomes illuminated among Bayesians, who are accustomed to accepting untested, judgmental assumptions, and should therefore invite causal assumptions with open arms—they don’t. A Bayesian is prepared to accept an expert’s judgment, however esoteric and untestable, so long as the judgment is wrapped in the safety blanket of a probability expression. Bayesians turn extremely suspicious when that same judgment is cast in plain English, as in “mud does not cause rain” …

The third resistance to causal (vis-a-vis statistical) assumptions stems from their intimidating clarity. Assumptions about abstract properties of density functions or about conditional independencies among variables are, cognitively speaking, rather opaque, hence they tend to be forgiven, rather than debated. In contrast, assumptions about how variables cause one another are shockingly transparent, and tend therefore to invite counter-arguments and counter-hypotheses.

Judea Pearl

Pearl’s seminal contributions to this research field are well-known and indisputable. But on the ‘taming’ and ‘resolve’ of the issues, yours truly however has to admit that — under the influence of especially David Freedman and Nancy Cartwright — he still has some doubts on the reach, especially in terms of realism and relevance, of Pearl’s ‘do-calculus solutions’ for social sciences in general and economics in particular (see here, here, here and here). The distinction between the causal — ‘interventionist’ — E[Y|do(X)] and the more traditional statistical — ‘conditional expectationist’ — E[Y|X] is crucial, but Pearl and his associates, although they have fully explained why the first is so important, have to convince us that it (in a relevant way) can be exported from ‘engineer’ contexts, where it arguably applies easily and universally, to socio-economic contexts where ‘surgery’, ‘hypothetical minimal interventions’, ‘manipulativity’, ‘faithfulness’, ‘stability’, and ‘modularity’ are perhaps not so universally at hand.

CAUSES on Twitter: ""Right now, whole genome testing is most useful for  helping unravel the mystery for parents of children with rare disorders; it  can provide an answer about the cause, butWhat capacity a treatment has to contribute to an effect for an individual depends on the underlying structures – physiological, material, psychological, cultural and economic – that makes some causal pathways possible for that individual and some not, some likely and some unlikely. This is a well recognised problem when it comes to making inferences from model organisms to people. But it is equally a problem in making inferences from one person to another or from one population to another. Yet in these latter cases it is too often downplayed. When the problem is explicitly noted, it is often addressed by treating the underlying structures as moderators in the potential outcomes equation: give a name to a structure-type – men/women, old/young, poor/well off, from a particular ethnic background, member of a particular religious or cultural group, urban/rural, etc. Then introduce a yes-no moderator variable for it. Formally this can be done, and sometimes it works well enough. But giving a name to a structure type does nothing towards telling us what the details of the structure are that matter nor how to identify them. In particular, the usual methods for hunting moderator variables, like subgroup analysis, are of little help in uncovering what the aspects of a structure are that afford the causal pathways of interest. Getting a grip on what structures support similar causal pathways is central to using results from one place as evidence about another, and a casual treatment of them is likely to lead to mistaken inferences. The methodology for how to go about this is under developed, or at best under articulated, in EBM, possibly because it cannot be well done with familiar statistical methods and the ways we use to do it are not manualizable. It may be that medicine has fewer worries here than do social science and social policy, due to the relative stability of biological structures and disease processes. But this is no excuse for undefended presumptions about structural similarity.

Nancy Cartwright

John von Neumann on mathematics

27 Mar, 2021 at 10:22 | Posted in Theory of Science & Methodology | 2 Comments


Wissenschaftler irren (Scientists err)

27 Mar, 2021 at 10:16 | Posted in Theory of Science & Methodology | Comments Off on Wissenschaftler irren


Bayesianism — a wrong-headed pseudoscience

24 Mar, 2021 at 15:00 | Posted in Theory of Science & Methodology | Comments Off on Bayesianism — a wrong-headed pseudoscience

The occurrence of unknown prior probabilities, that must be stipulated arbitrarily, does not worry the Bayesian anymore than God’s inscrutable designs worry the theologian. Thus Lindley (1976), one of the leaders of the Bayesian school, holds that this difficulty has been ‘grossly exaggerated’. And he adds: ‘I am often asked if the [Bayesian] method gives the right answer: or, more particularly, how do you know if you have got the right prior [probability]. My reply is that I don’t know what is meant by ‘right’ in this context. The Bayesian theory is about coherence, not about right or wrong.’ Thus the Bayesian, along with the philosopher who only cares about the cogency of arguments, fits in with the reasoning madman …

One should not confuse the objective probabilities of random events with mere intuitive likelihoods of such events or the plausibility (or verisimilitude) of the corresponding hypotheses in the light of background knowledge. As Peirce (1935: p. 363) put it, this confusion ‘is a fertile source of waste of time and energy’. A clear case of such waste is the current proliferation of rational-choice theories in the social sciences, to model processes that are far from random, from marriage to crime to business transactions to political struggles.

Mario Bunge

On the poverty of deductivism

23 Mar, 2021 at 09:34 | Posted in Theory of Science & Methodology | 1 Comment

In mainstream macroeconomics, there has long been an insistence on formalistic (mathematical) modelling, and according to some economic methodologists (e.g. Lawson 2015, Syll 2016) this has forced economists to give up on realism and substitute axiomatics for real-world relevance. According to the critique, the deductivist orientation has been the main reason behind the difficulty that mainstream economics has had in terms of understanding, explaining and predicting what takes place in modern economies. But it has also given mainstream economics much of its discursive power – at least as long as no one starts asking tough questions about the veracity of — and justification for — the assumptions on which the deductivist foundation is erected.

The kind of formal-analytical and axiomatic-deductive mathematical modelling that makes up the core of mainstream economics is hard to make compatible with a real-world ontology. It is also the reason why so many critics find mainstream economic analysis patently and utterly unrealistic and irrelevant.

Although there has been a clearly discernible increase and focus on ‘empirical’ economics in recent decades, the results in these research fields have not fundamentally challenged the main deductivist direction of mainstream economics. They are still mainly framed and interpreted within the core ‘axiomatic’ assumptions of individualism, instrumentalism and equilibrium that make up even the ‘new’ mainstream economics. Although, perhaps, a sign of an increasing – but highly path-dependent — theoretical pluralism, mainstream economics is still, from a methodological point of view, mainly a deductive project erected on a formalist foundation.

If macroeconomic theories and models are to confront reality there are obvious limits to what can be said ‘rigorously’ in economics.  For although it is generally a good aspiration to search for scientific claims that are both rigorous and precise, the chosen level of precision and rigour must be relative to the subject matter studied.  An economics that is relevant to the world in which we live can never achieve the same degree of rigour and precision as in logic, mathematics or the natural sciences.

An example of a logically valid deductive inference (whenever ‘logic’ is used here it refers to deductive/analytical logic) may look like this:

Premise 1: All Chicago economists believe in the rational expectations hypothesis (REH)
Premise 2: Bob is a Chicago economist
—————————————————————–
Conclusion: Bob believes in REH
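Written in standard first-order notation (with C(x) for ‘x is a Chicago economist’, R(x) for ‘x believes in REH’ and b for Bob; the symbols are introduced here purely for illustration), the argument is just universal instantiation followed by modus ponens:

∀x (C(x) → R(x)), C(b) ⊢ R(b)

Its validity is a matter of this form alone, which is exactly the point made below: deduction is ‘explicative’, so nothing appears in the conclusion that was not already contained in the premises.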

In hypothetico-deductive reasoning — hypothetico-deductive confirmation in this case — we would use the conclusion to test the law-like hypothesis in premise 1 (according to the hypothetico-deductive model, a hypothesis is confirmed by evidence if the evidence is deducible from the hypothesis). If Bob does not believe in REH we have gained some warranted reason for non-acceptance of the hypothesis (an obvious shortcoming here being that further information beyond that given in the explicit premises might have given another conclusion).

The hypothetico-deductive method (in case we treat the hypothesis as absolutely sure/true, we should rather talk of an axiomatic-deductive method) basically means that we

• Posit a hypothesis
• Infer empirically testable propositions (consequences) from it
• Test the propositions through observation or experiment
• Depending on the testing results either find the hypothesis corroborated or falsified.

However, in science we regularly use a kind of ‘practical’ argumentation where there is little room for applying the restricted logical ‘formal transformations’ view of validity and inference. Most people would probably accept the following argument as ‘valid’ reasoning even though, from a strictly logical point of view, it is invalid:

Premise 1: Bob is a Chicago economist
Premise 2: The recorded proportion of Keynesian Chicago economists is zero
————————————————————————–
Conclusion: So, certainly, Bob is not a Keynesian economist

In science, contrary to what you find in most logic textbooks, only a few arguments are settled by showing that ‘All Xs are Ys.’ In scientific practice we instead present other-than-analytical explicit warrants and backings — data, experience, evidence, theories, models — for our inferences. As long as we can show that our ‘deductions’ or ‘inferences’ are justifiable and have well-backed warrants, other scientists will listen to us. That our scientific ‘deductions’ or ‘inferences’ are logical non-entailments simply is not a problem. To think otherwise is committing the fallacy of misapplying formal-analytical logic categories to areas where they are irrelevant or simply beside the point.

Scientific arguments are not analytical arguments, where validity is solely a question of formal properties. Scientific arguments are substantial arguments. Whether Bob is a Keynesian or not is not something we can decide on the basis of formal properties of statements/propositions. We have to check out what he has actually been writing and saying to see if the hypothesis that he is a Keynesian is true or not.

In a deductive-nomological explanation — also known as a covering law explanation — we would try to explain why Bob believes in REH with the help of the two premises (in this case actually giving an explanation with little explanatory value). These kinds of explanations — both in their deterministic and statistical/probabilistic versions — rely heavily on deductive entailment from premises that are assumed to be true. But they have precious little to say on where these assumed-to-be-true premises come from.

The deductive logic of confirmation and explanation may work well — given that they are used in deterministic closed models. In mathematics, the deductive-axiomatic method has worked just fine. But science is not mathematics. Conflating those two domains of knowledge has been one of the most fundamental mistakes made in the science of economics. Applying the deductive-axiomatic method to real-world systems immediately proves it to be excessively narrow and irrelevant. Both the confirmatory and the explanatory varieties of hypothetico-deductive reasoning fail, since there is no way you can relevantly analyse confirmation or explanation as a purely logical relation between hypothesis and evidence, or between law-like rules and explananda. In science we argue and try to substantiate our beliefs and hypotheses with reliable evidence — propositional and predicate deductive logic, on the other hand, is not about reliability, but about the validity of the conclusions given that the premises are true.

Deduction — and the inferences that go with it — is an example of ‘explicative reasoning,’ where the conclusions we make are already included in the premises. Deductive inferences are purely analytical and it is this truth-preserving nature of deduction that makes it different from all other kinds of reasoning. But it is also its limitation, since truth in the deductive context does not refer to a real world ontology (only relating propositions as true or false within a formal-logic system) and as an argument scheme, deduction is totally non-ampliative: the output of the analysis is nothing else than the input.

Just to give an economics example, consider the following rather typical, but also uninformative and tautological, deductive inference:

Premise 1: The firm seeks to maximise its profits
Premise 2: The firm maximises its profits when marginal cost equals marginal revenue
——————————————————
Conclusion: The firm will operate its business at the equilibrium where marginal cost equals marginal revenue

This is as empty as deductive-nomological explanations of singular facts building on simple generalizations:

Premise 1: All humans are less than 20 feet tall
Premise 2: Bob is a human
——————————————————–
Conclusion: Bob is less than 20 feet tall

Although a logically valid inference, this is not much of an explanation (since we would still probably want to know why all humans are less than 20 feet tall).

Deductive-nomological explanations also often suffer from a kind of emptiness that emanates from a lack of real (causal) connection between premises and conclusions:

Premise 1: All humans that take birth control pills do not get pregnant
Premise 2: Bob took birth control pills
——————————————————–
Conclusion: Bob did not get pregnant

Most people would probably not consider this much of a real explanation.

Learning new things about reality demands something other than reasoning in which the knowledge is already embedded in the premises. These other kinds of reasoning — induction and abduction — may give good, but not conclusive, reasons. That is the price we have to pay if we want to have something substantial and interesting to say about the real world.

.

References

Lawson, Tony (2015): Essays on the nature and state of modern economics. Routledge.

Syll, Lars (2016): On the use and misuse of theories and models in economics. WEA Books.

Realism and antirealism in social science

22 Mar, 2021 at 16:19 | Posted in Theory of Science & Methodology | Comments Off on Realism and antirealism in social science

The situation started to change in the 1960s, when antirealism went on the rampage in the social studies community as well as in Anglo-American philosophy. This movement seems to have had two sources, one philosophical, the other political. The former was a reaction against positivism, which was (mistakenly but conveniently) presented as objectivist simply because it shunned mental states.

I submit that the political source of contemporary antirealism was the rebellion of the Vietnam war generation against the ‘establishment’. The latter was (wrongly) identified with the power behind science and proscientific philosophy. So, fighting science and proscientific philosophy was taken to be part of the fight against the ‘establishment’. But, of course, the people who took this stand were shooting themselves in the foot, or rather in the head, for any successful political action, whether from below or from above, must assume that the adversary is real and can be known. Indeed, if the world were a figment of our imagination, we would people it only with friends …

Breit (1984, p. 20) asks why John K. Galbraith and Milton Friedman, two of the most distinguished social scientists of our time, could have arrived at conflicting views of economic reality. He answers: “there is no world out there which we can unambiguously compare with Friedman’s and Galbraith’s versions. Galbraith and Friedman did not discover the worlds they analyze; they decreed them”. He then compares economists to painters: “each offers a new way of seeing, of organizing experience”, of “imposing order on sensory data”. In this perspective the problems of objective truth and of the difference between science and nonscience do not arise. On the other hand we are left wondering why on earth anyone should hire economists rather than painters to cope with economic issues.

Mario Bunge

What makes knowledge in social sciences possible is the fact that society consists of social structures and positions that influence the individuals of society, partly through their being the necessary prerequisite for the actions of individuals but also because they dispose individuals to act (within a given structure) in a certain way. These structures constitute the ‘deep structure’ of society.

Our observations and theories are concept-dependent without therefore necessarily being concept-determined. There is a reality existing independently of our knowledge and theories of it. Although we cannot apprehend it without using our concepts and theories, these are not the same as reality itself. Reality and our concepts of it are not identical. Social science is made possible by existing structures and relations in society that are continually reproduced and transformed by different actors.

Explanations and predictions of social phenomena require theory constructions. Just looking for correlations between events is not enough. One has to get under the surface and see the deeper underlying structures and mechanisms that essentially constitute the social system.

The basic question one has to pose when studying social relations and events is: what are the fundamental relations without which they would cease to exist? The answer will point to causal mechanisms and tendencies that act in the concrete contexts we study. Whether these mechanisms are activated, and what effects they will have in that case, is not possible to predict, since this depends on accidental and variable relations. Every social phenomenon is determined by a host of both necessary and contingent relations, and it is impossible in practice to have complete knowledge of these constantly changing relations. That is also why we can never confidently predict them. What we can do, through learning about the mechanisms of the structures of society, is to identify the driving forces behind them, thereby making it possible to indicate the direction in which things tend to develop.

The world itself should never be conflated with the knowledge we have of it. Science can only produce meaningful, relevant and realist knowledge if it acknowledges its dependence on the world out there. Ultimately that also means that the critique yours truly wages against mainstream economics is that it doesn’t take that ontological requirement seriously.

Dialektik der Aufklärung

20 Mar, 2021 at 09:31 | Posted in Theory of Science & Methodology | Comments Off on Dialektik der Aufklärung


Bayesianism — a scientific cul-de-sac

20 Mar, 2021 at 09:21 | Posted in Theory of Science & Methodology | 2 Comments

The bias toward the superficial and the response to extraneous influences on research are both examples of real harm done in contemporary social science by a roughly Bayesian paradigm of statistical inference as the epitome of empirical argument. For instance the dominant attitude toward the sources of black-white differential in United States unemployment rates (routinely the rates are in a two to one ratio) is “phenomenological.” The employment differences are traced to correlates in education, locale, occupational structure, and family background. The attitude toward further, underlying causes of those correlations is agnostic … Yet on reflection, common sense dictates that racist attitudes and institutional racism must play an important causal role. People do have beliefs that blacks are inferior in intelligence and morality, and they are surely influenced by these beliefs in hiring decisions … Thus, an overemphasis on Bayesian success in statistical inference discourages the elaboration of a type of account of racial disadvantages that almost certainly provides a large part of their explanation.

For all scholars seriously interested in questions on what makes up a good scientific explanation, Richard Miller’s Fact and Method is a must read. His incisive critique of Bayesianism is still unsurpassed.

Assume you’re a Bayesian turkey and hold a nonzero probability belief in the hypothesis H that “people are nice vegetarians that do not eat turkeys,” and every day you see the sun rise confirms your belief. For every day you survive, you update your belief according to Bayes’ Rule

P(H|e) = [P(e|H)P(H)]/P(e),

where evidence e stands for “not being eaten” and P(e|H) = 1. Given that there do exist other hypotheses than H, P(e) is less than 1 and a fortiori P(H|e) is greater than P(H). Every day you survive increases your probability belief that you will not be eaten. This is totally rational according to the Bayesian definition of rationality. Unfortunately — as Bertrand Russell famously noticed — for every day that goes by, the traditional Christmas dinner also gets closer and closer …
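A minimal numerical sketch of the turkey’s daily updating (the prior and the likelihood P(e|not-H) are hypothetical numbers chosen only for illustration):

# Bayesian turkey: updating each day on the evidence e = "not eaten today".
# P(e|H) = 1 as in the text; the prior and P(e|not-H) are made-up numbers.
p_H = 0.5             # prior belief in H: "people are nice vegetarians"
p_e_given_H = 1.0     # if H is true, the turkey is never eaten
p_e_given_notH = 0.9  # even if H is false, most days pass without being eaten

for day in range(300):
    p_e = p_e_given_H * p_H + p_e_given_notH * (1.0 - p_H)  # total probability of e
    p_H = p_e_given_H * p_H / p_e                            # Bayes' rule: P(H|e)

print(f"Belief in H after 300 uneventful days: {p_H:.4f}")   # creeps towards 1

The posterior duly approaches 1 right up to the last day, which is Russell’s point.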

Bayesianism — a patently absurd approach to science

16 Mar, 2021 at 16:12 | Posted in Theory of Science & Methodology | 17 Comments

Mainstream economics nowadays usually assumes that agents that have to make choices under conditions of uncertainty behave according to Bayesian rules (preferably the ones axiomatised by Ramsey (1931), de Finetti (1937) or Savage (1954)) — that is, they maximise expected utility with respect to some subjective probability measure that is continually updated according to Bayes’ theorem. If not, they are supposed to be irrational, and ultimately — via some “Dutch book” or “money pump” argument — susceptible to being ruined by some clever “bookie”.

Bayesianism reduces questions of rationality to questions of internal consistency (coherence) of beliefs. But — even granted this questionable reductionism — do rational agents really have to be Bayesian? There are no strong warrants for believing so.

In many of the situations that are relevant to economics, one could argue that there is simply not enough adequate and relevant information to ground beliefs of a probabilistic kind, and that in those situations it is not really possible, in any relevant way, to represent an individual’s beliefs in a single probability measure.

Say you have come to learn (based on your own experience and tons of data) that the probability of you becoming unemployed in the US is 10%. Having moved to another country (where you have no experience of your own and no data) you have no information on unemployment and a fortiori nothing on which to base any probability estimate. A Bayesian would, however, argue that you have to assign probabilities to the mutually exclusive alternative outcomes and that these have to add up to 1 if you are rational. That is, in this case — and based on symmetry — a rational individual would have to assign probability 10% to becoming unemployed and 90% to becoming employed.

That feels intuitively wrong though, and I guess most people would agree. Bayesianism cannot distinguish symmetry-based probabilities grounded in information from symmetry-based probabilities grounded in an absence of information. In these kinds of situations, most of us would rather say that it is simply irrational to be a Bayesian and better instead to admit that we “simply do not know” or that we feel ambiguous and undecided. Arbitrary and ungrounded probability claims are more irrational than being undecided in the face of genuine uncertainty, so if there is not sufficient information to ground a probability distribution it is better to acknowledge that simpliciter, rather than pretending to possess a certitude that we simply do not possess.

We live in a world permeated by unmeasurable uncertainty – not quantifiable stochastic risk – which often forces us to make decisions based on anything but rational expectations. Sometimes we ‘simply do not know.’ There are no strong reasons why we should accept the Bayesian view of modern mainstream economists, according to whom expectations “tend to be distributed, for the same information set, about the prediction of the theory.” As argued by Keynes, we rather base our expectations on the confidence or “weight” we put on different events and alternatives. Expectations are a question of weighing probabilities by ‘degrees of belief,’ beliefs that standardly have preciously little to do with the kind of stochastic probabilistic calculations made by the rational agents modelled by mainstream economists.

Back in 1991, when yours truly earned his first PhD with a dissertation on decision making and rationality in social choice theory and game theory, I concluded that “repeatedly it seems as though mathematical tractability and elegance — rather than realism and relevance — have been the most applied guidelines for the behavioural assumptions being made. On a political and social level, it is doubtful if the methodological individualism, ahistoricity and formalism they are advocating are especially valid.”

This, of course, was like swearing in church. My mainstream colleagues were — to say the least — not exactly überjoyed.

One of my inspirations when working on that dissertation was Henry Kyburg, and I still think his critique is the ultimate take-down of Bayesian hubris:

From the point of view of the “logic of consistency”, no set of beliefs is more rational than any other, so long as they both satisfy the quantitative relationships expressed by the fundamental laws of probability. Thus I am free to assign the number 1/3 to the probability that the sun will rise tomorrow; or, more cheerfully, to take the probability to be 9/10 that I have a rich uncle in Australia who will send me a telegram tomorrow informing me that he has made me his sole heir. Neither Ramsey, nor Savage, nor de Finetti, to name three leading figures in the personalistic movement, can find it in his heart to detect any logical shortcomings in anyone, or to find anyone logically culpable, whose degrees of belief in various propositions satisfy the laws of the probability calculus, however odd those degrees of belief may otherwise be …

Now this seems patently absurd. It is to suppose that even the most simple statistical inferences have no logical weight where my beliefs are concerned. It is perfectly compatible with these laws that I should have a degree of belief equal to 1/4 that this coin will land heads when next I toss it; and that I should then perform a long series of tosses (say, 1000), of which 3/4 should result in heads; and then that on the 1001st toss, my belief in heads should be unchanged at 1/4 …

Henry E. Kyburg
