Bayesian networks and causal diagrams

14 October, 2018 at 09:09 | Posted in Theory of Science & Methodology | 3 Comments

Whereas a Bayesian network can only tell us how likely one event is, given that we observed another, causal diagrams can answer interventional and counterfactual questions. For example, the causal fork A <– B –> C tells us in no uncertain terms that wiggling A would have no effect on C, no matter how intense the wiggle. On the other hand, a Bayesian network is not equipped to handle a ‘wiggle,’ or to tell the difference between seeing and doing, or indeed to distinguish a fork from a chain [A –> B –> C]. In other words, both a chain and a fork would predict that observed changes in A are associated with changes in C, making no prediction about the effect of ‘wiggling’ A.
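
To make the 'seeing versus doing' contrast concrete, here is a minimal simulation sketch in Python (the linear relations and coefficients are invented purely for illustration): conditioning on an observed value of A changes what we expect of C under both the fork and the chain, while an intervention on A moves C only under the chain.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def noise():
    return rng.normal(size=n)

# Fork: A <- B -> C (B is a common cause; the linear coefficients are arbitrary)
B = noise()
A = 2 * B + noise()
C = -1.5 * B + noise()

# 'Seeing': observing A tells us something about C, because both reflect B
print("fork, seeing:", C[A > 2].mean(), "vs", C[A < -2].mean())

# 'Doing': under do(A = 10) the equation generating C (which only involves B)
# is untouched, so C keeps exactly the distribution it already had
C_do_fork = -1.5 * B + noise()
print("fork, doing:", C_do_fork.mean())   # ~0, however hard we wiggle A

# Chain: A -> B -> C produces the same kind of observational association ...
A2 = noise()
B2 = 2 * A2 + noise()
C2 = -1.5 * B2 + noise()
print("chain, seeing:", C2[A2 > 2].mean(), "vs", C2[A2 < -2].mean())

# ... but here do(A = 10) propagates through B to C
B2_do = 2 * 10.0 + noise()
C2_do = -1.5 * B2_do + noise()
print("chain, doing:", C2_do.mean())      # ~ -30
```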


Is economic consensus a good thing?

7 October, 2018 at 13:51 | Posted in Theory of Science & Methodology | 1 Comment

No, it is not — and here’s one strong reason why:

The mere existence of consensus is not a useful guide. We should ask: Does a consensus have its origins and its ground in a rational and comprehensive appraisal of substantial evidence? Has the available evidence been open to vigorous challenge, and has it met each challenge? … A consensus that lacks these origins is of little consequence precisely because it lacks these origins. Knowing the current consensus is helpful in forecasting a vote; having substantial evidence is helpful in judging what is true. That something is standardly believed or assumed is not, by itself, a reason to believe or assume it. Error and confusion are standard conditions of the human mind.

The challenges of tractability in economic modelling

2 October, 2018 at 10:55 | Posted in Theory of Science & Methodology | Leave a comment

There is a general sense in which the whole idea of model and model-based science derive from the need for tractability. The real world out there is far too intractable to be examined directly … therefore one directly examines the more tractable model worlds … Many of these tractability-enhancing assumptions are made because the math that is being used requires it. They enhance the mathematical tractability of models. This is not riskless … They can be quite harmful if tractability and negligibility do not go hand in hand, that is, if the unrealisticness of a tractability-enhancing assumption is not negligible. The dominance of mere tractability in models may have unfortunate consequences.

Uskali Mäki

Using 'simplifying' tractability assumptions — rational expectations, common knowledge, representative agents, linearity, additivity, ergodicity, etc. — because the models otherwise cannot be 'manipulated' or made to deliver 'rigorous' and 'precise' predictions and explanations does not exempt economists from having to justify their modelling choices. Being able to 'manipulate' things in models cannot per se be enough to warrant a methodological choice. If economists do not think their tractability assumptions make for good and realist models, it is certainly fair to ask for clarification of the ultimate goal of the whole modelling endeavour.

Take, for example, the ongoing discussion on rational expectations as a modelling assumption. Those who want to build macroeconomics on microfoundations usually maintain that the only robust policies are those based on rational expectations and representative-actor models. As yours truly has tried to show in On the use and misuse of theories and models in mainstream economics, there is really no support for this conviction at all. If microfounded macroeconomics has nothing to say about the real world and the economic problems out there, why should we care about it? The final court of appeal for macroeconomic models is not whether we — once we have made our tractability assumptions — can 'manipulate' them, but the real world. And as long as no convincing justification is put forward for how the inferential bridging de facto is made, macroeconomic model building is little more than hand-waving that gives us little warrant for making inductive inferences from models to real-world target systems.

Mainstream economists construct closed formalistic-mathematical theories and models for the purpose of being able to deliver purportedly rigorous deductions that may somehow be exportable to the target system. By analyzing a few causal factors in their 'laboratories' they hope they can perform 'thought experiments' and observe how these factors operate on their own and without impediments or confounders.

Unfortunately, this is not so. The reason is that economic causes never act in a socio-economic vacuum. Causes have to be set in a contextual structure to be able to operate. This structure has to take some form or other, but instead of incorporating structures that are true to the target system, the settings made in economic models are rather based on formalistic mathematical tractability. In the models they often appear as unrealistic 'tractability' assumptions, usually playing a decisive role in getting the deductive machinery to deliver 'precise' and 'rigorous' results. This, of course, makes exporting to real-world target systems problematic, since these models – as part of a deductivist covering-law tradition in economics – are thought to deliver general and far-reaching conclusions that are externally valid. But how can we be sure the lessons learned in these theories and models have external validity when they are based on highly specific unrealistic assumptions? As a rule, the more specific and concrete the structures, the less generalizable the results. Admitting that we in principle can move from (partial) falsehoods in theories and models to truth in real-world target systems does not take us very far unless a thorough explication of the relation between theory, model and real-world target system is made. To have a deductive warrant for things happening in a closed model is no guarantee that they are preserved when applied to an open real-world target system.

If the ultimate criterion of success for a deductivist system is the extent to which it predicts and coheres with (parts of) reality, modern mainstream economics seems to be a hopeless misallocation of scientific resources. To focus scientific endeavours on proving things in models is a gross misapprehension of what an economic theory ought to be about. Real-world economic systems do not conform to the restricted closed-system structure the mainstream modelling strategy presupposes.

What is wrong with mainstream economics is not that it employs models per se. What is wrong is that it employs poor models. They — and the tractability assumptions on which they to a large extent build — are poor because they do not bridge to the real-world target system in which we live.

Tony Lawson vs Uskali Mäki

17 September, 2018 at 17:10 | Posted in Theory of Science & Methodology | 3 Comments

We are all realists and we all — Mäki, Cartwright, and I — self-consciously present ourselves as such. The most obvious research-guiding commonality, perhaps, is that we do all look at the ontological presuppositions of economics or economists.

Where we part company, I believe, is that I want to go much further. I guess I would see their work as primarily analytical and my own as more critically constructive or dialectical. My goal is less the clarification of what economists are doing and presupposing as seeking to change the orientation of modern economics … Specifically, I have been much more prepared than the other two to criticise the ontological presuppositions of economists—at least publically. I think Mäki is probably the most guarded. I think too he is the least critical, at least of the state of modern economics …

One feature of Mäki’s work that I am not overly convinced by, but which he seems to value, is his method of theoretical isolation (Mäki 1992). If he is advocating it as a method for social scientific research, I doubt it will be found to have much relevance—for reasons I discuss in Economics and reality (Lawson 1997). But if he is just saying that the most charitable way of interpreting mainstream economists is that they are acting on this method, then fine. Sometimes, though, he seems to imply more …

I cannot get enthused by Mäki’s concern to see what can be justified in contemporary formalistic modelling endeavours. The insights, where they exist, seem so obvious, circumscribed, and tagged on anyway …

As I view things, anyway, a real difference between Mäki and me is that he is far less, or less openly, critical of the state and practices of modern economics … Mäki seems more inclined to accept mainstream economic contributions as largely successful, or anyway uncritically. I certainly do not think we can accept mainstream contributions as successful, and so I proceed somewhat differently …

So if there is a difference here it is that Mäki more often starts out from mainstream academic economic analyses accepted rather uncritically, whilst I prefer to start from those everyday practices widely regarded as successful.

Tony Lawson

Lawson and Mäki are both highly influential contemporary students of economic methodology and philosophy. Yours truly has learned a lot from both of them. Although it is probably to a certain degree also a question of 'temperament,' I find Lawson's 'critical realist' critique of mainstream economic theories and models deeper and more convincing than Mäki's more 'distanced' and less critical approach. Mäki's 'detached' style probably reflects the fact that he is a philosopher with an interest in economics, rather than an economist. Being an economist, I find it easier to see the relevance of Lawson's ambitious and far-reaching critique of mainstream economics than to value Mäki's often rather arduous application of the analytic-philosophical tool-kit, typically less ambitiously aiming for mostly conceptual and terminological 'clarifications.'

RCTs risk distorting our knowledge base

13 September, 2018 at 14:14 | Posted in Theory of Science & Methodology | Comments Off on RCTs risk distorting our knowledge base

The claimed hierarchy of methods, with randomized assignment being deemed inherently superior to observational studies, does not survive close scrutiny. Despite frequent claims to the contrary, an RCT does not equate counterfactual outcomes between treated and control units. The fact that systematic bias in estimating the mean impact vanishes in expectation (under ideal conditions) does not imply that the (unknown) experimental error in a one-off RCT is less than the (unknown) error in some alternative observational study. We obviously cannot know that. A biased observational study with a reasonably large sample size may well be closer to the truth in specific trials than an underpowered RCT …

The questionable claims made about the superiority of RCTs as the “gold standard” have had a distorting influence on the use of impact evaluations to inform development policymaking, given that randomization is only feasible for a non-random subset of policies. When a program is community- or economy-wide or there are pervasive spillover effects from those treated to those not, an RCT will be of little help, and may well be deceptive. The tool is only well suited to a rather narrow range of development policies, and even then it will not address many of the questions that policymakers ask. Advocating RCTs as the best, or even only, scientific method for impact evaluation risks distorting our knowledge base for fighting poverty.

Martin Ravallion

Laplace and the principle of insufficient reason (wonkish)

3 September, 2018 at 17:04 | Posted in Theory of Science & Methodology | 1 Comment

After their first night in paradise, and having seen the sun rise in the morning, Adam and Eve were wondering if they were to experience another sunrise or not. Given the rather restricted sample of sunrises experienced, what could they expect? According to Laplace’s rule of succession, the probability of an event E happening after it has occurred n times is

p(E|n) = (n+1)/(n+2).

The probabilities can be calculated using Bayes’ rule, but to get the calculations going, Adam and Eve must have an a priori probability (a base rate) to start with. The Bayesian rule of thumb is to simply assume that all outcomes are equally likely. Applying this rule Adam’s and Eve’s probabilities become 1/2, 2/3, 3/4 …
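
As a quick numerical check, the sketch below (Python, with a uniform grid prior standing in for the principle of indifference) reproduces the rule-of-succession numbers both from the closed-form formula and from brute-force Bayesian updating.

```python
import numpy as np

# Laplace's rule of succession: after n sunrises (and no failures), with a uniform
# prior on the unknown sunrise probability, P(one more sunrise) = (n+1)/(n+2).
def rule_of_succession(n):
    return (n + 1) / (n + 2)

print([rule_of_succession(n) for n in range(4)])   # 1/2, 2/3, 3/4, 4/5

# The same number from brute-force Bayesian updating on a grid prior
p = np.linspace(0, 1, 100_001)        # candidate values of the sunrise probability
prior = np.ones_like(p) / len(p)      # uniform prior: the principle of indifference
n = 3
posterior = prior * p**n              # likelihood of n sunrises in a row
posterior /= posterior.sum()
print((posterior * p).sum())          # ~0.8, matching (3+1)/(3+2)
```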

Now this seems rather straightforward, but as Keynes already noted in his Treatise on Probability (1921), there might be a problem here. The problem has to do with the prior probability and where it is assumed to come from. Is the appeal to the principle of insufficient reason – the principle of indifference – really warranted? Elaborating on Keynes's example, Donald Gillies' wine-water paradox in Philosophical Theories of Probability (2000) shows that it may not be so straightforward after all.

Assume there is a certain quantity of liquid containing wine and water mixed so that the ratio of wine to water (r) is between 1/3 and 3/1. What is then the probability that r ≤ 2? The principle of insufficient reason means that we have to treat all r-values as equiprobable, assigning a uniform probability distribution between 1/3 and 3/1, which gives the probability of r ≤ 2 = [(2-1/3)/(3-1/3)] = 5/8.

But to say r ≤ 2 is equivalent to saying that 1/r ≥ 1/2. Applying the principle now, however, gives the probability of 1/r ≥ 1/2 = [(3-1/2)/(3-1/3)] = 15/16. So we seem to get two different answers, both following from the same principle of insufficient reason. Given this unsolved paradox, we have good reason to stick with Keynes (and be sceptical of Bayesianism).
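
The clash is easy to see in a few lines of code. This toy sketch reproduces both numbers and also shows by simulation that a distribution cannot be uniform in r and in 1/r at the same time, which is exactly where the paradox bites.

```python
import numpy as np

lo, hi = 1/3, 3                       # admissible range for r = wine/water

# Insufficient reason applied to r: uniform on [1/3, 3]
p_r = (2 - lo) / (hi - lo)
print(p_r)                            # 0.625 = 5/8

# The same principle applied to 1/r = water/wine, which also lies in [1/3, 3]
p_inv = (hi - 1/2) / (hi - lo)
print(p_inv)                          # 0.9375 = 15/16

# But 'r <= 2' and '1/r >= 1/2' are one and the same event, so both answers
# cannot be right. If r is uniform, 1/r is not (and vice versa):
rng = np.random.default_rng(0)
r = rng.uniform(lo, hi, size=1_000_000)
print((r <= 2).mean(), (1 / r >= 0.5).mean())   # both ~0.625 when r itself is uniform
```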

Die Risikogesellschaft

30 August, 2018 at 13:06 | Posted in Theory of Science & Methodology | Comments Off on Die Risikogesellschaft

 

Donald Rubin on randomization and observational studies

21 August, 2018 at 12:48 | Posted in Theory of Science & Methodology | Comments Off on Donald Rubin on randomization and observational studies

 

The essence of scientific reasoning

5 August, 2018 at 11:02 | Posted in Theory of Science & Methodology | Comments Off on The essence of scientific reasoning

In science we standardly use a logically non-valid inference — the fallacy of affirming the consequent — of the following form:

(1) p => q
(2) q
————-
p

or, in instantiated form

(1) ∀x (Gx => Px)

(2) Pa
————
Ga

Although logically invalid, it is nonetheless a kind of inference — abduction — that may be factually strongly warranted and truth-producing.

Following the general pattern 'Evidence => Explanation => Inference' we infer something based on what would be the best explanation given the law-like rule (premise 1) and an observation (premise 2). The truth of the conclusion (explanation) is nothing that is logically given, but something we have to justify, argue for, and test in different ways if we are to establish it with any degree of certainty. And as always when we deal with explanations, what is considered best is relative to what we know of the world. In the real world, all evidence is relational (evidence only counts as evidence in relation to a specific hypothesis) and has an irreducible holistic aspect. We never conclude that evidence follows from a hypothesis simpliciter, but always given some more or less explicitly stated contextual background assumptions. All non-deductive inferences and explanations are necessarily context-dependent.

If we extend the abductive scheme to incorporate the demand that the explanation has to be the best among a set of plausible competing potential and satisfactory explanations, we have what is nowadays usually referred to as inference to the best explanation.

In inference to the best explanation we start with a body of (purported) data/facts/evidence and search for explanations that can account for these data/facts/evidence. Having the best explanation means that you, given the context-dependent background assumptions, have a satisfactory explanation that can explain the evidence better than any other competing explanation — and so it is reasonable to consider the hypothesis to be true. Even if we (inevitably) do not have deductive certainty, our reasoning gives us a license to consider our belief in the hypothesis as reasonable.

Accepting a hypothesis means that you believe it explains the available evidence better than any other competing hypothesis. Knowing that we — after having earnestly considered and analysed the other available potential explanations — have been able to eliminate the competing potential explanations warrants and enhances the confidence we have that our preferred explanation is the best explanation, i.e., the explanation that provides us (given it is true) with the greatest understanding.

This, of course, does not in any way mean that we cannot be wrong. Of course, we can. Inferences to the best explanation are fallible inferences — since the premises do not logically entail the conclusion — so from a logical point of view, inference to the best explanation is a weak mode of inference. But if the arguments put forward are strong enough, they can be warranted and give us justified true belief, and hence, knowledge, even though they are fallible inferences. As scientists we sometimes — much like Sherlock Holmes and other detectives that use inference to the best explanation reasoning — experience disillusion. We thought that we had reached a strong conclusion by ruling out the alternatives in the set of contrasting explanations. But — what we thought was true turned out to be false.

That does not necessarily mean that we had no good reasons for believing what we believed. If we cannot live with that contingency and uncertainty, well, then we are in the wrong business. If it is deductive certainty you are after, rather than the ampliative and defeasible reasoning in inference to the best explanation — well, then get into math or logic, not science.

Everything — and a little more — you want to know about causality

12 July, 2018 at 18:00 | Posted in Theory of Science & Methodology | 1 Comment

Rolf Sandahl and Gustav Jakob Petersson's Kausalitet: i filosofi, politik och utvärdering is an exceptionally well-written and readable survey of the most influential theories of causality used in science today.

Take it up and read!

In the positivist (hypothetico-deductive, deductive-nomological) model of explanation, 'explanation' means the subsumption or derivation of specific phenomena from universal regularities. To explain a phenomenon (explanandum) is the same as deducing a description of it from a set of premises and universal laws of the type 'If A, then B' (explanans). To explain simply means being able to subsume something under a definite law-like regularity, which is why the approach is also sometimes called the 'covering law model'. The theories, however, are not to be used to explain specific individual phenomena but to explain the universal regularities that enter into a hypothetico-deductive explanation. The positivist model of explanation also exists in a weaker variant: the probabilistic variant, according to which to explain in principle means to show that the probability of an event B is very high if event A occurs. In the social sciences this variant dominates. From a methodological point of view, this probabilistic relativization of the positivist approach to explanation makes no great difference.

The original idea behind the positivist model of explanation was that it would (1) provide a complete clarification of what an explanation is and show that an explanation not meeting its requirements was in fact a pseudo-explanation, (2) provide a method for testing explanations, and (3) show that explanations in accordance with the model are the goal of science. There are obviously good grounds for questioning all of these claims.

An important reason this model has had such an impact in science is that it seemed to be able to explain things without having to use 'metaphysical' causal concepts. Many scientists regard causality as a problematic concept that is best avoided. Simple, observable quantities are supposed to suffice. The only problem is that specifying these quantities and their possible correlations does not explain anything at all. That union representatives often appear in grey jackets and employer representatives in pinstripe suits does not explain why youth unemployment in Sweden is so high today. What is missing in these 'explanations' is the necessary adequacy, relevance and causal depth without which science risks turning into empty science fiction and playing with models for the sake of the game itself.

Many social scientists seem convinced that for research to count as science it must apply some variant of the hypothetico-deductive method. Out of reality's complicated swirl of facts and events, one is supposed to distil a few shared law-like correlations that can serve as explanations. In parts of social science, this ambition to reduce explanations of social phenomena to a few general principles or laws has been an important driving force. With the help of a few general assumptions, one wants to explain what the whole macro-phenomenon we call society amounts to. Unfortunately, no really tenable arguments are given for why the fact that a theory can explain different phenomena in a unified way should be a decisive reason for accepting or preferring it. Unification and adequacy are not the same thing.

Hard and soft science — a flawed dichotomy

11 July, 2018 at 19:08 | Posted in Theory of Science & Methodology | 1 Comment

The distinctions between hard and soft sciences are part of our culture … But the important distinction is really not between the hard and the soft sciences. Rather, it is between the hard and the easy sciences. Easy-to-do science is what those in physics, chemistry, geology, and some other fields do. Hard-to-do science is what the social scientists do and, in particular, it is what we educational researchers do. In my estimation, we have the hardest-to-do science of them all! We do our science under conditions that physical scientists find intolerable. We face particular problems and must deal with local conditions that limit generalizations and theory building – problems that are different from those faced by the easier-to-do sciences …

Huge context effects cause scientists great trouble in trying to understand school life … A science that must always be sure the myriad particulars are well understood is harder to build than a science that can focus on the regularities of nature across contexts …

Doing science and implementing scientific findings are so difficult in education because humans in schools are embedded in complex and changing networks of social interaction. The participants in those networks have variable power to affect each other from day to day, and the ordinary events of life (a sick child, a messy divorce, a passionate love affair, migraine headaches, hot flashes, a birthday party, alcohol abuse, a new principal, a new child in the classroom, rain that keeps the children from a recess outside the school building) all affect doing science in school settings by limiting the generalizability of educational research findings. Compared to designing bridges and circuits or splitting either atoms or genes, the science to help change schools and classrooms is harder to do because context cannot be controlled.

David Berliner

Amen!

When applying deductivist thinking to economics, mainstream economists set up their easy-to-do 'as if' models based on a set of tight axiomatic assumptions from which consistent and precise inferences are made. The beauty of this procedure is, of course, that if the axiomatic premises are true, the conclusions necessarily follow. The snag is that if the models are to be real-world relevant, we also have to argue that their precision and rigour still holds when they are applied to real-world situations. They often do not, and one of the main reasons for that is that context matters. When addressing real-world systems, the idealizations and abstractions necessary for the deductivist machinery to work simply do not hold.

If the real world is fuzzy, vague and indeterminate, then why should our models be built on a desire to describe it as precise and predictable? The logic of idealization is a marvellous tool in an easy-to-do science like physics, but a poor guide for action in real-world systems in which concepts and entities are without clear boundaries and continually interact and overlap.

Uncertainty heuristics

27 June, 2018 at 09:56 | Posted in Theory of Science & Methodology | Comments Off on Uncertainty heuristics

 

Is 0.999… = 1?

1 June, 2018 at 09:11 | Posted in Theory of Science & Methodology | 5 Comments

What is 0.999 …, really? Is it 1? Or is it some number infinitesimally less than 1?

The right answer is to unmask the question. What is 0.999 …, really? It appears to refer to a kind of sum:

0.9 + 0.09 + 0.009 + 0.0009 + …

But what does that mean? That pesky ellipsis is the real problem. There can be no controversy about what it means to add up two, or three, or a hundred numbers. But infinitely many? That’s a different story. In the real world, you can never have infinitely many heaps. What’s the numerical value of an infinite sum? It doesn’t have one — until we give it one. That was the great innovation of Augustin-Louis Cauchy, who introduced the notion of limit into calculus in the 1820s.

The British number theorist G. H. Hardy … explains it best: "It is broadly true to say that mathematicians before Cauchy asked not, 'How shall we define 1 – 1 + 1 – 1 + …?' but 'What is 1 – 1 + 1 – 1 + …?'"

No matter how tight a cordon we draw around the number 1, the sum will eventually, after some finite number of steps, penetrate it, and never leave. Under those circumstances, Cauchy said, we should simply define the value of the infinite sum to be 1.
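
Cauchy's idea is easy to illustrate numerically. The toy snippet below shows the partial sums of 0.9 + 0.09 + 0.009 + … getting inside any prescribed 'cordon' around 1 after finitely many terms and staying there.

```python
# Cauchy's limit idea, numerically: the partial sums 0.9 + 0.09 + ... + 9/10^k
# get within any prescribed tolerance of 1 after finitely many terms and stay there.
def partial_sum(k):
    return sum(9 / 10**i for i in range(1, k + 1))

for k in (1, 2, 5, 10):
    print(k, partial_sum(k), 1 - partial_sum(k))

tolerance = 1e-12                       # the 'cordon' drawn around 1
k = 1
while 1 - partial_sum(k) >= tolerance:  # how many steps before we penetrate it?
    k += 1
print("within", tolerance, "of 1 after", k, "terms")   # about a dozen terms suffice
```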

I have no problem with solving problems in mathematics by 'defining' them away. In pure mathematics — and logic — you are always allowed to take an epistemological view on problems and 'axiomatically' decide that 0.999… is 1. But how about the real world? In that world, from an ontological point of view, 0.999… is never 1! Although mainstream economists seem to take for granted that their epistemology-based models rule the roost even in the real world, they ought to do some ontological reflection when they apply their mathematical models to the real world, where indeed "you can never have infinitely many heaps."

In econometrics we often run into the 'Cauchy logic' — the data is treated as if it were from a larger population, a 'superpopulation' where repeated realizations of the data are imagined. Just imagine there could be more worlds than the one we live in and the problem is 'fixed.'

Accepting Haavelmo's domain of probability theory and sample space of infinite populations – just as Fisher's 'hypothetical infinite population,' of which the actual data are regarded as constituting a random sample, von Mises's 'collective' or Gibbs's 'ensemble' – also implies that judgments are made on the basis of observations that are actually never made!

Infinitely repeated trials or samplings never take place in the real world. So that cannot be a sound inductive basis for a science with aspirations of explaining real-world socio-economic processes, structures or events. It is — just as the Cauchy mathematical logic of ‘defining’ away problems — not tenable.

In social sciences — including economics — it is always wise to ponder C. S. Peirce’s remark that universes are not as common as peanuts …

Diversity bonuses — the idea

28 May, 2018 at 12:43 | Posted in Theory of Science & Methodology | Comments Off on Diversity bonuses — the idea


If you’d like to learn more on the issue, have a look at James Surowiecki’s The Wisdom of Crowds (Anchor Books, 2005) or Scott Page’s The Diversity Bonus (Princeton University Press, 2017). For an illustrative example, see here.

The evidential sine qua non

24 May, 2018 at 18:10 | Posted in Theory of Science & Methodology | Comments Off on The evidential sine qua non

'In God we trust; all others bring data.' (W. Edwards Deming)

The poverty of deductivism

17 March, 2018 at 17:52 | Posted in Theory of Science & Methodology | 4 Comments

The idea that inductive support is a three-place relation among hypothesis H, evidence e, and background factors Ki rather than a two-place relation between H and e has some drastic philosophical implications, which partly explains why philosophers of science have been so reluctant to endorse it. The inductivist program … aimed at doing for inductive inferences what logicians had done for deductive ones … Once the Ki enter the picture, the issue of inductive support becomes contextualized: one cannot answer it by merely looking at the features of e and H. An empirical investigation is necessary in order to establish whether the context is 'right' for e to be truly confirming evidence for H or not … Scientists' knowledge of the context and circumstances of research is required in order to assess the validity of scientific inferences.

Scientific realism and inference to the best explanation

17 March, 2018 at 09:14 | Posted in Theory of Science & Methodology | 11 Comments

In a time when scientific relativism is expanding, it is important to keep up the claim for not reducing science to a pure discursive level. We have to maintain the Enlightenment tradition of thinking of reality as principally independent of our views of it and of the main task of science as studying the structure of this reality. Perhaps the most important contribution a researcher can make is to reveal what this reality that is the object of science actually looks like.

Science is made possible by the fact that there are structures that are durable and largely independent of our knowledge or beliefs about them. There exists a reality beyond our theories and concepts of it. It is this independent reality that our theories in some way deal with. Contrary to positivism, I would as a critical realist argue that the main task of science is not to detect event-regularities between observed facts. Rather, that task must be conceived as identifying the underlying structure and forces that produce the observed events.

Instead of building models based on logic-axiomatic, topic-neutral, context-insensitive and non-ampliative deductive reasoning — as in mainstream economic theory — it would be much more fruitful and relevant to apply inference to the best explanation.

People object that the best available explanation might be false. Quite so – and so what? It goes without saying that any explanation might be false, in the sense that it is not necessarily true. It is absurd to suppose that the only things we can reasonably believe are necessary truths …

People object that being the best available explanation of a fact does not prove something to be true or even probable. Quite so – and again, so what? The explanationist principle – “It is reasonable to believe that the best available explanation of any fact is true” – means that it is reasonable to believe or think true things that have not been shown to be true or probable, more likely true than not.

Alan Musgrave

Abduction — the induction that constitutes the essence of scientific reasoning

15 March, 2018 at 17:15 | Posted in Theory of Science & Methodology | 3 Comments

In science we standardly use a logically non-valid inference — the fallacy of affirming the consequent — of the following form:

(1) p => q
(2) q
————-
p

or, in instantiated form

(1) ∀x (Gx => Px)

(2) Pa
————
Ga

Although logically invalid, it is nonetheless a kind of inference — abduction — that may be factually strongly warranted and truth-producing.

Following the general pattern 'Evidence => Explanation => Inference' we infer something based on what would be the best explanation given the law-like rule (premise 1) and an observation (premise 2). The truth of the conclusion (explanation) is nothing that is logically given, but something we have to justify, argue for, and test in different ways if we are to establish it with any degree of certainty. And as always when we deal with explanations, what is considered best is relative to what we know of the world. In the real world, all evidence is relational (e only counts as evidence in relation to a specific hypothesis H) and has an irreducible holistic aspect. We never conclude that evidence follows from a hypothesis simpliciter, but always given some more or less explicitly stated contextual background assumptions. All non-deductive inferences and explanations are necessarily context-dependent.

If we extend the abductive scheme to incorporate the demand that the explanation has to be the best among a set of plausible competing potential and satisfactory explanations, we have what is nowadays usually referred to as inference to the best explanation.

In inference to the best explanation we start with a body of (purported) data/facts/evidence and search for explanations that can account for these data/facts/evidence. Having the best explanation means that you, given the context-dependent background assumptions, have a satisfactory explanation that can explain the evidence better than any other competing explanation — and so it is reasonable to consider the hypothesis to be true. Even if we (inevitably) do not have deductive certainty, our reasoning gives us a license to consider our belief in the hypothesis as reasonable.

Accepting a hypothesis means that you believe it explains the available evidence better than any other competing hypothesis. Knowing that we — after having earnestly considered and analysed the other available potential explanations — have been able to eliminate the competing potential explanations warrants and enhances the confidence we have that our preferred explanation is the best explanation, i.e., the explanation that provides us (given it is true) with the greatest understanding.

This, of course, does not in any way mean that we cannot be wrong. Of course, we can. Inferences to the best explanation are fallible inferences — since the premises do not logically entail the conclusion — so from a logical point of view, inference to the best explanation is a weak mode of inference. But if the arguments put forward are strong enough, they can be warranted and give us justified true belief, and hence, knowledge, even though they are fallible inferences. As scientists we sometimes — much like Sherlock Holmes and other detectives that use inference to the best explanation reasoning — experience disillusion. We thought that we had reached a strong conclusion by ruling out the alternatives in the set of contrasting explanations. But — what we thought was true turned out to be false.

That does not necessarily mean that we had no good reasons for believing what we believed. If we cannot live with that contingency and uncertainty, well, then we are in the wrong business. If it is deductive certainty you are after, rather than the ampliative and defeasible reasoning in inference to the best explanation — well, then get into math or logic, not science.

The problem of extrapolation

14 February, 2018 at 00:01 | Posted in Theory of Science & Methodology | 8 Comments

There are two basic challenges that confront any account of extrapolation that seeks to resolve the shortcomings of simple induction. One challenge, which I call the extrapolator's circle, arises from the fact that extrapolation is worthwhile only when there are important limitations on what one can learn about the target by studying it directly. The challenge, then, is to explain how the suitability of the model as a basis for extrapolation can be established given only limited, partial information about the target … The second challenge is a direct consequence of the heterogeneity of populations studied in biology and social sciences. Because of this heterogeneity, it is inevitable there will be causally relevant differences between the model and the target population.

In economics — as a rule — we can't experiment on the real-world target directly. To experiment, economists therefore standardly construct 'surrogate' models and perform 'experiments' on them. To be of interest to us, these surrogate models have to be shown to be relevantly 'similar' to the real-world target, so that knowledge from the model can be exported to the real-world target. The fundamental problem highlighted by Steel is that this 'bridging' is deeply problematic — to show that what is true of the model is also true of the real-world target, we have to know what is true of the target, but to know what is true of the target we have to know that we have a good model …

Most models in science are representations of something else. Models "stand for" or "depict" specific parts of a "target system" (usually the real world). A model that has neither surface nor deep resemblance to important characteristics of real economies ought to be treated with prima facie suspicion. How could we possibly learn about the real world if there are no parts or aspects of the model that have relevant and important counterparts in the real-world target system? The burden of proof lies with the theoretical economists who think they have contributed something of scientific relevance without even hinting at any bridge enabling us to traverse from model to reality. All theories and models have to use sign vehicles to convey some kind of content that may be used for saying something about the target system. But purpose-built tractability assumptions — like, e.g., invariance, additivity, faithfulness, modularity, common knowledge, etc. — made solely to secure a way of reaching deductively validated results in mathematical models, are of little value if they cannot be validated outside of the model.

All empirical sciences use simplifying or unrealistic assumptions in their modeling activities. That is (no longer) the issue – as long as the assumptions made are not unrealistic in the wrong way or for the wrong reasons.

Theories are difficult to directly confront with reality. Economists therefore build models of their theories. Those models are representations that are directly examined and manipulated to indirectly say something about the target systems.

There are economic methodologists and philosophers who argue for a less demanding view of modelling and theorizing in economics. Some theoretical economists deem it quite enough to consider economics a mere "conceptual activity" where the model is not so much an abstraction from reality as a kind of "parallel reality". By considering models as such constructions, the economist distances the model from the intended target, demanding only that the models be credible, thereby enabling him to make inductive inferences to the target systems.

But what gives license to this leap of faith, this "inductive inference"? Within-model inferences in formal-axiomatic models are usually deductive, but that does not come with a warrant of reliability for inferring conclusions about specific target systems. Since all models in a strict sense are false (necessarily building in part on false assumptions), deductive validity cannot guarantee epistemic truth about the target system. To argue otherwise would surely be an untenable overestimation of the epistemic reach of surrogate models.

Models do not only face theory. They also have to look to the world. But being able to model a credible world, a world that somehow could be considered real or similar to the real world, is not the same as investigating the real world. Even though all theories are false, since they simplify, they may still possibly serve our pursuit of truth. But then they cannot be unrealistic or false in any way. The falsehood or unrealisticness has to be qualified (in terms of resemblance, relevance, etc.). At the very least, the minimalist demand on models in terms of credibility has to give way to a stronger epistemic demand of appropriate similarity and plausibility. One could of course also ask for a sensitivity or robustness analysis, but the credible world, even after having been tested for sensitivity and robustness, can still be a long way from reality – and unfortunately often in ways we know are important. Robustness of claims in a model does not per se give a warrant for exporting the claims to real-world target systems.

Questions of external validity — the claims the extrapolation inference is supposed to deliver — are important. It can never be enough that models somehow are regarded as internally consistent. One always also has to pose questions of consistency with the data. Internal consistency without external validity is worth nothing.

The arrow of time in a non-ergodic world

13 February, 2018 at 09:00 | Posted in Theory of Science & Methodology | 3 Comments

For the vast majority of scientists, thermodynamics had to be limited strictly to equilibrium. That was the opinion of J. Willard Gibbs, as well as of Gilbert N. Lewis. For them, irreversibility associated with unidirectional time was anathema …

I myself experienced this type of hostility in 1946 … After I had presented my own lecture on irreversible thermodynamics, the greatest expert in the field of thermodynamics made the following comment: ‘I am astonished that this young man is so interested in nonequilibrium physics. Irreversible processes are transient. Why not wait and study equilibrium as everyone else does?’ I was so amazed at this response that I did not have the presence of mind to answer: ‘But we are all transient. Is it not natural to be interested in our common human condition?’

Time is what prevents everything from happening at once. To simply assume that economic processes are ergodic and concentrate on ensemble averages — and hence in any relevant sense timeless — is not a sensible way for dealing with the kind of genuine uncertainty that permeates real-world economies.

Ergodicity and the all-important difference between time averages and ensemble averages are difficult concepts — so let me try to explain the meaning of these concepts by means of a couple of simple examples.

Let's say you're offered a gamble where on a roll of a fair die you will get €10 billion if you roll a six, and pay me €1 billion if you roll any other number.

Would you accept the gamble?

If you're an economics student you probably would, because that's what you're taught to be the only thing consistent with being rational. You would arrest the arrow of time by imagining six different "parallel universes" where the independent outcomes are the numbers from one to six, and then weight them using their stochastic probability distribution. Calculating the expected value of the gamble – the ensemble average – by averaging over all these weighted outcomes, you would actually be a moron if you didn't take the gamble (the expected value of the gamble being 5/6*(–€1 billion) + 1/6*€10 billion ≈ €0.83 billion).

If you're not an economist you would probably trust your common sense and decline the offer, knowing that a large risk of bankrupting one's economy is not a very rosy perspective for the future. Since you can't really arrest or reverse the arrow of time, you know that once you have lost the €1 billion, it's all over. The large likelihood that you go bust weighs heavier than the 17% chance of becoming enormously rich. By computing the time average – imagining one real universe where the six different but dependent outcomes occur consecutively – we would soon be aware of our assets disappearing, and a fortiori that it would be irrational to accept the gamble.

Why is the difference between ensemble and time averages of such importance in economics? Well, basically, because when assuming the processes to be ergodic, ensemble and time averages are identical.

Assume we have a market with an asset priced at €100. Then imagine the price first goes up by 50% and then later falls by 50%. The ensemble average for this asset would be €100 – because we here envision two parallel universes (markets) where the asset price falls in one universe (market) by 50% to €50, and in the other universe (market) goes up by 50% to €150, giving an average of €100 ((150+50)/2). The time average for this asset would be €75 – because we here envision one universe (market) where the asset price first rises by 50% to €150 and then falls by 50% to €75 (0.5*150).

From the ensemble perspective nothing really, on average, happens. From the time perspective lots of things really, on average, happen. Assuming ergodicity there would have been no difference at all.
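
A small simulation sketch (numbers and horizon chosen only for illustration) shows the same divergence when the ±50% process is repeated: the ensemble average stays put at €100 while the typical single trajectory shrinks at the time-average rate of roughly 0.87 per period.

```python
import numpy as np

# The two-step example from the text: up 50% then down 50%
print(100 * 1.5 * 0.5)                 # 75  -- the single time path
print((100 * 1.5 + 100 * 0.5) / 2)     # 100 -- the ensemble average over the two 'universes'

# The same multiplicative process repeated: the two perspectives diverge
rng = np.random.default_rng(0)
N, T = 100_000, 10                     # 100,000 parallel paths, 10 periods each
factors = rng.choice([1.5, 0.5], size=(N, T))
final = 100 * factors.prod(axis=1)

print("ensemble average:", final.mean())          # ~100: 'on average nothing happens'
print("median single path:", np.median(final))    # ~24: the typical history shrinks
print("per-period time-average growth:", np.exp(np.log(factors).mean()))  # ~0.866
```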

On a more economic-theoretical level, the difference between ensemble and time averages also highlights the problems concerning the neoclassical theory of expected utility.

When applied to the neoclassical theory of expected utility, one thinks in terms of "parallel universes" and asks what the expected return of an investment is, calculated as an average over the "parallel universes". In our gamble example, it is as if one supposes that various "I" are making the bet and that the losses of many of them will be offset by the huge profits that one of these "I" makes. But this ensemble average does not work for an individual, for whom a time average better reflects the experience made in the "non-parallel universe" in which we live.

Time averages give a more realistic answer: one thinks in terms of the only universe we actually live in and asks what the expected return of an investment is, calculated as an average over time.

Since we cannot go back in time – entropy and the arrow of time make this impossible – and the bankruptcy option is always at hand (extreme events and “black swans” are always possible) we have nothing to gain from thinking in terms of ensembles.

Actual events follow a fixed pattern of time, where events are often linked in a multiplicative process (as e.g. investment returns with "compound interest") which is basically non-ergodic.

Instead of arbitrarily assuming that people have a certain type of utility function – as in the neoclassical theory – time-average considerations show that we can obtain a less arbitrary and more accurate picture of real people's decisions and actions by basically assuming that time is irreversible. When your assets are gone, they are gone. The fact that in a parallel universe they could conceivably have been refilled is of little comfort to those who live in the one and only possible world that we call the real world.

Our gamble example can be applied to more traditional economic issues. If we think of an investor, we can basically describe his situation as a repeated bet. What fraction of his assets should an investor – who is about to make a large number of repeated investments – bet on his feeling that he can evaluate an investment better (p = 0.6) than the market (p = 0.5)? The greater the fraction, the greater the leverage. But also – the greater the risk. Letting p be the probability that his investment valuation is correct and (1 – p) the probability that the market's valuation is correct, he optimizes the rate of growth of his investments by investing a fraction of his assets equal to the difference between the probability that he will "win" and the probability that he will "lose". This means that at each investment opportunity (according to the so-called Kelly criterion) he is to invest the fraction 0.6 – (1 – 0.6), i.e. 20% of his assets (and the optimal average growth rate of investment can be shown to be about 2% (0.6 log (1.2) + 0.4 log (0.8))).
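
A few lines of code make the arithmetic explicit. The sketch below computes the Kelly fraction and the expected logarithmic growth rate for the p = 0.6 investor, and shows that staking more than the Kelly fraction actually lowers long-run growth.

```python
import numpy as np

p = 0.6                                  # probability the investor's valuation is right
f_kelly = p - (1 - p)                    # Kelly fraction for an even-odds bet: 0.2

def growth_rate(f, p):
    """Expected log growth per bet when a fraction f of wealth is staked."""
    return p * np.log(1 + f) + (1 - p) * np.log(1 - f)

print(f_kelly, growth_rate(f_kelly, p))  # 0.2, ~0.0201, i.e. about 2% per bet

# Staking more than the Kelly fraction raises the stakes but lowers long-run growth
for f in (0.1, 0.2, 0.4, 0.8):
    print(f, growth_rate(f, p))          # growth peaks at f = 0.2 and turns negative beyond
```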

Time-average considerations show that because we cannot go back in time, we should not take excessive risks. High leverage increases the risk of bankruptcy. This should also be a warning for the financial world, where the constant quest for greater and greater leverage – and risks – creates extensive and recurrent systemic crises. A more appropriate level of risk-taking is a necessary ingredient in a policy designed to curb excessive risk-taking.

To understand real world “non-routine” decisions and unforeseeable changes in behaviour, ergodic probability distributions are of no avail. In a world full of genuine uncertainty — where real historical time rules the roost — the probabilities that ruled the past are not necessarily those that will rule the future.

Irreversibility can no longer be identified with a mere appearance that would disappear if we had perfect knowledge … Figuratively speaking, matter at equilibrium, with no arrow of time, is 'blind,' but with the arrow of time, it begins to 'see' … The claim that the arrow of time is 'only phenomenological,' or subjective, is therefore absurd. We are actually the children of the arrow of time, of evolution, not its progenitors.

Ilya Prigogine

How do you tell science from nonsense?

3 February, 2018 at 16:01 | Posted in Theory of Science & Methodology | Comments Off on How do you tell science from nonsense?

Emma Frans' justly award-winning book Larmrapporten is a funny, knowledgeable and oh so necessary reckoning with all the pseudo-scientific nonsense washing over us in the media these days. Not least in social media, a flood of 'alternative facts' and nonsense is being spread.

Although I have warmly recommended the book to students, friends and acquaintances, I cannot refrain from pointing out here that there is one small weakness in the book. It concerns the treatment of evidence-based knowledge, and in particular the picture of what is usually called the 'gold standard' of scientific evidence — randomized controlled trials (RCTs).

Frans writes:

RCTs are the type of study generally considered to have the highest evidential value. This is because chance decides who is exposed to the intervention and who serves as a control. If the study is large enough, chance will ensure that the only meaningful difference between the groups being compared is whether or not they have been exposed to the intervention. If a difference between the groups with respect to the outcome can then be seen, we can feel confident that it is due to the intervention.

This is a fairly standard presentation of the (claimed) advantages of RCTs (among their advocates).

The only problem is that, from a strictly scientific point of view, it is wrong!

Let me explain why with an illuminating example.

Continue Reading How do you tell science from nonsense? …

Mainstream economics gets the priorities wrong

20 January, 2018 at 17:46 | Posted in Theory of Science & Methodology | Comments Off on Mainstream economics gets the priorities wrong

There is something about the way economists construct their models nowadays that obviously doesn’t sit right.

The one-sided, almost religious, insistence on axiomatic-deductivist modelling as the only scientific activity worth pursuing in economics still has not given way to methodological pluralism based on ontological considerations (rather than formalistic tractability). In its search for model-based rigour and certainty, 'modern' economics has turned out to be a totally hopeless project in terms of real-world relevance.

If macroeconomic models – no matter what ilk – build on microfoundational assumptions of representative actors, rational expectations, market clearing and equilibrium, and we know that real people and markets cannot be expected to obey these assumptions, the warrants for supposing that model-based conclusions or hypotheses about causally relevant mechanisms or regularities can be bridged to real-world target systems are obviously non-justifiable. Incompatibility between actual behaviour and the behaviour in macroeconomic models building on representative actors and rational expectations microfoundations shows the futility of trying to represent real-world target systems with models flagrantly at odds with reality. As Robert Gordon once had it:

Rigor competes with relevance in macroeconomic and monetary theory, and in some lines of development macro and monetary theorists, like many of their colleagues in micro theory, seem to consider relevance to be more or less irrelevant.

The real harm done by Bayesianism

30 November, 2017 at 09:02 | Posted in Theory of Science & Methodology | 2 Comments

The bias toward the superficial and the response to extraneous influences on research are both examples of real harm done in contemporary social science by a roughly Bayesian paradigm of statistical inference as the epitome of empirical argument. For instance the dominant attitude toward the sources of black-white differential in United States unemployment rates (routinely the rates are in a two to one ratio) is "phenomenological." The employment differences are traced to correlates in education, locale, occupational structure, and family background. The attitude toward further, underlying causes of those correlations is agnostic … Yet on reflection, common sense dictates that racist attitudes and institutional racism must play an important causal role. People do have beliefs that blacks are inferior in intelligence and morality, and they are surely influenced by these beliefs in hiring decisions … Thus, an overemphasis on Bayesian success in statistical inference discourages the elaboration of a type of account of racial disadvantages that almost certainly provides a large part of their explanation.

For all scholars seriously interested in questions on what makes up a good scientific explanation, Richard Miller’s Fact and Method is a must read. His incisive critique of Bayesianism is still unsurpassed.

One of yours truly's favourite 'problem situating lecture arguments' against Bayesianism goes something like this: Assume you're a Bayesian turkey and hold a nonzero probability belief in the hypothesis H that "people are nice vegetarians that do not eat turkeys." Every day you survive seems to confirm that belief, and so for every day you survive, you update your belief according to Bayes' Rule

P(H|e) = [P(e|H)P(H)]/P(e),

where evidence e stands for “not being eaten” and P(e|H) = 1. Given that there do exist other hypotheses than H, P(e) is less than 1 and a fortiori P(H|e) is greater than P(H). Every day you survive increases your probability belief that you will not be eaten. This is totally rational according to the Bayesian definition of rationality. Unfortunately — as Bertrand Russell famously noticed — for every day that goes by, the traditional Christmas dinner also gets closer and closer …
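
For concreteness, here is the turkey's updating spelled out numerically (the prior of 0.5 and the likelihood P(e|not-H) = 0.99 are invented, used only to make the arithmetic visible):

```python
# The turkey's updating, day by day. The prior of 0.5 and P(e | not-H) = 0.99
# are invented numbers, chosen only to make the arithmetic visible.
p_h = 0.5                     # prior P(H): 'people are nice vegetarians'
p_e_given_h = 1.0             # if H is true, the turkey surely survives the day
p_e_given_not_h = 0.99        # even if H is false, most days pass without slaughter

for day in range(1, 301):
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)   # total P('not eaten')
    p_h = p_e_given_h * p_h / p_e                            # Bayes' rule: P(H | e)
    if day % 100 == 0:
        print(day, round(p_h, 3))        # belief in H creeps towards 1 ...
# ... right up until the traditional Christmas dinner.
```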

For more on my own objections to Bayesianism, see my Bayesianism — a patently absurd approach to science and One of the reasons I’m a Keynesian and not a Bayesian.

Emma Frans and the art of telling science from nonsense

24 November, 2017 at 19:15 | Posted in Theory of Science & Methodology | 2 Comments

Emma Frans' justly award-winning book Larmrapporten is a funny, knowledgeable and oh so necessary reckoning with all the pseudo-scientific nonsense washing over us in the media these days. Not least in social media, a flood of 'alternative facts' and nonsense is being spread.

Although I have warmly recommended the book to students, friends and acquaintances, I cannot refrain from pointing out here — among mostly academically trained readers — that there is one small weakness in the book. It concerns the treatment of evidence-based knowledge, and in particular the picture of what is usually called the 'gold standard' of scientific evidence — randomized controlled trials (RCTs).

Frans writes:

RCTs are the type of study generally considered to have the highest evidential value. This is because chance decides who is exposed to the intervention and who serves as a control. If the study is large enough, chance will ensure that the only meaningful difference between the groups being compared is whether or not they have been exposed to the intervention. If a difference between the groups with respect to the outcome can then be seen, we can feel confident that it is due to the intervention.

This is a fairly standard presentation of the (claimed) advantages of RCTs (among their advocates).

The only problem is that, from a strictly scientific point of view, it is wrong!

Let me explain why I think so with an illustrative example from the world of schooling.

Continue Reading Emma Frans and the art of distinguishing science from nonsense…

Randomization — a philosophical device gone astray

23 November, 2017 at 10:30 | Posted in Theory of Science & Methodology | 1 Comment

When giving courses in the philosophy of science yours truly has often had David Papineau's book Philosophical Devices (OUP 2012) on the reading list. Overall it is a good introduction to many of the instruments used in methodological and science-theoretical analyses of economics and other social sciences.

Unfortunately, the book has also fallen prey to the randomization hype that plagues the sciences nowadays.

The hard way to show that alcohol really is a cause of heart disease is to survey the population … But there is an easier way … Suppose we are able to perform a 'randomized experiment.' The idea here is not to look at correlations in the population at large, but rather to pick out a sample of individuals, and arrange randomly for some to have the putative cause and some not.

The point of such a randomized experiment is to ensure that any correlation between the putative cause and effect does indicate a causal connection. This works because the randomization ensures that the putative cause is no longer itself systematically correlated with any other properties that exert a causal influence on the putative effect … So a remaining correlation between the putative cause and effect must mean that they really are causally connected.

The problem with this simplistic view of randomization is that the claims Papineau makes on its behalf are both exaggerated and invalid:

• Even if you manage to make the assignment to treatment and control groups ideally random, the sample selection certainly is — except in extremely rare cases — not random. Even if we make a proper randomized assignment, if we apply the results to a biased sample, there is always the risk that the experimental findings will not apply. What works 'there' does not work 'here.' Randomization a fortiori does not 'guarantee' or 'ensure' that the right causal claim is being made. Although randomization may help us rule out certain possible causal claims, randomization per se does not guarantee anything!

• Even if both sampling and assignment are made in an ideally random way, standard randomized experiments only give you averages. The problem here is that although we may get an estimate of the 'true' average causal effect, this may 'mask' important heterogeneous effects of a causal nature. Although we get the right answer of the average causal effect being 0, those who are 'treated' may have causal effects equal to -100 and those 'not treated' may have causal effects equal to 100. Contemplating being treated or not, most people would probably be interested in knowing about this underlying heterogeneity and would not consider the average effect particularly enlightening.

• Since most real-world experiments and trials build on a single randomization, speculating about what would happen if you kept on randomizing forever does not help you 'ensure' or 'guarantee' that you avoid false causal conclusions in the one particular randomized experiment you actually do perform. It is indeed difficult to see why thinking about what you know you will never do should make you feel better about what you actually do.

Randomization is not a panacea — it is not the best method for all questions and circumstances. Papineau and other proponents of randomization make claims about its ability to deliver causal knowledge that are simply wrong. There are good reasons to be sceptical of the now popular — and ill-informed — view that randomization is the only valid and best method on the market. It is not.

Top 10 RCT critiques

22 November, 2017 at 15:17 | Posted in Theory of Science & Methodology | Comments Off on Top 10 RCT critiques

• Basu, Kaushik (2014) Randomisation, Causality and the Role of Reasoned Intuition

Randomized experiments — a dangerous idolatry

21 November, 2017 at 19:08 | Posted in Theory of Science & Methodology | 1 Comment

Nowadays many mainstream economists maintain that 'imaginative empirical methods' — especially randomized experiments (RCTs) — can help us answer questions concerning the external validity of economic models. In their view, such experiments are, more or less, tests of 'an underlying economic model' and enable economists to make the right selection from the ever-expanding 'collection of potentially applicable models.'

It is widely believed among economists that the scientific value of randomization — contrary to other methods — is totally uncontroversial and that randomized experiments are free from bias. When looked at carefully, however, there are in fact few real reasons to share this optimism about the alleged 'experimental turn' in economics. Strictly speaking, randomization does not guarantee anything.

Assume that you are involved in an experiment where we examine how the work performance of Chinese workers (A) is affected by a specific 'treatment' (B). How can we extrapolate/generalize to new samples outside the original population (e.g. to the US)? How do we know that any replication attempt 'succeeds'? How do we know when these replicated experimental results can be said to justify inferences made about samples from the original population? If, for example, P(A|B) is the conditional density function for the original sample, and we are interested in doing an extrapolative prediction of E[P(A|B)], how can we know that the new sample's density function is identical with the original? Unless we can give some really good argument for this being the case, inferences built on P(A|B) do not really tell us anything about the target system's P'(A|B).

External validity and extrapolation are founded on the assumption that we can make inferences based on P(A|B) that are exportable to other populations for which P'(A|B) applies. Sure, if one can convincingly show that P and P' are similar enough, the problems are perhaps surmountable. But arbitrarily introducing functional specification restrictions of the type invariance or homogeneity is, at least for an epistemological realist, far from satisfactory.
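
To make the point concrete, here is a small hedged simulation — the moderating variable, the functional form of the effect, and the population means are all invented for illustration — showing how an average effect estimated in one population need not carry over to a target population with a different distribution of background characteristics.

```python
# Toy illustration (hypothetical numbers): the same treatment, whose effect
# depends on a background variable Z, has very different average effects in
# two populations with different Z-distributions.
import numpy as np

rng = np.random.default_rng(1)

def average_effect(z_mean, n=100_000):
    z = rng.normal(z_mean, 1.0, n)    # background characteristic in the population
    individual_effect = 10.0 * z      # assumed: the treatment effect is moderated by Z
    return individual_effect.mean()

print(f"original population (Z centred on 2): average effect ≈ {average_effect(2.0):.1f}")
print(f"target population   (Z centred on 0): average effect ≈ {average_effect(0.0):.1f}")
```

'It works there' — an average effect of about 20 — is no guarantee of 'it works here,' where the average effect is roughly zero, even though the underlying causal mechanism is identical in both populations.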

Many 'experimentalists' claim that it is easy to replicate experiments under different conditions and therefore a fortiori easy to test the robustness of experimental results. But is it really that easy? Population selection is almost never simple. Had the problem of external validity only been about inference from sample to population, this would be no critical problem. But the really interesting inferences are those we try to make from specific experiments to the specific real-world structures and situations that we are interested in understanding or (causally) explaining. And then the population problem is more difficult to tackle.

In randomized trials the researchers try to find out the causal effects that different variables of interest may have by changing circumstances randomly — a procedure somewhat (‘on average’) equivalent to the usual ceteris paribus assumption.

Besides the fact that 'on average' is not always 'good enough,' it amounts to nothing but hand-waving to simpliciter assume, without argumentation, that it is tenable to treat social agents and relations as homogeneous and interchangeable entities.

Randomization is used basically to allow the econometrician to treat the population as consisting of interchangeable and homogeneous groups ('treatment' and 'control'). The regression models one arrives at by using randomized trials tell us the average effect that variations in variable X have on the outcome variable Y, without having to explicitly control for the effects of other explanatory variables R, S, T, etc. Everything is assumed to be essentially equal except the values taken by variable X.

In a usual regression context one would apply an ordinary least squares estimator (OLS) in trying to get an unbiased and consistent estimate:

Y = α + βX + ε,

where α is a constant intercept, β a constant ‘structural’ causal effect and ε an error term.

The problem here is that although we may get an estimate of the 'true' average causal effect, this may 'mask' important heterogeneous effects of a causal nature. Although we get the right answer of the average causal effect being 0, those who are 'treated' (X=1) may have causal effects equal to -100 and those 'not treated' (X=0) may have causal effects equal to 100. Contemplating being treated or not, most people would probably be interested in knowing about this underlying heterogeneity and would not consider the OLS average effect particularly enlightening.
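
A minimal simulation may make the masking vivid. All numbers are hypothetical; the point is only that a well-estimated average effect of zero can coexist with individual causal effects of plus and minus 100.

```python
# Hypothetical illustration: a randomized experiment whose average causal
# effect is zero, masking strong heterogeneous individual-level effects.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two equally large sub-populations with individual causal effects +100 and -100.
effect = np.where(rng.random(n) < 0.5, 100.0, -100.0)

y0 = rng.normal(0.0, 1.0, n)      # outcome without treatment
y1 = y0 + effect                  # outcome with treatment
x = rng.integers(0, 2, n)         # random assignment: 0 = control, 1 = treated
y = np.where(x == 1, y1, y0)      # observed outcome

# With a binary treatment, the OLS slope equals the difference in group means.
beta_hat = y[x == 1].mean() - y[x == 0].mean()
print(f"estimated average causal effect: {beta_hat:.2f}")   # close to 0
print(f"individual causal effects range: {effect.min():.0f} to {effect.max():.0f}")
```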

Limiting model assumptions in economic science always have to be closely examined. If we are to show that the mechanisms or causes we isolate and handle in our models are stable — in the sense that they do not change when we 'export' them to our 'target systems' — we have to show that they hold not only under ceteris paribus conditions. If they do, they are a fortiori of limited value for our understanding, explanation, or prediction of real economic systems.

Most ‘randomistas’ underestimate the heterogeneity problem. It does not just turn up as an external validity problem when trying to ‘export’ regression results to different times or different target populations. It is also often an internal problem to the millions of regression estimates that economists produce every year.

‘Ideally controlled experiments’ tell us with certainty what causes what effects — but only given the right ‘closures.’ Making appropriate extrapolations from (ideal, accidental, natural or quasi) experiments to different settings, populations or target systems, is not easy. “It works there” is no evidence for “it will work here”. Causes deduced in an experimental setting still have to show that they come with an export-warrant to the target population/system. The causal background assumptions made have to be justified, and without licenses to export, the value of ‘rigorous’ and ‘precise’ methods — and ‘on-average-knowledge’ — is despairingly small.

RCTs usually do not provide evidence that the results are exportable to other target systems. The almost religious belief with which its proponents portray the method cannot hide the fact that RCTs cannot be taken for granted to give generalizable results. That something works somewhere is no warrant for believing it will work for us here, or even that it works generally.

The present RCT idolatry is dangerous. Believing there is only one really good evidence-based method on the market — and that randomization is the only way to achieve scientific validity — blinds people to searching for and using other methods that in many contexts are better. RCTs are simply not the best method for all questions and in all circumstances. Insisting on using only one tool often means using the wrong tool.

Science and reason

29 October, 2017 at 09:44 | Posted in Theory of Science & Methodology | 2 Comments

True scientific method is open-minded, self-critical, flexible. Scientists are, in short, not as reasonable as they would like to think themselves. The great scientists are often true exceptions; they are nearly always attacked by their colleagues for their revolutionary ideas, not by using the standards of reason, but just by appealing to prejudices then current. Being reasonable takes great skill and great sensitivity to the difference between "well-supported" and "widely accepted." It also takes great courage, because it seldom corresponds to being popular.

The one philosophy​ of science book every economist​ should read

23 October, 2017 at 18:19 | Posted in Theory of Science & Methodology | 5 Comments

It is not the fact that science occurs that gives the world a structure such that it can be known by men. Rather, it is the fact that the world has such a structure that makes science, whether or not it actually occurs, possible. That is to say, it is not the character of science that imposes a determinate pattern or order on the world; but the order of the world that, under certain determinate conditions, makes possible the cluster of activities we call 'science'. It does not follow from the fact that the nature of the world can only be known from (a study of) science, that its nature is determined by (the structure of) science. Propositions in ontology, i.e. about being, can only be established by reference to science. But this does not mean that they are disguised, veiled or otherwise elliptical propositions about science … The 'epistemic fallacy' consists in assuming that, or arguing as if, they are.

Why the ‘analytical’ method does not work in economics

22 October, 2017 at 19:38 | Posted in Theory of Science & Methodology | 2 Comments

To be 'analytical' is something most people find commendable. The word 'analytical' has a positive connotation. Scientists, we are told, think deeper than most other people because they use 'analytical' methods. In dictionaries, 'analysis' is usually defined as having to do with "breaking something down."

But that's not the whole picture. As used in science, analysis usually means something more specific: separating a problem into its constituent elements so as to reduce complex — and often complicated — wholes into smaller (simpler) and more manageable parts. You take the whole and break it down (decompose it) into its separate parts. Looking at the parts separately, one at a time, you are supposed to gain a better understanding of how these parts operate and work. Built on that more or less 'atomistic' knowledge, you are then supposed to be able to predict and explain the behaviour of the complex and complicated whole.

In economics, that means you take the economic system and divide it into its separate parts, analyse these parts one at a time, and then after analysing the parts separately, you put the pieces together.

The ‘analytical’ approach is typically used in economic modelling, where you start with a simple model with few isolated and idealized variables. By ‘successive approximations,’ you then add more and more variables and finally get a ‘true’ model of the whole.

This may sound like a convincing and good scientific approach.

But there is a snag!

The procedure only really works when you have a machine-like whole/system/economy where the parts appear in fixed and stable configurations. And if there is anything we know about reality, it is that it is not a machine! The world we live in is not a ‘closed’ system. On the contrary. It is an essentially ‘open’ system. Things are uncertain, relational, interdependent, complex, and ever-changing.

Without assuming that the underlying structure of the economy you try to analyze remains stable/invariant/constant, there is no chance the equations of the model remain constant. That is the very rationale for economists' (often only implicit) use of the ceteris paribus assumption. But — nota bene — this can only be a hypothesis. You have to argue the case. If you cannot supply any sustainable justifications or warrants for the adequacy of making that assumption, the whole analytical economic project becomes pointless, non-informative nonsense. Not only do we have to assume that we can shield off variables from each other analytically (external closure). We also have to assume that each and every variable is itself amenable to being understood as a stable, regularity-producing machine (internal closure). Which, of course, we know is as a rule not possible.

Some things, relations, and structures are not analytically graspable. Trying to analyse parenthood, marriage, employment, etc., piece by piece doesn't make sense. To be a chieftain, a capital-owner, or a slave is not an individual property of an individual. It can come about only when individuals are integral parts of certain social structures and positions. Social relations and contexts cannot be reduced to individual phenomena. A cheque presupposes a banking system and being a tribe-member presupposes a tribe. When this is not taken into account, the 'analytical' approach turns economic 'analysis' into uninformative nonsense.

Using the 'analytical' method in the social sciences means that economists succumb to the fallacy of composition — the belief that the whole is nothing but the sum of its parts. In society and in the economy this is arguably not the case. An adequate analysis of society and economy a fortiori cannot proceed by just adding up the acts and decisions of individuals. The whole is more than a sum of its parts.

Mainstream economics is built on using the 'analytical' method. The models built with this method presuppose that social reality is 'closed.' Since social reality is known to be fundamentally 'open,' it is difficult to see how models of that kind can explain anything about what happens in such a universe. Postulating closed conditions to make models operational and then imputing these closed conditions to society's real structure is an unwarranted procedure that does not take the necessary ontological considerations seriously.

In the face of the kind of methodological individualism and rational choice theory that dominates mainstream economics, we have to admit that even if knowing the aspirations and intentions of individuals is a necessary prerequisite for explaining social events, it is far from sufficient. Even the most elementary 'rational' actions in society presuppose the existence of social forms that cannot be reduced to the intentions of individuals. Here, the 'analytical' method fails again.

The overarching flaw of the 'analytical' economic approach, with its methodological individualism and rational choice theory, is that it reduces social explanations to purportedly individual characteristics. But many of the characteristics and actions of the individual originate in and are made possible only through society and its relations. Society is not a Wittgensteinian 'Tractatus-world' characterized by atomistic states of affairs. Society is not reducible to individuals, since the social characteristics, forces, and actions of the individual are determined by pre-existing social structures and positions. Even though society is not a volitional individual, and the individual is not an entity given outside of society, the individual (actor) and the society (structure) have to be kept analytically distinct. They are tied together through the individual's reproduction and transformation of already given social structures.

Since at least the marginal revolution in economics in the 1870s it has been an essential feature of economics to 'analytically' treat individuals as essentially independent and separate entities of action and decision. But in such a complex, organic and evolutionary system as an economy, that kind of independence is a deeply unrealistic assumption to make. Simply assuming that there is strict independence between the variables we try to analyze doesn't help us in the least if that hypothesis turns out to be unwarranted.

To be able to apply the 'analytical' approach, economists basically have to assume that the universe consists of 'atoms' that exercise their own separate and invariable effects in such a way that the whole consists of nothing but an addition of these separate atoms and their changes. These simplistic assumptions of isolation, atomicity, and additivity are, however, at odds with reality. In real-world settings, we know that ever-changing contexts make it futile to search for knowledge by making such reductionist assumptions. Real-world individuals are not reducible to contentless atoms and so are not susceptible to atomistic analysis. The world is not reducible to a set of atomistic 'individuals' and 'states.' How variable X works and influences real-world economies in situation A cannot simply be assumed to be understood or explained by looking at how X works in situation B. Knowledge of X probably does not tell us much if we do not take into consideration how it depends on Y and Z. It can never be legitimate just to assume that the world is 'atomistic.' Assuming real-world additivity cannot be the right thing to do if the things we have around us, rather than being 'atoms,' are 'organic' entities.
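
A trivial numerical sketch — with a purely made-up functional form — shows how additivity fails as soon as the 'parts' interact: analysing each part in isolation and then adding up the results does not recover the behaviour of the whole.

```python
# Toy example (invented functional form) of the failure of additivity:
# the effect of changing both 'parts' together is not the sum of the
# effects of changing each part on its own.

def outcome(x1, x2):
    return x1 + x2 + 5 * x1 * x2   # assumed interaction between the parts

baseline = outcome(0, 0)
effect_x1_alone = outcome(1, 0) - baseline     # 1
effect_x2_alone = outcome(0, 1) - baseline     # 1
effect_together = outcome(1, 1) - baseline     # 7, not 1 + 1

print(effect_x1_alone, effect_x2_alone, effect_together)
```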

If we want to develop a new and better economics we have to give up on the single-minded insistence on using a deductivist straitjacket methodology and the ‘analytical’ method. To focus scientific endeavours on proving things in models is a gross misapprehension of the purpose of economic theory. Deductivist models and ‘analytical’ methods disconnected from reality are not relevant to predict, explain or understand real-world economies.
