Why all models are wrong

13 December, 2018 at 17:53 | Posted in Economics, Theory of Science & Methodology | 11 Comments

Models share three common characteristics: First, they simplify, stripping away unnecessary details, abstracting from reality, or creating anew from whole cloth. Second, they formalize, making precise definitions. Models use mathematics, not words … Models create structures within which we can think logically … But the logic comes at a cost, which leads to their third characteristic: all models are wrong … Models are wrong because they simplify. They omit details. By considering many models, we can overcome the narrowing of rigor by crisscrossing the landscape of the possible.

To rely on a single model is hubris. It invites disaster … We need many models to make sense of complex systems.

Yes indeed. To rely on a single mainstream economic theory and its models is hubris. It certainly does invite disaster. To make sense of complex economic phenomena we need many theories and models. We need pluralism. Pluralism both in theories and methods.

Using ‘simplifying’ mathematical tractability assumptions — rational expectations, common knowledge, representative agents, linearity, additivity, ergodicity, etc — because otherwise they cannot ‘manipulate’ their models or come up with ‘rigorous’ and ‘precise’ predictions and explanations, does not exempt economists from having to justify their modelling choices. Being able to ‘manipulate’ things in models cannot per se be enough to warrant a methodological choice. If economists do not think their tractability assumptions make for good and realist models, it is certainly fair to ask them to clarify the ultimate goal of the whole modelling endeavour.

The final court of appeal for models is not whether we — once we have made our tractability assumptions — can ‘manipulate’ them, but the real world. And as long as no convincing justification is put forward for how the inferential bridging de facto is made, model building is little more than hand-waving that gives us little warrant for making inductive inferences from models to the real world.

Mainstream economists construct closed formalistic-mathematical theories and models for the purpose of being able to deliver purportedly rigorous deductions that may somehow be exportable to the target system. By analyzing a few causal factors in their ‘laboratories’ they hope they can perform ‘thought experiments’ and observe how these factors operate on their own and without impediments or confounders.

Unfortunately, this is not so. The reason for this is that economic causes never act in a socio-economic vacuum. Causes have to be set in a contextual structure to be able to operate. This structure has to take some form or other, but instead of incorporating structures that are true to the target system, the settings made in mainstream economic models are rather based on formalistic mathematical tractability. In the models they often appear as unrealistic ‘tractability’ assumptions, usually playing a decisive role in getting the deductive machinery to deliver ‘precise’ and ‘rigorous’ results. This, of course, makes exporting to real-world target systems problematic, since these models – as part of a deductivist covering-law tradition in economics – are thought to deliver general and far-reaching conclusions that are externally valid. But how can we be sure the lessons learned in these theories and models have external validity when based on highly specific unrealistic assumptions? As a rule, the more specific and concrete the structures, the less generalizable the results. Admitting that we in principle can move from (partial) falsehoods in theories and models to truth in real-world target systems does not take us very far unless a thorough explication of the relation between theory, model and the real-world target system is made. To have a deductive warrant for things happening in a closed model is no guarantee of them being preserved when applied to an open real-world target system.

If the ultimate criterion of success for a deductivist system is the extent to which it predicts and coheres with (parts of) reality, modern mainstream economics seems to be a hopeless misallocation of scientific resources. To focus scientific endeavours on proving things in models is a gross misapprehension of what an economic theory ought to be about. Real-world economic systems do not conform to the restricted closed-system structure the mainstream modelling strategy presupposes.

What is wrong with mainstream economics is not that it employs models per se. What is wrong is that it employs poor models. They — and the tractability assumptions on which they to a large extent build — are poor because they do not bridge to the real world in which we live. And — as Page writes — “if a model cannot explain, predict, or help us reason, we must set it aside.”


Ten theory of science books that should be on every economist’s reading list

22 November, 2018 at 17:28 | Posted in Theory of Science & Methodology | Leave a comment


• Archer, Margaret (1995). Realist social theory: the morphogenetic approach. Cambridge: Cambridge University Press

• Bhaskar, Roy (1978). A realist theory of science. Hassocks: Harvester

• Cartwright, Nancy (2007). Hunting causes and using them. Cambridge: Cambridge University Press

• Chalmers, Alan (2013). What is this thing called science? 4th ed. Buckingham: Open University Press

• Garfinkel, Alan (1981). Forms of explanation: rethinking the questions in social theory. New Haven: Yale U.P.

• Harré, Rom (1960). An introduction to the logic of the sciences. London: Macmillan

• Lawson, Tony (1997). Economics and reality. London: Routledge

• Lieberson, Stanley (1987). Making it count: the improvement of social research and theory. Berkeley: Univ. of California Press

• Lipton, Peter (2004). Inference to the best explanation. 2nd ed. London: Routledge

• Miller, Richard (1987). Fact and method: explanation, confirmation and reality in the natural and the social sciences. Princeton, N.J.: Princeton Univ. Press

Superficial ‘precision’

22 November, 2018 at 17:12 | Posted in Theory of Science & Methodology | Leave a comment

One problem among social researchers is the tendency to view specificity and concreteness as equal to Science and Rigorous Thinking … But this ‘precision’ is often achieved only by analyzing surface causes because some of them are readily measured and operationalized. This is hardly sufficient reason for turning away from broad generalizations and causal principles.

Truth and probability

11 November, 2018 at 13:05 | Posted in Theory of Science & Methodology | 1 Comment

Truth exists, and so does uncertainty. Uncertainty acknowledges the existence of an underlying truth: you cannot be uncertain of nothing: nothing is the complete absence of anything. You are uncertain of something, and if there is some thing, there must be truth. At the very least, it is that this thing exists. Probability, which is the science of uncertainty, therefore aims at truth. Probability presupposes truth; it is a measure or characterization of truth. Probability is not necessarily the quantification of the uncertainty of truth, because not all uncertainty is quantifiable. Probability explains the limitations of our knowledge of truth, it never denies it. Probability is purely epistemological, a matter solely of individual understanding. Probability does not exist in things; it is not a substance. Without truth, there could be no probability.

William Briggs’ approach is — as he acknowledges in the preface of his interesting and thought-provoking book — “closely aligned to Keynes’s.”

Almost a hundred years after John Maynard Keynes wrote his seminal A Treatise on Probability (1921), it is still very difficult to find statistics textbooks that seriously try to incorporate his far-reaching and incisive analysis of induction and evidential weight.

The standard view in statistics — and the axiomatic probability theory underlying it — is to a large extent based on the rather simplistic idea that ‘more is better.’ But as Keynes argues – ‘more of the same’ is not what is important when making inductive inferences. It’s rather a question of ‘more but different.’

Variation, not replication, is at the core of induction. Finding that p(x|y) = p(x|y & w) doesn’t make w ‘irrelevant.’ Knowing that the probability is unchanged when w is present gives p(x|y & w) another evidential weight (‘weight of argument’). Running 10 replicative experiments does not make you as ‘sure’ of your inductions as running 10 000 varied experiments – even if the probability values happen to be the same.
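To see concretely what ‘same probability, different weight’ can mean, here is a minimal numerical sketch (my own illustration, using the standard error of an observed proportion as a crude stand-in for evidential weight; it is of course not Keynes’s own measure):

```python
# A rough numerical illustration (mine, not Keynes's own formalism) of how the
# same probability value can carry very different evidential 'weight'.

from math import sqrt

def frequency_and_spread(successes, trials):
    p = successes / trials              # the probability value itself
    se = sqrt(p * (1 - p) / trials)     # how firmly the evidence pins it down
    return p, se

for s, n in [(7, 10), (7_000, 10_000)]:
    p, se = frequency_and_spread(s, n)
    print(f"{s:>5}/{n:<6}: probability = {p:.2f}, standard error = {se:.4f}")

# Both data sets give the same probability value (0.70), but the second body
# of evidence pins it down far more tightly -- a crude stand-in for what
# Keynes calls the 'weight of argument'.
```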

According to Keynes we live in a world permeated by unmeasurable uncertainty – not quantifiable stochastic risk – which often forces us to make decisions based on anything but ‘rational expectations.’ Keynes rather thinks that we base our expectations on the confidence or ‘weight’ we put on different events and alternatives. To Keynes, expectations are a question of weighing probabilities by ‘degrees of belief,’ beliefs that often have precious little to do with the kind of stochastic probabilistic calculations made by the rational agents as modelled by “modern” social sciences. And often we ‘simply do not know.’ As Keynes writes in the Treatise:

The kind of fundamental assumption about the character of material laws, on which scientists appear commonly to act, seems to me to be [that] the system of the material universe must consist of bodies … such that each of them exercises its own separate, independent, and invariable effect, a change of the total state being compounded of a number of separate changes each of which is solely due to a separate portion of the preceding state …  In my judgment, the practical usefulness of those modes of inference … on which the boasted knowledge of modern science depends, can only exist … if the universe of phenomena does in fact present those peculiar characteristics of atomism and limited variety which appears more and more clearly as the ultimate result to which material science is tending.

Science according to Keynes should help us penetrate to “the true process of causation lying behind current events” and disclose “the causal forces behind the apparent facts.” Models can never be more than a starting point in that endeavour. He further argued that it was inadmissible to project history onto the future. Consequently, we cannot presuppose that what has worked before, will continue to do so in the future. That statistical models can get hold of correlations between different ‘variables’ is not enough. If they cannot get at the causal structure that generated the data, they are not really ‘identified.’

How strange that writers of statistics textbooks, as a rule, do not even touch upon these aspects of scientific methodology that seem so fundamental and important for anyone trying to understand how we learn and orient ourselves in an uncertain world. An educated guess on why this is so would be that Keynes’s concepts are not possible to squeeze into a single calculable numerical ‘probability.’ In the quest for quantities one turns a blind eye to qualities and looks the other way – but Keynes’s ideas keep creeping out from under the statistics carpet.

It’s high time that statistics textbooks give Keynes his due.

Richard Feynman on mathematics

7 November, 2018 at 20:18 | Posted in Economics, Theory of Science & Methodology | 1 Comment

In a comment on one of yours truly’s posts last week, Jorge Buzaglo made this truly interesting observation:

Nobel prize winner Richard Feynman on the use of mathematics:

Mathematicians, or people who have very mathematical minds, are often led astray when “studying” economics because they lose sight of the economics. They say: ‘Look, these equations … are all there is to economics; it is admitted by the economists that there is nothing which is not contained in the equations.


The equations are complicated, but after all they are only mathematical equations and if I understand them mathematically inside out, I will understand the economics inside out.’ Only it doesn’t work that way. Mathematicians who study economics with that point of view — and there have been many of them — usually make little contribution to economics and, in fact, little to mathematics. They fail because the actual economic situations in the real world are so complicated that it is necessary to have a much broader understanding of the equations.

I have replaced the word “physics” (and similar) by the word “economics” (and similar) in this quote from page 2-1 of R. Feynman, R. Leighton and M. Sands, The Feynman Lectures on Physics, Volume II, Addison-Wesley Publishing, Reading, 1964.

Bayesian networks and causal diagrams

14 October, 2018 at 09:09 | Posted in Theory of Science & Methodology | 11 Comments

Whereas a Bayesian network can only tell us how likely one event is, given that we observed another, causal diagrams can answer interventional and counterfactual questions. For example, the causal fork A <– B –> C tells us in no uncertain terms that wiggling A would have no effect on C, no matter how intense the wiggle. On the other hand, a Bayesian network is not equipped to handle a ‘wiggle,’ or to tell the difference between seeing and doing, or indeed to distinguish a fork from a chain [A –> B –> C]. In other words, both a chain and a fork would predict that observed changes in A are associated with changes in C, making no prediction about the effect of ‘wiggling’ A.
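A minimal simulation sketch (my own, only loosely in the spirit of the example above; the linear-Gaussian structure and variable names are illustrative assumptions) makes the seeing/doing distinction concrete: a chain and a fork generate the same kind of observational association between A and C, but only the chain responds to a ‘wiggle’ of A.

```python
# Toy comparison of a chain A -> B -> C and a fork A <- B -> C:
# same observational association, different response to intervention on A.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def chain(a=None):
    # A -> B -> C ; if a is given, we intervene and set A := a ("doing")
    A = rng.normal(size=n) if a is None else np.full(n, a)
    B = A + rng.normal(size=n)
    C = B + rng.normal(size=n)
    return A, C

def fork(a=None):
    # A <- B -> C ; intervening on A cuts the arrow from B to A
    B = rng.normal(size=n)
    A = B + rng.normal(size=n) if a is None else np.full(n, a)
    C = B + rng.normal(size=n)
    return A, C

for name, model in [("chain", chain), ("fork", fork)]:
    A, C = model()                      # "seeing": plain observation
    corr = np.corrcoef(A, C)[0, 1]
    _, C0 = model(a=0.0)                # "doing": do(A = 0)
    _, C2 = model(a=2.0)                #          do(A = 2)
    effect = C2.mean() - C0.mean()
    print(f"{name}: observational corr(A, C) = {corr:.2f}, "
          f"effect of wiggling A on C = {effect:.2f}")

# Both structures show a clear association between A and C, but wiggling A
# moves C only in the chain, never in the fork.
```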

Is economic consensus a good thing?

7 October, 2018 at 13:51 | Posted in Theory of Science & Methodology | 1 Comment

No, it is not — and here’s one strong reason why:

The mere existence of consensus is not a useful guide. We should ask: Does a consensus have its origins and its ground in a rational and comprehensive appraisal of substantial evidence? Has the available evidence been open to vigorous challenge, and has it met each challenge? … A consensus that lacks these origins is of little consequence precisely because it lacks these origins. Knowing the current consensus is helpful in forecasting a vote; having substantial evidence is helpful in judging what is true. That something is standardly believed or assumed is not, by itself, a reason to believe or assume it. Error and confusion are standard conditions of the human mind.

The challenges of tractability in economic modelling

2 October, 2018 at 10:55 | Posted in Theory of Science & Methodology | Comments Off on The challenges of tractability in economic modelling

There is a general sense in which the whole idea of model and model-based science derive from the need for tractability. The real world out there is far too intractable to be examined directly … therefore one directly examines the more tractable model worlds … Many of these tractability-enhancing assumptions are made because the math that is being used requires it. They enhance the mathematical tractability of models. This is not riskless … They can be quite harmful if tractability and negligibility do not go hand in hand, that is, if the unrealisticness of a tractability-enhancing assumption is not negligible. The dominance of mere tractability in models may have unfortunate consequences.

Uskali Mäki

Using ‘simplifying’ tractability assumptions — rational expectations, common knowledge, representative agents, linearity, additivity, ergodicity, etc — because otherwise they cannot ‘manipulate’ their models or come up with ‘rigorous’ and ‘precise’ predictions and explanations, does not exempt economists from having to justify their modelling choices. Being able to ‘manipulate’ things in models cannot per se be enough to warrant a methodological choice. If economists do not think their tractability assumptions make for good and realist models, it is certainly fair to ask them to clarify the ultimate goal of the whole modelling endeavour.

Take for example the ongoing discussion on rational expectations as a modelling assumption. Those who want to build macroeconomics on microfoundations usually maintain that the only robust policies are those based on rational expectations and representative actors models. As yours truly has tried to show in On the use and misuse of theories and models in mainstream economics there is really no support for this conviction at all. If microfounded macroeconomics has nothing to say about the real world and the economic problems out there, why should we care about it? The final court of appeal for macroeconomic models is not whether we — once we have made our tractability assumptions — can ‘manipulate’ them, but the real world. And as long as no convincing justification is put forward for how the inferential bridging de facto is made, macroeconomic model building is little more than hand-waving that gives us little warrant for making inductive inferences from models to real-world target systems.

Mainstream economists construct closed formalistic-mathematical theories and models for the purpose of being able to deliver purportedly rigorous deductions that may somehow be exportable to the target system. By analyzing a few causal factors in their ‘laboratories’ they hope they can perform ‘thought experiments’ and observe how these factors operate on their own and without impediments or confounders.

Unfortunately, this is not so. The reason for this is that economic causes never act in a socio-economic vacuum. Causes have to be set in a contextual structure to be able to operate. This structure has to take some form or other, but instead of incorporating structures that are true to the target system, the settings made in economic models are rather based on formalistic mathematical tractability. In the models they often appear as unrealistic ‘tractability’ assumptions, usually playing a decisive role in getting the deductive machinery to deliver ‘precise’ and ‘rigorous’ results. This, of course, makes exporting to real-world target systems problematic, since these models – as part of a deductivist covering-law tradition in economics – are thought to deliver general and far-reaching conclusions that are externally valid. But how can we be sure the lessons learned in these theories and models have external validity when based on highly specific unrealistic assumptions? As a rule, the more specific and concrete the structures, the less generalizable the results. Admitting that we in principle can move from (partial) falsehoods in theories and models to truth in real-world target systems does not take us very far unless a thorough explication of the relation between theory, model and the real-world target system is made. To have a deductive warrant for things happening in a closed model is no guarantee of them being preserved when applied to an open real-world target system.

If the ultimate criterion of success for a deductivist system is the extent to which it predicts and coheres with (parts of) reality, modern mainstream economics seems to be a hopeless misallocation of scientific resources. To focus scientific endeavours on proving things in models is a gross misapprehension of what an economic theory ought to be about. Real-world economic systems do not conform to the restricted closed-system structure the mainstream modelling strategy presupposes.

What is wrong with mainstream economics is not that it employs models per se. What is wrong is that it employs poor models. They — and the tractability assumptions on which they to a large extent build — are poor because they do not bridge to the real-world target system in which we live.

Tony Lawson vs Uskali Mäki

17 September, 2018 at 17:10 | Posted in Theory of Science & Methodology | 3 Comments

We are all realists and we all — Mäki, Cartwright, and I — self-consciously present ourselves as such. The most obvious research-guiding commonality, perhaps, is that we do all look at the ontological presuppositions of economics or economists.

Where we part company, I believe, is that I want to go much further. I guess I would see their work as primarily analytical and my own as more critically constructive or dialectical. My goal is less the clarification of what economists are doing and presupposing as seeking to change the orientation of modern economics … Specifically, I have been much more prepared than the other two to criticise the ontological presuppositions of economists—at least publically. I think Mäki is probably the most guarded. I think too he is the least critical, at least of the state of modern economics …

One feature of Mäki’s work that I am not overly convinced by, but which he seems to value, is his method of theoretical isolation (Mäki 1992). If he is advocating it as a method for social scientific research, I doubt it will be found to have much relevance—for reasons I discuss in Economics and reality (Lawson 1997). But if he is just saying that the most charitable way of interpreting mainstream economists is that they are acting on this method, then fine. Sometimes, though, he seems to imply more …

I cannot get enthused by Mäki’s concern to see what can be justified in contemporary formalistic modelling endeavours. The insights, where they exist, seem so obvious, circumscribed, and tagged on anyway …

As I view things, anyway, a real difference between Mäki and me is that he is far less, or less openly, critical of the state and practices of modern economics … Mäki seems more inclined to accept mainstream economic contributions as largely successful, or anyway uncritically. I certainly do not think we can accept mainstream contributions as successful, and so I proceed somewhat differently …

So if there is a difference here it is that Mäki more often starts out from mainstream academic economic analyses accepted rather uncritically, whilst I prefer to start from those everyday practices widely regarded as successful.

Tony Lawson

Lawson and Mäki are both highly influential contemporary students of economic methodology and philosophy. Yours truly has learned a lot from both of them. Although to a certain degree probably also a question of ‘temperament,’ I find Lawson’s ‘critical realist’ critique of mainstream economic theories and models deeper and more convincing than Mäki’s more ‘distanced’ and less critical approach. Mäki’s ‘detached’ style probably reflects the fact that he is a philosopher with an interest in economics, rather than an economist. Being an economist myself, I find it easier to see the relevance of Lawson’s ambitious and far-reaching critique of mainstream economics than to value Mäki’s often rather arduous application of the analytic-philosophical tool-kit, typically less ambitiously aiming for mostly conceptual and terminological ‘clarifications.’

RCTs risk distorting our knowledge base

13 September, 2018 at 14:14 | Posted in Theory of Science & Methodology | Comments Off on RCTs risk distorting our knowledge base

The claimed hierarchy of methods, with randomized assignment being deemed inherently superior to observational studies, does not survive close scrutiny. Despite frequent claims to the contrary, an RCT does not equate counterfactual outcomes between treated and control units. The fact that systematic bias in estimating the mean impact vanishes in expectation (under ideal conditions) does not imply that the (unknown) experimental error in a one-off RCT is less than the (unknown) error in some alternative observational study. We obviously cannot know that. A biased observational study with a reasonably large sample size may well be closer to the truth in specific trials than an underpowered RCT …

The questionable claims made about the superiority of RCTs as the “gold standard” have had a distorting influence on the use of impact evaluations to inform development policymaking, given that randomization is only feasible for a non-random subset of policies. When a program is community- or economy-wide or there are pervasive spillover effects from those treated to those not, an RCT will be of little help, and may well be deceptive. The tool is only well suited to a rather narrow range of development policies, and even then it will not address many of the questions that policymakers ask. Advocating RCTs as the best, or even only, scientific method for impact evaluation risks distorting our knowledge base for fighting poverty.

Martin Ravallion
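To make Ravallion’s point about one-off trials concrete, here is a hedged Monte Carlo sketch (entirely my own toy numbers, not anything from the quoted text): a small, unbiased RCT is compared with a large observational study carrying a modest systematic bias.

```python
# Toy illustration: an unbiased but underpowered RCT can, in any single trial,
# land further from the truth than a large, mildly biased observational study.

import numpy as np

rng = np.random.default_rng(42)
true_effect = 1.0
noise_sd = 3.0
n_rct, n_obs = 20, 2000     # sample size per arm (illustrative assumptions)
obs_bias = 0.2              # systematic bias of the observational design
n_sims = 10_000

def estimate(n_per_arm, bias):
    treated = true_effect + bias + rng.normal(0, noise_sd, n_per_arm)
    control = rng.normal(0, noise_sd, n_per_arm)
    return treated.mean() - control.mean()

rct = np.array([estimate(n_rct, 0.0) for _ in range(n_sims)])
obs = np.array([estimate(n_obs, obs_bias) for _ in range(n_sims)])

print(f"RCT:           bias {abs(rct.mean() - true_effect):.3f}, "
      f"RMSE {np.sqrt(((rct - true_effect) ** 2).mean()):.3f}")
print(f"Observational: bias {abs(obs.mean() - true_effect):.3f}, "
      f"RMSE {np.sqrt(((obs - true_effect) ** 2).mean()):.3f}")
print(f"Observational estimate closer to the truth in "
      f"{(abs(obs - true_effect) < abs(rct - true_effect)).mean():.0%} of runs")

# The RCT is unbiased in expectation, but run by run the biased yet far more
# precise observational estimate is usually the one closer to the truth.
```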

Laplace and the principle of insufficient reason (wonkish)

3 September, 2018 at 17:04 | Posted in Theory of Science & Methodology | 1 Comment

After their first night in paradise, and having seen the sun rise in the morning, Adam and Eve were wondering if they were to experience another sunrise or not. Given the rather restricted sample of sunrises experienced, what could they expect? According to Laplace’s rule of succession, the probability of an event E happening after it has occurred n times is

p(E|n) = (n+1)/(n+2).

The probabilities can be calculated using Bayes’ rule, but to get the calculations going, Adam and Eve must have an a priori probability (a base rate) to start with. The Bayesian rule of thumb is to simply assume that all outcomes are equally likely. Applying this rule, Adam’s and Eve’s probabilities become 1/2, 2/3, 3/4 …
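As a quick check (my own sketch, not part of Laplace’s or Keynes’s texts), the rule of succession indeed drops out of Bayes’ rule with a uniform prior over the unknown probability p: after n sunrises in a row, P(sunrise tomorrow) = ∫ p^(n+1) dp / ∫ p^n dp = (n+1)/(n+2), with both integrals taken over [0, 1].

```python
# Laplace's rule of succession from a uniform prior, verified exactly.

from fractions import Fraction

def rule_of_succession(n):
    # With a uniform prior, the posterior after n successes in n trials is
    # proportional to p^n, so the predictive probability is the ratio of the
    # two moments: ∫ p^(n+1) dp = 1/(n+2) and ∫ p^n dp = 1/(n+1).
    return Fraction(1, n + 2) / Fraction(1, n + 1)   # = (n+1)/(n+2)

for n in range(4):
    print(f"after {n} sunrise(s): P(next sunrise) = {rule_of_succession(n)}")
# after 0 sunrise(s): 1/2, then 2/3, 3/4, 4/5 ... reproducing the series above.
```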

Now this seems rather straightforward, but as Keynes already noted in his Treatise on Probability (1921), there might be a problem here. The problem has to do with the prior probability and where it is assumed to come from. Is the appeal of the principle of insufficient reason – the principle of indifference – really warranted? Elaborating on Keynes’s example, the wine-water paradox that Donald Gillies presents in Philosophical Theories of Probability (2000) shows it may not be so straightforward after all.

Assume there is a certain quantity of liquid containing wine and water mixed so that the ratio of wine to water (r) is between 1/3 and 3/1. What is then the probability that r ≤ 2? The principle of insufficient reason means that we have to treat all r-values as equiprobable, assigning a uniform probability distribution between 1/3 and 3/1, which gives the probability of r ≤ 2 = [(2-1/3)/(3-1/3)] = 5/8.

But to say r ≤ 2 is equivalent to saying that 1/r ≥ 1/2. Applying the principle now, however, gives the probability of 1/r ≥ 1/2 = [(3-1/2)/(3-1/3)] = 15/16. So we seem to get two different answers that both follow from the same application of the principle of insufficient reason. Given this unsolved paradox, we have good reason to stick with Keynes (and be sceptical of Bayesianism).
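The paradox is easy to restate numerically. A small sketch (mine, just spelling out the two calculations above):

```python
# The wine-water paradox: the principle of indifference applied to r and to
# 1/r gives incompatible answers to the same question.

from fractions import Fraction

lo, hi = Fraction(1, 3), Fraction(3)     # the ratio lies between 1/3 and 3

def uniform_prob_at_most(x, a, b):
    """P(X <= x) when X is uniform on [a, b]."""
    return (x - a) / (b - a)

# Treat r (wine to water) as uniform: P(r <= 2)
p_r = uniform_prob_at_most(Fraction(2), lo, hi)

# Treat 1/r (water to wine) as uniform: P(1/r >= 1/2) = 1 - P(1/r < 1/2)
p_inv = 1 - uniform_prob_at_most(Fraction(1, 2), lo, hi)

print(p_r, p_inv)   # 5/8 versus 15/16 -- two answers to one question
```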

Die Risikogesellschaft

30 August, 2018 at 13:06 | Posted in Theory of Science & Methodology | Comments Off on Die Risikogesellschaft

 

Donald Rubin on randomization and observational studies

21 August, 2018 at 12:48 | Posted in Theory of Science & Methodology | Comments Off on Donald Rubin on randomization and observational studies

 

The essence of scientific reasoning

5 August, 2018 at 11:02 | Posted in Theory of Science & Methodology | Comments Off on The essence of scientific reasoning

In science we standardly use a logically non-valid inference — the fallacy of affirming the consequent — of the following form:

(1) p => q
(2) q
————-
p

or, in instantiated form

(1) ∀x (Gx => Px)

(2) Pa
————
Ga

Although logically invalid, it is nonetheless a kind of inference — abduction — that may be factually strongly warranted and truth-producing.
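That the schema is logically invalid is easy to verify mechanically. A two-line check (my own illustration) finds the truth-value assignment where both premises hold and the conclusion fails:

```python
# Affirming the consequent is not truth-preserving: search for an assignment
# where the premises (p => q) and q are true while the conclusion p is false.

from itertools import product

counterexamples = [(p, q) for p, q in product([True, False], repeat=2)
                   if ((not p) or q) and q and not p]   # p => q, q, but not p
print(counterexamples)   # [(False, True)]: premises true, conclusion false
```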

Following the general pattern ‘Evidence => Explanation => Inference’ we infer something based on what would be the best explanation given the law-like rule (premise 1) and an observation (premise 2). The truth of the conclusion (explanation) is nothing that is logically given, but something we have to justify, argue for, and test in different ways to establish with any degree of certainty. And as always when we deal with explanations, what is considered best is relative to what we know of the world. In the real world, all evidence is relational (evidence only counts as evidence in relation to a specific hypothesis) and has an irreducible holistic aspect. We never conclude that evidence follows from a hypothesis simpliciter, but always given some more or less explicitly stated contextual background assumptions. All non-deductive inferences and explanations are necessarily context-dependent.

If we extend the abductive scheme to incorporate the demand that the explanation has to be the best among a set of plausible competing potential and satisfactory explanations, we have what is nowadays usually referred to as inference to the best explanation.

In inference to the best explanation we start with a body of (purported) data/facts/evidence and search for explanations that can account for these data/facts/evidence. Having the best explanation means that you, given the context-dependent background assumptions, have a satisfactory explanation that can explain the evidence better than any other competing explanation — and so it is reasonable to consider the hypothesis to be true. Even if we (inevitably) do not have deductive certainty, our reasoning gives us a license to consider our belief in the hypothesis as reasonable.

Accepting a hypothesis means that you believe it does explain the available evidence better than any other competing hypothesis. Knowing that we — after having earnestly considered and analysed the other available potential explanations — have been able to eliminate the competing potential explanations, warrants and enhances the confidence we have that our preferred explanation is the best explanation, i.e., the explanation that provides us (given it is true) with the greatest understanding.

This, of course, does not in any way mean that we cannot be wrong. Of course, we can. Inferences to the best explanation are fallible inferences — since the premises do not logically entail the conclusion — so from a logical point of view, inference to the best explanation is a weak mode of inference. But if the arguments put forward are strong enough, they can be warranted and give us justified true belief, and hence, knowledge, even though they are fallible inferences. As scientists we sometimes — much like Sherlock Holmes and other detectives that use inference to the best explanation reasoning — experience disillusion. We thought that we had reached a strong conclusion by ruling out the alternatives in the set of contrasting explanations. But — what we thought was true turned out to be false.

That does not necessarily mean that we had no good reasons for believing what we believed. If we cannot live with that contingency and uncertainty, well, then we are in the wrong business. If it is deductive certainty you are after, rather than the ampliative and defeasible reasoning in inference to the best explanation — well, then get into math or logic, not science.

Everything (and a little more) you want to know about causality

12 July, 2018 at 18:00 | Posted in Theory of Science & Methodology | 1 Comment

Rolf Sandahl and Gustav Jakob Petersson’s Kausalitet: i filosofi, politik och utvärdering is an exceptionally well-written and readable survey of the most influential theories of causality used in science today.

Pick it up and read it!

In the positivist (hypothetico-deductive, deductive-nomological) model of explanation, ‘explanation’ means subsuming or deriving specific phenomena from universal law-like regularities. To explain a phenomenon (the explanandum) is the same as deducing a description of it from a set of premises and universal laws of the type ‘If A, then B’ (the explanans). To explain simply means being able to subsume something under a definite law-like regularity, which is why the approach is also sometimes called the ‘covering-law model.’ The theories, however, are not to be used to explain specific individual phenomena but to explain the universal regularities that enter into a hypothetico-deductive explanation. The positivist model of explanation also comes in a weaker variant: the probabilistic one, according to which explaining in principle means showing that the probability of an event B is very high if event A occurs. In the social sciences this variant dominates. From a methodological point of view, this probabilistic relativization of the positivist approach to explanation makes no great difference.

The original idea behind the positivist model of explanation was that it would (1) give a complete clarification of what an explanation is and show that an explanation that did not meet its requirements was in fact a pseudo-explanation, (2) provide a method for testing explanations, and (3) show that explanations in accordance with the model are the goal of science. All of these claims can obviously be questioned on good grounds.

An important reason why this model has had such an impact in science is that it seemed to make it possible to explain things without having to use ‘metaphysical’ causal concepts. Many scientists regard causality as a problematic notion that is best avoided; simple, observable quantities are supposed to suffice. The problem is just that specifying these quantities and their possible correlations does not explain anything at all. That union representatives often show up in grey jackets and employer representatives in pinstriped suits does not explain why youth unemployment in Sweden is so high today. What is missing in these ‘explanations’ is the necessary adequacy, relevance and causal depth without which science risks becoming empty science fiction and model play for its own sake.

Many social scientists seem convinced that for research to count as science it must apply some variant of the hypothetico-deductive method. Out of reality’s complicated swirl of facts and events, one is supposed to distil a few common law-like correlations that can serve as explanations. Within parts of social science, this ambition to reduce explanations of social phenomena to a few general principles or laws has been an important driving force. With the help of a few general assumptions, the aim is to explain what the whole macro-phenomenon we call society is. Unfortunately, no really tenable arguments are given for why the fact that a theory can explain different phenomena in a unified way should be a decisive reason for accepting or preferring it. Unification and adequacy are not the same thing.

Hard and soft science — a flawed dichotomy

11 July, 2018 at 19:08 | Posted in Theory of Science & Methodology | 1 Comment

The distinctions between hard and soft sciences are part of our culture … But the important distinction is really not between the hard and the soft sciences. Rather, it is between the hard and the easy sciences. Easy-to-do science is what those in physics, chemistry, geology, and some other fields do. Hard-to-do science is what the social scientists do and, in particular, it is what we educational researchers do. In my estimation, we have the hardest-to-do science of them all! We do our science under conditions that physical scientists find intolerable. We face particular problems and must deal with local conditions that limit generalizations and theory building – problems that are different from those faced by the easier-to-do sciences …

Huge context effects cause scientists great trouble in trying to understand school life … A science that must always be sure the myriad particulars are well understood is harder to build than a science that can focus on the regularities of nature across contexts …

Doing science and implementing scientific findings are so difficult in education because humans in schools are embedded in complex and changing networks of social interaction. The participants in those networks have variable power to affect each other from day to day, and the ordinary events of life (a sick child, a messy divorce, a passionate love affair, migraine headaches, hot flashes, a birthday party, alcohol abuse, a new principal, a new child in the classroom, rain that keeps the children from a recess outside the school building) all affect doing science in school settings by limiting the generalizability of educational research findings. Compared to designing bridges and circuits or splitting either atoms or genes, the science to help change schools and classrooms is harder to do because context cannot be controlled.

David Berliner

Amen!

When applying deductivist thinking to economics, mainstream economists set up their easy-to-do ‘as if’ models based on a set of tight axiomatic assumptions from which consistent and precise inferences are made. The beauty of this procedure is, of course, that if the axiomatic premises are true, the conclusions necessarily follow. The snag is that if the models are to be real-world relevant, we also have to argue that their precision and rigour still hold when they are applied to real-world situations. They often do not, and one of the main reasons for that is that context matters. When addressing real-world systems, the idealizations and abstractions necessary for the deductivist machinery to work simply do not hold.

If the real world is fuzzy, vague and indeterminate, then why should our models build upon a desire to describe it as precise and predictable? The logic of idealization is a marvellous tool in an easy-to-do science like physics, but a poor guide for action in real-world systems in which concepts and entities are without clear boundaries and continually interact and overlap.

Uncertainty heuristics

27 June, 2018 at 09:56 | Posted in Theory of Science & Methodology | Comments Off on Uncertainty heuristics

 

Is 0.999… = 1?

1 June, 2018 at 09:11 | Posted in Theory of Science & Methodology | 5 Comments

What is 0.999 …, really? Is it 1? Or is it some number infinitesimally less than 1?

The right answer is to unmask the question. What is 0.999 …, really? It appears to refer to a kind of sum:

0.9 + 0.09 + 0.009 + 0.0009 + …

But what does that mean? That pesky ellipsis is the real problem. There can be no controversy about what it means to add up two, or three, or a hundred numbers. But infinitely many? That’s a different story. In the real world, you can never have infinitely many heaps. What’s the numerical value of an infinite sum? It doesn’t have one — until we give it one. That was the great innovation of Augustin-Louis Cauchy, who introduced the notion of limit into calculus in the 1820s.

The British number theorist G. H. Hardy … explains it best: “It is broadly true to say that mathematicians before Cauchy asked not, ‘How shall we define 1 – 1 + 1 – 1 + …?’ but ‘What is 1 – 1 + 1 – 1 + …?’”

No matter how tight a cordon we draw around the number 1, the sum will eventually, after some finite number of steps, penetrate it, and never leave. Under those circumstances, Cauchy said, we should simply define the value of the infinite sum to be 1.
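Cauchy’s move is easy to see numerically. A small sketch (my own illustration of the limit idea in the quoted passage):

```python
# The partial sums 0.9, 0.99, 0.999, ... eventually get inside, and never
# leave, any 'cordon' drawn around 1 -- which is all that '0.999... = 1' is
# defined to mean.

from fractions import Fraction

def partial_sum(k):
    """Exact value of 0.9 + 0.09 + ... to k terms, i.e. 1 - 10^-k."""
    return sum(Fraction(9, 10 ** (i + 1)) for i in range(k))

cordon = Fraction(1, 10 ** 6)        # an arbitrarily tight band around 1
k = next(n for n in range(1, 100) if 1 - partial_sum(n) < cordon)
print(f"after {k} terms the sum is within {cordon} of 1 and stays there")
# after 7 terms the sum is within 1/1000000 of 1 and stays there
```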

I have no problem with solving problems in mathematics by ‘defining’ them away. In pure mathematics — and logic — you are always allowed to take an epistemological view on problems and ‘axiomatically’ decide that 0.999… is 1. But how about the real world? In that world, from an ontological point of view, 0.999… is never 1! Although mainstream economists seem to take for granted that their epistemology-based models rule the roost even in the real world, they ought to do some ontological reflection when they apply their mathematical models to the real world, where indeed “you can never have infinitely many heaps.”

In econometrics we often run into the ‘Cauchy logic’ — the data is treated as if it were from a larger population, a ‘superpopulation’ where repeated realizations of the data are imagined. Just imagine there could be more worlds than the one we live in and the problem is ‘fixed.’

Accepting Haavelmo’s domain of probability theory and sample space of infinite populations – just as Fisher’s ‘hypothetical infinite population,’ of which the actual data are regarded as constituting a random sample, von Mises’s ‘collective’ or Gibbs’s ‘ensemble’ – also implies that judgments are made on the basis of observations that are actually never made!

Infinitely repeated trials or samplings never take place in the real world. So that cannot be a sound inductive basis for a science with aspirations of explaining real-world socio-economic processes, structures or events. It is — just as the Cauchy mathematical logic of ‘defining’ away problems — not tenable.

In social sciences — including economics — it is always wise to ponder C. S. Peirce’s remark that universes are not as common as peanuts …

Diversity bonuses — the idea

28 May, 2018 at 12:43 | Posted in Theory of Science & Methodology | Comments Off on Diversity bonuses — the idea


If you’d like to learn more about the issue, have a look at James Surowiecki’s The Wisdom of Crowds (Anchor Books, 2005) or Scott Page’s The Diversity Bonus (Princeton University Press, 2017). For an illustrative example, see here.

The evidential sine qua non

24 May, 2018 at 18:10 | Posted in Theory of Science & Methodology | Comments Off on The evidential sine qua non

W. Edwards Deming: “In God we trust; all others bring data.”

