RCTs — assumptions, biases and limitations

25 Nov, 2019 at 15:01 | Posted in Theory of Science & Methodology | Leave a comment

Randomised experiments require much more than just randomising an experiment to identify a treatment’s effectiveness. They involve many decisions and complex steps that bring their own assumptions and degree of bias before, during and after randomisation …

Some researchers may respond, “are RCTs not still more credible than these other methods even if they may have biases?” For most questions we are interested in, RCTs cannot be more credible because they cannot be applied (as outlined above). Other methods (such as observational studies) are needed for many questions not amenable to randomisation but also at times to help design trials, interpret and validate their results, provide further insight on the broader conditions under which treatments may work, among other reasons discussed earlier. Different methods are thus complements (not rivals) in improving understanding.

Finally, randomisation does not always even out everything well at the baseline and it cannot control for endline imbalances in background influencers. No researcher should thus just generate a single randomisation schedule and then use it to run an experiment. Instead researchers need to run a set of randomisation iterations before conducting a trial and select the one with the most balanced distribution of background influencers between trial groups, and then also control for changes in those background influencers during the trial by collecting endline data. Though if researchers hold onto the belief that flipping a coin brings us closer to scientific rigour and understanding than for example systematically ensuring participants are distributed well at baseline and endline, then scientific understanding will be undermined in the name of computer-based randomisation.

Alexander Krauss
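The re-randomisation procedure Krauss describes is easy to sketch. Here is a minimal illustration of the idea (my own hypothetical example, not Krauss's, assuming a single observed background influencer such as age): generate many candidate assignment schedules and keep the one with the smallest baseline imbalance between the trial groups.

```python
import numpy as np

rng = np.random.default_rng(42)
age = rng.normal(50, 12, size=200)  # hypothetical baseline covariate

def imbalance(assignment):
    """Absolute difference in mean age between treatment and control."""
    return abs(age[assignment == 1].mean() - age[assignment == 0].mean())

best_assignment, best_score = None, np.inf
for _ in range(1000):                      # many candidate randomisation schedules
    candidate = rng.permutation(np.repeat([0, 1], 100))
    score = imbalance(candidate)
    if score < best_score:                 # keep the most balanced schedule
        best_assignment, best_score = candidate, score

print(f"Best of 1000 schedules: mean-age difference = {best_score:.3f} years")
```

In practice one would of course check balance on several background influencers at once and, as Krauss stresses, also collect endline data to control for imbalances arising during the trial.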

The point of making a randomized experiment is often said to be that it ‘ensures’ that any correlation between a supposed cause and effect indicates a causal relation. This is believed to hold since randomization (allegedly) ensures that a supposed causal variable does not correlate with other variables that may influence the effect.

The problem with that simplistic view on randomization is that the claims made are both exaggerated and false:

• Even if you manage to make the assignment to treatment and control groups ideally random, the sample selection certainly is — except in extremely rare cases — not random. Even if we make a proper randomized assignment, if we apply the results to a biased sample, there is always the risk that the experimental findings will not apply. What works ‘there,’ does not work ‘here.’ Randomization hence does not ‘guarantee’ or ‘ensure’ making the right causal claim. Although randomization may help us rule out certain possible causal claims, randomization per se does not guarantee anything!

• Even if both sampling and assignment are made in an ideally random way, performing standard randomized experiments only gives you averages. The problem here is that although we may get an estimate of the ‘true’ average causal effect, this may ‘mask’ important heterogeneous effects of a causal nature. Although we get the right answer of the average causal effect being 0, those who are ‘treated’ may have causal effects equal to -100 and those ‘not treated’ may have causal effects equal to 100 (see the sketch after this list). Contemplating being treated or not, most people would probably be interested in knowing about this underlying heterogeneity and would not consider the average effect particularly enlightening.

• There is almost always a trade-off between bias and precision. In real-world settings, greater precision often outweighs a little bias. And — most importantly — in case we have a population with sizeable heterogeneity, the average treatment effect of the sample may differ substantially from the average treatment effect in the population. If so, the value of any extrapolating inferences made from trial samples to other populations is highly questionable.

• Since most real-world experiments and trials build on performing a single randomization, contemplating what would happen if you kept on randomizing forever does not help you to ‘ensure’ or ‘guarantee’ that you do not make false causal conclusions in the one particular randomized experiment you actually do perform. It is indeed difficult to see why thinking about what you know you will never do would make you happy about what you actually do.
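A minimal numerical sketch of the heterogeneity point in the second bullet above (a made-up population, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
# Made-up population: half the units are helped (+100), half are harmed (-100).
effect = np.where(rng.random(n) < 0.5, 100.0, -100.0)
y0 = rng.normal(0, 1, n)          # untreated potential outcome
y1 = y0 + effect                  # treated potential outcome

treated = rng.random(n) < 0.5     # ideal random assignment
ate_hat = y1[treated].mean() - y0[~treated].mean()

print(f"Estimated average treatment effect: {ate_hat:.2f}")
print(f"Individual effects range from {effect.min():.0f} to {effect.max():.0f}")
# The RCT estimate is (close to) zero -- formally correct, but it says nothing
# about the dramatic heterogeneity underneath the average.
```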

Randomization is not a panacea. It is not the best method for all questions and circumstances. Proponents of randomization make claims about its ability to deliver causal knowledge that are simply wrong. There are good reasons to be sceptical of the now popular — and ill-informed — view that randomization is the only valid and best method on the market. It is not.

1855 — the birth of causal inference

22 Nov, 2019 at 22:14 | Posted in Theory of Science & Methodology | 2 Comments

 

If anything, Snow’s path-breaking research underlines how important it is not to equate science with statistical calculation. All science entails human judgement, and using statistical models doesn’t relieve us of that necessity. When we work with misspecified models, the scientific value of statistics is actually zero — even if we are making formally valid statistical inferences! Statistical models are no substitutes for doing real science. Or as a German philosopher once famously wrote:

There is no royal road to science, and only those who do not dread the fatiguing climb of its steep paths have a chance of gaining its luminous summits.

We should never forget that the underlying parameters we use when performing statistical tests are model constructions. And if the model is wrong, the value of our calculations is nil. As ‘shoe-leather researcher’ David Freedman wrote in Statistical Models and Causal Inference:

I believe model validation to be a central issue. Of course, many of my colleagues will be found to disagree. For them, fitting models to data, computing standard errors, and performing significance tests is “informative,” even though the basic statistical assumptions (linearity, independence of errors, etc.) cannot be validated. This position seems indefensible, nor are the consequences trivial. Perhaps it is time to reconsider.
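Freedman's point is easy to reproduce numerically. Here is a minimal, made-up sketch in which a deliberately misspecified linear model is fitted to data generated by a non-linear process:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 200)
y = 0.5 * x**2 + rng.standard_normal(x.size)      # the 'true' process is quadratic

# A researcher who simply assumes linearity fits y = b0 + b1*x by OLS anyway.
X = np.column_stack([np.ones_like(x), x])
beta, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
n_obs, k = X.shape
sigma2 = rss[0] / (n_obs - k)                     # residual variance under the (false) model
se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X).diagonal())

print(f"slope = {beta[1]:.2f}, s.e. = {se[1]:.2f}, t = {beta[1]/se[1]:.1f}")
# The machinery dutifully returns a slope, a standard error and a huge t-statistic.
# The calculations are 'valid' conditional on a linearity assumption that is false,
# so the impressive-looking numbers describe the model, not the data-generating process.
```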

Wann Wissenschaft wahr ist

21 Nov, 2019 at 16:47 | Posted in Theory of Science & Methodology | Leave a comment

 

The limits of extrapolation in economics

13 Oct, 2019 at 19:15 | Posted in Theory of Science & Methodology | 2 Comments

There are two basic challenges that confront any account of extrapolation that seeks to resolve the shortcomings of simple induction. One challenge, which I call the extrapolator’s circle, arises from the fact that extrapolation is worthwhile only when there are important limitations on what one can learn about the target by studying it directly. The challenge, then, is to explain how the suitability of the model as a basis for extrapolation can be established given only limited, partial information about the target … The second challenge is a direct consequence of the heterogeneity of populations studied in biology and social sciences. Because of this heterogeneity, it is inevitable there will be causally relevant differences between the model and the target population.

In economics — as a rule — we can’t experiment on the real-world target directly. To experiment, economists therefore standardly construct ‘surrogate’ models and perform ‘experiments’ on them. To be of interest to us, these surrogate models have to be shown to be relevantly ‘similar’ to the real-world target, so that knowledge from the model can be exported to the real-world target. The fundamental problem highlighted by Steel is that this ‘bridging’ is deeply problematic — to show that what is true of the model is also true of the real-world target, we have to know what is true of the target, but to know what is true of the target we have to know that we have a good model …

Most models in science are representations of something else. Models “stand for” or “depict” specific parts of a “target system” (usually the real world). A model that has neither surface nor deep resemblance to important characteristics of real economies ought to be treated with prima facie suspicion. How could we possibly learn about the real world if there are no parts or aspects of the model that have relevant and important counterparts in the real-world target system? The burden of proof lies on the theoretical economists who think they have contributed anything of scientific relevance without even hinting at any bridge enabling us to traverse from model to reality. All theories and models have to use sign vehicles to convey some kind of content that may be used for saying something of the target system. But purpose-built tractability assumptions — like, e. g., invariance, additivity, faithfulness, modularity, common knowledge, etc., etc. — made solely to secure a way of reaching deductively validated results in mathematical models, are of little value if they cannot be validated outside of the model.

All empirical sciences use simplifying or unrealistic assumptions in their modeling activities. That is (no longer) the issue – as long as the assumptions made are not unrealistic in the wrong way or for the wrong reasons.

Theories are difficult to directly confront with reality. Economists therefore build models of their theories. Those models are representations that are directly examined and manipulated to indirectly say something about the target systems.

There are economic methodologists and philosophers who argue for a less demanding view on modeling and theorizing in economics. Some theoretical economists deem it quite enough to consider economics a mere “conceptual activity” where the model is not so much seen as an abstraction from reality, but rather as a kind of “parallel reality”. By considering models as such constructions, the economist distances the model from the intended target, demanding only that the models be credible, thereby enabling him to make inductive inferences to the target systems.

But what gives license to this leap of faith, this “inductive inference”? Within-model inferences in formal-axiomatic models are usually deductive, but that does not come with a warrant of reliability for inferring conclusions about specific target systems. Since all models in a strict sense are false (necessarily building in part on false assumptions) deductive validity cannot guarantee epistemic truth about the target system. To argue otherwise would surely be an untenable overestimation of the epistemic reach of surrogate models.

Models do not only face theory. They also have to look to the world. But being able to model a credible world, a world that somehow could be considered real or similar to the real world, is not the same as investigating the real world. Even though all theories are false, since they simplify, they may still possibly serve our pursuit of truth. But then they cannot be unrealistic or false in just any way. The falsehood or unrealisticness has to be qualified (in terms of resemblance, relevance etc.). At the very least, the minimalist demand on models in terms of credibility has to give way to a stronger epistemic demand of appropriate similarity and plausibility. One could of course also ask for a sensitivity or robustness analysis, but the credible world, even after having been tested for sensitivity and robustness, can still be a long way from reality – and unfortunately often in ways we know are important. Robustness of claims in a model does not per se give a warrant for exporting the claims to real world target systems.
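To see how weak a currency within-model robustness is, consider a minimal sketch of a derivational robustness check on a toy model (my own illustration, with entirely hypothetical parameter ranges):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 'credible world': linear demand q = a - b*p. For many draws of the
# (entirely hypothetical) parameters we compute the modelled effect on quantity
# of capping the price 20 % below the unconstrained model price.
effects = []
for _ in range(1_000):
    a = rng.uniform(50, 150)            # demand intercept (model assumption)
    b = rng.uniform(0.5, 2.0)           # demand slope (model assumption)
    p_model = a / (2 * b)               # price the toy model would otherwise deliver
    p_cap = 0.8 * p_model               # hypothetical price cap
    effects.append((a - b * p_cap) - (a - b * p_model))   # change in quantity sold

share_positive = np.mean(np.array(effects) > 0)
print(f"Quantity effect positive in {share_positive:.0%} of parameter draws")
# The sign of the result is 'robust' across every draw -- but only because every
# draw shares the same assumed linear structure. Derivational robustness of this
# kind says nothing about whether any real-world market has that structure.
```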

Questions of external validity — the claims the extrapolation inference is supposed to deliver — are important. It can never be enough that models somehow are regarded as internally consistent. One always also has to pose questions of consistency with the data. Internal consistency without external validity is worth nothing.

Do economic models actually explain anything?

11 Oct, 2019 at 13:53 | Posted in Theory of Science & Methodology | 4 Comments

One of the limitations with economics is the restricted possibility to perform experiments, forcing it to mainly rely on observational studies for knowledge of real-world economies.

But still — the idea of performing laboratory experiments holds a firm grip on our wish to discover (causal) relationships between economic ‘variables.’ If we only could isolate and manipulate variables in controlled environments, we would probably find ourselves in a situation where we with greater ‘rigour’ and ‘precision’ could describe, predict, or explain economic happenings in terms of ‘structural’ causes, ‘parameter’ values of relevant variables, and economic ‘laws.’

Galileo Galilei’s experiments are often held as exemplary for how to perform experiments to learn something about the real world.

Galileo’s heavy balls dropping from the tower of Pisa confirmed that the distance an object falls is proportional to the square of time, and that this law (empirical regularity) of falling bodies could be applicable outside a vacuum tube when e. g. air resistance is negligible.
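In symbols, the regularity Galileo established for free fall (in the absence of air resistance) is

```latex
d(t) = \tfrac{1}{2}\, g\, t^{2} \qquad\Longrightarrow\qquad d \propto t^{2}
```

with g ≈ 9.8 m/s² near the earth’s surface.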

The big problem is to decide or find out exactly for which objects air resistance (and other potentially ‘confounding’ factors) is ‘negligible.’ In the case of heavy balls, air resistance is obviously negligible, but how about feathers or plastic bags?

One possibility is to take the all-encompassing-theory road and find out all about possible disturbing/confounding factors — not only air resistance — influencing the fall and build that into one great model delivering accurate predictions on what happens when the object that falls is not only a heavy ball but also feathers and plastic bags. This usually amounts to ultimately stating some kind of ceteris paribus interpretation of the ‘law.’

Another road to take would be to concentrate on the negligibility assumption and to specify the domain of applicability to be only heavy compact bodies. The price you have to pay for this is that (1) ‘negligibility’ may be hard to establish in open real-world systems, (2) the generalisation you can make from ‘sample’ to ‘population’ is heavily restricted, and (3) you actually have to use some ‘shoe leather’ and empirically try to find out how large is the ‘reach’ of the ‘law.’

In mainstream economics, one has usually settled for the ‘theoretical’ road (and in case you think the present ‘natural experiments’ hype has changed anything, remember that to mimic real experiments, exceedingly stringent special conditions have to obtain).

In the end, it all boils down to one question — are there any Galilean ‘heavy balls’ to be found in economics, so that we can indisputably establish the existence of economic laws operating in real-world economies?

As far as I can see there are some heavy balls out there, but not even one single real economic law.

Economic factors/variables are more like feathers than heavy balls — non-negligible factors (like air resistance and chaotic turbulence) are hard to rule out as having no influence on the object studied.

Galilean experiments are hard to carry out in economics, and the theoretical ‘analogue’ models economists construct and in which they perform their ‘thought-experiments’ build on assumptions that are far away from the kind of idealized conditions under which Galileo performed his experiments. The ‘nomological machines’ that Galileo and other scientists have been able to construct have no real analogues in economics. The stability, autonomy, modularity, and interventional invariance that we may find between entities in nature simply are not there in real-world economies. Those are real-world facts, and contrary to the beliefs of most mainstream economists, they won’t go away simply by applying deductive-axiomatic economic theory with tons of more or less unsubstantiated assumptions.

By this, I do not mean to say that we have to discard all (causal) theories/laws building on modularity, stability, invariance, etc. But we have to acknowledge the fact that outside the systems that possibly fulfil these requirements/assumptions, they are of little substantial value. Running paper-and-pen experiments on artificial ‘analogue’ model economies is a sure way of ‘establishing’ (causal) economic laws or solving intricate econometric problems of autonomy, identification, invariance and structural stability — in the model world. But they are poor substitutes for the real thing and don’t have much bearing on what goes on in real-world open social systems. Setting up convenient circumstances for conducting Galilean experiments may tell us a lot about what happens under those kinds of circumstances. But — few, if any, real-world social systems are ‘convenient.’ So most of those systems, theories and models, are irrelevant for letting us know what we really want to know.

To solve, understand, or explain real-world problems you actually have to know something about them — logic, pure mathematics, data simulations or deductive axiomatics don’t take you very far. Most econometrics and economic theories/models are splendid logic machines. But — applying them to the real world is a more or less hopeless undertaking! The assumptions one has to make in order to successfully apply these deductive-axiomatic theories/models/machines are devastatingly restrictive and mostly empirically untestable — and hence make their real-world scope ridiculously narrow. To fruitfully analyse real-world phenomena with models and theories you cannot build on assumptions known to be patently and ridiculously absurd. No matter how much you would like the world to consist entirely of heavy balls, the world is not like that. The world also has its fair share of feathers and plastic bags.

The problem is that most of the ‘idealizations’ we find in mainstream economic models are not ‘core’ assumptions, but rather structural ‘auxiliary’ assumptions. Without those supplementary assumptions, the core assumptions deliver next to nothing of interest. So to come up with interesting conclusions you have to rely heavily on those other — ‘structural’ — assumptions.

The auxiliary assumptions matter crucially. So, what’s the problem? There is standardly no way the results we get in the mainstream models would happen in reality! Not even extreme idealizations in the form of invoking non-existent entities such as ‘actors maximizing expected utility,’ ‘rational expectations’ or ‘representative agents’ deliver. The models simply are not Galilean thought-experiments. Given the set of constraining assumptions, this happens. But change only one of these assumptions and something completely different may happen.

The lack of ‘robustness’ with respect to variation of the model assumptions underscores that this is not the kind of knowledge we are looking for. We want to know what happens to unemployment in general in the real world, not what might possibly happen in a model given a constraining set of assumptions known to be false. This should come as no surprise. How that model with all its more or less outlandish-looking assumptions should ever be able to connect with the real world is, to say the least, somewhat unclear. The total absence of strong empirical evidence and the lack of similarity between the heavily constrained model and the real world make it even more difficult to see how there could ever be any inductive bridging between them. The assumptions are not only unrealistic. They are unrealistic in the wrong way.

Mainstream ‘as if’ models are based on the logic of idealization and a set of tight axiomatic and ‘structural’ assumptions from which consistent and precise inferences are made. The beauty of this procedure is, of course, that if the assumptions are true, the conclusions necessarily follow. But it is a poor guide for real-world systems.

Showing that something is possible in a ‘possible world’ doesn’t give us a justified license to infer that it therefore also is possible in the real world. ‘The Great Gatsby’ is a wonderful novel, but if you truly want to learn about what is going on in the world of finance, I would recommend rather reading Minsky or Keynes and directly confront real-world finance.

The assumptions and descriptions we use in our modelling have to be true — or at least ‘harmlessly’ false — and give a sufficiently detailed characterization of the mechanisms and forces at work. Models in mainstream economics do nothing of the kind.

Our aspirations have to be more far-reaching than just constructing coherent and ‘credible’ models about ‘possible worlds’. We want to understand and explain ‘difference-making’ in the real world and not just in some made-up fantasy world. Science has to be something more than just more or less realistic ‘story-telling’ or ‘explanatory fictionalism’. You have to provide decisive empirical evidence that what you can infer in your model also helps us to uncover what actually goes on in the real world. If you fail to support your models in that way, you come up with nothing that holds as an explanation of what goes on in the world in which we live.

Telling us that something is possible in a possible or ‘credible’ world is not enough. Showing us that something possibly can happen in a model world, is not enough to explain what actually happens in the real world.

Les théorèmes d’incomplétude de Gödel

24 Aug, 2019 at 00:02 | Posted in Theory of Science & Methodology | Comments Off on Les théorèmes d’incomplétude de Gödel

 

Roy Bhaskar

5 Aug, 2019 at 10:58 | Posted in Theory of Science & Methodology | 13 Comments

What properties do societies possess that might make them possible objects of knowledge for us? My strategy in developing an answer to this question will be effectively based on a pincer movement. But in deploying the pincer I shall concentrate first on the ontological question of the properties that societies possess, before shifting to the epistemological question of how these properties make them possible objects of knowledge for us. This is not an arbitrary order of development. It reflects the condition that, for transcendental realism, it is the nature of objects that determines their cognitive possibilities for us; that, in nature, it is humanity that is contingent and knowledge, so to speak, accidental. Thus it is because sticks and stones are solid that they can be picked up and thrown, not because they can be picked up and thrown that they are solid (though that they can be handled in this sort of way may be a contingently necessary condition for our knowledge of their solidity).

No philosopher of science has influenced yours truly’s thinking more than Roy did, and in a time when scientific relativism is still on the march, it is important to uphold his insistence that science must not be reduced to a purely discursive level.

Science is made possible by the fact that there exists a reality beyond our theories and concepts of it. It is this reality that our theories in some way deal with. Contrary to positivism, I cannot see that the main task of science is to detect event-regularities between observed facts. Rather, the task must be conceived as identifying the underlying structure and forces that produce the observed events.

The problem with positivist social science is not that it gives the wrong answers, but rather that in a strict sense it does not give answers at all. Its explanatory models presuppose that the social reality is ‘closed,’ and since social reality is fundamentally ‘open,’ models of that kind cannot explain anything about what happens in such a universe. Positivist social science has to postulate closed conditions to make its models operational and then – totally unrealistically – impute these closed conditions to society’s real structure.

What makes knowledge in social sciences possible is the fact that society consists of social structures and positions that influence the individuals of society, partly through their being the necessary prerequisite for the actions of individuals but also because they dispose individuals to act (within a given structure) in a certain way. These structures constitute the ‘deep structure’ of society.

Our observations and theories are concept-dependent without therefore necessarily being concept-determined. There is a reality existing independently of our knowledge and theories of it. Although we cannot apprehend it without using our concepts and theories, these are not the same as reality itself. Reality and our concepts of it are not identical. Social science is made possible by existing structures and relations in society that are continually reproduced and transformed by different actors.

Explanations and predictions of social phenomena require theory constructions. Just looking for correlations between events is not enough. One has to get under the surface and see the deeper underlying structures and mechanisms that essentially constitute the social system.

The basic question one has to pose when studying social relations and events is: what are the fundamental relations without which they would cease to exist? The answer will point to causal mechanisms and tendencies that act in the concrete contexts we study. Whether these mechanisms are activated, and what effects they will have in that case, is not possible to predict, since these depend on accidental and variable relations. Every social phenomenon is determined by a host of both necessary and contingent relations, and it is impossible in practice to have complete knowledge of these constantly changing relations. That is also why we can never confidently predict them. What we can do, through learning about the mechanisms of the structures of society, is to identify the driving forces behind them, thereby making it possible to indicate the direction in which things tend to develop.

The world itself should never be conflated with the knowledge we have of it. Science can only produce meaningful, relevant and realist knowledge if it acknowledges its dependence on the world out there. Ultimately that also means that the critique yours truly wages against mainstream economics is that it doesn’t take that ontological requirement seriously.

Minimal realism — much ado about nothing

8 Jul, 2019 at 17:40 | Posted in Theory of Science & Methodology | 3 Comments

To generalise Mäki’s distinction between realism and realisticness, someone who believes that economic theories must or should include unrealistic assumptions is not necessarily a non-realist in the broader sense of philosophical realism: “A realist economist is permitted, indeed required, to use unrealistic assumptions in order to isolate what are believed to be the most essential features in a complex situation … To count as a minimal realist, an economist is required to believe that economic reality is unconstituted by his or her representations of it and that whatever truth value those representations have is independent of his or her or anybody else’s opinions of it” (Mäki 1994: 248).

Although Lawson would presumably not deny that orthodox economic theorists count as minimal realists in this sense, his concern is that orthodox economic theory is unrealistic in not representing the way things really are in that it does not refer factually and does not latch onto what is essential in the social domain … Lawson’s standpoint is that economic theory should strive for true explanations of social phenomena, hence Lawson is a methodological realist in this respect.

Duncan Hodge

The explanation paradox in economics

2 Jul, 2019 at 15:28 | Posted in Economics, Theory of Science & Methodology | 7 Comments

Hotelling’s model, then, is false in all relevant senses … And yet, it is considered explanatory. Moreover, and perhaps more importantly, it feels explanatory. If we have not thought much about Hotelling’s kind of cases, it seems that we have genuinely learned something. We begin to see Hotelling situations all over the place. Why do electronics shops in London concentrate in Tottenham Court Road and music shops in Denmark Street? Why do art galleries in Paris cluster around Rue de Seine? Why have so many hi-fi-related retailers set up business in Calle Barquillo in Madrid such that it has come to be known as ‘Calle del Sonido’ (Street of Sound)? And why the heck are most political parties practically indistinguishable? But we do not only come to see that, we also intuitively feel that Hotelling’s model must capture something that is right.

We have now reached an impasse of the kind philosophers call a paradox: a set of statements, all of which seem individually acceptable or even unquestionable but which, when taken together, are jointly contradictory. These are the statements:

(1) Economic models are false.
(2) Economic models are nevertheless explanatory.
(3) Only true accounts can explain.

When facing a paradox, one may respond by either giving up one or more of the jointly contradictory statements or else challenge our logic. I have not found anyone writing on economic models who has explicitly challenged logic (though their writings sometimes suggest otherwise).

Julian Reiss
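As an aside, the clustering intuition that makes Hotelling's model 'feel' explanatory is easy to reproduce. Here is a minimal toy version of my own (a discrete linear city, uniformly spread consumers, and two sellers who care only about market share), not Hotelling's original formulation:

```python
# Toy Hotelling 'linear city': locations 0..100, consumers spread uniformly,
# each buying from the nearest seller (ties split). The two sellers take turns
# relocating to whatever position maximises their own market share.
N = 101

def share(a, b):
    """Market share of the seller at position a when the rival sits at b."""
    if a == b:
        return 0.5
    closer = sum(1.0 for x in range(N) if abs(x - a) < abs(x - b))
    tied = sum(0.5 for x in range(N) if abs(x - a) == abs(x - b))
    return (closer + tied) / N

def best_response(b):
    return max(range(N), key=lambda a: share(a, b))

a, b = 5, 90                      # start the sellers far apart
for _ in range(30):               # alternate best responses until nothing changes
    a = best_response(b)
    b = best_response(a)

print(a, b)                       # both sellers end up at the centre of the line
```

Starting from opposite ends of the street, both sellers crowd into the middle, which is exactly the pattern Reiss's examples trade on. That the toy reproduces the pattern is, of course, precisely what the explanation paradox is about: the model remains false in all relevant senses.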

The logic of economic models

1 Jul, 2019 at 17:28 | Posted in Economics, Theory of Science & Methodology | 2 Comments

Analogue-economy models may picture Galilean thought experiments or they may describe credible worlds. In either case we have a problem in taking lessons from the model to the world. The problem is the venerable one of unrealistic assumptions, exacerbated in economics by the fact that the paucity of economic principles with serious empirical content makes it difficult to do without detailed structural assumptions. But the worry is not just that the assumptions are unrealistic; rather, they are unrealistic in just the wrong way.

Nancy Cartwright

One of the limitations with economics is the restricted possibility to perform experiments, forcing it to mainly rely on observational studies for knowledge of real-world economies.

But still — the idea of performing laboratory experiments holds a firm grip on our wish to discover (causal) relationships between economic ‘variables.’ If we only could isolate and manipulate variables in controlled environments, we would probably find ourselves in a situation where we with greater ‘rigour’ and ‘precision’ could describe, predict, or explain economic happenings in terms of ‘structural’ causes, ‘parameter’ values of relevant variables, and economic ‘laws.’

Galileo Galilei’s experiments are often held as exemplary for how to perform experiments to learn something about the real world. Galileo’s experiments were according to Nancy Cartwright (Hunting Causes and Using Them, p. 223)

designed to find out what contribution the motion due to the pull of the earth will make, with the assumption that the contribution is stable across all the different kinds of situations falling bodies will get into … He eliminated (as far as possible) all other causes of motion on the bodies in his experiment so that he could see how they move when only the earth affects them. That is the contribution that the earth’s pull makes to their motion.

Galileo’s heavy balls dropping from the tower of Pisa confirmed that the distance an object falls is proportional to the square of time, and that this law (empirical regularity) of falling bodies could be applicable outside a vacuum tube when e. g. air resistance is negligible.

The big problem is to decide or find out exactly for which objects air resistance (and other potentially ‘confounding’ factors) is ‘negligible.’ In the case of heavy balls, air resistance is obviously negligible, but how about feathers or plastic bags?

One possibility is to take the all-encompassing-theory road and find out all about possible disturbing/confounding factors — not only air resistance — influencing the fall and build that into one great model delivering accurate predictions on what happens when the object that falls is not only a heavy ball but also feathers and plastic bags. This usually amounts to ultimately stating some kind of ceteris paribus interpretation of the ‘law.’

Another road to take would be to concentrate on the negligibility assumption and to specify the domain of applicability to be only heavy compact bodies. The price you have to pay for this is that (1) ‘negligibility’ may be hard to establish in open real-world systems, (2) the generalisation you can make from ‘sample’ to ‘population’ is heavily restricted, and (3) you actually have to use some ‘shoe leather’ and empirically try to find out how large is the ‘reach’ of the ‘law.’

In mainstream economics, one has usually settled for the ‘theoretical’ road (and in case you think the present ‘natural experiments’ hype has changed anything, remember that to mimic real experiments, exceedingly stringent special conditions have to obtain).

In the end, it all boils down to one question — are there any Galilean ‘heavy balls’ to be found in economics, so that we can indisputably establish the existence of economic laws operating in real-world economies?

As far as I can see there are some heavy balls out there, but not even one single real economic law.

Economic factors/variables are more like feathers than heavy balls — non-negligible factors (like air resistance and chaotic turbulence) are hard to rule out as having no influence on the object studied.

Galilean experiments are hard to carry out in economics, and the theoretical ‘analogue’ models economists construct and in which they perform their ‘thought-experiments’ build on assumptions that are far away from the kind of idealized conditions under which Galileo performed his experiments. The ‘nomological machines’ that Galileo and other scientists have been able to construct have no real analogues in economics. The stability, autonomy, modularity, and interventional invariance that we may find between entities in nature simply are not there in real-world economies. Those are real-world facts, and contrary to the beliefs of most mainstream economists, they won’t go away simply by applying deductive-axiomatic economic theory with tons of more or less unsubstantiated assumptions.

By this, I do not mean to say that we have to discard all (causal) theories/laws building on modularity, stability, invariance, etc. But we have to acknowledge the fact that outside the systems that possibly fulfil these requirements/assumptions, they are of little substantial value. Running paper-and-pen experiments on artificial ‘analogue’ model economies is a sure way of ‘establishing’ (causal) economic laws or solving intricate econometric problems of autonomy, identification, invariance and structural stability — in the model world. But they are poor substitutes for the real thing and don’t have much bearing on what goes on in real-world open social systems. Setting up convenient circumstances for conducting Galilean experiments may tell us a lot about what happens under those kinds of circumstances. But — few, if any, real-world social systems are ‘convenient.’ So most of those systems, theories and models, are irrelevant for letting us know what we really want to know.

To solve, understand, or explain real-world problems you actually have to know something about them — logic, pure mathematics, data simulations or deductive axiomatics don’t take you very far. Most econometrics and economic theories/models are splendid logic machines. But — applying them to the real world is a totally hopeless undertaking! The assumptions one has to make in order to successfully apply these deductive-axiomatic theories/models/machines are devastatingly restrictive and mostly empirically untestable — and hence make their real-world scope ridiculously narrow. To fruitfully analyse real-world phenomena with models and theories you cannot build on assumptions known to be patently and ridiculously absurd. No matter how much you would like the world to consist entirely of heavy balls, the world is not like that. The world also has its fair share of feathers and plastic bags.

The problem articulated by Cartwright (in the quote at the top of this post) is that most of the ‘idealizations’ we find in mainstream economic models are not ‘core’ assumptions, but rather structural ‘auxiliary’ assumptions. Without those supplementary assumptions, the core assumptions deliver next to nothing of interest. So to come up with interesting conclusions you have to rely heavily on those other — ‘structural’ — assumptions.

Let me just take one example to show that as a result of this the Galilean virtue is totally lost — there is no way the results achieved within the model can be exported to other circumstances.

When Pissarides — in his ‘Loss of Skill during Unemployment and the Persistence of Unemployment Shocks’ (QJE, 1992) — tries to explain involuntary unemployment, he does so by constructing a model using assumptions such as e. g. ”two overlapping generations of fixed size”, ”wages determined by Nash bargaining”, ”actors maximizing expected utility”, ”endogenous job openings”, and ”job matching describable by a probability distribution.” The core assumption of expected-utility-maximizing agents doesn’t take the model anywhere, so to get some results Pissarides has to load his model with all these constraining auxiliary assumptions. Without those assumptions, the model would deliver nothing. The auxiliary assumptions matter crucially. So, what’s the problem? There is no way the results we get in that model would happen in reality! Not even extreme idealizations in the form of invoking non-existent entities such as ‘actors maximizing expected utility’ deliver. The model is not a Galilean thought-experiment. Given the set of constraining assumptions, this happens. But change only one of these assumptions and something completely different may happen.

Whenever model-based causal claims are made, experimentalists quickly find that these claims do not hold under disturbances that were not written into the model. Our own stock example is from auction design – models say that open auctions are supposed to foster better information exchange leading to more efficient allocation. Do they do that in general? Or at least under any real world conditions that we actually know about? Maybe. But we know that introducing the smallest unmodelled detail into the setup, for instance complementarities between different items for sale, unleashes a cascade of interactive effects. Careful mechanism designers do not trust models in the way they would trust genuine Galilean thought experiments. Nor should they.

A. Alexandrova & R. Northcott

The lack of ‘robustness’ with respect to variation of the model assumptions underscores that this is not the kind of knowledge we are looking for. We want to know what happens to unemployment in general in the real world, not what might possibly happen in a model given a constraining set of assumptions known to be false. This should come as no surprise. How that model with all its more or less outlandish-looking assumptions should ever be able to connect with the real world is, to say the least, somewhat unclear. The total absence of strong empirical evidence and the lack of similarity between the heavily constrained model and the real world make it even more difficult to see how there could ever be any inductive bridging between them. As Cartwright has it, the assumptions are not only unrealistic, they are unrealistic “in just the wrong way.”

In physics, we have theories and centuries of experience and experiments that show how gravity makes bodies move. In economics, we know there is nothing equivalent. So instead mainstream economists necessarily have to load their theories and models with sets of auxiliary structural assumptions to get any results at all in their models.

So why do mainstream economists keep on pursuing this modelling project?


My philosophy of economics

18 Jun, 2019 at 13:45 | Posted in Economics, Theory of Science & Methodology | 4 Comments

A critique yours truly sometimes encounters is that as long as I cannot come up with some own alternative to the failing mainstream theory, I shouldn’t expect people to pay attention.

This is, however, to totally and utterly misunderstand the role of philosophy and methodology of economics!

As John Locke wrote in An Essay Concerning Human Understanding:

The Commonwealth of Learning is not at this time without Master-Builders, whose mighty Designs, in advancing the Sciences, will leave lasting Monuments to the Admiration of Posterity; But every one must not hope to be a Boyle, or a Sydenham; and in an Age that produces such Masters, as the Great-Huygenius, and the incomparable Mr. Newton, with some other of that Strain; ’tis Ambition enough to be employed as an Under-Labourer in clearing Ground a little, and removing some of the Rubbish, that lies in the way to Knowledge.

That’s what philosophy and methodology can contribute to economics — clearing obstacles to science by clarifying limits and consequences of choosing specific modelling strategies, assumptions, and ontologies.

Every now and then I also get some upset comments from people wondering why I’m not always ‘respectful’ of people like Eugene Fama, Robert Lucas, Greg Mankiw, Paul Krugman, Simon Wren-Lewis, and others of the same ilk.

But sometimes it might actually, from a Lockean perspective, be quite appropriate to be disrespectful.

New Classical and ‘New Keynesian’ macroeconomics is rubbish that ‘lies in the way to Knowledge.’

And when New Classical and ‘New Keynesian’ economists resurrect fallacious ideas and theories that were proven wrong already in the 1930s, then I think a less respectful and more colourful language is called for.

The LOGIC of science vs the METHODS of science

10 Jun, 2019 at 10:02 | Posted in Theory of Science & Methodology | Comments Off on The LOGIC of science vs the METHODS of science

 

Postmodern mumbo jumbo

30 May, 2019 at 13:18 | Posted in Theory of Science & Methodology | 2 Comments

Four important features are common to the various movements:

    1. Central ideas are not explained.
    2. The grounds for a conviction are not given.
    3. The presentation of the doctrine is marked by a linguistic stereotypy …
    4. When it comes to the invocation of founding fathers, the same stereotypy prevails — a limited number of names keep recurring. Heidegger, Foucault, and Derrida come back, again and again …

To these four points, however, I would … add a fifth:

5. The person in question has nothing essentially new to say.

Exaggerated? Unkind? Well, tastes differ. But have a taste of this soup and then try to say that there is nothing to the old Lund professor’s characterisation …

The move from a structuralist account in which capital is understood to structure social relations in relatively homologous ways to a view of hegemony in which power relations are subject to repetition, convergence, and rearticulation brought the question of temporality into the thinking of structure, and marked a shift from a form of Althusserian theory that takes structural totalities as theoretical objects to one in which the insights into the contingent possibility of structure inaugurate a renewed conception of hegemony as bound up with the contingent sites and strategies of the rearticulation of power.

Judith Butler

RCTs — gold standard or monster?

8 May, 2019 at 16:11 | Posted in Theory of Science & Methodology | Comments Off on RCTs — gold standard or monster?

One important comment, repeated — but not unanimously — can perhaps be summarized as ‘All that said and done, RCTs are still generally the best that can be done in estimating average treatment effects and in warranting causal conclusions.’ It is this claim that is the monster that seemingly can never be killed, no matter how many stakes are driven through its heart. We strongly endorse Robert Sampson’s statement “That experiments have no special place in the hierarchy of scientific evidence seems to me to be clear.” Experiments are sometimes the best that can be done, but they are often not. Hierarchies that privilege RCTs over any other evidence irrespective of context or quality are indefensible and can lead to harmful policies. Different methods have different relative advantages and disadvantages.

Angus Deaton & Nancy Cartwright

Revisiting the foundations of randomness and probability

30 Apr, 2019 at 14:17 | Posted in Statistics & Econometrics, Theory of Science & Methodology | 5 Comments

Regarding models as metaphors leads to a radically different view regarding the interpretation of probability. This view has substantial advantages over conventional interpretations …

Probability does not exist in the real world. We must search for her in the Platonic world of ideals. We have shown that the interpretation of probability as a metaphor leads to several substantial changes in interpretations and justifications for conventional frequentist procedures. These changes remove several standard objections which have been made to these procedures. Thus our model seems to offer a good foundation for re-building our understanding of how probability should be interpreted in real world applications. More generally, we have also shown that regarding scientific models as metaphors resolves several puzzles in the philosophy of science.

Asad Zaman

Although yours truly has to confess to not being totally convinced that redefining probability as a metaphor is the right way to go forward on these foundational issues, Zaman’s article sure raises some very interesting questions on the way the concepts of randomness and probability are used in economics.

Modern mainstream economics relies to a large degree on the notion of probability. To be at all amenable to applied economic analysis, economic observations have to be conceived as random events that are analyzable within a probabilistic framework. But is it really necessary to model the economic system as a system where randomness can only be analyzed and understood when based on an a priori notion of probability?

When attempting to convince us of the necessity of founding empirical economic analysis on probability models, mainstream economics actually forces us to (implicitly) interpret events as random variables generated by an underlying probability density function.

This is at odds with reality. Randomness obviously is a fact of the real world (although I’m not sure Zaman agrees but rather puts also randomness in ‘the Platonic world of ideals’). Probability, on the other hand, attaches (if at all) to the world via intellectually constructed models, and a fortiori is only a fact of a probability generating (nomological) machine or a well constructed experimental arrangement or ‘chance set-up.’

Just as there is no such thing as a ‘free lunch,’ there is no such thing as a ‘free probability.’

To be able at all to talk about probabilities, you have to specify a model. If there is no chance set-up or model that generates the probabilistic outcomes or events — in statistics one refers to any process where you observe or measure as an experiment (rolling a die) and the results obtained as the outcomes or events (number of points rolled with the die, being e. g. 3 or 5) of the experiment — then, strictly speaking, there is no event at all.

Probability is a relational element. It always must come with a specification of the model from which it is calculated. And then to be of any empirical scientific value it has to be shown to coincide with (or at least converge to) real data generating processes or structures — something seldom or never done.

And this is the basic problem with economic data. If you have a fair roulette-wheel, you can arguably specify probabilities and probability density distributions. But how do you conceive of the analogous nomological machines for prices, gross domestic product, income distribution etc? Only by a leap of faith. And that does not suffice. You have to come up with some really good arguments if you want to persuade people into believing in the existence of socio-economic structures that generate data with characteristics conceivable as stochastic events portrayed by probabilistic density distributions.

We simply have to admit that the socio-economic states of nature that we talk of in most social sciences — and certainly in economics — are not amenable to analyze as probabilities, simply because in the real world open systems there are no probabilities to be had!

The processes that generate socio-economic data in the real world cannot just be assumed to always be adequately captured by a probability measure. And, so, it cannot be maintained that it even should be mandatory to treat observations and data — whether cross-section, time series or panel data — as events generated by some probability model. The important activities of most economic agents do not usually include throwing dice or spinning roulette-wheels. Data generating processes — at least outside of nomological machines like dice and roulette-wheels — are not self-evidently best modelled with probability measures.
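The contrast can be made concrete with a small sketch of my own (purely illustrative): for a die the chance set-up is specified in advance and observed frequencies can be checked against it, whereas a series that merely looks irregular, here a deterministic logistic map standing in for the kind of data economists face, comes with no given probability model to check against.

```python
import numpy as np

rng = np.random.default_rng(7)

# A genuine chance set-up: a fair die. Here the probability model (1/6 per face)
# is part of the specification, so observed frequencies can be checked against it.
rolls = rng.integers(1, 7, size=60_000)
print("die frequencies:", np.round(np.bincount(rolls)[1:] / rolls.size, 3))

# An irregular-looking but fully deterministic series: the logistic map x -> 4x(1-x).
# Its construction specifies no chance set-up at all; any probability distribution
# we attach to such data is a modelling choice, not a given property of the process.
x, series = 0.3, []
for _ in range(60_000):
    x = 4 * x * (1 - x)
    series.append(x)
freqs = np.histogram(series, bins=6, range=(0.0, 1.0))[0] / len(series)
print("logistic-map bin frequencies:", np.round(freqs, 3))
```

Whether socio-economic data resemble the first case or the second is exactly what cannot simply be assumed.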

If we agree on this, we also have to admit that much of modern neoclassical economics lacks sound foundations.

When economists and econometricians — often uncritically and without arguments — simply assume that one can apply probability distributions from statistical theory on their own area of research, they are really skating on thin ice.

This importantly also means that if you cannot show that data satisfies all the conditions of the probabilistic nomological machine, then the statistical inferences made in mainstream economics lack sound foundations!
