Nancy Cartwright on the limits of linear causal models

16 December, 2014 at 21:42 | Posted in Theory of Science & Methodology | 1 Comment

 

‘Uncertain’ knowledge

12 December, 2014 at 19:14 | Posted in Theory of Science & Methodology | 3 Comments


It is generally recognised that the Ricardian analysis was concerned with what we now call long-period equilibrium. Marshall’s contribution mainly consisted in grafting on to this the marginal principle and the principle of substitution, together with some discussion of the passage from one position of long-period equilibrium to another. But he assumed, as Ricardo did, that the amounts of the factors of production in use were given and that the problem was to determine the way in which they would be used and their relative rewards. Edgeworth and Professor Pigou and other later and contemporary writers have embroidered and improved this theory by considering how different peculiarities in the shapes of the supply functions of the factors of production would affect matters, what will happen in conditions of monopoly and imperfect competition, how far social and individual advantage coincide, what are the special problems of exchange in an open system and the like. But these more recent writers like their predecessors were still dealing with a system in which the amount of the factors employed was given and the other relevant facts were known more or less for certain. This does not mean that they were dealing with a system in which change was ruled out, or even one in which the disappointment of expectation was ruled out. But at any given time facts and expectations were assumed to be given in a definite and calculable form; and risks, of which, though admitted, not much notice was taken, were supposed to be capable of an exact actuarial computation. The calculus of probability, though mention of it was kept in the background, was supposed to be capable of reducing uncertainty to the same calculable status as that of certainty itself; just as in the Benthamite calculus of pains and pleasures or of advantage and disadvantage, by which the Benthamite philosophy assumed men to be influenced in their general ethical behaviour.

Actually, however, we have, as a rule, only the vaguest idea of any but the most direct consequences of our acts. Sometimes we are not much concerned with their remoter consequences, even though time and chance may make much of them. But sometimes we are intensely concerned with them, more so, occasionally, than with the immediate consequences. Now of all human activities which are affected by this remoter preoccupation, it happens that one of the most important is economic in character, namely, wealth. The whole object of the accumulation of wealth is to produce results, or potential results, at a comparatively distant, and sometimes indefinitely distant, date. Thus the fact that our knowledge of the future is fluctuating, vague and uncertain, renders wealth a peculiarly unsuitable subject for the methods of the classical economic theory. This theory might work very well in a world in which economic goods were necessarily consumed within a short interval of their being produced. But it requires, I suggest, considerable amendment if it is to be applied to a world in which the accumulation of wealth for an indefinitely postponed future is an important factor; and the greater the proportionate part played by such wealth accumulation the more essential does such amendment become.

By ‘uncertain’ knowledge, let me explain, I do not mean merely to distinguish what is known for certain from what is only probable. The game of roulette is not subject, in this sense, to uncertainty; nor is the prospect of a Victory bond being drawn. Or, again, the expectation of life is only slightly uncertain. Even the weather is only moderately uncertain. The sense in which I am using the term is that in which the prospect of a European war is uncertain, or the price of copper and the rate of interest twenty years hence, or the obsolescence of a new invention, or the position of private wealth-owners in the social system in 1970. About these matters there is no scientific basis on which to form any calculable probability whatever. We simply do not know.

J. M. Keynes, “The General Theory of Employment”, Quarterly Journal of Economics, February 1937.

Rigour vs. relevance

4 December, 2014 at 13:15 | Posted in Theory of Science & Methodology | Leave a comment

[Cartoon on statistical significance]

Bayesianism — a ‘patently absurd’ approach to science

2 December, 2014 at 19:49 | Posted in Theory of Science & Methodology | 3 Comments

Back in 1991, when I earned my first Ph.D. — with a dissertation on decision making and rationality in social choice theory and game theory — yours truly concluded that “repeatedly it seems as though mathematical tractability and elegance — rather than realism and relevance — have been the most applied guidelines for the behavioural assumptions being made. On a political and social level it is doubtful if the methodological individualism, ahistoricity and formalism they are advocating are especially valid.”

This, of course, was like swearing in church. My mainstream neoclassical colleagues were — to say the least — not exactly überjoyed.

The decision-theoretical approach I was perhaps most critical of was the one built on the then reawakened Bayesian subjectivist interpretation of probability.

One of my inspirations when working on the dissertation was Henry E. Kyburg, and I still think his critique is the ultimate take-down of Bayesian hubris (emphasis added):

From the point of view of the “logic of consistency” (which for Ramsey includes the probability calculus), no set of beliefs is more rational than any other, so long as they both satisfy the quantitative relationships expressed by the fundamental laws of probability. Thus I am free to assign the number 1/3 to the probability that the sun will rise tomorrow; or, more cheerfully, to take the probability to be 9/10 that I have a rich uncle in Australia who will send me a telegram tomorrow informing me that he has made me his sole heir. Neither Ramsey, nor Savage, nor de Finetti, to name three leading figures in the personalistic movement, can find it in his heart to detect any logical shortcomings in anyone, or to find anyone logically culpable, whose degrees of belief in various propositions satisfy the laws of the probability calculus, however odd those degrees of belief may otherwise be. Reasonableness, in which Ramsey was also much interested, he considered quite another matter. The connection between rationality (in the sense of conformity to the rules of the probability calculus) and reasonableness (in the ordinary inductive sense) is much closer for Savage and de Finetti than it was for Ramsey, but it is still not a strict connection; one can still be wildly unreasonable without sinning against either logic or probability.

Now this seems patently absurd. It is to suppose that even the most simple statistical inferences have no logical weight where my beliefs are concerned. It is perfectly compatible with these laws that I should have a degree of belief equal to 1/4 that this coin will land heads when next I toss it; and that I should then perform a long series of tosses (say, 1000), of which 3/4 should result in heads; and then that on the 1001st toss, my belief in heads should be unchanged at 1/4. It could increase to correspond to the relative frequency in the observed sample, or it could even, by the agency of some curious maturity-of-odds belief of mine, decrease to 1/8. I think we would all, or almost all, agree that anyone who altered his beliefs in the last-mentioned way should be regarded as irrational. The same is true, though perhaps not so seriously, of anyone who stuck to his beliefs in the face of what we would ordinarily call contrary evidence. It is surely a matter of simple rationality (and not merely a matter of instinct or convention) that we modify our beliefs, in some sense, some of the time, to conform to the observed frequencies of the corresponding events.

There is another argument against both subjectivistic and logical theories that depends on the fact that probabilities are represented by real numbers … The point can be brought out by considering an old fashioned urn containing black and white balls. Suppose that we are in an appropriate state of ignorance, so that, on the logical view, as well as on the subjectivistic view, the probability that the first ball drawn will be black, is a half. Let us also assume that the draws (with replacement) are regarded as exchangeable events, so that the same will be true of the i-th ball drawn. Now suppose that we draw a thousand balls from this urn, and that half of them are black. Relative to this information both the subjectivistic and the logical theories would lead to the assignment of a conditional probability of 1/2 to the statement that a black ball will be drawn on the 1001st draw …

Although it does seem perfectly plausible that our bets concerning black balls and white balls should be offered at the same odds before and after the extensive sample, it surely does not seem plausible to characterize our beliefs in precisely the same way in the two cases …

This is a strong argument, I think, for considering the measure of rational belief to be two dimensional; and some writers on probability have come to the verge of this conclusion. Keynes, for example, considers an undefined quantity he calls “weight” to reflect the distinction between probability-relations reflecting much relevant evidence, and those which reflect little evidence …

Though Savage distinguishes between these probabilities of which he is sure and those of which he is not so sure, there is no way for him to make this distinction within the theory; there is no internal way for him to reflect the distinction between probabilities which are based on many instances and those which are based on only a few instances, or none at all.

Henry E. Kyburg

The reference Kyburg makes to Keynes and his concept of “weight of argument” is extremely interesting.

Almost a hundred years after John Maynard Keynes wrote his seminal A Treatise on Probability (1921), it is still very difficult to find statistics textbooks that seriously try to incorporate his far-reaching and incisive analysis of induction and evidential weight.

The standard view in statistics – and the axiomatic probability theory underlying it – is to a large extent based on the rather simplistic idea that “more is better.” But as Keynes argues – “more of the same” is not what is important when making inductive inferences. It’s rather a question of “more but different.”

Variation, not replication, is at the core of induction. Finding that p(x|y) = p(x|y & w) doesn’t make w “irrelevant.” Knowing that the probability is unchanged when w is present gives p(x|y & w) a different evidential weight (“weight of argument”). Running 10 replicative experiments does not make you as “sure” of your inductions as running 10 000 varied experiments – even if the probability values happen to be the same.
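To see the point in miniature, here is a small sketch (my own illustration, using a conjugate Beta-Binomial model in Python rather than anything found in Keynes or Kyburg) of how two bodies of evidence can back exactly the same probability of 1/2 while differing enormously in weight:

```python
from scipy import stats

# Start from a uniform Beta(1, 1) prior over the chance of drawing "black".
prior_a, prior_b = 1, 1

for n in (10, 10_000):
    blacks = n // 2                                   # half of the draws come up black
    posterior = stats.beta(prior_a + blacks, prior_b + n - blacks)
    lo, hi = posterior.interval(0.95)                 # 95% credible interval
    print(f"n = {n:>6}: P(black next) = {posterior.mean():.3f}, "
          f"95% interval = [{lo:.3f}, {hi:.3f}]")
```

Both runs return the same single number, 0.5, for the next draw; only the width of the interval records how much evidence stands behind that number. It is this second dimension of the assessment that gets lost when rational belief is reduced to one probability value, and it is exactly what Keynes’s “weight of argument” is meant to capture.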

According to Keynes we live in a world permeated by unmeasurable uncertainty – not quantifiable stochastic risk – which often forces us to make decisions based on anything but “rational expectations.” Keynes rather thinks that we base our expectations on the confidence or “weight” we put on different events and alternatives. To Keynes expectations are a question of weighing probabilities by “degrees of belief,” beliefs that often have precious little to do with the kind of stochastic probabilistic calculations made by the rational agents modeled by “modern” social sciences. And often we “simply do not know.” As Keynes writes in the Treatise:

The kind of fundamental assumption about the character of material laws, on which scientists appear commonly to act, seems to me to be [that] the system of the material universe must consist of bodies … such that each of them exercises its own separate, independent, and invariable effect, a change of the total state being compounded of a number of separate changes each of which is solely due to a separate portion of the preceding state … Yet there might well be quite different laws for wholes of different degrees of complexity, and laws of connection between complexes which could not be stated in terms of laws connecting individual parts … If different wholes were subject to different laws qua wholes and not simply on account of and in proportion to the differences of their parts, knowledge of a part could not lead, it would seem, even to presumptive or probable knowledge as to its association with other parts … These considerations do not show us a way by which we can justify induction … [p. 427] No one supposes that a good induction can be arrived at merely by counting cases. The business of strengthening the argument chiefly consists in determining whether the alleged association is stable, when accompanying conditions are varied … [p. 468] In my judgment, the practical usefulness of those modes of inference … on which the boasted knowledge of modern science depends, can only exist … if the universe of phenomena does in fact present those peculiar characteristics of atomism and limited variety which appear more and more clearly as the ultimate result to which material science is tending.

Science, according to Keynes, should help us penetrate to “the true process of causation lying behind current events” and disclose “the causal forces behind the apparent facts.” Models can never be more than a starting point in that endeavour. He further argued that it was inadmissible to project history onto the future. Consequently we cannot presuppose that what has worked before will continue to do so in the future. That statistical models can get hold of correlations between different “variables” is not enough. If they cannot get at the causal structure that generated the data, they are not really “identified.”

How strange that writers of statistics textbooks as a rule do not even touch upon these aspects of scientific methodology, which seem so fundamental and important for anyone trying to understand how we learn and orient ourselves in an uncertain world. An educated guess as to why would be that Keynes’s concepts cannot be squeezed into a single calculable numerical “probability.” In the quest for quantities one turns a blind eye to qualities and looks the other way – but Keynes’s ideas keep creeping out from under the statistics carpet.

It’s high time that statistics textbooks gave Keynes his due — and that we re-read Henry E. Kyburg!

Roy Bhaskar 1944 — 2014

21 November, 2014 at 15:04 | Posted in Theory of Science & Methodology | 1 Comment

Roy Bhaskar died at his home in Leeds on Wednesday 19 November 2014.

No philosopher of science has influenced my own thinking more than Roy did.

Rest in peace my dear friend.

What properties do societies possess that might make them possible objects of knowledge for us? My strategy in developing an answer to this question will be effectively based on a pincer movement. But in deploying the pincer I shall concentrate first on the ontological question of the properties that societies possess, before shifting to the epistemological question of how these properties make them possible objects of knowledge for us. This is not an arbitrary order of development. It reflects the condition that, for transcendental realism, it is the nature of objects that determines their cognitive possibilities for us; that, in nature, it is humanity that is contingent and knowledge, so to speak, accidental. Thus it is because sticks and stones are solid that they can be picked up and thrown, not because they can be picked up and thrown that they are solid (though that they can be handled in this sort of way may be a contingently necessary condition for our knowledge of their solidity).

Testing hypotheses — data mining vs. prediction

3 November, 2014 at 13:35 | Posted in Theory of Science & Methodology | 2 Comments

In the case of “accommodation,” a hypothesis is constructed to fit an observation that has already been made. In the case of “prediction,” the hypothesis, though it may already be partially based on an existing data set, is formulated before the empirical claim in question is deduced and verified by observation …

It is surprisingly difficult to establish an advantage thesis: to show that predictions tend to provide stronger support than accommodations …

The two arguments I want to make for the advantage thesis make connections between the contrast between prediction and accommodation and these relatively uncontroversial evidential and theoretical virtues. The first and simpler of the two arguments is the argument from choice. Scientists can often choose their predictions in a way in which they cannot choose which data to accommodate. When it comes to prediction, they can pick their shots, deciding which predictions of the hypothesis to check. Accommodated data, by contrast, are already there, and scientists have to make out of them what they can …

Unfortunately, the argument from choice does not give a reason for the more ambitious claim — the strong advantage thesis — that a single, particular observation that was accommodated would have provided more support for the hypothesis in question if it had been predicted instead. The following analogy may help to clarify this distinction between the weak and strong advantage theses. The fact that I can choose what I eat in a restaurant but not when I am invited to someone’s home explains why I tend to prefer the dishes I eat in restaurants over those I eat in other people’s homes, but this obviously gives no reason to suppose that lasagna, say, tastes better in restaurants than in homes. Similarly, the argument from choice may show that predictions tend to provide stronger support than accommodations, but it does not show that the fact that a particular datum was predicted gives any more reason to believe the hypothesis than if that same datum had been accommodated. To defend this strong advantage thesis, we need another argument: the “fudging” argument …

Now for the fudging argument … The point is that the investigator may, sometimes without fully realizing it, fudge the hypothesis … to ensure that more of the data gets captured … The advantage that the fudging argument attributes to prediction is thus in some respects similar to the advantage of a double-blind medical experiment, in which neither the doctor nor the patients know which patients are getting the placebo and which are getting the drug being tested. The doctor’s ignorance makes her judgment more reliable, because she does not know what the “right” answer is supposed to be. The fudging argument makes an analogous suggestion about scientists generally. Not knowing the right answer in advance—the situation in prediction but not in accommodation—makes it less likely that the scientist will fudge the hypothesis in a way that makes for poor empirical support …

What the fudging argument shows is that we are sometimes justified in being more impressed by predictions than by accommodations.

Peter Lipton: Testing Hypotheses

Real world filters and economic models

1 November, 2014 at 18:10 | Posted in Economics, Theory of Science & Methodology | 4 Comments

Chameleons arise and are often nurtured by the following dynamic. First a bookshelf model is constructed that involves terms and elements that seem to have some relation to the real world and assumptions that are not so unrealistic that they would be dismissed out of hand. The intention of the author, let’s call him or her “Q,” in developing the model may be to say something about the real world or the goal may simply be to explore the implications of making a certain set of assumptions. Once Q’s model and results become known, references are made to it, with statements such as “Q shows that X.” This should be taken as a short-hand way of saying “Q shows that under a certain set of assumptions it follows (deductively) that X,” but some people start taking X as a plausible statement about the real world. If someone skeptical about X challenges the assumptions made by Q, some will say that a model shouldn’t be judged by the realism of its assumptions, since all models have assumptions that are unrealistic. Another rejoinder made by those supporting X as something plausibly applying to the real world might be that the truth or falsity of X is an empirical matter and until the appropriate empirical tests or analyses have been conducted and have rejected X, X must be taken seriously. In other words, X is innocent until proven guilty … Because there is a model for X, because questioning the assumptions behind X is not appropriate, and because the testable implications of the model supporting X have not been empirically rejected, we must take X seriously. Q’s model (with X as a result) becomes a chameleon that avoids the real world filters …

One can generally develop a theoretical model to produce any result within a wide range. Do you want a model that produces the result that banks should be 100% funded by deposits? Here is a set of assumptions and an argument that will give you that result. That such a model exists tells us very little. By claiming relevance without running it through the filter it becomes a chameleon …

Whereas some theoretical models can be immensely useful in developing intuitions, in essence a theoretical model is nothing more than an argument that a set of conclusions follows from a given set of assumptions. Being logically correct may earn a place for a theoretical model on the bookshelf, but when a theoretical model is taken off the shelf and applied to the real world, it is important to question whether the model’s assumptions are in accord with what we know about the world. Is the story behind the model one that captures what is important or is it a fiction that has little connection to what we see in practice? Have important factors been omitted? Are economic agents assumed to be doing things that we have serious doubts they are able to do? These questions and others like them allow us to filter out models that are ill suited to give us genuine insights. To be taken seriously models should pass through the real world filter.

Chameleons are models that are offered up as saying something significant about the real world even though they do not pass through the filter. When the assumptions of a chameleon are challenged, various defenses are made (e.g., one shouldn’t judge a model by its assumptions, any model has equal standing with all other models until the proper empirical tests have been run, etc.). In many cases the chameleon will change colors as necessary, taking on the colors of a bookshelf model when challenged, but reverting back to the colors of a model that claims to apply to the real world when not challenged.

Paul Pfleiderer

Pfleiderer’s absolute gem of an article reminds me of what H. L. Mencken once famously said:

There is always an easy solution to every problem – neat, plausible and wrong.

Pfleiderer’s perspective may be applied to many of the issues involved when modeling complex and dynamic economic phenomena. Let me take just one example — simplicity.

When it comes to modeling I do see the point often emphatically made for simplicity among economists and econometricians — but only as long as it doesn’t impinge on our truth-seeking. “Simple” macroeconom(etr)ic models may of course be an informative heuristic tool for research. But if practitioners of modern macroeconom(etr)ics do not investigate and provide a justification for the credibility of the simplicity assumptions on which they erect their building, their models will not fulfill their tasks. Maintaining that economics is a science in the “true knowledge” business, I remain a skeptic of the pretences and aspirations of “simple” macroeconom(etr)ic models and theories. So far, I can’t really see that e.g. “simple” microfounded models have yielded very much in terms of realistic and relevant economic knowledge.

All empirical sciences use simplifying or unrealistic assumptions in their modeling activities. That is not the issue – as long as the assumptions made are not unrealistic in the wrong way or for the wrong reasons.

But models do not only face theory. They also have to look to the world. Being able to model a “credible world,” a world that somehow could be considered real or similar to the real world, is not the same as investigating the real world. Even though — as Pfleiderer acknowledges — all theories are false, since they simplify, they may still possibly serve our pursuit of truth. But then they cannot be unrealistic or false in just any way; the falsehood or unrealisticness has to be qualified.

Explanation, understanding and prediction of real-world phenomena, relations and mechanisms therefore cannot be grounded on assuming simplicity simpliciter. If we cannot show that the mechanisms or causes we isolate and handle in our models are stable – in the sense that they do not change when we export them from our models to our target systems – then, whether considered “simple” or not, they only hold under ceteris paribus conditions and are a fortiori of limited value for our understanding, explanation and prediction of our real-world target systems.

The obvious ontological shortcoming of a basically epistemic – rather than ontological – approach is that “similarity” or “resemblance” tout court does not guarantee that the correspondence between model and target is interesting, relevant, revealing or somehow adequate in terms of mechanisms, causal powers, capacities or tendencies. No matter how many convoluted refinements of concepts are made in the model, if the simplifications made do not result in models similar to reality in the appropriate respects (such as structure or isomorphism), the surrogate system becomes a substitute system that does not bridge to the world but rather misses its target.

Constructing simple macroeconomic models somehow seen as “successively approximating” macroeconomic reality is a rather unimpressive attempt at legitimizing the use of fictitious idealizations for reasons that have more to do with model tractability than with a genuine interest in understanding and explaining features of real economies. Many of the model assumptions standardly made by neoclassical macroeconomics – simplicity being one of them – are restrictive rather than harmless and cannot, a fortiori, in any sensible sense be considered approximations at all.

If economists aren’t able to show that the mechanisms or causes they isolate and handle in their “simple” models are stable, in the sense that they do not change when exported to their “target systems,” those mechanisms only hold under ceteris paribus conditions and are a fortiori of limited value for our understanding, explanations or predictions of real economic systems.

That Newton’s theory in most regards is simpler than Einstein’s is of no avail. Today Einstein has replaced Newton. The ultimate arbiter of the scientific value of models cannot be simplicity.

As scientists we have to get our priorities right. Ontological under-labouring has to precede epistemology.

The insuperable problem with ‘objective’ Bayesianism

28 October, 2014 at 08:52 | Posted in Theory of Science & Methodology | 4 Comments

A major, and notorious, problem with this approach, at least in the domain of science, concerns how to ascribe objective prior probabilities to hypotheses. What seems to be necessary is that we list all the possible hypotheses in some domain and distribute probabilities among them, perhaps ascribing the same probability to each employing the principle of indifference. But where is such a list to come from? It might well be thought that the number of possible hypotheses in any domain is infinite, which would yield zero for the probability of each and the Bayesian game cannot get started. All theories have zero probability and Popper wins the day. How is some finite list of hypotheses enabling some objective distribution of nonzero prior probabilities to be arrived at? My own view is that this problem is insuperable, and I also get the impression from the current literature that most Bayesians are themselves coming around to this point of view.
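A quick way to spell out why the infinite case is fatal (my own gloss on the quoted argument, not part of it): the principle of indifference cannot assign equal nonzero priors to infinitely many hypotheses without violating the probability axioms.

```latex
% Indifference over a countably infinite list H_1, H_2, \dots would require
% P(H_i) = c for every i, but then
\[
  \sum_{i=1}^{\infty} P(H_i) \;=\; \sum_{i=1}^{\infty} c \;=\;
  \begin{cases}
    0      & \text{if } c = 0,\\
    \infty & \text{if } c > 0,
  \end{cases}
\]
% which can never equal 1. And if every prior is put at zero, Bayes' theorem
\[
  P(H_i \mid e) \;=\; \frac{P(e \mid H_i)\,P(H_i)}{P(e)}
\]
% can never raise any posterior above zero, whatever the evidence e may be.
```

So the objective Bayesian either has to justify cutting the list of hypotheses down to some finite set, or accept that no amount of evidence can ever confirm anything, which is exactly the dilemma described above.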

Slippery slope arguments

19 October, 2014 at 16:10 | Posted in Theory of Science & Methodology | Leave a comment

 

Causal inference and implicit superpopulations (wonkish)

13 October, 2014 at 10:01 | Posted in Theory of Science & Methodology | 1 Comment

The most expedient population and data generation model to adopt is one in which the population is regarded as a realization of an infinite superpopulation. This setup is the standard perspective in mathematical statistics, in which random variables are assumed to exist with fixed moments for an uncountable and unspecified universe of events …

This perspective is tantamount to assuming a population machine that spawns individuals forever (i.e., the analog to a coin that can be flipped forever). Each individual is born as a set of random draws from the distributions of Y¹, Y⁰, and additional variables collectively denoted by S …

Because of its expediency, we will usually write with the superpopulation model in the background, even though the notions of infinite superpopulations and sequences of sample sizes approaching infinity are manifestly unrealistic.
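To make the quoted “population machine” concrete, here is a toy rendering of the superpopulation setup (my own illustrative code, with made-up distributions and a made-up treatment effect; none of it comes from Morgan and Winship):

```python
import numpy as np

rng = np.random.default_rng(0)

def spawn_individuals(n):
    """Toy 'population machine': each individual is a fresh random draw of
    potential outcomes (Y1, Y0) and a covariate S from fixed distributions."""
    s = rng.normal(size=n)                    # covariate S
    y0 = 1.0 + 0.5 * s + rng.normal(size=n)   # potential outcome without treatment
    y1 = y0 + 2.0                             # potential outcome with treatment
    return y1, y0, s

y1, y0, s = spawn_individuals(1_000)
d = rng.integers(0, 2, size=1_000)            # random treatment assignment
y_obs = np.where(d == 1, y1, y0)              # only one potential outcome is ever observed

print("difference in observed means:",
      y_obs[d == 1].mean() - y_obs[d == 0].mean())
```

The “superpopulation” here is nothing but the distributions hard-coded into spawn_individuals: an unlimited reservoir of hypothetical individuals that nobody could ever sample in practice, which is precisely the fiction the rest of this post objects to.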

In econometrics one often gets the feeling that many of its practitioners think of it as a kind of automatic inferential machine: input data and out comes causal knowledge. This is like pulling a rabbit from a hat. Great — but first you have to put the rabbit in the hat. And this is where assumptions come into the picture.

The assumption of imaginary “superpopulations” is one of the many dubious assumptions used in modern econometrics.

As social scientists — and economists — we have to confront the all-important question of how to handle uncertainty and randomness. Should we define randomness with probability? If we do, we have to accept that to speak of randomness we also have to presuppose the existence of nomological probability machines, since probabilities cannot be spoken of – and actually, to be strict, do not at all exist – without specifying such system-contexts. Accepting a domain of probability theory and sample space of infinite populations also implies that judgments are made on the basis of observations that are actually never made!

Infinitely repeated trials or samplings never take place in the real world. So that cannot be a sound inductive basis for a science with aspirations of explaining real-world socio-economic processes, structures or events. It’s not tenable.


In his great book Statistical Models and Causal Inference: A Dialogue with the Social Sciences David Freedman also touched on this fundamental problem, arising when you try to apply statistical models outside overly simple nomological machines like coin tossing and roulette wheels:

Lurking behind the typical regression model will be found a host of such assumptions; without them, legitimate inferences cannot be drawn from the model. There are statistical procedures for testing some of these assumptions. However, the tests often lack the power to detect substantial failures. Furthermore, model testing may become circular; breakdowns in assumptions are detected, and the model is redefined to accommodate. In short, hiding the problems can become a major goal of model building.

Using models to make predictions of the future, or the results of interventions, would be a valuable corrective. Testing the model on a variety of data sets – rather than fitting refinements over and over again to the same data set – might be a good second-best … Built into the equation is a model for non-discriminatory behavior: the coefficient d vanishes. If the company discriminates, that part of the model cannot be validated at all.

Regression models are widely used by social scientists to make causal inferences; such models are now almost a routine way of demonstrating counterfactuals. However, the “demonstrations” generally turn out to depend on a series of untested, even unarticulated, technical assumptions. Under the circumstances, reliance on model outputs may be quite unjustified. Making the ideas of validation somewhat more precise is a serious problem in the philosophy of science. That models should correspond to reality is, after all, a useful but not totally straightforward idea – with some history to it. Developing appropriate models is a serious problem in statistics; testing the connection to the phenomena is even more serious …

In our days, serious arguments have been made from data. Beautiful, delicate theorems have been proved, although the connection with data analysis often remains to be established. And an enormous amount of fiction has been produced, masquerading as rigorous science.
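The equation Freedman alludes to (“the coefficient d vanishes”) is not reproduced in the excerpt; a plausible reconstruction of the kind of salary regression he has in mind, offered here only as an assumption for illustration, would be something like:

```latex
\[
  \text{salary}_i \;=\; \alpha \;+\; \beta\,\text{qualifications}_i \;+\; d\,\text{group}_i \;+\; \varepsilon_i
\]
```

Non-discriminatory behaviour is then built in as the hypothesis d = 0, and the counterfactual use of the fitted equation, what salaries would have been absent discrimination, leans on exactly the part of the specification that, as Freedman notes, cannot be validated from the data if discrimination is in fact present.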

And as if this wasn’t enough, one could — as we’ve seen — also seriously wonder what kind of “populations” these statistical and econometric models ultimately are based on. Why should we as social scientists — and not as pure mathematicians working with formal-axiomatic systems without the urge to confront our models with real target systems — unquestioningly accept models based on concepts like the “infinite superpopulations” used in e.g. the potential outcome framework that has become so popular lately in social sciences?

Of course one could treat observational or experimental data as random samples from real populations. I have no problem with that. But probabilistic econometrics does not content itself with that kind of population. Instead it creates imaginary populations of “parallel universes” and assumes that our data are random samples from that kind of “infinite superpopulation.”

But this is actually nothing but hand-waving! And it is inadequate for real science. As David Freedman writes:

With this approach, the investigator does not explicitly define a population that could in principle be studied, with unlimited resources of time and money. The investigator merely assumes that such a population exists in some ill-defined sense. And there is a further assumption, that the data set being analyzed can be treated as if it were based on a random sample from the assumed population. These are convenient fictions … Nevertheless, reliance on imaginary populations is widespread. Indeed regression models are commonly used to analyze convenience samples … The rhetoric of imaginary populations is seductive because it seems to free the investigator from the necessity of understanding how data were generated.

In social sciences — including economics — it’s always wise to ponder C. S. Peirce’s remark that universes are not as common as peanuts …
