Roy Bhaskar 1944 — 2014

21 November, 2014 at 15:04 | Posted in Theory of Science & Methodology | 1 Comment

Roy Bhaskar died at his home in Leeds on Wednesday 19 November 2014.

No philosopher of science has influenced my own thinking more than Roy did.

Rest in peace my dear friend.

What properties do societies possess that might make them possible objects of knowledge for us? My strategy in developing an answer to this question will be effectively based on a pincer movement. But in deploying the pincer I shall concentrate first on the ontological question of the properties that societies possess, before shifting to the epistemological question of how these properties make them possible objects of knowledge for us. This is not an arbitrary order of development. It reflects the condition that, for transcendental realism, it is the nature of objects that determines their cognitive possibilities for us; that, in nature, it is humanity that is contingent and knowledge, so to speak, accidental. Thus it is because sticks and stones are solid that they can be picked up and thrown, not because they can be picked up and thrown that they are solid (though that they can be handled in this sort of way may be a contingently necessary condition for our knowledge of their solidity).

Testing hypotheses — data mining vs. prediction

3 November, 2014 at 13:35 | Posted in Theory of Science & Methodology | 2 Comments

In the case of “accommodation,” a hypothesis is constructed to fit an observation that has already been made. In the case of “prediction,” the hypothesis, though it may already be partially based on an existing data set, is formulated before the empirical claim in question is deduced and verified by observation …

It is surprisingly difficult to establish an advantage thesis: to show that predictions tend to provide stronger support than accommodations …

The two arguments I want to make for the advantage thesis make connections between the contrast between prediction and accommodation and these relatively uncontroversial evidential and theoretical virtues. The first and simpler of the two arguments is the argument from choice. Scientists can often choose their predictions in a way in which they cannot choose which data to accommodate. When it comes to prediction, they can pick their shots, deciding which predictions of the hypothesis to check. Accommodated data, by contrast, are already there, and scientists have to make out of them what they can …

Unfortunately, the argument from choice does not give a reason for the more ambitious claim — the strong advantage thesis — that a single, particular observation that was accommodated would have provided more support for the hypothesis in question if it had been predicted instead. The following analogy may help to clarify this distinction between the weak and strong advantage theses. The fact that I can choose what I eat in a restaurant but not when I am invited to someone’s home explains why I tend to prefer the dishes I eat in restaurants over those I eat in other people’s homes, but this obviously gives no reason to suppose that lasagna, say, tastes better in restaurants than in homes. Similarly, the argument from choice may show that predictions tend to provide stronger support than accommodations, but it does not show that the fact that a particular datum was predicted gives any more reason to believe the hypothesis than if that same datum had been accommodated. To defend this strong advantage thesis, we need another argument: the “fudging” argument …

Now for the fudging argument … The point is that the investigator may, sometimes without fully realizing it, fudge the hypothesis … to ensure that more of the data gets captured … The advantage that the fudging argument attributes to prediction is thus in some respects similar to the advantage of a double-blind medical experiment, in which neither the doctor nor the patients know which patients are getting the placebo and which are getting the drug being tested. The doctor’s ignorance makes her judgment more reliable, because she does not know what the “right” answer is supposed to be. The fudging argument makes an analogous suggestion about scientists generally. Not knowing the right answer in advance—the situation in prediction but not in accommodation—makes it less likely that the scientist will fudge the hypothesis in a way that makes for poor empirical support …

What the fudging argument shows is that we are sometimes justified in being more impressed by predictions than by accommodations.

Peter Lipton: Testing Hypotheses
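A minimal numerical sketch may make the fudging argument concrete. Nothing below comes from Lipton's text: the data-generating process, the sample sizes and the polynomial degrees are all invented for illustration. The point is simply that a hypothesis "fudged" to capture every observation already in hand fits that data beautifully, while a genuine prediction on data not yet seen is exactly the test it tends to fail.

```python
# An illustrative sketch (all numbers invented): "accommodation with fudging" as
# tuning a very flexible hypothesis to data already in hand, "prediction" as
# committing to a simple hypothesis before the new data arrive.
import numpy as np

rng = np.random.default_rng(0)

def true_process(x):
    return 2.0 * x + 1.0                       # hypothetical linear "truth"

x_old = np.linspace(0, 1, 10)                  # data already in hand
y_old = true_process(x_old) + rng.normal(0, 0.3, x_old.size)
x_new = np.linspace(1.1, 2, 10)                # data observed only later
y_new = true_process(x_new) + rng.normal(0, 0.3, x_new.size)

# "Fudged" hypothesis: a 7th-degree polynomial bent to capture the old points.
fudged = np.polyfit(x_old, y_old, deg=7)
# Committed hypothesis: a simple line fixed in advance of the new observations.
committed = np.polyfit(x_old, y_old, deg=1)

def mse(coef, x, y):
    return np.mean((np.polyval(coef, x) - y) ** 2)

print("in-sample fit,  fudged   :", mse(fudged, x_old, y_old))     # looks excellent
print("out-of-sample,  fudged   :", mse(fudged, x_new, y_new))     # typically collapses
print("out-of-sample,  committed:", mse(committed, x_new, y_new))  # survives the test
```

The in-sample numbers reward the accommodating hypothesis; the out-of-sample numbers, which only prediction forces us to confront, expose the fudging. That is the asymmetry the double-blind analogy is after.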

Real world filters and economic models

1 November, 2014 at 18:10 | Posted in Economics, Theory of Science & Methodology | 4 Comments

Chameleons arise and are often nurtured by the following dynamic. First a bookshelf model is constructed that involves terms and elements that seem to have some relation to the real world and assumptions that are not so unrealistic that they would be dismissed out of hand. The intention of the author, let’s call him or her “Q,” in developing the model may be to say something about the real world or the goal may simply be to explore the implications of making a certain set of assumptions. Once Q’s model and results become known, references are made to it, with statements such as “Q shows that X.” This should be taken as a short-hand way of saying “Q shows that under a certain set of assumptions it follows (deductively) that X,” but some people start taking X as a plausible statement about the real world. If someone skeptical about X challenges the assumptions made by Q, some will say that a model shouldn’t be judged by the realism of its assumptions, since all models have assumptions that are unrealistic. Another rejoinder made by those supporting X as something plausibly applying to the real world might be that the truth or falsity of X is an empirical matter and until the appropriate empirical tests or analyses have been conducted and have rejected X, X must be taken seriously. In other words, X is innocent until proven guilty … Because there is a model for X, because questioning the assumptions behind X is not appropriate, and because the testable implications of the model supporting X have not been empirically rejected, we must take X seriously. Q’s model (with X as a result) becomes a chameleon that avoids the real world filters …

One can generally develop a theoretical model to produce any result within a wide range. Do you want a model that produces the result that banks should be 100% funded by deposits? Here is a set of assumptions and an argument that will give you that result. That such a model exists tells us very little. By claiming relevance without running it through the filter it becomes a chameleon …

Whereas some theoretical models can be immensely useful in developing intuitions, in essence a theoretical model is nothing more than an argument that a set of conclusions follows from a given set of assumptions. Being logically correct may earn a place for a theoretical model on the bookshelf, but when a theoretical model is taken off the shelf and applied to the real world, it is important to question whether the model’s assumptions are in accord with what we know about the world. Is the story behind the model one that captures what is important or is it a fiction that has little connection to what we see in practice? Have important factors been omitted? Are economic agents assumed to be doing things that we have serious doubts they are able to do? These questions and others like them allow us to filter out models that are ill suited to give us genuine insights. To be taken seriously models should pass through the real world filter.

Chameleons are models that are offered up as saying something significant about the real world even though they do not pass through the filter. When the assumptions of a chameleon are challenged, various defenses are made (e.g., one shouldn’t judge a model by its assumptions, any model has equal standing with all other models until the proper empirical tests have been run, etc.). In many cases the chameleon will change colors as necessary, taking on the colors of a bookshelf model when challenged, but reverting back to the colors of a model that claims to apply to the real world when not challenged.

Paul Pfleiderer

Pfleiderer’s absolute gem of an article reminds me of what H. L. Mencken once famously said:

There is always an easy solution to every problem – neat, plausible and wrong.

Pfleiderer’s perspective may be applied to many of the issues involved when modeling complex and dynamic economic phenomena. Let me take just one example — simplicity.

When it comes to modeling I do see the point often emphatically made for simplicity among economists and econometricians — but only as long as it doesn’t impinge on our truth-seeking. “Simple” macroeconom(etr)ic models may of course be an informative heuristic tool for research. But if practitioners of modern macroeconom(etr)ics do not investigate and provide a justification for the credibility of the simplicity assumptions on which they erect their buildings, those models will not fulfill their tasks. Maintaining that economics is a science in the “true knowledge” business, I remain a skeptic of the pretences and aspirations of “simple” macroeconom(etr)ic models and theories. So far, I can’t really see that, e.g., “simple” microfounded models have yielded very much in terms of realistic and relevant economic knowledge.

All empirical sciences use simplifying or unrealistic assumptions in their modeling activities. That is not the issue – as long as the assumptions made are not unrealistic in the wrong way or for the wrong reasons.

But models do not only face theory. They also have to look to the world. Being able to model a “credible world,” a world that somehow could be considered real or similar to the real world, is not the same as investigating the real world. Even though — as Pfleiderer acknowledges — all theories are false, since they simplify, they may still possibly serve our pursuit of truth. But then they cannot be unrealistic or false in just any way. The falsehood or unrealism has to be qualified.

Explanation, understanding and prediction of real-world phenomena, relations and mechanisms therefore cannot be grounded on simply assuming simplicity. If we cannot show that the mechanisms or causes we isolate and handle in our models are stable, in the sense that they do not change from one situation to another when we export them from our models to our target systems, then they – considered “simple” or not – only hold under ceteris paribus conditions and are a fortiori of limited value for our understanding, explanation and prediction of our real-world target systems.

The obvious ontological shortcoming of a basically epistemic – rather than ontological – approach is that “similarity” or “resemblance” tout court do not guarantee that the correspondence between model and target is interesting, relevant, revealing or somehow adequate in terms of mechanisms, causal powers, capacities or tendencies. No matter how many convoluted refinements of concepts are made in the model, if the simplifications made do not result in models similar to reality in the appropriate respects (such as structure or isomorphism), the surrogate system becomes a substitute system that does not bridge to the world but rather misses its target.

Constructing simple macroeconomic models somehow seen as “successively approximating” macroeconomic reality is a rather unimpressive attempt at legitimizing the use of fictitious idealizations for reasons that have more to do with model tractability than with a genuine interest in understanding and explaining features of real economies. Many of the model assumptions standardly made by neoclassical macroeconomics – simplicity being one of them – are restrictive rather than harmless and therefore cannot in any sensible meaning be considered approximations at all.

If economists aren’t able to show that the mechanisms or causes they isolate and handle in their “simple” models are stable, in the sense that they do not change when exported to their “target systems,” those mechanisms only hold under ceteris paribus conditions and are a fortiori of limited value to our understanding, explanation or prediction of real economic systems.
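A toy simulation may illustrate this exportability point. All parameter values below are invented: a “simple” relation is estimated inside one regime and then carried, unchanged, to a target system whose underlying mechanism has shifted, with predictably poor results.

```python
# A toy sketch (all numbers invented) of the exportability problem: a parameter
# estimated inside one "simple" model-world need not be stable when carried over
# to a target system whose underlying mechanism has changed.
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Regime A: the mechanism linking x to y has slope 0.8.
x_a = rng.normal(0, 1, n)
y_a = 0.8 * x_a + rng.normal(0, 0.5, n)

# Regime B (the "target system"): same variables, but the mechanism has shifted.
x_b = rng.normal(0, 1, n)
y_b = 0.2 * x_b + rng.normal(0, 0.5, n)

# Estimate the "simple" model on regime A ...
slope_a = np.polyfit(x_a, y_a, deg=1)[0]

# ... and export it, unmodified, to regime B.
pred_b = slope_a * x_b
print("estimated slope in regime A  :", round(slope_a, 2))   # close to 0.8
print("prediction MSE in regime B   :", round(np.mean((pred_b - y_b) ** 2), 2))
print("MSE using regime B's own slope:", round(np.mean((0.2 * x_b - y_b) ** 2), 2))
```

The estimated “mechanism” is perfectly serviceable inside the world in which it was fitted and only there, which is exactly what holding ceteris paribus amounts to.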

That Newton’s theory in most regards is simpler than Einstein’s is of no avail. Today Einstein has replaced Newton. The ultimate arbiter of the scientific value of models cannot be simplicity.

As scientists we have to get our priorities right. Ontological under-labouring has to precede epistemology.

The insuperable problem with ‘objective’ Bayesianism

28 October, 2014 at 08:52 | Posted in Theory of Science & Methodology | 4 Comments

A major, and notorious, problem with this approach, at least in the domain of science, concerns how to ascribe objective prior probabilities to hypotheses. What seems to be necessary is that we list all the possible hypotheses in some domain and distribute probabilities among them, perhaps ascribing the same probability to each employing the principle of indifference. But where is such a list to come from? It might well be thought that the number of possible hypotheses in any domain is infinite, which would yield zero for the probability of each and the Bayesian game cannot get started. All theories have zero probability and Popper wins the day. How is some finite list of hypotheses enabling some objective distribution of nonzero prior probabilities to be arrived at? My own view is that this problem is insuperable, and I also get the impression from the current literature that most Bayesians are themselves coming around to this point of view.
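The arithmetic behind “the Bayesian game cannot get started” is just Bayes’ theorem under the principle of indifference. With $N$ mutually exclusive hypotheses each given prior $P(H_i) = 1/N$,

$$P(H_i \mid E) = \frac{P(E \mid H_i)\,P(H_i)}{\sum_{j=1}^{N} P(E \mid H_j)\,P(H_j)},$$

and as the list of possible hypotheses is allowed to grow without bound, $P(H_i) = 1/N \to 0$. Once a prior is exactly zero, the numerator vanishes for every $E$, so no amount of favourable evidence can raise the posterior above zero. That is the sense in which all theories end up with zero probability and, as the quote puts it, Popper wins the day.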

Slippery slope arguments

19 October, 2014 at 16:10 | Posted in Theory of Science & Methodology | Leave a comment


Causal inference and implicit superpopulations (wonkish)

13 October, 2014 at 10:01 | Posted in Theory of Science & Methodology | 1 Comment

The most expedient population and data generation model to adopt is one in which the population is regarded as a realization of an infinite superpopulation. This setup is the standard perspective in mathematical statistics, in which random variables are assumed to exist with fixed moments for an uncountable and unspecified universe of events …

This perspective is tantamount to assuming a population machine that spawns individuals forever (i.e., the analog to a coin that can be flipped forever). Each individual is born as a set of random draws from the distributions of Y¹, Y⁰, and additional variables collectively denoted by S …

Because of its expediency, we will usually write with the superpopulation model in the background, even though the notions of infinite superpopulations and sequences of sample sizes approaching infinity are manifestly unrealistic.
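A toy rendering of the quoted “population machine” may help fix ideas. The distributions and the treatment effect of 2.0 below are invented, and Y1, Y0 stand for the potential outcomes with and without treatment: each individual is spawned as a draw from fixed superpopulation distributions, and any finite data set is then treated as a sample from that endless stream.

```python
# A toy sketch (distributions invented) of the superpopulation picture: a
# "population machine" that spawns individuals as random draws of the potential
# outcomes Y1, Y0 and a covariate S, forever.
import numpy as np

rng = np.random.default_rng(42)

def population_machine(n):
    """Spawn n individuals as draws from fixed superpopulation distributions."""
    s = rng.normal(0, 1, n)                      # covariate S
    y0 = 1.0 + 0.5 * s + rng.normal(0, 1, n)     # potential outcome without treatment
    y1 = y0 + 2.0 + rng.normal(0, 1, n)          # potential outcome with treatment
    return y0, y1, s

# The superpopulation average treatment effect is a fixed moment (2.0 here, by
# construction); every finite realization only approximates it.
for n in (50, 5_000, 500_000):
    y0, y1, _ = population_machine(n)
    print(f"n = {n:>7}: sample ATE = {np.mean(y1 - y0):.3f}")
```

The average treatment effect is a fixed moment of the machine only because the machine is stipulated; nothing in an actual finite data set certifies that such a device exists, which is precisely the objection raised below.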

In econometrics one often gets the feeling that many of its practitioners think of it as a kind of automatic inferential machine: input data and out comes causal knowledge. This is like pulling a rabbit from a hat. Great — but first you have to put the rabbit in the hat. And this is where assumptions come into the picture.

The assumption of imaginary “superpopulations” is one of the many dubious assumptions used in modern econometrics.

As social scientists — and economists — we have to confront the all-important question of how to handle uncertainty and randomness. Should we define randomness in terms of probability? If we do, we have to accept that to speak of randomness we also have to presuppose the existence of nomological probability machines, since probabilities cannot be spoken of – and actually, to be strict, do not exist at all – without specifying such system-contexts. Accepting a domain of probability theory and a sample space of infinite populations also implies that judgments are made on the basis of observations that are actually never made!

Infinitely repeated trials or samplings never take place in the real world. So that cannot be a sound inductive basis for a science with aspirations of explaining real-world socio-economic processes, structures or events. It’s not tenable.


In his great book Statistical Models and Causal Inference: A Dialogue with the Social Sciences, David Freedman also touched on this fundamental problem, which arises when you try to apply statistical models outside overly simple nomological machines like coin tossing and roulette wheels:

Lurking behind the typical regression model will be found a host of such assumptions; without them, legitimate inferences cannot be drawn from the model. There are statistical procedures for testing some of these assumptions. However, the tests often lack the power to detect substantial failures. Furthermore, model testing may become circular; breakdowns in assumptions are detected, and the model is redefined to accommodate. In short, hiding the problems can become a major goal of model building.

Using models to make predictions of the future, or the results of interventions, would be a valuable corrective. Testing the model on a variety of data sets – rather than fitting refinements over and over again to the same data set – might be a good second-best … Built into the equation is a model for non-discriminatory behavior: the coefficient d vanishes. If the company discriminates, that part of the model cannot be validated at all.

Regression models are widely used by social scientists to make causal inferences; such models are now almost a routine way of demonstrating counterfactuals. However, the “demonstrations” generally turn out to depend on a series of untested, even unarticulated, technical assumptions. Under the circumstances, reliance on model outputs may be quite unjustified. Making the ideas of validation somewhat more precise is a serious problem in the philosophy of science. That models should correspond to reality is, after all, a useful but not totally straightforward idea – with some history to it. Developing appropriate models is a serious problem in statistics; testing the connection to the phenomena is even more serious …

In our days, serious arguments have been made from data. Beautiful, delicate theorems have been proved, although the connection with data analysis often remains to be established. And an enormous amount of fiction has been produced, masquerading as rigorous science.
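A stylized example of the kind of breakdown Freedman describes (the data-generating process below is invented): a regression of y on x delivers a tidy, precisely estimated coefficient, yet because an unobserved confounder drives both variables, the causal reading of that coefficient is simply false, and nothing in the fitted model itself signals the failure of the exogeneity assumption.

```python
# A stylized sketch (data-generating process invented): the regression produces
# a clean coefficient, but its causal reading rests on an untested assumption,
# namely that no unobserved confounder links x and y.
import numpy as np

rng = np.random.default_rng(7)
n = 10_000

u = rng.normal(0, 1, n)                        # unobserved confounder
x = 0.9 * u + rng.normal(0, 1, n)              # "treatment", partly driven by u
y = 0.0 * x + 1.5 * u + rng.normal(0, 1, n)    # true causal effect of x on y is zero

# Routine OLS of y on x, the usual "demonstration of a counterfactual":
xd, yd = x - x.mean(), y - y.mean()
slope = (xd @ yd) / (xd @ xd)
resid = yd - slope * xd
r2 = 1 - resid.var() / yd.var()

print("estimated 'causal effect' of x:", round(slope, 2))  # roughly 0.75, far from 0
print("R-squared of the fitted model :", round(r2, 2))     # the fit itself raises no alarm
```

The model fits, the standard errors are small, and the one assumption that makes the output causally meaningful is exactly the one the data cannot check.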

And as if this wasn’t enough, one could — as we’ve seen — also seriously wonder what kind of “populations” these statistical and econometric models are ultimately based on. Why should we as social scientists — and not as pure mathematicians working with formal-axiomatic systems without the urge to confront our models with real target systems — unquestioningly accept models based on concepts like the “infinite superpopulations” used in e.g. the potential outcome framework that has become so popular lately in the social sciences?

Of course one could treat observational or experimental data as random samples from real populations. I have no problem with that. But probabilistic econometrics does not content itself with that kind of population. Instead it creates imaginary populations of “parallel universes” and assumes that our data are random samples from these “infinite superpopulations.”

But this is actually nothing but hand-waving! And it is inadequate for real science. As David Freedman writes:

With this approach, the investigator does not explicitly define a population that could in principle be studied, with unlimited resources of time and money. The investigator merely assumes that such a population exists in some ill-defined sense. And there is a further assumption, that the data set being analyzed can be treated as if it were based on a random sample from the assumed population. These are convenient fictions … Nevertheless, reliance on imaginary populations is widespread. Indeed regression models are commonly used to analyze convenience samples … The rhetoric of imaginary populations is seductive because it seems to free the investigator from the necessity of understanding how data were generated.

In social sciences — including economics — it’s always wise to ponder C. S. Peirce’s remark that universes are not as common as peanuts …

The riddle of induction

8 October, 2014 at 15:20 | Posted in Theory of Science & Methodology | 2 Comments

Recall [Russell's famous] turkey problem. You look at the past and derive some rule about the future. Well, the problems in projecting from the past can be even worse than what we have already learned, because the same past data can confirm a theory and also its exact opposite …

For the technical version of this idea, consider a series of dots on a page representing a number through time … Let’s say your high school teacher asks you to extend the series of dots. With a linear model, that is, using a ruler, you can run only a single straight line from the past to the future. The linear model is unique. There is one and only one straight line that can project a series of points …

This is what philosopher Nelson Goodman called the riddle of induction: we project a straight line only because we have a linear model in our head — the fact that a number has risen for 1,000 days straight should make you more confident that it will rise in the future. But if you have a nonlinear model in your head, it might confirm that the number should decline on day 1,001 …

The severity of Goodman’s riddle of induction is as follows: if there is no longer even a single unique way to ‘generalize’ from what you see, to make an inference about the unknown, then how should you operate? The answer, clearly, will be that you should employ ‘common sense’.

Nassim Taleb

And economists standardly — and without even the slightest justification — assume linearity in their models …
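The riddle can be put in a few lines of code. The series and the rival rule below are invented purely for illustration: two hypotheses agree on every one of the 1,000 observed points and yet give opposite answers about day 1,001, so the data alone cannot tell us which “generalization” to project.

```python
# A minimal sketch of the riddle: two hypotheses that reproduce the same history
# exactly, but disagree about the very next observation.
import numpy as np

days = np.arange(1, 1001)
observed = days.astype(float)            # a number that has risen for 1,000 days

linear_rule = lambda t: t * 1.0                          # project the straight line
turkey_rule = lambda t: np.where(t <= 1000, t * 1.0, 0)  # identical past, collapse after day 1,000

# Both rules capture the whole history without error ...
assert np.allclose(linear_rule(days), observed)
assert np.allclose(turkey_rule(days), observed)

# ... and give opposite answers about day 1,001.
print("linear model, day 1001     :", linear_rule(np.array([1001]))[0])
print("'grue'-like model, day 1001:", turkey_rule(np.array([1001]))[0])
```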

‘Rigorous’ evidence can be worse than useless

7 October, 2014 at 10:09 | Posted in Theory of Science & Methodology | 2 Comments

So far we have shown that for two prominent questions in the economics of education, experimental and non-experimental estimates appear to be in tension. Furthermore, experimental results across different contexts are often in tension with each other. The first tension presents policymakers with a trade-off between the internal validity of estimates from the “wrong” context, and the greater external validity of observational data analysis from the “right” context. The second tension, between equally well-identified results across contexts, suggests that the resolution of this trade-off is not trivial. There appears to be genuine heterogeneity in the true causal parameter across contexts.

These findings imply that the common practice of ranking evidence by its level of “rigor”, without respect to context, may produce misleading policy recommendations …

Despite the fact that we have chosen to focus on extremely well-researched literatures, it is plausible that a development practitioner confronting questions related to class size, private schooling, or the labor-market returns to education would confront a dearth of well-identified, experimental or quasi-experimental evidence from the country or context in which they are working. They would instead be forced to choose between less internally valid OLS estimates, and more internally valid experimental estimates produced in a very different setting. For all five of the examples explored here, the literature provides a compelling case that policymakers interested in minimizing the error of their parameter estimates would do well to prioritize careful thinking about local evidence over rigorously-estimated causal effects from the wrong context.

Lant Pritchett & Justin Sandefur
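The trade-off Pritchett and Sandefur describe can be stated as a back-of-the-envelope mean-squared-error comparison (the symbols $b$ and $\Delta$ are shorthand introduced here, not the authors’ notation). Ignoring sampling variance, a confounded local estimate with bias $b$, and an internally valid estimate transported from a context whose true parameter differs from the local one by $\Delta$, have

$$\mathrm{MSE}_{\text{local}} \approx b^{2}, \qquad \mathrm{MSE}_{\text{transported}} \approx \Delta^{2},$$

so the “less rigorous” local estimate minimizes the policymaker’s error whenever contextual heterogeneity exceeds confounding bias, which is what the authors report finding across their five literatures.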

Uncertainty and reflexivity — two things missing from Krugman’s economics

18 September, 2014 at 09:36 | Posted in Theory of Science & Methodology | 7 Comments

One thing that’s missing from Krugman’s treatment of useful economics is the explicit recognition of what Keynes and, before him, Frank Knight emphasized: the persistent presence of enormous uncertainty in the economy. Most people most of the time don’t just face quantifiable risks, to be tamed by statistics and probabilistic reasoning. We have to take decisions in the prospect of events – big and small – that we can’t predict even with probabilities. Keynes famously argued that classical economics had no role for money just because it didn’t allow for uncertainty. Knight similarly noted that it made no room for the entrepreneur for the same reason. That to this day standard economic theory continues to rule out money and exclude entrepreneurs may strike the noneconomist as odd, to say the least. But there it is. Why is uncertainty so important? Because the more of it there is in the economy, the less scope there is for successful maximizing, and the more unstable are the equilibria the economy exhibits, if it exhibits any at all. Uncertainty is just what the New Classicals neglected when they endorsed the efficient market hypothesis and the Black-Scholes formulae for pumping returns out of well-behaved risks.

If uncertainty is an ever-present, pervasive feature of the economy, then we can be confident, along with Krugman, that New Classical models won’t be useful over the long haul. Even if people are perfectly rational, too many uncertain, “exogenous” events will divert each new equilibrium path before it can even get started.

There is a second feature of the economy that Krugman’s useful economics needs to reckon with, one that Keynes and, after him, George Soros emphasized. Along with uncertainty, the economy exhibits pervasive reflexivity: expectations about the economic future tend to actually shift that future. This will be true whether those expectations are those of speculators, regulators, or even garden-variety consumers and producers. Reflexiveness is everywhere in the economy, though it is only easily detectable when it goes to extremes, as in bubbles and busts, or regulatory capture …

When combined, uncertainty and reflexivity greatly limit the power of maximizing and equilibrium to do useful economics … Between them, they make the economy a moving target for the economist. Models get into people’s heads and change their behavior, usually in ways that undermine the model’s usefulness to predict.

Which models do this and how they work is not a matter of quantifiable risk, but radical uncertainty …

Between them reflexivity and uncertainty make economics into a retrospective, historical science, one whose models—simple or complex—are continually made obsolete by events, and so cannot be improved in the direction of greater predictive power, even by more complication. The way expectations reflexively drive future economic events, and are driven by past ones, is constantly being changed by the intervention of unexpected, uncertain, exogenous ones.

Alex Rosenberg

[h/t Jan Milch]
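A toy feedback loop gives the flavour of the reflexivity point (the dynamics and all numbers are invented here, not Rosenberg’s or Soros’s formalism): once a model’s published forecast feeds back into agents’ behaviour, the realized path is pushed away from the forecast, so the very act of using the model degrades its predictive power.

```python
# A toy feedback loop (all dynamics invented): agents react to the model's own
# published forecast, which moves the realized outcome away from that forecast.
import numpy as np

def simulate(feedback, steps=20):
    """Price dynamics where agents react to the model's published forecast."""
    price, errors = 100.0, []
    for _ in range(steps):
        forecast = price * 1.02                   # model: prices grow 2% per period
        # With feedback > 0, agents acting on the forecast shift the realized
        # price away from the path the model assumed.
        price = price * 1.02 - feedback * (forecast - price)
        errors.append(abs(forecast - price))
    return np.mean(errors)

print("mean forecast error, no reflexivity  :", round(simulate(feedback=0.0), 3))
print("mean forecast error, with reflexivity:", round(simulate(feedback=0.8), 3))
```

With the feedback switched off the forecast tracks the outcome exactly; with it switched on the model’s error never closes, however long the simulation runs.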

Rethinking methodology and theory in economics

16 September, 2014 at 07:59 | Posted in Theory of Science & Methodology | Leave a comment

