Confusing statistics and research
29 Nov, 2015 at 15:08 | Posted in Statistics & Econometrics | 1 Comment
Coupled with downright incompetence in statistics, we often find the syndrome that I have come to call statisticism: the notion that computing is synonymous with doing research, the naïve faith that statistics is a complete or sufficient basis for scientific methodology, the superstition that statistical formulas exist for evaluating such things as the relative merits of different substantive theories or the “importance” of the causes of a “dependent variable”; and the delusion that decomposing the covariations of some arbitrary and haphazardly assembled collection of variables can somehow justify not only a “causal model” but also, praise a mark, a “measurement model.” There would be no point in deploring such caricatures of the scientific enterprise if there were a clearly identifiable sector of social science research wherein such fallacies were clearly recognized and emphatically out of bounds.
Economists — can-opener-assuming flimflammers
28 Nov, 2015 at 12:15 | Posted in Economics | Comments Off on Economists — can-opener-assuming flimflammers
Kids, somehow, seem to be more in touch with real science than can-opener-assuming economists …
A physicist, a chemist, and an economist are stranded on a desert island. One can only imagine what sort of play date went awry to land them there. Anyway, they’re hungry. Like, desert island hungry. And then a can of soup washes ashore. Progresso Reduced Sodium Chicken Noodle, let’s say. Which is perfect, because the physicist can’t have much salt, and the chemist doesn’t eat red meat.
But, famished as they are, our three professionals have no way to open the can. So they put their brains to the problem. The physicist says “We could drop it from the top of that tree over there until it breaks open.” And the chemist says “We could build a fire and sit the can in the flames until it bursts open.”
Those two squabble a bit, until the economist says “No, no, no. Come on, guys, you’d lose most of the soup. Let’s just assume a can opener.”
Neoliberalism
28 Nov, 2015 at 12:05 | Posted in Politics & Society | Comments Off on Neoliberalism
Neoliberal privatization — what a great idea …
Human capital and ‘bad taste in mouth’ models
26 Nov, 2015 at 18:02 | Posted in Economics | 1 Comment
The ever-growing literature on human capital has long recognized that the scope of the theory extends well beyond the traditional analysis of schooling and on-the-job training … Yet economists have ignored the analysis of an important class of activities which can and should be brought within the purview of the theory. A prime example of this class is brushing teeth.
The conventional analysis of toothbrushing has centered around two basic models. The “bad taste in mouth” model is based on the notion that each person has a “taste for brushing,” and the fact that brushing frequencies differ is “explained” by differences in tastes. Since any pattern of human behavior can be rationalized by such implicit theorizing, this model is devoid of empirically testable predictions, and hence uninteresting.
The “mother told me so” theory is based on differences in cultural upbringing. Here it is argued, for example, that thrice-a-day brushers brush three times daily because their mothers forced them to do so as children. Of course, this is hardly a complete explanation. Like most psychological theories, it leaves open the question of why mothers should want their children to brush after every meal …
In a survey of professors in a leading Eastern university it was found that assistant professors brushed 2.14 times daily on average, while associate professors brushed only 1.89 times and full professors only 1.47 times daily. The author, a sociologist, mistakenly attributed this finding to the fact that the higher-ranking professors were older and that hygiene standards in America had advanced steadily over time. To a human capital theorist, of course, this pattern is exactly what would be expected from the higher wages received in the higher professorial ranks, and from the fact that younger professors, looking for promotions, cannot afford to have bad breath.
Economic growth
25 Nov, 2015 at 17:55 | Posted in Economics | 10 Comments
I came to think about this dictum when reading Thomas Piketty’s Capital in the Twenty-First Century.
Piketty refuses to use the term ‘human capital’ in his inequality analysis.
I think there are many good reasons not to include ‘human capital’ in economic analyses. Let me just give one reason — perhaps analytically the most important one — and elaborate a little on it.
In modern endogenous growth theory knowledge (ideas) is presented as the locomotive of growth. But as Allyn Young, Piero Sraffa and others had already shown in the 1920s, knowledge is also something that has to do with increasing returns to scale, and is therefore not really compatible with neoclassical economics and its emphasis on constant returns to scale.
Increasing returns generated by the non-rivalry of ideas are simply not compatible with pure competition and the simplistic invisible-hand dogma. That is probably also the reason why neoclassical economists have been so reluctant to embrace the theory wholeheartedly.
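One standard way to see the point, sketched here in broad strokes rather than taken from any particular text, is the replication argument. Suppose output Y is produced from ideas A and rival inputs X (labour, capital, land):
Y = F(A, X).
Replicating all the rival inputs alone is enough to replicate production, so F exhibits constant returns in X: F(A, λX) = λF(A, X). But as long as ideas raise output at all, scaling ideas and rival inputs together then gives F(λA, λX) > λF(A, X), that is, increasing returns in all inputs taken together, which cannot be squared with paying every input its marginal product under perfect competition.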
Neoclassical economics has tried to save itself by blurring the distinction between ‘human capital’ and knowledge/ideas. But knowledge or ideas should not be confused with ‘human capital.’ Chad Jones & Paul Romer give a succinct and accessible account of the difference:
Of the three state variables that we endogenize, ideas have been the hardest to bring into the applied general equilibrium structure. The difficulty arises because of the defining characteristic of an idea, that it is a pure nonrival good. A given idea is not scarce in the same way that land or capital or other objects are scarce; instead, an idea can be used by any number of people simultaneously without congestion or depletion.
Because they are nonrival goods, ideas force two distinct changes in our thinking about growth, changes that are sometimes conflated but are logically distinct. Ideas introduce scale effects. They also change the feasible and optimal economic institutions. The institutional implications have attracted more attention but the scale effects are more important for understanding the big sweep of human history.
The distinction between rival and nonrival goods is easy to blur at the aggregate level but inescapable in any microeconomic setting. Picture, for example, a house that is under construction. The land on which it sits, capital in the form of a measuring tape, and the human capital of the carpenter are all rival goods. They can be used to build this house but not simultaneously any other. Contrast this with the Pythagorean Theorem, which the carpenter uses implicitly by constructing a triangle with sides in the proportions of 3, 4 and 5. This idea is nonrival. Every carpenter in the world can use it at the same time to create a right angle …
Ideas and human capital are fundamentally distinct. At the micro level, human capital in our triangle example literally consists of new connections between neurons in a carpenter’s head, a rival good. The 3-4-5 triangle is the nonrival idea. At the macro level, one cannot state the assertion that skill-biased technical change is increasing the demand for education without distinguishing between ideas and human capital.
In one way one might say that increasing returns is the darkness at the heart of mainstream economics. And this is something most mainstream economists don’t really want to talk about. They prefer to look the other way and pretend that increasing returns can be seamlessly incorporated into the received paradigm — and talking about ‘human capital’ rather than knowledge/ideas makes this easier to digest.
Intolerance against intolerance
25 Nov, 2015 at 11:19 | Posted in Politics & Society | 3 Comments
We teachers do our best to be Socratic, to get our job of re-education, secularization, and liberalization done by conversational exchange. That is true up to a point, but what about assigning books like Black Boy, The Diary of Anne Frank, and Becoming a Man? The racist or fundamentalist parents of our students say that in a truly democratic society the students should not be forced to read books by such people – black people, Jewish people, homosexual people. They will protest that these books are being jammed down their children’s throats. I cannot see how to reply to this charge without saying something like “There are credentials for admission to our democratic society, credentials which we liberals have been making more stringent by doing our best to excommunicate racists, male chauvinists, homophobes, and the like.
You have to be educated in order to be a citizen of our society, a participant in our conversation, someone with whom we can envisage merging our horizons. So we are going to go right on trying to discredit you in the eyes of your children, trying to strip your fundamentalist religious community of dignity, trying to make your views seem silly rather than discussable. We are not so inclusivist as to tolerate intolerance such as yours.”
I have no trouble offering this reply, since I do not claim to make the distinction between education and conversation on the basis of anything except my loyalty to a particular community, a community whose interests required re-educating the Hitler Youth in 1945 and required re-educating the bigoted students of Virginia in 1993. I don’t see anything herrschaftsfrei about my handling of my fundamentalist students. Rather, I think those students are lucky to find themselves under the benevolent Herrschaft of people like me, and to have escaped the grip of their frightening, vicious, dangerous parents.
Although Rorty’s view points in the right direction when it comes to handling intolerance, his epistemization of the concept of truth makes the persuasive force of the argument weaker than necessary. Jürgen Habermas explains why:
As soon as the concept of truth is eliminated in favor of a context-dependent epistemic validity-for-us, the normative reference point necessary to explain why a proponent should endeavor to seek agreement for ‘p’ beyond the boundaries of her own group is missing. The information that the agreement of an increasingly large audience gives us increasingly less reason to fear that we will be refuted presupposes the very interest that has to be explained: the desire for “as much intersubjective agreement as possible.” If something is ‘true’ if and only if it is recognized as justified “by us” because it is good “for us,” there is no rational motive for expanding the circle of members. No reason exists for the decentering expansion of the justification community especially since Rorty defines “my own ethnos” as the group in front of which I feel obliged to give an account of myself.
Offending offenders
25 Nov, 2015 at 09:49 | Posted in Varia | Comments Off on Offending offenders
Seems to be a universal trend. Even in Sweden we have lots of people who are offended by anything …
Econometrics and the ’empirical turn’ in economics
23 Nov, 2015 at 17:17 | Posted in Economics, Statistics & Econometrics | 7 Comments
The increasing use of natural and quasi-natural experiments in economics during the last couple of decades has led some economists to triumphantly declare it a major step on a recent path toward empirics, where instead of being a deductive philosophy, economics is now increasingly becoming an inductive science.
In their plaidoyer for this view, the work of Joshua Angrist and Jörn-Steffen Pischke is often invoked, so let’s start with one of their later books and see if there is any real reason to share the optimism about this ‘empirical turn’ in economics.
In their new book, Mastering ‘Metrics: The Path from Cause to Effect, Angrist and Pischke write:
Our first line of attack on the causality problem is a randomized experiment, often called a randomized trial. In a randomized trial, researchers change the causal variables of interest … for a group selected using something like a coin toss. By changing circumstances randomly, we make it highly likely that the variable of interest is unrelated to the many other factors determining the outcomes we want to study. Random assignment isn’t the same as holding everything else fixed, but it has the same effect. Random manipulation makes other things equal hold on average across the groups that did and did not experience manipulation. As we explain … ‘on average’ is usually good enough.
Angrist and Pischke may “dream of the trials we’d like to do” and consider “the notion of an ideal experiment” something that “disciplines our approach to econometric research,” but to maintain that ‘on average’ is “usually good enough” is a claim that, in my view, is rather unwarranted, and for many reasons.
First of all, it amounts to nothing but hand-waving to assume simpliciter, without argument, that it is tenable to treat social agents and relations as homogeneous and interchangeable entities.
Randomization is used basically to allow the econometrician to treat the population as consisting of interchangeable and homogeneous groups (‘treatment’ and ‘control’). The regression models one arrives at by using randomized trials tell us the average effect that variations in variable X have on the outcome variable Y, without having to explicitly control for the effects of other explanatory variables R, S, T, etc. Everything is assumed to be essentially equal except the values taken by variable X.
In a usual regression context one would apply an ordinary least squares (OLS) estimator in trying to get an unbiased and consistent estimate:
Y = α + βX + ε,
where α is a constant intercept, β a constant “structural” causal effect and ε an error term.
The problem here is that although we may get an estimate of the “true” average causal effect, this may “mask” important heterogeneous effects of a causal nature. Although we get the right answer of an average causal effect of 0, those who are “treated” (X = 1) may have causal effects equal to −100 and those “not treated” (X = 0) may have causal effects equal to 100. Contemplating being treated or not, most people would probably be interested in knowing about this underlying heterogeneity and would not consider the OLS average effect particularly enlightening.
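To make this concrete, here is a minimal Python simulation sketch; all numbers and variable names are illustrative and not taken from any actual study. With a binary X, the OLS slope equals the difference in group means, and it comes out close to zero even though every individual causal effect is either −100 or +100.

import numpy as np

# Sketch: half the population has an individual causal effect of -100, the other
# half +100, so the true average treatment effect is 0.
rng = np.random.default_rng(42)
n = 100_000
tau = np.where(rng.random(n) < 0.5, -100.0, 100.0)  # heterogeneous individual effects
x = rng.integers(0, 2, size=n)                      # random assignment ("coin toss")
y = 50.0 + tau * x + rng.normal(0.0, 1.0, size=n)   # outcome under the assigned treatment

# With a binary regressor, the OLS slope equals the difference in group means.
beta_hat = y[x == 1].mean() - y[x == 0].mean()
print(f"estimated average causal effect: {beta_hat:.2f}")                  # close to 0
print(f"individual causal effects: {tau.min():.0f} and {tau.max():.0f}")   # -100 and 100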
Limiting model assumptions in economic science always have to be closely examined. If we are to show that the mechanisms or causes we isolate and handle in our models are stable, in the sense that they do not change when we “export” them to our “target systems”, we have to show that they hold under more than ceteris paribus conditions; if they only hold ceteris paribus, they are a fortiori of limited value for our understanding, explanations or predictions of real economic systems.
Real-world social systems are not governed by stable causal mechanisms or capacities. The kinds of “laws” and relations that econometrics has established are laws and relations about entities in models that presuppose causal mechanisms to be atomistic and additive. When causal mechanisms operate in real-world social target systems, they do so only in ever-changing and unstable combinations where the whole is more than a mechanical sum of parts. If economic regularities obtain, they do so (as a rule) only because we engineered them for that purpose. Outside man-made “nomological machines” they are rare, or even non-existent. Unfortunately, that also makes most of the achievements of econometrics – like most contemporary endeavours in mainstream economic theoretical modelling – rather useless.
Remember that a model is not the truth. It is a lie to help you get your point across. And in the case of modeling economic risk, your model is a lie about others, who are probably lying themselves. And what’s worse than a simple lie? A complicated lie.
Sam L. Savage The Flaw of Averages
When Joshua Angrist and Jörn-Steffen Pischke in an earlier article of theirs [“The Credibility Revolution in Empirical Economics: How Better Research Design Is Taking the Con out of Econometrics,” Journal of Economic Perspectives, 2010] say that “anyone who makes a living out of data analysis probably believes that heterogeneity is limited enough that the well-understood past can be informative about the future,” I really think they underestimate the heterogeneity problem. It does not just turn up as an external validity problem when trying to “export” regression results to different times or different target populations. It is also often an internal problem to the millions of regression estimates that economists produce every year.
But when the randomization is purposeful, a whole new set of issues arises — experimental contamination — which is much more serious with human subjects in a social system than with chemicals mixed in beakers … Anyone who designs an experiment in economics would do well to anticipate the inevitable barrage of questions regarding the valid transference of things learned in the lab (one value of z) into the real world (a different value of z) …
Absent observation of the interactive compounding effects z, what is estimated is some kind of average treatment effect which is called by Imbens and Angrist (1994) a “Local Average Treatment Effect,” which is a little like the lawyer who explained that when he was a young man he lost many cases he should have won but as he grew older he won many that he should have lost, so that on the average justice was done. In other words, if you act as if the treatment effect is a random variable by substituting β_t for β_0 + β′z_t, the notation inappropriately relieves you of the heavy burden of considering what are the interactive confounders and finding some way to measure them …
If little thought has gone into identifying these possible confounders, it seems probable that little thought will be given to the limited applicability of the results in other settings.
Evidence-based theories and policies are highly valued nowadays. Randomization is supposed to control for bias from unknown confounders. The received opinion is that evidence based on randomized experiments is therefore the best.
More and more economists have also lately come to advocate randomization as the principal method for ensuring valid causal inferences.
I would, however, rather argue that randomization, just like econometrics, promises more than it can deliver, basically because it requires assumptions that in practice are not possible to maintain.
Especially when it comes to questions of causality, randomization is nowadays considered some kind of “gold standard”. Everything has to be evidence-based, and the evidence has to come from randomized experiments.
But just like econometrics, randomization is basically a deductive method. Given the assumptions (such as manipulability, transitivity, separability, additivity, linearity, etc.), these methods deliver deductive inferences. The problem, of course, is that we will never completely know when the assumptions are right. And although randomization may contribute to controlling for confounding, it does not guarantee it, since genuine randomness presupposes infinite experimentation and we know all real experimentation is finite. And even if randomization may help to establish average causal effects, it says nothing of individual effects unless homogeneity is added to the list of assumptions. Real target systems are seldom epistemically isomorphic to our axiomatic-deductive models/systems, and even if they were, we would still have to argue for the external validity of the conclusions reached from within these epistemically convenient models/systems. Causal evidence generated by randomization procedures may be valid in “closed” models, but what we are usually interested in is causal evidence in the real target system we happen to live in.
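A small Monte Carlo sketch, again with made-up numbers (and assuming NumPy ≥ 1.20 for rng.permuted), illustrates the finite-experiment point: in any single small randomized trial, an unobserved confounder can easily end up noticeably imbalanced between the treatment and control groups; balance holds only on average over hypothetical repetitions.

import numpy as np

# Sketch: 20 subjects, half randomly assigned to treatment, repeated 10,000 times.
# In each single trial we check how far apart the two groups end up on an
# unobserved (standardized) confounder that randomization is supposed to balance.
rng = np.random.default_rng(1)
n, trials = 20, 10_000
u = rng.normal(size=(trials, n))           # unobserved confounder, one row per trial
assign = np.zeros((trials, n), dtype=bool)
assign[:, : n // 2] = True
treated = rng.permuted(assign, axis=1)     # exactly half treated, random assignment per trial
diff = (u * treated).sum(axis=1) / (n // 2) - (u * ~treated).sum(axis=1) / (n // 2)
print(f"average absolute imbalance: {np.abs(diff).mean():.2f} sd")                        # roughly 0.36
print(f"share of trials with imbalance above 0.5 sd: {(np.abs(diff) > 0.5).mean():.2f}")  # roughly 0.26

In any one of those trials the researcher only sees a single assignment, so the ‘on average’ guarantee does nothing to rule out that the particular groups compared happen to differ on exactly the factors that matter.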
When does a conclusion established in population X hold for target population Y? Only under very restrictive conditions!
Angrist and Pischke’s “ideally controlled experiments” tell us with certainty what causes what effects — but only given the right “closures”. Making appropriate extrapolations from (ideal, accidental, natural or quasi) experiments to different settings, populations or target systems is not easy. “It works there” is no evidence for “it will work here”. Causes deduced in an experimental setting still have to show that they come with an export-warrant to the target population/system. The causal background assumptions made have to be justified, and without licenses to export, the value of “rigorous” and “precise” methods — and ‘on-average-knowledge’ — is despairingly small.
Like us, you want evidence that a policy will work here, where you are. Randomized controlled trials (RCTs) do not tell you that.
They do not even tell you that a policy works. What they tell you is that a policy worked there, where the trial was carried out, in that population. Our argument is that the changes in tense – from “worked” to “work” – are not just a matter of grammatical detail. To move from one to the other requires hard intellectual and practical effort. The fact that it worked there is indeed fact. But for that fact to be evidence that it will work here, it needs to be relevant to that conclusion. To make RCTs relevant you need a lot more information and of a very different kind.
So, no, I find it hard to share the enthusiasm and optimism about the value of (quasi-)natural experiments and all the statistical-econometric machinery that comes with them. Guess I’m still waiting for the export-warrant …
Vienna (personal)
23 Nov, 2015 at 15:53 | Posted in Varia | Comments Off on Vienna (personal)
Back in the ’80s yours truly had the pleasure of studying at the University of Vienna. I wish I could visit it again. A wonderful town, full of history and Kaffeehäuser.
When not studying I used to listen to Leonard Cohen:
Science — a strange intoxication
23 Nov, 2015 at 12:58 | Posted in Varia | Comments Off on Science — a strange intoxication
Only by strict specialization can the scientific worker become fully conscious, for once and perhaps never again in his lifetime, that he has achieved something that will endure. A really definitive and good accomplishment is today always a specialized accomplishment. And whoever lacks the capacity to put on blinders, so to speak, and to come up to the idea that the fate of his soul depends upon whether or not he makes the correct conjecture at this passage of this manuscript may as well stay away from science. He will never have what one may call the ‘personal experience’ of science. Without this strange intoxication, ridiculed by every outsider; without this passion, this ‘thousands of years must pass before you enter into life and thousands more wait in silence’ — according to whether or not you succeed in making this conjecture; without this, you have no calling for science and you should do something else. For nothing is worthy of man as man unless he can pursue it with passionate devotion.
How to get published in ‘top’ economics journals
23 Nov, 2015 at 09:49 | Posted in Economics | Comments Off on How to get published in ‘top’ economics journals
By the early 1980s it was already common knowledge among people I hung out with that the only way to get non-crazy macroeconomics published was to wrap sensible assumptions about output and employment in something else, something that involved rational expectations and intertemporal stuff and made the paper respectable. And yes, that was conscious knowledge, which shaped the kinds of papers we wrote.
More or less says it all, doesn’t it?
And for those of you who do not want to play according to these sickening hypocritical rules — well, here’s one good alternative.
Rom i regnet (personal)
22 Nov, 2015 at 15:57 | Posted in Varia | Comments Off on Rom i regnet (personal)
To Anna — who helped me survive my high-school years at Linnéskolan.
Uptown funk
21 Nov, 2015 at 18:55 | Posted in Varia | Comments Off on Uptown funk
[h/t Jeanette Meyer]