To love somebody

8 December, 2016 at 23:03 | Posted in Varia | Leave a comment

 

RCTs in the Garden of Eden

8 December, 2016 at 15:29 | Posted in Statistics & Econometrics | 1 Comment

Suppose researchers come to a town and do an RCT on the town population to check whether the injection of a green chemical improves memory and has adverse side effects. Suppose it is found that it has no side effects and improves memory greatly in 95% of cases. If the study is properly done and the random draw is truly random, it is likely to be treated as an important finding and will, in all likelihood, be published in a major scientific journal.

Now consider a particular woman called Eve who lives in this town and is keen to enhance her memory. Can she, on the basis of this scientific study, deduce that there is a probability of 0.95 that her memory will improve greatly if she takes this injection? The answer is no, because she is not a random draw of an individual from this town. All we do know from the law of large numbers is that for every randomly drawn person from this population the probability that the injection will enhance memory is 0.95. But this would not be true for a specially chosen person in the same way that this would not be true of someone chosen from another town or another time.

To see this more clearly, permit me to alter the scenario in a statistically neutral way. Suppose that what I called the town in the above example is actually the Garden of Eden, which is inhabited by snakes and other similar creatures, and Eve and Adam are the only human beings in this place. Suppose now the same experiment was carried out in the Garden of Eden. That is, randomisers came, drew a large random sample of creatures, and administered the green injection and got the same result as described above. It works in 95% of cases. Clearly, Eve will have little confidence, on the basis of this, to expect that this treatment will work on her. I am assuming that the random draw of creatures on which the injection was tested did not include Eve and Adam. Eve will in all likelihood flee from anyone trying to administer this injection to her because she would have plainly seen that what the RCT demonstrates is that it works in the case of snakes and other such creatures, and the fact that she is part of the population from which the random sample was drawn is in no way pertinent.

Indeed, and the importance of this will become evident later, suppose that in a neighbouring garden, where all living creatures happen to be humans, there was a biased-sample (meaning non-random) trial of this injection, and it was found that the injection does not enhance memory and, in fact, gives a throbbing headache in a large proportion of cases. It is likely that Eve would be tempted to go along with this biased-sample study done on another population, rather than the RCT conducted on her own population, in drawing conclusions about what the injection might do to her. There is as little hard reason for Eve to reach this conclusion as there would be for her to conclude that the RCT result in her own Garden of Eden would work on her. I am merely pointing to a propensity of the human mind whereby certain biased trials may appear more relevant to us than certain perfectly controlled ones.

Kaushik Basu

Basu’s reasoning confirms what yours truly has repeatedly argued on this blog and in On the use and misuse of theories and models in mainstream economics — RCTs usually do not provide evidence that their results are exportable to other target systems. The almost religious belief with which their propagators portray them cannot hide the fact that RCTs cannot be taken for granted to give generalizable results. That something works somewhere is no warranty that it will work for us, or even that it works generally.

On a winter’s day (personal)

8 December, 2016 at 08:59 | Posted in Varia | Leave a comment

 

Yes, indeed, looking out my library windows, the sky is grey, rain keeps falling, and spring seems to be far, far away. Although it’s more than thirty years now since I was a student at the University of California, on a day like this I sure wish I was there again …

Tony Lawson — a presentation

8 December, 2016 at 08:46 | Posted in Economics | Leave a comment

One of the leading methodologists in economics today is Tony Lawson. In several books, this English economist has examined and criticized in depth the philosophical and methodological foundations of modern economics.

Economists often show a lukewarm interest not only in the history of ideas of their own discipline, but also in philosophy-of-science reflections on the preconditions for, and assumptions behind, their own knowledge production. They usually content themselves with stating something like “economics is what economists do” and “economists are those who do economics.” Any deeper philosophizing is not considered necessary. This is an unenlightening attitude that is not tenable. Philosophical and methodological analyses of economic science are both important and necessary. Methodological knowledge functions as a ‘road map.’ When we discover that we are off the road, we must from time to time glance at the map, or model, of scientific progress that we all carry with us, consciously or unconsciously.

Regardless of one’s interest in methodology, economic theories always rest — consciously or unconsciously — on methodological theories and positions. The question is therefore not whether economists should engage in methodology or not, but rather how it is best done. Methodological analysis is both desirable and unavoidable in economic science. As Lawson has often pointed out, it can, not least, serve a critical function by making the economist aware that fundamental deficiencies of economic theory may stem from the concepts, theories and models used being incompatible with the very object of inquiry. The tools borrowed above all from physics and mathematics were constructed with entirely different tasks and problems in mind, and may have contributed to a non-correspondence between the structure of economic science and the structure of economic reality. This, in turn, may have led economists to dubious simplifications and generalizations.

For more on Lawson — see my article in Fronesis 54-55.

Machine learning and causal inference

7 December, 2016 at 21:49 | Posted in Statistics & Econometrics | 1 Comment

 

Taking uncertainty seriously

6 December, 2016 at 18:46 | Posted in Economics | Leave a comment

Conventional thinking about financial markets begins with the idea that security prices always accurately reflect all available information; it ends with the belief that price changes come about only when there is new information. Markets are supposed to reflect new information quickly and efficiently, albeit with a few anomalies.

In 2007, I interviewed over 50 investment managers mainly in New York, Boston, London, and Edinburgh. Talking to them I came to the conclusion that conventional theories of finance miss the essence of market dynamics. Information is usually ambiguous and its value uncertain. When information is ambiguous and outcomes are fundamentally uncertain, decisions are not clear cut. They necessarily rely on human imagination and judgment, not simply calculation. Human imagination and judgment are impossible without human emotion. Conventional theories of finance, which ignore emotion, are therefore a very poor basis for understanding and policy.

Uncertainty and ambiguity are what make financial markets interesting and possible. They generate feelings that manifest in exciting stories, problematic mental states and strange group processes. As long as we neglect emotion’s role in financial markets, and fail to understand and adapt to dimensions of human social and mental life that influence judgement, financial markets will be inherently unstable. They will also be likely to create poor outcomes for ordinary savers and significant distortions in capital allocation – exactly what we have been witnessing in the market today.

The uncertainty to which I refer can be termed radical, fundamental, Knightian or Keynesian uncertainty. I use these descriptions to stress the fact that, although we can imagine the future, we cannot know it in advance. My interviewees collectively risked over $500 billion every day. Every one of the positions they took depended on interpreting ambiguous information and each would be vulnerable to unforeseen events. Consider the present possibilities that the Euro crisis will lead to a return to national currencies, and that disputes in the US congress will lead to problems meeting US debt obligations. What the existence of such possibilities will do to the prices of commodities, currencies and securities over the next thirty-six months, and what different financial decision-makers will think about it, is not knowable – and there will be many more unexpected developments with significant ramifications.

Decisions made in a radically uncertain context are totally different in their implications from decisions made in conditions of risk modelled as a Gaussian probability distribution. A Gaussian model constrains future probabilities, thereby creating known unknowns. The outcome of decisions becomes predictable and what is rational becomes clear. Under radical uncertainty this is not the case. What will happen tomorrow involves far more complexity and interaction than can be captured by analogies to games of chance. Taking radical uncertainty seriously, therefore, changes everything.

David Tuckett

 

Aleppo

6 December, 2016 at 15:53 | Posted in Varia | Leave a comment


This one is for you — brothers and sisters, struggling to survive and risking your lives fighting for liberation. May God be with you.

Econometric causality and Simpson’s paradox

5 December, 2016 at 18:20 | Posted in Statistics & Econometrics | Leave a comment

Which causal relationships we see depend on which model we use and its conceptual/causal articulation; which model is best depends on our purposes and pragmatic interests.

Take the case of Simpson’s paradox, which can be described as the situation in which conditional probabilities (often related to causal relations) are opposite for subpopulations than for the whole population. Let academic salaries be higher for economists than for sociologists, and let salaries within each group be higher for women than for men. But let there be twice as many men as women in economics and twice as many women as men in sociology. By construction, the average salary of women is higher than that of men in each group; yet, for the right values of the different salaries, women are paid less on average, taking both groups together. [Example: Economics — 2 men earn $100 each, 1 woman earns $101; Sociology — 1 man earns $90, 2 women earn $91 each. Average female earnings: (101 + 2×91)/3 ≈ 94.3; average male earnings: (2×100 + 90)/3 ≈ 96.7 — LPS]

An aggregate model leads to the conclusion that being female causes a lower salary. We might feel an uneasiness with such a model, since I have already filled in the details that show more precisely why the result comes about. The temptation is to say that the aggregate model shows that being female apparently causes lower salaries, but the more refined description of a disaggregated model shows that really being female causes higher salaries. A true paradox, however, is not a contradiction, but a seeming contradiction. Another way to look at it is to say that the aggregate model is really true at that level of aggregation and is useful for policy, and that the equally true, more disaggregated model gives an explanation of the mechanism behind the true aggregate model.

It is not wrong to take an aggregate perspective and to say that being female causes a lower salary. We may not have access to the refined description. Even if we do, we may as a matter of policy think (a) that the choice of field is not susceptible to useful policy intervention, and (b) that our goal is to equalize income by sex and not to enforce equality of rates of pay. That we may not believe the factual claim of (a) nor subscribe to the normative end of (b) is immaterial. The point is that they mark out a perspective in which the aggregate model suits both our purposes and the facts: it tells the truth as seen from a particular perspective.

Kevin Hoover
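A minimal Python sketch, using only the figures from the bracketed salary example above, makes the reversal explicit (the field names and numbers are just the illustrative ones given there):

```python
# Simpson's paradox with the salary numbers from the bracketed example above:
# within each field women earn more, yet overall men earn more on average.

salaries = {
    "economics": {"men": [100, 100], "women": [101]},
    "sociology": {"men": [90], "women": [91, 91]},
}

def mean(xs):
    return sum(xs) / len(xs)

for field, groups in salaries.items():
    print(f"{field}: women {mean(groups['women']):.2f} vs men {mean(groups['men']):.2f}")

all_women = [s for g in salaries.values() for s in g["women"]]
all_men = [s for g in salaries.values() for s in g["men"]]
print(f"overall: women {mean(all_women):.2f} vs men {mean(all_men):.2f}")
# economics: women 101.00 vs men 100.00
# sociology: women  91.00 vs men  90.00
# overall:   women  94.33 vs men  96.67
```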

Simpson’s paradox is an interesting paradox in itself. But it can also highlight a deficiency in the traditional econometric approach towards causality. Say you have 1000 observations on men and an equal number of observations on women applying for admission to university studies, and that 70% of the men are admitted, but only 30% of the women. Running a logistic regression to find out the odds ratios (and probabilities) for men and women on admission, females seem to be in a less favourable position (‘discriminated’ against) compared to males (male odds are 2.33, female odds are 0.43, giving an odds ratio of 5.44). But once we find out that males and females apply to different departments, we may well get a Simpson’s paradox result where males turn out to be ‘discriminated’ against (say 800 males apply for economics studies (680 admitted) and 200 for physics studies (20 admitted), and 100 females apply for economics studies (90 admitted) and 900 for physics studies (210 admitted) — giving male/female odds ratios of 0.63 and 0.37).
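The same arithmetic can be checked directly for this admission example. A short Python sketch that recomputes the odds and odds ratios, first on the aggregate data and then department by department (the counts are exactly those given above):

```python
# Admission example: the aggregate odds ratio suggests women are disadvantaged,
# while the department-level odds ratios point the other way (Simpson's paradox).

def odds(admitted, applied):
    return admitted / (applied - admitted)

# (admitted, applied) by department and sex
data = {
    "economics": {"men": (680, 800), "women": (90, 100)},
    "physics":   {"men": (20, 200),  "women": (210, 900)},
}

men_adm = sum(d["men"][0] for d in data.values())     # 700
men_app = sum(d["men"][1] for d in data.values())     # 1000
wom_adm = sum(d["women"][0] for d in data.values())   # 300
wom_app = sum(d["women"][1] for d in data.values())   # 1000

male_odds = odds(men_adm, men_app)      # 2.33
female_odds = odds(wom_adm, wom_app)    # 0.43
print(f"aggregate male/female odds ratio: {male_odds / female_odds:.2f}")   # 5.44

for dept, d in data.items():
    print(f"{dept}: male/female odds ratio {odds(*d['men']) / odds(*d['women']):.2f}")
# economics: 0.63, physics: 0.37 -- males 'discriminated' against in each department
```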

Econometric patterns should never be seen as anything other than possible clues to follow. From a critical realist perspective it is obvious that behind observable data there are real structures and mechanisms operating — things that are, if we really want to understand, explain and (possibly) predict things in the real world, more important to get hold of than simply correlating and regressing observable variables.

Math cannot establish the truth value of a fact. Never has. Never will.

Paul Romer

Mainstream macro modeling — nothing but smoke and mirrors

5 December, 2016 at 14:42 | Posted in Economics | Leave a comment

Those of us in the economics community who are impolite enough to dare question the preferred methods and models applied in mainstream macroeconomics are as a rule met with disapproval. But although people seem to get very agitated and upset by the critique, defenders of ‘received theory’ always say that the critique is “nothing new”, that they have always been “well aware” of the problems, that “the models, whether false or not, help us understand what’s going on,” that the theory “helps us organise our thoughts,” and so on, and so on.

So, for the benefit of all you mindless practitioners of mainstream macroeconomic modeling who defend mainstream macroeconomics with arguments like “the speed with which macro has put finance at the center of its theories of the business cycle has been nothing less than stunning,” who, regarding the patently ridiculous representative-agent modeling, maintain that there “have been efforts to put heterogeneity into big DSGE-type models” but that these models “didn’t get quite as far, because this kind of thing is very technically difficult to model,” and who, as for rational expectations, admit that “so far, macroeconomists are still very timid about abandoning this pillar of the Lucas/Prescott Revolution,” but that “there’s no clear alternative” — and who don’t want to be disturbed in your doings — here’s David Freedman’s very practical list of vacuous responses to criticism that can be freely used to save your peace of mind:

We know all that. Nothing is perfect … The assumptions are reasonable. The assumptions don’t matter. The assumptions are conservative. You can’t prove the assumptions are wrong. The biases will cancel. We can model the biases. We’re only doing what everybody else does. Now we use more sophisticated techniques. If we don’t do it, someone else will. What would you do? The decision-maker has to be better off with us than without us … The models aren’t totally useless. You have to do the best you can with the data. You have to make assumptions in order to make progress. You have to give the models the benefit of the doubt. Where’s the harm?

Wage discrimination

5 December, 2016 at 09:25 | Posted in Economics | 1 Comment

So let’s say a woman faces discrimination by this definition – she loses out to a man with weaker credentials. “Loses out” itself is pretty vague and could reasonably be consistent with several different observed labor market outcomes, two of which are:

Outcome A: She gets hired to the same job as the man but at lower pay, and
Outcome B: She doesn’t get the job and instead takes her next best offer in a different occupation at lower pay. Let’s further say that she is paid her real productivity in this job.

Let’s say the woman’s wage in Outcome A and her wage in Outcome B are exactly the same.

Under Outcome A, a wage regression with occupational dummies and a gender dummy is going to reliably report the magnitude of the discrimination in the gender dummy. Under Outcome B, a wage regression with occupational dummies and a gender dummy is going to report all of the discrimination under the occupational dummies. If you interpret the results thinking that “discrimination” as Scott D defines it is only in the gender coefficient, you would say there is discrimination in the case of Outcome A, but that there’s no discrimination in the case of Outcome B.

It would be one thing if these were very, very different sorts of discrimination but these are two reasonable outcomes from the exact same act of discrimination.

This is why people like Claudia Goldin see occupational dummies as describing the components of the wage gap and not as some way of eliminating part of the gap that isn’t really about gender.

“Equal pay for equal work” is a principle that I should hope everyone can agree on. It’s great stuff. And I for one think the courts might have some role to play in ensuring the principle is abided by in our society. But it’s a pretty vacuous phrase when it comes to economic science. It’s not entirely clear what it means or how it can be operationalized. Outcome A is clearly not equal pay for equal work, but what about Outcome B? After all, the woman is being paid “fairly” for the work she ended up doing. Is that equal pay for equal work? You could make the argument, but it doesn’t feel right and in any case it’s clearly incommensurate with the data analysis we’re doing. When two things are incommensurate it’s typically a good idea to keep them separate. Let “equal pay for equal work” ring out as a rallying call for a basic point of fairness, and don’t act like you can either affirm it or refute it with economic science. As far as I can tell you can’t.

Daniel Kuehn
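To see how Kuehn’s point plays out in a regression, here is a minimal simulation sketch of the two outcomes (the wage levels, occupation labels and group sizes are invented purely for illustration): the same woman loses the same amount in both outcomes, but OLS attributes the loss to the gender dummy in Outcome A and to the occupational dummy in Outcome B.

```python
# Stylized version of Outcome A vs Outcome B above (all numbers are made up):
# men earn 100 in occupation X and 90 in occupation Y; the woman who is
# discriminated against earns 90 in either outcome.
import numpy as np

def wage_regression(rows):
    """OLS of wage on an intercept, a female dummy and an occupation-Y dummy."""
    X = np.array([[1, female, occ_y] for female, occ_y, _ in rows], dtype=float)
    y = np.array([wage for _, _, wage in rows], dtype=float)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return dict(zip(["intercept", "female", "occupation_Y"], beta.round(2)))

# Each row is (female, occupation_Y, wage); ten observations per cell.
outcome_a = [(0, 0, 100)] * 10 + [(0, 1, 90)] * 10 + [(1, 0, 90)] * 10  # hired into X at lower pay
outcome_b = [(0, 0, 100)] * 10 + [(0, 1, 90)] * 10 + [(1, 1, 90)] * 10  # pushed into Y, paid 'fairly' there

print("Outcome A:", wage_regression(outcome_a))  # female coef ~ -10: gap sits in the gender dummy
print("Outcome B:", wage_regression(outcome_b))  # female coef ~ 0: gap hides in the occupational dummy
```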
