English for beginners

9 December, 2016 at 19:06 | Posted in Varia | Leave a comment

 

Statistical significance is not real-world significance

9 December, 2016 at 18:32 | Posted in Statistics & Econometrics | Leave a comment

As shown over and over again when significance tests are applied, people have a tendency to read ‘not disconfirmed’ as ‘probably confirmed.’ Standard scientific methodology tells us that when there is only, say, a 10 % probability that pure sampling error could account for the observed difference between the data and the null hypothesis, it would be more ‘reasonable’ to conclude that we have a case of disconfirmation. Especially if we perform many independent tests of our hypothesis and they all give about the same 10 % result as our reported one, I guess most researchers would count the hypothesis as even more disconfirmed.

We should never forget that the underlying parameters we use when performing significance tests are model constructions. Our p-values mean next to nothing if the model is wrong. And most importantly — statistical significance tests DO NOT validate models!
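To make the point concrete, here is a minimal simulation sketch (my own, not part of the post; all numbers are illustrative). The null hypothesis is true by construction, but the assumed error model is wrong, and the nominal 5 % test rejects the true null far more often than 5 %: the p-value is only as good as the model it is computed within.

```python
# A minimal sketch (not from the post): the true slope is exactly zero, but the
# error variance grows with |x|, while the usual OLS t-test assumes constant
# variance. The wrong model makes the nominal 5 % test reject the true null
# well above 5 % of the time.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, n_sims, rejections = 50, 5000, 0

for _ in range(n_sims):
    x = rng.standard_t(df=3, size=n)        # occasionally large x values
    e = np.abs(x) * rng.normal(size=n)      # heteroskedastic errors
    y = 1.0 + 0.0 * x + e                   # the null (zero slope) is true
    X = np.column_stack([np.ones(n), x])
    beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (n - 2)
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    p_value = 2 * stats.t.sf(abs(beta[1] / se), df=n - 2)
    rejections += p_value < 0.05

print(f"Share of nominal-5% rejections of a true null: {rejections / n_sims:.1%}")
```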

In journal articles a typical regression equation will have an intercept and several explanatory variables. The regression output will usually include an F-test, with p – 1 degrees of freedom in the numerator and n – p in the denominator. The null hypothesis will not be stated. The missing null hypothesis is that all the coefficients vanish, except the intercept.

If F is significant, that is often thought to validate the model. Mistake. The F-test takes the model as given. Significance only means this: if the model is right and the coefficients are 0, it is very unlikely to get such a big F-statistic. Logically, there are three possibilities on the table:
i) An unlikely event occurred.
ii) Or the model is right and some of the coefficients differ from 0.
iii) Or the model is wrong.
So?
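A toy illustration of the three possibilities (my own sketch, not part of the quoted text; the data-generating process is invented). Below, y and the two regressors are independent random walks, so a regression model with well-behaved errors is simply wrong, yet the F-test with p − 1 numerator and n − p denominator degrees of freedom typically comes out highly ‘significant’:

```python
# A sketch, not a definitive demonstration: spurious 'significance' when the
# model is wrong. y, x1 and x2 are independent random walks; the i.i.d.-error
# regression model does not describe them, but F is usually large anyway.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, p, n_sims = 100, 3, 2000      # n observations, p = intercept + two slopes
significant = 0

for _ in range(n_sims):
    y  = np.cumsum(rng.normal(size=n))          # three independent random walks
    x1 = np.cumsum(rng.normal(size=n))
    x2 = np.cumsum(rng.normal(size=n))
    X  = np.column_stack([np.ones(n), x1, x2])
    beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    tss = np.sum((y - y.mean()) ** 2)
    F = ((tss - rss) / (p - 1)) / (rss / (n - p))   # the F-test described above
    significant += stats.f.sf(F, p - 1, n - p) < 0.05

print(f"'Significant' F-tests on unrelated series: {significant / n_sims:.0%}")
```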

Kitchen sink regression

9 December, 2016 at 16:48 | Posted in Statistics & Econometrics | Leave a comment

When I present this argument … one or more scholars say, “But shouldn’t I control for everything I can in my regressions? If not, aren’t my coefficients biased due to excluded variables?” This argument is not as persuasive as it may seem initially. First of all, if what you are doing is misspecified already, then adding or excluding other variables has no tendency to make things consistently better or worse … The excluded variable argument only works if you are sure your specification is precisely correct with all variables included. But no one can know that with more than a handful of explanatory variables.

Still more importantly, big, mushy linear regression and probit equations seem to need a great many control variables precisely because they are jamming together all sorts of observations that do not belong together. Countries, wars, racial categories, religious preferences, education levels, and other variables that change people’s coefficients are “controlled” with dummy variables that are completely inadequate to modeling their effects. The result is a long list of independent variables, a jumbled bag of nearly unrelated observations, and often a hopelessly bad specification with meaningless (but statistically significant with several asterisks!) results.

A preferable approach is to separate the observations into meaningful subsets—internally compatible statistical regimes … If this can’t be done, then statistical analysis can’t be done. A researcher claiming that nothing else but the big, messy regression is possible because, after all, some results have to be produced, is like a jury that says, “Well, the evidence was weak, but somebody had to be convicted.”

Christopher H. Achen
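Achen’s point can be made concrete with a small simulation (my own sketch, not his; the numbers and group labels are invented). Two internally compatible ‘regimes’ with opposite slopes are pooled and ‘controlled’ with an intercept-shifting dummy; the pooled slope earns its asterisks while describing neither regime, whereas splitting the sample recovers the truth.

```python
# A hedged illustration (mine, not Achen's): two statistical regimes are jammed
# together and "controlled for" with a dummy that only shifts the intercept.
# The pooled slope comes out highly significant and describes neither regime.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 500
x = rng.normal(size=2 * n)
group = np.repeat([0, 1], n)                    # two incompatible regimes
slope = np.where(group == 0, 2.0, -1.0)         # slope +2 in one, -1 in the other
y = slope * x + group * 3.0 + rng.normal(size=2 * n)

def ols(X, y):
    """Return OLS coefficients, standard errors and two-sided p-values."""
    beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (len(y) - X.shape[1])
    se = np.sqrt(s2 * np.diag(np.linalg.inv(X.T @ X)))
    pvals = 2 * stats.t.sf(np.abs(beta / se), df=len(y) - X.shape[1])
    return beta, se, pvals

# Kitchen-sink pooled model: y ~ x + group dummy
Xp = np.column_stack([np.ones(2 * n), x, group])
bp, _, pp = ols(Xp, y)
print(f"pooled slope on x: {bp[1]:.2f} (p = {pp[1]:.1e})  <- 'significant', meaningless")

# Separate regressions for each meaningful subset
for g in (0, 1):
    Xg = np.column_stack([np.ones(n), x[group == g]])
    bg, _, _ = ols(Xg, y[group == g])
    print(f"regime {g}: slope = {bg[1]:.2f}")
```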

To love somebody

8 December, 2016 at 23:03 | Posted in Varia | Leave a comment

 

RCTs in the Garden of Eden

8 December, 2016 at 15:29 | Posted in Statistics & Econometrics | 3 Comments

Suppose researchers come to a town and do an RCT on the town population to check whether the injection of a green chemical improves memory and has adverse side effects. Suppose it is found that it has no side effects and improves memory greatly in 95% of cases. If the study is properly done and the random draw is truly random, it is likely to be treated as an important finding and will, in all likelihood, be published in a major scientific journal.

Now consider a particular woman called Eve who lives in this town and is keen to enhance her memory. Can she, on the basis of this scientific study, deduce that there is a probability of 0.95 that her memory will improve greatly if she takes this injection? The answer is no, because she is not a random draw of an individual from this town. All we do know from the law of large numbers is that for every randomly drawn person from this population the probability that the injection will enhance memory is 0.95. But this would not be true for a specially chosen person in the same way that this would not be true of someone chosen from another town or another time.

To see this more clearly, permit me to alter the scenario in a statistically neutral way. Suppose that what I called the town in the above example is actually the Garden of Eden, which is inhabited by snakes and other similar creatures, and Eve and Adam are the only human beings in this place. Suppose now the same experiment was carried out in the Garden of Eden. That is, randomisers came, drew a large random sample of creatures, and administered the green injection and got the same result as described above. It works in 95% of cases. Clearly, Eve will have little confidence, on the basis of this, to expect that this treatment will work on her. I am assuming that the random draw of creatures on which the injection was tested did not include Eve and Adam. Eve will in all likelihood flee from anyone trying to administer this injection to her because she would have plainly seen that what the RCT demonstrates is that it works in the case of snakes and other such creatures, and the fact that she is part of the population from which the random sample was drawn is in no way pertinent.

Indeed, and the importance of this will become evident later, suppose in a neighbouring garden, where all living creatures happen to be humans, there was a biased-sample (meaning non-random) trial of this injection, and it was found that the injection does not enhance memory and, in fact, gives a throbbing headache in a large proportion of cases, it is likely that Eve would be tempted to go along with this biased-sample study done on another population rather than the RCT conducted on her own population in drawing conclusions about what the injection might do to her. There is as little hard reason for Eve to reach this conclusion as it would be for her to conclude that the RCT result in her own Garden of Eden would work on her. I am merely pointing to a propensity of the human mind whereby certain biased trials may appear more relevant to us than certain perfectly controlled ones.

Kaushik Basu

Basu’s reasoning confirms what yours truly has repeatedly argued on this blog and in On the use and misuse of theories and models in mainstream economics: RCTs usually do not provide evidence that their results are exportable to other target systems. The almost religious conviction with which their propagators promote them cannot hide the fact that RCTs cannot be taken for granted to yield generalizable results. That something works somewhere is no warranty that it will work for us, or that it works generally.
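For readers who like to see the arithmetic, here is a toy version of Basu’s parable (my own construction, not his; the population sizes and success rates are invented). The RCT’s headline number is a population average over a sample that is almost entirely snakes, and it says nothing about Eve:

```python
# A toy sketch of the parable: the treatment works in 95% of cases for snakes
# and never for humans. An RCT on a random sample of the garden's creatures
# reports a ~95% success rate, yet that average does not transfer to Eve.
import numpy as np

rng = np.random.default_rng(4)
population = np.array(["snake"] * 9500 + ["human"] * 2)   # Adam and Eve
works_on = {"snake": 0.95, "human": 0.0}                  # heterogeneous effect

sample = rng.choice(population, size=1000, replace=False) # the RCT's random draw
success = np.array([rng.random() < works_on[c] for c in sample])

print(f"RCT success rate (population average): {success.mean():.1%}")
print(f"Probability the injection works on Eve: {works_on['human']:.0%}")
```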

On a winter’s day (personal)

8 December, 2016 at 08:59 | Posted in Varia | Leave a comment

 

Yes, indeed, looking out my library windows, the sky is grey, rain keeps falling, and spring seems far, far away. Although it is more than thirty years now since I was a student at the University of California, on a day like this I sure wish I were there again …

Tony Lawson — a presentation

8 December, 2016 at 08:46 | Posted in Economics | Leave a comment

One of the leading methodologists in economics today is Tony Lawson. In a series of books, this English economist has examined in depth, and criticized, the philosophical and methodological foundations of modern economics.

Economists often show only lukewarm interest not just in the history of ideas of their own discipline, but also in reflection on the presuppositions and assumptions behind their own knowledge production. They usually settle for something like ‘economics is what economists do’ and ‘economists are those who do economics.’ Deeper philosophizing is not considered necessary. This is an unenlightening attitude that is not tenable. Philosophical and methodological analyses of economics are both important and necessary. Methodological knowledge works like a ‘road map.’ When we discover that we are off the road, we must now and then glance at the map, or model of scientific progress, that we all carry with us, consciously or unconsciously.

Whatever one’s interest in methodology, economic theories always rest, consciously or unconsciously, on methodological theories and commitments. The question is therefore not whether economists should engage with methodology, but how best to do so. Methodological analysis is both desirable and unavoidable in economics. As Lawson has often pointed out, it can serve a critical function by making economists aware that the basic shortcomings of economic theory may stem from using concepts, theories and models that are incompatible with the very object under study. The tools borrowed above all from physics and mathematics were constructed with quite different tasks and problems in mind, and may have contributed to a mismatch between the structure of economic science and the structure of economic reality. This, in turn, may have led economists to dubious simplifications and generalizations.

For more on Lawson, see my article in Fronesis 54-55.

Machine learning and causal inference

7 December, 2016 at 21:49 | Posted in Statistics & Econometrics | 1 Comment

 

Taking uncertainty seriously

6 December, 2016 at 18:46 | Posted in Economics | Leave a comment

Conventional thinking about financial markets begins with the idea that security prices always accurately reflect all available information; it ends with the belief that price changes come about only when there is new information. Markets are supposed to reflect new information quickly and efficiently, albeit with a few anomalies.

In 2007, I interviewed over 50 investment managers mainly in New York, Boston, London, and Edinburgh. Talking to them I came to the conclusion that conventional theories of finance miss the essence of market dynamics. Information is usually ambiguous and its value uncertain. When information is ambiguous and outcomes are fundamentally uncertain, decisions are not clear cut. They necessarily rely on human imagination and judgment, not simply calculation. Human imagination and judgment are impossible without human emotion. Conventional theories of finance, which ignore emotion, are therefore a very poor basis for understanding and policy.

Uncertainty and ambiguity are what make financial markets interesting and possible. They generate feelings that manifest in exciting stories, problematic mental states and strange group processes. As long as we neglect emotion’s role in financial markets, and fail to understand and adapt to dimensions of human social and mental life that influence judgement, financial markets will be inherently unstable. They will also be likely to create poor outcomes for ordinary savers and significant distortions in capital allocation – exactly what we have been witnessing in the market today.

The uncertainty to which I refer can be termed radical, fundamental, Knightian or Keynesian uncertainty. I use these descriptions to stress the fact that, although we can imagine the future, we cannot know it in advance. My interviewees collectively risked over $500 billion every day. Every one of the positions they took depended on interpreting ambiguous information and each would be vulnerable to unforeseen events. Consider the present possibilities that the Euro crisis will lead to a return to national currencies, and that disputes in the US congress will lead to problems meeting US debt obligations. What the existence of such possibilities will do to the prices of commodities, currencies and securities over the next thirty-six months, and what different financial decision-makers will think about it, is not knowable – and there will be many more unexpected developments with significant ramifications.

Decisions made in a radically uncertain context are totally different in their implications from decisions made in conditions of risk modelled as a Gaussian probability distribution. A Gaussian model constrains future probabilities, thereby creating known unknowns. The outcome of decisions becomes predictable and what is rational becomes clear. Under radical uncertainty this is not the case. What will happen tomorrow involves far more complexity and interaction than can be captured by analogies to games of chance. Taking radical uncertainty seriously, therefore, changes everything.

David Tuckett
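A small numerical aside (mine, not Tuckett’s) on how tightly a Gaussian model ‘constrains future probabilities’: the chance of a daily move beyond five standard deviations is vanishingly rare under a normal distribution, but orders of magnitude more frequent under a fat-tailed alternative with the same variance. And even the fat-tailed model is still a fully specified probability model, which is exactly what radical uncertainty denies us.

```python
# A sketch comparing tail probabilities under two stylized models,
# not a description of any real market.
import numpy as np
from scipy import stats

threshold = 5.0                                          # a 'five-sigma' daily move
p_gauss = 2 * stats.norm.sf(threshold)                   # standard normal, unit variance
p_fat = 2 * stats.t.sf(threshold * np.sqrt(3), df=3)     # t(3) rescaled to unit variance

trading_days = 252
print(f"Gaussian:     {p_gauss:.2e} "
      f"(about one such day every {1 / (p_gauss * trading_days):,.0f} years)")
print(f"Student-t(3): {p_fat:.2e} "
      f"(about one such day every {1 / (p_fat * trading_days):,.1f} years)")
```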

 

Aleppo

6 December, 2016 at 15:53 | Posted in Varia | Leave a comment


This one is for you — brothers and sisters, struggling to survive and risking your lives fighting for liberation. May God be with you.
