The connection between cause and probability

18 Oct, 2018 at 15:07 | Posted in Statistics & Econometrics | 2 Comments

Causes can increase the probability of their effects; but they need not. And the other way around: an increase in probability can be due to a causal connection; but lots of other things can be responsible as well …

The connection between causes and probabilities is like the connection between a disease and one of its symptoms: The disease can cause the symptom, but it need not; and the same symptom can result from a great many different diseases …

If you see a probabilistic dependence and are inclined to infer a causal connection from it, think hard about all the other possible reasons that that dependence might occur and eliminate them one by one. And when you are all done, remember — your conclusion is no more certain than your confidence that you really have eliminated all the possible alternatives.

Causality in social sciences — and economics — can never solely be a question of statistical inference. Causality entails more than predictability, and really explaining social phenomena in depth requires theory. Analysis of variation — the foundation of all econometrics — can never in itself reveal how these variations are brought about. Only when we are able to tie actions, processes or structures to the statistical relations detected can we say that we are getting at relevant causal explanations.

“Mediation analysis” is this thing where you have a treatment and an outcome and you’re trying to model how the treatment works: how much does it directly affect the outcome, and how much is the effect “mediated” through intermediate variables …

In the real world, it’s my impression that almost all the mediation analyses that people actually fit in the social and medical sciences are misguided: lots of examples where the assumptions aren’t clear and where, in any case, coefficient estimates are hopelessly noisy and where confused people will over-interpret statistical significance …

More and more I’ve been coming to the conclusion that the standard causal inference paradigm is broken … So how to do it? I don’t think traditional path analysis or other multivariate methods of the throw-all-the-data-in-the-blender-and-let-God-sort-em-out variety will do the job. Instead we need some structure and some prior information.

Andrew Gelman
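Gelman’s worry about “hopelessly noisy” coefficient estimates is easy to illustrate. The following is a minimal Python sketch, not anyone’s actual method: it simulates a made-up treatment, mediator and outcome model (all effect sizes, noise levels and sample sizes are invented) and estimates the mediated effect as the product of two regression coefficients, the way classical mediation analysis does. Even with everything correctly specified, the estimate scatters widely around the true value in modest samples.

```python
import numpy as np

rng = np.random.default_rng(42)

def indirect_effect_estimate(n):
    # Simulated mediation model: treatment t -> mediator m -> outcome y,
    # plus a direct path from t to y. All coefficients are invented.
    t = rng.integers(0, 2, n).astype(float)
    m = 0.3 * t + rng.normal(0, 1, n)                # a = 0.3
    y = 0.2 * t + 0.4 * m + rng.normal(0, 1, n)      # direct = 0.2, b = 0.4
    a_hat = np.polyfit(t, m, 1)[0]                   # slope of m on t
    X = np.column_stack([np.ones(n), t, m])
    b_hat = np.linalg.lstsq(X, y, rcond=None)[0][2]  # coefficient on m
    return a_hat * b_hat                             # true indirect effect: 0.12

estimates = [indirect_effect_estimate(100) for _ in range(1_000)]
print(round(np.mean(estimates), 3), round(np.std(estimates), 3))
# The standard deviation is of the same order as the effect itself.
```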

Most facts have many different, possible, alternative explanations, but we want to find the best of all contrastive explanations (since all real explanation takes place relative to a set of alternatives). So which is the best explanation? Many scientists, influenced by statistical reasoning, think that the likeliest explanation is the best explanation. But the likelihood of x is not in itself a strong argument for thinking it explains y. I would rather argue that what makes one explanation better than another are things like aiming for and finding powerful, deep, causal features and mechanisms that we have warranted and justified reasons to believe in. Statistical reasoning — especially the variety based on a Bayesian epistemology — generally has no room for these kinds of explanatory considerations. The only thing that matters is the probabilistic relation between evidence and hypothesis. That is also one of the main reasons I find abduction — inference to the best explanation — a better description and account of what constitutes actual scientific reasoning and inference.

In the social sciences … regression is used to discover relationships or to disentangle cause and effect. However, investigators have only vague ideas as to the relevant variables and their causal order; functional forms are chosen on the basis of convenience or familiarity; serious problems of measurement are often encountered.

Regression may offer useful ways of summarizing the data and making predictions. Investigators may be able to use summaries and predictions to draw substantive conclusions. However, I see no cases in which regression equations, let alone the more complex methods, have succeeded as engines for discovering causal relationships.

David Freedman

Some statisticians and data scientists think that algorithmic formalisms somehow give them access to causality. That is, however, simply not true. Assuming ‘convenient’ things like faithfulness or stability is not to give proofs. It’s to assume what has to be proven. Deductive-axiomatic methods used in statistics do not produce evidence for causal inferences. The real causality we are searching for is the one existing in the real world around us. If there is no warranted connection between axiomatically derived theorems and the real world, well, then we haven’t really obtained the causation we are looking for.

If contributions made by statisticians to the understanding of causation are to be taken over with advantage in any specific field of inquiry, then what is crucial is that the right relationship should exist between statistical and subject-matter concerns …
The idea of causation as consequential manipulation is apt to research that can be undertaken primarily through experimental methods and especially to ‘practical science’ where the central concern is indeed with ‘the consequences of performing particular acts’. The development of this idea in the context of medical and agricultural research is as understandable as the development of that of causation as robust dependence within applied econometrics. However, the extension of the manipulative approach into sociology would not appear promising, other than in rather special circumstances … The more fundamental difficulty is that under the — highly anthropocentric — principle of ‘no causation without manipulation’, the recognition that can be given to the action of individuals as having causal force is in fact peculiarly limited.

John H. Goldthorpe

The bigger the hole in the roof, the better the view of the stars …

18 Oct, 2018 at 11:22 | Posted in Politics & Society | Comments Off on The bigger the hole in the roof, the better the view of the stars …

 

When odds ratios mislead (wonkish)

17 Oct, 2018 at 19:10 | Posted in Statistics & Econometrics | Comments Off on When odds ratios mislead (wonkish)

A few years ago, some researchers from Georgetown University published in the New England Journal of Medicine a study that demonstrated systematic race and sex bias in the behavior of America’s doctors. Needless to say, this finding was widely reported in the media:

Washington Post: “Physicians said they would refer blacks and women to heart specialists for cardiac catheterization tests only 60 percent as often as they would prescribe the procedure for white male patients.”

N.Y. Times: “Doctors are only 60% as likely to order cardiac catheterization for women and blacks as for men and whites.”

Now let’s try a little test of reading comprehension. The study found that the referral rate for white men was 90.6%. What was the referral rate for blacks and women?

If you’re like most literate and numerate people, you’ll calculate 60% of 90.6%, and come up with .6*.906 = .5436. So, you’ll reason, the referral rate for blacks and women was about 54.4%.

But in fact, what the study found was a referral rate for blacks and women of 84.7%.

What’s going on?

It’s simple — the study reported an “odds ratio”. The journalists, being as ignorant as most people are about odds and odds ratios, reported these numbers as if they were ratios of rates rather than ratios of odds.

Let’s go through the numbers. If 90.6% of white males were referred, then 9.4% were not referred, and so a white male’s odds of being referred were 90.6/9.4, or about 9.6 to 1. Since 84.7% of blacks and women were referred, 15.3% were not referred, and so for these folks, the odds of referral were 84.7/15.3 ≅ 5.5 to 1. The ratio of odds was thus about 5.5/9.6, or about 0.6 to 1. Convert to a percentage, and you’ve got “60% as likely” or “60 per cent as often”.

The ratio of odds (rounded to the nearest tenth) was truly 0.6 to 1. But when you report this finding by saying that “doctors refer blacks and women to heart specialists 60% as often as they would white male patients”, normal readers will take “60% as often” to describe a ratio of rates — even though in this case the ratio of rates (the “relative risk”) was 84.7/90.6, or (in percentage terms) about 93.5%.

Mark Liberman
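To check the arithmetic yourself, here is a minimal Python sketch of the two calculations (the rates are the ones quoted above; the helper function `odds` is made up for illustration):

```python
# Odds ratio vs. relative risk for the referral rates quoted above.

def odds(p):
    """Convert a probability p into odds p/(1-p)."""
    return p / (1 - p)

p_white_male = 0.906  # referral rate for white men
p_other = 0.847       # referral rate for blacks and women

odds_ratio = odds(p_other) / odds(p_white_male)  # ~0.57, i.e. "about 0.6"
relative_risk = p_other / p_white_male           # ~0.935

print(f"odds ratio:    {odds_ratio:.2f}")
print(f"relative risk: {relative_risk:.2f}")
```

The two measures answer different questions, and at rates this close to 100% the odds ratio makes the gap look far larger than the ratio of rates does.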

Ricardian vice

17 Oct, 2018 at 18:50 | Posted in Economics | 1 Comment

Ricardo’s … interest was in the clear-cut result of direct, practical significance. In order to get this he cut that general system to pieces, bundled up as large parts of it as possible, and put them in cold storage — so that as many things as possible should be frozen and ‘given.’ He then piled one simplifying assumption upon another until, having really settled everything by these assumptions, he was left with only a few aggregative variables between which he set up simple one-way relations so that, in the end, the desired results emerged almost as tautologies … It is an excellent theory that can never be refuted and lacks nothing save sense. The habit of applying results of this character to the solution of practical problems we shall call the Ricardian Vice.

Sounds familiar, doesn’t it?

The only difference is that today it is seen as a virtue rather than a vice …

Ketchup economics

16 Oct, 2018 at 20:09 | Posted in Economics | Comments Off on Ketchup economics


The increasing ascendancy of real business cycle theories of various stripes, with their common view that the economy is best modeled as a floating Walrasian equilibrium, buffeted by productivity shocks, is indicative of the depths of the divisions separating academic macroeconomists …

If these theories are correct, they imply that the macroeconomics developed in the wake of the Keynesian Revolution is well confined to the ashbin of history. And they suggest that most of the work of contemporary macroeconomists is worth little more than that of those pursuing astrological science …

The appearance of Ed Prescott’s stimulating paper, “Theory Ahead of Business Cycle Measurement,” affords an opportunity to assess the current state of real business cycle theory and to consider its prospects as a foundation for macroeconomic analysis …

My view is that business cycle models of the type urged on us by Prescott have nothing to do with the business cycle phenomena observed in the United States or other capitalist economies …

Prescott’s growth model is not an inconceivable representation of reality. But to claim that its parameters are securely tied down by growth and micro observations seems to me a gross overstatement. The image of a big loose tent flapping in the wind comes to mind …

In Prescott’s model, the central driving force behind cyclical fluctuations is technological shocks. The propagation mechanism is intertemporal substitution in employment. As I have argued so far, there is no independent evidence from any source for either of these phenomena …

Imagine an analyst confronting the market for ketchup. Suppose she or he decided to ignore data on the price of ketchup. This would considerably increase the analyst’s freedom in accounting for fluctuations in the quantity of ketchup purchased … It is difficult to believe that any explanation of fluctuations in ketchup sales that did not confront price data would be taken seriously, at least by hard-headed economists.

Yet Prescott offers an exercise in price-free economics … Others have confronted models like Prescott’s with data on prices with what I think can fairly be labeled dismal results. There is simply no evidence to support any of the price effects predicted by the model …

Improvement in the track record of macroeconomics will require the development of theories that can explain why exchange sometimes works and other times breaks down. Nothing could be more counterproductive in this regard than a lengthy professional detour into the analysis of stochastic Robinson Crusoes.

Lawrence Summers, ‘Skeptical Observations on Real Business Cycle Theory’

In the embrace of great sorrow (personal)

15 Oct, 2018 at 17:48 | Posted in Varia | Comments Off on In the embrace of great sorrow (personal)


Dedicated with warmth and love to Ingrid, Anton and Iskra.

Friend! In the hour of devastation, when your inner self is shrouded in darkness,
When, in an abyssal depth, memory and foreboding perish,
Thought gropes timidly among shadow-shapes and will-o’-the-wisps,
The heart cannot sigh, the eye is unable to weep;
When from your night-clouded soul the wings of fire fall,
And you feel yourself, with terror, sinking into nothingness once more,
Say, who will save you then? Who is the friendly angel
That gives order and beauty back to your inner self,
None but the holy Word that cried to the worlds: “Be!”
And in whose living power the worlds are moving still.
Therefore rejoice, o friend, and sing in the darkness of affliction:
Night is the mother of day, Chaos is neighbour to God.

Halcyon days (personal)

15 Oct, 2018 at 09:56 | Posted in Varia | 1 Comment

 

Spending some lovely Indian Summer days at our summer residence in the Karlskrona archipelago. Pure energy for the soul.

Oft Gefragt

14 Oct, 2018 at 23:13 | Posted in Varia | Comments Off on Oft Gefragt

 

Home is only ever you
Home is only ever you
You picked me up and dropped me off
Woke up in the middle of the night because of me
Lately I have thought about it so often
I have no homeland, I only have you
You are home, for always, and me

Too much of ‘we controlled for’

14 Oct, 2018 at 12:12 | Posted in Statistics & Econometrics | Comments Off on Too much of ‘we controlled for’

The gender pay gap is a fact that, sad to say, to a non-negligible extent is the result of discrimination. And even though many women are not deliberately discriminated against, but rather self-select into lower-wage jobs, this in no way magically explains away the discrimination gap. As decades of socialization research have shown, women may be ‘structural’ victims of impersonal social mechanisms that in different ways aggrieve them. Wage discrimination is unacceptable. Wage discrimination is a shame.

You see it all the time in studies. “We controlled for…” And then the list starts. The longer the better. Income. Age. Race. Religion. Height. Hair color. Sexual preference. Crossfit attendance. Love of parents. Coke or Pepsi. The more things you can control for, the stronger your study is — or, at least, the stronger your study seems. Controls give the feeling of specificity, of precision. But sometimes, you can control for too much. Sometimes you end up controlling for the thing you’re trying to measure …

An example is research around the gender wage gap, which tries to control for so many things that it ends up controlling for the thing it’s trying to measure. As my colleague Matt Yglesias wrote:

“The commonly cited statistic that American women suffer from a 23 percent wage gap through which they make just 77 cents for every dollar a man earns is much too simplistic. On the other hand, the frequently heard conservative counterargument that we should subject this raw wage gap to a massive list of statistical controls until it nearly vanishes is an enormous oversimplification in the opposite direction. After all, for many purposes gender is itself a standard demographic control to add to studies — and when you control for gender the wage gap disappears entirely!” …

Take hours worked, which is a standard control in some of the more sophisticated wage gap studies. Women tend to work fewer hours than men. If you control for hours worked, then some of the gender wage gap vanishes. As Yglesias wrote, it’s “silly to act like this is just some crazy coincidence. Women work shorter hours because as a society we hold women to a higher standard of housekeeping, and because they tend to be assigned the bulk of childcare responsibilities.”

Controlling for hours worked, in other words, is at least partly controlling for how gender works in our society. It’s controlling for the thing that you’re trying to isolate.

Ezra Klein

Trying to reduce the risk of having established only ‘spurious relations’ when dealing with observational data, statisticians and econometricians standardly add control variables. The hope is that one will thereby be able to make more reliable causal inferences. But — as Keynes showed back in the 1930s when criticizing statistical-econometric applications of regression analysis — if you do not manage to get hold of all potential confounding factors, the model risks producing estimates of the variable of interest that are even worse than models without any control variables at all. Conclusion: think twice before you simply include ‘control variables’ in your models!
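A toy simulation shows how controlling for a mediator eats the very effect one is trying to measure. This is a minimal Python sketch under invented assumptions (the variable names and all coefficients are hypothetical): gender lowers assigned hours, hours raise wages, and there is also a direct wage penalty. Regressing wages on gender alone recovers the total gap; adding hours as a ‘control’ removes precisely the part of the gap that operates through hours.

```python
import numpy as np

# Hypothetical data-generating process, all numbers invented for illustration.
rng = np.random.default_rng(0)
n = 100_000
female = rng.integers(0, 2, n).astype(float)         # 1 = woman
hours = 40 - 5 * female + rng.normal(0, 4, n)        # women assigned fewer hours
wage = 2 * hours - 3 * female + rng.normal(0, 5, n)  # direct penalty of -3

def coef_on_female(*controls):
    # OLS coefficient on 'female', with optional control variables.
    X = np.column_stack([np.ones(n), female, *controls])
    return np.linalg.lstsq(X, wage, rcond=None)[0][1]

print(coef_on_female())       # total gap: about -13 (= -3 + 2 * (-5) via hours)
print(coef_on_female(hours))  # 'controlling for' hours: about -3
```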

Bayesian networks and causal diagrams

14 Oct, 2018 at 09:09 | Posted in Theory of Science & Methodology | 11 Comments

Whereas a Bayesian network can only tell us how likely one event is, given that we observed another, causal diagrams can answer interventional and counterfactual questions. For example, the causal fork A ← B → C tells us in no uncertain terms that wiggling A would have no effect on C, no matter how intense the wiggle. On the other hand, a Bayesian network is not equipped to handle a ‘wiggle,’ or to tell the difference between seeing and doing, or indeed to distinguish a fork from a chain [A → B → C]. In other words, both a chain and a fork would predict that observed changes in A are associated with changes in C, making no prediction about the effect of ‘wiggling’ A.
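Judea Pearl

The seeing/doing distinction can be made concrete with a toy simulation. The sketch below, in Python with invented Gaussian data, generates both structures: observationally, A and C are correlated under the fork and under the chain alike, but forcing A to a value (the ‘wiggle’, Pearl’s do-operator) moves C only in the chain.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

def fork(a_forced=None):
    # A <- B -> C: B causes both A and C.
    b = rng.normal(size=n)
    a = b + rng.normal(size=n) if a_forced is None else np.full(n, a_forced)
    c = b + rng.normal(size=n)
    return a, c

def chain(a_forced=None):
    # A -> B -> C: A causes C through B.
    a = rng.normal(size=n) if a_forced is None else np.full(n, a_forced)
    b = a + rng.normal(size=n)
    c = b + rng.normal(size=n)
    return a, c

# Seeing: A and C are dependent under both structures.
for f in (fork, chain):
    a, c = f()
    print(f.__name__, "corr(A, C) =", round(np.corrcoef(a, c)[0, 1], 2))

# Doing: force A to 2. Only the chain moves C.
for f in (fork, chain):
    _, c = f(a_forced=2.0)
    print(f.__name__, "E[C | do(A=2)] =", round(float(c.mean()), 2))
```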

Healing my wounded soul (personal)

13 Oct, 2018 at 23:11 | Posted in Varia | Comments Off on Healing my wounded soul (personal)

 

Mobile detox

13 Oct, 2018 at 17:10 | Posted in Varia | Comments Off on Mobile detox

Eton College is the latest in a series of schools to crack down on mobile phone use among their pupils. Last year, the £39,000-a-year Brighton College started forcing students to hand in their mobile phones at the beginning of each day in an effort to wean them off their “addiction” to technology.

Students in years seven, eight and nine are now required to hand in their mobile phones at the beginning of the day to teachers, who will lock them away, ready for collection when the pupils are about to go home.

Students in year ten are allowed their phones but must subscribe to three “detox” days a week on which they hand them in, with year elevens having one “detox” day.

At Wimbledon High School, a fee-paying day school in south-west London, all children and parents are given a copy of the school’s digital rules, one of which is “put your phone away at meals and leave your phone downstairs at bedtime – try and be screen free at least an hour before bed”.

The Telegraph

In my day it used to be sex, drugs, and rock ‘n’ roll. And now we have to protect our kids from the dangers of mobile phone addiction …

Does using models really make economics a science?

12 Oct, 2018 at 17:25 | Posted in Economics | Comments Off on Does using models really make economics a science?

The model has more and more become the message in modern mainstream economics. Formal models are said to help achieve ‘clarity’ and ‘consistency.’ Dani Rodrik — just to take one prominent example — even says, in his Economics Rules, that “models make economics a science.”

Economics is, more than any other social science, model-oriented. There are many reasons for this — the history of the discipline, having ideals coming from the natural sciences (especially physics), the search for universality (explaining as much as possible with as little as possible), rigour, precision, etc.

Mainstream economists want to explain social phenomena, structures and patterns, based on the assumption that the agents are acting in an optimizing (rational) way to satisfy given, stable and well-defined goals.

The procedure is analytical. The whole is broken down into its constituent parts so as to be able to explain (reduce) the aggregate (macro) as the result of interaction of its parts (micro).

Modern mainstream economists ground their models on a set of core assumptions — basically describing the agents as ‘rational’ actors — and a set of auxiliary assumptions. Together they make up the base model of all mainstream economic models. Based on these two sets of assumptions, they try to explain and predict both individual (micro) and — most importantly — social phenomena (macro).

When describing the actors as rational in these models, the concept of rationality used is instrumental rationality: consistently choosing the preferred alternative, the one judged to have the best consequences for the actor given the wishes/interests/goals that the model treats as exogenously given. How these preferences/wishes/interests/goals are formed is typically not considered to be within the realm of rationality, and a fortiori not constituting part of economics proper.

The picture given by the set of core assumptions (rational choice) is of a rational agent with strong cognitive capacity who knows what alternatives she is facing, evaluates them carefully, calculates the consequences, and chooses the one that — given her preferences — she believes has the best consequences.

Weighing the different alternatives against each other, the actor makes a consistent optimizing choice and acts accordingly (given the set of auxiliary assumptions that specify the kind of social interaction between ‘rational actors’ that can take place in the model).
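Stripped of the prose, the core assumption is small enough to fit in a few lines of code. Here is a hypothetical Python sketch (the alternatives and payoffs are invented) of instrumental rationality as these models assume it: an argmax over given, stable preferences.

```python
# Minimal sketch of the rational-choice core assumption: the agent ranks
# alternatives by a given utility function and consistently picks the best.
alternatives = {"work": 3.0, "leisure": 5.0, "study": 4.0}  # invented payoffs

def choose(utilities):
    # Consistent optimizing choice: argmax over exogenously given preferences.
    return max(utilities, key=utilities.get)

print(choose(alternatives))  # -> 'leisure'
```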

So — mainstream economic models basically consist of a general specification of what (axiomatically) constitutes optimizing rational agents and a more specific description of the kind of situations in which these rational actors act. The list of assumptions can never be complete since there will always be unspecified background assumptions and some (often) silent omissions (like closure, transaction costs, etc.). The hope, however, is that the ‘thin’ list of assumptions shall be sufficient to explain and predict ‘thick’ phenomena in the real, complex, world.

Economics — in contradistinction to logic and mathematics — ought to be an empirical science, and empirical testing of ‘axioms’ ought to be self-evidently relevant for such a discipline. For although the mainstream economist himself (implicitly) claims that his axioms are universally accepted as true and in no need of proof, that is in no way a justified reason for the rest of us to simpliciter accept the claim.

When applying deductivist thinking to economics, mainstream economists usually set up ‘as if’ models based on the logic of idealization and a set of tight axiomatic assumptions from which consistent and precise inferences are made. The beauty of this procedure is that if the axiomatic premises are true, the conclusions necessarily follow. But — although the procedure is a marvellous tool in mathematics and axiomatic-deductivist systems, it is a poor guide for the real world.

The way axioms and theorems are formulated in mainstream economics standardly leaves their specification almost without any restrictions whatsoever, safely making every imaginable evidence compatible with the all-embracing ‘theory’ — and a theory without informational content never risks being empirically tested and found falsified. Used in the ‘thought experimental’ activities of mainstream economics, the procedure may, of course, be very ‘handy’, but it is totally void of any empirical value.

Mainstream economic models are nothing but broken-pieces models. Models of that kind can’t make economics a science.

Paul Romer — a flamboyant and hot-headed economist

12 Oct, 2018 at 09:12 | Posted in Economics | Comments Off on Paul Romer — a flamboyant and hot-headed economist

The American Paul Romer, who on Monday was awarded the prestigious Nobel Prize in economics alongside his compatriot William Nordhaus, is a flamboyant economist with an eventful career, known for his work measuring the share of innovation in growth.

At 62, he is currently a professor at New York University … In October 2016 he left academia to take up the post of chief economist at the World Bank. But his barely veiled criticisms of the Washington institution forced him to resign last January, and he has returned to his academic work in New York.

At issue were his positions on one of the World Bank’s flagship reports, “Doing Business”, published every year, which scrutinizes the regulatory framework applying to small and medium-sized enterprises in 190 economies in order to assess which countries are the most favourable for starting a business.

Paul Romer suggested that this ranking was influenced by political considerations, citing a change of methodology that penalized, for example, Chile, which since 2013 has tumbled down the ranking purely through a mechanical effect.

Before that, the hot-headed economist had already stirred controversy with a resounding article, “The trouble with macroeconomics”, in which he criticized his fellow macroeconomists, reproaching them for ‘running’ mathematical models with no connection to reality.

In his view, treating knowledge and information as a resource creates economic growth. Unlike other resources, knowledge is not merely abundant, it is infinite …

Although his name had been mentioned several times among the potential Nobel laureates, Paul Romer explained that he did not pick up his phone at the first calls he received in the early morning, believing them to be sales calls. It was the Royal Academy of Sciences.

“I didn’t want it, but I accept it,” he then declared, with a smile.

La Croix

Som sommaren

11 Oct, 2018 at 23:25 | Posted in Varia | Comments Off on Som sommaren

 
