Heterogeneity and the flaw of averages
29 Jun, 2017 at 00:11 | Posted in Statistics & Econometrics | Comments Off on Heterogeneity and the flaw of averages
With interactive confounders explicitly included, the overall treatment effect β0 + β′zt is not a number but a variable that depends on the confounding effects. Absent observation of the interactive compounding effects, what is estimated is some kind of average treatment effect which is called by Imbens and Angrist (1994) a “Local Average Treatment Effect,” which is a little like the lawyer who explained that when he was a young man he lost many cases he should have won but as he grew older he won many that he should have lost, so that on the average justice was done. In other words, if you act as if the treatment effect is a random variable by substituting βt for β0 + β′zt , the notation inappropriately relieves you of the heavy burden of considering what are the interactive confounders and finding some way to measure them. Less elliptically, absent observation of z, the estimated treatment effect should be transferred only into those settings in which the confounding interactive variables have values close to the mean values in the experiment. If little thought has gone into identifying these possible confounders, it seems probable that little thought will be given to the limited applicability of the results in other settings.
Yes, indeed, regression-based averages are something we have reason to be cautious about.
Suppose we want to estimate the average causal effect of a dummy variable (T) on an observed outcome variable (O). In a usual regression context one would apply an ordinary least squares (OLS) estimator in trying to get an unbiased and consistent estimate:
O = α + βT + ε,
where α is a constant intercept, β a constant ‘structural’ causal effect and ε an error term.
The problem here is that although we may get an estimate of the ‘true’ average causal effect, this may ‘mask’ important heterogeneous effects of a causal nature. Suppose, for example, that the average causal effect is 0, so that OLS gives us the ‘right’ answer on average — and yet those who are ‘treated’ (T=1) may have causal effects equal to -100 and those ‘not treated’ (T=0) may have causal effects equal to 100. Contemplating being treated or not, most people would probably be interested in knowing about this underlying heterogeneity and would not consider the OLS average effect particularly enlightening.
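To see concretely how the OLS average can mask this kind of heterogeneity, here is a minimal simulation sketch in Python. The numbers are purely hypothetical, chosen only so that the construction is consistent with the example above (the ±100 effects, the group sizes, and the baseline levels are all assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000  # hypothetical population size

# Hypothetical construction consistent with the example in the text:
# the treated (T=1) have individual causal effects of -100,
# the untreated (T=0) would have gained +100 had they been treated.
T = np.repeat([1, 0], n // 2)
baseline = np.where(T == 1, 100.0, 0.0) + rng.normal(0, 1, n)  # untreated potential outcome
effect = np.where(T == 1, -100.0, 100.0)                       # individual causal effect
O = baseline + effect * T                                      # observed outcome

# OLS of O on a constant and T; with a binary regressor the slope
# is simply the difference in group means.
X = np.column_stack([np.ones(n), T])
beta = np.linalg.lstsq(X, O, rcond=None)[0]

print(f"OLS 'average' effect:       {beta[1]: .2f}")              # roughly 0
print(f"Average causal effect:      {effect.mean(): .2f}")        # 0 by construction
print(f"Effect among the treated:   {effect[T == 1].mean(): .2f}")  # -100
print(f"Effect among the untreated: {effect[T == 0].mean(): .2f}")  # +100
```

The regression dutifully reports an ‘average’ effect of roughly zero, which is correct as an average but of little help to anyone deciding whether to be treated.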
The heterogeneity problem does not just turn up as an external validity problem when trying to ‘export’ regression results to different times or different target populations. It is also often an internal problem for the millions of OLS estimates that economists produce every year.
Marketization undermining the welfare system
27 Jun, 2017 at 09:56 | Posted in Economics | 2 Comments
During the last couple of decades, Sweden has tried to marketize its public welfare sector. The prime mover behind the marketization has (allegedly) been the urge for cost-minimization, freedom of choice, and improved quality. The results have (unsurprisingly) been far from successful.
In a recent dissertation presented at Uppsala University, Linda Moberg summarizes her findings on the implications of the marketization trend for the Swedish eldercare system:
The overall aim of this dissertation has been to investigate what implications marketization has had for the organization of Swedish eldercare. In particular, it has asked how marketization, in the form of privatized provision, increased competition, and user choice, has transformed the relationship between service users, professionals, and the state …
Previous research has indicated … that municipalities’ ability to write monitorable contracts often is inadequate and that the requirements often are formulated in such a way that it cannot be retrospectively assessed whether the providers have adhered to them … In addition, scholars have also found that few municipalities audit and evaluate their eldercare on a regular basis … Taken together, this indicates that it is not unproblematic for the municipalities to take on the altered regulatory role that the marketization reforms have assigned to them. Furthermore, the lack of direct public control over the quality in the system may result in, if quality differences between different providers become too wide, an undermining of the long-standing goal of social equality in the Swedish eldercare system. An apparent risk, given the difficulty in obtaining information about quality differences between providers documented in the dissertation, is that better-educated or more resourceful users gain an advantage in making informed choices and thereby get access to the best services …
The increased reliance on marketization has not only altered the regulatory relationship between the users and the municipalities, it has also contributed to a system where the ability of the staff to control and enforce service quality within eldercare risks being reduced.
Neoliberals and libertarians have always provided a lot of ideologically founded ideas and ‘theories’ to underpin their Panglossian view of markets. But when these are tested against reality they usually turn out to be wrong. The promised results are simply not to be found. And that goes for privatized eldercare too.
The neoliberal argument for the marketization of public welfare systems is that it not only decreases the role of government, but also increases freedom of choice and improves quality. This has not happened. As has proved to be the case with other neoliberal ideas, privatization — when tested — has not been able to deliver the results promised by empty speculation.
No one should be surprised!
Kenneth Arrow explained it all already back in 1963:
Under ideal insurance the patient would actually have no concern with the informational inequality between himself and the physician, since he would only be paying by results anyway, and his utility position would in fact be thoroughly guaranteed. In its absence he wants to have some guarantee that at least the physician is using his knowledge to the best advantage. This leads to the setting up of a relationship of trust and confidence, one which the physician has a social obligation to live up to … The social obligation for best practice is part of the commodity the physician sells, even though it is a part that is not subject to thorough inspection by the buyer.
One consequence of such trust relations is that the physician cannot act, or at least appear to act, as if he is maximizing his income at every moment of time. As a signal to the buyer of his intentions to act as thoroughly in the buyer’s behalf as possible, the physician avoids the obvious stigmata of profit-maximizing … The very word, ‘profit’ is a signal that denies the trust relation.
Kenneth Arrow, “Uncertainty and the Welfare Economics of Medical Care”, American Economic Review 53(5), 1963.
Keynes & MMT
25 Jun, 2017 at 18:40 | Posted in Economics | 6 Comments
[Bendixen says the] old ‘metallist’ view of money is superstitious, and Dr. Bendixen trounces it with the vigour of a convert.
Money is the creation of the State; it is not true to say that gold is international currency, for international contracts are never made in terms of gold, but always in terms of some national monetary unit; there is no essential or important distinction between notes and metallic money; money is the measure of value, but to regard it as having value itself is a relic of the view that the value of money is regulated by the value of the substance of which it is made, and is like confusing a theatre ticket with the performance. With the exception of the last, the only true interpretation of which is purely dialectical, these ideas are undoubtedly of the right complexion. It is probably true that the old ‘metallist’ view and the theories of regulation of note issue based on it do greatly stand in the way of currency reform, whether we are thinking of economy and elasticity or of a change in the standard; and a gospel which can be made the basis of a crusade on these lines is likely to be very useful to the world, whatever its crudities or terminology.
J. M. Keynes, “Theorie des Geldes und der Umlaufsmittel, by Ludwig von Mises; Geld und Kapital, by Friedrich Bendixen” (review), Economic Journal, 1914
Panis Angelicus (personal)
25 Jun, 2017 at 18:28 | Posted in Varia | Comments Off on Panis Angelicus (personal)
What is a statistical model?
24 Jun, 2017 at 14:05 | Posted in Statistics & Econometrics | 1 Comment
My critique is that the currently accepted notion of a statistical model is not scientific; rather, it is a guess at what might constitute (scientific) reality without the vital element of feedback, that is, without checking the hypothesized, postulated, wished-for, natural-looking (but in fact only guessed) model against that reality. To be blunt, as far as is known today, there is no such thing as a concrete i.i.d. (independent, identically distributed) process, not because this is not desirable, nice, or even beautiful, but because Nature does not seem to be like that … As Bertrand Russell put it at the end of his long life devoted to philosophy, “Roughly speaking, what we know is science and what we don’t know is philosophy.” In the scientific context, but perhaps not in the applied area, I fear statistical modeling today belongs to the realm of philosophy.
To make this point seem less erudite, let me rephrase it in cruder terms. What would a scientist expect from statisticians, once he became interested in statistical problems? He would ask them to explain to him, in some clear-cut cases, the origin of randomness frequently observed in the real world, and furthermore, when this explanation depended on the device of a model, he would ask them to continue to confront that model with the part of reality that the model was supposed to explain. Something like this was going on three hundred years ago … But in our times the idea somehow got lost when i.i.d. became the pampered new baby.
Should we define randomness with probability? If we do, we have to accept that to speak of randomness we also have to presuppose the existence of nomological probability machines, since probabilities cannot be spoken of — and actually, to be strict, do not at all exist — without specifying such system-contexts. Accepting Haavelmo’s domain of probability theory and sample space of infinite populations — just as Fisher’s ‘hypothetical infinite population,’ von Mises’ ‘collective’ or Gibbs’ ‘ensemble’ — also implies that judgments are made on the basis of observations that are actually never made!
Infinitely repeated trials or samplings never take place in the real world, so that cannot be a sound inductive basis for a science with aspirations of explaining real-world socio-economic processes, structures or events. It is not tenable. And the way social scientists — including economists and econometricians — often uncritically and without argument have come simply to assume that probability distributions from statistical theory can be applied to their own areas of research is not acceptable.
Importantly, this also means that if you cannot show that your data satisfy all the conditions of the probabilistic nomological machine — including, e.g., that the distribution of the deviations corresponds to a normal curve — then the statistical inferences used lack sound foundations.
Trying to apply statistical models outside overly simple nomological machines like coin tossing and roulette wheels, scientists run into serious problems, the greatest being the need for lots of more or less unsubstantiated — and sometimes wilfully hidden — assumptions in order to make any sustainable inferences from the models. Many of the results that economists and other social scientists present with their statistical/econometric models depend to a substantial degree on the use of mostly unfounded ‘technical’ assumptions.
Making outlandish statistical assumptions does not provide a solid ground for doing relevant social science. It is rather a recipe for producing fiction masquerading as science.
Mainstream monetary theory — neat, plausible, and utterly wrong
22 Jun, 2017 at 16:21 | Posted in Economics | 10 Comments
In modern times legal currencies are totally based on fiat. Currencies no longer have intrinsic value (as gold and silver had). What gives them value is basically the legal status given to them by government and the simple fact that you have to pay your taxes with them. That also enables governments to run a kind of monopoly business where they can never run out of money. Hence spending becomes the prime mover, and taxing and borrowing are degraded to following acts. If we have a depression, the solution, then, is not austerity. It is spending. Budget deficits are not the major problem, since fiat money means that governments can always create more money.
Financing quantitative easing, fiscal expansion, and other similar operations is made possible by simply crediting a bank account and thereby — by a single keystroke — actually creating money. One of the most important reasons why so many countries are still stuck in depression-like economic quagmires is that people in general — including most mainstream economists — simply don’t understand the workings of modern monetary systems. The result is totally and utterly wrong-headed austerity policies, emanating from a groundless fear of creating inflation via central banks printing money, in a situation where we should rather fear deflation and inadequate effective demand.
The mainstream neoclassical textbook concept of the money multiplier assumes that banks automatically expand the credit money supply to a multiple of their aggregate reserves. If the required reserve-deposit ratio is 5%, the money supply should be about twenty times larger than the aggregate reserves of banks. In this way the money multiplier concept assumes that the central bank controls the money supply by setting the required reserve ratio.
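To spell out the textbook arithmetic (in its simplest version, which assumes the public holds no currency): with a required reserve-deposit ratio r, the multiplier is 1/r, so r = 0.05 gives 1/0.05 = 20 — a monetary base of, say, 100 billion would then be claimed to support a money supply of roughly 2,000 billion.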
In his Macroeconomics – just to take an example – Greg Mankiw writes:
We can now see that the money supply is proportional to the monetary base. The factor of proportionality … is called the money multiplier … Each dollar of the monetary base produces m dollars of money. Because the monetary base has a multiplied effect on the money supply, the monetary base is called high-powered money.
The money multiplier concept is – as can be seen from the quote above – nothing but one big fallacy. This is not the way credit is created in a monetary economy. It’s nothing but a monetary myth that the monetary base can play such a decisive role in a modern credit-run economy with fiat money.
In the real world banks first extend credit and then look for reserves. So the money multiplier basically also gets the causation wrong. At a deep, fundamental level the supply of money is endogenous.
One may rightly wonder why on earth this pet mainstream neoclassical fairy tale is still in the textbooks and taught to economics undergraduates. Giving the impression that banks exist simply to passively transfer savings into investment, it is such a gross misrepresentation of what goes on in the real world, that there is only one place for it — and that is in the …
The American carnage
22 Jun, 2017 at 13:38 | Posted in Economics | Comments Off on The American carnage
President Trump, in his inaugural address and elsewhere, rightly says that over the decades since 1980 American household distributions of income and wealth became strikingly unequal. But if recent budget and legislative proposals from Trump and the House of Representatives come into effect, today’s distributional mess would become visibly worse.
I will sketch how the mess happened, then I will propose some ideas about how it might be cleaned up. I will show that even with lucky institutional changes and good policy, it would take several more decades to undo the “American carnage” that the president described …
Trump and the Congress’s budget and legislative proposals could only work for his “struggling families” and “forgotten people” if they would generate strong trickle-down growth. Structural constraints on income distribution and wealth dynamics won’t let trickle-down happen. His slogan about “America First” is for the top one percent of income distribution – effectively a “capitalist” class – not for “workers” in the middle of the income distribution or the struggling, forgotten households further down.
I have outlined a feasible progressive alternative, which would generate broad-based progress. Progressive changes may not take hold. If not, and if Trump-style interventions materialize, the distributional mess and “American carnage” will only get worse.
Simpson’s paradox
21 Jun, 2017 at 08:29 | Posted in Statistics & Econometrics | Comments Off on Simpson’s paradox
From a more theoretical perspective, Simpson’s paradox importantly shows that causality can never be reduced to a question of statistics or probabilities, unless you are — miraculously — able to keep constant all other factors that influence the probability of the outcome studied.
To understand causality we always have to relate it to a specific causal structure. Statistical correlations are never enough. No structure, no causality.
Simpson’s paradox is an interesting paradox in itself, but it can also highlight a deficiency in the traditional econometric approach to causality. Say you have 1000 observations on men and an equal number of observations on women applying for admission to university studies, and that 70% of the men are admitted, but only 30% of the women. Running a logistic regression to find the odds ratios (and probabilities) of admission for men and women, women seem to be in a less favourable position (‘discriminated’ against) compared to men (male odds are 2.33, female odds are 0.43, giving an odds ratio of 5.44). But once we find out that men and women apply to different departments, we may well get a Simpson’s paradox result where men turn out to be ‘discriminated’ against: say 800 men apply for economics studies (680 admitted) and 200 for physics studies (20 admitted), while 100 women apply for economics studies (90 admitted) and 900 for physics studies (210 admitted) — giving within-department odds ratios of about 0.63 and 0.37.
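The reversal is easy to verify. Here is a minimal Python sketch that simply recomputes the odds and odds ratios from the hypothetical admission counts given above:

```python
def odds(admitted, applied):
    """Odds of admission: admitted / rejected."""
    return admitted / (applied - admitted)

# Aggregate figures: 1000 applicants of each sex.
male_odds = odds(700, 1000)    # 70% of the men admitted   -> 2.33
female_odds = odds(300, 1000)  # 30% of the women admitted -> 0.43
print(f"aggregate odds ratio (men/women): {male_odds / female_odds:.2f}")  # about 5.44

# The same applicants broken down by department.
econ_or = odds(680, 800) / odds(90, 100)    # men vs women, economics -> about 0.63
physics_or = odds(20, 200) / odds(210, 900) # men vs women, physics   -> about 0.37
print(f"economics odds ratio (men/women): {econ_or:.2f}")
print(f"physics odds ratio (men/women):   {physics_or:.2f}")
```

In the aggregate the men look favoured; within each department the women do — precisely because women here disproportionately apply to the department with the much lower admission rate.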
Econometric patterns should never be seen as anything other than possible clues to follow. From a critical realist perspective it is obvious that behind observable data there are real structures and mechanisms operating, things that are — if we really want to understand, explain and (possibly) predict things in the real world — more important to get hold of than simply correlating and regressing observable variables.
Math cannot establish the truth value of a fact. Never has. Never will.
Logistic regression (student stuff)
21 Jun, 2017 at 08:25 | Posted in Statistics & Econometrics | Comments Off on Logistic regression (student stuff)
And in the video below (in Swedish) yours truly shows how to perform a logit regression using Gretl:
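For readers who do not use Gretl, the same kind of estimation can be sketched in Python. This is only an illustrative alternative, not the workflow shown in the video; the data below are simulated and the statsmodels package is assumed to be installed:

```python
import numpy as np
import statsmodels.api as sm

# Simulate a binary outcome driven by one explanatory variable.
rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
p = 1 / (1 + np.exp(-(0.5 + 1.2 * x)))  # true logistic probabilities
y = rng.binomial(1, p)                  # observed 0/1 outcome

# Fit the logit model and report coefficients and odds ratios.
model = sm.Logit(y, sm.add_constant(x)).fit(disp=False)
print(model.params)          # estimated intercept and slope (log-odds scale)
print(np.exp(model.params))  # the same coefficients expressed as odds ratios
```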
Ekonomi och ojämlikhet (Economics and inequality)
20 Jun, 2017 at 14:22 | Posted in Economics | Comments Off on Ekonomi och ojämlikhet
Last autumn Malmö högskola arranged a conversation about economics and inequality in today’s Sweden. Under the competent moderation of Cecilia Nebel, the cartoonist Sara Granér, professor Tapio Salonen and yours truly discussed what the growing income and wealth gaps are doing to our society.
Those of you who were not able to be there can follow the conversation here.
Do you want to get a Nobel prize? Eat chocolate and move to Chicago!
20 Jun, 2017 at 12:53 | Posted in Varia | 2 Comments
As we’ve noticed, again and again, correlation is not the same as causation …
If you want to get the prize in economics — and want to be on the sure side — yours truly would suggest you complement your intake of chocolate with a move to Chicago.
Out of the 78 laureates who have been awarded “The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel,” 28 have been affiliated with the University of Chicago — that is 36%. The world is really a small place when it comes to economics …
Causality matters!
20 Jun, 2017 at 10:29 | Posted in Statistics & Econometrics | 1 Comment
Causality in the social sciences — and economics — can never solely be a question of statistical inference. Causality entails more than predictability, and really explaining social phenomena in depth requires theory. Analysis of variation — the foundation of all econometrics — can never in itself reveal how these variations are brought about. Only when we are able to tie actions, processes or structures to the statistical relations detected can we say that we are getting at relevant explanations of causation.
Most facts have many different, possible, alternative explanations, but we want to find the best of all contrastive explanations (since all real explanation takes place relative to a set of alternatives). So which is the best explanation? Many scientists, influenced by statistical reasoning, think that the likeliest explanation is the best explanation. But the likelihood of x is not in itself a strong argument for thinking it explains y. I would rather argue that what makes one explanation better than another are things like aiming for and finding powerful, deep, causal features and mechanisms that we have warranted and justified reasons to believe in. Statistical reasoning — especially the variety based on a Bayesian epistemology — generally has no room for these kinds of explanatory considerations. The only thing that matters is the probabilistic relation between evidence and hypothesis. That is also one of the main reasons I find abduction — inference to the best explanation — a better description and account of what constitutes actual scientific reasoning and inference.
For more on these issues — see the chapter “Capturing causality in economics and the limits of statistical inference” in my On the use and misuse of theories and models in economics.
In the social sciences … regression is used to discover relationships or to disentangle cause and effect. However, investigators have only vague ideas as to the relevant variables and their causal order; functional forms are chosen on the basis of convenience or familiarity; serious problems of measurement are often encountered.
Regression may offer useful ways of summarizing the data and making predictions. Investigators may be able to use summaries and predictions to draw substantive conclusions. However, I see no cases in which regression equations, let alone the more complex methods, have succeeded as engines for discovering causal relationships.
Some statisticians and data scientists think that algorithmic formalisms somehow give them access to causality. That is, however, simply not true. Assuming ‘convenient’ things like faithfulness or stability is not to give proofs. It is to assume what has to be proven. Deductive-axiomatic methods used in statistics do not produce evidence for causal inferences. The real causality we are searching for is the one existing in the real world around us. If there is no warranted connection between axiomatically derived theorems and the real world, well, then we haven’t really obtained the causation we are looking for.