Why every econ paper should come with a warning label!

29 Feb, 2020 at 20:43 | Posted in Economics | 3 Comments

It should be part of the academic competences of trained economists to be able to be clear about what their models are for; what the models are about; what the models are capable of doing, and what not; how reliable the models are; what sorts of criticisms have been levelled against the models and how the criticisms have been responded; what alternative models there are; etc. The challenge is not easy, and it is clear that it has not been met with sufficient exuberance and success. The capacity of writing “warning labels” would be part of the needed professional competence. Such warning labels would alert the relevant audiences to the capabilities and limitations of the models …

Exceptional amongst the social sciences is the role of the economics discipline in contemporary society, the intellectual and political authority economics enjoys regardless of its failures. Above, I cited Colander’s confession, “we pretend we understand more than we do” and we could add that economists do so in order to – or with the consequence of – protecting and promoting their socially acknowledged authority. In the worst case, there is a nightmarish scenario on which the more economists are consulted for policy advice, the more they need to pretend to know, and so the higher the likelihood of policies going astray. Avoiding the nightmare would require some smart restructuring of the institutions of the economics discipline.

Uskali Mäki

The history of econometrics

29 Feb, 2020 at 12:40 | Posted in Statistics & Econometrics | Comments Off on The history of econometrics

There have been over four decades of econometric research on business cycles …

But the significance of the formalization becomes more difficult to identify when it is assessed from the applied perspective …

The wide conviction of the superiority of the methods of the science has converted the econometric community largely to a group of fundamentalist guards of mathematical rigour … So much so that the relevance of the research to business cycles is reduced to empirical illustrations. To that extent, probabilistic formalisation has trapped econometric business cycle research in the pursuit of means at the expense of ends.

The limits of econometric forecasting have, as noted by Qin, been critically pointed out many times before. Trygve Haavelmo assessed the role of econometrics in a 1958 article, and although mainly positive about the “repair work” and “clearing-up work” done, he also found some grounds for despair:

There is the possibility that the more stringent methods we have been striving to develop have actually opened our eyes to recognize a plain fact: viz., that the “laws” of economics are not very accurate in the sense of a close fit, and that we have been living in a dream-world of large but somewhat superficial or spurious correlations.

Maintaining that economics is a science in the ‘true knowledge’ business, yours truly remains a sceptic of the pretences and aspirations of econometrics. The marginal return on its ever higher technical sophistication in no way makes up for the lack of the serious under-labouring of its deeper philosophical and methodological foundations that Keynes already complained about. The rather one-sided emphasis on usefulness, and the instrumentalist justification that goes with it, cannot hide the fact that the legions of probabilistic econometricians who consider it ‘fruitful to believe’ in the possibility of treating unique economic data as the observable results of random drawings from an imaginary sampling of an imaginary population are skating on thin ice.

A rigorous application of econometric methods in economics presupposes that the phenomena of our real world economies are ruled by stable causal relations between variables. The endemic lack of predictive success of the econometric project indicates that this hope of finding fixed parameters is a hope for which there, really, is no other ground than hope itself.

Avant toi

29 Feb, 2020 at 10:44 | Posted in Varia | Comments Off on Avant toi

 

Exile

28 Feb, 2020 at 19:42 | Posted in Varia | Comments Off on Exile

 

Margaret Thatcher

28 Feb, 2020 at 18:14 | Posted in Politics & Society | Comments Off on Margaret Thatcher

With her neoliberal politics, Margaret Thatcher stirred up strong emotions. In UR Bildningsbyrån, yours truly and Gunnela Björk discuss the Iron Lady and her intellectual and political legacy. You can listen to the programme in its entirety here.

In Sweden today we have a Centre Party leader with a track record that would surely have impressed Thatcher. A small sample of what Ms Lööf has argued and moved for over the years:

Introduce a flat tax (lower tax for high-income earners)
Abolish the Employment Protection Act
Restrict the right to strike
Introduce market rents
Sell off SvT and SR
Expand nuclear power

And no, this was not Timbro’s or Johan Norberg’s wish list but, as noted, the Centre Party leader’s.

CDU hopeful Friedrich Merz …

28 Feb, 2020 at 15:41 | Posted in Politics & Society | Comments Off on CDU-Hoffnung Friedrich Merz …

Fourteen years ago Friedrich Merz is said to have lost his laptop at Berlin’s Ostbahnhof. The then deputy leader of the Union parliamentary group was lucky, though: a homeless man found the device and returned it. Now he remembers …

In the “Taz”, the now 53-year-old recounts what he and his friend Micha saw when they switched the laptop on … He hoped for a reasonable finder’s reward. Friedrich Merz had another idea.

At the time, Enrico J. was selling the homeless people’s newspaper “Straßenfeger”. Selling newspapers inside the station building itself is forbidden, so he regularly stood in the car park in front of it. About the laptop he says today: “I could have sold the thing on the black market; it had all the data of the federal government on it.” Instead, he and his friend Micha handed the device in to the Federal Border Guard, which at the time was still stationed in the station. As his address he left that of the local homeless charity. Four weeks later, by his own account, a social worker pressed Friedrich Merz’s new book into his hand as a thank-you.

The title: “Nur wer sich ändert, wird bestehen. Vom Ende der Wohlstandsillusion – Kursbestimmung für unsere Zukunft”. With the dedication: “Many thanks to the honest finder”.

It was not what Enrico J. had hoped for: “I found that downright outrageous. I threw the book straight into the Spree. From the address I had given he knew perfectly well that I was homeless, yet to him it wasn’t worth even a cent.”

Die Welt

Econometrics — the scientific illusion of an empirical failure

27 Feb, 2020 at 19:03 | Posted in Statistics & Econometrics | Comments Off on Econometrics — the scientific illusion of an empirical failure

Ed Leamer’s Tantalus on the Road to Asymptopia is one of yours truly’s favourite critiques of econometrics, and for the benefit of those who are not versed in the econometric jargon, this handy summary gives the gist of it in plain English:

[Image: a plain-English summary of Leamer’s argument]

Most work in econometrics and regression analysis is done on the assumption that the researcher has a theoretical model that is ‘true.’ Based on this belief of having a correct specification for an econometric model or running a regression, one proceeds as if the only problems remaining to be solved have to do with measurement and observation.

When things sound too good to be true, they usually aren’t. And that goes for econometric wet dreams too. The snag is, as Leamer convincingly argues, that there is precious little to support the perfect specification assumption. Looking around in social science and economics, we don’t find a single regression or econometric model that lives up to the standards set by the ‘true’ theoretical model, and there is precious little that gives us reason to believe things will be different in the future.

To think that we are able to construct a model in which all relevant variables are included and the functional relationships between them are correctly specified is not only a belief without support, but a belief impossible to support. The theories we work with when building our econometric models are insufficient. No matter what we study, there are always some variables missing, and we don’t know the correct way to functionally specify the relationships between the variables we choose to put into our models.

Every econometric model constructed is misspecified. There is always an endless list of possible variables to include and endless possible ways to specify the relationships between them. So every applied econometrician comes up with his own specification and parameter estimates. The econometric Holy Grail of consistent and stable parameter values is nothing but a dream.
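
To make Leamer’s point concrete, here is a minimal simulation sketch (Python, with entirely made-up variables; nothing here comes from Leamer’s paper) showing how the estimated ‘effect’ of a variable moves around with the analyst’s choice among equally defensible specifications:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical data-generating process: y depends on x and two correlated
# covariates; none of the specifications below is known in advance to be "true".
rng = np.random.default_rng(42)
n = 500
z1 = rng.normal(size=n)
z2 = 0.7 * z1 + rng.normal(scale=0.5, size=n)
x = 0.5 * z1 + rng.normal(size=n)
y = 1.0 * x + 2.0 * z1 - 1.5 * z2 + rng.normal(size=n)

# Four specifications an applied econometrician might defend.
specs = {
    "y ~ x":           np.column_stack([x]),
    "y ~ x + z1":      np.column_stack([x, z1]),
    "y ~ x + z2":      np.column_stack([x, z2]),
    "y ~ x + z1 + z2": np.column_stack([x, z1, z2]),
}

for name, X in specs.items():
    fit = sm.OLS(y, sm.add_constant(X)).fit()
    print(f"{name:16s}  coefficient on x = {fit.params[1]: .3f}")
# The estimated 'effect' of x changes with the chosen specification.
```

Each of these regressions could be defended in a seminar, yet they do not agree on what the coefficient on x is.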

A rigorous application of econometric methods presupposes that the phenomena of our real world economies are ruled by stable causal relations between variables.  Parameter-values estimated in specific spatio-temporal contexts are presupposed to be exportable to totally different contexts. To warrant this assumption one, however, has to convincingly establish that the targeted acting causes are stable and invariant so that they maintain their parametric status after the bridging. The endemic lack of predictive success of the econometric project indicates that this hope of finding fixed parameters is a hope for which there really is no other ground than hope itself.

The theoretical conditions that have to be fulfilled for regression analysis and econometrics to really work are nowhere even closely met in reality. Making outlandish statistical assumptions does not provide a solid ground for doing relevant social science and economics. Although regression analysis and econometrics have become the most used quantitative methods in the social sciences and economics today, it is still a fact that the inferences made from them are, strictly speaking, invalid.

Econometrics is basically a deductive method. Given the assumptions (such as manipulability, transitivity, separability, additivity, linearity, etc) it delivers deductive inferences. The problem, of course, is that we will never completely know when the assumptions are right. Conclusions can only be as certain as their premises. That also applies to econometrics.

How economists reshaped the world

27 Feb, 2020 at 17:43 | Posted in Economics | Comments Off on How economists reshaped the world


To yours truly, the real value of Appelbaum’s meticulously researched history of how economists have come to increasingly influence public policy in modern societies is that it shows how fundamentally ideological economics is. Of course, you never hear anyone at our seminars telling the lecturer that the assumptions on which his models are built are made for ideological reasons only. But that does not necessarily mean, whether on the surface or not, that “academic analysis is judged on its merits”. What it means is that we have a catechism that no one dares to question. And that catechism has become hegemonic for particular reasons, one of which may very well be of an ideological nature.

The models and assumptions mainstream economics builds on typically have a neoliberal or market-friendly bias. I guess that is also one of the — ideological — reasons those models and theories are so dear to many economists.

The alternative is to make honesty and humility prerequisites for membership in the community of economists. The easy part is to challenge the pretenders. The hard part is to say no when government officials look to economists for an answer to a normative question. Scientific authority never conveys moral authority. No economist has a privileged insight into questions of right and wrong, and none deserves a special say in fundamental decisions about how society should operate. Economists who argue otherwise and exert undue influence in public debates about right and wrong should be exposed for what they are: frauds.

Paul Romer

And yet the menace of the years shall find me unafraid

26 Feb, 2020 at 17:36 | Posted in Varia | Comments Off on And yet the menace of the years shall find me unafraid

Invictus

A great poem.

It helped one of the greatest men who have walked this earth keep hope alive during twenty-seven years in prison.

Long live his unconquerable soul.

Via con me

26 Feb, 2020 at 17:08 | Posted in Varia | Comments Off on Via con me

 

Don’t, for anything in the world, miss the variety show of a man in love with you.

Why I am not a Bayesian

26 Feb, 2020 at 15:50 | Posted in Statistics & Econometrics | 1 Comment

Assume you’re a Bayesian turkey holding a nonzero probability belief in the hypothesis H that “people are nice vegetarians who do not eat turkeys”, and that every day you see the sun rise you take as confirmation of this belief. For every day you survive, you update your belief according to Bayes’ Rule

P(H|e) = [P(e|H)P(H)]/P(e),

where evidence e stands for “not being eaten” and P(e|H) = 1. Given that there do exist other hypotheses than H, P(e) is less than 1 and so P(H|e) is greater than P(H). Every day you survive increases your probability belief that you will not be eaten. This is totally rational according to the Bayesian definition of rationality. Unfortunately — as Bertrand Russell famously noticed — for every day that goes by, the traditional Christmas dinner also gets closer and closer …
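
The update can be run numerically. The following is a toy sketch only, with an arbitrarily chosen prior and an arbitrarily chosen daily survival probability under the alternative hypothesis:

```python
# Toy Bayesian-turkey update. H: "people are nice vegetarians who do not
# eat turkeys"; evidence e each day: "I was not eaten today".
# Illustrative numbers: prior P(H) = 0.5 and, under not-H, a daily
# survival probability of 0.99.
p_H = 0.5            # prior belief in H
p_e_given_H = 1.0    # under H the turkey always survives the day
p_e_given_notH = 0.99

for day in range(1, 301):
    p_e = p_e_given_H * p_H + p_e_given_notH * (1 - p_H)
    p_H = p_e_given_H * p_H / p_e            # Bayes' Rule: P(H|e)
    if day % 100 == 0:
        print(f"day {day:3d}: P(H | survived so far) = {p_H:.4f}")
# The belief in H creeps towards 1 with every uneventful day --
# right up until Christmas.
```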

Neoclassical economics nowadays usually assumes that agents who have to make choices under conditions of uncertainty behave according to Bayesian rules — that is, they maximize expected utility with respect to some subjective probability measure that is continually updated according to Bayes’ theorem. If not, they are supposed to be irrational.

Bayesianism reduces questions of rationality to questions of internal consistency (coherence) of beliefs, but — even granted this questionable reductionism — do rational agents really have to be Bayesian?

The nodal point here is — of course — that although Bayes’ Rule is mathematically unquestionable, that doesn’t qualify it as indisputably applicable to scientific questions. As one of my favourite statistics bloggers —  Andrew Gelman — puts it:

The fundamental objections to Bayesian methods are twofold: on one hand, Bayesian methods are presented as an automatic inference engine, and this raises suspicion in anyone with applied experience, who realizes that different methods work well in different settings … Bayesians promote the idea that a multiplicity of parameters can be handled via hierarchical, typically exchangeable, models, but it seems implausible that this could really work automatically. In contrast, much of the work in modern non-Bayesian statistics is focused on developing methods that give reasonable answers using minimal assumptions.

The second objection to Bayes comes from the opposite direction and addresses the subjective strand of Bayesian inference: the idea that prior and posterior distributions represent subjective states of knowledge. Here the concern from outsiders is, first, that as scientists we should be concerned with objective knowledge rather than subjective belief, and second, that it’s not clear how to assess subjective knowledge in any case.

Beyond these objections is a general impression of the shoddiness of some Bayesian analyses, combined with a feeling that Bayesian methods are being oversold as an all-purpose statistical solution to genuinely hard problems. Compared to classical inference, which focuses on how to extract the information available in data, Bayesian methods seem to quickly move to elaborate computation. It does not seem like a good thing for a generation of statisticians to be ignorant of experimental design and analysis of variance, instead of becoming experts on the convergence of the Gibbs sampler. In the short term this represents a dead end, and in the long term it represents a withdrawal of statisticians from the deeper questions of inference and an invitation for econometricians, computer scientists, and others to move in and fill in the gap …

Bayesian inference is a coherent mathematical theory but I don’t trust it in scientific applications. Subjective prior distributions don’t transfer well from person to person, and there’s no good objective principle for choosing a noninformative prior (even if that concept were mathematically defined, which it’s not). Where do prior distributions come from, anyway? I don’t trust them and I see no reason to recommend that other people do, just so that I can have the warm feeling of philosophical coherence …

Amen hayr sourp

25 Feb, 2020 at 17:27 | Posted in Varia | Comments Off on Amen hayr sourp

 

E penso a te

25 Feb, 2020 at 16:35 | Posted in Varia | Comments Off on E penso a te

 

Adorno on the philosophy of ‘alternative facts’

25 Feb, 2020 at 16:03 | Posted in Politics & Society | 1 Comment

Something repellent clings to the lie, and though the consciousness of this was indeed beaten into one with the old whip, this simultaneously said something about the master of the dungeon. The mistake lies in all too much honesty. Whoever lies, is ashamed, because in every lie they must experience what is degrading in the existing state of the world … Such shame saps the energy of the lies of those who are more subtly organized. They do it badly, and only thereby does the lie come to be genuinely unmoral for others. It suggests the former think the latter are stupid, and serves to express disrespect. Among today’s cunning practitioners, the lie has long since lost its honest function, of concealing something real. No-one believes anyone, everyone is in the loop. Lies are told only when someone wants others to know they aren’t important, that the former does not need the latter, and does not care what they think. Today the lie, once a liberal means of communication, has become one of the techniques of brazenness, with whose help every single person spreads the iciness, in whose shelter they thrive.

Theodor W. Adorno

‘Alternative facts’ people like Trump, Erdogan, Orban and Putin sure make reading Minima Moralia a worthwhile effort …

Econometrics — a crooked path from cause to effect

24 Feb, 2020 at 19:48 | Posted in Statistics & Econometrics | 1 Comment

 

In their book Mastering ‘Metrics: The Path from Cause to Effect Joshua Angrist and Jörn-Steffen Pischke write:

Our first line of attack on the causality problem is a randomized experiment, often called a randomized trial. In a randomized trial, researchers change the causal variables of interest … for a group selected using something like a coin toss. By changing circumstances randomly, we make it highly likely that the variable of interest is unrelated to the many other factors determining the outcomes we want to study. Random assignment isn’t the same as holding everything else fixed, but it has the same effect. Random manipulation makes other things equal hold on average across the groups that did and did not experience manipulation. As we explain … ‘on average’ is usually good enough.

Angrist and Pischke may “dream of the trials we’d like to do” and consider “the notion of an ideal experiment” something that “disciplines our approach to econometric research,” but to maintain that ‘on average’ is “usually good enough” is an allegation that in my view is rather unwarranted, and for many reasons.

First of all, it amounts to nothing but hand waving to simpliciter assume, without argumentation, that it is tenable to treat social agents and relations as homogeneous and interchangeable entities.

Randomization is used basically to allow the econometrician to treat the population as consisting of interchangeable and homogeneous groups (‘treatment’ and ‘control’). The regression models one arrives at by using randomized trials tell us the average effect that variations in variable X have on the outcome variable Y, without having to explicitly control for the effects of other explanatory variables R, S, T, etc. Everything is assumed to be essentially equal except the values taken by variable X.

In a usual regression context one would apply an ordinary least squares estimator (OLS) in trying to get an unbiased and consistent estimate:

Y = α + βX + ε,

where α is a constant intercept, β a constant “structural” causal effect and ε an error term.

The problem here is that although we may get an estimate of the “true” average causal effect, this may “mask” important heterogeneous effects of a causal nature. Although we get the right answer that the average causal effect is 0, half the population may have individual causal effects of –100 and the other half effects of +100. Contemplating being treated or not, most people would probably be interested in knowing about this underlying heterogeneity and would not consider the OLS average effect particularly enlightening.
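
A minimal simulation sketch (hypothetical numbers, not any particular study) of how the OLS average can mask exactly this kind of heterogeneity:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical population: half the individuals have an individual causal
# effect of +100, the other half -100, so the average treatment effect is 0.
rng = np.random.default_rng(0)
n = 10_000
effect = np.where(rng.random(n) < 0.5, 100.0, -100.0)  # heterogeneous effects
x = rng.integers(0, 2, size=n)                         # randomized treatment
y = 50.0 + effect * x + rng.normal(scale=5.0, size=n)  # outcome

fit = sm.OLS(y, sm.add_constant(x.astype(float))).fit()
print(f"estimated average effect: {fit.params[1]: .2f}")   # close to 0
print(f"individual effects range: {effect.min():.0f} to {effect.max():.0f}")
# OLS duly reports an average effect near zero, while every single individual
# would experience an effect of either +100 or -100 if treated.
```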

Limiting model assumptions in economic science always have to be closely examined. If we are going to show that the mechanisms or causes we isolate and handle in our models are stable, in the sense that they do not change when we “export” them to our “target systems”, we have to be able to show that they hold under more than ceteris paribus conditions; otherwise they are a fortiori of limited value for our understanding, explanation or prediction of real economic systems.

Real-world social systems are not governed by stable causal mechanisms or capacities. The kinds of “laws” and relations that econometrics has established are laws and relations about entities in models that presuppose causal mechanisms to be atomistic and additive. When causal mechanisms operate in real-world social target systems, they do so only in ever-changing and unstable combinations where the whole is more than a mechanical sum of parts. If economic regularities obtain, they do so (as a rule) only because we engineered them for that purpose. Outside man-made “nomological machines” they are rare, or even non-existent. Unfortunately, that also makes most of the achievements of econometrics – like most of the contemporary endeavours of mainstream economic theoretical modelling – rather useless.

Remember that a model is not the truth. It is a lie to help you get your point across. And in the case of modeling economic risk, your model is a lie about others, who are probably lying themselves. And what’s worse than a simple lie? A complicated lie.

Sam L. Savage The Flaw of Averages

When Joshua Angrist and Jörn-Steffen Pischke in an earlier article of theirs [“The Credibility Revolution in Empirical Economics: How Better Research Design Is Taking the Con out of Econometrics,” Journal of Economic Perspectives, 2010] say that

anyone who makes a living out of data analysis probably believes that heterogeneity is limited enough that the well-understood past can be informative about the future

I really think they underestimate the heterogeneity problem. It does not just turn up as an external validity problem when trying to “export” regression results to different times or different target populations. It is also often an internal problem to the millions of regression estimates that economists produce every year.

But when the randomization is purposeful, a whole new set of issues arises — experimental contamination — which is much more serious with human subjects in a social system than with chemicals mixed in beakers … Anyone who designs an experiment in economics would do well to anticipate the inevitable barrage of questions regarding the valid transference of things learned in the lab (one value of z) into the real world (a different value of z) …

Absent observation of the interactive compounding effects z, what is estimated is some kind of average treatment effect which is called by Imbens and Angrist (1994) a “Local Average Treatment Effect,” which is a little like the lawyer who explained that when he was a young man he lost many cases he should have won but as he grew older he won many that he should have lost, so that on the average justice was done. In other words, if you act as if the treatment effect is a random variable by substituting βt for β0 + β′zt, the notation inappropriately relieves you of the heavy burden of considering what are the interactive confounders and finding some way to measure them …

If little thought has gone into identifying these possible confounders, it seems probable that little thought will be given to the limited applicability of the results in other settings.

Ed Leamer

Evidence-based theories and policies are highly valued nowadays. Randomization is supposed to control for bias from unknown confounders. The received opinion is that evidence based on randomized experiments is therefore the best.

More and more economists have also lately come to advocate randomization as the principal method for ensuring valid causal inferences.

I would, however, rather argue that randomization, just like econometrics, promises more than it can deliver, basically because it requires assumptions that in practice are not possible to maintain.

Especially when it comes to questions of causality, randomization is nowadays considered some kind of “gold standard”. Everything has to be evidence-based, and the evidence has to come from randomized experiments.

But just like econometrics, randomization is basically a deductive method. Given the assumptions (such as manipulability, transitivity, separability, additivity, linearity, etc.), these methods deliver deductive inferences. The problem, of course, is that we will never completely know when the assumptions are right. And although randomization may contribute to controlling for confounding, it does not guarantee it, since genuine randomness presupposes infinite experimentation and we know all real experimentation is finite. And even if randomization may help to establish average causal effects, it says nothing of individual effects unless homogeneity is added to the list of assumptions. Real target systems are seldom epistemically isomorphic to our axiomatic-deductive models/systems, and even if they were, we would still have to argue for the external validity of the conclusions reached from within these epistemically convenient models/systems. Causal evidence generated by randomization procedures may be valid in “closed” models, but what we usually are interested in is causal evidence in the real target system we happen to live in.
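
The ‘on average’ caveat can be illustrated with a small sketch (purely illustrative numbers): across many repetitions randomization balances an unobserved confounder, but in any single finite trial the imbalance can be far from negligible.

```python
import numpy as np

# In each simulated trial an unobserved confounder u is randomized across
# treatment and control. On average the groups are balanced, but in any
# single finite trial the difference in mean u can be sizeable.
rng = np.random.default_rng(1)
n, trials = 50, 2_000
imbalance = []
for _ in range(trials):
    u = rng.normal(size=n)                 # unobserved confounder
    treated = rng.permutation(n) < n // 2  # random assignment, half treated
    imbalance.append(u[treated].mean() - u[~treated].mean())

imbalance = np.array(imbalance)
print(f"mean imbalance over trials: {imbalance.mean(): .4f}")   # close to 0
print(f"share of trials with |imbalance| > 0.25: {(np.abs(imbalance) > 0.25).mean():.2%}")
```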

When does a conclusion established in population X hold for target population Y? Only under very restrictive conditions!

Angrist and Pischke’s “ideally controlled experiments” tell us with certainty what causes what effects — but only given the right “closures”. Making appropriate extrapolations from (ideal, accidental, natural or quasi) experiments to different settings, populations or target systems is not easy. “It works there” is no evidence for “it will work here”. Causes deduced in an experimental setting still have to be shown to come with an export warrant to the target population/system. The causal background assumptions made have to be justified, and without licenses to export, the value of “rigorous” and “precise” methods — and of ‘on-average knowledge’ — is despairingly small.
