Mainstream economics gets the priorities wrong

20 January, 2018 at 17:46 | Posted in Theory of Science & Methodology | Leave a comment

There is something about the way economists construct their models nowadays that obviously doesn’t sit right.

The one-sided, almost religious, insistence on axiomatic-deductivist modelling as the only scientific activity worth pursuing in economics still has not given way to a methodological pluralism based on ontological considerations (rather than formalistic tractability). In its search for model-based rigour and certainty, ‘modern’ economics has turned out to be a totally hopeless project in terms of real-world relevance.

If macroeconomic models – of whatever ilk – build on microfoundational assumptions of representative actors, rational expectations, market clearing and equilibrium, and we know that real people and markets cannot be expected to obey these assumptions, then the warrant for supposing that model-based conclusions or hypotheses about causally relevant mechanisms or regularities can be bridged to real-world target systems is obviously not justifiable. The incompatibility between actual behaviour and the behaviour in macroeconomic models built on representative actors and rational-expectations microfoundations shows the futility of trying to represent real-world target systems with models flagrantly at odds with reality. As Robert Gordon once had it:

Rigor competes with relevance in macroeconomic and monetary theory, and in some lines of development macro and monetary theorists, like many of their colleagues in micro theory, seem to consider relevance to be more or less irrelevant.

 


For Lennart, in memoriam

19 January, 2018 at 05:16 | Posted in Varia | Leave a comment

 

The real flimflam man

18 January, 2018 at 16:16 | Posted in Economics | 6 Comments

As yours truly noted the other day, Oxford professor Simon Wren-Lewis has been relentless in his efforts to defend orthodox macroeconomic theory against attacks from pluralist Rethinking Economics students and ‘heterodox’ critics like yours truly.

Answering this critique, Wren-Lewis now says he finds my claim that he “obviously shares the view that there is nothing basically wrong with ‘standard’ theory” to be nothing but ‘pathetic’ flimflam.

Well — why not look into some of the stuff Wren-Lewis has written and then decide who is a ‘pathetic’ flimflammer here.

Mark Lilla and the identity politics trap

18 January, 2018 at 13:18 | Posted in Politics & Society | Leave a comment

ZEIT: Why does identity politics lead to this strengthening of the concept of identity in the first place? Didn’t the original idea amount to the opposite – to the deconstruction of identity?

Lilla: You find both. Although ‘deconstruction’, of course, is only something for rich bourgeois kids studying at insanely expensive elite universities. They are the only people in the world who are interested in deconstruction – and they belong to the most privileged people on God’s green earth. Besides that, there is also an affirmative use of the concept of identity. But that is simply the American variant of French deconstruction: I can make myself! Instead of deconstructing identity, here you create your own. Of course that is attractive to students. If you are young and someone asks you, “Do you want me to explain microeconomic theories to you? Or would you rather we talk a bit about you?” – then it is obvious what you will answer. Of course the students would rather talk about themselves. That is only natural.

Lars Weisbrod/Die Zeit

Equilibrium unemployment — a dangerous and misleading measure with well-known flaws

18 January, 2018 at 10:26 | Posted in Economics | 2 Comments

Despite, or perhaps because of, the good outlook, several bank economists are warning that the economy is now heading into overheating. The National Institute of Economic Research (Konjunkturinstitutet, KI) expresses concern that fiscal policy risks becoming too expansionary. The agency argues that the budget target risks not being met, while at the same time there is a lack of structural reforms to reduce unemployment in the long run. Despite falling unemployment today, KI therefore foresees unemployment rising again in a few years.

One basis for this concern is KI’s calculations of equilibrium unemployment. Equilibrium unemployment is a measure of the level of unemployment that reflects long-run balance in the economy, i.e. the level of unemployment consistent with wages increasing at a moderate pace and inflation thereby staying stable close to the inflation target. If unemployment falls below that level, imbalances arise that in time make it start rising again. The clearest effect of unemployment below the equilibrium level is a rising rate of inflation. Equilibrium unemployment is a theoretical concept, and its level is calculated in a complicated way and with many uncertain assumptions. KI itself writes that its estimate should be interpreted as approximate, since the assessment is uncertain. That the uncertainty surrounding estimates of equilibrium unemployment is large, and that their usefulness as a guide for policy decisions is therefore limited, has long been known. The actual equilibrium unemployment rate is not observable even after the fact …

Assessments of equilibrium unemployment risk playing far too large a role in policy, given the great uncertainties associated with the calculations. The view that a coming overheating of the labour market risks being harmful in the next few years, and that the stance of fiscal policy in 2018 must therefore mean tightening ahead, may instead take the form of a self-fulfilling prophecy. As long as fiscal policy can be expansionary without the Riksbank raising the interest rate, it can lower unemployment. Discussing austerity already today means that we risk parking unemployment at today’s 7 per cent without finding out whether it is possible to lower it further. The focus of fiscal and monetary policy should rather be on how they can contribute to an even better labour-market development than the one in the forecasts.

Niklas Blomqvist & Åsa-Pia Järliden Bergström

Given the almost non-existent empirical-statistical support for the concept of equilibrium unemployment — with its roots in Milton Friedman’s ‘natural rate of unemployment’ and NAIRU — it is, to say the least, remarkable that mainstream economists keep using it. It is hard to avoid the suspicion that this is largely a matter of ideological motives. With the attitude that full employment is impossible to achieve, right-wing proposals for a continued dismantling of the Swedish welfare state and for austerity policies can be given a veneer of scientific respectability.

If you strip away the usual technical mumbo jumbo that we economists unfortunately so often indulge in, the simple fact remains that what really drives the ‘equilibrium unemployment’ calculated on these vague and questionable grounds is the actually measured rate of unemployment. Why the National Institute of Economic Research, the Riksbank and the Ministry of Finance should then spend a lot of money and computer time calculating this ‘equilibrium unemployment’ is hard to understand. Why not put the resources into getting actual unemployment down instead?
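
To see how mechanically an ‘equilibrium unemployment’ estimate can end up tracking measured unemployment, here is a minimal sketch in Python. The smoothing rule and all the numbers are invented for illustration; this is not KI’s actual procedure, only a stylised estimate built as a slow-moving average of the actual series.

```python
import numpy as np

# Hypothetical path of measured unemployment (per cent): a persistent rise
# followed by a slow decline -- not actual Swedish data.
u = np.concatenate([np.full(20, 6.0), np.full(20, 9.0), np.linspace(9.0, 7.0, 20)])

# A stylised 'equilibrium unemployment' estimate: an exponentially weighted
# average of past actual unemployment (lam close to 1 means heavy smoothing).
lam = 0.9
u_star = np.empty_like(u)
u_star[0] = u[0]
for t in range(1, len(u)):
    u_star[t] = lam * u_star[t - 1] + (1 - lam) * u[t]

# The 'equilibrium' estimate simply drifts towards wherever actual
# unemployment has recently been.
print(f"actual unemployment:    start {u[0]:.1f} %, end {u[-1]:.1f} %")
print(f"'equilibrium' estimate: start {u_star[0]:.1f} %, end {u_star[-1]:.1f} %")
```

Whatever measured unemployment does, an estimate constructed this way will sooner or later follow it, which is exactly why it makes such a poor anchor for policy.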

Among those who nevertheless defend the concept of equilibrium unemployment — including some LO economists — it is often argued that, despite its flaws, it is a "good framework for thinking."

Really?

NAIRU and other equilibrium-unemployment concepts do not, in fact, work well as a framework for thinking at all. As recent research by, for example, Engelbert Stockhammer (2011), Özlem Onaran (2012), Roger Farmer (2010), Storm & Naastepad (2012) and others has convincingly shown, they work monumentally badly as precisely such a framework.

So why this clinging to a concept that everyone — including its defenders — admits is poorly grounded in theory and next to impossible to estimate empirically?

In science we normally discard non-falsifiable theories. It is high time to do the same with the concept of equilibrium unemployment.

CNN on the real ‘shithole’ and his allies

17 January, 2018 at 10:07 | Posted in Politics & Society | Leave a comment

 

What should we do with econometrics?

17 January, 2018 at 09:37 | Posted in Statistics & Econometrics | Leave a comment

Econometrics … is an undoubtedly flawed paradigm. Even putting aside the myriad of technical issues with misspecification and how these can yield results that are completely wrong, after seeing econometric research in practice I have become skeptical of the results it produces.

Reading an applied econometrics paper could leave you with the impression that the economist (or any social science researcher) first formulated a theory, then built an empirical test based on the theory, then tested the theory. But in my experience what generally happens is more like the opposite: with some loose ideas in mind, the econometrician runs a lot of different regressions until they get something that looks plausible, then tries to fit it into a theory (existing or new) … Statistical theory itself tells us that if you do this for long enough, you will eventually find something plausible by pure chance!

This is bad news because as tempting as that final, pristine looking causal effect is, readers have no way of knowing how it was arrived at. There are several ways I’ve seen to guard against this:

(1) Use a multitude of empirical specifications to test the robustness of the causal links, and pick the one with the best predictive power …

(2) Have researchers submit their paper for peer review before they carry out the empirical work, detailing the theory they want to test, why it matters and how they’re going to do it …

(3) Insist that the paper be replicated. Firstly, by having the authors submit their data and code and seeing if referees can replicate it (think this is a low bar? Most empirical research in ‘top’ economics journals can’t even manage it). Secondly — in the truer sense of replication — wait until someone else, with another dataset or method, gets the same findings in at least a qualitative sense …

All three of these should, in my opinion, be a prerequisite for research that uses econometrics (and probably statistics more generally) … Naturally, this would result in a lot more null findings and probably a lot less research. Perhaps it would also result in fewer papers which attempt to tell the entire story: that is, which go all the way from building a new model to finding (surprise!) that even the most rigorous empirical methods support it.

Unlearning Economics

Good advice, underlining the importance of never letting our admiration for technical virtuosity blind us to the fact that we have to maintain a cautious attitude towards probabilistic inferences in economic contexts.
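
The quoted warning that a long enough specification search will eventually turn up something ‘plausible’ by pure chance is easy to demonstrate. A minimal sketch, assuming nothing but simulated noise, forty candidate regressors and the conventional 5 per cent significance threshold:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n, k = 100, 40           # 100 observations, 40 candidate regressors -- all pure noise
y = rng.normal(size=n)
X = rng.normal(size=(n, k))

chance_findings = []
for j in range(k):
    # Simple bivariate regression of y on candidate regressor j
    slope, intercept, r_value, p_value, std_err = stats.linregress(X[:, j], y)
    if p_value < 0.05:
        chance_findings.append((j, round(p_value, 3)))

# With 40 independent tries at the 5 per cent level we expect roughly two
# 'significant' regressors even though no real relationship exists.
print("regressors 'significant' by pure chance:", chance_findings)
```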

Science should help us disclose the causal forces behind apparent ‘facts.’ We should look out for causal relations, but econometrics can never be more than a starting point in that endeavour, since econometric (statistical) explanations are not explanations in terms of mechanisms, powers, capacities or causes. Firmly stuck in an empiricist tradition, econometrics is only concerned with the measurable aspects of reality. But there is always the possibility that there are other variables – of vital importance and, although perhaps unobservable and non-additive, not necessarily epistemologically inaccessible – that were not considered for the model. The variables that were included can hence never be guaranteed to be more than potential causes, not real causes. A rigorous application of econometric methods in economics really presupposes that the phenomena of our real-world economies are ruled by stable causal relations between variables. A perusal of the leading econom(etr)ic journals shows that most econometricians still concentrate on fixed-parameter models and that parameter values estimated in specific spatiotemporal contexts are presupposed to be exportable to totally different contexts. To warrant this assumption one has to convincingly establish that the targeted acting causes are stable and invariant, so that they maintain their parametric status after the bridging. The endemic lack of predictive success of the econometric project indicates that this hope of finding fixed parameters is a hope for which there really is no other ground than hope itself.
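
The point about exporting parameters can be made concrete with a small simulation. This is a minimal sketch under invented numbers, not a claim about any particular study: a slope estimated by OLS in one ‘context’ is used to predict outcomes in another context where the underlying relation has shifted.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(beta, n=200):
    """Draw (x, y) from a simple linear data-generating process with the given slope."""
    x = rng.normal(size=n)
    y = 1.0 + beta * x + rng.normal(scale=0.5, size=n)
    return x, y

# 'Context A': estimate slope and intercept by OLS where the true slope is 2.
x_a, y_a = simulate(beta=2.0)
beta_hat = np.cov(x_a, y_a, bias=True)[0, 1] / np.var(x_a)
alpha_hat = y_a.mean() - beta_hat * x_a.mean()

# 'Context B': the underlying relation has shifted; the true slope is now -1.
x_b, y_b = simulate(beta=-1.0)
pred_b = alpha_hat + beta_hat * x_b

rmse = np.sqrt(np.mean((y_b - pred_b) ** 2))
print(f"slope estimated in context A:       {beta_hat:.2f}")
print(f"prediction RMSE when exported to B: {rmse:.2f}  (noise s.d. is 0.5)")
```

The exported coefficient is ‘right’ where it was estimated and badly wrong where it is applied, which is the whole problem with presupposing parameter invariance across contexts.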

Real-world social systems are seldom governed by stable causal mechanisms or capacities. The kinds of ‘laws’ and relations that econometrics has established are laws and relations between entities in models that presuppose causal mechanisms to be atomistic and additive. When causal mechanisms operate in real-world social target systems, they do so only in ever-changing and unstable combinations where the whole is more than a mechanical sum of parts. If economic regularities obtain, they do so (as a rule) only because we engineered them for that purpose. Outside man-made ‘nomological machines’ they are rare, or even non-existent. Unfortunately, that also makes most of the achievements of econometrics – like most of the contemporary endeavours of mainstream economics – rather useless.

Maintaining that economics is a science in the ‘true knowledge’ business, yours truly remains a skeptic of the pretences and aspirations of econometrics. So far, I cannot see that it has yielded much in terms of relevant, interesting economic knowledge. Overall, the results have been bleak indeed.

Steven Pinker on political correctness

16 January, 2018 at 16:16 | Posted in Politics & Society | 4 Comments

 

It really says something truly depressing about our times when these kinds of self-evident things have to be said …

Deaton-Cartwright-Senn-Gelman on the limited value of randomization

15 January, 2018 at 20:08 | Posted in Statistics & Econometrics | Leave a comment

In Social Science and Medicine (December 2017), Angus Deaton & Nancy Cartwright argue that RCTs do not have any warranted special status. They are, simply, far from being the ‘gold standard’ they are usually portrayed as:

Randomized Controlled Trials (RCTs) are increasingly popular in the social sciences, not only in medicine. We argue that the lay public, and sometimes researchers, put too much trust in RCTs over other methods of investigation. Contrary to frequent claims in the applied literature, randomization does not equalize everything other than the treatment in the treatment and control groups, it does not automatically deliver a precise estimate of the average treatment effect (ATE), and it does not relieve us of the need to think about (observed or unobserved) covariates. Finding out whether an estimate was generated by chance is more difficult than commonly believed. At best, an RCT yields an unbiased estimate, but this property is of limited practical value. Even then, estimates apply only to the sample selected for the trial, often no more than a convenience sample, and justification is required to extend the results to other groups, including any population to which the trial sample belongs, or to any individual, including an individual in the trial. Demanding ‘external validity’ is unhelpful because it expects too much of an RCT while undervaluing its potential contribution. RCTs do indeed require minimal assumptions and can operate with little prior knowledge. This is an advantage when persuading distrustful audiences, but it is a disadvantage for cumulative scientific progress, where prior knowledge should be built upon, not discarded. RCTs can play a role in building scientific knowledge and useful predictions but they can only do so as part of a cumulative program, combining with other methods, including conceptual and theoretical development, to discover not ‘what works’, but ‘why things work’.

In a comment on Deaton & Cartwright, statistician Stephen Senn argues that on several issues concerning randomization Deaton & Cartwright “simply confuse the issue,” that their views are “simply misleading and unhelpful” and that they make “irrelevant” simulations:

My view is that randomisation should not be used as an excuse for ignoring what is known and observed but that it does deal validly with hidden confounders. It does not do this by delivering answers that are guaranteed to be correct; nothing can deliver that. It delivers answers about which valid probability statements can be made and, in an imperfect world, this has to be good enough. Another way I sometimes put it is like this: show me how you will analyse something and I will tell you what allocations are exchangeable. If you refuse to choose one at random I will say, “why? Do you have some magical thinking you’d like to share?”

Contrary to Senn, Andrew Gelman shares Deaton’s and Cartwright’s view that randomized trials often are overrated:

There is a strange form of reasoning we often see in science, which is the idea that a chain of reasoning is as strong as its strongest link. The social science and medical research literature is full of papers in which a randomized experiment is performed, a statistically significant comparison is found, and then story time begins, and continues, and continues—as if the rigor from the randomized experiment somehow suffuses through the entire analysis …

One way to get a sense of the limitations of controlled trials is to consider the conditions under which they can yield meaningful, repeatable inferences. The measurement needs to be relevant to the question being asked; missing data must be appropriately modeled; any relevant variables that differ between the sample and population must be included as potential treatment interactions; and the underlying effect should be large. It is difficult to expect these conditions to be satisfied without good substantive understanding. As Deaton and Cartwright put it, “when little prior knowledge is available, no method is likely to yield well-supported conclusions.” Much of the literature in statistics, econometrics, and epidemiology on causal identification misses this point, by focusing on the procedures of scientific investigation—in particular, tools such as randomization and p-values which are intended to enforce rigor—without recognizing that rigor is empty without something to be rigorous about.

Nowadays many social scientists maintain that ‘imaginative empirical methods’ — such as natural experiments, field experiments, lab experiments and RCTs — can help us answer questions concerning the external validity of models used in the social sciences. In their view these methods are more or less tests of ‘an underlying model’ that enable them to make the right selection from the ever-expanding ‘collection of potentially applicable models.’ When looked at carefully, however, there are in fact few real reasons to share this optimism.

Many ‘experimentalists’ claim that it is easy to replicate experiments under different conditions and therefore a fortiori easy to test the robustness of experimental results. But is it really that easy? Population selection is almost never simple. Had the problem of external validity only been about inference from sample to population, this would be no critical problem. But the really interesting inferences are those we try to make from specific labs/experiments/fields to the specific real-world situations/institutions/structures that we are interested in understanding or (causally) explaining. And then the population problem is more difficult to tackle.

In randomized trials the researchers try to find out the causal effects that different variables of interest may have by changing circumstances randomly — a procedure somewhat (‘on average’) equivalent to the usual ceteris paribus assumption.

Besides the fact that ‘on average’ is not always ‘good enough,’ it amounts to nothing but hand waving to simpliciter assume, without argumentation, that it is tenable to treat social agents and relations as homogeneous and interchangeable entities.

Randomization is used to basically allow the econometrician to treat the population as consisting of interchangeable and homogeneous groups (‘treatment’ and ‘control’). The regression models one arrives at by using randomized trials tell us the average effect that variations in variable X have on the outcome variable Y, without having to explicitly control for the effects of other explanatory variables R, S, T, etc. Everything is assumed to be essentially equal except the values taken by variable X.

In a usual regression context one would apply an ordinary least squares estimator (OLS) in trying to get an unbiased and consistent estimate:

Y = α + βX + ε,

where α is a constant intercept, β a constant ‘structural’ causal effect and ε an error term.

The problem here is that although we may get an estimate of the ‘true’ average causal effect, this may ‘mask’ important heterogeneous effects of a causal nature. Although we get the right answer of the average causal effect being 0, those who are ‘treated’ (X = 1) may have causal effects equal to –100 and those ‘not treated’ (X = 0) may have causal effects equal to 100. Contemplating being treated or not, most people would probably be interested in knowing about this underlying heterogeneity and would not consider the OLS average effect particularly enlightening.
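
A minimal sketch of that masking, using the same invented numbers as in the text (effects of +100 and –100 in two equally large, unobserved groups): the OLS slope on the treatment dummy comes out close to zero even though no individual effect is anywhere near zero.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Two equally large, unobserved sub-populations with opposite treatment effects.
individual_effect = np.where(rng.random(n) < 0.5, 100.0, -100.0)
X = rng.integers(0, 2, size=n)                  # randomised treatment dummy
Y = 50.0 + individual_effect * X + rng.normal(scale=5.0, size=n)

# OLS of Y on X (with intercept): the slope is the estimated average effect.
beta_hat = np.polyfit(X, Y, 1)[0]
print(f"estimated average treatment effect: {beta_hat:+.1f}")
print("individual effects in the two groups: +100 and -100")
```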

Limiting model assumptions in science always have to be closely examined. If we want to show that the mechanisms or causes we isolate and handle in our models are stable — in the sense that they do not change when we ‘export’ them to our ‘target systems’ — we have to show that they hold not only under ceteris paribus conditions. If they hold only under such conditions, they are a fortiori of limited value for our understanding, explanation or prediction of real-world systems.

Most ‘randomistas’ underestimate the heterogeneity problem. It does not just turn up as an external validity problem when trying to ‘export’ regression results to different times or different target populations. It is also often an internal problem to the millions of regression estimates that are produced every year.

Just like econometrics, randomization promises more than it can deliver, basically because it requires assumptions that in practice are not possible to maintain. And just like econometrics, randomization is basically a deductive method: given the assumptions, it delivers deductive inferences. The problem, of course, is that we will never completely know when the assumptions are right. And although randomization may contribute to controlling for confounding, it does not guarantee it, since genuine randomness presupposes infinite experimentation and we know all real experimentation is finite. And even if randomization may help to establish average causal effects, it says nothing about individual effects unless homogeneity is added to the list of assumptions. Causal evidence generated by randomization procedures may be valid in ‘closed’ models, but what we usually are interested in is causal evidence in the real-world target system we happen to live in.
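
The finiteness point can be illustrated with a small simulation, a sketch under invented numbers rather than a claim about any actual trial. Randomization makes the estimator unbiased across hypothetical repetitions, but in any single finite trial the treatment and control groups can differ substantially on an influential background covariate, and that single estimate is then off by a corresponding amount.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50                                    # one small, finite trial
ability = rng.normal(size=n)              # an influential background covariate
y0 = 2.0 * ability + rng.normal(size=n)   # outcome without treatment
y1 = y0 + 1.0                             # the true effect is exactly +1 for everyone

estimates, imbalances = [], []
for _ in range(2000):                     # many hypothetical finite trials
    assignment = rng.permutation(np.r_[np.ones(n // 2, bool), np.zeros(n - n // 2, bool)])
    y_obs = np.where(assignment, y1, y0)
    estimates.append(y_obs[assignment].mean() - y_obs[~assignment].mean())
    imbalances.append(ability[assignment].mean() - ability[~assignment].mean())

estimates = np.array(estimates)
# Unbiased on average over hypothetical repetitions ...
print(f"mean estimate over 2000 trials: {estimates.mean():.2f}  (true effect 1.00)")
# ... but any single finite trial can be far off, and the error tracks the
# chance imbalance in the background covariate.
print(f"5th/95th percentile of single-trial estimates: "
      f"{np.percentile(estimates, 5):.2f} / {np.percentile(estimates, 95):.2f}")
print(f"correlation of error with covariate imbalance: "
      f"{np.corrcoef(imbalances, estimates - 1.0)[0, 1]:.2f}")
```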

‘Ideally controlled experiments’ tell us with certainty what causes what effects — but only given the right ‘closures.’ Making appropriate extrapolations from (ideal, accidental, natural or quasi) experiments to different settings, populations or target systems, is not easy. “It works there” is no evidence for “it will work here”. Causes deduced in an experimental setting still have to show that they come with an export-warrant to the target population/system. The causal background assumptions made have to be justified, and without licenses to export, the value of ‘rigorous’ and ‘precise’ methods — and ‘on-average-knowledge’ — is despairingly small.

In our days, serious arguments have been made from data. Beautiful, delicate theorems have been proved, although the connection with data analysis often remains to be established. And an enormous amount of fiction has been produced, masquerading as rigorous science …

Indeed, far-reaching claims have been made for the superiority of a quantitative template that depends on modeling — by those who manage to ignore the far-reaching assumptions behind the models. However, the assumptions often turn out to be unsupported by data. If so, the rigor of advanced quantitative methods is a matter of appearance rather than substance …

David A. Freedman, Statistical Models and Causal Inference

Neoclassical economics is great — if it wasn’t for all the caveats!

15 January, 2018 at 15:26 | Posted in Economics | Leave a comment

I think that modern neoclassical economics is in fine shape as long as it is understood as the ideological and substantive legitimating doctrine of the political theory of possessive individualism. As long as we have relatively-self-interested liberal individuals who have relatively-strong beliefs that things are theirs, the competitive market in equilibrium is an absolutely wonderful mechanism for achieving a truly extraordinary degree of societal coordination and productivity. We need to understand that. We need to value that. And that is what neoclassical economics does, and does well.

Of course, there are all the caveats to Arrow-Debreu-McKenzie:

1   The market must be in equilibrium.
2   The market must be competitive.
3   The goods traded must be excludable.
4   The goods traded must be non-rival.
5   The quality of goods traded and of effort delivered must be known, or at least bonded, for adverse selection and moral hazard are poison.
6   Externalities must be corrected by successful Pigovian taxes or successful Coaseian carving of property rights at the joints.
7   People must be able to accurately calculate their own interests.
8   People must not be sadistic — the market does not work well if participating agents are either the envious or the spiteful.
9   The distribution of wealth must correspond to the societal consensus of need and desert.
10 The structure of debt and credit must be sound, or if it is not sound we need a central bank or a social-credit agency to make it sound and so make Say’s Law true in practice even though we have no reason to believe Say’s Law is true in theory.

Brad DeLong

An impressive list of caveats indeed. Not much value left in “modern neoclassical economics” if you ask me …

Still — almost a century and a half after Léon Walras founded neoclassical general equilibrium theory — “modern neoclassical economics” hasn’t been able to show that markets move economies to equilibria.

We do know that — under very restrictive assumptions — equilibria do exist, are unique and are Pareto-efficient. One however has to ask oneself — what good does that do?

As long as we cannot show — except under exceedingly special assumptions — that there are convincing reasons to suppose there are forces leading economies to equilibria, the value of general equilibrium theory is negligible. As long as we cannot really demonstrate that there are forces operating — under reasonable, relevant and at least mildly realistic conditions — to move markets towards equilibria, there cannot really be any sustainable reason for anyone to pay any interest or attention to this theory.
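
To see how little mechanical ‘price adjustment’ by itself guarantees, consider a deliberately simple sketch: a textbook cobweb market with invented parameter values, not the general equilibrium tâtonnement literature itself. When the lagged supply response is stronger than the demand response, the adjustment process moves away from the market-clearing price rather than towards it.

```python
# Cobweb market: demand D(p_t) = a - b*p_t, lagged supply S(p_{t-1}) = c + d*p_{t-1}.
# Market clearing each period gives p_t = (a - c)/b - (d/b) * p_{t-1}.
a, b, c, d = 20.0, 1.0, 2.0, 1.5       # invented numbers; d/b > 1 means divergence
p_star = (a - c) / (b + d)             # the market-clearing (equilibrium) price
p = p_star + 0.1                       # start a tiny distance from equilibrium

for t in range(1, 11):
    p = (a - c) / b - (d / b) * p
    print(f"period {t:2d}: price {p:6.2f}   (equilibrium price {p_star:.2f})")
```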

A stability that can only be proved by assuming “Santa Claus” conditions is of no avail. Most people do not believe in Santa Claus anymore. And for good reasons. Santa Claus is for kids, and general equilibrium economists ought to grow up.

Continuing to model a world full of agents behaving as economists — “often wrong, but never uncertain” — while still not being able to show that the system converges to equilibrium under reasonable assumptions (or simply assuming the problem away) is a gross misallocation of intellectual resources and time.

And then, of course, there is Sonnenschein-Mantel-Debreu!

So what? Why should we care about Sonnenschein-Mantel-Debreu?

Because Sonnenschein-Mantel-Debreu ultimately explains why ‘modern neoclassical economics’ — New Classical, Real Business Cycle, Dynamic Stochastic General Equilibrium (DSGE) and ‘New Keynesian’ — with its microfounded macromodels is such a bad substitute for real macroeconomic analysis!

These models try to describe and analyze complex and heterogeneous real economies with a single rational-expectations-robot-imitation-representative-agent. That is, with something that has absolutely nothing to do with reality. And — worse still — something that is not even amenable to the kind of general equilibrium analysis that they are thought to give a foundation for, since Hugo Sonnenschein (1972), Rolf Mantel (1976) and Gerard Debreu (1974) unequivocally showed that there are no assumptions on individuals that would guarantee either stability or uniqueness of the equilibrium solution.

Opting for cloned representative agents that are all identical is of course not a real solution to the fallacy of composition that the Sonnenschein-Mantel-Debreu theorem points to. Representative agent models are — as I have argued at length here — rather an evasion whereby issues of distribution, coordination, heterogeneity — everything that really defines macroeconomics — are swept under the rug.

Instead of real maturity, we see that general equilibrium theory possesses only pseudo-maturity. For the description of the economic system, mathematical economics has succeeded in constructing a formalized theoretical structure, thus giving an impression of maturity, but one of the main criteria of maturity, namely, verification, has hardly been satisfied. In comparison to the amount of work devoted to the construction of the abstract theory, the amount of effort which has been applied, up to now, in checking the assumptions and statements seems inconsequential.

