Heterogeneity and the flaw of averages

29 June, 2017 at 00:11 | Posted in Statistics & Econometrics | Comments Off on Heterogeneity and the flaw of averages

With interactive confounders explicitly included, the overall treatment effect β0 + β′zt is not a number but a variable that depends on the confounding effects. Absent observation of the interactive compounding effects, what is estimated is some kind of average treatment effect which is called by Imbens and Angrist (1994) a “Local Average Treatment Effect,” which is a little like the lawyer who explained that when he was a young man he lost many cases he should have won but as he grew older he won many that he should have lost, so that on the average justice was done. In other words, if you act as if the treatment effect is a random variable by substituting βt for β0 + β′zt, the notation inappropriately relieves you of the heavy burden of considering what are the interactive confounders and finding some way to measure them. Less elliptically, absent observation of z, the estimated treatment effect should be transferred only into those settings in which the confounding interactive variables have values close to the mean values in the experiment. If little thought has gone into identifying these possible confounders, it seems probable that little thought will be given to the limited applicability of the results in other settings.

Ed Leamer

Yes, indeed, regression-based averages are something we have reason to be cautious about.

Suppose we want to estimate the average causal effect of a dummy variable (T) on an observed outcome variable (O). In a usual regression context one would apply an ordinary least squares estimator (OLS) in trying to get an unbiased and consistent estimate:

O = α + βT + ε,

where α is a constant intercept, β a constant ‘structural’ causal effect and ε an error term.

The problem here is that although we may get a correct estimate of the ‘true’ average causal effect, this average may ‘mask’ important heterogeneous effects of a causal nature. With equal-sized groups we may get the right answer, an average causal effect of 0, even though those who are ‘treated’ (T = 1) have causal effects equal to -100 and those ‘not treated’ (T = 0) have causal effects equal to 100. Contemplating being treated or not, most people would probably be interested in knowing about this underlying heterogeneity and would not consider the OLS average effect particularly enlightening.
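One way to construct such a case numerically (a minimal sketch with illustrative numbers of my own, not taken from the post) is to give the two groups different baseline outcomes, so that the OLS estimate lands exactly on the correct but uninformative average of 0:

```python
# A minimal sketch (illustrative numbers, not from the post): OLS recovers the
# average causal effect of 0, yet the treated are hurt by treatment (-100)
# while the untreated would have benefited (+100).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 10_000
T = rng.integers(0, 2, n)                    # dummy 'treatment'

y0 = 100 * T + rng.normal(0, 1, n)           # untreated potential outcome (baseline differs by group)
tau = np.where(T == 1, -100, 100)            # individual causal effects
y1 = y0 + tau                                # treated potential outcome
O = np.where(T == 1, y1, y0)                 # observed outcome

print("average causal effect:", round(tau.mean(), 2))             # ~0
print("effect for treated / untreated:",
      tau[T == 1].mean(), tau[T == 0].mean())                     # -100.0  100.0
print("OLS estimate of beta:",
      round(sm.OLS(O, sm.add_constant(T)).fit().params[1], 2))    # ~0
```

The single OLS coefficient, however well estimated, says nothing about who gains and who loses.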

The heterogeneity problem does not just turn up as an external validity problem when trying to ‘export’ regression results to different times or different target populations. It is also often an internal problem for the millions of OLS estimates that economists produce every year.

Marketization undermining the welfare system

27 June, 2017 at 09:56 | Posted in Economics | 2 Comments

During the last couple of decades Sweden has tried to marketize its public welfare sector. The prime mover behind the marketization has (allegedly) been the urge for cost minimization, freedom of choice, and improved quality. The results have (unsurprisingly) been far from successful.

In a recent dissertation presented at Uppsala University, Linda Moberg summarizes her findings on the implications of the marketization trend for the Swedish eldercare system:

The overall aim of this dissertation has been to investigate what implications marketization has had for the organization of Swedish eldercare. In particular, it has asked how marketization, in the form of privatized provision, increased competition, and user choice, has transformed the relationship between service users, professionals, and the state …

Previous research has indicated … that municipalities’ ability to write monitorable contracts often is inadequate and that the requirements often are formulated in such a way that it cannot be retrospectively assessed whether the providers have adhered to them … In addition, scholars have also found that few municipalities audit and evaluate their eldercare on a regular basis … Taken together, this indicates that it is not unproblematic for the municipalities to take on the altered regulatory role that the marketization reforms have assigned to them. Furthermore, the lack of direct public control over the quality in the system may result in, if quality differences between different providers become too wide, an undermining of the long standing goal of social equality in the Swedish eldercare system. An apparent risk, given the difficulty in obtaining information about quality differences between providers documented in the dissertation, is that better-educated or more resourceful users gain an advantage in making informed choices and thereby get access to the best services …

The increased reliance on marketization has not only altered the regulatory relationship between the users and the municipalities, it has also contributed to a system where the ability of the staff to control and enforce service quality within eldercare risks being reduced.

Neoliberals and libertarians have always provided a lot of ideologically founded ideas and ‘theories’ to underpin their Panglossian view of markets. But when these are tested against reality, they usually turn out to be wrong. The promised results are simply not to be found. And that goes for privatized eldercare too.

The neoliberal argument behind marketization of public welfare systems is that it not only decreases the role of government, but also increases freedom of choice and improves quality. This has not happened. As has proved to be the case with other neoliberal ideas, privatization—when tested—has not been able to deliver the results promised by empty speculation.

No one should be surprised!

Kenneth Arrow explained it all already back in 1963:

Under ideal insurance the patient would actually have no concern with the informational inequality between himself and the physician, since he would only be paying by results anyway, and his utility position would in fact be thoroughly guaranteed. In its absence he wants to have some guarantee that at least the physician is using his knowledge to the best advantage. This leads to the setting up of a relationship of trust and confidence, one which the physician has a social obligation to live up to … The social obligation for best practice is part of the commodity the physician sells, even though it is a part that is not subject to thorough inspection by the buyer.

One consequence of such trust relations is that the physician cannot act, or at least appear to act, as if he is maximizing his income at every moment of time. As a signal to the buyer of his intentions to act as thoroughly in the buyer’s behalf as possible, the physician avoids the obvious stigmata of profit-maximizing … The very word, ‘profit’ is a signal that denies the trust relation.

Kenneth Arrow, “Uncertainty and the Welfare Economics of Medical Care,” American Economic Review 53(5), 1963.

Bank of England goes MMT

26 June, 2017 at 17:25 | Posted in Economics | 2 Comments

 

Keynes & MMT

25 June, 2017 at 18:40 | Posted in Economics | 6 Comments

[Bendixen says the] old ‘metallist’ view of money is superstitious, and Dr. Bendixen trounces it with the vigour of a convert. Money is the creation of the State; it is not true to say that gold is international currency, for international contracts are never made in terms of gold, but always in terms of some national monetary unit; there is no essential or important distinction between notes and metallic money; money is the measure of value, but to regard it as having value itself is a relic of the view that the value of money is regulated by the value of the substance of which it is made, and is like confusing a theatre ticket with the performance. With the exception of the last, the only true interpretation of which is purely dialectical, these ideas are undoubtedly of the right complexion. It is probably true that the old ‘metallist’ view and the theories of regulation of note issue based on it do greatly stand in the way of currency reform, whether we are thinking of economy and elasticity or of a change in the standard; and a gospel which can be made the basis of a crusade on these lines is likely to be very useful to the world, whatever its crudities or terminology.

J. M. Keynes, “Theorie des Geldes und der Umlaufsmittel, by Ludwig von Mises; Geld und Kapital, by Friedrich Bendixen” (review), Economic Journal, 1914

Panis Angelicus (personal)

25 June, 2017 at 18:28 | Posted in Varia | Comments Off on Panis Angelicus (personal)

 

Visa från Utanmyra (personal)

25 June, 2017 at 11:30 | Posted in Varia | 1 Comment

 

What is a statistical model?

24 June, 2017 at 14:05 | Posted in Statistics & Econometrics | 1 Comment

My critique is that the currently accepted notion of a statistical model is not scientific; rather, it is a guess at what might constitute (scientific) reality without the vital element of feedback, that is, without checking the hypothesized, postulated, wished-for, natural-looking (but in fact only guessed) model against that reality. To be blunt, as far as is known today, there is no such thing as a concrete i.i.d. (independent, identically distributed) process, not because this is not desirable, nice, or even beautiful, but because Nature does not seem to be like that … As Bertrand Russell put it at the end of his long life devoted to philosophy, “Roughly speaking, what we know is science and what we don’t know is philosophy.” In the scientific context, but perhaps not in the applied area, I fear statistical modeling today belongs to the realm of philosophy.

To make this point seem less erudite, let me rephrase it in cruder terms. What would a scientist expect from statisticians, once he became interested in statistical problems? He would ask them to explain to him, in some clear-cut cases, the origin of randomness frequently observed in the real world, and furthermore, when this explanation depended on the device of a model, he would ask them to continue to confront that model with the part of reality that the model was supposed to explain. Something like this was going on three hundred years ago … But in our times the idea somehow got lost when i.i.d. became the pampered new baby.

Rudolf Kalman

Should we define randomness in terms of probability? If we do, we have to accept that to speak of randomness we also have to presuppose the existence of nomological probability machines, since probabilities cannot be spoken of — and, to be strict, do not exist at all — without specifying such system contexts. Accepting Haavelmo’s domain of probability theory and sample space of infinite populations — just like Fisher’s ‘hypothetical infinite population,’ von Mises’ ‘collective’ or Gibbs’ ‘ensemble’ — also implies that judgments are made on the basis of observations that are actually never made!

Infinitely repeated trials or samplings never take place in the real world. So they cannot provide a sound inductive basis for a science with aspirations of explaining real-world socio-economic processes, structures or events. It is not tenable. And the way social scientists — including economists and econometricians — often uncritically and without argument simply assume that they can apply probability distributions from statistical theory to their own areas of research is therefore not acceptable.

This also means, importantly, that if you cannot show that your data satisfy all the conditions of the probabilistic nomological machine — including, e.g., that the deviations follow a normal distribution — then the statistical inferences built upon them lack sound foundations.
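At a minimum, some of those conditions can be confronted with the data rather than merely assumed. A quick sketch (simulated data of my own, purely for illustration) of checking the normality condition on regression deviations:

```python
# Illustrative only: heavy-tailed errors are simulated, and a normality test on
# the OLS residuals signals that the 'normal curve' condition is not met.
import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 500)
y = 2 + 0.5 * x + rng.standard_t(df=3, size=500)   # deviations drawn from a t(3), not a normal

resid = sm.OLS(y, sm.add_constant(x)).fit().resid
stat, p = stats.shapiro(resid)
print(f"Shapiro-Wilk p-value: {p:.4f}")            # typically tiny: normality is rejected
```

Passing such a test is of course no proof that the nomological machine is in place, but failing it is a clear warning that inferences resting on it are built on sand.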

Trying to apply statistical models outside overly simple nomological machines like coin tossing and roulette wheels, scientists run into serious problems, the greatest being the need for lots of more or less unsubstantiated — and sometimes wilfully hidden — assumptions in order to make any sustainable inferences from the models. Many of the results that economists and other social scientists present with their statistical/econometric models depend to a substantial extent on the use of mostly unfounded ‘technical’ assumptions.

Making outlandish statistical assumptions does not provide a solid ground for doing relevant social science. It is rather a recipe for producing fiction masquerading as science.

Mainstream monetary theory — neat, plausible, and utterly wrong

22 June, 2017 at 16:21 | Posted in Economics | 10 Comments

In modern times legal currencies are totally based on fiat. Currencies no longer have intrinsic value (as gold and silver had). What gives them value is basically the legal status given to them by government and the simple fact that you have to pay your taxes with them. That also enables governments to run a kind of monopoly business in which they can never run out of money. Hence spending becomes the prime mover, and taxing and borrowing are degraded to following acts. If we have a depression, the solution, then, is not austerity. It is spending. Budget deficits are not the major problem, since fiat money means that governments can always create more money.

Financing quantitative easing, fiscal expansion, and other similar operations is made possible by simply crediting a bank account and thereby – by a single keystroke – actually creating money. One of the most important reasons why so many countries are still stuck in depression-like economic quagmires is that people in general – including most mainstream economists – simply don’t understand the workings of modern monetary systems. The result is totally and utterly wrong-headed austerity policies, emanating from a groundless fear of creating inflation via central banks printing money, in a situation where we should rather fear deflation and inadequate effective demand.

The mainstream neoclassical textbook concept of the money multiplier assumes that banks automatically expand the credit money supply to a multiple of their aggregate reserves. If the required reserve-deposit ratio is 5%, the money supply should be about twenty times the aggregate reserves of banks. In this way the money multiplier concept assumes that the central bank controls the money supply by setting the required reserve ratio.
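The textbook arithmetic behind that ‘twenty times’ figure is a simple geometric series of re-lending rounds. A minimal sketch of that textbook story (my own illustration, ignoring currency drain):

```python
# The textbook money-multiplier story the post goes on to criticize: an initial
# reserve injection is re-lent round after round, and total deposits converge
# to (1 / r) times the injection.
r = 0.05             # required reserve ratio
injection = 100.0    # new reserves
deposits, loanable = 0.0, injection
for _ in range(500):             # iterate the re-lending rounds
    deposits += loanable
    loanable *= (1 - r)          # each round, (1 - r) of the new deposits is lent on
print(round(deposits))           # ~2000, i.e. injection / r: a multiplier of 20
```

It is this mechanical story that the rest of the post argues gets the real-world causation backwards.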

In his Macroeconomics – just to take an example – Greg Mankiw writes:
 

We can now see that the money supply is proportional to the monetary base. The factor of proportionality … is called the money multiplier … Each dollar of the monetary base produces m dollars of money. Because the monetary base has a multiplied effect on the money supply, the monetary base is called high-powered money.

The money multiplier concept is – as can be seen from the quote above – nothing but one big fallacy. This is not the way credit is created in a monetary economy. It’s nothing but a monetary myth that the monetary base can play such a decisive role in a modern credit-run economy with fiat money.

In the real world banks first extend credit and then look for reserves. So the money multiplier basically also gets the causation wrong. At a deep, fundamental level the supply of money is endogenous.

One may rightly wonder why on earth this pet mainstream neoclassical fairy tale is still in the textbooks and taught to economics undergraduates. Giving the impression that banks exist simply to passively transfer savings into investment, it is such a gross misrepresentation of what goes on in the real world, that there is only one place for it — and that is in the …

Den blomstertid

22 June, 2017 at 16:03 | Posted in Varia | Comments Off on Den blomstertid

 

The American carnage

22 June, 2017 at 13:38 | Posted in Economics | Comments Off on The American carnage

President Trump, in his inaugural address and elsewhere, rightly says that over the decades since 1980 American household distributions of income and wealth became strikingly unequal. But if recent budget and legislative proposals from Trump and the House of Representatives come into effect, today’s distributional mess would become visibly worse.

I will sketch how the mess happened, then I will propose some ideas about how it might be cleaned up. I will show that even with lucky institutional changes and good policy, it would take several more decades to undo the “American carnage” that the president described …

Trump and the Congress’s budget and legislative proposals could only work for his “struggling families” and “forgotten people” if they would generate strong trickle-down growth. Structural constraints on income distribution and wealth dynamics won’t let trickle-down happen. His slogan about “America First” is for the top one percent of income distribution – effectively a “capitalist” class – not for “workers” in the middle of the income distribution or the struggling, forgotten households further down.

I have outlined a feasible progressive alternative, which would generate broad-based progress. Progressive changes may not take hold. If not, and if Trump-style interventions materialize, the distributional mess and “American carnage” will only get worse.

Lance Taylor

Simpson’s paradox

21 June, 2017 at 08:29 | Posted in Statistics & Econometrics | Comments Off on Simpson’s paradox


From a more theoretical perspective, Simpson’s paradox importantly shows that causality can never be reduced to a question of statistics or probabilities, unless you are — miraculously — able to keep constant all other factors that influence the probability of the outcome studied.

To understand causality we always have to relate it to a specific causal structure. Statistical correlations are never enough. No structure, no causality.

Simpson’s paradox is an interesting paradox in itself, but it can also highlight a deficiency in the traditional econometric approach towards causality. Say you have 1000 observations on men and an equal number of observations on women applying for admission to university studies, and that 70% of the men are admitted but only 30% of the women. Running a logistic regression to find out the odds ratios (and probabilities) for men and women on admission, females seem to be in a less favourable position (‘discriminated’ against) compared to males (male odds are 2.33, female odds are 0.43, giving an odds ratio of 5.44). But once we find out that males and females apply to different departments, we may well get a Simpson’s paradox result where males turn out to be ‘discriminated’ against (say 800 males apply for economics studies (680 admitted) and 200 for physics studies (20 admitted), while 100 females apply for economics studies (90 admitted) and 900 for physics studies (210 admitted) — giving odds ratios of 0.62 and 0.37).
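The reversal is easy to verify directly from the admission counts given above (a quick check in code, using only the numbers of the example):

```python
# Verifying the Simpson's paradox reversal: the aggregate odds ratio favours
# men, while the within-department odds ratios favour women.
def odds(admitted, applied):
    return admitted / (applied - admitted)

# (admitted, applied) by sex and department, as given in the example
men   = {"economics": (680, 800), "physics": (20, 200)}
women = {"economics": (90, 100),  "physics": (210, 900)}

m_adm = sum(a for a, _ in men.values())
m_app = sum(n for _, n in men.values())
w_adm = sum(a for a, _ in women.values())
w_app = sum(n for _, n in women.values())
print("aggregate odds ratio (men/women):",
      round(odds(m_adm, m_app) / odds(w_adm, w_app), 2))    # ~5.44

for dept in men:
    ratio = odds(*men[dept]) / odds(*women[dept])
    print(f"{dept} odds ratio (men/women): {ratio:.2f}")     # both well below 1
```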

Econometric patterns should never be seen as anything other than possible clues to follow. From a critical realist perspective it is obvious that behind observable data there are real structures and mechanisms operating, things that are — if we really want to understand, explain and (possibly) predict things in the real world — more important to get hold of than simply to correlate and regress observable variables.

Math cannot establish the truth value of a fact. Never has. Never will.

Paul Romer

Logistic regression (student stuff)

21 June, 2017 at 08:25 | Posted in Statistics & Econometrics | Comments Off on Logistic regression (student stuff)

 

And in the video below (in Swedish) yours truly shows how to perform a logit regression using Gretl:
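For readers who do not use Gretl, the same kind of exercise can be sketched in a few lines of Python (with synthetic data, purely for illustration):

```python
# Sketch of a logit regression in Python (synthetic data for illustration only).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 500
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
p = 1 / (1 + np.exp(-(0.5 + 1.2 * x1 - 0.8 * x2)))   # 'true' outcome probabilities
y = rng.binomial(1, p)                                # binary outcome

X = sm.add_constant(np.column_stack([x1, x2]))
model = sm.Logit(y, X).fit(disp=False)
print(model.summary())
print("odds ratios:", np.round(np.exp(model.params), 2))   # exponentiated coefficients
```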

Ekonomi och ojämlikhet

20 June, 2017 at 14:22 | Posted in Economics | Comments Off on Ekonomi och ojämlikhet

Last autumn Malmö högskola arranged a conversation on economics and inequality in today’s Sweden. Under the competent moderation of Cecilia Nebel, cartoonist Sara Granér, professor Tapio Salonen and yours truly discussed what the growing income and wealth gaps are doing to our society.

Those of you who were unable to be there can follow the conversation here.

Do you want to get a Nobel prize? Eat chocolate and move to Chicago!

20 June, 2017 at 12:53 | Posted in Varia | 2 Comments

[Chart: chocolate consumption and Nobel laureates (Source)]

As we’ve noticed, again and again, correlation is not the same as causation …

If you want to get the prize in economics — and want to be on the sure side — yours truly would suggest you complement your intake of chocolate with a move to Chicago.

Out of the 78 laureates who have been awarded “The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel,” 28 have been affiliated with the University of Chicago — that is 36%. The world is really a small place when it comes to economics …

Causality matters!

20 June, 2017 at 10:29 | Posted in Statistics & Econometrics | 1 Comment

 

Causality in social sciences — and economics — can never solely be a question of statistical inference. Causality entails more than predictability, and really explaining social phenomena in depth requires theory. Analysis of variation — the foundation of all econometrics — can never in itself reveal how these variations are brought about. Only when we are able to tie actions, processes or structures to the statistical relations detected can we say that we are getting at relevant explanations of causation.
 

Most facts have many different, possible, alternative explanations, but we want to find the best of all contrastive explanations (since all real explanation takes place relative to a set of alternatives). So which is the best explanation? Many scientists, influenced by statistical reasoning, think that the likeliest explanation is the best explanation. But the likelihood of x is not in itself a strong argument for thinking it explains y. I would rather argue that what makes one explanation better than another are things like aiming for and finding powerful, deep, causal features and mechanisms that we have warranted and justified reasons to believe in. Statistical reasoning — especially the variety based on a Bayesian epistemology — generally has no room for these kinds of explanatory considerations. The only thing that matters is the probabilistic relation between evidence and hypothesis. That is also one of the main reasons I find abduction — inference to the best explanation — a better description and account of what constitutes actual scientific reasoning and inference.

For more on these issues — see the chapter “Capturing causality in economics and the limits of statistical inference” in my On the use and misuse of theories and models in economics.

In the social sciences … regression is used to discover relationships or to disentangle cause and effect. However, investigators have only vague ideas as to the relevant variables and their causal order; functional forms are chosen on the basis of convenience or familiarity; serious problems of measurement are often encountered.

Regression may offer useful ways of summarizing the data and making predictions. Investigators may be able to use summaries and predictions to draw substantive conclusions. However, I see no cases in which regression equations, let alone the more complex methods, have succeeded as engines for discovering causal relationships.

David Freedman

Some statisticians and data scientists think that algorithmic formalisms somehow give them access to causality. That is, however, simply not true. Assuming ‘convenient’ things like faithfulness or stability is not to give proofs. It is to assume what has to be proven. Deductive-axiomatic methods used in statistics do not produce evidence for causal inferences. The real causality we are searching for is the one existing in the real world around us. If there is no warranted connection between axiomatically derived theorems and the real world, well, then we haven’t really obtained the causation we are looking for.
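A trivial illustration of the point (my own toy example, not from the post): two opposite causal structures can generate data with exactly the same correlation, so purely statistical processing cannot tell them apart without structural assumptions.

```python
# Structure A: x causes y.  Structure B: y causes x.  Both produce the same
# correlation, so the observed association alone cannot identify the direction.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Structure A: x -> y
x_a = rng.normal(0, 1, n)
y_a = 0.6 * x_a + rng.normal(0, 0.8, n)

# Structure B: y -> x, parameters chosen to reproduce the same joint moments
y_b = rng.normal(0, 1, n)
x_b = 0.6 * y_b + rng.normal(0, 0.8, n)

print("correlation under A:", round(np.corrcoef(x_a, y_a)[0, 1], 3))   # ~0.6
print("correlation under B:", round(np.corrcoef(x_b, y_b)[0, 1], 3))   # ~0.6
```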

Hauptstadt Hamburg (personal)

19 June, 2017 at 15:31 | Posted in Varia | 1 Comment

Yours truly is heading (again) for Hamburg this summer, so of course I just had to buy this …

[Magazine cover: Der Spiegel, Hamburg issue]

Leontief on the dismal state of economics

18 June, 2017 at 19:25 | Posted in Economics | 1 Comment

Much of current academic teaching and research has been criticized for its lack of relevance, that is, of immediate practical impact … I submit that the consistently indifferent performance in practical applications is in fact a symptom of a fundamental imbalance in the present state of our discipline. The weak and all too slowly growing empirical foundation clearly cannot support the proliferating superstructure of pure, or should I say, speculative economic theory …

Uncritical enthusiasm for mathematical formulation tends often to conceal the ephemeral substantive content of the argument behind the formidable front of algebraic signs … In the presentation of a new model, attention nowadays is usually centered on a step-by-step derivation of its formal properties. But if the author — or at least the referee who recommended the manuscript for publication — is technically competent, such mathematical manipulations, however long and intricate, can even without further checking be accepted as correct. Nevertheless, they are usually spelled out at great length. By the time it comes to interpretation of the substantive conclusions, the assumptions on which the model has been based are easily forgotten. But it is precisely the empirical validity of these assumptions on which the usefulness of the entire exercise depends.

What is really needed, in most cases, is a very difficult and seldom very neat assessment and verification of these assumptions in terms of observed facts. Here mathematics cannot help and because of this, the interest and enthusiasm of the model builder suddenly begins to flag: “If you do not like my set of assumptions, give me another and I will gladly make you another model; have your pick.” …

But shouldn’t this harsh judgment be suspended in the face of the impressive volume of econometric work? The answer is decidedly no. This work can be in general characterized as an attempt to compensate for the glaring weakness of the data base available to us by the widest possible use of more and more sophisticated statistical techniques. Alongside the mounting pile of elaborate theoretical models we see a fast-growing stock of equally intricate statistical tools. These are intended to stretch to the limit the meager supply of facts … Like the economic models they are supposed to implement, the validity of these statistical tools depends itself on the acceptance of certain convenient assumptions pertaining to stochastic properties of the phenomena which the particular models are intended to explain; assumptions that can be seldom verified.

Wassily Leontief

Dievaines

18 June, 2017 at 19:06 | Posted in Varia | Comments Off on Dievaines

 

Absolutely fabulous!

Der Wind hat sich gedreht

18 June, 2017 at 16:21 | Posted in Varia | Comments Off on Der Wind hat sich gedreht

 

Back in 1980 yours truly had the pleasure of studying at the University of Vienna. When not studying or paying weekly visits to Berggasse 19, I used to listen to songs like this on my portable music player. To me it is as true today as it was to Degenhardt in 1980 that the only thing we seem to learn from history is that a lot of people don’t (want to) learn anything from it …

Nationalekonomi — ett annat slags vetenskap

18 June, 2017 at 14:33 | Posted in Economics | Comments Off on Nationalekonomi — ett annat slags vetenskap

An economics that assumes its subject matter to be a pure natural phenomenon, or merely a thought experiment, is not a real economics but a different kind of science …

 
  

‘National-ekonomien i stöpsleven,’ 1936

 
 

Proving gender discrimination using randomization (student stuff)

17 June, 2017 at 10:06 | Posted in Statistics & Econometrics | Comments Off on Proving gender discrimination using randomization (student stuff)
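For readers who prefer code to video, the randomization logic can be sketched in a few lines (the promotion numbers below are made up for illustration and not taken from the video):

```python
# Permutation (randomization) test: is the observed male-female gap in
# promotion recommendations larger than what re-shuffling the gender labels
# would produce by chance?
import numpy as np

rng = np.random.default_rng(1)
promoted = np.array([1] * 21 + [0] * 3 + [1] * 14 + [0] * 10)   # 21/24 men, 14/24 women (illustrative)
is_male  = np.array([1] * 24 + [0] * 24)

observed_gap = promoted[is_male == 1].mean() - promoted[is_male == 0].mean()

gaps = []
for _ in range(10_000):
    shuffled = rng.permutation(is_male)                          # break any real link to gender
    gaps.append(promoted[shuffled == 1].mean() - promoted[shuffled == 0].mean())

p_value = np.mean(np.array(gaps) >= observed_gap)
print(f"observed gap: {observed_gap:.3f}, permutation p-value: {p_value:.4f}")
```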

 

Text och musik

17 June, 2017 at 08:54 | Posted in Varia | Comments Off on Text och musik

At a time when the airwaves are drowning in the commercial radio stations’ self-satisfied verbal diarrhoea and utterly vacuous, puerile drivel, many of us have more or less given up. Radio, which once was a source of both refreshment and reflection, has degenerated into a postmodern idol of superficiality.

But there is light in the darkness!

In the programme Text och musik with Eric Schüldt — broadcast on Sunday mornings on P2 between 11 and 12 — you can listen to serious music and a host who actually has something to say instead of just letting his jaw flap.

A balm for the soul.

In last Sunday’s programme, this beautiful Georgian song was played, among other things:
spotify:track:1xMsuovvLgIq4Mx20k5CFJ

Ed Leamer and the pitfalls of econometrics

16 June, 2017 at 18:09 | Posted in Statistics & Econometrics | 1 Comment

Ed Leamer’s Tantalus on the Road to Asymptopia is one of my favourite critiques of econometrics, and for the benefit of those who are not versed in the econometric jargon, this handy summary gives the gist of it in plain English:

[Chart: a plain-English summary of Leamer’s argument]
 
Most work in econometrics and regression analysis is — still — done on the assumption that the researcher has a theoretical model that is ‘true.’ Based on this belief of having a correct specification for an econometric model or running a regression, one proceeds as if the only problems remaining to solve have to do with measurement and observation.

When things sound too good to be true, they usually aren’t. And that goes for econometric wet dreams too. The snag is, as Leamer convincingly argues, that there is pretty little to support the perfect specification assumption. Looking around in social science and economics, we don’t find a single regression or econometric model that lives up to the standards set by the ‘true’ theoretical model — and there is pretty little that gives us reason to believe things will be different in the future.

To think that we are able to construct a model where all relevant variables are included and the functional relationships between them are correctly specified is not only a belief without support, but a belief impossible to support.

The theories we work with when building our econometric regression models are insufficient. No matter what we study, there are always some variables missing, and we don’t know the correct way to functionally specify the relationships between the variables.

Every regression model constructed is misspecified. There is always an endless list of possible variables to include, and endless possible ways to specify the relationships between them. So every applied econometrician comes up with his own specification and ‘parameter’ estimates. The econometric Holy Grail of consistent and stable parameter values is nothing but a dream.

In order to draw inferences from data as described by econometric texts, it is necessary to make whimsical assumptions. The professional audience consequently and properly withholds belief until an inference is shown to be adequately insensitive to the choice of assumptions. The haphazard way we individually and collectively study the fragility of inferences leaves most of us unconvinced that any inference is believable. If we are to make effective use of our scarce data resource, it is therefore important that we study fragility in a much more systematic way. If it turns out that almost all inferences from economic data are fragile, I suppose we shall have to revert to our old methods …

Ed Leamer
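Leamer’s call to study the fragility of inferences more systematically is easy to illustrate. In the toy example below (my own construction, not Leamer’s own procedure), the estimated ‘effect’ of x on y flips sign depending on which controls happen to be included:

```python
# Fragility of regression inferences: x has no direct effect on y, yet the
# estimated coefficient swings from about -0.5 to +0.5 with the control set.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 1_000
z1 = rng.normal(size=n)
z2 = rng.normal(size=n)
x = 0.8 * z1 - 0.8 * z2 + rng.normal(size=n)
y = z1 + z2 + rng.normal(size=n)            # y depends on z1 and z2, not on x

specs = {
    "no controls":   np.column_stack([x]),
    "control z1":    np.column_stack([x, z1]),
    "control z2":    np.column_stack([x, z2]),
    "both controls": np.column_stack([x, z1, z2]),
}
for name, X in specs.items():
    beta_x = sm.OLS(y, sm.add_constant(X)).fit().params[1]
    print(f"{name:>13}: estimated effect of x = {beta_x:+.2f}")
```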

A rigorous application of econometric methods in economics really presupposes that the phenomena of our real world economies are ruled by stable causal relations between variables.  Parameter-values estimated in specific spatio-temporal contexts are presupposed to be exportable to totally different contexts. To warrant this assumption one, however, has to convincingly establish that the targeted acting causes are stable and invariant so that they maintain their parametric status after the bridging. The endemic lack of predictive success of the econometric project indicates that this hope of finding fixed parameters is a hope for which there really is no other ground than hope itself.

The theoretical conditions that have to be fulfilled for regression analysis and econometrics to really work are nowhere even closely met in reality. Making outlandish statistical assumptions does not provide a solid ground for doing relevant social science and economics. Although regression analysis and econometrics have become the most used quantitative methods in social sciences and economics today, it’s still a fact that the inferences made from them are invalid.

Econometrics — and regression analysis — is basically a deductive method. Given the assumptions (such as manipulability, transitivity, separability, additivity, linearity, etc.), it delivers deductive inferences. The problem, of course, is that we will never completely know when the assumptions are right. Conclusions can only be as certain as their premises — and that also applies to econometrics and regression analysis.

What is it that DSGE models — really — explain?

16 June, 2017 at 16:55 | Posted in Economics | 3 Comments

Now it is “dynamic stochastic general equilibrium” (DSGE) models inspired by the Lucas critique that have failed to predict or even explain the Great Recession of 2007–2009. More precisely, the implicit “explanations” based on these models are that the recession, including the millions of net jobs lost, was primarily due to large negative shocks to both technology and willingness to work … So can the reputation of modern macroeconomics be rehabilitated by simply modifying DSGE models to include a few more realistic shocks? …

A simple example helps illustrate for the uninitiated just how DSGE models work and why it should come as little surprise that they are largely inadequate for the task of explaining the Great Recession.

For this simple DSGE model, consider the following technical assumptions: i) an infinitely-lived representative agent with rational expectations and additive utility in current and discounted future log consumption and leisure; ii) a Cobb-Douglas aggregate production function with labor-augmenting technology; iii) capital accumulation with a fixed depreciation rate; and iv) a stochastic process for exogenous technology shocks …

It is worth making two basic points about the setup. First, by construction, technology shocks are the only underlying source of fluctuations in this simple model. Thus, if we were to assume that U.S. real GDP was the literal outcome of this model, we would be assuming a priori that fluctuations in real GDP were ultimately due to technology. When faced with the Great Recession, this model would have no choice but to imply that technology shocks were somehow to blame. Second, despite the underlying role of technology, the observed fluctuations in real GDP can be divided into those that directly reflect the behavior of the exogenous shocks and those that reflect the endogenous capital accumulation in response to these shocks.

To be more precise about these two points, it is necessary to assume a particular process for the exogenous technology shocks. In this case, let’s assume technology follows a random walk with drift [and assuming a 100% depreciation rate of capital]…

So, with this simple DSGE model and for typical measures of the capital share, we have the implication that output growth follows an AR(1) process with an AR coefficient of about one third. This is notable given that such a time-series model does reasonably well as a parsimonious description of quarterly real GDP dynamics for the U.S. economy …

However, the rather absurd assumption of a 100% depreciation rate at the quarterly horizon would surely still have prompted a sharp question or two in a University of Chicago seminar back in the days. So, with this in mind, what happens if we consider the more general case?

Unfortunately, for more realistic depreciation rates, we cannot solve the model analytically. Instead, taking a log-linearization around steady state, we can use standard methods to solve for output growth … This simple DSGE model is able to mimic the apparent AR(1) dynamics in real GDP growth. But it does so by assuming the exogenous technology shocks also follow an AR(1) process with an AR coefficient that happens to be the same as the estimated AR coefficient for output growth. Thus, the magic trick has been revealed: a rabbit was stuffed into the hat and then a rabbit jumped out of the hat …

Despite their increasing sophistication, DSGE models share one key thing in common with their RBC predecessors. After more than two decades of earnest promises to do better in the “future directions” sections of academic papers, they still have those serially-correlated shocks. Thus, the models now “explain” variables like real GDP, inflation, and interest rates as the outcome of more than just serially-correlated technology shocks. They also consider serially-correlated preference shocks and serially-correlated policy shocks …

James Morley
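Morley’s first point, that the simple model with 100% depreciation implies output growth follows an AR(1) process with a coefficient equal to the capital share, is easy to reproduce numerically. A sketch under exactly those stylized assumptions (log utility, Cobb-Douglas production, random-walk technology), with parameter values of my own choosing:

```python
# Simulate the simple RBC/DSGE example: with log utility and 100% depreciation,
# the saving rate is constant (alpha * beta), and log output growth inherits an
# AR(1) structure with coefficient equal to the capital share alpha.
import numpy as np

rng = np.random.default_rng(3)
alpha, beta = 1 / 3, 0.99                       # capital share, discount factor
T = 100_000
lnA = np.cumsum(rng.normal(0.004, 0.01, T))     # random-walk (log) technology with drift

lnY = np.zeros(T)
lnK = np.zeros(T)
for t in range(T - 1):
    lnY[t] = lnA[t] + alpha * lnK[t]                 # Y = A * K^alpha (labour normalized)
    lnK[t + 1] = np.log(alpha * beta) + lnY[t]       # K' = alpha * beta * Y
lnY[-1] = lnA[-1] + alpha * lnK[-1]

g = np.diff(lnY)                                     # output growth
ar1 = np.corrcoef(g[1:], g[:-1])[0, 1]
print(f"AR(1) coefficient of output growth: {ar1:.3f} (capital share = {alpha:.3f})")
```

In the more general case Morley describes, with depreciation below 100%, matching the data instead requires assuming that the shocks themselves follow an AR(1) process: the rabbit being stuffed into the hat.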

And still mainstream economists seem to be impressed by the ‘rigour’ brought to macroeconomics by New-Classical-New-Keynesian DSGE models, with their rational expectations and microfoundations!

It is difficult to see why.

Take the rational expectations assumption. Rational expectations in the mainstream economists’ world implies that relevant distributions have to be time independent. This amounts to assuming that an economy is like a closed system with known stochastic probability distributions for all different events. In reality it is straining one’s beliefs to try to represent economies as outcomes of stochastic processes. An existing economy is a single realization tout court, and hardly conceivable as one realization out of an ensemble of economy-worlds, since an economy can hardly be conceived as being completely replicated over time. It is — to say the least — very difficult to see any similarity between these modelling assumptions and the expectations of real persons. In the world of the rational expectations hypothesis we are never disappointed in any other way than when we lose at the roulette wheels. But real life is not an urn or a roulette wheel. And that is also the reason why allowing for cases where agents make ‘predictable errors’ in DSGE models doesn’t take us any closer to a relevant and realist depiction of actual economic decisions and behaviours. If we really want to have anything of interest to say on real economies, financial crises, and the decisions and choices real people make, we have to replace the rational expectations hypothesis with more relevant and realistic assumptions concerning economic agents and their expectations than childish roulette and urn analogies.

‘Rigorous’ and ‘precise’ DSGE models cannot be considered anything other than unsubstantiated conjectures as long as they aren’t supported by evidence from outside the theory or model. To my knowledge, no decisive empirical evidence of any kind has been presented.

No matter how precise and rigorous the analysis, and no matter how hard one tries to cast the argument in modern mathematical form, it does not push economic science forward a single millimetre if it does not stand the acid test of relevance to the target. No matter how clear, precise, rigorous or certain the inferences delivered inside these models are, they say nothing about real-world economies.

Proving things ‘rigorously’ in DSGE models is at most a starting-point for doing an interesting and relevant economic analysis. Forgetting to supply export warrants to the real world makes the analysis an empty exercise in formalism without real scientific value.

Mainstream economists think there is a gain from the DSGE style of modeling in its capacity to offer some kind of structure around which to organise discussions. To me that sounds more like a religious theoretical-methodological dogma, where one paradigm rules in divine hegemony. That’s not progress. That’s the death of economics as a science.

Stjärnorna kvittar det lika (personal)

16 June, 2017 at 09:45 | Posted in Varia | Comments Off on Stjärnorna kvittar det lika (personal)

 

 

In loving memory of my mother. Nils Ferlin was her favourite poet.

What kind of realist am I?

14 June, 2017 at 17:51 | Posted in Theory of Science & Methodology | 1 Comment

Some commentators on this blog seem to be of the opinion that since yours truly is critical of mainstream economics and asks for more relevance and realism, I am bound to be a “naive” realist or empiricist.

Nothing could be further from the truth!

In a time when scientific relativism is expanding, it is important to keep up the claim for not reducing science to a purely discursive level. We have to maintain the Enlightenment tradition of thinking of reality as principally independent of our views of it and of the main task of science as studying the structure of this reality. Perhaps the most important contribution a researcher can make is to reveal what this reality, which is the object of science, actually looks like.

Science is made possible by the fact that there are structures that are durable and are independent of our knowledge or beliefs about them. There exists a reality beyond our theories and concepts of it. It is this independent reality that our theories in some way deal with. Contrary to positivism, I cannot see that the main task of science is to detect event-regularities between observed facts. Rather, the task must be conceived as identifying the underlying structure and forces that produce the observed events.

The problem with positivist social science is not that it gives the wrong answers, but rather that in a strict sense it does not give answers at all. Its explanatory models presuppose that the social reality is ‘closed,’ and since social reality is fundamentally ‘open,’ models of that kind cannot explain anything of what happens in such a universe. Positivist social science has to postulate closed conditions to make its models operational and then – totally unrealistically – impute these closed conditions to society’s real structure.

In the face of the kind of methodological individualism and rational choice theory that dominate positivist social science, we have to admit that even if knowledge of the aspirations and intentions of individuals is a necessary prerequisite for explaining social events, it is far from sufficient. Even the most elementary ‘rational’ actions in society presuppose the existence of social forms that cannot be reduced to the intentions of individuals.

The overarching flaw with methodological individualism and rational choice theory is basically that they reduce social explanations to purportedly individual characteristics. But many of the characteristics and actions of the individual originate in and are made possible only through society and its relations. Society is not reducible to individuals, since the social characteristics, forces, and actions of the individual are determined by pre-existing social structures and positions. Even though society is not a volitional individual, and the individual is not an entity given outside of society, the individual (actor) and the society (structure) have to be kept analytically distinct. They are tied together through the individual’s reproduction and transformation of already given social structures.

What makes knowledge in social sciences possible is the fact that society consists of social structures and positions that influence the individuals of society, partly through their being the necessary prerequisite for the actions of individuals but also because they dispose individuals to act (within a given structure) in a certain way. These structures constitute the ‘deep structure’ of society.

Our observations and theories are concept-dependent without therefore necessarily being concept-determined. There is a reality existing independently of our knowledge and theories of it. Although we cannot apprehend it without using our concepts and theories, these are not the same as reality itself. Reality and our concepts of it are not identical. Social science is made possible by existing structures and relations in society that are continually reproduced and transformed by different actors.

Explanations and predictions of social phenomena require theory constructions. Just looking for correlations between events is not enough. One has to get under the surface and see the deeper underlying structures and mechanisms that essentially constitute the social system.

The basic question one has to pose when studying social relations and events is: what are the fundamental relations without which they would cease to exist? The answer will point to causal mechanisms and tendencies that act in the concrete contexts we study. Whether these mechanisms are activated, and what effects they will have if they are, is not possible to predict, since that depends on accidental and variable relations. Every social phenomenon is determined by a host of both necessary and contingent relations, and it is impossible in practice to have complete knowledge of these constantly changing relations. That is also why we can never confidently predict them. What we can do, by learning about the mechanisms of the structures of society, is to identify the driving forces behind them, thereby making it possible to indicate the direction in which things tend to develop.

The world itself should never be conflated with the knowledge we have of it. Science can only produce meaningful, relevant and realist knowledge if it acknowledges its dependence on the world out there. Ultimately, that also means that the critique yours truly levels against mainstream economics is that it does not take that ontological requirement seriously.

Solow being uncomfortable with ‘modern’ macroeconomics

12 June, 2017 at 18:53 | Posted in Economics | Comments Off on Solow being uncomfortable with ‘modern’ macroeconomics

So in what sense is this “dynamic stochastic general equilibrium” model firmly grounded in the principles of economic theory? I do not want to be misunderstood. Friends have reminded me that much of the effort of “modern macro” goes into the incorporation of important deviations from the Panglossian assumptions that underlie the simplistic application of the Ramsey model to positive macroeconomics. Research focuses on the implications of wage and price stickiness, gaps and asymmetries of information, long-term contracts, imperfect competition, search, bargaining and other forms of strategic behavior, and so on. That is indeed so, and it is how progress is made.

But this diversity only intensifies my uncomfortable feeling that something is being put over on us, by ourselves. Why do so many of those research papers begin with a bow to the Ramsey model and cling to the basic outline? Every one of the deviations that I just mentioned was being studied by macroeconomists before the “modern” approach took over. That research was dismissed as “lacking microfoundations.” My point is precisely that attaching a realistic or behavioral deviation to the Ramsey model does not confer microfoundational legitimacy on the combination. Quite the contrary: a story loses legitimacy and credibility when it is spliced to a simple, extreme, and on the face of it, irrelevant special case. This is the core of my objection: adding some realistic frictions does not make it any more plausible that an observed economy is acting out the desires of a single, consistent, forward-looking intelligence …

For completeness, I suppose it could also be true that the bow to the Ramsey model is like wearing the school colors or singing the Notre Dame fight song: a harmless way of providing some apparent intellectual unity, and maybe even a minimal commonality of approach. That seems hardly worthy of grown-ups, especially because there is always a danger that some of the in-group come to believe the slogans, and it distorts their work …

There has always been a purist streak in economics that wants everything to follow neatly from greed, rationality, and equilibrium, with no ifs, ands, or buts. Most of us have felt that tug. Here is a theory that gives you just that, and this time “everything” means everything: macro, not micro. The theory is neat, learnable, not terribly difficult, but just technical enough to feel like “science.”

Robert Solow

Yes, indeed, there certainly is a “purist streak in economics that wants everything to follow neatly from greed, rationality, and equilibrium, with no ifs, ands, or buts.” That purist streak has given birth to a kind of ‘deductivist blindness’ in mainstream economics, something that to a large extent also explains why mainstream economics contributes to causing economic crises rather than to solving them. But where does this ‘deductivist blindness’ come from? To answer that question we have to examine the methodology of mainstream economics.

The insistence on constructing models showing the certainty of logical entailment has been central in the development of mainstream economics. Insisting on formalistic (mathematical) modelling has more or less forced the economist to give up on realism and substitute axiomatics for real-world relevance. The price paid for the illusory rigour and precision has been monumentally high.

This deductivist orientation is the main reason behind the difficulty that mainstream economics has in terms of understanding, explaining and predicting what takes place in our societies. But it has also given mainstream economics much of its discursive power – at least as long as no one starts asking tough questions on the veracity of – and justification for – the assumptions on which the deductivist foundation is erected. Asking these questions is an important ingredient in a sustained critical effort at showing how nonsensical is the embellishing of a smorgasbord of models founded on wanting (often hidden) methodological foundations.

The mathematical-deductivist straitjacket used in mainstream economics presupposes atomistic closed systems — i.e., something that we find very little of in the real world, a world significantly at odds with an (implicitly) assumed logic world where deductive entailment rules the roost. Ultimately, then, the failings of modern mainstream economics have their roots in a deficient ontology. The kind of formal-analytical and axiomatic-deductive mathematical modelling that makes up the core of mainstream economics is hard to make compatible with a real-world ontology. It is also the reason why so many critics find mainstream economic analysis patently and utterly unrealistic and irrelevant. The empty formalism that Solow points at in his critique of ‘modern’ macroeconomics is still one of the main reasons behind the monumental failure of ‘modern’ macroeconomics.

The Rule of 72

11 June, 2017 at 18:31 | Posted in Economics | Comments Off on The Rule of 72

A quick way of finding an approximate answer to how long it takes for, e.g., a country’s income to double when income grows at x per cent per year: simply divide 72 by x.
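A quick check of how close the rule gets to the exact doubling time, ln 2 / ln(1 + x/100):

```python
# Rule of 72 versus the exact doubling time ln(2) / ln(1 + r).
import math

for pct in (1, 2, 3, 6, 9):
    rule = 72 / pct
    exact = math.log(2) / math.log(1 + pct / 100)
    print(f"growth {pct}%: rule of 72 -> {rule:.1f} years, exact -> {exact:.1f} years")
```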

Song For Guy

10 June, 2017 at 23:25 | Posted in Varia | Comments Off on Song For Guy

 

Inequality and education

10 June, 2017 at 13:24 | Posted in Economics | 3 Comments

Harvard economist and George Bush advisor Greg Mankiw is one of many mainstream economists who have been appealing to the education variable to explain the rising inequality we have seen over the last 30 years in the US and elsewhere in Western societies. Mankiw writes:

Even if the income gains are in the top 1 percent, why does that imply that the right story is not about education?…

If indeed a year of schooling guaranteed you precisely a 10 percent increase in earnings, then there is no way increasing education by a few years could move you from the middle class to the top 1 percent.

But it may be better to think of the return to education as stochastic. Education not only increases the average income a person will earn, but it also changes the entire distribution of possible life outcomes. It does not guarantee that a person will end up in the top 1 percent, but it increases the likelihood. I have not seen any data on this, but I am willing to bet that the top 1 percent are more educated than the average American; while their education did not ensure their economic success, it played a role.

To me this is nothing but one big evasive attempt at explaining away a very disturbing structural shift that has taken place in our societies. And that change has very little to do with stochastic returns to education. Those were in place 30 or 40 years ago too. At that time they meant that a CEO earned 10-12 times what “ordinary” people earn. Today they mean that CEOs earn 100-200 times what “ordinary” people earn.

A question of education? No way! It is a question of income and wealth increasingly being concentrated in the hands of a small privileged elite, greed, and a lost sense of a common project of building a society for everyone and not only for the chosen few.
