Economists pretending to know

27 February, 2015 at 22:04 | Posted in Economics | Leave a comment


We are storytellers, operating much of the time in worlds of make believe. We do not find that the realm of imagination and ideas is an alternative to, or retreat from, practical reality. On the contrary, it is the only way we have found to think seriously about reality. In a way, there is nothing more to this method than maintaining the conviction … that imagination and ideas matter … there is no practical alternative.

Robert Lucas (1988) What Economists Do

Sounds great, doesn’t it? And here’s an example of the outcome of that serious thinking about reality …

In summary, it does not appear possible, even in principle, to classify individual unemployed people as either voluntarily or involuntarily unemployed depending on the characteristics of the decision problems they face. One cannot, even conceptually, arrive at a usable definition of full employment as a state in which no involuntary unemployment exists.

The difficulties are not the measurement error problems which necessarily arise in applied economics. They arise because the “thing” to be measured does not exist.

Wage discrimination

27 February, 2015 at 19:41 | Posted in Economics | 1 Comment

As my Friedman post earlier today made clear, we have precious little to learn from libertarians on questions of fairness and wage discrimination. Happily, there are others who have something of substance to say instead of just talking nonsense:

So let’s say a woman faces discrimination by this definition – she loses out to a man with weaker credentials. “Loses out” itself is pretty vague and could reasonably be consistent with several different observed labor market outcomes, two of which are:

Outcome A: She gets hired to the same job as the man but at lower pay, and
Outcome B: She doesn’t get the job and instead takes her next best offer in a different occupation at lower pay. Let’s further say that she is paid her real productivity in this job.

Let’s say the woman’s wage in Outcome A and her wage in Outcome B are exactly the same.

Under Outcome A, a wage regression with occupational dummies and a gender dummy is going to reliably report the magnitude of the discrimination in the gender dummy. Under Outcome B, a wage regression with occupational dummies and a gender dummy is going to report all of the discrimination under the occupational dummies. If you interpret the results thinking that “discrimination” as Scott D defines it is only in the gender coefficient, you would say there is discrimination in the case of Outcome A, but that there’s no discrimination in the case of Outcome B.

It would be one thing if these were very, very different sorts of discrimination, but these are two reasonable outcomes from the exact same act of discrimination.

This is why people like Claudia Goldin see occupational dummies as describing the components of the wage gap and not as some way of eliminating part of the gap that isn’t really about gender.

“Equal pay for equal work” is a principle that I should hope everyone can agree on. It’s great stuff. And I for one think the courts might have some role to play in ensuring the principle is abided by in our society. But it’s a pretty vacuous phrase when it comes to economic science. It’s not entirely clear what it means or how it can be operationalized. Outcome A is clearly not equal pay for equal work, but what about Outcome B? After all, the woman is being paid “fairly” for the work she ended up doing. Is that equal pay for equal work? You could make the argument, but it doesn’t feel right, and in any case it’s clearly incommensurate with the data analysis we’re doing. When two things are incommensurate, it’s typically a good idea to keep them separate. Let “equal pay for equal work” ring out as a rallying call for a basic point of fairness, and don’t act like you can either affirm it or refute it with economic science. As far as I can tell, you can’t.

Daniel Kuehn
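
Kuehn’s point about where the regression puts the gap is easy to check by simulation. Here is a minimal sketch (all numbers and data-generating assumptions are invented for illustration, not taken from any real data set): the very same act of discrimination shows up in the gender dummy under Outcome A and in the occupational dummy under Outcome B.

```python
# Minimal simulation of Kuehn's two outcomes (illustrative numbers only).
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
female = rng.integers(0, 2, n).astype(float)

def gender_coef(wage, occ):
    """OLS of wage on a constant, a gender dummy and an occupation dummy;
    returns the coefficient on the gender dummy."""
    X = np.column_stack([np.ones(n), female, occ])
    beta, *_ = np.linalg.lstsq(X, wage, rcond=None)
    return beta[1]

# Outcome A: occupations unrelated to gender; women are paid 3 units less
# than equally productive men within the same occupation.
occ_a = rng.integers(0, 2, n).astype(float)
wage_a = 30.0 - 3.0 * occ_a - 3.0 * female + rng.normal(0, 1, n)

# Outcome B: no within-occupation penalty, but women are pushed into the
# occupation that pays 3 units less and are paid "fairly" there.
occ_b = (rng.random(n) < np.where(female == 1, 0.8, 0.1)).astype(float)
wage_b = 30.0 - 3.0 * occ_b + rng.normal(0, 1, n)

print(round(gender_coef(wage_a, occ_a), 2))  # ~ -3.0: gap lands on the gender dummy
print(round(gender_coef(wage_b, occ_b), 2))  # ~  0.0: gap lands on the occupation dummy
```

One act of discrimination, two regression verdicts: which is exactly why occupational dummies describe components of the gap rather than explain it away.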

Milton Friedman’s anti-feminist feminism

27 February, 2015 at 14:58 | Posted in Economics, Politics & Society | 1 Comment

 

Econom(etr)ic fictions masquerading as rigorous science

27 February, 2015 at 09:15 | Posted in Statistics & Econometrics | 2 Comments

In econometrics one often gets the feeling that many of its practitioners think of it as a kind of automatic inferential machine: input data and out comes causal knowledge. This is like pulling a rabbit from a hat. Great — but first you have to put the rabbit in the hat. And this is where assumptions come into the picture.
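
To see how much work the assumptions do, consider a hedged toy example (everything below is invented): x has no causal effect on y at all, yet a regression run as an “automatic inferential machine” reports a large, precisely estimated coefficient, because both variables share an unobserved common cause. Only the assumption that no such confounder exists turns the number into causal knowledge.

```python
# Toy example: regression output without the causal rabbit in the hat.
import numpy as np

rng = np.random.default_rng(42)
n = 5_000
z = rng.normal(0, 1, n)           # unobserved common cause
x = z + rng.normal(0, 0.5, n)     # x does NOT cause y
y = 2.0 * z + rng.normal(0, 0.5, n)

slope, intercept = np.polyfit(x, y, 1)
print(f"estimated 'effect' of x on y: {slope:.2f}")  # ~ 1.60, true causal effect is 0
```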

As social scientists — and economists — we have to confront the all-important question of how to handle uncertainty and randomness. Should we define randomness in terms of probability? If we do, we have to accept that speaking of randomness presupposes the existence of nomological probability machines, since probabilities cannot be spoken of – and, strictly speaking, do not exist at all – without specifying such system contexts.

Accepting a domain of probability theory and a sample space of “infinite populations” — as is legion in modern econometrics — also implies that judgments are made on the basis of observations that are actually never made! Infinitely repeated trials or samplings never take place in the real world, so they cannot be a sound inductive basis for a science with aspirations of explaining real-world socio-economic processes, structures or events. It’s simply not tenable.

In his great book Statistical Models and Causal Inference: A Dialogue with the Social Sciences, David Freedman touched on this fundamental problem, which arises when you try to apply statistical models outside overly simple nomological machines like coin tosses and roulette wheels:

Lurking behind the typical regression model will be found a host of such assumptions; without them, legitimate inferences cannot be drawn from the model. There are statistical procedures for testing some of these assumptions. However, the tests often lack the power to detect substantial failures. Furthermore, model testing may become circular; breakdowns in assumptions are detected, and the model is redefined to accommodate. In short, hiding the problems can become a major goal of model building.

Using models to make predictions of the future, or the results of interventions, would be a valuable corrective. Testing the model on a variety of data sets – rather than fitting refinements over and over again to the same data set – might be a good second-best … Built into the equation is a model for non-discriminatory behavior: the coefficient d vanishes. If the company discriminates, that part of the model cannot be validated at all.

Regression models are widely used by social scientists to make causal inferences; such models are now almost a routine way of demonstrating counterfactuals. However, the “demonstrations” generally turn out to depend on a series of untested, even unarticulated, technical assumptions. Under the circumstances, reliance on model outputs may be quite unjustified. Making the ideas of validation somewhat more precise is a serious problem in the philosophy of science. That models should correspond to reality is, after all, a useful but not totally straightforward idea – with some history to it. Developing appropriate models is a serious problem in statistics; testing the connection to the phenomena is even more serious …

In our days, serious arguments have been made from data. Beautiful, delicate theorems have been proved, although the connection with data analysis often remains to be established. And an enormous amount of fiction has been produced, masquerading as rigorous science.
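
Freedman’s remark about “the coefficient d” refers to an equation not reproduced in the excerpt. A plausible reconstruction, following the standard setup of employment-discrimination cases, is a wage regression of the form

\[
\log(\text{wage}_i) = \alpha + \beta^{\top} x_i + d \cdot \text{female}_i + \varepsilon_i ,
\]

where \(x_i\) collects the legitimate productivity characteristics. The built-in model of non-discriminatory behavior is the hypothesis \(d = 0\); if the company actually discriminates, that part of the model can never be validated against the data.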

Making outlandish statistical assumptions does not provide solid ground for doing relevant social science.

Econometrics and the difficult art of making it count

26 February, 2015 at 20:43 | Posted in Statistics & Econometrics | Leave a comment

Modern econometrics is fundamentally based on the assumption — usually without any explicit justification — that we can gain causal knowledge by considering independent variables that may have an impact on the variation of a dependent variable. This is, however, far from self-evident. Often the fundamental causes are constant forces that are not amenable to the kind of analysis econometrics supplies us with. As Stanley Lieberson has it in his modern classic Making It Count:

One can always say whether, in a given empirical context, a given variable or theory accounts for more variation than another. But it is almost certain that the variation observed is not universal over time and place. Hence the use of such a criterion first requires a conclusion about the variation over time and place in the dependent variable. If such an analysis is not forthcoming, the theoretical conclusion is undermined by the absence of information …

Moreover, it is questionable whether one can draw much of a conclusion about causal forces from simple analysis of the observed variation … To wit, it is vital that one have an understanding, or at least a working hypothesis, about what is causing the event per se; variation in the magnitude of the event will not provide the answer to that question.

Causality in social sciences — and economics — can never be solely a question of statistical inference. Causality entails more than predictability, and really explaining social phenomena in depth requires theory. Analysis of variation – the foundation of all econometrics – can never by itself reveal how these variations are brought about. Only when we are able to tie actions, processes or structures to the statistical relations detected can we say that we are getting at relevant explanations of causation. Too much in love with axiomatic-deductive modeling, neoclassical economists especially tend to forget that accounting for causation — how causes bring about their effects — demands deep subject-matter knowledge and acquaintance with intricate fabrics and contexts. As Keynes argued already in his A Treatise on Probability, statistics and econometrics should not primarily be seen as means of inferring causality from observational data, but rather as descriptions of patterns of association and correlation that we may use as suggestions of possible causal relations.
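
Lieberson’s point can be made painfully concrete with a small sketch (invented numbers): let a constant structural force set the level of the outcome for everyone, while a causally trivial factor supplies all of the observed variation. The analysis of variation then “explains” everything and sees nothing.

```python
# Constant causes are invisible to the analysis of variation.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000
structural_force = 100.0             # constant across the sample
minor_factor = rng.normal(0, 1, n)   # the only thing that varies
y = structural_force + 0.5 * minor_factor

slope, _ = np.polyfit(minor_factor, y, 1)
r2 = np.corrcoef(minor_factor, y)[0, 1] ** 2
print(f"slope: {slope:.2f}, R^2: {r2:.2f}")  # R^2 = 1.00
# The regression attributes 100% of the variance to the minor factor,
# while the constant force that actually sets the outcome never appears.
```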

‘How I became a Keynesian’

26 February, 2015 at 18:06 | Posted in Economics | 3 Comments

Until [2008], when the banking industry came crashing down and depression loomed for the first time in my lifetime, I had never thought to read The General Theory of Employment, Interest, and Money, despite my interest in economics … I had heard that it was a very difficult book and that the book had been refuted by Milton Friedman, though he admired Keynes’s earlier work on monetarism. I would not have been surprised by, or inclined to challenge, the claim made in 1992 by Gregory Mankiw, a prominent macroeconomist at Harvard, that “after fifty years of additional progress in economic science, The General Theory is an outdated book. . . . We are in a much better position than Keynes was to figure out how the economy works.”

We have learned since [2008] that the present generation of economists has not figured out how the economy works …

Baffled by the profession’s disarray, I decided I had better read The General Theory. Having done so, I have concluded that, despite its antiquity, it is the best guide we have to the crisis …

It is an especially difficult read for present-day academic economists, because it is based on a conception of economics remote from theirs. This is what made the book seem “outdated” to Mankiw — and has made it, indeed, a largely unread classic … The dominant conception of economics today, and one that has guided my own academic work in the economics of law, is that economics is the study of rational choice … Keynes wanted to be realistic about decision-making rather than explore how far an economist could get by assuming that people really do base decisions on some approximation to cost-benefit analysis …

Economists may have forgotten The General Theory and moved on, but economics has not outgrown it, or the informal mode of argument that it exemplifies, which can illuminate nooks and crannies that are closed to mathematics. Keynes’s masterpiece is many things, but “outdated” it is not.

Richard Posner

On prices and profits

25 February, 2015 at 19:10 | Posted in Economics | Leave a comment

Much recent discussion about potential price inflation seems to take as a given that it would be sparked by a pickup of wage growth. But looking at data from the non-financial corporate sector — which accounts for well more than half of all private-sector economic activity and for which rich data are available — what’s really striking about price growth since the end of the Great Recession is how much of it has been driven by rising profits, not rising labor costs. In fact, labor costs have been essentially flat between the end of the Great Recession and the first quarter of 2014. Profits earned per unit sold, on the other hand, have been rising at an average annual growth rate of nearly 9% since the recovery’s beginning. To the degree that there is any inflationary pressure in the U.S. economy over that time, it is surely not coming from labor costs.

Josh Bivens
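
The accounting behind Bivens’s claim is simple: the price of a unit of output is the sum of unit labor costs, unit profits and unit non-labor costs, so price growth can be decomposed into the contribution of each component. A stylized sketch (the numbers below are illustrative assumptions, not Bivens’s actual NIPA figures):

```python
# Stylized unit-price decomposition in the spirit of Bivens's argument.
# Hypothetical start/end levels; price = labor costs + profits + other.
components = {
    "labor costs":  (0.600, 0.601),   # essentially flat, as Bivens reports
    "profits":      (0.150, 0.205),   # rising fast
    "other inputs": (0.250, 0.254),
}
price_change = sum(end - start for start, end in components.values())

for name, (start, end) in components.items():
    share = (end - start) / price_change
    print(f"{name}: {share:.0%} of unit price growth")
# With these illustrative numbers, profits account for roughly 90% of
# the growth in unit prices and labor costs for almost none of it.
```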

Microfoundations — contestable incoherence

25 February, 2015 at 18:31 | Posted in Economics | 1 Comment

ba7658c533d4de20cf77161b8910d903cab9cbe6_m

Defenders of microfoundations — and of the intertemporal optimization of its rational-expectations-equipped representative agent — often argue as if sticking with simple representative-agent macroeconomic models does not impart a bias to the analysis. I unequivocally reject that unsubstantiated view, and have given my reasons why here.

These defenders often also maintain that there are no methodologically coherent alternatives to microfoundations modeling. That allegation is of course difficult to evaluate, hinging substantially on how coherence is defined. But one thing I do know is that the kind of microfoundationalist macroeconomics that New Classical economists and “New Keynesian” economists are pursuing is not methodologically coherent according to the standard definition of coherence (see e.g. here). And that ought to be rather embarrassing for those macroeconomists to whom axiomatics and deductivity are the hallmark of science tout court.

The fact that Lucas introduced rational expectations as a consistency axiom is not really an argument for why we should accept it as an assumption in a theory or model purporting to explain real macroeconomic processes (see e.g. here). And although virtually any macroeconomic empirical claim is contestable, so is any claim in micro (see e.g. here).

On knowledge and education

23 February, 2015 at 21:27 | Posted in Economics, Politics & Society | Leave a comment

Education is a friend of mine. And it should be available and affordable for all. But … people insisting that educational failings are at the root of still-weak job creation, stagnating wages and rising inequality. This sounds serious and thoughtful. But it’s actually a view very much at odds with the evidence, not to mention a way to hide from the real, unavoidably partisan debate.

The education-centric story of our problems runs like this: We live in a period of unprecedented technological change, and too many American workers lack the skills to cope with that change. This “skills gap” is holding back growth, because businesses can’t find the workers they need. It also feeds inequality, as wages soar for workers with the right skills… So what we need is more and better education … It’s repeated so widely that many people probably assume it’s unquestionably true. But it isn’t … there’s no evidence that a skills gap is holding back employment.

Paul Krugman


Although Krugman doesn’t name him explicitly, Harvard economist and George W. Bush advisor Greg Mankiw is one of those mainstream economists who have been appealing to the education variable to explain the rising inequality we have seen over the last 30 years in the US and elsewhere in Western societies. Mankiw writes:

Even if the income gains are in the top 1 percent, why does that imply that the right story is not about education?

If indeed a year of schooling guaranteed you precisely a 10 percent increase in earnings, then there is no way increasing education by a few years could move you from the middle class to the top 1 percent.

But it may be better to think of the return to education as stochastic. Education not only increases the average income a person will earn, but it also changes the entire distribution of possible life outcomes. It does not guarantee that a person will end up in the top 1 percent, but it increases the likelihood. I have not seen any data on this, but I am willing to bet that the top 1 percent are more educated than the average American; while their education did not ensure their economic success, it played a role.
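
A quick arithmetic check of the deterministic benchmark Mankiw concedes: with a guaranteed 10 percent return per year of schooling, four additional years multiply earnings by

\[
(1.10)^4 \approx 1.46 ,
\]

a 46 percent gain, which is indeed nowhere near the multiple needed to move from the middle of the income distribution into the top 1 percent. That is why the argument has to retreat to stochastic returns.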

To me this is nothing but one big evasive attempt to explain away a very disturbing structural shift that has taken place in our societies — a change that has very little to do with stochastic returns to education. Those were in place 30 or 40 years ago too. At that time they meant that a CEO perhaps earned 10-12 times what “ordinary” people earn. Today they mean that CEOs perhaps earn 100-200 times what “ordinary” people earn.

A question of education? No way! It is a question of income and wealth increasingly being concentrated in the hands of a very small and privileged elite, of greed, and of a lost sense of a common project of building a sustainable society.

 

Fiscal debt — what we should be aiming for

23 February, 2015 at 15:08 | Posted in Economics, Politics & Society | Leave a comment

 

Lynn Parramore: Do you think there are lessons in what has happened in the Eurozone for students of economics and the way the subject is taught?

Mario Seccareccia: Yes, indeed. Ever since the establishment of the modern nation-state in the late eighteenth and nineteenth centuries, the creation of the euro was perhaps the first significant experiment in modern times to separate money from the state, that is, to denationalize currency, as some right-wing ideologues and founders of modern neoliberalism, such as Friedrich von Hayek, had advocated. What the Eurozone crisis teaches is that this perception of how the monetary system works is quite wrong, because, in times of crisis, the democratic state must be able to spend money in order to meet its obligations to its citizens. The denationalization or “supra-nationalization” of money that happened with the establishment of the Eurozone took away from elected national governments the capacity to meaningfully manage their economies. Unless governments in the Eurozone are able to renegotiate significant control over, and access to, money from their own central banks, the system will be continually plagued with crisis and will probably collapse in the longer term.

Lynn Parramore
