Economists — can-opener-assuming flimflammers

28 November, 2015 at 12:15 | Posted in Economics | Leave a comment


Kids, somehow, seem to be more in touch with real science than can-opener-assuming economists …

A physicist, a chemist, and an economist are stranded on a desert island. One can only imagine what sort of play date went awry to land them there. Anyway, they’re hungry. Like, desert island hungry. And then a can of soup washes ashore. Progresso Reduced Sodium Chicken Noodle, let’s say. Which is perfect, because the physicist can’t have much salt, and the chemist doesn’t eat red meat.

But, famished as they are, our three professionals have no way to open the can. So they put their brains to the problem. The physicist says “We could drop it from the top of that tree over there until it breaks open.” And the chemist says “We could build a fire and sit the can in the flames until it bursts open.”

Those two squabble a bit, until the economist says “No, no, no. Come on, guys, you’d lose most of the soup. Let’s just assume a can opener.”

Panglossian macroeconomics

27 November, 2015 at 13:57 | Posted in Economics | 3 Comments

Economic science does an excellent job of displacing bad ideas with good ones. It’s happening every day. For every person who places obstacles in the way of good science to protect his or her turf, there are five more who are willing to publish innovative papers in good journals, and to promote revolutionary ideas that might be destructive for the powers-that-be. The state of macro is sound – not that we have solved all the problems in the world, or don’t need a good revolution.

Stephen Williamson

Sure, and soon it will be Christmas and Santa Claus will make everyone happy …

Human capital and ‘bad taste in mouth’ models

26 November, 2015 at 18:02 | Posted in Economics | 1 Comment

The ever-growing literature on human capital has long recognized that the scope of the theory extends well beyond the traditional analysis of schooling and on-the-job training … Yet economists have ignored the analysis of an important class of activities which can and should be brought within the purview of the theory. A prime example of this class is brushing teeth.

The conventional analysis of toothbrushing has centered around two basic models. The “bad taste in mouth” model is based on the notion that each person has a “taste for brushing,” and the fact that brushing frequencies differ is “explained” by differences in tastes. Since any pattern of human behavior can be rationalized by such implicit theorizing, this model is devoid of empirically testable predictions, and hence uninteresting.

The “mother told me so” theory is based on differences in cultural upbringing. Here it is argued, for example, that thrice-a-day brushers brush three times daily because their mothers forced them to do so as children. Of course, this is hardly a complete explanation. Like most psychological theories, it leaves open the question of why mothers should want their children to brush after every meal …

In a survey of professors in a leading Eastern university it was found that assistant professors brushed 2.14 times daily on average, while associate professors brushed only 1.89 times and full professors only 1.47 times daily. The author, a sociologist, mistakenly attributed this finding to the fact that the higher-ranking professors were older and that hygiene standards in America had advanced steadily over time. To a human capital theorist, of course, this pattern is exactly what would be expected from the higher wages received in the higher professorial ranks, and from the fact that younger professors, looking for promotions, cannot afford to have bad breath.

Alan Blinder

Economic growth

25 November, 2015 at 17:55 | Posted in Economics | 10 Comments


I came to think about this dictum when reading Thomas Piketty’s Capital in the Twenty-First Century. 

Piketty refuses to use the term ‘human capital’ in his inequality analysis.

I think there are many good reasons not to include ‘human capital’ in economic analyses. Let me give just one reason — perhaps analytically the most important one — and elaborate a little on it.

In modern endogenous growth theory knowledge (ideas) is presented as the locomotive of growth. But as Allyn Young, Piero Sraffa and others had already shown in the 1920s, knowledge also has to do with increasing returns to scale, and is therefore not really compatible with neoclassical economics and its emphasis on constant returns to scale.

Increasing returns generated by non-rivalry between ideas is simply not compatible with pure competition and the simplistic invisible hand dogma. That is probably also the reason why neoclassical economists have been so reluctant to embrace the theory wholeheartedly.

Neoclassical economics has tried to save itself by blurring the distinction between ‘human capital’ and knowledge/ideas. But knowledge or ideas should not be confused with ‘human capital.’ Chad Jones gives a succinct and accessible account of the difference:

Of the three state variables that we endogenize, ideas have been the hardest to bring into the applied general equilibrium structure. The difficulty arises because of the defining characteristic of an idea, that it is a pure nonrival good. A given idea is not scarce in the same way that land or capital or other objects are scarce; instead, an idea can be used by any number of people simultaneously without congestion or depletion.

Because they are nonrival goods, ideas force two distinct changes in our thinking about growth, changes that are sometimes conflated but are logically distinct. Ideas introduce scale effects. They also change the feasible and optimal economic institutions. The institutional implications have attracted more attention but the scale effects are more important for understanding the big sweep of human history.

The distinction between rival and nonrival goods is easy to blur at the aggregate level but inescapable in any microeconomic setting. Picture, for example, a house that is under construction. The land on which it sits, capital in the form of a measuring tape, and the human capital of the carpenter are all rival goods. They can be used to build this house but not simultaneously any other. Contrast this with the Pythagorean Theorem, which the carpenter uses implicitly by constructing a triangle with sides in the proportions of 3, 4 and 5. This idea is nonrival. Every carpenter in the world can use it at the same time to create a right angle …

Ideas and human capital are fundamentally distinct. At the micro level, human capital in our triangle example literally consists of new connections between neurons in a carpenter’s head, a rival good. The 3-4-5 triangle is the nonrival idea. At the macro level, one cannot state the assertion that skill-biased technical change is increasing the demand for education without distinguishing between ideas and human capital.

In a way one might say that increasing returns is the darkness at the heart of mainstream economics. And this is something most mainstream economists don’t really want to talk about. They prefer to look the other way and pretend that increasing returns can be seamlessly incorporated into the received paradigm — and talking about ‘human capital’ rather than knowledge/ideas makes this more easily digested.

Econometrics and the ’empirical turn’ in economics

23 November, 2015 at 17:17 | Posted in Economics, Statistics & Econometrics | 7 Comments

The increasing use of natural and quasi-natural experiments in economics during the last couple of decades has led some economists to triumphantly declare it a major step on a recent path toward empirics, where instead of being a deductive philosophy, economics is now increasingly becoming an inductive science.

In the plaidoyer for this view, the work of Joshua Angrist and Jörn-Steffen Pischke is often invoked, so let’s start with one of their later books and see if there is any real reason to share the optimism about this ’empirical turn’ in economics.

In their new book, Mastering ‘Metrics: The Path from Cause to Effect, Angrist and Pischke write:

Our first line of attack on the causality problem is a randomized experiment, often called a randomized trial. In a randomized trial, researchers change the causal variables of interest … for a group selected using something like a coin toss. By changing circumstances randomly, we make it highly likely that the variable of interest is unrelated to the many other factors determining the outcomes we want to study. Random assignment isn’t the same as holding everything else fixed, but it has the same effect. Random manipulation makes other things equal hold on average across the groups that did and did not experience manipulation. As we explain … ‘on average’ is usually good enough.

Angrist and Pischke may “dream of the trials we’d like to do” and consider “the notion of an ideal experiment” something that “disciplines our approach to econometric research,” but to maintain that ‘on average’ is “usually good enough” is, in my view, a rather unwarranted claim, and for many reasons.

First of all, it amounts to nothing but hand waving to assume simpliciter, without argumentation, that it is tenable to treat social agents and relations as homogeneous and interchangeable entities.

Randomization is used basically to allow the econometrician to treat the population as consisting of interchangeable and homogeneous groups (‘treatment’ and ‘control’). The regression models one arrives at by using randomized trials tell us the average effect that variations in variable X have on the outcome variable Y, without having to explicitly control for the effects of other explanatory variables R, S, T, etc. Everything is assumed to be essentially equal except the values taken by variable X.

In a usual regression context one would apply an ordinary least squares (OLS) estimator in trying to get an unbiased and consistent estimate:

Y = α + βX + ε,

where α is a constant intercept, β a constant “structural” causal effect and ε an error term.

The problem here is that although we may get an estimate of the “true” average causal effect, this may “mask” important heterogeneous causal effects. Although we get the right answer of the average causal effect being 0, those who are “treated” (X = 1) may have causal effects equal to −100 and those “not treated” (X = 0) may have causal effects equal to 100. Contemplating being treated or not, most people would probably be interested in knowing about this underlying heterogeneity and would not consider the OLS average effect particularly enlightening.
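To make the point concrete, here is a minimal simulation sketch in Python. The numbers are made up and simply mirror the ±100 example above (this is my illustration, not anything Angrist and Pischke propose): the pooled OLS slope comes out close to zero even though the two underlying subpopulations have causal effects of −100 and +100.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Two unobserved subpopulations with opposite individual causal effects
# (made-up numbers mirroring the +/-100 example above)
group = rng.integers(0, 2, size=n)            # hidden heterogeneity
effect = np.where(group == 0, -100.0, 100.0)

# Randomly assigned 'treatment' X, as in a randomized trial
X = rng.integers(0, 2, size=n)

# Outcome: baseline noise plus the individual causal effect when treated
Y = rng.normal(0.0, 1.0, size=n) + effect * X

# OLS slope of Y on X -- the estimated 'average causal effect'
beta_hat = np.polyfit(X, Y, 1)[0]
print(f"OLS average effect: {beta_hat:+.2f}")   # close to 0

# The heterogeneity that the average masks
for g, true_effect in [(0, -100), (1, +100)]:
    m = group == g
    est = Y[m & (X == 1)].mean() - Y[m & (X == 0)].mean()
    print(f"subpopulation with true effect {true_effect:+d}: estimated {est:+.2f}")
```

With half the population in each subpopulation, the ‘right’ average answer of roughly zero is exactly what the regression delivers — and exactly what conceals the heterogeneity that someone contemplating treatment would want to know about.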

Limiting model assumptions in economic science always have to be closely examined. If we are to show that the mechanisms or causes we isolate and handle in our models are stable, in the sense that they do not change when we “export” them to our “target systems,” we have to be able to show that they hold not only under ceteris paribus conditions. If they hold only there, they are a fortiori of limited value for our understanding, explanations or predictions of real economic systems.

Real world social systems are not governed by stable causal mechanisms or capacities. The kinds of “laws” and relations that econometrics has established are laws and relations about entities in models that presuppose causal mechanisms to be atomistic and additive. When causal mechanisms operate in real-world social target systems, they do so only in ever-changing and unstable combinations where the whole is more than a mechanical sum of parts. If economic regularities obtain, they do so (as a rule) only because we engineered them for that purpose. Outside man-made “nomological machines” they are rare, or even non-existent. Unfortunately, that also makes most of the achievements of econometrics – like most of the contemporary endeavours of mainstream economic theoretical modeling – rather useless.

Remember that a model is not the truth. It is a lie to help you get your point across. And in the case of modeling economic risk, your model is a lie about others, who are probably lying themselves. And what’s worse than a simple lie? A complicated lie.

Sam L. Savage, The Flaw of Averages

When Joshua Angrist and Jörn-Steffen Pischke in an earlier article of theirs [“The Credibility Revolution in Empirical Economics: How Better Research Design Is Taking the Con out of Econometrics,” Journal of Economic Perspectives, 2010] say that “anyone who makes a living out of data analysis probably believes that heterogeneity is limited enough that the well-understood past can be informative about the future,” I really think they underestimate the heterogeneity problem. It does not just turn up as an external validity problem when trying to “export” regression results to different times or different target populations. It is also often an internal problem to the millions of regression estimates that economists produce every year.

But when the randomization is purposeful, a whole new set of issues arises — experimental contamination — which is much more serious with human subjects in a social system than with chemicals mixed in beakers … Anyone who designs an experiment in economics would do well to anticipate the inevitable barrage of questions regarding the valid transference of things learned in the lab (one value of z) into the real world (a different value of z) …

Absent observation of the interactive compounding effects z, what is estimated is some kind of average treatment effect which is called by Imbens and Angrist (1994) a “Local Average Treatment Effect,” which is a little like the lawyer who explained that when he was a young man he lost many cases he should have won but as he grew older he won many that he should have lost, so that on the average justice was done. In other words, if you act as if the treatment effect is a random variable by substituting β_t for β_0 + β′z_t, the notation inappropriately relieves you of the heavy burden of considering what are the interactive confounders and finding some way to measure them …

If little thought has gone into identifying these possible confounders, it seems probable that little thought will be given to the limited applicability of the results in other settings.

Ed Leamer

Evidence-based theories and policies are highly valued nowadays. Randomization is supposed to control for bias from unknown confounders. The received opinion is that evidence based on randomized experiments therefore is the best.

More and more economists have also lately come to advocate randomization as the principal method for ensuring valid causal inferences.

I would however rather argue that randomization, just as econometrics, promises more than it can deliver, basically because it requires assumptions that in practice are not possible to maintain.

Especially when it comes to questions of causality, randomization is nowadays considered some kind of “gold standard”. Everything has to be evidence-based, and the evidence has to come from randomized experiments.

But just as econometrics, randomization is basically a deductive method. Given the assumptions (such as manipulability, transitivity, separability, additivity, linearity, etc.) these methods deliver deductive inferences. The problem, of course, is that we will never completely know when the assumptions are right. And although randomization may contribute to controlling for confounding, it does not guarantee it, since genuine randomness presupposes infinite experimentation and we know all real experimentation is finite. And even if randomization may help to establish average causal effects, it says nothing of individual effects unless homogeneity is added to the list of assumptions. Real target systems are seldom epistemically isomorphic to our axiomatic-deductive models/systems, and even if they were, we still have to argue for the external validity of the conclusions reached from within these epistemically convenient models/systems. Causal evidence generated by randomization procedures may be valid in “closed” models, but what we usually are interested in is causal evidence in the real target system we happen to live in.

When does a conclusion established in population X hold for target population Y? Only under very restrictive conditions!

Angrist and Pischke’s “ideally controlled experiments” tell us with certainty what causes what effects — but only given the right “closures”. Making appropriate extrapolations from (ideal, accidental, natural or quasi) experiments to different settings, populations or target systems is not easy. “It works there” is no evidence for “it will work here”. Causes deduced in an experimental setting still have to show that they come with an export-warrant to the target population/system. The causal background assumptions made have to be justified, and without licenses to export, the value of “rigorous” and “precise” methods — and ‘on-average-knowledge’ — is despairingly small.

Like us, you want evidence that a policy will work here, where you are. Randomized controlled trials (RCTs) do not tell you that. They do not even tell you that a policy works. What they tell you is that a policy worked there, where the trial was carried out, in that population. Our argument is that the changes in tense – from “worked” to “work” – are not just a matter of grammatical detail. To move from one to the other requires hard intellectual and practical effort. The fact that it worked there is indeed fact. But for that fact to be evidence that it will work here, it needs to be relevant to that conclusion. To make RCTs relevant you need a lot more information and of a very different kind.


So, no, I find it hard to share the enthusiasm and optimism about the value of (quasi-)natural experiments and all the statistical-econometric machinery that comes with them. Guess I’m still waiting for the export-warrant …

How to get published in ‘top’ economics journals

23 November, 2015 at 09:49 | Posted in Economics | Leave a comment

By the early 1980s it was already common knowledge among people I hung out with that the only way to get non-crazy macroeconomics published was to wrap sensible assumptions about output and employment in something else, something that involved rational expectations and intertemporal stuff and made the paper respectable. And yes, that was conscious knowledge, which shaped the kinds of papers we wrote.

Paul Krugman

More or less says it all, doesn’t it?

And for those of you who do not want to play according to these sickening hypocritical rules — well, here’s one good alternative.

Do unrealistic economic models explain real-world phenomena?

22 November, 2015 at 15:14 | Posted in Economics | 1 Comment

When applying deductivist thinking to economics, neoclassical economists usually set up “as if” models based on a set of tight axiomatic assumptions from which consistent and precise inferences are made. The beauty of this procedure is of course that if the axiomatic premises are true, the conclusions necessarily follow. The snag is that if the models are to be relevant, we also have to argue that their precision and rigour still hold when they are applied to real-world situations. They often don’t. When addressing real economies, the idealizations and abstractions necessary for the deductivist machinery to work simply don’t hold.

If the real world is fuzzy, vague and indeterminate, then why should our models build upon a desire to describe it as precise and predictable? The logic of idealization is a marvellous tool in mathematics and axiomatic-deductivist systems, but a poor guide for action in real-world systems, in which concepts and entities are without clear boundaries and continually interact and overlap.

Or as Hans Albert has it on the neoclassical style of thought:

In everyday situations, if, in answer to an inquiry about the weather forecast, one is told that the weather will remain the same as long as it does not change, then one does not normally go away with the impression of having been particularly well informed, although it cannot be denied that the answer refers to an interesting aspect of reality, and, beyond that, it is undoubtedly true …

We are not normally interested merely in the truth of a statement, nor merely in its relation to reality; we are fundamentally interested in what it says, that is, in the information that it contains …

Information can only be obtained by limiting logical possibilities; and this in principle entails the risk that the respective statement may be exposed as false. It is even possible to say that the risk of failure increases with the informational content, so that precisely those statements that are in some respects most interesting, the nomological statements of the theoretical hard sciences, are most subject to this risk. The certainty of statements is best obtained at the cost of informational content, for only an absolutely empty and thus uninformative statement can achieve the maximal logical probability …

The neoclassical style of thought – with its emphasis on thought experiments, reflection on the basis of illustrative examples and logically possible extreme cases, its use of model construction as the basis of plausible assumptions, as well as its tendency to decrease the level of abstraction, and similar procedures – appears to have had such a strong influence on economic methodology that even theoreticians who strongly value experience can only free themselves from this methodology with difficulty …

Science progresses through the gradual elimination of errors from a large offering of rivalling ideas, the truth of which no one can know from the outset. The question of which of the many theoretical schemes will finally prove to be especially productive and will be maintained after empirical investigation cannot be decided a priori. Yet to be useful at all, it is necessary that they are initially formulated so as to be subject to the risk of being revealed as errors. Thus one cannot attempt to preserve them from failure at every price. A theory is scientifically relevant first of all because of its possible explanatory power, its performance, which is coupled with its informational content …

The connections sketched out above are part of the general logic of the sciences and can thus be applied to the social sciences. Above all, with their help, it appears to be possible to illuminate a methodological peculiarity of neoclassical thought in economics, which probably stands in a certain relation to the isolation from sociological and social-psychological knowledge that has been cultivated in this discipline for some time: the model Platonism of pure economics, which comes to expression in attempts to immunize economic statements and sets of statements (models) from experience through the application of conventionalist strategies …

Clearly, it is possible to interpret the ‘presuppositions’ of a theoretical system … not as hypotheses, but simply as limitations to the area of application of the system in question. Since a relationship to reality is usually ensured by the language used in economic statements, in this case the impression is generated that a content-laden statement about reality is being made, although the system is fully immunized and thus without content. In my view that is often a source of self-deception in pure economic thought …

A further possibility for immunizing theories consists in simply leaving open the area of application of the constructed model so that it is impossible to refute it with counter examples. This of course is usually done without a complete knowledge of the fatal consequences of such methodological strategies for the usefulness of the theoretical conception in question, but with the view that this is a characteristic of especially highly developed economic procedures: the thinking in models, which, however, among those theoreticians who cultivate neoclassical thought, in essence amounts to a new form of Platonism.

Seen from a deductive-nomological perspective, typical economic models (M) usually consist of a theory (T) – a set of more or less general (typically universal) law-like hypotheses (H) – and a set of (typically spatio-temporal) auxiliary conditions (A). The auxiliary conditions (assumptions) give “boundary” descriptions such that it is possible to deduce logically (meeting the standard of validity) a conclusion (explanandum) from the premises T & A. Using this kind of model economists are (portrayed as) trying to explain (predict) facts by subsuming them under T given A.
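Schematically, and with no notation added beyond what is described above (the shorthand is mine), the structure can be written out in the same plain style as the regression equation earlier:

T = {H1, …, Hn} (law-like hypotheses), A = {A1, …, Am} (auxiliary conditions/assumptions),

Explanation/prediction: T & A ⊢ E, i.e. the explanandum E follows deductively from the premises T and A.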

This account of theories, models, explanations and predictions does not — of course — give a realistic account of actual scientific practices, but rather aspires to give an idealized account of them.

An obvious problem with the formal-logical requirements of what counts as H is the often severely restricted reach of the “law”. In the worst case it may not be applicable to any real, empirical, relevant situation at all. And if A is not true, then M doesn’t really explain (although it may predict) at all. Deductive arguments should be sound – valid and with true premises – so that we are assured of having true conclusions. Constructing, e.g., models assuming ‘rational’ expectations says nothing of situations where expectations are ‘non-rational.’

Most mainstream economic models are abstract and unrealistic, and present mostly non-testable hypotheses. How then are they supposed to tell us anything about the world we live in?

When confronted with the massive empirical refutations of almost every theory and model they have set up, mainstream economists usually react by saying that these refutations only hit A (the Lakatosian “protective belt”), and that by “successive approximations” it is possible to make the theories and models less abstract, more realistic, and – eventually – more readily testable and predictively accurate. Even if T & A1 doesn’t have much empirical content, if by successive approximation we reach, say, T & A25, we are to believe that we can finally reach robust and true predictions and explanations.

There are grave problems with this modeling view. What Hans Albert is most forcefully arguing with his “Model Platonism” critique of mainstream economics is that there is a tendency for modelers to use the method of successive approximations as a kind of “immunization,” taking for granted that there can never be any faults with the theory, so that explanatory and predictive failures hinge solely on the auxiliary assumptions. That the kind of theories and models used by mainstream economics should all be held non-defeasibly corroborated seems, however — to say the least — rather unwarranted.

Confronted with the massive empirical failures of their models and theories, mainstream economists often retreat into looking upon their models and theories as some kind of “conceptual exploration,” and give up any hopes/pretenses whatsoever of relating their theories and models to the real world. Instead of trying to bridge the gap between models and the world, one decides to look the other way.

To me this kind of scientific defeatism is equivalent to surrendering our search for understanding the world we live in. It can’t be enough to prove or deduce things in a model world. If theories and models do not directly or indirectly tell us anything of the world we live in – then why should we waste any of our precious time on them?

Roman Frydman on the ‘rational expectations’ hoax

21 November, 2015 at 18:04 | Posted in Economics | 2 Comments

Lynn Parramore: It seems obvious that both fundamentals and psychology matter. Why haven’t economists developed an approach to modeling stock-price movements that incorporates both?

Roman Frydman: It took a while to realize that the reason is relatively straightforward. Economists have relied on models that assume away unforeseeable change. As different as they are, rational expectations and behavioral-finance models represent the market with what mathematicians call a probability distribution – a rule that specifies in advance the chances of absolutely everything that will ever happen.

In a world in which nothing unforeseen ever happened, rational individuals could compute precisely whatever they had to know about the future to make profit-maximizing decisions. Presuming that they do not fully rely on such computations and resort to psychology would mean that they forego profit opportunities.

LP: So this is why I often hear that supporters of the Rational Expectations Hypothesis imagine people as autonomous agents that mechanically make decisions in order to maximize profits?

Yes! What has been misunderstood is that this purely computational notion of economic rationality is an artifact of assuming away unforeseeable change.

Imagine that I have a probabilistic model for stock prices and dividends, and I hypothesize that my model shows how prices and dividends actually unfold. Now I have to suppose that rational people will have exactly the same interpretation as I do — after all, I’m right and I have accounted for all possibilities … This is essentially the idea underpinning the Rational Expectations Hypothesis …

LP: So the only truth is the non-existence of the one true model?

RF: It’s the genuine openness that makes our ideas – and education – more exciting. Students can think about things in an open, yet structured way. We don’t lose the structure; we just renounce the pretense of exact knowledge.

Economics is not mechanistic. It requires understanding of history, politics, and psychology.  Some say that economics is an art, but NREH is actually rigorous economics. It simply recognizes that there’s a limit to what we can know.

Economists may fear that acknowledging this limit would make economic analysis unscientific. But that fear is rooted in a misconception of what the social scientific enterprise should be. Scientific knowledge generates empirically relevant regularities that are likely to be durable. In economics, that knowledge can only be qualitative, and grasping this insight is essential to its scientific status.  Until now, we have been wasting time looking for a model that would tell us exactly how the market works.

LP: Chasing the Holy Grail?

RF: Yes. It’s an illusion. We’ve trained generation after generation in this fruitless task, and it leads to extreme thinking. Fama and Shiller need not see themselves in irreconcilable opposition. There is no one truth. They both have had critical insights, and NREH acknowledges that and builds on their work.

Huffington Post

Roman Frydman is Professor of Economics at New York University and a long-time critic of the rational expectations hypothesis. In his seminal 1982 American Economic Review article Towards an Understanding of Market Processes: Individual Expectations, Learning, and Convergence to Rational Expectations Equilibrium — an absolute must-read for anyone with a serious interest in understanding what the issues are in the present discussion of rational expectations as a modeling assumption — he showed that models founded on the rational expectations hypothesis are inadequate as representations of economic agents’ decision making.

Those who want to build macroeconomics on microfoundations usually maintain that the only robust policies and institutions are those based on rational expectations and representative actors. As yours truly has tried to show in On the use and misuse of theories and models in economics, there is really no support for this conviction at all. On the contrary. If we want to have anything of interest to say on real economies, financial crises and the decisions and choices real people make, it is high time to place macroeconomic models built on representative actors and rational-expectations microfoundations in the dustbin of pseudo-science.


Models and the poverty of atomistic behavioural assumptions

20 November, 2015 at 10:37 | Posted in Economics | Leave a comment


Is macroeconomics for real?

19 November, 2015 at 12:10 | Posted in Economics | 4 Comments

Empirically, far from isolating a microeconomic core, real-business-cycle models, as with other representative-agent models, use macroeconomic aggregates for their testing and estimation. Thus, to the degree that such models are successful in explaining empirical phenomena, they point to the ontological centrality of macroeconomic and not to microeconomic entities … At the empirical level, even the new classical representative-agent models are fundamentally macroeconomic in content …

The nature of microeconomics and macroeconomics — as they are currently practiced — undermines the prospects for a reduction of macroeconomics to microeconomics. Both microeconomics and macroeconomics must refer to irreducible macroeconomic entities.

Kevin Hoover

Kevin Hoover has been writing on microfoundations for more than 25 years now, and is beyond any doubt the one economist/econometrician/methodologist who has thought most about the issue. It’s always interesting to compare his qualified and methodologically grounded assessment of the representative-agent-rational-expectations microfoundationalist program with the more or less apologetic views of freshwater economists like Robert Lucas:

Given what we know about representative-agent models, there is not the slightest reason for us to think that the conditions under which they should work are fulfilled. The claim that representative-agent models provide microfoundations succeeds only when we steadfastly avoid the fact that representative-agent models are just as aggregative as old-fashioned Keynesian macroeconometric models. They do not solve the problem of aggregation; rather they assume that it can be ignored. While they appear to use the mathematics of microeconomics, the subjects to which they apply that microeconomics are aggregates that do not belong to any agent. There is no agent who maximizes a utility function that represents the whole economy subject to a budget constraint that takes GDP as its limiting quantity. This is the simulacrum of microeconomics, not the genuine article …

[W]e should conclude that what happens to the microeconomy is relevant to the macroeconomy but that macroeconomics has its own modes of analysis … [I]t is almost certain that macroeconomics cannot be euthanized or eliminated. It shall remain necessary for the serious economist to switch back and forth between microeconomics and a relatively autonomous macroeconomics depending upon the problem in hand.

Instead of just methodologically sleepwalking into their models, modern followers of the Lucasian microfoundational program ought to do some reflection and at least try to come up with a sound methodological justification for their position. Just looking the other way won’t do. Writes Hoover:

The representative-agent program elevates the claims of microeconomics in some version or other to the utmost importance, while at the same time not acknowledging that the very microeconomic theory it privileges undermines, in the guise of the Sonnenschein-Debreu-Mantel theorem, the likelihood that the utility function of the representative agent will be any direct analogue of a plausible utility function for an individual agent … The new classicals treat [the difficulties posed by aggregation] as a non-issue, showing no appreciation of the theoretical work on aggregation and apparently unaware that earlier uses of the representative-agent model had achieved consistency with theory only at the price of empirical relevance.

Where ‘New Keynesian’ and New Classical economists think that they can rigorously deduce the aggregate effects of (representative) actors with their reductionist microfoundational methodology, they — as argued in my On the use and misuse of theories and models in economics — have to turn a blind eye to the emergent properties that characterize all open social and economic systems. The interaction between animal spirits, trust, confidence, institutions, etc., cannot be deduced or reduced to a question answerable on the individual level. Macroeconomic structures and phenomena also have to be analyzed on their own terms.

