Towering debts, rapidly rising taxes, constant and expensive wars, a debt burden surpassing 200% of GDP. What are the chances that a country with such characteristics would grow rapidly? Almost anyone would probably say ‘none’.
And yet, these are exactly the conditions under which the Industrial Revolution took place in Britain. Britain’s government debt went from 5% of GDP in 1700 to over 200% in 1820, it fought a war in one year out of three (most of them for little or no economic gain), and taxes increased rapidly but not enough to keep pace with the rise in spending …
War drove up spending and led to massive debt accumulation … Over the same period, Britain moved a large part of its population out of agriculture and into industry and services – out of the countryside and into cities. Population grew rapidly, and industrial output surged … As a result, Britain became the first country to break free from the shackles of the Malthusian regime …
How much of the situation in industrialising England has any relevance for the world as it is now? Is this a tale from a distant island and period of which we know little – to paraphrase Chamberlain – or does it hold lessons for the present? Financial frictions are still very prominent even in the most developed countries today; changing the profitability of revolutionary sectors should have first-order effects on the long-run rate of growth. The issuance of government debt may still crowd out investment that is, overall, inefficient.
These efficiency-enhancing effects of government debt may be all the more important in developing countries. There, the added benefits of debt that we did not discuss – such as providing a safe store of value, and a certain source of liquidity (Holmstrom and Tirole 1998) – may tilt the overall scoresheet even more in favour of government borrowing. None of this is to say that debts may not become excessive (Reinhart and Rogoff 2009) – but when we consider the dangers of debt, we should keep an eye on its potential benefits as well.
[h/t Brad DeLong]
As you would expect from an economist, the normative assertion in “X is wrong because it undermines the scientific method” is based on what I thought would be a shared premise: that the scientific method is a better way to determine what is true about economic activity than any alternative method, and that knowing what is true is valuable.
In conversations with economists who are sympathetic to the freshwater economists I singled out for criticism in my AEA paper on mathiness, it has become clear that freshwater economists do not share this premise. What I did not anticipate was their assertion that economists do not follow the scientific method, so it is not realistic or relevant to make normative statements of the form “we ought to behave like scientists.” …
Together, the evidence I summarize in these three posts suggests that freshwater economists differ sharply from other economists. This evidence strengthens my belief that the fundamental divide here is between the norms of political discourse and the norms of scientific discourse. Lawyers and politicians both engage in a version of the adversarial method, but they differ in another crucial way. In the suggestive terminology introduced by Jon Haidt in his book The Righteous Mind, lawyers are selfish, but politicians are groupish. What is distinctive about the freshwater economists is that their groupishness depends on a narrow definition of group that sharply separates them from all other economists. One unfortunate result of this narrow groupishness may be that the freshwater economists do not know the facts about how most economists actually behave.
Are macro-economists doomed to always “fight the last war”? Are they doomed to always be explaining the last problem we had, even as a completely different problem is building on the horizon? Well, maybe. But I think the hope is that microfoundations might prevent this. If you can really figure out some timeless rules that describe the behavior of consumers, firms, financial markets, governments, etc., then you might be able to predict problems before they happen. So far, that dream has not been realized. But maybe the current round of “financial friction macro” will produce something more timeless. I hope so.
So there we have it!
This is nothing but the age-old machine dream of neoclassical economics — an epistemologically founded cyborg dream that disregards the fundamental ontological fact that economies and societies are open — not closed — systems.
Unless we can show that the mechanisms or causes that we isolate and handle in our models are stable in the sense that they do not change when we “export” them to our “target systems,” they only hold under ceteris paribus conditions and are a fortiori of limited value for understanding, explaining or predicting real economic systems. Or as Keynes has it in his masterpiece Treatise on Probability (1921):
The kind of fundamental assumption about the character of material laws, on which scientists appear commonly to act, seems to me to be [that] the system of the material universe must consist of bodies … such that each of them exercises its own separate, independent, and invariable effect, a change of the total state being compounded of a number of separate changes each of which is solely due to a separate portion of the preceding state … Yet there might well be quite different laws for wholes of different degrees of complexity, and laws of connection between complexes which could not be stated in terms of laws connecting individual parts … If different wholes were subject to different laws qua wholes and not simply on account of and in proportion to the differences of their parts, knowledge of a part could not lead, it would seem, even to presumptive or probable knowledge as to its association with other parts … These considerations do not show us a way by which we can justify induction … No one supposes that a good induction can be arrived at merely by counting cases. The business of strengthening the argument chiefly consists in determining whether the alleged association is stable, when accompanying conditions are varied … In my judgment, the practical usefulness of those modes of inference … on which the boasted knowledge of modern science depends, can only exist … if the universe of phenomena does in fact present those peculiar characteristics of atomism and limited variety which appears more and more clearly as the ultimate result to which material science is tending.
The microfoundationalist’s fantasy has a powerful hold on macroeconomists. They recognize that an agent-by-agent reconstruction of the economy is not feasible, but they argue that it is something that we could do “in principle,” and that the in-principle claim warrants a particular theoretical strategy. The strategy is to start with the analysis of a single agent and to build up through ever more complex analyses to a whole economy …
The implicit argument in favor of representative-agent models as empirically relevant to aggregate economic data runs something like this: a representative-agent model is not itself an acceptable representation of the whole economy … but it is a first step in a program which step by step will inevitably bring the model closer to the agent-by-agent microeconomic model of the whole economy … I call this argument eschatological justification: it is the claim that there is a plausible in-principle game plan for a reductionist program and that the conclusions of early stages of that program are epistemically warranted by the presumed, but undemonstrated, success of the future implementation of the program in the fullness of time …
Analysis using the representative-agent model employs an analogy between the behavior of a single agent and the agents collectively in a whole economy. For example, the representative-agent is typically endowed with a utility function from precisely the same family as those typically assigned to individual agents in microeconomic analysis. Do we have any good reason to accept the analogy? Microeconomists have long known that the answer is, no.
Exact aggregation requires that utility functions be identical and homothetic … Translated into behavioral terms, it requires that every agent subject to aggregation have the same preferences (you must share the same taste for chocolate with Warren Buffett) and those preferences must be the same except for a scale factor (Warren Buffett with an income of $10 billion per year must consume one million times as much chocolate as Warren Buffett with an income of $10,000 per year). This is not the world that we live in. The Sonnenschein-Mantel-Debreu theorem shows theoretically that, in an idealized general-equilibrium model in which each individual agent has a regularly specified preference function, aggregate excess demand functions inherit only a few of the regularity properties of the underlying individual excess demand functions: continuity, homogeneity of degree zero (i.e., the independence of demand from simple rescalings of all prices), Walras’s law (i.e., the sum of the value of all excess demands is zero), and that demand rises as price falls (i.e., that demand curves, ceteris paribus income effects, are downward sloping) … These regularity conditions are very weak and put so few restrictions on aggregate relationships that the theorem is sometimes called “the anything goes theorem.”
The importance of the theorem for the representative-agent model is that it cuts off any facile analogy between even empirically well-established individual preferences and preferences that might be assigned to a representative agent to rationalize observed aggregate demand. The theorem establishes that, even in the most favorable case, there is a conceptual chasm between the microeconomic analysis and the macroeconomic analysis. The reasoning of the representative-agent modelers would be analogous to a physicist attempting to model the macro-behavior of a gas by treating it as a single, room-sized molecule. The theorem demonstrates that there is no warrant for the notion that the behavior of the aggregate is just the behavior of the individual writ large: the interactions among the individual agents, even in the most idealized model, shape in an exceedingly complex way the behavior of the aggregate economy. Not only does the representative-agent model fail to provide an analysis of those interactions, but it seems likely that they will defy an analysis that insists on starting with the individual, and it is certain that no one knows at this point how to begin to provide an empirically relevant analysis on that basis.
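The aggregation problem described above can be made concrete with a minimal numerical sketch. All names and numbers here are purely illustrative (they are not drawn from the quoted text): two consumers have Cobb-Douglas demands with different expenditure shares, so aggregate demand depends on how income is distributed, not just on its total. No single “representative” demand function of aggregate income alone can reproduce that.

```python
# Toy illustration of why exact aggregation fails with heterogeneous
# preferences. Two consumers with Cobb-Douglas demand for good 1:
#   x_i = a_i * m_i / p1
# where a_i is consumer i's expenditure share and m_i is income.
# (a_i, m_i and the numbers below are illustrative assumptions.)

def demand_good1(a, m, p1):
    """Individual Cobb-Douglas demand for good 1."""
    return a * m / p1

p1 = 1.0
a_A, a_B = 0.2, 0.8          # different tastes: aggregation condition violated
total_income = 100.0

# Hold TOTAL income fixed, but split it differently between the two:
for m_A in (50.0, 90.0):
    m_B = total_income - m_A
    aggregate = demand_good1(a_A, m_A, p1) + demand_good1(a_B, m_B, p1)
    print(f"income split ({m_A}, {m_B}): aggregate demand = {aggregate}")
```

With the same total income of 100, the (50, 50) split yields aggregate demand 50.0 while the (90, 10) split yields 26.0. Since aggregate demand changes while total income and prices stay fixed, no utility-maximizing representative agent facing only aggregate income can rationalize it, which is the distribution-dependence that exact aggregation rules out.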
Kevin Hoover has been writing on microfoundations for more than 25 years now, and is beyond any doubt the one economist/econometrician/methodologist who has thought most about the issue. It’s always interesting to compare his qualified and methodologically founded assessment of the representative-agent-rational-expectations microfoundationalist program with the more or less apologetic views of freshwater economists like Robert Lucas:
Given what we know about representative-agent models, there is not the slightest reason for us to think that the conditions under which they should work are fulfilled. The claim that representative-agent models provide microfoundations succeeds only when we steadfastly avoid the fact that representative-agent models are just as aggregative as old-fashioned Keynesian macroeconometric models. They do not solve the problem of aggregation; rather they assume that it can be ignored. While they appear to use the mathematics of microeconomics, the subjects to which they apply that microeconomics are aggregates that do not belong to any agent. There is no agent who maximizes a utility function that represents the whole economy subject to a budget constraint that takes GDP as its limiting quantity. This is the simulacrum of microeconomics, not the genuine article …
[W]e should conclude that what happens to the microeconomy is relevant to the macroeconomy but that macroeconomics has its own modes of analysis … [I]t is almost certain that macroeconomics cannot be euthanized or eliminated. It shall remain necessary for the serious economist to switch back and forth between microeconomics and a relatively autonomous macroeconomics depending upon the problem in hand.
Instead of just methodologically sleepwalking into their models, modern followers of the Lucasian microfoundational program ought to do some reflection and at least try to come up with a sound methodological justification for their position. Just looking the other way won’t do. Writes Hoover:
The representative-agent program elevates the claims of microeconomics in some version or other to the utmost importance, while at the same time not acknowledging that the very microeconomic theory it privileges undermines, in the guise of the Sonnenschein-Debreu-Mantel theorem, the likelihood that the utility function of the representative agent will be any direct analogue of a plausible utility function for an individual agent … The new classicals treat [the difficulties posed by aggregation] as a non-issue, showing no appreciation of the theoretical work on aggregation and apparently unaware that earlier uses of the representative-agent model had achieved consistency with theory only at the price of empirical relevance.
Where ‘New Keynesian’ and New Classical economists think that they can rigorously deduce the aggregate effects of (representative) actors with their reductionist microfoundational methodology, they — as argued in chapter 4 of my On the use and misuse of theories and models in economics — have to turn a blind eye to the emergent properties that characterize all open social and economic systems. The interaction between animal spirits, trust, confidence, institutions, etc., cannot be deduced from or reduced to a question answerable on the individual level. Macroeconomic structures and phenomena have to be analyzed on their own terms as well.
John Cochrane is obviously a big euro fan who doesn’t accept the conventional wisdom that the euro is a bad idea.
However, there seem to be some rather basic facts about optimal currency areas that all economists would perhaps be wise to consider …
The idea that the euro has “failed” is dangerously naive. The euro is doing exactly what its progenitor – and the wealthy 1%-ers who adopted it – predicted and planned for it to do.
That progenitor is former University of Chicago economist Robert Mundell. The architect of “supply-side economics” is now a professor at Columbia University, but I knew him through his connection to my Chicago professor, Milton Friedman, back before Mundell’s research on currencies and exchange rates had produced the blueprint for European monetary union and a common European currency.
At the time, though, Mundell was more concerned with his bathroom arrangements. The professor, who has both a Nobel Prize and an ancient villa in Tuscany, told me, incensed:
“They won’t even let me have a toilet. They’ve got rules that tell me I can’t have a toilet in this room! Can you imagine?”
As it happens, I can’t. But I don’t have an Italian villa, so I can’t imagine the frustrations of bylaws governing commode placement.
But Mundell, a can-do Canadian-American, intended to do something about it: come up with a weapon that would blow away government rules and labor regulations. (He really hated the union plumbers who charged a bundle to move his throne.)
“It’s very hard to fire workers in Europe,” he complained. His answer: the euro.
The euro would really do its work when crises hit, Mundell explained. Removing a government’s control over currency would prevent nasty little elected officials from using Keynesian monetary and fiscal juice to pull a nation out of recession.
“It puts monetary policy out of the reach of politicians,” he said. “[And] without fiscal policy, the only way nations can keep jobs is by the competitive reduction of rules on business.”
He cited labor laws, environmental regulations and, of course, taxes. All would be flushed away by the euro. Democracy would not be allowed to interfere with the marketplace – or the plumbing.
As another Nobelist, Paul Krugman, notes, the creation of the eurozone violated the basic economic rule known as “optimum currency area”. This was a rule devised by Bob Mundell.
That doesn’t bother Mundell. For him, the euro wasn’t about turning Europe into a powerful, unified economic unit. It was about Reagan and Thatcher.
“Ronald Reagan would not have been elected president without Mundell’s influence,” Jude Wanniski once wrote in the Wall Street Journal. The supply-side economics pioneered by Mundell became the theoretical template for Reaganomics – or as George Bush the Elder called it, “voodoo economics”: the magical belief in free-market nostrums that also inspired the policies of Mrs Thatcher.
Mundell explained to me that, in fact, the euro is of a piece with Reaganomics:
“Monetary discipline forces fiscal discipline on the politicians as well.”
And when crises arise, economically disarmed nations have little to do but wipe away government regulations wholesale, privatize state industries en masse, slash taxes and send the European welfare state down the drain.
Listen to BBC Radio 4, where Duncan Weldon tries to explain in what way Hyman Minsky’s thoughts on banking and finance offer a radical challenge to mainstream economic theory.
As a young research fellow in the United States, yours truly had the great pleasure and privilege of having Hyman Minsky as a teacher.
He was a great inspiration at the time.
He still is.
They try to explain business cycles solely as problems of information, such as asymmetries and imperfections in the information agents have. Those assumptions are just as arbitrary as the institutional rigidities and inertia they find objectionable in other theories of business fluctuations … I try to point out how incapable the new equilibrium business-cycle models are of explaining the most obvious observed facts of cyclical fluctuations … I don’t think that models so far from realistic description should be taken seriously as a guide to policy … I don’t think that there is a way to write down any model which on the one hand respects the possible diversity of agents in taste, circumstances, and so on, and on the other hand also grounds behavior rigorously in utility maximization and which has any substantive content to it.
Real Business Cycle theory basically says that economic cycles are caused by technology-induced changes in productivity. It says that employment goes up or down because people choose to work more when productivity is high and less when it’s low. This is of course nothing but pure nonsense — and how on earth those guys who promoted this theory (Thomas Sargent et consortes) could be awarded The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel is really beyond comprehension.
In yours truly’s History of Economic Theories (4th ed, 2007, p. 405) it was concluded that
the problem is that it has turned out to be very difficult to empirically verify the theory’s view on economic fluctuations as being effects of rational actors’ optimal intertemporal choices … Empirical studies have not been able to corroborate the assumption of the sensitivity of labour supply to changes in intertemporal relative prices. Most studies rather point to expected changes in real wages having only rather little influence on the supply of labour.
Rigorous models lacking relevance are not to be taken seriously. Or as Keynes had it — it is better to be vaguely right than precisely wrong …
Flash back to 1968 … The standard approach was the temporary equilibrium model that John Hicks developed in Value and Capital. In the temporary equilibrium model, time proceeds in a sequence of weeks. Each week, people meet in a market. They bring goods to market to trade. They also bring money and bonds. The crucial point of temporary equilibrium theory is that the future price is different from our belief of the future price. To complete a model of this kind, we must add an equation to explain how beliefs are determined. I call this equation, the belief function.
Now jump forward to 1972 … Lucas argued that, although we may not know the future exactly: we do know the exact probability distribution of future events. Following the work of John Muth, he called this idea, rational expectations.
Rational expectations is a powerful idea. If expectations are rational, then we do not need to know how people form their beliefs. The belief function that was so important in temporary equilibrium theory can be relegated to the dustbin of history. We don’t care how people form beliefs because whatever mechanism they use, that mechanism must be right on average. Who can argue with that?
That is a clever argument. But it suffers from a fatal flaw. General equilibrium models of money do not have a unique equilibrium. They have many. This problem was first identified by the English economist Frank Hahn, and despite the best attempts of the rational expectations school to ignore the problem: it reappears with an alarming regularity. Rational expectations economists who deny an independent role for beliefs are playing a game of whack-a-mole …
This is not an esoteric point. It is at the core of the question that I pose at the beginning of this post: If the Fed raises the interest rate will it cause more or less inflation? And it is a point that policy makers are well aware of as this piece by Fed President Jim Bullard makes clear.
What is the solution? It is one thing to recognize that the world is random, and quite another to assume that we have perfect knowledge. If we place our agents in models where many different things can happen, we must model the process by which they form beliefs.
I agree with Farmer on most of his critique of rational expectations. But although the multiplicity of equilibria certainly is one important criticism that can be levelled against the Muth-Lucas idea, I don’t think it is the core problem with rational expectations.
Assumptions in scientific theories/models are often chosen for reasons of (mathematical) tractability (and so are necessarily simplifying) or of theoretical consistency. But one should also remember that assumptions are selected for a specific purpose, and so the arguments put forward for selecting a specific set of assumptions (in economics shamelessly often totally non-existent) have to be judged against that purpose to check whether they are warranted.
This, however, narrows the assumption set only minimally: it is still necessary to decide which assumptions are innocuous and which are harmful, and what constitutes interesting/important assumptions from an ontological and epistemological point of view (explanation, understanding, prediction). Especially so if you intend to refer your theories/models to a specific target system, preferably the real world. To do this one should start by applying a Solowian Smell Test: Is the theory/model reasonable given what we know about the real world? If not, we shouldn’t care about it, and we shouldn’t apply it (remember that time is limited and economics is a science of scarcity and optimization …)
As Farmer notices, the concept of rational expectations was first developed by John Muth in an Econometrica article in 1961 — Rational expectations and the theory of price movements — and later — from the 1970s and onward — applied to macroeconomics. Muth framed his rational expectations hypothesis (REH) in terms of probability distributions:
Expectations of firms (or, more generally, the subjective probability distribution of outcomes) tend to be distributed, for the same information set, about the prediction of the theory (or the “objective” probability distributions of outcomes).
But Muth was also very open with the non-descriptive character of his concept:
[The hypothesis of rational expectations] does not assert that the scratch work of entrepreneurs resembles the system of equations in any way; nor does it state that predictions of entrepreneurs are perfect or that their expectations are all the same.
To Muth, its main usefulness was its generality and ability to be applicable to all sorts of situations irrespective of the concrete and contingent circumstances at hand.
Muth’s concept was later picked up by New Classical Macroeconomics, where it soon became the dominant model-assumption and has continued to be a standard assumption made in many neoclassical (macro)economic models – most notably in the fields of (real) business cycles and finance (being a cornerstone of the “efficient market hypothesis”).
REH basically says that people on the average hold expectations that will be fulfilled. This makes the economist’s analysis enormously simplistic, since it means that the model used by the economist is the same as the one people use to make decisions and forecasts of the future.
But, strictly seen, REH only applies to ergodic – stable and stationary stochastic – processes. If the world were ruled by ergodic processes, people could perhaps have rational expectations, but no convincing arguments have ever been put forward for this assumption being realistic.
Of course you can make assumptions based on tractability, but then you do also have to take into account the necessary trade-off in terms of the ability to make relevant and valid statements on the intended target system. Mathematical tractability cannot be the ultimate arbiter in science when it comes to modeling real world target systems. One could perhaps accept REH if it had produced lots of verified predictions and good explanations. But it has done nothing of the kind. Therefore the burden of proof is on those who still want to use models built on utterly unreal assumptions.
In models building on REH it is presupposed – basically for reasons of consistency – that agents have complete knowledge of all of the relevant probability distribution functions. And when one tries to incorporate learning in these models – trying to take the heat off some of the criticism levelled against REH to date – it is always a very restricted kind of learning that is considered: a learning in which truly unanticipated, surprising, new things never take place, but only rather mechanical updatings – increasing the precision of already existing information sets – of existing probability functions.
Nothing really new happens in these ergodic models, where the statistical representation of learning and information is nothing more than a caricature of what takes place in the real-world target system. This follows from taking for granted that people’s decisions can be portrayed as based on an existing probability distribution, which by definition implies knowledge of every possible event that can be thought of as taking place (otherwise it is, in a strict mathematical-statistical sense, not really a probability distribution).
But in the real world it is – as shown again and again by behavioural and experimental economics – common to mistake a conditional distribution for a probability distribution. Such mistakes are impossible to make in the kinds of economic analysis that build on REH. On average, REH agents are always correct. But truly new information will not only reduce the estimation error but actually change the entire estimation, and hence possibly the decisions made. To be truly new, information has to be unexpected. If it were not, it would simply be inferred from the already existing information set.
In the real world, it is not possible to just assume — as Farmer puts it — that “we do know the exact probability distribution of future events.” On the contrary. We can’t even assume that probability distributions are the right way to characterize, understand or explain acts and decisions made under uncertainty. When we simply do not know, when we have not got a clue, when genuine uncertainty prevails, REH simply is not “reasonable.” In those circumstances it is not a useful assumption, since under those circumstances the future is not like the past, and hence we cannot use the same probability distribution – if it exists at all – to describe both the past and the future.
Although in physics it may possibly not be straining credulity too much to model processes as taking place in “vacuum worlds” – where friction, time and history do not really matter – in the social and historical sciences it is obviously ridiculous. If societies and economies were frictionless ergodic worlds, why do econometricians fervently discuss things such as structural breaks and regime shifts? That they do is an indication of how unrealistic it is to treat open systems as analyzable with frictionless ergodic “vacuum concepts.”
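The structural-break point can be made concrete with a small simulation (the break date, the two means and the noise level below are my own illustrative choices, not from the text): an agent whose expectation is “rational” with respect to the pre-break distribution makes systematically biased forecasts once the regime shifts, because the old distribution no longer describes the future.

```python
import numpy as np

rng = np.random.default_rng(1)

# A series with a structural break: the mean shifts from 2.0 to 5.0
# at t = 100. (Break date and magnitudes are illustrative assumptions.)
mu = np.where(np.arange(200) < 100, 2.0, 5.0)
y = mu + rng.normal(scale=0.5, size=200)

# An agent whose "rational expectation" equals the pre-break mean
# keeps forecasting 2.0 throughout.
forecast = 2.0
errors = y - forecast

print("mean forecast error, pre-break :", errors[:100].mean())   # close to 0
print("mean forecast error, post-break:", errors[100:].mean())   # close to 3: systematic bias
```

Before the break the forecast errors average out to roughly zero, exactly as REH requires; after the break they are persistently positive, which is the kind of systematic disappointment REH rules out by construction.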
If the intention of REH is to help us explain real economies, it has to be evaluated from that perspective. A model or hypothesis without a specific applicability is not really deserving of our interest. Without strong evidence, all kinds of absurd claims and nonsense may pretend to be science. We have to demand more of a justification than rather watered-down versions of “anything goes” when it comes to rationality postulates. If one proposes REH, one also has to support its underlying assumptions. No such support is given. REH economists are not particularly interested in empirical examinations of how real choices and decisions are made in real economies. REH has been transformed from an – in principle – testable hypothesis into an irrefutable proposition.
As shown already by Paul Davidson in the 1980s, REH implies that relevant distributions have to be time-independent (which follows from the ergodicity implied by REH). This amounts to assuming that an economy is like a closed system with known stochastic probability distributions for all different events. In reality, it is straining one’s beliefs to try to represent economies as outcomes of stochastic processes. An existing economy is a single realization tout court, and hardly conceivable as one realization out of an ensemble of economy-worlds, since an economy can hardly be conceived as being completely replicated over time. It is really straining one’s imagination to see any similarity between these modelling assumptions and children’s expectations in the “tickling game.” In REH we are never disappointed in any other way than when we lose at the roulette wheel, since, as Muth puts it, “averages of expectations are accurate.” But real life is not an urn or a roulette wheel, so REH is a vastly misleading analogy of real-world situations. It may be a useful assumption – but only for non-crucial and non-important decisions that are possible to replicate perfectly (a throw of dice, a spin of the roulette wheel, etc.).
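The ergodicity point admits a toy simulation (all parameters are illustrative): for a stationary i.i.d. process, the time average of any single realization converges to the ensemble mean, so one observed history is informative about the whole ensemble. For a non-stationary random walk, time averages differ wildly across realizations, so observing one history tells you little about the “ensemble of economy-worlds.”

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 10_000, 500  # time steps per realization, number of realizations

# Stationary, ergodic case: i.i.d. shocks. The time average within
# each realization converges to the common ensemble mean (0 here),
# so the time averages barely differ across realizations.
iid = rng.normal(size=(N, T))
time_avgs_iid = iid.mean(axis=1)

# Non-stationary case: a random walk (cumulative sum of the same
# shocks). Its distribution changes over time, and the time average
# of a single realization does not settle on any common value.
walk = iid.cumsum(axis=1)
time_avgs_walk = walk.mean(axis=1)

print("spread of time averages, i.i.d. noise:", time_avgs_iid.std())
print("spread of time averages, random walk :", time_avgs_walk.std())
```

The spread of time averages is tiny in the i.i.d. case and enormous in the random-walk case: with a non-stationary process, the single realization we actually live through need not resemble the ensemble at all, which is exactly why assuming ergodicity is doing so much hidden work in REH.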
Most models building on the rational expectations hypothesis are time-invariant and so leave no room for any changes in expectations or their revision. The only imperfection of knowledge they admit of is included in the error terms: error terms that are assumed to be additive and to have a given and known frequency distribution, so that the models can still fully pre-specify the future even when incorporating these stochastic variables.
If we want to have anything of interest to say on real economies, financial crisis and the decisions and choices real people make, it is high time to replace the rational expectations hypothesis with more relevant and realistic assumptions concerning economic agents and their expectations.
Any model assumption — such as ‘rational expectations’ — that doesn’t pass the real world Smell Test is just silly nonsense on stilts.
Suppose someone sits down where you are sitting right now and announces to me that he is Napoleon Bonaparte. The last thing I want to do with him is to get involved in a technical discussion of cavalry tactics at the battle of Austerlitz. If I do that, I’m getting tacitly drawn into the game that he is Napoleon. Now, Bob Lucas and Tom Sargent like nothing better than to get drawn into technical discussions, because then you have tacitly gone along with their fundamental assumptions; your attention is attracted away from the basic weakness of the whole story. Since I find that fundamental framework ludicrous, I respond by treating it as ludicrous – that is, by laughing at it – so as not to fall into the trap of taking it seriously and passing on to matters of technique.
Because Finland has used the euro since its inception, the value of its currency cannot adjust in ways that would cushion the overall Finnish economy from those shocks. If Finland still had its old currency, the markka, it would have fallen in value on international markets. Suddenly other Finnish industries would have had a huge cost advantage over, say, German competitors, and they would have grown and created the jobs to help make up for those lost because of Nokia and the paper industry and Russian trade.
“Rubbish,” Mr. Stubb said. To evaluate the euro, you can’t just look at what he calls a current “rough patch” for the Finnish economy. You have to look at a longer time horizon. In his telling, the integration with Western Europe — of which the euro currency is a crucial element — deepened trade and diplomatic relations, making Finland both more powerful on the world stage and its industries better connected to the rest of the global economy. That made its people richer.
If anything is “rubbish” here, it’s Stubb’s pie in the sky …
On the whole, the euro has, thus far, gone much better than many U.S. economists had predicted. We survey how U.S. economists viewed European monetary unification from the publication of the Delors Report in 1989 to the introduction of euro notes and coins in January 2002. U.S. academic economists concentrated on whether a single currency was a good or bad thing, usually using the theory of optimum currency areas, and most were skeptical towards the single currency …
We suggest that the use of the optimum currency area paradigm was the main source of U.S. pessimism towards the single currency in the 1990s. The optimum currency area approach was biased towards the conclusion that Europe was far from being an optimum currency area. The optimum currency area paradigm inspired a static view, overlooking the time-consuming nature of the process of monetary unification. The optimum currency area view ignored the fact that Europe was facing a choice between permanently fixed exchange rates and semi-permanently fixed rates. The optimum currency area approach led to the view that the single currency was a political construct with little or no economic foundation. In short, by adopting the optimum currency area theory as their main engine of analysis, U.S. academic economists became biased against the euro.
It is surprising that U.S. economists, living in a large monetary union and enjoying the benefits from monetary integration, were (and still remain) skeptical towards the euro. U.S. economists took, and still take, the desirability of a single currency for their country to be self-evident. To our knowledge, no U.S. economist, inspired by the optimum currency area approach, has proposed to break up the United States into smaller regional currency areas. Perhaps we should take this as a positive sign for the future of the euro: in due time it will be accepted as the normal state of monetary affairs in Europe just like the dollar is in the United States.
Hmm. I somehow think there’s something missing here …