The end of Ricardo-Heckscher-Ohlin-Samuelson trade theory

16 December, 2016 at 14:39 | Posted in Economics | 8 Comments

In 1817 David Ricardo presented — in Principles — a theory that was meant to explain why countries trade and, based on the concept of opportunity cost, how the pattern of export and import is ruled by countries exporting goods in which they have comparative advantage and importing goods in which they have a comparative disadvantage.
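Ricardo's opportunity-cost logic can be made concrete with his own classic two-country, two-good illustration (the labour-hour figures below are the standard textbook numbers attributed to Principles; the code itself is just an illustrative sketch):

```python
# Ricardo's classic example: labour hours needed per unit of output.
# Portugal is absolutely more productive in BOTH goods, yet still
# gains from specializing where its opportunity cost is lowest.
labour = {
    "Portugal": {"wine": 80, "cloth": 90},
    "England":  {"wine": 120, "cloth": 100},
}

def opportunity_cost(country, good, other_good):
    """Units of other_good forgone to produce one unit of good."""
    return labour[country][good] / labour[country][other_good]

def comparative_advantage(good, other_good):
    """The country with the lower opportunity cost in `good` exports it."""
    return min(labour, key=lambda c: opportunity_cost(c, good, other_good))

print(comparative_advantage("wine", "cloth"))   # Portugal
print(comparative_advantage("cloth", "wine"))   # England
```

Portugal's opportunity cost of wine is 80/90 ≈ 0.89 units of cloth against England's 1.2, so Portugal exports wine and England cloth, even though Portugal needs fewer labour hours for both goods.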

Ricardo’s theory of comparative advantage, however, didn’t explain why the comparative advantage was the way it was. In the beginning of the 20th century, two Swedish economists — Eli Heckscher and Bertil Ohlin — presented a theory/model/theorem according to which comparative advantages arise from differences in factor endowments between countries. A country has a comparative advantage in producing goods that make intensive use of the production factors it has in abundance. Countries would therefore mostly export goods that use their abundant factors of production and import goods that mostly use factors of production that are scarce at home.
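The factor-endowment logic can be sketched in a toy model. All country names, goods, and numbers below are invented for illustration; the code merely encodes the theorem's prediction that the relatively capital-abundant country exports the capital-intensive good:

```python
# Toy Heckscher-Ohlin prediction. All figures are hypothetical.
endowments = {
    "Sweden": {"capital": 100, "labour": 50},   # relatively capital-abundant
    "India":  {"capital": 40,  "labour": 200},  # relatively labour-abundant
}

# Capital/labour ratio used per unit of each good (hypothetical).
intensity = {"machinery": 4.0, "textiles": 0.5}

def predicted_exports(a, b):
    """H-O prediction: the country with the higher capital/labour
    endowment ratio exports the more capital-intensive good."""
    kl = lambda c: endowments[c]["capital"] / endowments[c]["labour"]
    cap_good = max(intensity, key=intensity.get)   # machinery
    lab_good = min(intensity, key=intensity.get)   # textiles
    if kl(a) > kl(b):
        return {a: cap_good, b: lab_good}
    return {a: lab_good, b: cap_good}

print(predicted_exports("Sweden", "India"))
# {'Sweden': 'machinery', 'India': 'textiles'}
```

As the post goes on to note, empirical tests find that actually observed trade patterns deviate substantially from this tidy prediction.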

The Heckscher-Ohlin theorem — as do the elaborations on it by e.g. Vanek, Stolper and Samuelson — builds on a series of restrictive and unrealistic assumptions. The most critically important — besides the standard market-clearing equilibrium assumptions — are

(1) Countries use identical production technologies.

(2) Production takes place with a constant returns to scale technology.

(3) Within countries, factors of production are more or less infinitely substitutable.

(4) Factor-prices are equalised (the Stolper-Samuelson extension of the theorem).

These assumptions are, as almost all empirical testing of the theorem has shown, totally unrealistic. That is, they are empirically false. 

That said, one could indeed wonder why on earth anyone should be interested in applying this theorem to real-world situations. Like so many other mainstream mathematical models taught to economics students today, this theorem has very little to do with the real world.

Using false assumptions, mainstream modellers can derive whatever conclusions they want. Wanting to show that ‘free trade is great’, just assume e.g. that ‘all economists from Chicago are right’ and that ‘all economists from Chicago consider free trade to be great’. The conclusion follows by deduction — but is of course factually wrong. Models and theories built on that kind of reasoning are nothing but a pointless waste of time.

The logic behind classical free trade is that all can benefit when countries specialize in producing those things in which they have comparative advantage. The necessary requirement is that the means of production (capital and technology) are internationally immobile and stuck in each country. That is what globalization has undone.

Several years ago Jack Welch, former CEO of General Electric, captured the new reality when he talked of ideally having every plant you own on a barge. The economic logic was that factories should float between countries to take advantage of lowest costs …

Previously, when corporations were nationally based, profit maximization by business contributed to national economic success by ensuring efficient resource use. Today, corporations still maximize profits, but they do so from the standpoint of their global operations. Consequently, what is good for corporations may not be good for country …

Thomas Palley




  1. “That said, one could indeed wonder why on earth anyone should be interested in applying this theorem to real world situations.”
    One could indeed wonder exactly that. That would seem like an excellent place to start your enquiry.
    “Models and theories built on that kind of reasoning are nothing but a pointless waste of time.”
    Models and theories which legitimize the dominance of rentier interests in the political discourse are *anything but* a pointless waste of time for the rentiers.
    “This political propaganda is worthless! It isn’t even true!” –Lars Syll

  2. Again, one or the other of these is true:
    A) Improvements in the methodology of macroeconomic modeling can result in materially improved predictive power (which implies an unexplained existence of an eyewateringly large, yet unexploited, arbitrage opportunity in bond markets)
    B) Improvements in the methodology of macroeconomic modeling cannot result in materially improved predictive power (which implies an unexplained economically non-rational motivation for the ongoing development of macroeconomic theory)
    A simple question: A or B?

    • Since I weighed in on the vanity of rigor previously, I suppose I should follow up.
      The Heckscher-Ohlin theory has been used extensively in the empirical analysis of international trade: it does not do a good job of explaining the observed patterns of international trade, as has been demonstrated in hundreds of published papers.
      In any real science or scholarly discipline, this would be a valuable result: we have learned that the world is not like this, that differences in factor endowments do not go very far in explaining the actually observed patterns of trade and exchange. And, then we would go back to the drawing board so to speak and revise our thinking, to develop better explanations.
      From the standpoint of methodology or epistemology, given that the limits of explanatory power have been explored so extensively in the empirical literature, the relevant question is not why economists don’t move on — to a large extent theoretical economists moved on more than a generation ago; the new trade theory of increasing returns to scale and network effects is itself old news by now — but where and when economists remain stuck in Heckscher-Ohlin. In textbooks? In polemics? And, when they are stuck there: why is theory allowed to trump the wealth of empirical research that contradicts it?
      Why is an a priori theory used descriptively?
      A theory like Heckscher-Ohlin can be used to construct an operational model that can be quite useful for organizing an accounting analysis of the data, by providing a contrasting framework that highlights deviations from stylized expectations. The unrealism of H-O’s simple assumptions can actually be a useful heuristic in the circumstance, not unlike an accountant imposing straight-line depreciation. This is why, I suspect, there are hundreds and hundreds of published papers that have used H-O to build operational models to use as a kind of data filter and accounting device. The research papers are not testing Heckscher-Ohlin, per se; most economists are not that stupid. They are testing the economy, the data.
      Given the wealth of data showing the actual patterns of trade as well as theories that do a better job of explaining those patterns, why aren’t economists informing the world about the actual patterns and drivers of trade? Why doesn’t knowledge of the economy trump knowledge of neoclassical economic theory?

    • I did not intend my previous comment as a reply to your comment. Sorry.

      But, I will answer your question.

      Re: A) — It is a mistake to imagine that arbitrage defeats predictive power; on the contrary, predictive power enables arbitrage to more efficiently apply the behavioral discipline necessary to get a predictable outcome. The economy is predictable to the extent that institutions successfully prescribe and regulate economic behavior. Arbitrage is a means of control, not a gaming of control. If we didn’t have effective arbitrage in financial markets, for example, prices would not be informationally efficient. Better modeling, better institutions, more efficient control of behavior, better prediction.

      Re: B) A lot of actual macroeconomic modeling, as I am sure you are aware, is a socially pathological exercise in agnotology, i.e. culturally induced ignorance or doubt.

      • Bruce, I believe you may have misread my point, which is not that “arbitrage defeats predictive power”, but rather that “predictive advantage enables arbitrage”.
        Let’s say there is a bond hedge fund (let’s call it “Even Longer Term Capital Management”) which has available a model which more accurately predicts inflation, employment, consumption, productivity, etc., than the models used by other market participants to price debt. In this instance ELTCM is in a position to arbitrage risk/yield pricing due to the ability to more narrowly bound outcomes than their competitors.
        From the historical example of LTCM and similar quant funds, the value of such a model would be (conservatively) in the tens of billions of dollars.
        Lars argues (persuasively) that contemporary macroeconomic modeling and econometrics is rife with obviously unsound statistical methodology, unsound scientific methodology, unsound methodological methodology, and so on.
        If fixing these shortcomings were to result in materially improved predictive power, there would be a $10s billion paycheck in it for anyone who worked it out.
        So, the unexplained question then is why has no one yet cashed that paycheck?

      • I do not think I agree that having a better model would necessarily pay off in the context of a well-functioning financial market. Roger Ibbotson used to tell a story about how, when Black and Scholes were still working out their eponymous options pricing model, he had the bright idea to acquire a seat on one of the Chicago exchanges where futures and options were regularly traded and to combine the (pre-publication) model with one of the then somewhat exotic programmable electronic calculators in order to — as he imagined it — clean up. It didn’t work out quite as he imagined: the regular traders seemed to adapt almost instantaneously to the innovation, eroding his supposed “edge”, and he was never able to find a way to operate that yielded any noticeable net return to his method. He did allow that he unexpectedly made a considerable return, though, on the appreciation in the market value of his exchange seat as trading volume increased, so there’s that.

        Like any other investment in a public good, an improved information technology is likely to see its “return” dissipate and that, from a public benefit viewpoint, is as it should be. The gains are no more real for being immune to private capture and one might argue that immunity to private capture can be a prerequisite for fully realizing the benefit.
        LTCM would seem to be another example that would tell against your thesis, as LTCM was dutch-booked out of its billions in the end by those invested in the “liquidity” LTCM had relied on to be nearly costlessly provided, becoming a black hole that very nearly destabilized the whole system.

  3. Today, most corporations are really based on so-called durable competitive advantage or absolute advantage, not on nondurable comparative advantage.

  4. Well, it’s empirically ‘proven’ now.
    Seems to me like one big circular argument, but I can’t put my finger on it. Could you? Thanks, Hans Amsterdam
