We must learn WHY the argument for revealed preference, which deceived Samuelson, is wrong. As per standard positivist ideas, preferences are internal to the heart and unobservable; hence they cannot be used in scientific theories. So Samuelson came up with the idea of using observable choices – unobservable preferences are revealed by observable choices … Yet the basic argument is wrong; one cannot eliminate the unobservable preference from economic theories. Understanding this error, which Samuelson failed to do, is the first knot to unravel, in order to clear our minds and hearts of the logical positivist illusions.
Asad Zaman’s blog post reminded me of an article on revealed preference theory that yours truly wrote almost twenty-five years ago, published in History of Political Economy (no. 25, 1993).
Paul Samuelson wrote me a kind letter, informing me that he was the one who had recommended it for publication. But although he liked much of it, he also wrote a comment — published in the same volume of HOPE — saying:
Between 1938 and 1947, and since then as Pålsson Syll points out, I have been scrupulously careful not to claim for revealed preference theory novelties and advantages it does not merit. But Pålsson Syll’s readers must not believe that it was all redundant fuss about not very much.
Notwithstanding Samuelson’s comment, I do still think it basically was much fuss about ‘not very much.’
In 1938 Paul Samuelson offered a replacement for the then accepted theory of utility. The cardinal utility theory was discarded with the following words: “The discrediting of utility as a psychological concept robbed it of its possible virtue as an explanation of human behaviour in other than a circular sense, revealing its emptiness as even a construction” (1938, 61). According to Samuelson, the ordinalist revision of utility theory was, however, not drastic enough. The introduction of the concept of a marginal rate of substitution was considered “an artificial convention in the explanation of price behaviour” (1938, 62). One ought to analyze the consumer’s behaviour without having recourse to the concept of utility at all, since this did not correspond to directly observable phenomena. The old theory was criticized mainly from a methodological point of view, in that it used non-observable concepts and propositions.
The new theory should avoid this and thereby shed “the last vestiges of utility analysis” (1938, 62). Its main feature was a consistency postulate which said “if an individual selects batch one over batch two, he does not at the same time select two over one” (1938, 65). From this “perfectly clear” postulate and the assumptions of given demand functions and that all income is spent, Samuelson (1938; 1938a) could derive all the main results of ordinal utility theory (single-valuedness and homogeneity of degree zero of demand functions, and negative semi-definiteness of the substitution matrix).
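The consistency postulate — what later became known as the Weak Axiom of Revealed Preference — is concrete enough to check mechanically against observed data. As a purely illustrative sketch (mine, not Samuelson’s, with hypothetical data and function names), here is a minimal Python check for pairwise violations, where bundle x is ‘revealed preferred’ to y whenever y was affordable at the prices at which x was chosen:

```python
def warp_violations(observations):
    """Find pairs of observations that violate the Weak Axiom of
    Revealed Preference. `observations` is a list of (prices, bundle)
    tuples: the bundle the consumer chose at those prices."""
    def cost(prices, bundle):
        return sum(p * q for p, q in zip(prices, bundle))

    violations = []
    for i, (p_i, x_i) in enumerate(observations):
        for j, (p_j, x_j) in enumerate(observations):
            if i >= j or x_i == x_j:
                continue
            # x_i revealed preferred to x_j AND x_j revealed preferred to x_i
            if (cost(p_i, x_j) <= cost(p_i, x_i)
                    and cost(p_j, x_i) <= cost(p_j, x_j)):
                violations.append((i, j))
    return violations

# Hypothetical data: each bundle was affordable when the other was chosen.
data = [((1.0, 2.0), (2.0, 2.0)),   # at prices (1,2) the consumer picks (2,2)
        ((2.0, 1.0), (4.0, 1.0))]   # at prices (2,1) the consumer picks (4,1)
print(warp_violations(data))        # the two choices contradict each other
```

Sippel’s experiments, discussed below, amount to running exactly this kind of test on real subjects — and finding a considerable number of violations.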
In 1948 Samuelson no longer considered his “revealed preference” approach a new theory. It was then seen as a means of revealing consistent preferences and enhancing the acceptability of the ordinary ordinal utility theory by showing how one could construct an individual’s indifference map purely by observing his market behaviour. Samuelson concluded his article by saying that “[t]he whole theory of consumer’s behavior can thus be based upon operationally meaningful foundations in terms of revealed preference” (1948, 251). As has since been shown, this is true only if we inter alia assume the consumer to be rational and to have unchanging preferences that are complete, asymmetrical, non-satiated, strictly convex, and transitive (or continuous). The theory, originally intended as a substitute for utility theory, has, as Houthakker clearly notes, “tended to become complementary to the latter” (1950, 159).
Only a couple of years later, Samuelson held the view that he was in a position “to complete the programme begun a dozen years ago of arriving at the full empirical implications for demand behaviour of the most general ordinal utility analysis” (1950, 369). The introduction of Houthakker’s amendment assured integrability, and by that the theory had according to Samuelson been “brought to a close” (1950, 355). Starting “from a few logical axioms of demand consistency … [one] could derive the whole of the valid utility analysis as corollaries” (1950, 370). Since Samuelson had shown the “complete logical equivalence” of revealed preference theory with the regular “ordinal preference approach,” it follows that “in principle there is nothing to choose between the formulations” (1953, 1). According to Houthakker (1961, 709), the aim of the revealed preference approach is “to formulate equivalent systems of axioms on preferences and on demand functions.”
But if this is all, what has revealed preference theory then achieved? As it turns out, ordinal utility theory and revealed preference theory are – as Wong puts it – “not two different theories; at best, they are two different ways of expressing the same set of ideas” (2006, 118). And with regard to the theoretically solvable problem, we may still concur with Hicks that “there is in practice no direct test of the preference hypothesis” (1956, 58).
Sippel’s experiments showed “a considerable number of violations of the revealed preference axioms” (1997, 1442) and that from a descriptive point of view – as a theory of consumer behaviour – the revealed preference theory was of a very limited value.
Today it seems as though the proponents of revealed preference theory have given up the original 1938 attempt to build a theory on nothing but observable facts, and settled instead for the 1950 version of establishing “logical equivalences.”
Mas-Colell et al. conclude their presentation of the theory by noting that “for the special case in which choice is defined for all subsets of X [the set of alternatives], a theory based on choice satisfying the weak axiom is completely equivalent to a theory of decision making based on rational preferences” (1995, 14).
When talking of determining people’s preferences through observation, Varian, for example, has “to assume that the preferences will remain unchanged” and adopts “the convention that … the underlying preferences … are known to be strictly convex.” He further postulates that the “consumer is an optimizing consumer.” If we are “willing to add more assumptions about consumer preferences, we get more precise estimates about the shape of indifference curves” (2006, 119-123, author’s italics). Given these assumptions, and that the observed choices satisfy the consistency postulate as amended by Houthakker, one can always construct preferences that “could have generated the observed choices.” This does not, however, prove that the constructed preferences really generated the observed choices, “we can only show that observed behavior is not inconsistent with the statement. We can’t prove that the economic model is correct.”
Kreps holds a similar view, pointing to the fact that revealed preference theory is “consistent with the standard preference-based theory of consumer behavior” (1990, 30).
The theory of consumer behavior has been developed in great part as an attempt to justify the idea of a downward-sloping demand curve. Forerunners such as Cournot (1838) and Cassel (1899) merely asserted this law of demand. The utility theorists tried to deduce it from axioms and postulates on individuals’ economic behaviour. Revealed preference theory tried to build a new theory and to put it in operational terms, but ended up with a theory logically equivalent to the old one. As such it also shares its shortcomings: it is empirically nonfalsifiable and based on unrestricted universal statements.
As Kornai (1971, 133) remarked, “the theory is empty, tautological. The theory reduces to the statement that in period t the decision-maker chooses what he prefers … The task is to explain why he chose precisely this alternative rather than another one.” Further, pondering Amartya Sen’s verdict of the revealed preference theory as essentially underestimating “the fact that man is a social animal and his choices are not rigidly bound to his own preferences only” (1982, 66) and Georgescu-Roegen’s (1966, 192-3) apt description, a harsh assessment of what the theory accomplished should come as no surprise:
Lack of precise definition should not … disturb us in moral sciences, but improper concepts constructed by attributing to man faculties which he actually does not possess, should. And utility is such an improper concept … [P]erhaps, because of this impasse … some economists consider the approach offered by the theory of choice as a great progress … This is simply an illusion, because even though the postulates of the theory of choice do not use the terms ‘utility’ or ‘satisfaction’, their discussion and acceptance require that they should be translated into the other vocabulary … A good illustration of the above point is offered by the ingenious theory of the consumer constructed by Samuelson.
Nothing lost, nothing gained.
Cassel, Gustav 1899. “Grundriss einer elementaren Preislehre.” Zeitschrift für die gesamte Staatswissenschaft 55.3:395-458.
Cournot, Augustin 1838. Recherches sur les principes mathématiques de la théorie des richesses. Paris. Translated by N. T. Bacon 1897 as Researches into the Mathematical Principles of the Theory of Wealth. New York: The Macmillan Company.
Georgescu-Roegen, Nicholas 1966. “Choice, Expectations, and Measurability.” In Analytical Economics: Issues and Problems. Cambridge, Massachusetts: Harvard University Press.
Hicks, John 1956. A Revision of Demand Theory. Oxford: Clarendon Press.
Houthakker, Hendrik 1950. “Revealed Preference and the Utility Function.” Economica 17 (May):159-74.
–1961. “The Present State of Consumption Theory.” Econometrica 29 (October):704-40.
Kornai, Janos 1971. Anti-equilibrium. London: North-Holland.
Kreps, David 1990. A Course in Microeconomic Theory. New York: Harvester Wheatsheaf.
Mas-Colell, Andreu et al. 1995. Microeconomic Theory. New York: Oxford University Press.
Samuelson, Paul 1938. “A Note on the Pure Theory of Consumer’s Behaviour.” Economica 5 (February):61-71.
–1938a. “A Note on the Pure Theory of Consumer’s Behaviour: An Addendum.” Economica 5 (August):353-4.
–1947. Foundations of Economic Analysis. Cambridge, Massachusetts: Harvard University Press.
–1948. “Consumption Theory in Terms of Revealed Preference.” Economica 15 (November):243-53.
–1950. “The Problem of Integrability in Utility Theory.” Economica 17 (November):355-85.
–1953. “Consumption Theorems in Terms of Overcompensation rather than Indifference Comparisons.” Economica 20 (February):1-9.
Sen, Amartya 1982. Choice, Welfare and Measurement. London: Basil Blackwell.
Sippel, Reinhard 1997. “An experiment on the pure theory of consumer’s behaviour.” Economic Journal 107:1431-44.
Varian, Hal 2006. Intermediate Microeconomics: A Modern Approach. (7th ed.) New York: W. W. Norton & Company.
Wong, Stanley 2006. The Foundations of Paul Samuelson’s Revealed Preference Theory. (Revised ed.) London: Routledge & Kegan Paul.
To his credit Keynes was not, in contrast to Samuelson, a formalist committed to mathematical economics. Keynes wanted models, but for him, building them required ‘a vigilant observation of the actual working of our system.’ Indeed, ‘to convert a model into a quantitative formula is to destroy its usefulness as an instrument of thought.’ That conclusion can be strongly endorsed!
When it comes to my economics training, I’m a late bloomer. My primary training is in evolutionary theory, which I have used as a navigational guide to study many human-related topics, such as religion. But I didn’t tackle economics until 2008 …
At the time I had no way to answer this question. Economic jargon mystified me—an embarrassing confession, since I am fully at home with mathematical and computer simulation models. Economists were very smart, very powerful, and they spoke a language that I didn’t understand. They won Nobel Prizes.
Nevertheless, I had faith that evolution could say something important about the regulatory systems that economists preside over, even if I did not yet know the details …
Fortunately, I had a Fellowship of the Ring to rely upon … Some of my closest colleagues are highly respected economists, Herbert Gintis, Samuel Bowles, and Ernst Fehr …
I already knew from their work that the main body of modern economics, called neoclassical economics, was being challenged by a new school of thought called experimental and behavioural economics …
I was disappointed. My colleagues such as Herb, Sam, and Ernst confirmed my own impression: They appreciated the relevance of evolution but were a tiny minority among behavioral and experimental economists, who in turn were a tiny minority among neoclassical economists …
The more I learned about economics, the more I discovered a landscape that is surpassingly strange. Like the land of Mordor, it is dominated by a single theoretical edifice that arose like a volcano early in the 20th century and still dominates the landscape. The edifice is based upon a conception of human nature that is profoundly false, defying the dictates of common sense, before we even get to the more refined dictates of psychology and evolutionary theory. Yet, efforts to move the theory in the direction of common sense are stubbornly resisted.
[h/t Tom Hickey]
‘If you really want something, you have to be prepared to work very hard, take advantage of opportunity, and above all — never give up.’
[h/t Ulrika Hall]
Still absolutely breathtakingly great!
Distinguished social psychologist Richard E. Nisbett has a somewhat atypical aversion to multiple regression analysis. In his Intelligence and How to Get It (Norton 2011) he wrote (p. 17):
Researchers often determine the individual’s contemporary IQ or IQ earlier in life, socioeconomic status of the family of origin, living circumstances when the individual was a child, number of siblings, whether the family had a library card, educational attainment of the individual, and other variables, and put all of them into a multiple-regression equation predicting adult socioeconomic status or income or social pathology or whatever. Researchers then report the magnitude of the contribution of each of the variables in the regression equation, net of all the others (that is, holding constant all the others). It always turns out that IQ, net of all the other variables, is important to outcomes. But … the independent variables pose a tangle of causality – with some causing others in goodness-knows-what ways and some being caused by unknown variables that have not even been measured. Higher socioeconomic status of parents is related to educational attainment of the child, but higher-socioeconomic-status parents have higher IQs, and this affects both the genes that the child has and the emphasis that the parents are likely to place on education and the quality of the parenting with respect to encouragement of intellectual skills and so on. So statements such as “IQ accounts for X percent of the variation in occupational attainment” are built on the shakiest of statistical foundations. What nature hath joined together, multiple regressions cannot put asunder.
And now he is back with a half-hour lecture — The Crusade Against Multiple Regression Analysis — posted on The Edge website a week ago (watch the lecture here).
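Nisbett’s ‘tangle of causality’ is easy to exhibit numerically. The following sketch is my own illustration, not anything from Nisbett’s lecture, and every variable name is hypothetical: a background variable (think parental SES) drives both an IQ proxy and income, the proxy has no causal effect on income at all, and yet a bivariate regression dutifully reports a sizeable ‘effect’:

```python
import random

random.seed(0)
n = 50_000

# Hypothetical setup: 'background' (e.g. parental SES) drives both the
# IQ proxy and income; the IQ proxy itself has NO causal effect on income.
background = [random.gauss(0, 1) for _ in range(n)]
iq_proxy = [b + random.gauss(0, 0.5) for b in background]
income = [2.0 * b + random.gauss(0, 0.5) for b in background]

# Simple bivariate OLS of income on iq_proxy, omitting the confounder:
mean_x = sum(iq_proxy) / n
mean_y = sum(income) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(iq_proxy, income))
         / sum((x - mean_x) ** 2 for x in iq_proxy))
print(f"estimated 'effect' of iq_proxy: {slope:.2f}")  # sizeable, though the true causal effect is zero
```

The regression is not wrong arithmetically — it is simply answering a different question than the causal one being asked of it.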
Now, I think that what Nisbett says is right as far as it goes, although his argumentation would certainly have been strengthened had he elaborated more on the methodological question of causality, or at least given some mathematical-statistical-econometric references. Unfortunately, his alternative approach is no more convincing than regression analysis. Like so many other social scientists today, Nisbett seems to think that randomization may solve the empirical problem. By randomizing we are supposed to get different “populations” that are homogeneous with regard to all variables except the one we think is a genuine cause, and thereby to be able to do without actually knowing what all these other factors are.
That is attainable if you succeed in performing an ideal randomization with different treatment and control groups. But it presupposes that you really have been able to establish – and not just assume – that all causes other than the putative one have the same probability distribution in the treatment and control groups, and that assignment to treatment or control is independent of all other possible causal variables.
Unfortunately, real experiments and real randomizations seldom or never achieve this. So, yes, we may do without knowing all causes, but it takes ideal experiments and ideal randomizations to do that, not real ones.
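What an ideal randomization buys you can be shown in a toy simulation (again my own construction, with hypothetical names, not anything from the post’s sources): when assignment really is independent of every other cause, a bare difference in means recovers the true effect without the hidden cause ever being measured. Real experiments only approximate this.

```python
import random

random.seed(1)
n = 50_000

# A hidden cause affects the outcome; the treatment's true effect is 1.0.
treated, control = [], []
for _ in range(n):
    hidden = random.gauss(0, 1)
    assign_treat = random.random() < 0.5   # ideal random assignment
    outcome = (1.0 if assign_treat else 0.0) + 2.0 * hidden + random.gauss(0, 1)
    (treated if assign_treat else control).append(outcome)

# Assignment is independent of 'hidden', so the naive difference in means
# estimates the causal effect without measuring 'hidden' at all.
effect = sum(treated) / len(treated) - sum(control) / len(control)
print(f"estimated treatment effect: {effect:.2f}")  # close to the true 1.0
```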
As I have argued — e.g. here — that means that in practice we have to have sufficient background knowledge to deduce causal knowledge. Without old knowledge, we can’t get new knowledge – and, no causes in, no causes out.
Nisbett is well worth reading and listening to, but on the issue of the shortcomings of multiple regression analysis, no one sums it up better than eminent mathematical statistician David Freedman in his Statistical Models and Causal Inference:
If the assumptions of a model are not derived from theory, and if predictions are not tested against reality, then deductions from the model must be quite shaky. However, without the model, the data cannot be used to answer the research question …
In my view, regression models are not a particularly good way of doing empirical work in the social sciences today, because the technique depends on knowledge that we do not have. Investigators who use the technique are not paying adequate attention to the connection – if any – between the models and the phenomena they are studying. Their conclusions may be valid for the computer code they have created, but the claims are hard to transfer from that microcosm to the larger world …
Regression models often seem to be used to compensate for problems in measurement, data collection, and study design. By the time the models are deployed, the scientific position is nearly hopeless. Reliance on models in such cases is Panglossian …
Given the limits to present knowledge, I doubt that models can be rescued by technical fixes. Arguments about the theoretical merit of regression or the asymptotic behavior of specification tests for picking one version of a model over another seem like the arguments about how to build desalination plants with cold fusion as the energy source. The concept may be admirable, the technical details may be fascinating, but thirsty people should look elsewhere …
Causal inference from observational data presents many difficulties, especially when underlying mechanisms are poorly understood. There is a natural desire to substitute intellectual capital for labor, and an equally natural preference for system and rigor over methods that seem more haphazard. These are possible explanations for the current popularity of statistical models.
Indeed, far-reaching claims have been made for the superiority of a quantitative template that depends on modeling – by those who manage to ignore the far-reaching assumptions behind the models. However, the assumptions often turn out to be unsupported by the data. If so, the rigor of advanced quantitative methods is a matter of appearance rather than substance.
Paul Krugman’s recent posts have been most peculiar. Several have looked uncomfortably like special pleading for political figures he likes, notably Hillary Clinton. He has, in my judgement, stooped rather far down in attacking people well below him in the public relations food chain …
Perhaps the most egregious and clearest cut case is his refusal to address the substance of a completely legitimate, well-documented article by David Dayen outing Krugman, and to a lesser degree, his fellow traveler Mike Konczal, in abjectly misrepresenting Sanders’ financial reform proposals …
The Krugman that was early to stand up to the Iraq War, who was incisive before and during the crisis has been very much in absence since Obama took office. It’s hard to understand the loss of intellectual independence. That may not make Krugman any worse than other Democratic party apparatchiks, but he continues to believe he is other than that, and the lashing out at Dayen looks like a wounded denial of his current role. Krugman and Konczal need to be seen as what they are: part of the Vichy Left brand cover for the Democratic party messaging apparatus. Krugman, sadly, has chosen to diminish himself for a not very worthy cause.