Economics — nothing but an ideological paranoia

31 August, 2017 at 14:57 | Posted in Economics | Comments Off on Economics — nothing but an ideological paranoia


Economists have a tendency to get enthralled by their theories and models and forget that behind the figures and abstractions there is a real world with real people. Real people who have to pay dearly for fundamentally flawed doctrines and recommendations.

Let’s make sure the consequences will rest on the conscience of those economists.


Damon Runyon’s Law

31 August, 2017 at 12:59 | Posted in Economics | 2 Comments

The eminently quotable Robert Solow says it all:

To get right down to it, I suspect that the attempt to construct economics as an axiomatically based hard science is doomed to fail. There are many partially overlapping reasons for believing this …

A modern economy is a very complicated system. Since we cannot conduct controlled experiments on its smaller parts, or even observe them in isolation, the classical hard-science devices for discriminating between competing hypotheses are closed to us. The main alternative device is the statistical analysis of historical time-series. But then another difficulty arises. The competing hypotheses are themselves complex and subtle. We know before we start that all of them, or at least many of them, are capable of fitting the data in a gross sort of way. Then, in order to make more refined distinctions, we need long time-series observed under stationary conditions.

Unfortunately, however, economics is a social science. It is subject to Damon Runyon’s Law that nothing between human beings is more than three to one. To express the point more formally, much of what we observe cannot be treated as the realization of a stationary stochastic process without straining credulity. Moreover, all narrowly economic activity is embedded in a web of social institutions, customs, beliefs, and attitudes. Concrete outcomes are indubitably affected by these background factors, some of which change slowly and gradually, others erratically. As soon as time-series get long enough to offer hope of discriminating among complex hypotheses, the likelihood that they remain stationary dwindles away, and the noise level gets correspondingly high. Under these circumstances, a little cleverness and persistence can get you almost any result you want. I think that is why so few econometricians have ever been forced by the facts to abandon a firmly held belief …
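Solow's warning about nonstationary time-series can be made concrete with a standard spurious-regression exercise (my sketch, not part of the original post): regress one random walk on another, completely independent, random walk and the fit often looks impressive even though there is no relation at all, whereas independent white-noise series show almost no fit.

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 200, 500  # series length, number of Monte Carlo replications

def r_squared(x, y):
    """R^2 from an OLS regression of y on x (with intercept)."""
    X = np.column_stack([np.ones(len(x)), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

# Pairs of *independent* series: nonstationary random walks vs white noise.
r2_walks = [r_squared(np.cumsum(rng.standard_normal(n)),
                      np.cumsum(rng.standard_normal(n))) for _ in range(reps)]
r2_noise = [r_squared(rng.standard_normal(n),
                      rng.standard_normal(n)) for _ in range(reps)]

print(f"mean R^2, independent random walks: {np.mean(r2_walks):.3f}")
print(f"mean R^2, independent white noise:  {np.mean(r2_noise):.3f}")
```

The random-walk pairs typically produce a sizable average R² despite being unrelated by construction, which is one precise sense in which "a little cleverness and persistence can get you almost any result you want" from nonstationary data.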

Robert Solow


31 August, 2017 at 09:40 | Posted in Politics & Society | 1 Comment


Does competition really eliminate bad bosses?

30 August, 2017 at 17:31 | Posted in Economics | 3 Comments

But what of the argument that competition eventually eliminates bad bosses? True, it does sometimes; the relatively egalitarian John Lewis Partnership has done better than department stores such as House of Fraser or Debenhams, for example. But market forces are weak (and perhaps getting weaker).

One reason for this is simply state intervention; without this, almost all shareholder-owned banks with their high-paid CEOs would have vanished.

Another reason is that there isn’t really a properly-functioning market for CEOs. I own tens of thousands of pounds of shares, but I’ve never been asked to vote on a CEO’s pay. Instead, the vote is exercised by fund managers, many of whom are more bothered by the relative performance of their funds than absolute performance. We have widespread agency failures in which mates pay each other. And as Milton Friedman said:

“If I spend somebody else’s money on somebody else, I’m not concerned about how much it is, and I’m not concerned about what I get.”

Unsurprisingly, then, there is little clear link between CEO pay and corporate performance. It might be that high pay is better explained by rent-seeking than as a reward for maximizing shareholder value – though as it’s almost impossible to know in most cases what constitutes maximal value, we might never know for sure.

In fact, my analogy with feudal lords is too generous to CEOs. Whereas (arguendo) a bad lord would pay for his incompetence perhaps with his life, bad CEOs walk away with fat pay cheques.

When so-called free marketeers try to defend bosses’ pay, they do the cause of free markets a huge dis-service by encouraging people to equate free markets with what is in effect a rigged system whereby bosses enrich themselves with no obvious benefit to the rest of us.

Chris Dillow

Postmodern mumbo jumbo

29 August, 2017 at 17:19 | Posted in Theory of Science & Methodology | 3 Comments

The move from a structuralist account in which capital is understood to structure social relations in relatively homologous ways to a view of hegemony in which power relations are subject to repetition, convergence, and rearticulation brought the question of temporality into the thinking of structure, and marked a shift from a form of Althusserian theory that takes structural totalities as theoretical objects to one in which the insights into the contingent possibility of structure inaugurate a renewed conception of hegemony as bound up with the contingent sites and strategies of the rearticulation of power.

Judith Butler

Revealed preference theory — much fuss about ‘not very much’

29 August, 2017 at 09:00 | Posted in Economics | 1 Comment

We must learn WHY the argument for revealed preference, which deceived Samuelson, is wrong. As per standard positivist ideas, preferences are internal to the heart and unobservable; hence they cannot be used in scientific theories. So Samuelson came up with the idea of using the observable Choices – unobservable preferences are revealed by observable choices … Yet the basic argument is wrong; one cannot eliminate the unobservable preference from economic theories. Understanding this error, which Samuelson failed to do, is the first knot to unravel, in order to clear our minds and hearts of the logical positivist illusions.

Asad Zaman

This blog post made me think of an article on revealed preference theory that yours truly wrote twenty-five years ago and got published in History of Political Economy (no. 25, 1993).

Paul Samuelson wrote a kind letter and informed me that he was the one who had recommended it for publication. But although he liked a lot in it, he also wrote a comment — published in the same volume of HOPE — saying:

Between 1938 and 1947, and since then as Pålsson Syll points out, I have been scrupulously careful not to claim for revealed preference theory novelties and advantages it does not merit. But Pålsson Syll’s readers must not believe that it was all redundant fuss about not very much.

Notwithstanding Samuelson’s comment, I do still think it basically was much fuss about ‘not very much.’

In 1938 Paul Samuelson offered a replacement for the then accepted theory of utility. The cardinal utility theory was discarded with the following words: “The discrediting of utility as a psychological concept robbed it of its possible virtue as an explanation of human behaviour in other than a circular sense, revealing its emptiness as even a construction” (1938, 61). According to Samuelson, the ordinalist revision of utility theory was, however, not drastic enough. The introduction of the concept of a marginal rate of substitution was considered “an artificial convention in the explanation of price behaviour” (1938, 62). One ought to analyze the consumer’s behaviour without having recourse to the concept of utility at all, since this did not correspond to directly observable phenomena. The old theory was criticized mainly from a methodological point of view, in that it used non-observable concepts and propositions.

The new theory should avoid this and thereby shed “the last vestiges of utility analysis” (1938, 62). Its main feature was a consistency postulate which said: “if an individual selects batch one over batch two, he does not at the same time select two over one” (1938, 65). From this “perfectly clear” postulate and the assumptions of given demand functions and that all income is spent, Samuelson in (1938) and (1938a), could derive all the main results of ordinal utility theory (single-valuedness and homogeneity of degree zero of demand functions, and negative semi-definiteness of the substitution matrix).

In 1948 Samuelson no longer considered his “revealed preference” approach a new theory. It was then seen as a means of revealing consistent preferences and enhancing the acceptability of the ordinary ordinal utility theory by showing how one could construct an individual’s indifference map by purely observing his market behaviour. Samuelson concluded his article by saying that “[t]he whole theory of consumer’s behavior can thus be based upon operationally meaningful foundations in terms of revealed preference” (1948, 251). As has been shown lately, this is true only if we inter alia assume the consumer to be rational and to have unchanging preferences that are complete, asymmetrical, non-satiated, strictly convex, and transitive (or continuous). The theory, originally intended as a substitute for the utility theory, has, as Houthakker clearly notes, “tended to become complementary to the latter” (1950, 159).

Only a couple of years later, Samuelson held the view that he was in a position “to complete the programme begun a dozen years ago of arriving at the full empirical implications for demand behaviour of the most general ordinal utility analysis” (1950, 369). The introduction of Houthakker’s amendment assured integrability, and by that, the theory had according to Samuelson been “brought to a close” (1950, 355). Starting “from a few logical axioms of demand consistency … [one] could derive the whole of the valid utility analysis as corollaries” (1950, 370). Since Samuelson had shown the “complete logical equivalence” of revealed preference theory with the regular “ordinal preference approach,” it follows that “in principle there is nothing to choose between the formulations” (1953, 1). According to Houthakker (1961, 709), the aim of the revealed preference approach is “to formulate equivalent systems of axioms on preferences and on demand functions.”

But if this is all, what has revealed preference theory then achieved? As it turns out, ordinal utility theory and revealed preference theory are – as Wong puts it – “not two different theories; at best, they are two different ways of expressing the same set of ideas” (2006, 118). And with regard to the theoretically solvable problem, we may still concur with Hicks that “there is in practice no direct test of the preference hypothesis” (1956, 58).

Sippel’s experiments showed “a considerable number of violations of the revealed preference axioms” (1997, 1442) and that from a descriptive point of view – as a theory of consumer behaviour – the revealed preference theory was of a very limited value.

Today it seems as though the proponents of revealed preference theory have given up the original 1938 attempt at building a theory on nothing but observable facts, and settled instead for the 1950 version of establishing “logical equivalences.”

Mas-Colell et al. conclude their presentation of the theory by noting that “for the special case in which choice is defined for all subsets of X [the set of alternatives], a theory based on choice satisfying the weak axiom is completely equivalent to a theory of decision making based on rational preferences” (1995, 14).

When talking of determining people’s preferences through observation, Varian, for example, has “to assume that the preferences will remain unchanged” and adopts “the convention that … the underlying preferences … are known to be strictly convex.” He further postulates that the “consumer is an optimizing consumer.” If we are “willing to add more assumptions about consumer preferences, we get more precise estimates about the shape of indifference curves” (2006, 119-123, author’s italics). Given these assumptions, and that the observed choices satisfy the consistency postulate as amended by Houthakker, one can always construct preferences that “could have generated the observed choices.” This does not, however, prove that the constructed preferences really generated the observed choices, “we can only show that observed behavior is not inconsistent with the statement. We can’t prove that the economic model is correct.”
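The consistency postulate can be put in directly operational terms, in the spirit of Sippel's experiments: given observed prices and chosen bundles, check whether any pair of observations violates the weak axiom. A minimal sketch (the function name and the data are my own hypothetical illustration, not Sippel's):

```python
import numpy as np

def warp_violations(prices, bundles):
    """Return pairs (i, j) that violate the weak axiom of revealed preference.

    Bundle i is directly revealed preferred to bundle j when j was affordable
    at the prices under which i was chosen: p_i . x_i >= p_i . x_j.
    WARP fails if i is revealed preferred to j and vice versa, with i != j.
    """
    p = np.asarray(prices, dtype=float)
    x = np.asarray(bundles, dtype=float)
    expend = p @ x.T        # expend[i, j] = cost of bundle j at prices of obs i
    own = np.diag(expend)   # own[i] = expenditure on the chosen bundle i
    revealed = own[:, None] >= expend  # revealed[i, j]: i revealed pref. to j
    viol = []
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            if revealed[i, j] and revealed[j, i] and not np.allclose(x[i], x[j]):
                viol.append((i, j))
    return viol

# Two observations that contradict each other: each chosen bundle was
# affordable when the other was chosen, so each is revealed preferred.
prices  = [[1.0, 2.0], [2.0, 1.0]]
bundles = [[1.0, 1.0], [3.0, 0.0]]
print(warp_violations(prices, bundles))  # [(0, 1)]
```

Note that, as the Varian passage stresses, a dataset passing this check only shows that some preference ordering *could* have generated the choices, not that any such ordering actually did.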

Kreps holds a similar view, pointing to the fact that revealed preference theory is “consistent with the standard preference-based theory of consumer behavior” (1990, 30).

The theory of consumer behavior has been developed in great part as an attempt to justify the idea of a downward-sloping demand curve. What forerunners like e.g. Cournot (1838) and Cassel (1899) did was merely to assert this law of demand. The utility theorists tried to deduce it from axioms and postulates on individuals’ economic behaviour. Revealed preference theory tried to build a new theory and to put it in operational terms, but ended up with just giving a theory logically equivalent to the old one. As such it also shares its shortcomings of being empirically unfalsifiable and of being based on unrestricted universal statements.

As Kornai (1971, 133) remarked, “the theory is empty, tautological. The theory reduces to the statement that in period t the decision-maker chooses what he prefers … The task is to explain why he chose precisely this alternative rather than another one.” Further, pondering Amartya Sen’s verdict of the revealed preference theory as essentially underestimating “the fact that man is a social animal and his choices are not rigidly bound to his own preferences only” (1982, 66) and Georgescu-Roegen’s (1966, 192-3) apt description, a harsh assessment of what the theory accomplished should come as no surprise:

Lack of precise definition should not … disturb us in moral sciences, but improper concepts constructed by attributing to man faculties which he actually does not possess, should. And utility is such an improper concept … [P]erhaps, because of this impasse … some economists consider the approach offered by the theory of choice as a great progress … This is simply an illusion, because even though the postulates of the theory of choice do not use the terms ‘utility’ or ‘satisfaction’, their discussion and acceptance require that they should be translated into the other vocabulary … A good illustration of the above point is offered by the ingenious theory of the consumer constructed by Samuelson.

Nothing lost, nothing gained.

Cassel, Gustav 1899. ”Grundriss einer elementaren Preislehre.” Zeitschrift für die gesamte Staatswissenschaft 55.3:395-458.

Cournot, Augustin 1838. Recherches sur les principes mathématiques de la théorie des richesses. Paris. Translated by N. T. Bacon 1897 as Researches into the Mathematical Principles of the Theory of Wealth. New York: The Macmillan Company.

Georgescu-Roegen, Nicholas 1966. “Choice, Expectations, and Measurability.” In Analytical Economics: Issues and Problems. Cambridge, Massachusetts: Harvard University Press.

Hicks, John 1956. A Revision of Demand Theory. Oxford: Clarendon Press.

Houthakker, Hendrik 1950. “Revealed Preference and the Utility Function.” Economica 17 (May):159-74.
–1961. “The Present State of Consumption Theory.” Econometrica 29 (October):704-40.

Kornai, Janos 1971. Anti-equilibrium. London: North-Holland.

Kreps, David 1990. A Course in Microeconomic Theory. New York: Harvester Wheatsheaf.

Mas-Colell, Andreu et al. 1995. Microeconomic Theory. New York: Oxford University Press.

Samuelson, Paul 1938. “A Note on the Pure Theory of Consumer’s Behaviour.” Economica 5 (February):61-71.
–1938a. “A Note on the Pure Theory of Consumer’s Behaviour: An Addendum.” Economica 5 (August):353-4.
–1947. Foundations of Economic Analysis. Cambridge, Massachusetts: Harvard University Press.
–1948. “Consumption Theory in Terms of Revealed Preference.” Economica 15 (November):243-53.
–1950. “The Problem of Integrability in Utility Theory.” Economica 17 (November):355-85.
–1953. “Consumption Theorems in Terms of Overcompensation rather than Indifference Comparisons.” Economica 20 (February):1-9.

Sen, Amartya (1982). Choice, Welfare and Measurement. London: Basil Blackwell.

Sippel, Reinhard 1997. “An experiment on the pure theory of consumer’s behaviour.” Economic Journal 107:1431-44.

Varian, Hal 2006. Intermediate Microeconomics: A Modern Approach. (7th ed.) New York: W. W. Norton & Company.

Wong, Stanley 2006. The Foundations of Paul Samuelson’s Revealed Preference Theory. (Revised ed.) London: Routledge & Kegan Paul.

On the limits of game theory

28 August, 2017 at 15:51 | Posted in Economics | 2 Comments


Back in 1991, when yours truly earned his first PhD with a dissertation on decision making and rationality in social choice theory and game theory, I concluded that “repeatedly it seems as though mathematical tractability and elegance — rather than realism and relevance — have been the most applied guidelines for the behavioural assumptions being made. On a political and social level, it is doubtful if the methodological individualism, ahistoricity and formalism they are advocating are especially valid.”

This, of course, was like swearing in church. My mainstream neoclassical colleagues were — to say the least — not exactly überjoyed.

For those of you who are not familiar with game theory, but eager to learn something relevant about it, I have three suggestions:

Start with the best introduction there is


and then go on to read more on the objections that can be raised against game theory and its underlying assumptions on e.g. rationality, “backward induction” and “common knowledge” in


and then finish off with listening to what one of the world’s most renowned game theorists — Ariel Rubinstein — has to say on the — rather limited — applicability of game theory in this interview (emphasis added):

What are the applications of game theory for real life?

That’s a central question: Is game theory useful in a concrete sense or not? Game theory is an area of economics that has enjoyed fantastic public relations. [John] Von Neumann [one of the founders of game theory] was not only a genius in mathematics, he was also a genius in public relations. The choice of the name “theory of games” was brilliant as a marketing device.
The word “game” has friendly, enjoyable associations. It gives a good feeling to people. It reminds us of our childhood, of chess and checkers, of children’s games. The associations are very light, not heavy, even though you may be trying to deal with issues like nuclear deterrence. I think it’s a very tempting idea for people, that they can take something simple and apply it to situations that are very complicated, like the economic crisis or nuclear deterrence. But this is an illusion. Now my views, I have to say, are extreme compared to many of my colleagues. I believe that game theory is very interesting. I’ve spent a lot of my life thinking about it, but I don’t respect the claims that it has direct applications.

The analogy I sometimes give is from logic. Logic is a very interesting field in philosophy, or in mathematics. But I don’t think anybody has the illusion that logic helps people to be better performers in life. A good judge does not need to know logic. It may turn out to be useful – logic was useful in the development of the computer sciences, for example – but it’s not directly practical in the sense of helping you figure out how best to behave tomorrow, say in a debate with friends, or when analysing data that you get as a judge or a citizen or as a scientist.

In game theory, what we’re doing is saying, “Let’s try to abstract our thinking about strategic situations.” Game theorists are very good at abstracting some very complicated situations and putting some elements of the situations into a formal model. In general, my view about formal models is that a model is a fable. Game theory is about a collection of fables. Are fables useful or not? In some sense, you can say that they are useful, because good fables can give you some new insight into the world and allow you to think about a situation differently. But fables are not useful in the sense of giving you advice about what to do tomorrow, or how to reach an agreement between the West and Iran. The same is true about game theory.

In general, I would say there were too many claims made by game theoreticians about its relevance. Every book of game theory starts with “Game theory is very relevant to everything that you can imagine, and probably many things that you can’t imagine.” In my opinion that’s just a marketing device.

Why do it then?

… What I’m opposing is the approach that says, in a practical situation, “OK, there are some very clever game theoreticians in the world, let’s ask them what to do.” I have not seen, in all my life, a single example where a game theorist could give advice, based on the theory, which was more useful than that of the layman.

Looking at the flipside, was there ever a situation in which you were pleasantly surprised at what game theory was able to deliver?

None. Not only none, but my point would be that categorically game theory cannot do it.

Of Mice and Men

27 August, 2017 at 15:44 | Posted in Politics & Society | Comments Off on Of Mice and Men

Throughout European history the idea of the human being has been expressed in contradistinction to the animal. The latter’s lack of reason is the proof of human dignity. So insistently and unanimously has this antithesis been recited … that few other ideas are so fundamental to Western anthropology. The antithesis is acknowledged even today. The behaviorists only appear to have forgotten it. That they apply to human beings the same formulae and results which they wring without restraint from defenseless animals in their abominable physiological laboratories, proclaims the difference in an especially subtle way. The conclusion they draw from the mutilated animal bodies applies, not to animals in freedom, but to human beings today. By mistreating animals they announce that they, and only they in the whole of creation, function voluntarily in the same mechanical, blind, automatic way as the twitching movements of the bound victims made use of by the expert …

In this world liberated from appearance — in which human beings, having forfeited reflection, have become once more the cleverest animals, which subjugate the rest of the universe when they happen not to be tearing themselves apart — to show concern for animals is considered no longer merely sentimental but a betrayal of progress.

Abba Lerner and functional finance

27 August, 2017 at 08:38 | Posted in Economics | 1 Comment

According to Abba Lerner, the purpose of public debt is “to achieve a rate of interest which results in the most desirable level of investment.” He also maintained that an application of Functional Finance will have a tendency to balance the budget in the long run:

Finally, there is no reason for assuming that, as a result of the continued application of Functional Finance to maintain full employment, the government must always be borrowing more money and increasing the national debt. There are a number of reasons for this.

First, full employment can be maintained by printing the money needed for it, and this does not increase the debt at all. It is probably advisable, however, to allow debt and money to increase together in a certain balance, as long as one or the other has to increase.

Second, since one of the greatest deterrents to private investment is the fear that the depression will come before the investment has paid for itself, the guarantee of permanent full employment will make private investment much more attractive, once investors have gotten over their suspicion of the new procedure. The greater private investment will diminish the need for deficit spending.

Third, as the national debt increases, and with it the sum of private wealth, there will be an increasing yield from taxes on higher incomes and inheritances, even if the tax rates are unchanged. These higher tax payments do not represent reductions of spending by the taxpayers. Therefore the government does not have to use these proceeds to maintain the requisite rate of spending, and can devote them to paying the interest on the national debt.

Fourth, as the national debt increases it acts as a self-equilibrating force, gradually diminishing the further need for its growth and finally reaching an equilibrium level where its tendency to grow comes completely to an end. The greater the national debt the greater is the quantity of private wealth. The reason for this is simply that for every dollar of debt owed by the government there is a private creditor who owns the government obligations (possibly through a corporation in which he has shares), and who regards these obligations as part of his private fortune. The greater the private fortunes the less is the incentive to add to them by saving out of current income …

Fifth, if for any reason the government does not wish to see private property grow too much … it can check this by taxing the rich instead of borrowing from them, in its program of financing government spending to maintain full employment. The rich will not reduce their spending significantly, and thus the effects on the economy, apart from the smaller debt, will be the same as if Money had been borrowed from them.

Abba Lerner

Even if today’s mainstream Cambridge economists do not understand Lerner, there once was one who certainly did:

I recently read an interesting article on deficit budgeting … His argument is impeccable.

John Maynard Keynes CW XXVII:320


According to the Ricardian equivalence hypothesis the public sector basically finances its expenditures through taxes or by issuing bonds, and bonds must sooner or later be repaid by raising taxes in the future.

If the public sector finances extra spending through deficits, taxpayers will, according to the hypothesis, anticipate that they will have to pay higher taxes in the future, and will therefore increase their savings and reduce their current consumption to be able to do so. The consequence is that aggregate demand would be no different from what it would be if taxes were raised today.

Robert Barro attempted to give the proposition a firm theoretical foundation in the 1970s.

So let us get the facts straight from the horse’s mouth.

Describing the Ricardian Equivalence in 1989 Barro writes (emphasis added):

Suppose now that households’ demands for goods depend on the expected present value of taxes—that is, each household subtracts its share of this present value from the expected present value of income to determine a net wealth position. Then fiscal policy would affect aggregate consumer demand only if it altered the expected present value of taxes. But the preceding argument was that the present value of taxes would not change as long as the present value of spending did not change. Therefore, the substitution of a budget deficit for current taxes (or any other rearrangement of the timing of taxes) has no impact on the aggregate demand for goods. In this sense, budget deficits and taxation have equivalent effects on the economy — hence the term, “Ricardian equivalence theorem.” To put the equivalence result another way, a decrease in the government’s saving (that is, a current budget deficit) leads to an offsetting increase in desired private saving, and hence to no change in desired national saving.

Since desired national saving does not change, the real interest rate does not have to rise in a closed economy to maintain balance between desired national saving and investment demand. Hence, there is no effect on investment, and no burden of the public debt …

Ricardian equivalence basically means that financing government expenditures through taxes or debts is equivalent since debt financing must be repaid with interest, and agents — equipped with rational expectations — would only increase savings in order to be able to pay the higher taxes in the future, thus leaving total expenditures unchanged.

There is, of course, no reason for us to believe in that fairy-tale. Ricardo himself — mirabile dictu — didn’t believe in Ricardian equivalence. In “Essay on the Funding System” (1820) he wrote:

But the people who paid the taxes never so estimate them, and therefore do not manage their private affairs accordingly. We are too apt to think that the war is burdensome only in proportion to what we are at the moment called to pay for it in taxes, without reflecting on the probable duration of such taxes. It would be difficult to convince a man possessed of £20,000, or any other sum, that a perpetual payment of £50 per annum was equally burdensome with a single tax of £1000.
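Ricardo's arithmetic is worth spelling out: at a discount rate of 5 per cent (my assumption for illustration; Ricardo does not state a rate), the perpetual payment of £50 per annum has exactly the same present value as the single tax of £1000, which is why, on the equivalence hypothesis, taxpayers *should* be indifferent, and why their manifest non-indifference is telling.

```python
# Ricardo's comparison: a single tax of £1000 versus £50 per annum forever.
# Present value of a perpetuity: PV = payment / r.
payment = 50.0
r = 0.05          # assumed discount rate, chosen so the two schemes match
pv_perpetuity = payment / r
print(pv_perpetuity)  # 1000.0, identical in present value to the lump sum
```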

The balanced budget mythology

26 August, 2017 at 18:58 | Posted in Economics | 5 Comments

I think there is an element of truth in the view that the superstition that the budget must be balanced at all times [is necessary]. Once it is debunked, [it] takes away one of the bulwarks that every society must have against expenditure out of control. There must be discipline in the allocation of resources or you will have anarchistic chaos and inefficiency. And one of the functions of old fashioned religion was to scare people by sometimes what might be regarded as myths into behaving in a way that the long-run civilized life requires. We have taken away a belief in the intrinsic necessity of balancing the budget if not in every year, [and then] in every short period of time. If Prime Minister Gladstone came back to life he would say “oh, oh what you have done” and James Buchanan argues in those terms. I have to say that I see merit in that view.

Paul Samuelson

Samuelson’s statement brings to mind the following passage in Keynes’ General Theory:

The ideas of economists and political philosophers, both when they are right and when they are wrong, are more powerful than is commonly understood. Indeed the world is ruled by little else. Practical men, who believe themselves to be exempt from any intellectual influences, are usually the slaves of some defunct economist. Madmen in authority, who hear voices in the air, are distilling their frenzy from some academic scribbler of a few years back.

Wonder why …

Adorno hat Trump vorhergesehen

26 August, 2017 at 12:29 | Posted in Politics & Society | Comments Off on Adorno hat Trump vorhergesehen


If your German isn’t too rusty, here’s more to listen to and read on the Adorno-Trump theme!

Bad equality

25 August, 2017 at 15:40 | Posted in Politics & Society | 2 Comments

An emancipated society would be no unitary state, but the realization of the generality in the reconciliation of differences. A politics which took this seriously should therefore not propagate even the idea of the abstract equality of human beings. They should rather point to the bad equality of today, the identity of film interests with weapons interests, and think of the better condition as the one in which one could be different without fear. If one attested to blacks, that they are exactly like whites, while they are nevertheless not so, then one would secretly wrong them all over again. This humiliates them in a benevolent manner by a standard which, under the pressure of the system, they cannot attain, and moreover whose attainment would be a dubious achievement. The spokespersons of unitary tolerance are always prepared to turn intolerantly against any group which does not fit in: the obstinate enthusiasm for blacks meshes seamlessly with the outrage over obnoxious Jews. The “melting pot” was an institution of free-wheeling industrial capitalism. The thought of landing in it conjures up martyrdom, not democracy.

Georgy V. Sviridov — ‘Holy God’

24 August, 2017 at 23:06 | Posted in Varia | Comments Off on Georgy V. Sviridov — ‘Holy God’


‘Autonomy’ in econometrics

24 August, 2017 at 22:50 | Posted in Statistics & Econometrics | 2 Comments

The point of the discussion, of course, has to do with where Koopmans thinks we should look for “autonomous behaviour relations”. He appeals to experience but in a somewhat oblique manner. He refers to the Harvard barometer “to show that relationships between economic variables … not traced to underlying behaviour equations are unreliable as instruments for prediction” … His argument would have been more effectively put had he been able to give instances of relationships that have been “traced to underlying behaviour equations” and that have been reliable instruments for prediction. He did not do this, and I know of no conclusive case that he could draw upon. There are of course cases of economic models that he could have mentioned as having been unreliable predictors. But these latter instances demonstrate no more than the failure of the Harvard barometer: all were presumably built upon relations that were more or less unstable in time. The meaning conveyed, we may suppose, by the term “fundamental autonomous relation” is a relation stable in time and not drawn as an inference from combinations of other relations. The discovery of such relations suitable for the prediction procedure that Koopmans has in mind has yet to be publicly presented, and the phrase “underlying behaviour equation” is left utterly devoid of content.

Rutledge Vining

Guess Robert Lucas didn’t read Vining …

Markets as beauty contests

24 August, 2017 at 08:25 | Posted in Economics | 1 Comment


Dialectics of imagination

23 August, 2017 at 07:37 | Posted in Varia | 2 Comments


Solving the St. Petersburg Paradox

22 August, 2017 at 18:57 | Posted in Economics | 4 Comments

Solving the St Petersburg paradox in the way Peters suggests involves arguments about ergodicity and the all-important difference between time averages and ensemble averages. These are difficult concepts that many students of economics have trouble understanding. So let me try to explain the meaning of these concepts by means of a couple of simple examples.

Let’s say you’re offered a gamble where, on a roll of a fair die, you will get €10 billion if you roll a six, and have to pay €1 billion if you roll any other number.

Would you accept the gamble?

If you’re an economics student you probably would because that’s what you’re taught to be the only thing consistent with being rational. You would arrest the arrow of time by imagining six different ‘parallel universes’ where the independent outcomes are the numbers from one to six, and then weight them using their stochastic probability distribution. Calculating the expected value of the gamble — the ensemble average — by averaging on all these weighted outcomes you would actually be a moron if you didn’t take the gamble (the expected value of the gamble being 1/6 × €10 billion − 5/6 × €1 billion ≈ €0.83 billion).

If you’re not an economist you would probably trust your common sense and decline the offer, knowing that a large risk of bankrupting yourself is not a very rosy perspective for the future. Since you can’t really arrest or reverse the arrow of time, you know that once you have lost the €1 billion, it’s all over. The large likelihood that you go bust weighs heavier than the 17% chance of becoming enormously rich. By computing the time average — imagining one real universe where the six different but dependent outcomes occur consecutively — we would soon be aware of our assets disappearing, and a fortiori that it would be irrational to accept the gamble.
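The contrast between the two perspectives on this gamble can be sketched in a short Monte Carlo simulation. This is my own illustration, not part of the original argument; it assumes (amounts in € billions) that a player starts with exactly €1 billion and is ruined for good once their wealth hits zero:

```python
import random

def expected_value():
    """Ensemble average of one round: +€10bn with prob 1/6, -€1bn with prob 5/6."""
    return (1 / 6) * 10 + (5 / 6) * (-1)   # ≈ +0.83 (€ billions)

def fraction_ruined(players=2000, rounds=200, seed=1):
    """Fraction of players who go bust despite the positive expected value."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(players):
        wealth = 1.0                        # start with €1 billion
        for _ in range(rounds):
            # roll a fair die: a six pays €10bn, anything else costs €1bn
            wealth += 10 if rng.randrange(6) == 0 else -1
            if wealth <= 0:                 # cannot cover a loss: ruined
                ruined += 1
                break
    return ruined / players
```

Despite an expected gain of roughly €0.83 billion per round, well over 80% of simulated players are ruined, most of them on the very first roll. The time perspective, not the ensemble average, is what a single individual experiences.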

From a mathematical point of view, you can (somewhat non-rigorously) describe the difference between ensemble averages and time averages as a difference between arithmetic averages and geometric averages. Tossing a fair coin and gaining 20% on the stake (S) if winning (heads) and having to pay 20% on the stake (S) if losing (tails), the arithmetic average of the return on the stake, assuming the outcomes of the coin-toss are independent, would be [(0.5 × 1.2S + 0.5 × 0.8S) − S]/S = 0%. If the two outcomes of the toss are considered not to be independent, the relevant time average would be a geometric average return of √(1.2 × 0.8) − 1 ≈ −2%.
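The ±20% coin-toss numbers are easy to check directly. A minimal sketch (the long-run simulation at the end is my own addition, showing that a single gambler's wealth really does compound at the geometric, not the arithmetic, rate):

```python
import math
import random

up, down = 1.2, 0.8   # gain 20% on heads, lose 20% on tails

# Ensemble (arithmetic) average return per toss:
arithmetic = 0.5 * up + 0.5 * down - 1    # exactly 0%

# Time (geometric) average return per toss:
geometric = math.sqrt(up * down) - 1      # ≈ -2.02%

def simulated_growth(tosses=100_000, seed=42):
    """Per-toss growth rate of one gambler's wealth over a long run of tosses."""
    rng = random.Random(seed)
    log_wealth = sum(math.log(up if rng.random() < 0.5 else down)
                     for _ in range(tosses))
    return math.exp(log_wealth / tosses) - 1
```

Over a long sequence of tosses the simulated per-toss growth rate converges to roughly −2%, even though the ensemble average says the gamble is exactly fair.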

Why is the difference between ensemble and time averages of such importance in economics? Well, basically because ensemble and time averages are identical only if the processes are ergodic.

Assume we have a market with an asset priced at €100. Then imagine the price first goes up by 50% and then later falls by 50%. The ensemble average for this asset would be €100 – because we here envision two parallel universes (markets) where the asset price falls in one universe (market) by 50% to €50, and in another universe (market) goes up by 50% to €150, giving an average of €100 ((150 + 50)/2). The time average for this asset would be €75 – because we here envision one universe (market) where the asset price first rises by 50% to €150 and then falls by 50% to €75 (0.5 × 150).
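The two averages for this asset can be checked in a couple of lines (a trivial sketch of the example above):

```python
price = 100.0
up, down = 1.5, 0.5   # +50% in one scenario, -50% in the other

# Ensemble average: two parallel markets, one up-move and one down-move.
ensemble_average = (price * up + price * down) / 2    # €100

# Time average: one market where both moves happen in sequence.
time_path = price * up * down                         # €75
```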

From the ensemble perspective nothing really, on average, happens. From the time perspective lots of things really, on average, happen. Assuming ergodicity there would have been no difference at all.

On a more economic-theoretical level, the difference between ensemble and time averages also highlights the problems concerning the neoclassical theory of expected utility that I have raised before (e. g. here).

When applied to the neoclassical theory of expected utility, one thinks in terms of ‘parallel universes’ and asks what the expected return of an investment is, calculated as an average over these ‘parallel universes’. In our coin-tossing example, it is as if one supposes that various ‘I’s are tossing a coin and that the losses of many of them will be offset by the huge profits one of these ‘I’s makes. But this ensemble average does not work for an individual, for whom a time average better reflects the experience made in the ‘non-parallel universe’ in which we live.

Time averages give a more realistic answer: one thinks in terms of the only universe we actually live in and asks what the expected return of an investment is, calculated as an average over time.

Since we cannot go back in time – entropy and the arrow of time make this impossible – and the bankruptcy option is always at hand (extreme events and ‘black swans’ are always possible) we have nothing to gain from thinking in terms of ensembles.

Actual events unfold in a fixed pattern of time, often linked through a multiplicative process (as e. g. investment returns with ‘compound interest’), which is basically non-ergodic.


Instead of arbitrarily assuming that people have a certain type of utility function – as in the neoclassical theory – time-average considerations show that we can obtain a less arbitrary and more accurate picture of real people’s decisions and actions by basically assuming that time is irreversible. When your assets are gone, they are gone. The fact that in a parallel universe they could conceivably have been replenished is of little comfort to those who live in the one and only possible world that we call the real world.

So — solving the St Petersburg paradox may at first seem to be a highly esoteric kind of thing. As Peters shows — it’s not!

The right kind of realism

22 August, 2017 at 16:08 | Posted in Theory of Science & Methodology | 2 Comments

Some commentators on this blog seem to be of the opinion that since yours truly is critical of mainstream economics and asks for more relevance and realism, I’m bound to be a ‘naive’ realist or empiricist.

Nothing could be further from the truth!

In a time when scientific relativism is expanding, it is important to keep up the claim for not reducing science to a pure discursive level. We have to maintain the Enlightenment tradition of thinking of reality as principally independent of our views of it and of the main task of science as studying the structure of this reality. Perhaps the most important contribution a researcher can make is revealing what this reality that is the object of science actually looks like.

Science is made possible by the fact that there are structures that are durable and are independent of our knowledge or beliefs about them. There exists a reality beyond our theories and concepts of it. It is this independent reality that our theories in some way deal with. Contrary to positivism, I cannot see that the main task of science is to detect event-regularities between observed facts. Rather, the task must be conceived as identifying the underlying structure and forces that produce the observed events.

The problem with positivist social science is not that it gives the wrong answers, but rather that in a strict sense it does not give answers at all. Its explanatory models presuppose that the social reality is ‘closed,’ and since social reality is fundamentally ‘open,’ models of that kind cannot explain anything about what happens in such a universe. Positivist social science has to postulate closed conditions to make its models operational and then – totally unrealistically – impute these closed conditions to society’s real structure.

In the face of the kind of methodological individualism and rational choice theory that dominate positivist social science, we have to admit that even if knowledge of the aspirations and intentions of individuals is a necessary prerequisite for explaining social events, it is far from sufficient. Even the most elementary ‘rational’ actions in society presuppose the existence of social forms that cannot be reduced to the intentions of individuals.

The overarching flaw with methodological individualism and rational choice theory is basically that they reduce social explanations to purportedly individual characteristics. But many of the characteristics and actions of the individual originate in and are made possible only through society and its relations. Society is not reducible to individuals, since the social characteristics, forces, and actions of the individual are determined by pre-existing social structures and positions. Even though society is not a volitional individual, and the individual is not an entity given outside of society, the individual (actor) and the society (structure) have to be kept analytically distinct. They are tied together through the individual’s reproduction and transformation of already given social structures.

What makes knowledge in social sciences possible is the fact that society consists of social structures and positions that influence the individuals of society, partly through their being the necessary prerequisite for the actions of individuals but also because they dispose individuals to act (within a given structure) in a certain way. These structures constitute the ‘deep structure’ of society.

Our observations and theories are concept-dependent without therefore necessarily being concept-determined. There is a reality existing independently of our knowledge and theories of it. Although we cannot apprehend it without using our concepts and theories, these are not the same as reality itself. Reality and our concepts of it are not identical. Social science is made possible by existing structures and relations in society that are continually reproduced and transformed by different actors.

Explanations and predictions of social phenomena require theory constructions. Just looking for correlations between events is not enough. One has to get under the surface and see the deeper underlying structures and mechanisms that essentially constitute the social system.

The basic question one has to pose when studying social relations and events is: what are the fundamental relations without which they would cease to exist? The answer will point to causal mechanisms and tendencies that act in the concrete contexts we study. Whether these mechanisms are activated, and what effects they then have, is not possible to predict, since that depends on accidental and variable relations. Every social phenomenon is determined by a host of both necessary and contingent relations, and it is impossible in practice to have complete knowledge of these constantly changing relations. That is also why we can never confidently predict them. What we can do, through learning about the mechanisms of the structures of society, is to identify the driving forces behind them, thereby making it possible to indicate the direction in which things tend to develop.

The world itself should never be conflated with the knowledge we have of it. Science can only produce meaningful, relevant and realist knowledge if it acknowledges its dependence on the world out there. Ultimately that also means that the critique yours truly levels against mainstream economics is that it doesn’t take that ontological requirement seriously.

Going for the right kind of certainty in economics

22 August, 2017 at 16:04 | Posted in Theory of Science & Methodology | 1 Comment

In science we standardly use a logically non-valid inference — the fallacy of affirming the consequent — of the following form:

(1) p => q
(2) q
∴ p

or, in instantiated form

(1) ∀x (Gx => Px)
(2) Pa
∴ Ga

Although logically invalid, it is nonetheless a kind of inference — abduction — that may be factually strongly warranted and truth-producing.
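That the scheme really is logically invalid can be verified mechanically. A small truth-table check of my own, not part of the original argument:

```python
from itertools import product

# Enumerate all truth assignments to (p, q). Affirming the consequent is
# invalid because the premises "p => q" and "q" can both hold while the
# conclusion "p" is false.
counterexamples = [
    (p, q)
    for p, q in product([True, False], repeat=2)
    if ((not p) or q)      # premise 1: p => q
    and q                  # premise 2: q
    and not p              # ...and yet the conclusion p fails
]
# the single counterexample is p = False, q = True
```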

Following the general pattern ‘Evidence => Explanation => Inference’ we infer something based on what would be the best explanation given the law-like rule (premise 1) and an observation (premise 2). The truth of the conclusion (explanation) is not logically given, but something we have to justify, argue for, and test in different ways in order to establish it with any degree of certainty. And as always when we deal with explanations, what is considered best is relative to what we know of the world. In the real world all evidence has an irreducible holistic aspect. We never conclude that evidence follows from a hypothesis simpliciter, but always given some more or less explicitly stated contextual background assumptions. All non-deductive inferences and explanations are necessarily context-dependent.

If we extend the abductive scheme to incorporate the demand that the explanation has to be the best among a set of plausible competing/rival/contrasting potential and satisfactory explanations, we have what is nowadays usually referred to as inference to the best explanation.

In inference to the best explanation we start with a body of (purported) data/facts/evidence and search for explanations that can account for these data/facts/evidence. Having the best explanation means that you, given the context-dependent background assumptions, have a satisfactory explanation that can explain the fact/evidence better than any other competing explanation — and so it is reasonable to consider/believe the hypothesis to be true. Even if we (inevitably) do not have deductive certainty, our reasoning gives us a license to consider our belief in the hypothesis as reasonable.

Accepting a hypothesis means that you believe it does explain the available evidence better than any other competing hypothesis. Knowing that we — after having earnestly considered and analysed the other available potential explanations — have been able to eliminate the competing potential explanations, warrants and enhances the confidence we have that our preferred explanation is the best explanation, i. e., the explanation that provides us (given it is true) with the greatest understanding.

This, of course, does not in any way mean that we cannot be wrong. Of course we can. Inferences to the best explanation are fallible inferences — since the premises do not logically entail the conclusion — so from a logical point of view, inference to the best explanation is a weak mode of inference. But if the arguments put forward are strong enough, they can be warranted and give us justified true belief, and hence, knowledge, even though they are fallible inferences. As scientists we sometimes — much like Sherlock Holmes and other detectives that use inference to the best explanation reasoning — experience disillusion. We thought that we had reached a strong conclusion by ruling out the alternatives in the set of contrasting explanations. But — what we thought was true turned out to be false.

That does not necessarily mean that we had no good reasons for believing what we believed. If we cannot live with that contingency and uncertainty, well, then we are in the wrong business. If it is deductive certainty you are after, rather than the ampliative and defeasible reasoning of inference to the best explanation — well, then go into math or logic, not science.

Trading in Myths

21 August, 2017 at 18:57 | Posted in Economics | 2 Comments

Pretending that the distribution of income and wealth that results from a long set of policy decisions is somehow the natural workings of the market is not a serious position. It might be politically convenient for conservatives who want to lock inequality in place. It is a more politically compelling position to argue that we should not interfere with market outcomes than to argue for a system that is deliberately structured to make some people very rich while leaving others in poverty.

Pretending that distributional outcomes are just the workings of the market is convenient for any beneficiaries of this inequality, even those who consider themselves liberal …

But we should not structure our understanding of the economy around political convenience. There is no way of escaping the fact that levels of output and employment are determined by policy, that the length and strength of patent and copyright monopolies are determined by policy, and that the rules of corporate governance are determined by policy. The people who would treat these and other policy decisions determining the distribution of income as somehow given are not being honest. We can debate the merits of a policy, but there is no policy-free option out there.

This may be discomforting to people who want to believe that we have a set of market outcomes that we can fall back upon, but this is the real world. If we want to be serious, we have to get used to it.

No one ought to doubt that the idea that capitalism is an expression of impartial market forces of supply and demand bears but little resemblance to actual reality. Wealth and income distribution, both individual and functional, in a market society is to an overwhelmingly high degree influenced by institutionalized political and economic norms and power relations, things that have relatively little to do with marginal productivity in complete and profit-maximizing competitive market models.

On the limits of Adam Smith’s invisible hand

21 August, 2017 at 14:00 | Posted in Economics | 2 Comments

It might look trivial at first sight, but what Harold Hotelling showed in his classic paper Stability in Competition (1929) was that there are cases when Adam Smith’s invisible hand doesn’t actually produce a social optimum.

With the advent of neoclassical economics at the end of the 19th century, a large amount of intellectual energy was invested in trying to formalize the stringent conditions for obtaining equilibrium and in showing in what way the prices and quantities of free competition constituted some kind of social optimum.

That the equilibrium reached in free competition is an optimum for each individual – given prevailing prices and income distribution – was not, however, seen by some economists as making a very strong case for a free market economy per se. It wasn’t possible to prove that free trade and competition gave a maximum of social utility. The gains made in exchange weren’t a manifestation of a maximum social utility.

Knut Wicksell was one of those who criticized the idea of regarding the gain in utility arising from free competition as an absolute maximum. This market fundamentalist idea of harmony in a free market system didn’t live up to Wicksell’s demand for objectivity in science – and “the harmony economists, who endeavoured to extend the doctrine so that it might become a defence of the existing distribution of wealth” were judged severely by Wicksell (Lectures 1934 (1901) p. 39).

When propounders of the new marginalist theory – especially Walras and Pareto – overstepped the strict boundaries of science and used it in ascribing to the market properties it did not possess, Wicksell had to react. To Wicksell (Lectures 1934 (1901) p. 73) it was

almost tragic that Walras … imagined that he had found the rigorous proof … merely because he clothed in mathematical formula the very arguments which he considered insufficient when they were expressed in ordinary language.

But what about the Pareto criterion? Wicksell had actually more or less anticipated it in his review (in Zeitschrift für Volkswirtschaft, Sozialpolitik und Verwaltung, 1913:132-51) of Pareto’s Manuel, but didn’t think it really contributed anything useful. It was just the same old doctrine in a new disguise. To Wicksell, the market fundamentalist doctrine of the Lausanne School obviously didn’t constitute an advance in economics.

Adam Smith’s visible hand

21 August, 2017 at 13:00 | Posted in Economics | 2 Comments

How selfish soever man may be supposed, there are evidently some principles in his nature, which interest him in the fortune of others, and render their happiness necessary to him, though he derives nothing from it except the pleasure of seeing it. Of this kind is pity or compassion, the emotion which we feel for the misery of others, when we either see it, or are made to conceive it in a very lively manner. That we often derive sorrow from the sorrow of others, is a matter of fact too obvious to require any instances to prove it; for this sentiment, like all the other original passions of human nature, is by no means confined to the virtuous and humane, though they perhaps may feel it with the most exquisite sensibility. The greatest ruffian, the most hardened violator of the laws of society, is not altogether without it.

Wonder why I’ve never found this passage quoted in all those best-selling mainstream economics textbooks …

James Heckman — ‘Nobel prize’ winner gone wrong

20 August, 2017 at 11:32 | Posted in Statistics & Econometrics | 1 Comment

Here’s James Heckman in 2013:

Also holding back progress are those who claim that Perry and ABC are experiments with samples too small to accurately predict widespread impact and return on investment. This is a nonsensical argument. Their relatively small sample sizes actually speak for — not against — the strength of their findings. Dramatic differences between treatment and control-group outcomes are usually not found in small sample experiments, yet the differences in Perry and ABC are big and consistent in rigorous analyses of these data.

Wow. The “What does not kill my statistical significance makes it stronger” fallacy, right there in black and white … Heckman’s pretty much saying that if his results are statistically significant (and “consistent in rigorous analyses,” whatever that means), then they should be believed — and even more so if sample sizes are small (and of course the same argument holds in favor of stronger belief if measurement error is large).

With the extra special bonus that he’s labeling contrary arguments as “nonsensical” …

Heckman is wrong here. Actually, the smaller sample sizes (and also the high variation in these studies) speaks against—not for—the strength of the published claims …

Andrew Gelman

One of the first things yours truly warns his statistics students against is jumping to the conclusion that signal-to-noise levels have to be high just because they get statistically significant estimates when running regressions. One would have thought a prize winner would know that too …
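Gelman's point — that the significance filter exaggerates effects more, not less, in small noisy studies — can be illustrated with a quick simulation. This is my own sketch with made-up numbers: a true effect of 0.2 and unit-variance observations, so a study of sample size n produces an estimate distributed N(0.2, 1/√n):

```python
import random

def mean_significant_estimate(n, true_effect=0.2, sims=20_000, seed=7):
    """Average estimate among simulated studies reaching |z| > 1.96.

    Each study estimates the mean of n draws from N(true_effect, 1),
    so the estimate itself is distributed N(true_effect, n**-0.5).
    """
    rng = random.Random(seed)
    se = n ** -0.5
    significant = [est for est in
                   (rng.gauss(true_effect, se) for _ in range(sims))
                   if abs(est) > 1.96 * se]       # the significance filter
    return sum(significant) / len(significant)
```

With these numbers, significant estimates from n = 10 studies average roughly 0.7 — a three- to four-fold exaggeration of the true effect of 0.2 — while n = 200 studies, which have high power, exaggerate only slightly. Small samples plus a significance filter make published effects bigger, not more credible.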

Rocker ohne amputiertes Gehirn (personal)

18 August, 2017 at 19:27 | Posted in Varia | 4 Comments


Wenn ich 64 bin (personal)

18 August, 2017 at 19:11 | Posted in Economics | 1 Comment


Dutch books and money pumps

18 August, 2017 at 17:32 | Posted in Economics | 2 Comments

Mainstream neoclassical economics nowadays usually assumes that agents that have to make choices under conditions of uncertainty behave according to Bayesian rules (preferably the ones axiomatized by Ramsey (1931), de Finetti (1937) or Savage (1954)) – that is, they maximize expected utility with respect to some subjective probability measure that is continually updated according to Bayes’ theorem. If not, they are supposed to be irrational, and ultimately – via some ‘Dutch book’ or ‘money pump’ argument – susceptible to being ruined by some clever ‘bookie.’
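The ‘Dutch book’ argument itself is simple to demonstrate numerically. A toy sketch with made-up numbers, not tied to any particular axiomatization: an agent whose degrees of belief in an event and its complement sum to more than one will pay for a pair of bets that lose money in every state of the world.

```python
# An agent prices a bet paying 1 unit if an event occurs at their degree of
# belief in that event. Incoherent beliefs: P(E) + P(not E) = 1.2 > 1.
belief_e, belief_not_e = 0.6, 0.6

# The bookie sells the agent both bets at those prices.
agent_outlay = belief_e + belief_not_e          # 1.2

# In every state of the world exactly one of the two bets pays out 1 unit:
agent_net_if_e = 1.0 - agent_outlay             # -0.2
agent_net_if_not_e = 1.0 - agent_outlay         # -0.2

# The agent loses 0.2 no matter what happens: a guaranteed 'Dutch book' loss.
```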

Bayesianism reduces questions of rationality to questions of internal consistency (coherence) of beliefs, but – even granted this questionable reductionism – do rational agents really have to be Bayesian? As I have been arguing elsewhere (e. g. here and here) there is no strong warrant for believing so, but in this post, I want to make a point on the informational requirement that the economic ilk of Bayesianism presupposes.

In many of the situations that are relevant to economics one could argue that there is simply not enough adequate and relevant information to ground beliefs of a probabilistic kind, and that in those situations it is not really possible, in any relevant way, to represent an individual’s beliefs in a single probability measure.

Say you have come to learn (based on your own experience and tons of data) that the probability of you becoming unemployed in Sweden is 10%. Having moved to another country (where you have no experience and no data of your own) you have no information on unemployment and a fortiori nothing on which to ground any probability estimate. A Bayesian would, however, argue that you would have to assign probabilities to the mutually exclusive alternative outcomes and that these have to add up to 1 if you are rational. That is, in this case – and based on symmetry – a rational individual would have to assign probability 50% to becoming unemployed and 50% to becoming employed.

That feels intuitively wrong though, and I guess most people would agree. Bayesianism cannot distinguish between symmetry-based probabilities grounded in information and symmetry-based probabilities grounded in an absence of information. In these kinds of situations, most of us would rather say that it is simply irrational to be a Bayesian and better instead to admit that we ‘simply do not know’ or that we feel ambiguous and undecided. Arbitrary and ungrounded probability claims are more irrational than being undecided in the face of genuine uncertainty, so if there is not sufficient information to ground a probability distribution it is better to acknowledge that simpliciter, rather than pretending to possess a certitude that we simply do not possess.

I think this critique of Bayesianism is in accordance with the views of John Maynard Keynes’s A Treatise on Probability (1921) and General Theory (1936). According to Keynes we live in a world permeated by unmeasurable uncertainty – not quantifiable stochastic risk – which often forces us to make decisions based on anything but rational expectations. Sometimes we ‘simply do not know.’ Keynes would not have accepted the view of Bayesian economists, according to whom expectations “tend to be distributed, for the same information set, about the prediction of the theory.” Keynes, rather, thinks that we base our expectations on the confidence or ‘weight’ we put on different events and alternatives. To Keynes, expectations are a question of weighing probabilities by ‘degrees of belief,’ beliefs that have precious little to do with the kind of stochastic probabilistic calculations made by the rational agents modelled by Bayesian economists.

Modern macroeconomics — totally messed up

17 August, 2017 at 16:39 | Posted in Economics | 3 Comments

Until a few years ago, economists of all persuasions confidently proclaimed that the Great Depression would never recur. In a way, they were right. After the financial crisis of 2008 erupted, we got the Great Recession instead. Governments managed to limit the damage by pumping huge amounts of money into the global economy and slashing interest rates to near zero. But, having cut off the downward slide of 2008-2009, they ran out of intellectual and political ammunition.


Economic advisers assured their bosses that recovery would be rapid. And there was some revival; but then it stalled in 2010. Meanwhile, governments were running large deficits – a legacy of the economic downturn – which renewed growth was supposed to shrink. In the eurozone, countries like Greece faced sovereign-debt crises as bank bailouts turned private debt into public debt.

Attention switched to the problem of fiscal deficits and the relationship between deficits and economic growth. Should governments deliberately expand their deficits to offset the fall in household and investment demand? Or should they try to cut public spending in order to free up money for private spending?

Depending on which macroeconomic theory one held, both could be presented as pro-growth policies. The first might cause the economy to expand, because the government was increasing public spending; the second, because they were cutting it. Keynesian theory suggests the first; governments unanimously put their faith in the second.

The consequences of this choice are clear. It is now pretty much agreed that fiscal tightening has cost developed economies 5-10 percentage points of GDP growth since 2010. All of that output and income has been permanently lost. Moreover, because fiscal austerity stifled economic growth, it made the task of reducing budget deficits and national debt as a share of GDP much more difficult. Cutting public spending, it turned out, was not the same as cutting the deficit, because it cut the economy at the same time.

Robert Skidelsky

Indeed, there are many kinds of useless economics held in high regard within the mainstream economics establishment today. Few deserve it less than the post-real macroeconomic theory — mostly connected with Finn Kydland, Robert Lucas, Edward Prescott and Thomas Sargent — called Real Business Cycle theory (RBC).

Chicago economics cultivates the view that scientific theories have nothing to do with truth. Constructing theories and building models is not even considered an activity aimed at approximating truth. For Chicago economists, it is only an endeavour to organize their thoughts in a ‘useful’ manner.

What a handy view of science!

What these defenders of scientific storytelling ‘forget’ is that potential explanatory power achieved in thought experimental models is not enough for attaining real explanations. Model explanations are at best conjectures, and whether they do or do not explain things in the real world is something we have to test. To just believe that you understand or explain things better with thought experiments is not enough.

Without a warranted export certificate to the real world, model explanations are pretty worthless. Proving things in models is not enough — not even after having put ‘New Keynesian’ sticky-price DSGE lipstick on the RBC pig.

Truth is an important concept in real science — and models based on meaningless calibrated ‘facts’ and ‘assumptions’ with unknown truth value are poor substitutes.

Economists — people being paid for telling stories justifying inequality

15 August, 2017 at 10:26 | Posted in Economics | 1 Comment

If economics was an honest profession, economists would focus their efforts on documenting the waste associated with protectionist barriers for professionals. They devoted endless research studies to estimating the cost to consumers of tariffs on products like shoes and tires. It speaks to the incredible corruption of the economics profession that there are not hundreds of studies showing the loss to consumers from the barriers to trade in physicians’ services. If trade could bring down the wages of physicians in the United States just to European levels, it would save consumers close to $100 billion a year.

But economists are not rewarded for studying the economy. That is why almost everyone in the profession missed the $8 trillion housing bubble, the collapse of which stands to cost the country more than $7 trillion in lost output according to the Congressional Budget Office (that comes to around $60,000 per household).

Few if any economists lost their 6-figure paychecks for this disastrous mistake. But most economists are not paid for knowing about the economy. They are paid for telling stories that justify giving more money to rich people. Hence we can look forward to many more people telling us that all the money going to the rich was just the natural workings of the economy. When it comes to all the government rules and regulations that shifted income upward, they just don’t know what you’re talking about.

Dean Baker

In case you’re in doubt, have a look at, e.g., what Harvard economist and George W. Bush advisor Greg Mankiw writes on the rising inequality we have seen over the last 30 years in the US and elsewhere in Western societies:

Even if the income gains are in the top 1 percent, why does that imply that the right story is not about education?

I then realized that Paul [Krugman] is making an implicit assumption–that the return to education is deterministic. If indeed a year of schooling guaranteed you precisely a 10 percent increase in earnings, then there is no way increasing education by a few years could move you from the middle class to the top 1 percent.

But it may be better to think of the return to education as stochastic. Education not only increases the average income a person will earn, but it also changes the entire distribution of possible life outcomes. It does not guarantee that a person will end up in the top 1 percent, but it increases the likelihood. I have not seen any data on this, but I am willing to bet that the top 1 percent are more educated than the average American; while their education did not ensure their economic success, it played a role.
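Mankiw’s distinction between deterministic and stochastic returns can be made concrete with a toy simulation. This is purely an illustration of the logic of his claim, not his model: all the parameters (the 10 percent return per year of schooling, the size of the luck shock) are invented for the sketch.

```python
import random
import statistics

random.seed(42)

# Toy model: each year of schooling raises *expected* log income,
# but a large idiosyncratic shock makes the realized return stochastic.
N = 100_000
people = []
for _ in range(N):
    education = random.choice([10, 12, 14, 16, 18, 21])   # years of schooling
    log_income = 0.10 * education + random.gauss(0, 0.8)  # 10% return + luck
    people.append((education, log_income))

# Who lands in the top 1% of incomes?
people.sort(key=lambda p: p[1], reverse=True)
top = people[: N // 100]

avg_edu_all = statistics.mean(e for e, _ in people)
avg_edu_top = statistics.mean(e for e, _ in top)
print(f"average schooling, everyone: {avg_edu_all:.1f} years")
print(f"average schooling, top 1%:   {avg_edu_top:.1f} years")
```

In such a world the top 1 percent is indeed more educated than average, even though no amount of schooling guarantees entry into it — which is all Mankiw’s argument establishes, and no more.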

To me, this is nothing but one big evasive storytelling attempt to explain away a very disturbing structural shift that has taken place in our societies. And it is a change that has very little to do with stochastic returns to education. Those were in place 30 or 40 years ago too. Back then they meant that a CEO earned 10-12 times what “ordinary” people earn. Today they mean that a CEO earns 100-200 times what “ordinary” people earn.

A question of education? No way! It is a question of income and wealth being increasingly concentrated in the hands of a small privileged elite, of greed, and of a lost sense of a common project of building a society for everyone and not only for the chosen few.

But once, we were here (personal)

13 August, 2017 at 18:31 | Posted in Varia | 2 Comments


In loving memory of my brother, Peter ‘Uncas’ Pålsson.

Keynes vs. Wicksell — the loanable funds theory

13 August, 2017 at 17:06 | Posted in Economics | 7 Comments

The fundamental difference between Keynes and Wicksell and in general the supporters of the LFT [Loanable Funds Theory] lies in the specification of the consequences of the presence of bank money. Introducing the distinction between the natural rate of interest and interest rate on money, Wicksell and the LFT supporters state that an economy that uses bank money converges towards the equilibrium position that characterises an economy without banks, in which there is no credit market, but just a capital market where the resources not consumed by savers are exchanged. The presence of bank money does not alter the structure of the economic system; the only element that distinguishes a pure credit economy is the presence of an adjustment mechanism that drives the rate of interest on money, determined within the credit market, towards the natural rate of interest. The working of a pure credit economy can therefore be described using a theory that applies to a world without banks.

In contrast, Keynes states that the spread of a fiat money such as bank money changes the structure of the economic system. He underscores this point by introducing the distinction between a real exchange economy and a monetary economy. As is well known, Keynes uses the former term to refer to an economy in which money is merely a tool to reduce the cost of exchange and whose presence does not alter the structure of the economic system, which remains substantially a barter economy. Keynes notes that the classical economists formulated an explanation of how the real-exchange economy works, convinced that this explanation could be easily applied to a monetary economy. He believed that this conviction was unfounded …

Giancarlo Bertocco
