What guarantee is there … that economic concepts can be mapped unambiguously and subjectively – to be terribly and unnecessarily mathematical about it – into mathematical concepts? The belief in the power and necessity of formalizing economic theory mathematically has thus obliterated the distinction between cognitively perceiving and understanding concepts from different domains and mapping them into each other. Whether the age-old problem of the equality between supply and demand should be mathematically formalized as a system of inequalities or equalities is not something that should be decided by mathematical knowledge or convenience. Surely it would be considered absurd, bordering on the insane, if a surgical procedure was implemented because a tool for its implementation was devised by a medical doctor who knew and believed in topological fixed-point theorems? Yet, weighty propositions about policy are decided on the basis of formalizations based on ignorance and belief in the veracity of one kind of one-dimensional mathematics.
It is important, for the record, to recognize that key participants in the debate openly admitted their mistakes. Samuelson’s seventh edition of Economics was purged of errors. Levhari and Samuelson published a paper which began, ‘We wish to make it clear for the record that the nonreswitching theorem associated with us is definitely false’ … Leland Yeager and I jointly published a note acknowledging his earlier error and attempting to resolve the conflict between our theoretical perspectives … However, the damage had been done, and Cambridge, UK, ‘declared victory’: Levhari was wrong, Samuelson was wrong, Solow was wrong, MIT was wrong and therefore neoclassical economics was wrong. As a result there are some groups of economists who have abandoned neoclassical economics for their own refinements of classical economics. In the United States, on the other hand, mainstream economics goes on as if the controversy had never occurred. Macroeconomics textbooks discuss ‘capital’ as if it were a well-defined concept — which it is not, except in a very special one-capital-good world (or under other unrealistically restrictive conditions). The problems of heterogeneous capital goods have also been ignored in the ‘rational expectations revolution’ and in virtually all econometric work.
Highly recommended reading for Paul Romer and other ‘busy’ economists …
Why Discussions of Methodology Are Risky
So what does this mean for a discussion about methodology? I think that it would eventually be valuable to have a discussion about methodology, but only if we can trust that the people who participate are committed to the norms of science. It is too soon to start that discussion now.
We have clear evidence from the recent past that when someone who is secretly committed to the norms of politics is trusted for advice about scientific methodology, things can turn out very badly for the discipline. Bad methodology can do a lot more harm than a bad model.
So, Paul Romer seems to be rather reluctant to have a methodological discussion — it’s too “risky”.
Well, maybe, but on the other hand, if we’re not prepared to take that risk, economics can’t progress, as Tony Lawson forcefully argues in his new book, Essays on the Nature and State of Modern Economics:
Twenty common myths and/or fallacies of modern economics
1. The widely observed crisis of the modern economics discipline turns on problems that originate at the level of economic theory and/or policy.
It does not. The basic problems mostly originate at the level of methodology, and in particular with the current emphasis on methods of mathematical modelling. The latter emphasis is an error given the lack of match of the methods in question to the conditions in which they are applied. So long as the critical focus remains only, or even mainly, at the level of substantive economic theory and/or policy matters, then no amount of alternative textbooks, popular monographs, introductory pocketbooks, journal or magazine articles … or whatever, is going to get at the nub of the problems and so have the wherewithal to help make economics a sufficiently relevant discipline. It is the methods and manner of their use that are the basic problem.
How selfish soever man may be supposed, there are evidently some principles in his nature, which interest him in the fortune of others, and render their happiness necessary to him, though he derives nothing from it except the pleasure of seeing it. Of this kind is pity or compassion, the emotion which we feel for the misery of others, when we either see it, or are made to conceive it in a very lively manner. That we often derive sorrow from the sorrow of others, is a matter of fact too obvious to require any instances to prove it; for this sentiment, like all the other original passions of human nature, is by no means confined to the virtuous and humane, though they perhaps may feel it with the most exquisite sensibility. The greatest ruffian, the most hardened violator of the laws of society, is not altogether without it.
Use the first-order functional calculus.
It then becomes logic,
And, as if by magic,
The obvious is hailed as miraculous.
I’ve written several times about what I call the Economics 101 ideology: the overuse of a few simplified concepts from an introductory course to make sweeping policy recommendations (while branding any opponents as ignorant simpletons). The most common way that first-year economics is misused in the public sphere is ignoring assumptions. For example, most arguments for financial deregulation are ultimately based on the idea that transactions between rational actors with perfect information are always good for both sides — and most of the people making those arguments have forgotten that people are not rational and do not have perfect information.
Mark Buchanan and Noah Smith have both called out Greg Mankiw for a different and more pernicious way of misusing first-year economics: simply ignoring what it teaches — or, in this case, what Mankiw himself teaches. At issue is Mankiw’s Times column claiming that all economists agree on the overall benefits of free trade, so everyone should be in favor of the Trans-Pacific Partnership, among other trade agreements.
This is what Mankiw writes about international trade in his textbook (p. 183 of the fifth edition):
“Trade can make everyone better off. … [T]he gains of the winners exceed the losses of the losers, so the winners could compensate the losers and still be better off. … But will trade make everyone better off? Probably not. In practice, compensation for the losers from international trade is rare. …
“We can now see why the debate over trade policy is often contentious. Whenever a policy creates winners and losers, the stage is set for a political battle.”
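The compensation logic in the quoted passage is the Kaldor–Hicks criterion, and it is simple arithmetic. A toy sketch with entirely hypothetical numbers:

```python
winners_gain = 100   # hypothetical aggregate gain to winners from opening trade
losers_loss = 60     # hypothetical aggregate loss to losers

# "The gains of the winners exceed the losses of the losers" ...
net_gain = winners_gain - losers_loss
assert net_gain > 0

# ... "so the winners could compensate the losers and still be better off":
transfer = losers_loss                           # full compensation to the losers
winners_net_of_transfer = winners_gain - transfer
losers_net_of_transfer = -losers_loss + transfer
print(net_gain, winners_net_of_transfer, losers_net_of_transfer)  # 40 40 0
```

The catch, as the textbook itself notes, is the word *could*: nothing in the market mechanism makes the transfer actually happen.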
Yet, in his recent column, Mankiw says that opposition to free trade is because of irrational voters who are subject to “anti-foreign,” “anti-market,” and “make-work” biases. He doesn’t mention what he said clearly in his textbook: opposition to free trade is perfectly rational on the part of people who will be harmed by it, and they express that opposition through the political process. That’s how a democracy is supposed to work, by the way.
Mankiw’s column is a perfect example of how ideology works. It provides a simple way to interpret the world — people who don’t agree with you are idiots or xenophobes — while sweeping aside inconvenient evidence to the contrary. And first-year economics is as powerful an ideology as we have in this country today.
When analysing the course of the Swedish depression, it is important to keep in mind that the rapid growth of the national debt was not a cause of the crisis but rather a symptom of the downturn in the economy. In fact, the crisis would have been deeper had very large deficits in the public finances not been allowed … The course of the crisis involved the transfer of a given debt burden from the private to the public sector. No increase in the total indebtedness of the national economy took place.
A necessary private-sector debt consolidation thus constitutes the core of the Swedish depression … One must also ask how the crisis would have developed had the public sector not agreed to serve as a (hopefully temporary) ‘parking lot’ for the private sector’s excessive debts …
The large budget deficits can be seen as the result of an extensive ‘socialization’, in which the public sector in the short run helps lift an excessive debt burden off the private sector …
Today the development of the national debt plays an important pedagogical role as an indicator of the danger inherent in delaying economic-policy reform. Only under the threat of state bankruptcy does the Swedish Riksdag seem capable of deciding to limit the state’s spending commitments.
Unfortunately just as true today as it was twenty years ago – and that says a great deal about the quality of the Swedish national-debt debate among politicians and economists.
A low-powered study is only going to be able to see a pretty big effect. But sometimes you know that the effect, if it exists, is small. In other words, a study that accurately measures the effect … is likely to be rejected as statistically insignificant, while any result that passes the p < .05 test is either a false positive or a true positive that massively overstates the … effect.
A conventional boundary, obeyed long enough, can be easily mistaken for an actual thing in the world. Imagine if we talked about the state of the economy this way! Economists have a formal definition of a ‘recession,’ which depends on arbitrary thresholds just as ‘statistical significance’ does. One doesn’t say, ‘I don’t care about the unemployment rate, or housing starts, or the aggregate burden of student loans, or the federal deficit; if it’s not a recession, we’re not going to talk about it.’ One would be nuts to say so. The critics — and there are more of them, and they are louder, each year — say that a great deal of scientific practice is nuts in just this way.
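The mechanics behind this "winner's curse" can be checked with a small simulation. The numbers are invented for illustration: a true effect of 0.1 standard deviations tested on samples of 30 gives a z test with power below 10%, and the estimates that survive the p < .05 filter exaggerate the true effect several times over:

```python
import math
import random

random.seed(42)

TRUE_EFFECT = 0.1          # assumed small true effect, in standard deviations
N = 30                     # small sample -> deliberately low power
SE = 1 / math.sqrt(N)      # standard error of the sample mean (sigma = 1)
CRIT = 1.96                # two-sided z test at the 5% level
SIMS = 10_000

significant = []
for _ in range(SIMS):
    xbar = sum(random.gauss(TRUE_EFFECT, 1) for _ in range(N)) / N
    if abs(xbar) / SE > CRIT:              # passes the p < .05 filter
        significant.append(abs(xbar))

power = len(significant) / SIMS
exaggeration = (sum(significant) / len(significant)) / TRUE_EFFECT
print(f"power: {power:.2f}")
print(f"significant estimates overstate the true effect roughly {exaggeration:.1f}x")
```

The reason is baked in: with these numbers only estimates larger than about 0.36 can clear the significance bar, while the truth is 0.1, so conditioning on significance guarantees overestimation.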
If anything, this underlines how important it is not to equate science with statistical calculation. All science entails human judgement, and using statistical models doesn’t relieve us of that necessity. Working with misspecified models, the scientific value of significance testing is actually zero — even though you’re making valid statistical inferences! Statistical models and concomitant significance tests are no substitute for doing real science. Or as a noted German philosopher once famously wrote:
There is no royal road to science, and only those who do not dread the fatiguing climb of its steep paths have a chance of gaining its luminous summits.
Statistical significance doesn’t say that something is important or true. Since there already are far better and more relevant tests that can be done (see e.g. here and here), it is high time to consider what the proper function should be of what has now really become a statistical fetish. Given that it is in any case very unlikely that any population parameter is exactly zero, and that, contrary to assumption, most samples in social science and economics are not random and do not have the right distributional shape — why continue to press students and researchers to do null hypothesis significance testing, a procedure that relies on a weird backward logic that students and researchers usually don’t understand?
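The point that no population parameter is exactly zero has a mechanical consequence worth spelling out: with a large enough sample, any nonzero effect, however trivial, clears the 5% threshold. A quick sketch (the 0.01 effect and the one-sample z test are arbitrary illustrative choices):

```python
import math

def z_statistic(effect, n, sigma=1.0):
    """Expected z statistic for testing H0: mu = 0 when the true mean is `effect`."""
    return effect / (sigma / math.sqrt(n))

tiny_effect = 0.01  # substantively negligible, but not exactly zero
for n in (100, 10_000, 1_000_000):
    z = z_statistic(tiny_effect, n)
    verdict = "significant" if z > 1.96 else "not significant"
    print(f"n = {n:>9,}: z = {z:5.2f} -> {verdict}")
```

Significance here tracks the sample size, not the importance of the effect, which is exactly why it cannot by itself tell us that something matters.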
In its standard form, a significance test is not the kind of “severe test” that we are looking for in our search for being able to confirm or disconfirm empirical scientific hypotheses. This is problematic for many reasons, one being that there is a strong tendency to accept the null hypothesis as long as it can’t be rejected at the standard 5% significance level. In their standard form, significance tests thus bias against new hypotheses by making it hard to disconfirm the null hypothesis.
As shown over and over again when it is applied, people have a tendency to read “not disconfirmed” as “probably confirmed.” And — most importantly — we should of course never forget that the underlying parameters we use when performing significance tests are model constructions. Our p-values mean next to nothing if the model is wrong. As David Freedman writes in Statistical Models and Causal Inference:
I believe model validation to be a central issue. Of course, many of my colleagues will be found to disagree. For them, fitting models to data, computing standard errors, and performing significance tests is “informative,” even though the basic statistical assumptions (linearity, independence of errors, etc.) cannot be validated. This position seems indefensible, nor are the consequences trivial. Perhaps it is time to reconsider.
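Freedman’s complaint can be illustrated with the classic spurious-regression experiment of Granger and Newbold: regress one random walk on another, statistically independent, random walk. The textbook t test (whose independent-errors assumption fails here) then finds a “significant” slope most of the time. A small sketch:

```python
import math
import random

def random_walk(n, rng):
    """Cumulative sum of standard normal steps."""
    x, path = 0.0, []
    for _ in range(n):
        x += rng.gauss(0, 1)
        path.append(x)
    return path

def slope_t_stat(x, y):
    """t statistic for the OLS slope of y on x (textbook formulas)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    resid = [yi - my - b * (xi - mx) for xi, yi in zip(x, y)]
    s2 = sum(r * r for r in resid) / (n - 2)
    return b / math.sqrt(s2 / sxx)

rng = random.Random(123)
sims, n, rejections = 200, 100, 0
for _ in range(sims):
    x = random_walk(n, rng)
    y = random_walk(n, rng)            # independent of x by construction
    if abs(slope_t_stat(x, y)) > 1.96:
        rejections += 1

share = rejections / sims
print(f"'significant' slopes between independent series: {share:.0%}")
```

The p-values are computed “correctly” given the model, but the model is wrong, so they mean next to nothing — which is precisely Freedman’s point.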
Are you tired of people like walked-out Harvard economist Greg Mankiw and their repeated attempts at defending the 1% by invoking Adam Smith’s invisible hand and arguing that a market economy is some kind of moral-free zone where, if left undisturbed, people get what they “deserve”?
Then I suggest you listen to this great conversation on inequality:
Listening to Solow and Krugman is a healthy antidote to unashamed neoliberal inequality apologetics.
The outstanding faults of the economic society in which we live are its failure to provide for full employment and its arbitrary and inequitable distribution of wealth and incomes … I believe that there is social and psychological justification for significant inequalities of income and wealth, but not for such large disparities as exist to-day.
John Maynard Keynes, General Theory (1936)
A society where we allow the inequality of incomes and wealth to increase without bounds sooner or later implodes. The cement that keeps us together erodes, and in the end we are left with nothing but people dipped in the ice-cold water of egoism and greed.