Top 10 Popular Science Books

30 May, 2015 at 18:20 | Posted in Economics | Comments Off on Top 10 Popular Science Books


• David Hand: The Improbability Principle
• Dan Ariely: Predictably Irrational
• James Surowiecki: The Wisdom of Crowds
• Scott Page: Diversity and Complexity
• Nate Silver: The Signal and the Noise
• Daniel Kahneman: Thinking, Fast and Slow
• Jordan Ellenberg: How Not to Be Wrong
• Michael Mauboussin: The Success Equation
• David Salsburg: The Lady Tasting Tea
• Sam Savage: The Flaw of Averages


Debunking the use of mathematics in economics

28 May, 2015 at 20:02 | Posted in Economics | 3 Comments

What guarantee is there … that economic concepts can be mapped unambiguously and subjectively – to be terribly and unnecessarily mathematical about it – into mathematical concepts? The belief in the power and necessity of formalizing economic theory mathematically has thus obliterated the distinction between cognitively perceiving and understanding concepts from different domains and mapping them into each other. Whether the age-old problem of the equality between supply and demand should be mathematically formalized as a system of inequalities or equalities is not something that should be decided by mathematical knowledge or convenience. Surely it would be considered absurd, bordering on the insane, if a surgical procedure was implemented because a tool for its implementation was devised by a medical doctor who knew and believed in topological fixed-point theorems? Yet, weighty propositions about policy are decided on the basis of formalizations based on ignorance and belief in the veracity of one kind of one-dimensional mathematics.

K. Vela Velupillai
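Velupillai's supply-and-demand example can be made concrete with a toy market (invented numbers, purely illustrative): whether market clearing is formalized as an equality or as a system of inequalities changes what counts as an 'equilibrium' at all.

```python
# Demand: q_d(p) = max(0, 10 - p).  Fixed supply: 20 units.
supply = 20
demand = lambda p: max(0.0, 10 - p)

# Formalized as an equality, 'the' equilibrium price is absurd:
p_equality = 10 - supply          # solves 10 - p = 20
print(p_equality)                 # a negative price

# Formalized as inequalities (supply >= demand, p >= 0, with p = 0
# whenever supply strictly exceeds demand), the good is simply free:
p_inequality = 0.0
assert supply >= demand(p_inequality) and p_inequality >= 0
print(p_inequality)
```

The two formalizations are not interchangeable, which is exactly why the choice between them should not be settled by mathematical convenience.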

What is this thing called ‘capital’?

28 May, 2015 at 15:10 | Posted in Economics | 2 Comments


It is important, for the record, to recognize that key participants in the debate openly admitted their mistakes. Samuelson’s seventh edition of Economics was purged of errors. Levhari and Samuelson published a paper which began, ‘We wish to make it clear for the record that the nonreswitching theorem associated with us is definitely false’ … Leland Yeager and I jointly published a note acknowledging his earlier error and attempting to resolve the conflict between our theoretical perspectives … However, the damage had been done, and Cambridge, UK, ‘declared victory’: Levhari was wrong, Samuelson was wrong, Solow was wrong, MIT was wrong and therefore neoclassical economics was wrong. As a result there are some groups of economists who have abandoned neoclassical economics for their own refinements of classical economics. In the United States, on the other hand, mainstream economics goes on as if the controversy had never occurred. Macroeconomics textbooks discuss ‘capital’ as if it were a well-defined concept — which it is not, except in a very special one-capital-good world (or under other unrealistically restrictive conditions). The problems of heterogeneous capital goods have also been ignored in the ‘rational expectations revolution’ and in virtually all econometric work.

Edwin Burmeister

Highly recommended reading for Paul Romer and other ‘busy’ economists …

Causal inference in economics — an elementary introduction

27 May, 2015 at 20:20 | Posted in Statistics & Econometrics | Comments Off on Causal inference in economics — an elementary introduction

 

Dance of the p value

27 May, 2015 at 19:23 | Posted in Economics | Comments Off on Dance of the p value

 

Hard Times

26 May, 2015 at 22:30 | Posted in Varia | Comments Off on Hard Times

If you have access to Spotify you can enjoy this superb recording of Sviridov’s masterpiece (and don’t forget to turn up the volume):

Anti-Romer

26 May, 2015 at 21:06 | Posted in Economics | Comments Off on Anti-Romer

Why Discussions of Methodology Are Risky
So what does this mean for a discussion about methodology? I think that it would eventually be valuable to have a discussion about methodology, but only if we can trust that the people who participate are committed to the norms of science. It is too soon to start that discussion now.

We have clear evidence from the recent past that when someone who is secretly committed to the norms of politics can be trusted for advice about scientific methodology, things can turn out very badly for the discipline. Bad methodology can do a lot more harm than a bad model.

Paul Romer

So, Paul Romer seems to be rather reluctant to have a methodological discussion — it’s too “risky”.

Well, maybe, but on the other hand, if we’re not prepared to take that risk, economics can’t progress, as Tony Lawson forcefully argues in his new book, Essays on the Nature and State of Modern Economics:

Twenty common myths and/or fallacies of modern economics

1. The widely observed crisis of the modern economics discipline turns on problems that originate at the level of economic theory and/or policy.

It does not. The basic problems mostly originate at the level of methodology, and in particular with the current emphasis on methods of mathematical modelling. The latter emphasis is an error given the lack of match of the methods in question to the conditions in which they are applied. So long as the critical focus remains only, or even mainly, at the level of substantive economic theory and/or policy matters, then no amount of alternative textbooks, popular monographs, introductory pocketbooks, journal or magazine articles … or whatever, are going to get at the nub of the problems and so have the wherewithal to help make economics a sufficiently relevant discipline. It is the methods and manner of their use that are the basic problem.

Adam Smith’s visible hand

26 May, 2015 at 08:55 | Posted in Economics | 1 Comment

How selfish soever man may be supposed, there are evidently some principles in his nature, which interest him in the fortune of others, and render their happiness necessary to him, though he derives nothing from it except the pleasure of seeing it. Of this kind is pity or compassion, the emotion which we feel for the misery of others, when we either see it, or are made to conceive it in a very lively manner. That we often derive sorrow from the sorrow of others, is a matter of fact too obvious to require any instances to prove it; for this sentiment, like all the other original passions of human nature, is by no means confined to the virtuous and humane, though they perhaps may feel it with the most exquisite sensibility. The greatest ruffian, the most hardened violator of the laws of society, is not altogether without it.

TPP and the Economics 101 ideology

25 May, 2015 at 10:01 | Posted in Economics | Comments Off on TPP and the Economics 101 ideology

I’ve written several times about what I call the Economics 101 ideology: the overuse of a few simplified concepts from an introductory course to make sweeping policy recommendations (while branding any opponents as ignorant simpletons). The most common way that first-year economics is misused in the public sphere is ignoring assumptions. For example, most arguments for financial deregulation are ultimately based on the idea that transactions between rational actors with perfect information are always good for both sides — and most of the people making those arguments have forgotten that people are not rational and do not have perfect information.

Mark Buchanan and Noah Smith have both called out Greg Mankiw for a different and more pernicious way of misusing first-year economics: simply ignoring what it teaches — or, in this case, what Mankiw himself teaches. At issue is Mankiw’s Times column claiming that all economists agree on the overall benefits of free trade, so everyone should be in favor of the Trans-Pacific Partnership, among other trade agreements.

This is what Mankiw writes about international trade in his textbook (p. 183 of the fifth edition):

“Trade can make everyone better off. … [T]he gains of the winners exceed the losses of the losers, so the winners could compensate the losers and still be better off. … But will trade make everyone better off? Probably not. In practice, compensation for the losers from international trade is rare. …
“We can now see why the debate over trade policy is often contentious. Whenever a policy creates winners and losers, the stage is set for a political battle.”

Yet, in his recent column, Mankiw says that opposition to free trade is because of irrational voters who are subject to “anti-foreign,” “anti-market,” and “make-work” biases. He doesn’t mention what he said clearly in his textbook: opposition to free trade is perfectly rational on the part of people who will be harmed by it, and they express that opposition through the political process. That’s how a democracy is supposed to work, by the way.
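Mankiw's own textbook argument, and the reason opposition remains rational, fits in a few lines of arithmetic (the magnitudes below are invented for illustration):

```python
# Gains from a trade deal, in invented units.
winners_gain, losers_loss = 100, 60

# The textbook point: the winners COULD compensate the losers.
net_gain = winners_gain - losers_loss
print(net_gain)        # positive: 'trade can make everyone better off' in principle

# The practical point: 'compensation for the losers ... is rare.'
compensation_paid = 0
losers_outcome = -losers_loss + compensation_paid
print(losers_outcome)  # negative: opposing the deal is perfectly rational
```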

Mankiw’s column is a perfect example of how ideology works. It provides a simple way to interpret the world — people who don’t agree with you are idiots or xenophobes — while sweeping aside inconvenient evidence to the contrary. And first-year economics is as powerful an ideology as we have in this country today.

James Kwak

On Magdalena Andersson’s reading list

25 May, 2015 at 09:38 | Posted in Economics | Comments Off on On Magdalena Andersson’s reading list

When analysing the course of the Swedish depression, it is important to keep in mind that the rapid growth of government debt has not been a cause of the crisis but rather a symptom of the downturn in the economy. In fact, the crisis would have been deeper had very large public-sector deficits not been allowed to emerge … What the crisis brought about was a transfer of a given debt burden from the private to the public sector. No increase in the total indebtedness of the national economy has taken place.

A necessary private-sector debt consolidation thus constitutes the core of the Swedish depression … One must also ask how the crisis would have developed had the public sector not agreed to serve as a (hopefully temporary) ‘parking lot’ for the private sector’s excessive debts …

The large budget deficits can be seen as the result of an extensive ‘socialisation,’ in which the public sector in the short run helps to lift an excessive debt burden off the private sector …

Today the development of government debt plays an important pedagogical role as an indicator of the danger of delaying economic-policy reform. Only under the threat of state bankruptcy does the Swedish Riksdag appear capable of deciding on limits to the state’s spending commitments.

Hans Tson Söderström

Sadly just as true today as it was twenty years ago, which says a great deal about the quality of the Swedish government-debt debate among politicians and economists.

‘Doctor, it hurts when I p’

24 May, 2015 at 16:34 | Posted in Economics | Comments Off on ‘Doctor, it hurts when I p’

A low-powered study is only going to be able to see a pretty big effect. But sometimes you know that the effect, if it exists, is small. In other words, a study that accurately measures the effect … is likely to be rejected as statistically insignificant, while any result that passes the p < .05 test is either a false positive or a true positive that massively overstates the … effect.

A conventional boundary, obeyed long enough, can be easily mistaken for an actual thing in the world. Imagine if we talked about the state of the economy this way! Economists have a formal definition of a ‘recession,’ which depends on arbitrary thresholds just as ‘statistical significance’ does. One doesn’t say, ‘I don’t care about the unemployment rate, or housing starts, or the aggregate burden of student loans, or the federal deficit; if it’s not a recession, we’re not going to talk about it.’ One would be nuts to say so. The critics — and there are more of them, and they are louder, each year — say that a great deal of scientific practice is nuts in just this way.
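Ellenberg's claim about low-powered studies is easy to check with a short simulation (a sketch with illustrative numbers: a small true effect, a small sample, and a simple z-test, none of it from the book):

```python
import random
import statistics
from math import sqrt

random.seed(1)

TRUE_EFFECT = 0.2   # small but real difference in means (sd = 1)
N = 20              # per-group sample size: a low-powered design
REPS = 2000

significant_estimates = []
for _ in range(REPS):
    control = [random.gauss(0, 1) for _ in range(N)]
    treated = [random.gauss(TRUE_EFFECT, 1) for _ in range(N)]
    diff = statistics.mean(treated) - statistics.mean(control)
    se = sqrt(2 / N)            # known sd = 1, so a z-test suffices
    if abs(diff / se) > 1.96:   # 'passes the p < .05 test'
        significant_estimates.append(diff)

power = len(significant_estimates) / REPS
exaggeration = statistics.mean(significant_estimates) / TRUE_EFFECT
print(f"power: {power:.2f}")
print(f"significant results overstate the true effect by a factor of about {exaggeration:.1f}")
```

The accurate estimates are mostly filtered out as 'insignificant'; what survives the threshold is a systematically inflated picture of the effect.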

If anything, this underlines how important it is not to equate science with statistical calculation. All science entails human judgement, and using statistical models doesn’t relieve us of that necessity. When we work with misspecified models, the scientific value of significance testing is actually zero, even though we may be making valid statistical inferences! Statistical models and concomitant significance tests are no substitute for doing real science. Or as a noted German philosopher once famously wrote:

There is no royal road to science, and only those who do not dread the fatiguing climb of its steep paths have a chance of gaining its luminous summits.

Statistical significance doesn’t say that something is important or true. Since there already is far better and more relevant testing that can be done (see e.g. here and here), it is high time to reconsider the proper function of what has now really become a statistical fetish. Given that it is anyway very unlikely that any population parameter is exactly zero, and that, contrary to assumption, most samples in social science and economics are not random and do not have the right distributional shape, why continue to press students and researchers to do null hypothesis significance testing, testing that relies on a weird backward logic that students and researchers usually don’t understand?
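The point that statistical significance is not importance can be made with a back-of-the-envelope power calculation (a sketch; the 0.01-standard-deviation 'effect' below is an arbitrary, substantively negligible choice). Since no population parameter is exactly zero, a large enough sample rejects the null no matter how trivial the truth is:

```python
from math import erf, sqrt

def power_z_test(effect, n, crit=1.96):
    """Approximate power of a two-sided one-sample z-test (sd = 1)."""
    phi = lambda x: 0.5 * (1 + erf(x / sqrt(2)))  # standard normal CDF
    z = effect * sqrt(n)
    return (1 - phi(crit - z)) + phi(-crit - z)

# A substantively negligible effect of 0.01 standard deviations:
for n in (100, 10_000, 1_000_000):
    print(n, round(power_z_test(0.01, n), 3))
```

With n = 100 the rejection rate is barely above the 5% false-positive floor; with a million observations rejection is essentially certain, even though nothing of any practical importance has been found.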

In its standard form, a significance test is not the kind of “severe test” we are looking for when we want to confirm or disconfirm empirical scientific hypotheses. This is problematic for many reasons, one being that there is a strong tendency to accept the null hypothesis whenever it can’t be rejected at the standard 5% significance level. In their standard form, significance tests bias against new hypotheses by making it hard to disconfirm the null hypothesis.

As shown over and over again when it is applied, people have a tendency to read “not disconfirmed” as “probably confirmed.” And — most importantly — we should of course never forget that the underlying parameters we use when performing significance tests are model constructions. Our p-values mean next to nothing if the model is wrong. As David Freedman writes in Statistical Models and Causal Inference:

I believe model validation to be a central issue. Of course, many of my colleagues will be found to disagree. For them, fitting models to data, computing standard errors, and performing significance tests is “informative,” even though the basic statistical assumptions (linearity, independence of errors, etc.) cannot be validated. This position seems indefensible, nor are the consequences trivial. Perhaps it is time to reconsider.

Solow and Krugman on inequality

24 May, 2015 at 11:06 | Posted in Economics | Comments Off on Solow and Krugman on inequality

Are you tired of people like walked-out Harvard economist Greg Mankiw and their repeated attempts at defending the 1% by invoking Adam Smith’s invisible hand and arguing that a market economy is some kind of moral free zone where, if left undisturbed, people get what they “deserve”?

Then I suggest you listen to this great conversation on inequality:


Listening to Solow and Krugman is a healthy antidote to unashamed neoliberal inequality apologetics.

The outstanding faults of the economic society in which we live are its failure to provide for full employment and its arbitrary and inequitable distribution of wealth and incomes … I believe that there is social and psychological justification for significant inequalities of income and wealth, but not for such large disparities as exist to-day.

John Maynard Keynes, General Theory (1936)

A society that allows the inequality of incomes and wealth to increase without bounds sooner or later implodes. The cement that keeps us together erodes, and in the end we are left only with people dipped in the ice-cold water of egoism and greed.

On tour

20 May, 2015 at 21:07 | Posted in Varia | Comments Off on On tour

 

Touring again. Conference in Stockholm and guest appearances in the Swedish Parliament and at the National Institute of Economic Research. Regular blogging to be resumed during the weekend.

Consistency and validity is not enough!

20 May, 2015 at 19:01 | Posted in Economics | Comments Off on Consistency and validity is not enough!

Neoclassical economic theory today is in the story-telling business whereby economic theorists create make-believe analogue models of the target system – usually conceived as the real economic system. This modeling activity is considered useful and essential. Since fully-fledged experiments on a societal scale as a rule are prohibitively expensive, ethically indefensible or unmanageable, economic theorists have to substitute experimenting with something else. To understand and explain relations between different entities in the real economy the predominant strategy is to build models and make things happen in these “analogue-economy models” rather than engineering things happening in real economies.

Formalistic deductive “Glasperlenspiel” can be very impressive and seductive. But in the realm of science it ought to be considered of little or no value to simply make claims about the model and lose sight of reality. As Julian Reiss writes:

There is a difference between having evidence for some hypothesis and having evidence for the hypothesis relevant for a given purpose. The difference is important because scientific methods tend to be good at addressing hypotheses of a certain kind and not others: scientific methods come with particular applications built into them … The advantage of mathematical modelling is that its method of deriving a result is that of mathematical proof: the conclusion is guaranteed to hold given the assumptions. However, the evidence generated in this way is valid only in abstract model worlds while we would like to evaluate hypotheses about what happens in economies in the real world … The upshot is that valid evidence does not seem to be enough. What we also need is to evaluate the relevance of the evidence in the context of a given purpose.

Neoclassical economics has long since given up on the real world and contents itself with proving things about thought-up worlds. Empirical evidence only plays a minor role in economic theory, where models largely function as a substitute for empirical evidence. Hopefully humbled by the manifest failure of its theoretical pretences, the one-sided, almost religious, insistence on axiomatic-deductivist modeling as the only scientific activity worthy of pursuing in economics will give way to methodological pluralism based on ontological considerations rather than formalistic tractability. To have valid evidence is not enough. What economics needs is sound evidence.

Discussing Paul Romer’s “mathiness” concept, Peter Dorman yesterday criticized economists’ belief that theories and models being “consistent with” data somehow make the theories and models a success story. And Chris Dillow elaborates on the weakness of this “consistent with” error in a post today:

If a man has no money, this is “consistent with” the theory that he has given it away. But if in fact he has been robbed, that theory is grievously wrong. Mere consistency with the facts is not sufficient.

This is a point which some defenders of inequality miss. Of course, you can devise theories which are “consistent with” inequality arising from reasonable differences in choices and marginal products. Such theories, though, beg the question: is that how inequality really emerged? And the answer, to put it mildly, is: only partially. It also arose from luck, inefficient selection, rigged markets, rent-seeking and outright theft …

The Duhem-Quine thesis warns us that facts under-determine theory: they are “consistent with” multiple theories. This is perhaps especially true when those facts are snapshots. For example, a Gini coefficient – being a mere snapshot of inequality – tells us nothing about how the inequality emerged.

So, how can we guard against the “consistent with” error? One thing we need is history: this helps tell us how things actually happened. And – horrific as it might seem to some economists – we also need sociology: we need to know how people actually behave and not merely that their behaviour is “consistent with” some theory. Economics, then, cannot be a stand-alone discipline but part of the social sciences and humanities – a point which is lost in the discipline’s mathiness.
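Dillow's snapshot point can be made concrete with a toy construction (the income figures are invented): two societies with completely different histories, one a frozen hierarchy and one with complete rank reversal, produce the identical Gini coefficient, because the coefficient only sees the final cross-section:

```python
def gini(incomes):
    """Gini coefficient via the rank-weighted sum of sorted incomes."""
    xs = sorted(incomes)
    n = len(xs)
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * sum(xs)) - (n + 1) / n

# Two invented five-person societies observed over two 'years'.
rigid_year1  = [10, 20, 30, 40, 100]
rigid_year2  = [10, 20, 30, 40, 100]   # person i keeps position i
mobile_year1 = [10, 20, 30, 40, 100]
mobile_year2 = [100, 40, 30, 20, 10]   # person i's fortunes reverse

# The final-year snapshots are the same multiset of incomes ...
assert gini(rigid_year2) == gini(mobile_year2)
print("Gini in both societies:", round(gini(rigid_year2), 3))
# ... so the Gini cannot distinguish the frozen hierarchy from the
# society where last year's poorest is this year's richest.
```

The statistic is literally a function of the snapshot alone; any number of histories are "consistent with" it.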

Yes indeed, history helps. And if we’re not too ‘busy’ doing the things we do, but once in a while take a break and do some methodological reflection on why we do what we do — well, that takes us a long way too.

Paul Romer is ‘busy’ …

19 May, 2015 at 14:21 | Posted in Economics | 9 Comments

About math: I have an undergraduate degree in physics. I’ve seen clear evidence that math can facilitate scientific progress toward the truth.

If you think that math is worthless or dangerous, I’m sure that there are people who will be happy to discuss this with you. I’m not interested. I’m busy.

About truth and science: My fundamental premise is that there is an objective notion of truth and that science can help us make progress toward truth.

If you do not accept this premise, I’m sure that there are people who would be happy to debate it with you. I’m not interested. I’m busy.

Paul Romer

Hmm …

To me this sounds more like a person afraid of methodological self-reflection, rather than an open-minded and pluralist person.

Where does this methodology-aversion come from?

As far as yours truly can see, it all comes down to a misplaced belief that deductivist mathematical reasoning is the only kind of scientific economics around. If economics isn’t performed as mathematical modelling, it’s not really science in Romer’s world-view. There is no problem with that view — as long as you have done some ontological and methodological reflection and presented arguments for the appropriateness of insisting on deductivist-mathematical modelling as the preferred scientific procedure in economics. No such argumentation is presented.

When applying deductivist thinking to economics, Romer and other mainstream economists usually set up “as if” models based on a set of tight axiomatic assumptions from which consistent and precise inferences are made. The beauty of this procedure is of course that if the axiomatic premises are true, the conclusions necessarily follow. The snag is that if the models are to be relevant, we also have to argue that their precision and rigour still holds when they are applied to real-world situations. They often don’t. When addressing real economies, the idealizations necessary for the deductivist machinery to work, simply don’t hold.

So how should we evaluate the search for ever greater precision and the concomitant arsenal of mathematical and formalist models? To a large extent, the answer hinges on what we want our models to perform and how we basically understand the world.

The world in which we live is inherently uncertain and quantifiable probabilities are the exception rather than the rule. To every statement about it is attached a “weight of argument” that makes it impossible to reduce our beliefs and expectations to a one-dimensional stochastic probability distribution. If “God does not play dice” as Einstein maintained, I would add “nor do people”. The world as we know it, has limited scope for certainty and perfect knowledge. Its intrinsic and almost unlimited complexity and the interrelatedness of its organic parts prevent the possibility of treating it as constituted by “legal atoms” with discretely distinct, separable and stable causal relations. Our knowledge accordingly has to be of a rather fallible kind.

To search for precision and rigour in such a world is self-defeating, at least if precision and rigour are supposed to assure external validity. The only way to defend such an endeavour is to take a blind eye to ontology and restrict oneself to prove things in closed model-worlds. Why we should care about these and not ask questions of relevance is hard to see. We have to at least justify our disregard for the gap between the nature of the real world and our theories and models of it.

Now, if the real world is fuzzy, vague and indeterminate, then why should our models build upon a desire to describe it as precise and predictable? Even if there always has to be a trade-off between theory-internal validity and external validity, we have to ask ourselves if our models are relevant.

Models preferably ought to somehow reflect/express/correspond to reality. I’m not saying that the answers are self-evident, but at least you have to do some methodological and philosophical under-labouring to rest your case. Too often that is wanting in modern economics, where methodological justifications of chosen models and methods as a rule are non-existent.

“Human logic” has to supplant the classical, formal, logic of deductivism if we want to have anything of interest to say of the real world we inhabit. Logic is a marvellous tool in mathematics and axiomatic-deductivist systems, but a poor guide for action in real-world systems, in which concepts and entities are without clear boundaries and continually interact and overlap. In this world I would say we are better served with a methodology that takes into account that “the more we know the more we know we don’t know”.

The models and methods we choose to work with have to be in conjunction with the economy as it is situated and structured. Epistemology has to be founded on ontology. Deductivist closed-system theories, such as all the varieties of the Walrasian general equilibrium kind, could perhaps adequately represent an economy showing closed-system characteristics. But since the economy clearly has more in common with an open-system ontology, we ought to look out for other theories – theories that are rigorous and precise in the sense that they can be deployed to detect important causal mechanisms, capacities and tendencies pertaining to deep layers of the real world.

Rigour, coherence and consistency have to be defined relative to the entities for which they are supposed to apply. Too often they have been restricted to questions internal to the theory or model. But clearly the nodal point has to concern external questions, such as how our theories and models relate to real-world structures and relations. Applicability rather than internal validity ought to be the arbiter of taste.

But obviously Paul Romer doesn’t want to talk about these scary methodological-philosophical issues. He is ‘busy’ …

DeLong on the real mathiness-people

19 May, 2015 at 11:29 | Posted in Economics | Comments Off on DeLong on the real mathiness-people

Paul Romer inquired why I did not endorse his following Krusell and Smith (2014) in characterizing Piketty and Piketty and Zucman as a canonical example of what Romer calls “mathiness”. Indeed, I think that, instead, it is Krusell and Smith (2014) that suffers from “mathiness”: people not in control of their models, deploying algebra untethered to the real world in a manner that approaches gibberish.

amathI wrote about this last summer … This time, I replied to Paul Romer’s question with a Tweetstorm. Here it is, collected, with paragraphs added and redundancy deleted:

My objection to Krusell and Smith (2014) was that it seemed to me to suffer much more from what you call “mathiness” than does Piketty or Piketty and Zucman.

Recall that Krusell and Smith began by saying that they:

“do not quite recognize … k/y = s/g” …

But k/y=s/g is Harrod (1939) and Domar (1946). How can they fail to recognize it?

And then their calibration–n+g=.02, δ=.10–not only fails to acknowledge Piketty’s estimates of economy-wide depreciation rate as between .01 and .02, but leads to absolutely absurd results:

For a country with a k/y=4, δ=.10 -> depreciation is 40% of gross output.
For a country like Belle Époque France with a k/y=7, δ=.10 -> depreciation is 70% of gross output.

It seemed to me that Krusell and Smith had no control whatsoever over the calibration of their model at all.

Note that I am working from notes here, because http://aida.wss.yale.edu/smith/piketty1.pdf no longer points to Krusell and Smith (2014). It points, instead, to Krusell and Smith (2015), a revised version.

In the revised version, the calibration differs. It differs in:
1. raising (n+g) from .02 to .03,

2. lowering δ from .10 to .05 (still more than twice Piketty’s historical estimates), and

3. changing the claim that as n+g -> 0, k/y increases “only very marginally” to “only modestly”.

(The right thing to do would be to take economy-wide δ=.02 and say that k/y increases “substantially”.)

If Krusell and Smith (2015) offers any reference to Piketty’s historical depreciation efforts, I missed it.

If it offers any explanation of why they decided to raise their calibration of n+g when they lowered their δ, I missed that too.

Piketty has flaws, but it does not seem to me that working in a net rather than a gross production function framework is one of them. And Krusell and Smith’s continued attempts to demonstrate otherwise seem to me to suffer from “mathiness” to a high degree …

Brad DeLong
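DeLong's arithmetic is easy to reproduce. Depreciation as a share of gross output is simply δ × (K/Y), and the 'modestly' versus 'substantially' dispute turns on the steady-state relation K/Y = s/(n+g+δ) as growth slows (a sketch; the saving rate s = .24 is an illustrative assumption, not a number from the post):

```python
def depreciation_share(delta, k_over_y):
    """Depreciation as a fraction of gross output: δ K / Y."""
    return delta * k_over_y

def steady_state_k_over_y(s, n_plus_g, delta):
    """Solow steady state with gross saving: K/Y = s / (n + g + δ)."""
    return s / (n_plus_g + delta)

# DeLong's reductio: δ = .10 combined with observed capital-output ratios.
print(f"{depreciation_share(0.10, 4):.0%} of gross output at K/Y = 4")
print(f"{depreciation_share(0.10, 7):.0%} of gross output at K/Y = 7 (Belle Époque France)")

# Why the calibration of δ decides everything as growth slows:
s = 0.24  # illustrative gross saving rate (an assumption, not DeLong's)
for delta in (0.10, 0.02):   # Krusell-Smith δ vs economy-wide δ
    before = steady_state_k_over_y(s, 0.03, delta)
    after = steady_state_k_over_y(s, 0.00, delta)
    print(f"δ={delta:.2f}: K/Y rises from {before:.1f} to {after:.1f} as n+g -> 0")
```

With δ = .10 the ratio barely moves as growth vanishes; with an economy-wide δ ≈ .02 it more than doubles, which is the 'substantially' DeLong says the honest calibration would force.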

How to interpret economic theory

18 May, 2015 at 19:41 | Posted in Economics | Comments Off on How to interpret economic theory

The issue of interpreting economic theory is, in my opinion, the most serious problem now facing economic theorists. The feeling among many of us can be summarized as follows. Economic theory should deal with the real world. It is not a branch of abstract mathematics even though it utilizes mathematical tools. Since it is about the real world, people expect the theory to prove useful in achieving practical goals. But economic theory has not delivered the goods. Predictions from economic theory are not nearly as accurate as those offered by the natural sciences, and the link between economic theory and practical problems … is tenuous at best. Economic theory lacks a consensus as to its purpose and interpretation. Again and again, we find ourselves asking the question “where does it lead?”

Ariel Rubinstein

Modelling consistency and real world non-coherence in mainstream economics

18 May, 2015 at 19:12 | Posted in Economics | Comments Off on Modelling consistency and real world non-coherence in mainstream economics

In those cases where economists do focus on questions of market or competitive equilibrium etc., the formulators of the models in question are often careful to stress that their theorising has little connection with the real world anyway and should not be used to draw conclusions about the latter, whether in terms of efficiency or for policy or whatever.

In truth in those cases where mainstream assumptions and categories are couched in terms of economic systems as a whole they are mainly designed to achieve consistency at the level of modelling rather than coherence with the world in which we live.

This concern for a notion of consistency in modelling practice is true for example of the recently fashionable rational expectations hypothesis, originally formulated by John Muth (1961), and widely employed by those that do focus on system-level outcomes. The hypothesis proposes that predictions attributed to agents (being theorised about) are treated as being essentially the same as (consistent with) those generated by the economic model within which the same agents are theorised. As such the proposal is clearly no more than a technique for (consistency in) modelling, albeit a bizarre one. Significantly, any assertion that the expectations held (and so the model in which they are imposed) are essentially correct is a step that is additional to assuming rational expectations.

It is a form of modelling consistency (albeit a different one) that underpins the notion of equilibrium itself. In modern mainstream economics the category equilibrium has nothing to do with the features of the real economy … Economic models often comprise not single, but sets of, equations, each of which is notoriously found to have little relation to what happens in the real world. One question that nevertheless keeps economists occupied with such unrealistic models is whether the equations formulated are mutually consistent in the sense that there ‘exists’ a vector of values of some variable, say one labelled ‘prices’, that is consistent with each and all the equations. Such a model ‘solution’ is precisely the meaning of equilibrium in this context. As such the notion is not at all a claim about the world but merely a (possible) property that a set of equations may or may not be found to possess … In short, when mainstream economists question whether an equilibrium ‘exists’ they merely enquire as to whether a set of equations has a solution.
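Lawson's point, that 'existence of equilibrium' is merely a claim about a set of equations and not about the economy, can be made concrete in a few lines. A minimal sketch (the coefficients are invented purely for illustration; nothing here comes from any actual model in the quoted texts):

```python
import numpy as np

# Two linear 'excess demand' equations in two prices. In the mainstream
# sense described above, an 'equilibrium exists' iff the system
# A @ p = b has a solution vector p.
A = np.array([[2.0, -1.0],
              [-1.0, 3.0]])
b = np.array([10.0, 5.0])

# Existence is a property of the equations (non-singularity of A),
# not a claim about any actual economy.
if np.linalg.matrix_rank(A) == A.shape[0]:
    p = np.linalg.solve(A, b)      # the 'equilibrium' price vector
    assert np.allclose(A @ p, b)   # i.e. the equations are mutually consistent
    print(p)                       # [7. 4.]
```

Imposing rational expectations on such a model would simply mean endowing the modelled agents with this same solution vector: consistency within the model, as Lawson notes, not correctness about the world.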

Modern economics has become increasingly irrelevant to the understanding of the real world. Tony Lawson traces this irrelevance to the failure of economists to match their deductive-axiomatic methods with their subject.

It is — sad to say — a fact that within mainstream economics internal validity is everything and external validity nothing. Why anyone should be interested in those kinds of theories and models is beyond my imagination. As long as mainstream economists do not come up with export licenses for their theories and models to the real world in which we live, they really should not be surprised if people say that this is not science, but autism.

Studying mathematics and logic is interesting and fun. It sharpens the mind. In pure mathematics and logic we do not have to worry about external validity. But economics is not pure mathematics or logic. It is about society, about the real world. If it forgets that, economics really is in dire straits.

Paul Romer on math masquerading as science

16 May, 2015 at 16:18 | Posted in Economics | 15 Comments

I have a new paper in the Papers and Proceedings Volume of the AER that is out in print and on the AER website …

The point of the paper is that if we want economics to be a science, we have to recognize that it is not ok for macroeconomists to hole up in separate camps, one that supports its version of the geocentric model of the solar system and another that supports the heliocentric model …

The usual way to protect a scientific discussion from the factionalism of academic politics is to exclude people who opt out of the norms of science. The challenge lies in knowing how to identify them.

From my paper:

“The style that I am calling mathiness lets academic politics masquerade as science. Like mathematical theory, mathiness uses a mixture of words and symbols, but instead of making tight links, it leaves ample room for slippage between statements in natural versus formal language and between statements with theoretical as opposed to empirical content.”

Persistent disagreement is a sign that some of the participants in a discussion are not committed to the norms of science. Mathiness is a symptom of this deeper problem, but one that is particularly damaging because it can generate a broad backlash against the genuine mathematical theory that it mimics. If the participants in a discussion are committed to science, mathematical theory can encourage a unique clarity and precision in both reasoning and communication. It would be a serious setback for our discipline if economists lose their commitment to careful mathematical reasoning …

The goal in starting this discussion is to ensure that economics is a science that makes progress toward truth. A necessary condition for making this kind of progress is a capacity for reaching consensus that is grounded in logic and evidence. Given how deeply entrenched positions seem to have become in macroeconomics, this discussion could be unpleasant. If animosity surfaces, it will be tempting to postpone this discussion. We should resist this temptation.

I know many of the people whose work I’m criticizing. I genuinely like them. It will be costly for many of us if disagreement spills over into animosity. But if it does, we can be confident that the bad feelings will pass and we should stay focused on the long run …

Science is the most important human accomplishment. An investment in science can offer a higher social rate of return than any other a person can make. It would be tragic if economists did not stay current on the periodic maintenance needed to protect our shared norms of science from infection by the norms of politics.

Paul Romer

One of those economists Romer knows and — rightly — criticizes in his paper is Robert Lucas.

Lucas is, as we all know, a very “mathy” person, and Romer is not the first to notice that “mathiness” lets academic politics masquerade as science …

“Too large a proportion of recent ‘mathematical’ economics are mere concoctions, as imprecise as the initial assumptions they rest on …” (John Maynard Keynes)

Added 20:00 GMT: Joshua Gans has a post up on Romer’s article well worth reading, not least because it highlights the nodal Romer-Lucas difference behind the “mathiness” issue.

In modern endogenous growth theory, knowledge (ideas) is presented as the locomotive of growth. But as Allyn Young, Piero Sraffa and others had already shown in the 1920s, knowledge also has to do with increasing returns to scale, and is therefore not really compatible with neoclassical economics and its emphasis on constant returns to scale.

Increasing returns generated by non-rivalry between ideas is simply not compatible with pure competition and the simplistic invisible hand dogma. That is probably also the reason why so many neoclassical economists — like Robert Lucas — have been so reluctant to embrace the theory wholeheartedly.

Neoclassical economics has tried to save itself by more or less substituting human capital for knowledge/ideas. But knowledge or ideas should not be confused with human capital.

In one way one might say that increasing returns is the darkness of the neoclassical heart. And this is something most mainstream neoclassical economists do not really want to talk about. They prefer to look the other way and pretend that increasing returns can be seamlessly incorporated into the received paradigm. Romer’s view of human capital as a good example of non-“mathiness” notwithstanding, yours truly is of the view that talking about “human capital” (or, as Lucas puts it, “knowledge ’embodied’ in individual people in the short run”) rather than knowledge/ideas is preferred only because it makes the theory more easily digestible.

Added 20:55 GMT: Romer has an even newer post up, further illustrating Lucasian obfuscations.

Added May 17: Brad DeLong has a comment up on Romer’s article, arguing that Lucas et consortes don’t approve of imperfect competition models because they are “intellectually dangerous,” since they might open the door to government intervention and “interventionist planning.” I agree with Brad, but as I’ve argued above, what these guys fear even more is taking aboard increasing returns, since that would not only mean that policy preferences would have to change, but would actually wreak havoc on one of the very foundations of mainstream neoclassicism — marginal productivity theory.

Added May 18: Sandwichman has a great post up on this issue, with pertinent quotations from one of my intellectual heroes, Nicholas Georgescu-Roegen.

Added May 19: David Ruccio has some interesting thoughts on Romer and the fetishism of mathematics here.

Piketty and the non-applicability of neoclassical economics

16 May, 2015 at 10:55 | Posted in Economics | 8 Comments

In yours truly’s On the use and misuse of theories and models in economics, the author of Capital in the Twenty-First Century is criticized for not being prepared to take the full consequences of the fact that marginal productivity theory — and the alleged close connection between productivity and remuneration postulated in mainstream income distribution theory — has been disconfirmed over and over again, both by history and, as shown already by Sraffa in the 1920s and in the Cambridge capital controversy in the 1960s, from a theoretical point of view:

Having read Piketty (2014, p. 332), no one ought to doubt that the idea that capitalism is an expression of impartial market forces of supply and demand bears but little resemblance to actual reality:

“It is only reasonable to assume that people in a position to set their own salaries have a natural incentive to treat themselves generously, or at the very least to be rather optimistic in gauging their marginal productivity.”

But although I agree with Piketty on the – at least to anyone not equipped with ideological blinders – obvious insufficiency and limitation of neoclassical marginal productivity theory when it comes to explaining the growth of top 1 % incomes, I strongly disagree with his rather unwarranted belief that when it comes to more ordinary wealth and income, marginal productivity theory should somehow still be considered applicable. It is not.

Wealth and income distribution in a market society, both individual and functional, is to an overwhelming degree shaped by institutionalized political and economic norms and power relations – things that have relatively little to do with marginal productivity in complete, profit-maximizing competitive market models. This is not to mention how extremely difficult, if not outright impossible, it is to empirically disentangle and measure different individuals’ contributions in the kind of team production that characterizes modern societies – or, especially when it comes to “capital,” what it is even supposed to mean and how to measure it. Remunerations, a fortiori, do not necessarily correspond to any marginal product of different factors of production, or to “compensating differentials” due to non-monetary characteristics of different jobs, natural ability, effort or chance.

It’s pleasing to see that Piketty has taken this critique to heart. In an interview in Potemkin Review he admits that marginal productivity explanations of income is wanting, not only for those at the very top, but, generally:

Piketty: I do not believe in the basic neoclassical model. But I think it is a language that is important to use in order to respond to those who believe that if the world worked that way everything would be fine. And one of the messages of my book is, first, it does not work that way, and second, even if it did, things would still be almost as bad …

All I am saying to neoclassical economists is this: if you really want to stick to your standard model, very small departures from it like an elasticity of substitution slightly above 1 will be enough to generate what we observe in recent decades. But there are many other, and in my view more plausible, ways to explain it. You should be aware of the fact that even with your perfect competition and simplified one good assumption, things can still go wrong, in the sense that the capital share can rise, etc.

PR: Are you saying that notwithstanding your rhetorical strategy to communicate with neoclassical economists on a ground where they feel comfortable, in your views it is not just that you reject marginal productivity explanations of income for those at the very top but more generally as well?

Piketty: Yes, I think bargaining power is very important for the determination of the relative shares of capital and labor in national income. It is perfectly clear to me that the decline of labor unions, globalization, and the possibility of international investors to put different countries in competition with one another–not only different groups of workers, but even different countries–have contributed to the rise in the capital share.
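Piketty's remark that "an elasticity of substitution slightly above 1 will be enough" can be checked numerically with a standard CES production function. A minimal sketch (the parameter values are illustrative assumptions, not taken from Piketty's book or the interview):

```python
def capital_share(k, sigma=1.5, alpha=0.3):
    """Capital share under marginal-product pricing with CES production.

    F(K, L) = (alpha*K^rho + (1-alpha)*L^rho)^(1/rho), rho = 1 - 1/sigma,
    so with k = K/L the capital share is alpha*k^rho / (alpha*k^rho + 1 - alpha).
    sigma and alpha here are made-up illustrative parameters.
    """
    rho = 1.0 - 1.0 / sigma
    return alpha * k**rho / (alpha * k**rho + 1.0 - alpha)

# With sigma slightly above 1, capital deepening (rising K/L) raises the
# capital share -- the 'small departure' Piketty points to:
shares_up = [capital_share(k, sigma=1.1) for k in (1, 2, 4, 8)]
assert all(a < b for a, b in zip(shares_up, shares_up[1:]))

# With sigma below 1 the share falls instead, so the sign of sigma - 1
# is doing all the work:
shares_down = [capital_share(k, sigma=0.9) for k in (1, 2, 4, 8)]
assert all(a > b for a, b in zip(shares_down, shares_down[1:]))
```

The point of the sketch is only arithmetic: even inside the standard model, a capital share rising with the capital/output ratio needs nothing more exotic than sigma marginally above one.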
