Paul Krugman — a case of dangerous neglect of methodological reflection
29 Jun, 2014 at 18:31 | Posted in Theory of Science & Methodology | 9 Comments

Alex Rosenberg — chair of the philosophy department at Duke University, renowned economic methodologist and author of Economics — Mathematical Politics or Science of Diminishing Returns? — had an interesting article on What's Wrong with Paul Krugman's Philosophy of Economics in 3:AM Magazine the other day. Writes Rosenberg:
Krugman writes: ‘So how do you do useful economics? In general, what we really do is combine maximization-and-equilibrium as a first cut with a variety of ad hoc modifications reflecting what seem to be empirical regularities about how both individual behavior and markets depart from this idealized case.’
But if you ask the New Classical economists, they’ll say, this is exactly what we do—combine maximizing-and-equilibrium with empirical regularities. And they’d go on to say it’s because Krugman’s Keynesian models don’t do this or don’t do enough of it, they are not “useful” for prediction or explanation.
When he accepts maximizing and equilibrium as the (only?) way useful economics is done Krugman makes a concession so great it threatens to undercut the rest of his arguments against New Classical economics:
‘Specifically: we have a body of economic theory built around the assumptions of perfectly rational behavior and perfectly functioning markets. Any economist with a grain of sense — which is to say, maybe half the profession? — knows that this is very much an abstraction, to be modified whenever the evidence suggests that it’s going wrong. But nobody has come up with general rules for making such modifications.’
The trouble is that the macroeconomic evidence can’t tell us when and where maximization-and-equilibrium goes wrong, and there seems no immediate prospect for improving the assumptions of perfect rationality and perfect markets from behavioral economics, neuroeconomics, experimental economics, evolutionary economics, game theory, etc.
But these concessions are all the New Classical economists need to defend themselves against Krugman. After all, he seems to admit there is no alternative to maximization and equilibrium …
One thing that’s missing from Krugman’s treatment of economics is the explicit recognition of what Keynes and before him Frank Knight, emphasized: the persistent presence of enormous uncertainty in the economy … Why is uncertainty so important? Because the more of it there is in the economy the less scope for successful maximizing and the more unstable are the equilibria the economy exhibits, if it exhibits any at all …
There is a second feature of the economy that Krugman’s useful economics needs to reckon with, one that Keynes and after him George Soros, emphasized. Along with uncertainty, the economy exhibits pervasive reflexivity: expectations about the economic future tend to actually shift that future …
When combined, uncertainty and reflexivity together greatly limit the power of maximizing and equilibrium to do predictively useful economics. Reflexive relations between future expectations and outcomes are constantly breaking down, at times and in ways about which there is complete uncertainty.
I think Rosenberg is on to something important here regarding Krugman’s neglect of methodological reflection.
When Krugman earlier this year responded to my critique of IS-LM, this hardly came as a surprise. As Rosenberg notes, Krugman works with a very simple modelling dichotomy — either models are complex or they are simple. For years now, self-proclaimed "proud neoclassicist" Paul Krugman has in endless harpings on the same old IS-LM string told us about the splendour of the Hicksian invention — so, of course, to Krugman simpler models are always preferred.
In an earlier post on his blog, Krugman has argued that "Keynesian" macroeconomics more than anything else "made economics the model-oriented field it has become." In Krugman's eyes, Keynes was a "pretty klutzy modeler," and it was only thanks to Samuelson's famous 45-degree diagram and Hicks's IS-LM that things got into place. Although admitting that economists have a tendency to use "excessive math" and "equate hard math with quality," he still vehemently defends — and always has — the mathematization of economics:
I’ve seen quite a lot of what economics without math and models looks like — and it’s not good.
Sure, “New Keynesian” economists like Krugman — and their forerunners, “Keynesian” economists like Paul Samuelson and (young) John Hicks — certainly have contributed to making economics more mathematical and “model-oriented.”
But if these math-is-the-message modelers aren't able to show that the mechanisms or causes that they isolate and handle in their mathematically formalized macromodels are stable, in the sense that they do not change when we "export" them to our "target systems," then these mathematical models only hold under ceteris paribus conditions and are consequently of limited value for understanding, explaining or predicting real economic systems.
Science should help us disclose the causal forces at work behind the apparent facts. But models — mathematical, econometric, or what have you — can never be more than a starting point in that endeavour. There is always the possibility that there are other (non-quantifiable) variables – of vital importance, and although perhaps unobservable and non-additive, not necessarily epistemologically inaccessible – that were not considered for the formalized mathematical model.
The kinds of laws and relations that "modern" economics has established are laws and relations about mathematically formalized entities in models that presuppose causal mechanisms being atomistic and additive. When causal mechanisms operate in real-world social target systems they only do it in ever-changing and unstable combinations where the whole is more than a mechanical sum of parts. If economic regularities obtain, they do so (as a rule) only because we engineered them for that purpose. Outside man-made mathematical-statistical "nomological machines" they are rare, or even non-existent. Unfortunately that also makes most of contemporary mainstream neoclassical endeavours of mathematical economic modeling rather useless. And that also goes for Krugman and the rest of the "New Keynesian" family.
When it comes to modeling philosophy, Paul Krugman has in an earlier piece defended his position in the following words (my italics):
I don’t mean that setting up and working out microfounded models is a waste of time. On the contrary, trying to embed your ideas in a microfounded model can be a very useful exercise — not because the microfounded model is right, or even better than an ad hoc model, but because it forces you to think harder about your assumptions, and sometimes leads to clearer thinking. In fact, I’ve had that experience several times.
The argument is hardly convincing. If people put the enormous amount of time and energy that they do into constructing macroeconomic models, those models really have to contribute substantially to our understanding and ability to explain and grasp real macroeconomic processes. If not, they should – after perhaps helping to sharpen our thoughts – be thrown into the waste-paper basket (something the father of macroeconomics, Keynes, used to do), and not, as today, be allowed to overrun our economics journals and give their authors celestial academic prestige.
Krugman's explications on this issue are really interesting also because they shed light on a kind of inconsistency in his art of argumentation. Over the past couple of years Krugman has in more than one article criticized mainstream economics for using too much (bad) mathematics and axiomatics in their model-building endeavours. But when it comes to defending his own position on various issues, he usually himself ultimately falls back on the same kind of models. In his End This Depression Now — just to take one example — Paul Krugman maintains that although he doesn't buy "the assumptions about rationality and markets that are embodied in many modern theoretical models, my own included," he still finds them useful "as a way of thinking through some issues carefully."
When it comes to methodology and assumptions, Krugman obviously has a lot in common with the kind of model-building he otherwise criticizes.
The same critique – that when it comes to defending his own position on various issues he usually himself ultimately falls back on the same kind of models that he otherwise criticizes – can be directed against his new post. Krugman has said these things before, but I am still waiting for him to really explain HOW the silly assumptions behind IS-LM help him work with the fundamental issues. If one can only use those assumptions with — as Krugman says — "tongue in cheek," well, why then use them at all? Wouldn't it be better to use more adequately realistic assumptions and be able to talk plainly, without any tongue in cheek?
I have noticed again and again, that on most macroeconomic policy issues I find myself in agreement with Krugman. To me that just shows that Krugman is right in spite of and not thanks to those neoclassical models — IS-LM included — he ultimately refers to. When he is discussing austerity measures, Ricardian equivalence or problems with the euro, he is actually not using those models, but rather (even) simpler and more adequate and relevant thought-constructions much more in the vein of Keynes.
The final court of appeal for macroeconomic models is the real world, and as long as no convincing justification is put forward for how the inferential bridging de facto is made, macroeconomic model building is little more than "hand waving" that gives us rather little warrant for making inductive inferences from models to real-world target systems. If substantive questions about the real world are being posed, it is the formalistic-mathematical representations utilized to analyze them that have to match reality, not the other way around. As Keynes has it:
Economics is a science of thinking in terms of models joined to the art of choosing models which are relevant to the contemporary world. It is compelled to be this, because, unlike the natural science, the material to which it is applied is, in too many respects, not homogeneous through time.
If macroeconomic models – no matter of what ilk – make assumptions, and we know that real people and markets cannot be expected to obey these assumptions, the warrants for supposing that conclusions or hypotheses about causally relevant mechanisms or regularities can be bridged are obviously non-justifiable. Macroeconomic theorists – regardless of being New Monetarist, New Classical or "New Keynesian" – ought to do some ontological reflection and heed Keynes' warnings on using thought-models in economics:
The object of our analysis is, not to provide a machine, or method of blind manipulation, which will furnish an infallible answer, but to provide ourselves with an organized and orderly method of thinking out particular problems; and, after we have reached a provisional conclusion by isolating the complicating factors one by one, we then have to go back on ourselves and allow, as well as we can, for the probable interactions of the factors amongst themselves. This is the nature of economic thinking. Any other way of applying our formal principles of thought (without which, however, we shall be lost in the wood) will lead us into error.
So let me — respectfully — summarize: A gadget is just a gadget — and brilliantly silly simple models — IS-LM included — do not help us work with the fundamental issues of modern economies any more than brilliantly silly complicated models — calibrated DSGE and RBC models included. And as Rosenberg rightly notes:
When he accepts maximizing and equilibrium as the (only?) way useful economics is done Krugman makes a concession so great it threatens to undercut the rest of his arguments against New Classical economics.
Phil Mirowski on neoliberalism
27 Jun, 2014 at 08:23 | Posted in Economics, Politics & Society | 2 Comments
(h/t Jan Milch)
Bob Rowthorn questions two of Piketty’s central assumptions
26 Jun, 2014 at 17:38 | Posted in Economics | 5 Comments

Piketty uses the terms "capital" and "wealth" interchangeably to denote the total monetary value of shares, housing and other assets. "Income" is measured in money terms. We shall reserve the term "capital" for the totality of productive assets evaluated at constant prices. The term "output" is used to denote the totality of net output (value-added) measured at constant prices. Piketty uses the symbol β to denote the ratio of "wealth" to "income" and he denotes the share of wealth-owners in total income by α. In his theoretical analysis this share is equated to the share of profits in total output. Piketty documents how α and β have both risen by a considerable amount in recent decades. He argues that this is not mere correlation, but reflects a causal link. It is the rise in β which is responsible for the rise in α. To reach this conclusion, he first assumes that β is equal to the capital-output ratio K/Y, as conventionally understood. From his empirical finding that β has risen, he concludes that K/Y has also risen by a similar amount. According to the neoclassical theory of factor shares, an increase in K/Y will only lead to an increase in α when the elasticity of substitution between capital and labour σ is greater than unity. Piketty asserts that this is the case. Indeed, based on movements in α and β, he estimates that σ is between 1.3 and 1.6 (page 221).
Thus, Piketty’s argument rests on two crucial assumptions: β = K/Y and σ > 1. Once these assumptions are granted, the neoclassical theory of factor shares ensures that an increase in β will lead to an increase in α. In fact, neither of these assumptions is supported by the empirical evidence which is surveyed briefly in the appendix. This evidence implies that the large observed rise in β in recent decades is not the result of a big rise in K/Y but is primarily a valuation effect …
Piketty argues that the higher income share of wealth-owners is due to an increase in the capital-output ratio resulting from a high rate of capital accumulation. The evidence suggests just the contrary. The capital-output ratio, as conventionally measured has either fallen or been constant in recent decades. The apparent increase in the capital-output ratio identified by Piketty is a valuation effect reflecting a disproportionate increase in the market value of certain real assets. A more plausible explanation for the increased income share of wealth-owners is an unduly low rate of investment in real capital.
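The role the elasticity of substitution plays in this argument is easy to make concrete. Under a CES production function with competitive factor pricing (a textbook assumption, with illustrative parameter values of my own rather than Rowthorn's or Piketty's calculations), the capital share is α = a·(K/Y)^(1−1/σ), so a higher capital-output ratio raises α only when σ > 1:

```python
# Minimal sketch (my own illustration, not Rowthorn's or Piketty's code).
# Under CES production with competitive factor pricing the capital share is
#   alpha = a * (K/Y)**rho,  with rho = 1 - 1/sigma,
# so a rising capital-output ratio raises alpha only if sigma > 1.

def capital_share(k_over_y, sigma, a=0.3):
    """Capital share implied by CES production and marginal-product pricing."""
    rho = 1.0 - 1.0 / sigma            # CES substitution parameter
    return a * k_over_y ** rho

for sigma in (0.8, 1.0, 1.5):          # sigma = 1 is the Cobb-Douglas border case
    low, high = capital_share(3.0, sigma), capital_share(6.0, sigma)
    print(f"sigma = {sigma}: alpha at K/Y = 3 -> {low:.3f}, at K/Y = 6 -> {high:.3f}")
```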
It seems to me that Rowthorn is closing in on the nodal point in Piketty’s picture of the long-term trends in income distribution in advanced economies. As I wrote the other day:
Being able to show that you can get the Piketty results using one or another of the available standard neoclassical growth models is of course — from a realist point of view — of limited value. As usual — the really interesting thing is how in accord with reality are the assumptions you make and the numerical values you put into the model specification.
(h/t Erik Hegelund)
The Spirit of Tallis (feat. Anna Sandström)
23 Jun, 2014 at 20:08 | Posted in Varia | Comments Off on The Spirit of Tallis (feat. Anna Sandström)
The Real Damage Done
23 Jun, 2014 at 19:35 | Posted in Economics | 1 Comment
The number of Britons living in poverty has more than doubled over the past 30 years, a report has revealed. About one in three UK families now live below the breadline, despite the British economy doubling in size over the same period.
The UK’s largest ever study into poverty in the country has urged the government to take measures to tackle growing levels of poverty. The Poverty and Social Exclusion in the United Kingdom (PSE) project, led by the University of Bristol, has revealed the wealth gap between the haves and the have-nots in the UK is widening still further.
Almost 18 million people are unable to afford adequate housing, while one in three do not have the money to heat their homes in winter.
“The Coalition government aimed to eradicate poverty by tackling the causes of poverty,” said Professor David Gordon, from the Townsend Centre for International Poverty Research at the University of Bristol. “Their strategy has clearly failed. The available high quality scientific evidence shows that poverty and deprivation have increased since 2010, the poor are suffering from deeper poverty and the gap between the rich and poor is widening.”
Perhaps the most interesting revelation unearthed by the report is the fact that most of the people living in poverty are employed, dispelling the myth propagated by ministers that poverty is a consequence of lack of work, the PSE report says.
It found that the majority of children living below the breadline live in small families with at least one employed parent.
In households that suffer from food deprivation, the study found that parents often sacrifice their own wellbeing for that of their children. In 93 percent of cases at least one parent skimped on meals “often” to make sure others had enough to eat.
According to the study, women were more likely to make sacrifices, cutting back on clothes and social visits.
“The research has shown that in many households parents sacrifice their own welfare – going without adequate food, clothing or a social life – in order to try to protect their children from poverty and deprivation,” Professor Jonathan Bradshaw, of the University of York, said in the PSE’s statement.
The report calls on the government to address the growing number of poor in the UK.
Wage rigidity and the importance of checking your model assumptions
23 Jun, 2014 at 10:20 | Posted in Economics | Comments Off on Wage rigidity and the importance of checking your model assumptions
The task of this book is to explain wage rigidities … It seems reasonable to hope that a successful explanation of wage rigidity would contribute to understanding the extent of the welfare loss associated with unemployment and what can be done to reduce it … Many theories of wage rigidity and unemployment include partial answers to these questions as part of their assumptions, so that the phenomena of real interest … are described in the theories’ assumptions. For instance, Lucas concludes that increased unemployment during recessions implies little welfare loss … Lucas’s policy conclusions are not strongly supported … Good support can come only from information that distinguishes his microeconomic assumptions from others yielding different policy recommendations.
A fanciful example may illustrate the danger of taking too narrow a view of instrumentalism. You are an explorer seeking contact with the Dafs, an isolated tribe about which almost nothing is known. You observe one of their villages through binoculars from far away …
You observe that every morning on sunny days, men wearing bright yellow hats stand in the backyards and make sweeping gestures toward the sky … When you finally arrange a meeting with some Dafs, you meet a few men with yellow hats and a few other plainer people. Believing the first to be leaders, you offer them presents, at which point all the Dafs are outraged and assault you. What you have not observed is that yellow hats mark slaves, who throw grain to the household chickens in the yard on sunny days and inside on rainy ones … It would have been worth your while not to settle for treating arm waving and wearing yellow hats as the phenomena to be explained, but to test your assumptions about behavior by taking risks to sneak up closer and see precisely what the men were up to.
Did Keynes accept the IS-LM model?
22 Jun, 2014 at 19:13 | Posted in Economics | 1 Comment

Lord Keynes has some interesting references and links for those wanting to dwell upon the question of whether Keynes really "accepted" Hicks's IS-LM model.
My own view is that IS-LM doesn’t adequately reflect the width and depth of Keynes’s insights on the workings of modern market economies:
1 Almost nothing in the post-General Theory writings of Keynes suggests that he considered Hicks's IS-LM anywhere near a faithful rendering of his thought. In Keynes's canonical statement of the essence of his theory — in the famous 1937 Quarterly Journal of Economics article — there is nothing to even suggest that Keynes would have thought of the existence of a Keynes-Hicks-IS-LM theory as anything but pure nonsense. John Hicks, the man who invented IS-LM in his 1937 Econometrica review of Keynes's General Theory — "Mr. Keynes and the 'Classics'. A Suggested Interpretation" — returned to it in an article in 1980 — "IS-LM: an explanation" — in the Journal of Post Keynesian Economics. Self-critically he wrote that "the only way in which IS-LM analysis usefully survives — as anything more than a classroom gadget, to be superseded, later on, by something better — is in application to a particular kind of causal analysis, where the use of equilibrium methods, even a drastic use of equilibrium methods, is not inappropriate." What Hicks acknowledges in 1980 is basically that his original IS-LM model ignored significant parts of Keynes's theory. IS-LM is inherently a temporary general equilibrium model. However, much of the discussion we have in macroeconomics is about timing and the speed of relative adjustments of quantities, commodity prices and wages — on which IS-LM doesn't have much to say.
2 IS-LM to a large extent forces the analysis into a static comparative-equilibrium setting that doesn't in any substantial way reflect the processual nature of what takes place in historical time. To me Keynes's analysis is in fact inherently dynamic — at least in the sense that it was based on real historical time and not the logical-ergodic-non-entropic time concept used in most neoclassical model building. And as Niels Bohr used to say — thinking is not the same as just being logical …
3 IS-LM reduces the interaction between real and nominal entities to a rather constrained interest-rate mechanism, which is far too simplistic for analyzing complex financialised modern market economies (the minimal sketch after this list shows what the gadget amounts to in its barest textbook form).
4 IS-LM gives no place for real money, but rather trivializes the role that money and finance play in modern market economies. As Hicks, commenting on his IS-LM construct, had it in 1980 — “one did not have to bother about the market for loanable funds.” From the perspective of modern monetary theory, it’s obvious that IS-LM to a large extent ignores the fact that money in modern market economies is created in the process of financing — and not as IS-LM depicts it, something that central banks determine.
5 IS-LM is typically set in a current values numéraire framework that definitely downgrades the importance of expectations and uncertainty — and a fortiori gives too large a role to interest rates as ruling the roost when it comes to investments and liquidity preferences. In this regard it is actually as bad as all the modern microfounded Neo-Walrasian-New-Keynesian models where Keynesian genuine uncertainty and expectations aren't really modelled. Especially the two-dimensionality of Keynesian uncertainty — both a question of probability and "confidence" — has been impossible to incorporate into this framework, which basically presupposes people following the dictates of expected utility theory (high probability may mean nothing if the agent has low "confidence" in it). Reducing uncertainty to risk — implicit in most analyses building on IS-LM models — is nothing but hand waving. According to Keynes we live in a world permeated by unmeasurable uncertainty — not quantifiable stochastic risk — which often forces us to make decisions based on anything but "rational expectations." Keynes rather thinks that we base our expectations on the "confidence" or "weight" we put on different events and alternatives. To Keynes expectations are a question of weighing probabilities by "degrees of belief," beliefs that often have preciously little to do with the kind of stochastic probabilistic calculations made by the rational agents as modeled by "modern" social sciences. And often we "simply do not know."
6 IS-LM not only ignores genuine uncertainty, but also the essentially complex and cyclical character of economies and investment activities, speculation, endogenous money, labour market conditions, and the importance of income distribution. And as Axel Leijonhufvud so eloquently notes on IS-LM economics — “one doesn’t find many inklings of the adaptive dynamics behind the explicit statics.” Most of the insights on dynamic coordination problems that made Keynes write General Theory are lost in the translation into the IS-LM framework.
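To see what the "classroom gadget" amounts to in its barest textbook form, here is a minimal sketch (my own illustration with assumed linear forms and parameter values, not Hicks's own equations). It solves the two linear equilibrium conditions for output and the interest rate and reproduces the standard comparative statics, which is about all the apparatus delivers:

```python
# Minimal textbook IS-LM sketch: my own illustration of the "classroom gadget",
# with assumed linear forms and parameter values (not Hicks's own equations).
import numpy as np

# IS:  Y = c0 + c1*(Y - T) + (i0 - i1*r) + G     (goods-market equilibrium)
# LM:  M/P = l1*Y - l2*r                         (money-market equilibrium)
c0, c1, i0, i1 = 20.0, 0.6, 30.0, 40.0
l1, l2 = 0.5, 60.0
T, P = 40.0, 1.0

def solve_islm(G, M):
    """Solve the two linear equations for output Y and the interest rate r."""
    A = np.array([[1 - c1, i1],
                  [l1, -l2]])
    b = np.array([c0 - c1 * T + i0 + G, M / P])
    Y, r = np.linalg.solve(A, b)
    return round(Y, 1), round(r, 3)

print(solve_islm(G=50.0, M=80.0))   # baseline
print(solve_islm(G=60.0, M=80.0))   # fiscal expansion: Y up, r up
print(solve_islm(G=50.0, M=90.0))   # monetary expansion: Y up, r down
```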
Given this, it’s difficult to see how and why Keynes in earnest should have “accepted” Hicks’s construct.
If you only have time to read one statistics book — this is the one!
21 Jun, 2014 at 12:19 | Posted in Statistics & Econometrics | Comments Off on If you only have time to read one statistics book — this is the one!

Mathematical statistician David A. Freedman's Statistical Models and Causal Inference (Cambridge University Press, 2010) is a marvellous book. It ought to be mandatory reading for every serious social scientist – including economists and econometricians – who doesn't want to succumb to ad hoc assumptions and unsupported statistical conclusions!
How do we calibrate the uncertainty introduced by data collection? Nowadays, this question has become quite salient, and it is routinely answered using well-known methods of statistical inference, with standard errors, t-tests, and P-values … These conventional answers, however, turn out to depend critically on certain rather restrictive assumptions, for instance, random sampling …
Thus, investigators who use conventional statistical technique turn out to be making, explicitly or implicitly, quite restrictive behavioral assumptions about their data collection process … More typically, perhaps, the data in hand are simply the data most readily available …
The moment that conventional statistical inferences are made from convenience samples, substantive assumptions are made about how the social world operates … When applied to convenience samples, the random sampling assumption is not a mere technicality or a minor revision on the periphery; the assumption becomes an integral part of the theory …
In particular, regression and its elaborations … are now standard tools of the trade. Although rarely discussed, statistical assumptions have major impacts on analytic results obtained by such methods.
Consider the usual textbook exposition of least squares regression. We have n observational units, indexed by i = 1, . . . , n. There is a response variable y_i, conceptualized as μ_i + ε_i, where μ_i is the theoretical mean of y_i while the disturbances or errors ε_i represent the impact of random variation (sometimes of omitted variables). The errors are assumed to be drawn independently from a common (Gaussian) distribution with mean 0 and finite variance. Generally, the error distribution is not empirically identifiable outside the model; so it cannot be studied directly—even in principle—without the model. The error distribution is an imaginary population and the errors ε_i are treated as if they were a random sample from this imaginary population—a research strategy whose frailty was discussed earlier.
Usually, explanatory variables are introduced and μ_i is hypothesized to be a linear combination of such variables. The assumptions about the μ_i and ε_i are seldom justified or even made explicit—although minor correlations in the ε_i can create major bias in estimated standard errors for coefficients …
Why do μ_i and ε_i behave as assumed? To answer this question, investigators would have to consider, much more closely than is commonly done, the connection between social processes and statistical assumptions …
We have tried to demonstrate that statistical inference with convenience samples is a risky business. While there are better and worse ways to proceed with the data at hand, real progress depends on deeper understanding of the data-generation mechanism. In practice, statistical issues and substantive issues overlap. No amount of statistical maneuvering will get very far without some understanding of how the data were produced.
More generally, we are highly suspicious of efforts to develop empirical generalizations from any single dataset. Rather than ask what would happen in principle if the study were repeated, it makes sense to actually repeat the study. Indeed, it is probably impossible to predict the changes attendant on replication without doing replications. Similarly, it may be impossible to predict changes resulting from interventions without actually intervening.
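Freedman's point about statistical assumptions driving analytic results is easy to demonstrate by simulation. The sketch below (my own illustration, not taken from the book) fits an ordinary least squares regression on data whose errors are mildly autocorrelated instead of independent draws, and compares the textbook standard error with the actual sampling spread of the slope estimate:

```python
# My own simulation sketch (not from the book): OLS with mildly autocorrelated
# errors and a persistent regressor. The textbook standard error, which assumes
# iid errors, substantially understates the true variability of the slope.
import numpy as np

rng = np.random.default_rng(0)

def ar1(n, phi, rng):
    """An AR(1) series with standard-normal innovations."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.standard_normal()
    return x

n, reps = 200, 2000
slopes, nominal_se = [], []
for _ in range(reps):
    x = ar1(n, 0.9, rng)                 # persistent regressor
    e = ar1(n, 0.5, rng)                 # mildly autocorrelated errors, not iid
    y = 1.0 + 0.0 * x + e                # the true slope is zero
    X = np.column_stack([np.ones(n), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (n - 2)
    cov = s2 * np.linalg.inv(X.T @ X)    # textbook formula assumes iid errors
    slopes.append(beta[1])
    nominal_se.append(np.sqrt(cov[1, 1]))

print("average textbook SE of slope:", round(float(np.mean(nominal_se)), 3))
print("actual spread of slope estimates:", round(float(np.std(slopes)), 3))
```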
One woman — seventeen British accents
21 Jun, 2014 at 11:57 | Posted in Varia | 1 Comment
Impressive!
Piketty’s Second Fundamental Law of Capitalism
20 Jun, 2014 at 10:43 | Posted in Economics | 4 Comments
When Piketty introduces this law in the text of Capital in the Twenty-First Century, he abbreviates it in the form β = s/g for the benefit of the general reader. But he then follows up that introductory statement with several pages of explanatory text and warnings to the reader that make it clear that his intent is only to describe the value toward which β would converge in the long run, if s and g remain relatively constant. Unfortunately, the abbreviated statement of the law has thrown many people off. Economists are in the habit of using equilibrium methods to turn long-term asymptotic limits of processes that evolve over time into unchanging constraints on some idealized steady-state equilibrium. So rather than use the inherently dynamic accounts of wealth, income and the capital-to-income ratio that are partially explained above, and that Piketty uses to study the dynamical evolution of the structure of economic inequality as it evolves over time, some commentators have instead gone right to a steady-state model in which β is set constantly equal to s/g, and from which they then fallaciously deduce immediate (and absurd) changes in β from the fluctuations in s and g. This is quite far from Piketty’s intent and misrepresents his framework.
Since part of the point of the Second Fundamental law and its surrounding framework is to analyze what happens as the capital-to-income ratio increases over time, not what happens in the less usual conditions when it is stable, the focus on steady-state conditions seems to me misguided. For example, if a society begins in 2014 with a capital-to-income ratio of 4, a savings rate of 10% and a growth rate of 1.5%, then even if the savings rate and growth rate remained constant indefinitely, it would take a very long time for β to get near its limiting value of s/g = 6.7. Specifically, β would not get to within 10% of s/g until the year 2115; it would not get to within 5% of s/g until the year 2159; and it would not get to within 1% of s/g until the year 2159. The distance between the present value of β and the present value of s/g is a measure of the present strength of the “force” pushing the capital-to-income ratio higher. Since some of the forces generating inequality depend on the rapidity with which β is increasing, it is the strength of this force and the direction in which β is moving that are important. Piketty’s Fundamental Law should not be interpreted as the statement that β is approximately equal to s/g. Under ordinary conditions β can be quite far from s/g. That’s part of the point.
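The convergence dynamics described above can be checked with a few lines of arithmetic. The sketch below assumes the standard accumulation recursion β(t+1) = (β(t) + s)/(1 + g), whose fixed point is s/g; the exact years it prints depend on which variant of the law of motion one assumes, but the slowness of convergence (decades, not years) is the point:

```python
# Minimal sketch, assuming the accumulation recursion
#   beta_{t+1} = (beta_t + s) / (1 + g),
# whose fixed point is beta* = s/g. The exact years depend on which variant
# of the law of motion one assumes; the slowness of convergence is the point.

s, g = 0.10, 0.015
beta, year = 4.0, 2014
target = s / g                                   # limiting ratio, about 6.67

thresholds = {0.10: None, 0.05: None, 0.01: None}
while any(hit is None for hit in thresholds.values()):
    beta = (beta + s) / (1 + g)
    year += 1
    for tol, hit in thresholds.items():
        if hit is None and abs(beta - target) <= tol * target:
            thresholds[tol] = year

print(f"s/g = {target:.2f}")
for tol, hit in thresholds.items():
    print(f"beta within {tol:.0%} of s/g by {hit}")
```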
Skidelsky on the need for an economics curriculum reform
19 Jun, 2014 at 10:03 | Posted in Economics | 2 Comments

In a manifesto published in April, economics students at the University of Manchester advocated an approach “that begins with economic phenomena and then gives students a toolkit to evaluate how well different perspectives can explain it,” rather than with mathematical models based on unreal assumptions.
The Manchester students argue that “the mainstream within the discipline (neoclassical theory) has excluded all dissenting opinion, and the crisis is arguably the ultimate price of this exclusion” …
Today’s “post-crash” students are right. So what is keeping the mainstream’s intellectual apparatus going?
For starters, economics teaching and research is deeply embedded in an institutional structure that, as with any ideological movement, rewards orthodoxy and penalizes heresy. The great classics of economics, from Smith to Ricardo to Veblen, go untaught. Research funding is allocated on the basis of publication in academic journals that espouse the neoclassical perspective. Publication in such journals is also the basis of promotion …
For now, the best that curriculum reform can do is to remind students that economics is not a science like physics, and that it has a much richer history than is to be found in the standard textbooks …
Indeed, mainstream economics is a pitifully thin distillation of historical wisdom on the topics that it addresses. It should be applied to whatever practical problems it can solve; but its tools and assumptions should always be in creative tension with other beliefs concerning human wellbeing and flourishing. What students are taught today certainly does not deserve its imperial status in social thought.
Ricardian equivalence — total horseshit
18 Jun, 2014 at 08:32 | Posted in Economics | 13 Comments

Ricardian equivalence basically means that financing government expenditures through taxes or debts is equivalent, since debt financing must be repaid with interest, and agents — equipped with rational expectations — would only increase savings in order to be able to pay the higher taxes in the future, thus leaving total expenditures unchanged.
Why?
In the standard neoclassical consumption model — used in DSGE macroeconomic modeling — people are basically portrayed as treating time as a dichotomous phenomenon – today and the future — when contemplating making decisions and acting. How much should one consume today and how much in the future? Facing an intertemporal budget constraint of the form
c_t + c_f/(1+r) = f_t + y_t + y_f/(1+r),

where c_t is consumption today, c_f is consumption in the future, f_t is holdings of financial assets today, y_t is labour income today, y_f is labour income in the future, and r is the real interest rate, and having a lifetime utility function of the form

U = u(c_t) + a·u(c_f),

where a is the time-discounting parameter, the representative agent (consumer) maximizes his utility when

u'(c_t) = a(1+r)u'(c_f).

This expression – the Euler equation – implies that the representative agent (consumer) is indifferent between consuming one more unit today or instead consuming it tomorrow. Typically using a logarithmic functional form – u(c) = log c – which gives u'(c) = 1/c, the Euler equation can be rewritten as

1/c_t = a(1+r)(1/c_f),

or

c_f/c_t = a(1+r).

This importantly implies that, according to the neoclassical consumption model, changes in the (real) interest rate and consumption move in the same direction. And it also follows that consumption is invariant to the timing of taxes, since wealth — f_t + y_t + y_f/(1+r) — has to be interpreted as present discounted value net of taxes. And so, according to the assumption of Ricardian equivalence, the timing of taxes does not affect consumption, simply because the maximization problem as specified in the model is unchanged.
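The invariance claim can be verified directly from the model. With log utility the two-period problem has the closed-form solution c_t = W/(1 + a), where W is lifetime wealth net of taxes in present value, so shifting a given tax burden between today and the future leaves consumption unchanged. A minimal sketch with assumed parameter values:

```python
# Minimal sketch with assumed numbers. With log utility the two-period problem
# has the closed-form solution c_t = W / (1 + a), where W is lifetime wealth
# net of taxes in present value, so only the present value of taxes matters.

a, r = 0.96, 0.03                      # discount factor and real interest rate
f_t, y_t, y_f = 10.0, 100.0, 100.0     # assets and labour incomes (assumed values)

def consumption_today(tax_today, tax_future):
    """Optimal current consumption for a given timing of the tax burden."""
    wealth = f_t + (y_t - tax_today) + (y_f - tax_future) / (1 + r)
    return round(wealth / (1 + a), 4)

# Same present value of taxes, different timing -> identical consumption:
print(consumption_today(tax_today=20.0, tax_future=0.0))
print(consumption_today(tax_today=0.0, tax_future=20.0 * (1 + r)))
```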
That the theory doesn’t fit the facts we already knew.
And yesterday, on VoxEU, Jonathan A. Parker summarized a series of studies empirically testing the theory, reconfirming how out of line with reality Ricardian equivalence is.
This only, again, underlines that there is, of course, no reason for us to believe in that fairy-tale. Ricardo himself — mirabile dictu — didn’t believe in Ricardian equivalence. In Essay on the Funding System (1820) he wrote:
But the people who paid the taxes never so estimate them, and therefore do not manage their private affairs accordingly. We are too apt to think that the war is burdensome only in proportion to what we are at the moment called to pay for it in taxes, without reflecting on the probable duration of such taxes. It would be difficult to convince a man possessed of £20,000, or any other sum, that a perpetual payment of £50 per annum was equally burdensome with a single tax of £1000.
And as one Nobel laureate had it:
Ricardian equivalence is taught in every graduate school in the country. It is also sheer nonsense.
Joseph E. Stiglitz, Twitter
The Grossman-Stiglitz paradox
17 Jun, 2014 at 13:27 | Posted in Economics | 3 Comments

In general the price system does not reveal all the information about “the true value” of the risky asset …
The only way informed traders can earn a return on their activity of information gathering, is if they can use their information to take positions in the market which are “better” than the positions of uninformed traders. “Efficient Markets” theorists have claimed that “at any time prices fully reflect all available information” … If this were so then informed traders could not earn a return on their information.
When the efficient markets hypothesis is true and information is costly, competitive markets break down … As soon as the assumptions of the conventional perfect capital markets model are modified to allow even a slight amount of information imperfection and a slight cost of information, the traditional theory becomes untenable. There cannot be as many securities as states of nature. For if there were, competitive equilibrium would not exist …
Because information is costly, prices cannot perfectly reflect the information which is available, since if it did, those who spent resources to obtain it would receive no compensation. There is a fundamental conflict between the efficiency with which markets spread information and the incentives to acquire information.
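The incentive problem at the heart of the paradox can be put in numbers. In the stylized sketch below (my own illustration with assumed payoffs, not the model of the paper itself), an asset is worth 1 or 0 with equal probability, and an informed trader can learn the value at cost c and trade one unit at the market price:

```python
# Stylized sketch with assumed payoffs (my own illustration, not the model in
# the paper): the asset is worth 1 or 0 with equal probability; an informed
# trader can learn the value at cost c and trade one unit at the market price.

def informed_net_profit(price_given_value, c):
    """Expected trading profit of the informed trader, minus the information cost."""
    buy_when_high = 1.0 - price_given_value(1)   # buy at p, asset worth 1
    sell_when_low = price_given_value(0) - 0.0   # sell at p, asset worth 0
    return 0.5 * buy_when_high + 0.5 * sell_when_low - c

c = 0.1
fully_revealing = lambda v: float(v)   # price already equals the true value
uninformative = lambda v: 0.5          # price reflects no private information

print(informed_net_profit(fully_revealing, c))   # -0.1: gathering information never pays
print(informed_net_profit(uninformative, c))     #  0.4: it pays, so prices cannot stay
                                                 #  uninformative either -- the paradox
```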
Here is my own take on the paradox (only in Swedish, sorry).
IMF warns of Swedish housing bubble
17 Jun, 2014 at 09:19 | Posted in Economics | 3 Comments

A new IMF report warns of rising financial instability and unsustainable household indebtedness in Sweden:
Financial instability is an increasing concern. House price increases have picked up again, exceeding 7½ percent annual growth for single-family homes and 12½ percent for tenant-owned apartments in April. Household credit growth also remained strong, pushing household indebtedness to almost 175 percent of disposable income in 2013, and over 190 percent if debt from tenant-owned housing associations is included. Recent data indicate that household debt ratios are high across all income groups, but particularly so for indebted lower-income households who are especially vulnerable to income, interest rate, and house price shocks. The aggregate net asset position of households is solid, but a large share of assets is illiquid and has limited value as a buffer. As a consequence, a large and sudden drop in house prices would lower consumption, employment, and growth, and ultimately impact the banking system.
The increase in house loans – and house prices – in Sweden has for many years been among the steepest in the world.
Sweden’s house price boom started in the mid-1990s, and looking at the development of real house prices since 1986, there are reasons to be deeply worried:
Source: Statistics Sweden and own calculations
The indebtedness of the Swedish household sector has also risen to alarmingly high levels, as can be seen in the figure below (based on new data published earlier this month by Statistics Sweden, showing the development of household debts/disposable income 1990 – 2012):
Source: Statistics Sweden and own calculations
Yours truly has been trying to argue – for two years now – with “very serious people” that it’s really high time to “take away the punch bowl.” Mostly I have felt like the voice of one calling in the desert, and up until now neither the Swedish central bank, nor the government, has been willing to listen. Comparing the above figures with the one below (source) could perhaps give some refreshing perspective …
Where do housing bubbles come from? There are of course many different explanations, but one of the fundamental mechanisms at work is that people expect house prices to increase, which makes people willing to keep on buying houses at steadily increasing prices. It’s this kind of self-generating cumulative process à la Wicksell-Myrdal that is the core of the housing bubble. Unlike the usual commodities markets, where demand curves usually point downwards, on asset markets they often point upwards, and therefore give rise to this kind of instability. And the greater the leverage, the greater the increase in prices.
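A minimal sketch of that cumulative mechanism (my own illustration, with assumed parameter values): buyers extrapolate recent price growth into their expectations, expected appreciation feeds demand, and higher leverage lets the same expectation move prices more:

```python
# Minimal sketch of a self-reinforcing price process with extrapolative
# expectations; all parameter values are assumed for illustration only.

def price_path(leverage, years=15, sensitivity=0.4):
    """House prices when demand responds to extrapolated past appreciation,
    and leverage scales how strongly expectations move prices."""
    prices = [100.0, 102.0]                            # initial level and a small uptick
    for _ in range(years):
        expected_growth = prices[-1] / prices[-2] - 1  # naive extrapolation of the last rise
        prices.append(prices[-1] * (1 + sensitivity * leverage * expected_growth))
    return prices

for leverage in (1.0, 2.0, 3.0):
    print(f"leverage {leverage}: price after 15 years = {price_path(leverage)[-1]:.0f}")
```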