Lördagsmorgon i P2: balm for the soul

13 Jan, 2015 at 11:30 | Posted in Varia | Comments Off on Lördagsmorgon i P2: balm for the soul

In these times, when the airwaves are being drowned in the self-important verbal sewage of commercial radio and in utterly vacuous Melodifestivalen-style pap, one has almost given up.

But there is light in the darkness! Every Saturday morning, the radio channel P2 airs Lördagsmorgon i P2, a programme of refreshment and serious music.

So take the opportunity to start the day with a musical ear-wash and rinse the ear canals of any lingering musical slag. Here you can listen, for example, to music by Vassilis Tsabropoulos, John Tavener, Gustav Mahler, Arvo Pärt and, not least, Stefan Nilsson. Three hours of such music calms the mind and lets hope return. Thank you, public-service radio!

And thank you, Erik Schüldt. Three hours of wonderful music and a presenter who has something to say, instead of just letting his jaw flap the whole time: what a balm for the soul!

University of Greenwich shows the way!

13 Jan, 2015 at 11:20 | Posted in Economics | Comments Off on University of Greenwich shows the way!

The last seven years have not been easy for the global economy, nor for the teaching of economics. The recent financial crisis and the Great Recession have led many economists, non-economists and students in economics to question the state of the discipline, wondering to what extent it provides the necessary tools to interpret the complex world we live in, signalling a deep dissatisfaction with economists’ ability to provide solutions to real world problems. Employers have recognised that the economics graduates that the standard curriculum generates are not equipped with the skills that the real world requires. Likewise, students themselves have recognised that the tools and theories they learn don’t enable them to make sense of the world they live in, let alone to address and solve real world problems …

The reason the revalidation of the economics programmes at the University of Greenwich is special is that it constitutes one of the first institutional responses to current pressures from students, faculty, employers and policy makers to produce more ‘world-ready’ graduates. In redesigning our economics programmes we – the economics programmes team – have decided to:

– Address socially relevant economic questions in all core economic courses by adopting a historical and pluralistic perspective right from the start and throughout the programme.

– Add two new compulsory courses – Economic History in the first year and History of Economic Thought in the second year, and an optional course Political Economy of International Development and Finance in the third year.

– Integrate the concept of environmental and social sustainability in the teaching of economics in all courses, as well as provide specific courses such as Environmental Economics and Environmental Regulation and Business Ethics and Corporate Social Responsibility.

– Eliminate from the curriculum those topics that tend to be taught by default just because they appear in standard economics textbooks rather than because they are recognised as truly useful in understanding how economies really work.

However, we do not isolate the development of a pluralistic perspective to only a few courses, but rather integrate it in all our courses by approaching real world problems from the perspective of different theories, both old and contemporary, comparing, contrasting, or at times synthesising them. This should help the students to develop a critical perspective towards current economic theories and evolving economic events, and develop an understanding about the limitations of theories and models (for example, what happens out of equilibrium), and think more widely about the historical, institutional and political context of economic behaviour and policies …

Sara Gorgoni

‘New Keynesianism’ — neat, plausible and wrong

11 Jan, 2015 at 17:10 | Posted in Economics | 8 Comments

Maintaining that economics is a science in the “true knowledge” business, yours truly remains a skeptic of the “New Keynesian” pretences and aspirations of people like Paul Krugman, Simon Wren-Lewis and Greg Mankiw.

Keynes basically argued that it was inadmissible to project history onto the future. Consequently, an economic policy cannot presuppose that what has worked before will continue to do so in the future. That macroeconomic models could get hold of correlations between different “variables” was not enough. If they could not get at the causal structure that generated the data, they were not really “identified.” Dynamic stochastic general equilibrium (DSGE) macroeconomists, including the “New Keynesians,” have drawn the conclusion that the remedy for unstable relations is to construct models with clear microfoundations, in which forward-looking optimizing individuals and robust, deep, behavioural parameters are seen to remain stable even under changes in economic policy. As yours truly has argued in a couple of posts (e.g. here and here), this, however, is a dead end.

Where “New Keynesian” economists think that they can rigorously deduce the aggregate effects of (representative) actors with their reductionist microfoundational methodology, they have to turn a blind eye to the emergent properties that characterize all open social systems, including the economic system. The interaction between animal spirits, trust, confidence, institutions etc. cannot be deduced from, or reduced to, a question answerable on the individual level. Macroeconomic structures and phenomena also have to be analyzed on their own terms.

In microeconomics we know that aggregation really presupposes homothetic and identical preferences, something that almost never exists in real economies. The results that depend on these assumptions are therefore not robust and do not capture the underlying mechanisms at work in any real economy. And models that are critically based on particular and odd assumptions, and are neither robust nor congruent with real-world economies, are of questionable value.
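
To see the aggregation problem concretely, here is a minimal numerical sketch (in Python, with made-up numbers and two hypothetical consumers). Both have Cobb-Douglas preferences, which are homothetic, but the preferences are not identical: redistributing income between the two changes aggregate demand even though aggregate income is unchanged, so no well-behaved “representative consumer” demand function of aggregate income alone exists.

```python
# Minimal sketch: two consumers with Cobb-Douglas preferences that are
# homothetic but NOT identical (different expenditure shares on good x).
# All numbers are made up for illustration.

def demand_x(share_on_x: float, income: float, price_x: float) -> float:
    """Cobb-Douglas demand: the consumer spends a fixed share of income on x."""
    return share_on_x * income / price_x

price_x = 1.0
share_a, share_b = 0.8, 0.2   # preferences differ across the two consumers

# Same aggregate income (100) under two different income distributions.
for income_a, income_b in [(50, 50), (90, 10)]:
    total = demand_x(share_a, income_a, price_x) + demand_x(share_b, income_b, price_x)
    print(f"incomes ({income_a}, {income_b}): aggregate demand for x = {total:.1f}")

# Prints 50.0 and then 74.0: aggregate income is identical in both cases, yet
# aggregate demand differs, so no 'representative consumer' demand function
# of aggregate income alone can represent this economy.
```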

Even if economies naturally presuppose individuals, it does not follow that we can infer or explain macroeconomic phenomena solely from knowledge of these individuals. Macroeconomics is to a large extent emergent and cannot be reduced to a simple summation of micro-phenomena. Moreover, even these microfoundations aren’t immutable. The “deep parameters” of “New Keynesian” DSGE models – “tastes” and “technology” – are not really the bedrock of constancy that they believe (pretend) them to be.

So I cannot concur with Paul Krugman, Simon Wren-Lewis, Greg Mankiw and other sorta-kinda “New Keynesians” when they more or less try to reduce Keynesian economics to “intertemporal maximization modified with sticky prices and a few other deviations.” As John Quiggin so aptly writes:

If there is one thing that distinguished Keynes’ economic analysis from that of his predecessors, it was his rejection of the idea of a unique full employment equilibrium to which a market economy will automatically return when it experiences a shock. Keynes argued that an economy could shift from a full-employment equilibrium to a persistent slump as the result of the interaction between objective macroeconomic variables and the subjective ‘animal spirits’ of investors and other decision-makers. It is this perspective that has been lost in the absorption of New Keynesian macro into the DSGE framework.

Krugman & Wren-Lewis flim-flamming on heterodox assaults on mainstream economics

10 Jan, 2015 at 18:10 | Posted in Economics | 3 Comments

Simon Wren-Lewis is not satisfied with heterodox economists’ attacks on the mainstream. He’s even annoyed:

The implication is that modern intertemporal New Keynesian theory is somehow behind the view that austerity will not harm a recovery.

This is absolute and dangerous nonsense. Having spent the last decade or two looking at fiscal policy in intertemporal New Keynesian models, I know that exactly the opposite is true. In these models temporary decreases in government spending have significant negative effects on output for given real interest rates … Anyhow anyone who says that mainstream New Keynesian theory supports austerity does not know what they are talking about.

And Paul Krugman seems to share his annoyance:

The point is that standard macroeconomics does NOT justify the attacks on fiscal stimulus and the embrace of austerity. On these issues, people like Simon and myself have been following well-established models and analyses, while the austerians have been making up new stuff and/or rediscovering old fallacies to justify the policies they want. Formal modeling and quantitative analysis doesn’t justify the austerian position; on the contrary, austerians had to throw out the models and abandon statistical principles to justify their claims.

But even if Simon and Paul do not generally defend (“expansionary” or not) austerity measures, there certainly are other mainstream “New Keynesian” economists that do. Greg Mankiw, e.g., has more than once defended austerity policies (here). There has to be some reason for this. If three self-proclaimed sorta-kinda “New Keynesians” come up with different views on such a central macroeconomic issue, one may legitimately ask what kind of theories and models that brand of “Keynesianism” actually stands for.

Reading Wren-Lewis and Krugman ultimately reaffirms the impression of a macroeconomic framework that doesn’t succeed in giving a convincing analysis of what a modern capitalist economy is. Keynes’s macroeconomics was a theory for all seasons. It is not enough to put on some “Keynesian” glasses at the zero lower bound and then take them off and put on New Classical glasses once we’re out of that predicament.

Back in 1994 Laurence Ball and Greg Mankiw argued that

although traditionalists are often called ‘New Keynesians,’ this label is a misnomer. They could just as easily be called ‘New Monetarists.’

That is still true today. “New Keynesianism” is a gross misnomer. The macroeconomics of people like Greg Mankiw, Paul Krugman and Simon Wren-Lewis has theoretically and methodologically a lot to do with Milton Friedman, Robert Lucas and Thomas Sargent — and very little to do with the founder of macroeconomics, John Maynard Keynes.

Read my lips — validity is NOT enough!

10 Jan, 2015 at 13:49 | Posted in Economics | 1 Comment

Neoclassical economic theory today is in the story-telling business whereby economic theorists create make-believe analogue models of the target system – usually conceived as the real economic system. This modeling activity is considered useful and essential. Since fully-fledged experiments on a societal scale as a rule are prohibitively expensive, ethically indefensible or unmanageable, economic theorists have to substitute experimenting with something else. To understand and explain relations between different entities in the real economy the predominant strategy is to build models and make things happen in these “analogue-economy models” rather than engineering things happening in real economies.

Formalistic deductive “Glasperlenspiel” can be very impressive and seductive. But in the realm of science it ought to be considered of little or no value to simply make claims about the model and lose sight of reality. As Julian Reiss writes:

There is a difference between having evidence for some hypothesis and having evidence for the hypothesis relevant for a given purpose. The difference is important because scientific methods tend to be good at addressing hypotheses of a certain kind and not others: scientific methods come with particular applications built into them … The advantage of mathematical modelling is that its method of deriving a result is that of mathematical proof: the conclusion is guaranteed to hold given the assumptions. However, the evidence generated in this way is valid only in abstract model worlds, while we would like to evaluate hypotheses about what happens in economies in the real world … The upshot is that valid evidence does not seem to be enough. What we also need is to evaluate the relevance of the evidence in the context of a given purpose.

Neoclassical economics has long since given up on the real world and contents itself with proving things about thought-up worlds. Empirical evidence plays only a minor role in economic theory, where models largely function as a substitute for empirical evidence. Hopefully, humbled by the manifest failure of its theoretical pretences, the one-sided, almost religious, insistence on axiomatic-deductivist modeling as the only scientific activity worthy of pursuing in economics will give way to methodological pluralism based on ontological considerations rather than formalistic tractability. To have valid evidence is not enough. What economics needs is sound evidence.

Extraordinarily absurd things called ‘Keynesian’

8 Jan, 2015 at 20:25 | Posted in Economics | 2 Comments

Today, it seems, just about anyone can get away with calling themselves a Keynesian, and they do, no matter what salmagundi of doctrinal positions they may hold dear, without fear of ridicule or reproach. Consequently, some of the most extraordinarily absurd things are now being attributed to Keynes and called “Keynesian theories”. For instance, J. Bradford DeLong, a popular blogger and faculty member at Berkeley, has in a (2009) paper divided up the history of macroeconomics into what he identifies as a “Peel–Keynes–Friedman axis” and a “Marx–Hoover–Hayek” axis: clearly he has learned a trick or two from the neoliberals, who sow mass confusion by mixing together oil and water in their salad dressing versions of history. The self-appointed “New Keynesians” of the 1990s (including Gregory Mankiw, David Romer and Michael Woodford) took the name of Keynes in vain by unashamedly asserting a proposition that Keynes himself had repeatedly and expressly rejected, namely that market-clearing models cannot explain short-run economic fluctuations, and so proceeded to advocate models with “sticky” wages and prices (Mankiw, 2006). George Akerlof and Robert Shiller (2009) have taken three sentences from the General Theory out of context and spun it into some banal misrepresentation concerning what Keynes actually wrote about the notion of “animal spirits,” not to mention his actual conception of macroeconomics. And we observe contemporary journalists going gaga over Keynes, with almost no underlying substantive justification from the track record of the economics profession …

It is undeniably a Sisyphusian task to lean against this blustering tide of misrepresentation in the current Humpty Dumpty climate, with its gales of misinformation and gusts whipping about the turncoats, where economists harbor such easy contempt for history that words can be purported to mean anything that is convenient or politic for the selfish purposes of the writer.

Philip Mirowski

Phil has always been one of my favourite critics of neoclassical economics. I first met him twenty years ago, when he was invited to give a speech on themes from his book More Heat than Light at my economics department in Lund, Sweden. All the neoclassical professors were there. Their theories were totally mangled and no one — absolutely no one — had anything to say even remotely reminiscent of a defense. Nonplussed, one of them finally asked in total desperation: “But what shall we do then?”

Yes indeed, what shall they do? Moments like that you never forget. It has stayed with me for all these years. The emperor turned out to be naked. Thanks Phil!

Let’s empty the econometric garbage can!

8 Jan, 2015 at 17:06 | Posted in Statistics & Econometrics | 3 Comments

This is where statistical analysis enters. Validation comes in many different forms, of course, and much good theory testing is qualitative in character. Yet when applicable, statistical theory is our most powerful inductive tool, and in the end, successful theories have to survive quantitative evaluation if they are to be taken seriously. Moreover, statistical analysis is not confined to theory evaluation. Quantitative analysis also discovers empirical generalizations that theory must account for. Scientific invention emerges from data and experiment as often as data and experiment are used to confirm prior theory …

How is all this empirical creativity and validation to be achieved? Most empirical researchers … believe that they know the answer. First, they say, decide which explanations of a given phenomenon are to be tested. One or more such hypotheses are set out. Then “control variables” are chosen — factors which also affect the phenomenon under study, but not in a way relevant to the hypotheses under discussion. Then measures of all these explanatory factors are entered into a regression equation (linearly), and each variable is assigned a coefficient with a standard error. Hypotheses whose factors acquire a substantively and statistically significant coefficient are taken to be influential, and those that do not are treated as rejected. Extraneous influences are assumed to be removed by the “controls” …

In the great majority of applied work with all these methods, a particular statistical distribution is specified for the dependent variable, conditional on the independent variables. The explanatory factors are postulated to exert their influence through one or more parameters, usually just the mean of the statistical distribution for the dependent variable. The function that connects the independent variables to the mean is known as the “link function” …

In practice, researchers nearly always postulate a linear specification as the argument of the link function … Computer packages often make this easy: One just enters the variables into the specification, and linearity is automatically applied. In effect, we treat the independent variable list as a garbage can: Any variable with some claim to relevance can be tossed in. Then we carry out least squares or maximum likelihood estimation (MLE) or Bayesian estimation or generalized method of moments, perhaps with the latest robust standard errors. It all sounds very impressive. It is certainly easy: We just drop variables into our mindless linear functions, start up our computing routines, and let ’er rip …

Linear link functions are not self-justifying. Garbage-can lists of variables entered linearly into regression, probit, logit, and other statistical models have no explanatory power without further argument. In the absence of careful supporting argument, the results belong in the statistical rubbish bin …

In sum, we need to abandon mechanical rules and procedures. “Throw in every possible variable” won’t work; neither will “rigidly adhere to just three explanatory variables and don’t worry about anything else.” Instead, the research habits of the profession need greater emphasis on classic skills that generated so much of what we know in quantitative social science: plots, crosstabs, and just plain looking at data. Those methods are simple, but sophisticatedly simple. They often expose failures in the assumptions of the elaborate statistical tools we are using, and thus save us from inferential errors.

Christopher H. Achen

This paper is one of my absolute favourites. Why? I guess it’s because Achen reaffirms my firm conviction that since there is no absolutely certain knowledge at hand in social sciences — including economics — explicit argumentation and justification ought to play an extremely strong role if the purported knowledge claims are to be sustainably warranted. Or as Achen puts it, without careful supporting arguments, “just dropping variables into SPSS, STATA, S or R programs accomplishes nothing.”
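
For readers who want to see the garbage-can problem in miniature, here is a small simulation sketch (Python with numpy; all data are made up). Twenty irrelevant “control” variables are tossed into a linear regression alongside the one variable that actually matters; by chance alone, roughly one of the irrelevant regressors can be expected to clear the conventional |t| > 2 bar and would, under the mechanical rule Achen criticises, be declared “influential.”

```python
import numpy as np

rng = np.random.default_rng(42)
n, k_garbage = 100, 20

x = rng.normal(size=n)                      # the one variable that matters
garbage = rng.normal(size=(n, k_garbage))   # twenty irrelevant 'controls'
y = 1.0 + 2.0 * x + rng.normal(size=n)      # true data-generating process

# Garbage-can practice: toss every variable in linearly and let 'er rip.
X = np.column_stack([np.ones(n), x, garbage])
beta = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
t_stats = beta / se

# With 20 purely random regressors, about one will clear |t| > 2 by chance
# alone. Plotting y against each candidate variable (Achen's 'just plain
# looking at data') is what actually exposes which variables matter.
false_hits = int(np.sum(np.abs(t_stats[2:]) > 2))
print(f"'significant' garbage controls: {false_hits} of {k_garbage}")
```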

Stiglitz on the breakdown of marginal productivity theory

7 Jan, 2015 at 17:51 | Posted in Economics | Comments Off on Stiglitz on the breakdown of marginal productivity theory

Lynn Parramore: Many neoclassical economists have argued that when people contribute to the economy, they get rewarded proportionally. Is this model breaking down?

Joseph Stiglitz: Yes. I think that the thrust of my book, The Price of Inequality, and a lot of other work has been to question marginal productivity theory, which is a theory that has been prevalent for 200 years. A lot of people have questioned it, but my work is a renewal of that questioning. And I think that some of the very interesting work that Piketty and his associates have done is providing some empirical basis for doing it. Not only the example that I just gave, that if you look at the people at the top, monopolists actually constrain output.


It’s also true that people who make the most productive contributions, the ones who make lasers or transistors, or the inventor of the computer, DNA researchers — none of these are the top wealthiest people in the country. So if you look at the people who contributed the most, and the people who are there at the top, they’re not the same. That’s the second piece.

A very interesting study that Piketty and his associates did was on the effect of an increase in taxes on the top 1 percent. If you had the hypothesis that these were people who were working hard and contributing more, you might say, OK, that’s going to significantly slow down the economy. But if you say it’s rent-seeking, then you’re just capturing for the government some of the rents.

SALON

The credit creation theory of banking — the only theory consistent with empirical evidence

7 Jan, 2015 at 14:08 | Posted in Economics | 10 Comments

In the process of making loaned money available in the borrower’s bank account, it was found that the bank did not transfer the money away from other internal or external accounts, resulting in a rejection of both the fractional reserve theory and the financial intermediation theory. Instead, it was found that the bank newly ‘invented’ the funds by crediting the borrower’s account with a deposit, although no such deposit had taken place. This is in line with the claims of the credit creation theory.

Thus it can now be said with confidence for the first time – possibly in the 5000 years’ history of banking – that it has been empirically demonstrated that each individual bank creates credit and money out of nothing, when it extends what is called a ‘bank loan’. The bank does not loan any existing money, but instead creates new money. The money supply is created as ‘fairy dust’ produced by the banks out of thin air. The implications are far-reaching.

Henceforth, economists need not rely on assertions concerning banks. We now know, based on empirical evidence, why banks are different, indeed unique — solving the longstanding puzzle posed by Fama (1985) and others — and different from both non-bank financial institutions and corporations: it is because they can individually create money out of nothing.

The empirical evidence shows that of the three theories of banking, it is the one that today has the least influence and that is being belittled in the literature that is supported by the empirical evidence. Furthermore, it is the theory which was widely held at the end of the 19th century and in the first three decades of the twentieth. It is sobering to realise that since the 1930s, economists have moved further and further away from the truth, instead of coming closer to it. This happened first via the half-truth of the fractional reserve theory and then reached the completely false and misleading financial intermediation theory that today is so dominant. Thus this paper has found evidence that there has been no progress in scientific knowledge in economics, finance and banking in the 20th century concerning one of the most important and fundamental facts for these disciplines. Instead, there has been a regressive development. The known facts were unlearned and have become unknown. This phenomenon deserves further research. For now it can be mentioned that this process of unlearning the facts of banking could not possibly have taken place without the leading economists of the day having played a significant role in it.

Richard A. Werner
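
To make the balance-sheet mechanics Werner describes concrete, here is a deliberately stylised sketch (Python, illustrative numbers only, ignoring capital and liquidity regulation and interbank settlement): when the toy bank extends a loan, it books a new loan asset and simultaneously credits the borrower’s deposit account, so both sides of the balance sheet grow and no pre-existing funds are transferred from anywhere.

```python
# Stylised balance-sheet sketch of the credit creation theory (illustrative
# numbers only; real banks also face capital requirements, liquidity rules
# and settlement flows, all ignored here).

class ToyBank:
    def __init__(self, reserves, loans, deposits, equity):
        self.reserves, self.loans = reserves, loans      # assets
        self.deposits, self.equity = deposits, equity    # liabilities + equity

    def extend_loan(self, amount):
        """Granting a loan books a new loan asset AND credits the borrower's
        deposit. Nothing is transferred out of reserves or anyone else's account."""
        self.loans += amount
        self.deposits += amount

    def show(self, label):
        assets = self.reserves + self.loans
        liabilities_and_equity = self.deposits + self.equity
        print(f"{label}: assets = {assets}, liabilities + equity = {liabilities_and_equity}")

bank = ToyBank(reserves=100, loans=900, deposits=800, equity=200)
bank.show("before the loan")   # 1000 on both sides
bank.extend_loan(200)
bank.show("after a 200 loan")  # 1200 on both sides: the deposit is newly created
```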

Added: Indeed, there certainly has been a “regressive development.” Things that were known facts back in 1948 have somehow been unlearned and become unknown …
 

[h/t lasse]

The Mankiw-Piketty showdown at the ASSA Annual Meeting, January 2015

6 Jan, 2015 at 14:40 | Posted in Economics | 3 Comments

Link here: http://t.co/6q9FlLJH2X

Photo credit: Kyle Depew

Empirical evidence now irrevocably shows banks create money out of thin air

6 Jan, 2015 at 13:18 | Posted in Economics | 13 Comments

This paper presents the first empirical evidence in the history of banking on the question of whether banks can create money out of nothing. The banking crisis has revived interest in this issue, but it had remained unsettled. Three hypotheses are recognised in the literature. According to the financial intermediation theory of banking, banks are merely intermediaries like other non-bank financial institutions, collecting deposits that are then lent out. According to the fractional reserve theory of banking, individual banks are mere financial intermediaries that cannot create money, but collectively they end up creating money through systemic interaction. A third theory maintains that each individual bank has the power to create money ‘out of nothing’ and does so when it extends credit (the credit creation theory of banking). The question which of the theories is correct has far-reaching implications for research and policy. Surprisingly, despite the longstanding controversy, until now no empirical study has tested the theories. This is the contribution of the present paper. An empirical test is conducted, whereby money is borrowed from a cooperating bank, while its internal records are being monitored, to establish whether in the process of making the loan available to the borrower, the bank transfers these funds from other accounts within or outside the bank, or whether they are newly created. This study establishes for the first time empirically that banks individually create money out of nothing. The money supply is created as ‘fairy dust’ produced by the banks individually, “out of thin air”.

Richard A. Werner

Ditch marginal productivity theory once and for all

5 Jan, 2015 at 17:28 | Posted in Economics | 2 Comments

By all these accounts, labor’s share of gross income is falling. At the same time, productivity is increasing.
For the marginal productivity theory to make sense, you have to assume that labor compared to other factors of production is less productive than it was before. It’s hard to see how this could be true in an economy dominated by the service sector, as in the US. So, not surprisingly, the data doesn’t support marginal productivity as an explanation of income distribution. To see why, here’s a relatively short but detailed criticism of this armchair theory through reality checking.

As the same writer points out, the theory is quietly being removed from most textbooks because it’s useless, except to dead-enders like Mankiw, who continues to teach it from his perch at Harvard. So if most economists think so little of it, why does it survive?

Maybe it’s because the distribution it describes is supposed to arise from the operation of Natural Law. As such, it fits neatly with Invisible Hand mumbo-jumbo. Natural Law isn’t a testable or usable theory. Instead, it is a normative theory. It tells you what the writer thinks is the moral and righteous position. People who tell you marginal productivity theory is true want you to think that current distribution of income is natural and just, and that any other distribution would be unjust, unfair to someone.

That’s what that Natural Law stuff means: the income you get from the labor market is what it Should Be, and if you get more, you’re taking it away from someone. Maybe that someone is another worker, but more likely, you’re stealing from the owners of the things used in production: the return to which the capital owner and the land owner are entitled by virtue of the Natural Law. Samuelson and Nordhaus teach their students that the economy as currently constructed is natural and fair.

Add to that a desire to believe that the economic system is fair, and a constant media and political drumbeat about the wonders of capitalism, and you have the perfect setting for uncritical belief in a false and stupid idea. You are worth more than the “market” says.

Ed Walker/Naked Capitalism

Debunking Mankiw

5 Jan, 2015 at 12:36 | Posted in Economics | 1 Comment

No one in the econ blogosphere has gone after Greg Mankiw, who did his best to provoke outrage with his latest New York Times column. “Debunking” Piketty, Mankiw says that rich people save because they are altruistic toward their unfortunate kids, who, because of regression to the mean, won’t be as financially successful as they are. But the unintended consequence of all this saving is that the capital-labor ratio changes, and the principle of diminishing marginal productivity means that the rate of profit will fall and wages will rise. Hence Piketty’s patrimonial capitalism is good for the workers!

But let’s put it in Mankiw’s inimitable words:

Because capital is subject to diminishing returns, an increase in its supply causes each unit of capital to earn less. And because increased capital raises labor productivity, workers enjoy higher wages. In other words, by saving rather than spending, those who leave an estate to their heirs induce an unintended redistribution of income from other owners of capital toward workers.

The bottom line is that inherited wealth is not an economic threat. Those who have earned extraordinary incomes naturally want to share their good fortune with their descendants. Those of us not lucky enough to be born into one of these families benefit as well, as their accumulation of capital raises our productivity, wages and living standards.

Now let’s just make a list of the assumptions you have to make in order to accept Mankiw’s argument (none of which he mentions himself):

1. All resources are fully employed, and the economy is on its production possibility frontier.

2. A decision to save, by lowering the cost of capital, increases the quantity of investment.

3. Financial and real capital are identical, and the return to the first is the return to the second.

4. All savings and investment occur in the same economy; rich people do not earn income from investments elsewhere.

5. All prices represent true social costs and benefits. There are no profits to be made except by increasing the net wealth of the community. For instance, transfers and uncompensated externalities play no role whatsoever in profits.

6. There are no monopoly profits, with the exception of self-extinguishing temporary monopolies associated with wealth-creating innovations.

7. But, in partial contradiction to (6), there is no technological change at all, since it would alter the marginal productivities of labor and capital.

8. Production sets are convex everywhere; there are no increasing returns or interactions between resources or activities that would give rise to nonconvexities and multiple equilibria.

On top of all this, it should be pointed out that, if Mankiw is right, the rate of profit—Piketty’s r—should fall as the capital-income ratio rises. But a central argument in Piketty’s book is that r is remarkably consistent through relative capital accumulations and decumulations, a steady 4-5%. There isn’t a single dollop of data in Mankiw’s little piece that challenges Piketty’s finding.

Putting all of this together, it doesn’t sound like the sweeping conclusion at the end of Mankiw’s column is justified, does it?

Peter Dorman/Econospeak
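
To see what is at stake in Dorman’s last point, here is a back-of-the-envelope sketch (Python, a textbook Cobb-Douglas production function with an assumed capital share of one third, purely illustrative numbers): capital accumulation does lower r and raise wages, exactly as Mankiw says, but in this benchmark the capital share of income stays pinned at one third, so a rate of profit that stays around 4-5 per cent while the capital-output ratio keeps rising, which is what Piketty reports, is precisely what these assumptions cannot deliver.

```python
# Back-of-the-envelope sketch of the neoclassical mechanism Mankiw invokes,
# using a textbook Cobb-Douglas production function Y = K**alpha * L**(1-alpha).
# alpha = 1/3 and the capital stocks below are assumptions, not data.

alpha, L = 1 / 3, 100.0

def outcomes(K):
    Y = K ** alpha * L ** (1 - alpha)
    r = alpha * Y / K           # marginal product of capital = rate of profit
    w = (1 - alpha) * Y / L     # marginal product of labour = wage
    return Y, r, w, r * K / Y   # last item: capital's share of income

for K in (100.0, 200.0, 400.0):
    Y, r, w, capital_share = outcomes(K)
    print(f"K={K:5.0f}  K/Y={K / Y:4.2f}  r={r:5.3f}  w={w:5.2f}  "
          f"capital share={capital_share:4.2f}")

# As K doubles and doubles again, r falls and w rises (Mankiw's point), but
# capital's share of income never moves from alpha. A rate of profit that
# stays around 4-5% while K/Y keeps rising (Piketty's finding) is exactly
# what this benchmark cannot produce.
```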

Mon Cousin

5 Jan, 2015 at 11:13 | Posted in Economics | Comments Off on Mon Cousin

 

A great actor has passed away. Ingvar Kjellson (1923-2014) played many unforgettable roles, but the one I most associate him with is the eccentric Mon Cousin in Hedebyborna. It doesn't get any better than this.

Time for another walkout on Mankiw

4 Jan, 2015 at 22:21 | Posted in Economics | 2 Comments

With this model as background, let’s move to the big question: Why should we be concerned about inequality in wealth? Why should anyone care if some families have accumulated capital and enjoy the life of the rentier? Piketty writes about such inequality as if we all innately share his personal distaste for it. But before we embark on policies aimed at reducing wealth inequality, such as a global tax on capital, it would be useful to explore why this inequality matters.

One place to look for answers is Occupy Wall Street, the protest movement that drew attention to growing inequality. This movement was motivated, I believe, by the sense that the affluence of the financial sector was a threat to other people’s living standards. In the aftermath of a financial crisis followed by a deep recession, this sentiment was understandable. Yet the protesters seemed not to object to affluence itself … From this perspective, the rentier lifestyle of capitalists should not be a concern. As we have seen, in a standard neoclassical growth model, the owners of capital earn the value of their marginal contribution to the production process, and their accumulation of capital enhances the productivity and incomes of workers.

Greg Mankiw

Indeed — and nothing could more forcefully show what total horseshit marginal productivity theory is.

To think that the exploding income and wealth inequality that we see around us today can be explained by marginal productivity theory is — to put it gently — an obvious sign of someone having bad luck when trying to think.

Three years ago, a walkout was staged by students in the introductory economics class Ec10 at Harvard, taught by Greg Mankiw. Maybe it’s time for another one …

