My new book is out

14 April 2015 at 19:37 | Posted in Economics | 2 comments

”A wonderful set of clearly written and highly informative essays by a scholar who is knowledgeable, critical and sharp enough to see how things really are in the discipline, and honest and brave enough to say how things are. A must read especially for those truly concerned and/or puzzled about the state of modern economics.”

Tony Lawson

Table of Contents
Introduction
What is (wrong with) economic theory?
Capturing causality in economics and the limits of statistical inference
Microfoundations – spectacularly useless and positively harmful
Economics textbooks – anomalies and transmogrification of truth
Rational expectations – a fallacious foundation for macroeconomics
Neoliberalism and neoclassical economics
The limits of marginal productivity theory
References

About the author
Lars Pålsson Syll received a PhD in economic history in 1991 and a PhD in economics in 1997, both at Lund University, Sweden. Since 2004 he has been professor of social science at Malmö University, Sweden. His primary research areas have been in the philosophy and methodology of economics, theories of distributive justice, and critical realist social science. As philosopher of science and methodologist he is a critical realist and an outspoken opponent of all kinds of social constructivism and postmodern relativism. As social scientist and economist he is strongly influenced by John Maynard Keynes and Hyman Minsky. He is the author of Social Choice, Value and Exploitation: an Economic-Philosophical Critique (in Swedish, 1991), Utility Theory and Structural Analysis (1997), Economic Theory and Method: A Critical Realist Perspective (in Swedish, 2001), The Dismal Science (in Swedish, 2001), The History of Economic Theories (in Swedish, 4th ed., 2007), John Maynard Keynes (in Swedish, 2007), An Outline of the History of Economics (in Swedish, 2011), as well as numerous articles in scientific journals.

World Economics Association Books

Is there anything worth keeping in mainstream microeconomics?

14 April 2015 at 12:48 | Posted in Economics | 1 comment

The main reason why the teaching of microeconomics (or of “microfoundations” of macroeconomics) has been called “autistic” is that it is increasingly impossible to discuss real-world economic questions with microeconomists – and with almost all neoclassical theorists. They are trapped in their system, and don’t in fact care about the outside world any more. If you consult any microeconomics textbook, it is full of maths (e.g. Kreps or Mas-Colell, Whinston and Green) or of “tales” (e.g. Varian or Schotter), without real data (occasionally you find “examples” or “applications” with numerical examples – but they are purely fictitious, invented by the authors).

At first, French students got quite a lot of support from teachers and professors: hundreds of teachers signed petitions backing their movement – especially pleading for “pluralism” in teaching the different ways of approaching economics. But when the students proposed a precise programme of studies … almost all teachers refused, considering that it was “too much” because “students must learn all these things, even with some mathematical details”. When you ask them “why?”, the answer usually goes something like this: “Well, even if we, personally, never use the kind of ‘theory’ or ‘tools’ taught in microeconomics courses … surely there are people who do ‘use’ and ‘apply’ them, even if it is in an ‘unrealistic’ or ‘excessive’ way”.

But when you ask those scholars who do “use these tools”, especially those who do a lot of econometrics with “representative agent” models, they answer (if you insist quite a bit): “OK, I agree with you that it is nonsense to represent the whole economy by the (intertemporal) choice of one agent – consumer and producer – or by a unique household that owns a unique firm; but if you don’t do that, you don’t do anything !”

Bernard Guerrien

Yes indeed — ”you don’t do anything!”

Twenty years ago Phil Mirowski was invited to give a speech on themes from his book More Heat than Light at my economics department in Lund, Sweden. All the neoclassical professors were there. Their theories were totally mangled and no one — absolutely no one — had anything to say even remotely reminiscent of a defence. Nonplussed, one of them finally asked in total desperation: “But what shall we do then?”

Yes indeed — what shall they do? The emperor turned out to be naked.

[h/t Edward Fullbrook]

Does big government help or hurt?

14 April 2015 at 09:44 | Posted in Economics | Comments Off on Does big government help or hurt?

 

Mastering ‘metrics

13 April 2015 at 14:41 | Posted in Economics | 4 comments

In their new book, Mastering ‘Metrics: The Path from Cause to Effect, Joshua D. Angrist and Jörn-Steffen Pischke write:

Our first line of attack on the causality problem is a randomized experiment, often called a randomized trial. In a randomized trial, researchers change the causal variables of interest … for a group selected using something like a coin toss. By changing circumstances randomly, we make it highly likely that the variable of interest is unrelated to the many other factors determining the outcomes we want to study. Random assignment isn’t the same as holding everything else fixed, but it has the same effect. Random manipulation makes other things equal hold on average across the groups that did and did not experience manipulation. As we explain … ‘on average’ is usually good enough.

Angrist and Pischke may ”dream of the trials we’d like to do” and consider ”the notion of an ideal experiment” something that ”disciplines our approach to econometric research,” but to maintain that ‘on average’ is ”usually good enough” is a claim that in my view is rather unwarranted, and for many reasons.

First of all, it amounts to nothing but hand waving to assume, simpliciter and without argumentation, that it is tenable to treat social agents and relations as homogeneous and interchangeable entities.

Randomization basically allows the econometrician to treat the population as consisting of interchangeable and homogeneous groups (‘treatment’ and ‘control’). The regression models one arrives at by using randomized trials tell us the average effect that variations in variable X have on the outcome variable Y, without having to explicitly control for the effects of other explanatory variables R, S, T, etc. Everything is assumed to be essentially equal except the values taken by variable X.

In a usual regression context one would apply an ordinary least squares (OLS) estimator in trying to get an unbiased and consistent estimate:

Y = α + βX + ε,

where α is a constant intercept, β a constant ”structural” causal effect and ε an error term.

The problem here is that although we may get an estimate of the ”true” average causal effect, this may ”mask” important heterogeneous effects of a causal nature. We may get the right answer that the average causal effect is 0, while those who are ”treated” (X=1) have causal effects equal to −100 and those ”not treated” (X=0) have causal effects equal to 100. Contemplating being treated or not, most people would probably be interested in knowing about this underlying heterogeneity and would not consider the OLS average effect particularly enlightening.
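A minimal simulation sketch of this point (nothing from Angrist and Pischke’s book; the ±100 numbers are the purely illustrative ones from above): randomize a treatment over a population in which half the individual causal effects are −100 and half are +100, and OLS dutifully reports an average effect of zero.

```python
import numpy as np

# A minimal sketch, with illustrative numbers only: half the population
# has an individual causal effect of -100, the other half +100. Treatment
# is randomized, so OLS recovers the *average* effect -- roughly 0 -- and
# completely masks the underlying heterogeneity.
rng = np.random.default_rng(1)
n = 100_000
effect = np.where(rng.random(n) < 0.5, -100.0, 100.0)  # heterogeneous effects
x = rng.integers(0, 2, size=n).astype(float)           # randomized treatment
y = 50.0 + effect * x + rng.normal(0.0, 1.0, size=n)   # Y = a + (beta_i)X + e

beta_hat, alpha_hat = np.polyfit(x, y, 1)              # OLS of Y on X
print(f"estimated average causal effect: {beta_hat:.2f}")  # close to 0
```

Both subgroup effects are huge; the regression answer is still, correctly on average, zero.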

Limiting model assumptions in economic science always have to be closely examined. If we want to show that the mechanisms or causes we isolate and handle in our models are stable, in the sense that they do not change when we ”export” them to our ”target systems”, we have to show that they hold not only under ceteris paribus conditions. If they hold only under such conditions, they are a fortiori of limited value for our understanding, explanation or prediction of real economic systems.

Real-world social systems are not governed by stable causal mechanisms or capacities. The kinds of ”laws” and relations that econometrics has established are laws and relations about entities in models that presuppose causal mechanisms being atomistic and additive. When causal mechanisms operate in real-world social target systems, they do so only in ever-changing and unstable combinations where the whole is more than a mechanical sum of parts. If economic regularities obtain, they do so (as a rule) only because we engineered them for that purpose. Outside man-made ”nomological machines” they are rare, or even non-existent. Unfortunately that also makes most of the achievements of econometrics – like most of the contemporary endeavours of mainstream economic theoretical modelling – rather useless.

Remember that a model is not the truth. It is a lie to help you get your point across. And in the case of modeling economic risk, your model is a lie about others, who are probably lying themselves. And what’s worse than a simple lie? A complicated lie.

Sam L. Savage The Flaw of Averages

When Joshua Angrist and Jörn-Steffen Pischke in an earlier article of theirs [”The Credibility Revolution in Empirical Economics: How Better Research Design Is Taking the Con out of Econometrics,” Journal of Economic Perspectives, 2010] say that

anyone who makes a living out of data analysis probably believes that heterogeneity is limited enough that the well-understood past can be informative about the future

I really think they underestimate the heterogeneity problem. It does not just turn up as an external validity problem when trying to “export” regression results to different times or different target populations. It is also often an internal problem to the millions of regression estimates that economists produce every year.

But when the randomization is purposeful, a whole new set of issues arises — experimental contamination — which is much more serious with human subjects in a social system than with chemicals mixed in beakers … Anyone who designs an experiment in economics would do well to anticipate the inevitable barrage of questions regarding the valid transference of things learned in the lab (one value of z) into the real world (a different value of z) …

Absent observation of the interactive compounding effects z, what is estimated is some kind of average treatment effect which is called by Imbens and Angrist (1994) a “Local Average Treatment Effect,” which is a little like the lawyer who explained that when he was a young man he lost many cases he should have won but as he grew older he won many that he should have lost, so that on the average justice was done. In other words, if you act as if the treatment effect is a random variable by substituting βt for β0 + β′zt, the notation inappropriately relieves you of the heavy burden of considering what are the interactive confounders and finding some way to measure them …

If little thought has gone into identifying these possible confounders, it seems probable that little thought will be given to the limited applicability of the results in other settings.

Ed Leamer

Evidence-based theories and policies are highly valued nowadays. Randomization is supposed to control for bias from unknown confounders. The received opinion is that evidence based on randomized experiments therefore is the best.

More and more economists have also lately come to advocate randomization as the principal method for ensuring valid causal inferences.

I would however rather argue that randomization, just like econometrics, promises more than it can deliver, basically because it requires assumptions that in practice are not possible to maintain.

Especially when it comes to questions of causality, randomization is nowadays considered some kind of ”gold standard”. Everything has to be evidence-based, and the evidence has to come from randomized experiments.

But just like econometrics, randomization is basically a deductive method. Given the assumptions (such as manipulability, transitivity, separability, additivity, linearity, etc.), these methods deliver deductive inferences. The problem, of course, is that we will never completely know when the assumptions are right. And although randomization may contribute to controlling for confounding, it does not guarantee it, since genuine randomness presupposes infinite experimentation and we know all real experimentation is finite. And even if randomization may help to establish average causal effects, it says nothing of individual effects unless homogeneity is added to the list of assumptions. Real target systems are seldom epistemically isomorphic to our axiomatic-deductive models/systems, and even if they were, we would still have to argue for the external validity of the conclusions reached from within these epistemically convenient models/systems. Causal evidence generated by randomization procedures may be valid in ”closed” models, but what we usually are interested in is causal evidence in the real target system we happen to live in.

When does a conclusion established in population X hold for target population Y? Only under very restrictive conditions!

Angrist and Pischke’s ”ideally controlled experiments” tell us with certainty what causes what effects — but only given the right ”closures”. Making appropriate extrapolations from (ideal, accidental, natural or quasi) experiments to different settings, populations or target systems is not easy. ”It works there” is no evidence for ”it will work here”. Causes deduced in an experimental setting still have to show that they come with an export warrant to the target population/system. The causal background assumptions made have to be justified, and without licences to export, the value of ”rigorous” and ”precise” methods — and ‘on-average knowledge’ — is despairingly small.

The cleavage that counts

13 April 2015 at 12:55 | Posted in Economics | 3 comments

On the one side were those who believed that the existing economic system is in the long run self-adjusting, though with creaks and groans and jerks, and interrupted by time-lags, outside interference and mistakes … These economists did not, of course, believe that the system is automatic or immediately self-adjusting, but they did maintain that it has an inherent tendency towards self-adjustment, if it is not interfered with, and if the action of change and chance is not too rapid.

Those on the other side of the gulf, however, rejected the idea that the existing economic system is, in any significant sense, self-adjusting. They believed that the failure of effective demand to reach the full potentialities of supply, in spite of human psychological demand being immensely far from satisfied for the vast majority of individuals, is due to much more fundamental causes …

The gulf between these two schools of thought is deeper, I believe, than most of those on either side of it realize. On which side does the essential truth lie?

The strength of the self-adjusting school depends on its having behind it almost the whole body of organized economic thinking and doctrine of the last hundred years. This is a formidable power … It has vast prestige and a more far-reaching influence than is obvious. For it lies behind the education and the habitual modes of thought, not only of economists but of bankers and business men and civil servants and politicians of all parties …

Now I range myself with the heretics. I believe their flair and their instinct move them towards the right conclusion. But I was brought up in the citadel and I recognize its power and might … For me, therefore, it is impossible to rest satisfied until I can put my finger on the flaw in the part of the orthodox reasoning that leads to the conclusions that for various reasons seem to me to be inacceptable. I believe that I am on my way to do so. There is, I am convinced, a fatal flaw in that part of the orthodox reasoning that deals with the theory of what determines the level of effective demand and the volume of aggregate employment …

John Maynard Keynes (1934)

Balance sheet recessions — a massive case of fallacy of composition problems

11 April 2015 at 17:43 | Posted in Economics | 3 comments

 

The way I understand Richard Koo, he maintains that interest rates and monetary policy don’t really matter when we’re in a balance sheet recession where, following a nationwide collapse in asset prices, more or less every company and household finds itself carrying excess debt and has to pay it down. The number of willing private borrowers is strongly reduced – even when interest rates are at zero – and as a result of this ”debt minimization” monetary policy by itself loses all power. To get things going, the government has to run a fiscal deficit; its increased borrowing produces an increase in the money supply and thereby makes monetary policy work again.

Paul Krugman had a post up earlier this year, basically maintaining that this argument can’t be right, since if there are some people – debtors – in the balance sheet recession who pay down their debt, there also have to be other people – creditors – whose balance sheets are correspondingly strengthened, and who are susceptible to being influenced by what happens to interest rates and inflation.

To be honest, I have some problems seeing the great gulf between them – at least on the level of general principles – that one is led to believe ought to be there, considering all the heated discussion on this issue between them over the last couple of years.

For although it’s true, as Koo says, that for those firms trying to minimize debt, no injections whatsoever by the central bank will generate inflationary impulses, for others – and probably not even in the worst balance sheet recessions imaginable are all firms debt-constrained – there might be room for some (limited) generation of inflation by monetary means. So ultimately it looks more like a difference in degree than in kind. To Koo monetary policy by itself has no power, and instead we have to put our trust in fiscal policy. Krugman on the other hand says that some private actors might not be balance-sheet-constrained and therefore are susceptible to (inflationary) monetary policy, and that, besides, fiscal policy can work anyway. And more importantly – both definitely agree that increased liquidity will not always and everywhere get the economy out of a slump, and that neither fiscal nor monetary policy by itself is capable of solving the problems created in a balance sheet recession.

Market fundamentalist ideologies

11 April 2015 at 17:19 | Posted in Economics | Comments Off on Market fundamentalist ideologies

 

On the irrelevance of general equilibrium theory

11 April 2015 at 11:18 | Posted in Economics | 1 comment

The general equilibrium approach starts with individual decisions. It assumes that trades are voluntary and that there exist mutually advantageous opportunities of exchange. Up to here, everyone can agree. The problem lies in the next step. At this point, let us follow David Kreps’s (1990) reasoning in his A Course in Microeconomic Theory. Kreps asks the reader to “imagine consumers wandering around a large market square” with different kinds of food in their bags. When two of them meet, “they examine what each has to offer, to see if they can arrange a mutually agreeable trade. To be precise, we might imagine that at every chance meeting of this sort, the two flip a coin and depending on the outcome, one is allowed to propose an exchange, which the other may either accept or reject. The rule is that you can’t eat until you leave the market square, so consumers wait until they are satisfied with what they possess” (196).

Kreps “imagines” other models of this kind. In each of them by the word “market” he means a “market square,” and he introduces rules (“flip a coin,” “nobody can leave before the end of the process”). He is aware that “exploration of more realistic models of markets is in relative infancy.” And when he speaks of “more realistic” models, he means more realistic with respect to perfect competition.

But the problem with perfect competition is not its “lack” of realism; it is its “irrelevancy”, as it surreptitiously assumes an entity that gives prices (present and future) to price-taking agents, that collects information about supplies and demands, adds these up, and moves prices up and down until it finds their equilibrium value. Textbooks do not tell this story; they assume that a deus ex machina called the “market” does the job.

Sorry, but we do not want to teach these absurdities. In the real world, people trade with each other, not with “the market.” And some of them, at least, are price makers. To make things worse, textbooks generally allude to some mysterious “invisible hand” that allocates goods optimally. They wrongly attribute this idea to Adam Smith and make use of his authority so that students accept this magical way of thinking as a kind of proof.

Perfect competition in the general equilibrium mode is perhaps an interesting model for describing a central planner who is trying to find an efficient allocation of resources using prices as signals that guide price taker households and firms. But students should be told that the course they follow—on “general competitive analysis”—is irrelevant for understanding market economies.

Emmanuelle Benicourt & Bernard Guerrien

I can’t but agree with these two eminent French mathematical economists. You could, of course, as Brad DeLong has asserted, consider modern neoclassical economics to be in fine shape ”as long as it is understood as the ideological and substantive legitimating doctrine of the political theory of possessive individualism” and turn a blind eye to all the caveats to its general equilibrium models — markets must be in equilibrium and competitive, the goods traded must be excludable and non-rival, etc. The list of caveats soon becomes impressively large — and not very much value is left of ”modern neoclassical economics” if you ask me …

Still — almost a century and a half after Léon Walras founded neoclassical general equilibrium theory — ”modern neoclassical economics” hasn’t been able to show that markets move economies to equilibria.
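What would have to be shown is that some actual process moves prices to their equilibrium values. The textbook story tacitly delegates that job to a fictitious Walrasian auctioneer. A minimal sketch of what he is assumed to do, with hypothetical linear schedules (my illustration, not anyone’s estimated model):

```python
# A tatonnement sketch: a fictitious auctioneer -- an entity that is not
# itself an agent in the economy -- polls aggregate demand and supply and
# nudges the price in proportion to excess demand. No trade takes place
# until the process has come to rest. All numbers are illustrative.

def demand(p: float) -> float:
    return 100.0 - 2.0 * p        # assumed aggregate demand schedule

def supply(p: float) -> float:
    return 10.0 + 1.0 * p         # assumed aggregate supply schedule

p = 5.0                           # arbitrary starting price
for _ in range(1000):
    excess = demand(p) - supply(p)
    if abs(excess) < 1e-9:
        break
    p += 0.1 * excess             # raise price under excess demand, else lower

print(f"auctioneer's resting point: p = {p:.2f}")  # 30.00 in this rigged case
```

In this deliberately rigged one-good case the process converges. With several goods it need not, as Scarf’s well-known counterexamples show, which is exactly why the stability question has remained open.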

We do know that — under very restrictive assumptions — equilibria do exist, are unique and are Pareto-efficient. But one has to ask oneself: what good does that do?

As long as we cannot show, except under exceedingly special assumptions, that there are convincing reasons to suppose there are forces leading economies to equilibria, the value of general equilibrium theory is negligible. As long as we cannot really demonstrate that there are forces operating — under reasonable, relevant and at least mildly realistic conditions — that move markets towards equilibria, there cannot really be any sustainable reason for anyone to pay any attention to this theory.

A stability that can only be proved by assuming ”Santa Claus” conditions is of no avail. Most people do not believe in Santa Claus anymore. And for good reasons. Santa Claus is for kids, and general equilibrium economists ought to grow up.

Continuing to model a world full of agents behaving as economists — ”often wrong, but never uncertain” — while still not being able to show that the system under reasonable assumptions converges to equilibrium (or simply assuming the problem away) is a gross misallocation of intellectual resources and time.

The Bernanke-Summers imbroglio

10 April 2015 at 18:24 | Posted in Economics | 6 comments

As no one interested in macroeconomics has failed to notice, Ben Bernanke is having a debate with Larry Summers on what’s behind the slow recovery of growth rates since the financial crisis of 2007.

To Bernanke it’s basically a question of a savings glut.

To Summers it’s basically a question of a secular decline in the level of investment.

To me the debate is actually a non-starter, since they both rely on a loanable funds theory and a Wicksellian notion of a ”natural” rate of interest — ideas that have been known to be dead wrong for at least 80 years …

Let’s start with the Wicksellian connection and consider what Keynes wrote in General Theory:

In my Treatise on Money I defined what purported to be a unique rate of interest, which I called the natural rate of interest, namely, the rate of interest which, in the terminology of my Treatise, preserved equality between the rate of saving (as there defined) and the rate of investment. I believed this to be a development and clarification of Wicksell’s ‘natural rate of interest’, which was, according to him, the rate which would preserve the stability of some, not quite clearly specified, price-level.

I had, however, overlooked the fact that in any given society there is, on this definition, a different natural rate of interest for each hypothetical level of employment. And, similarly, for every rate of interest there is a level of employment for which that rate is the ‘natural’ rate, in the sense that the system will be in equilibrium with that rate of interest and that level of employment. Thus it was a mistake to speak of the natural rate of interest or to suggest that the above definition would yield a unique value for the rate of interest irrespective of the level of employment. I had not then understood that, in certain conditions, the system could be in equilibrium with less than full employment.

I am now no longer of the opinion that the [Wicksellian] concept of a ‘natural’ rate of interest, which previously seemed to me a most promising idea, has anything very useful or significant to contribute to our analysis. It is merely the rate of interest which will preserve the status quo; and, in general, we have no predominant interest in the status quo as such.

And when it comes to the loanable funds theory, this is really in many regards nothing but an approach where the ruling rate of interest in society is — pure and simple — conceived as nothing else than the price of loans or credit, determined by supply and demand — as Bertil Ohlin put it — ”in the same way as the price of eggs and strawberries on a village market.”

In the traditional loanable funds theory — as presented in mainstream macroeconomics textbooks — the amount of loans and credit available for financing investment is constrained by how much saving is available. Saving is the supply of loanable funds, investment is the demand for loanable funds, and investment is assumed to be negatively related to the interest rate. Lowering households’ consumption means increasing saving, which, via a lower rate of interest, is supposed to call forth more investment.
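In stripped-down form (my notation, purely illustrative) the theory says that the rate of interest r clears the credit market:

S(r) = I(r), with S′(r) > 0 and I′(r) < 0,

so that a greater willingness to save shifts the S schedule outwards, lowers r, and thereby calls forth more investment. What follows is, in effect, a critique of reading this equilibrium condition as a causal story.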

From a more Post-Keynesian-Minskyite point of view the problems with the standard presentation and formalization of the loanable funds theory are quite obvious.

As James Meade noticed decades ago, the causal story told to explicate the accounting identities used gives the picture of ”a dog called saving wagged its tail labelled investment.” In Keynes’s view — later confirmed over and over again by empirical research — it is not so much the interest rate at which firms can borrow that causally determines the amount of investment undertaken, but rather their internal funds, profit expectations and capacity utilization.

As is typical of most mainstream macroeconomic formalizations and models, there is very little mention of real-world phenomena — money, credit rationing, the existence of multiple interest rates — in the loanable funds theory. It essentially reduces modern monetary economies to something akin to barter systems — something they definitely are not. As emphasized especially by Minsky, to understand and explain how much investment/lending/crediting is going on in an economy, it is much more important to focus on the workings of financial markets than to stare at accounting identities like S = Y – C – G. The problems we meet in modern markets today have more to do with inadequate financial institutions than with the size of loanable-funds saving.

In the loanable funds theory the interest rate is endogenized by assuming that central banks can (try to) adjust it in response to a possible output gap. This, of course, is essentially nothing but an assumption of Walras’ law being valid and applicable, and that a fortiori the attainment of equilibrium is secured by the central banks’ interest rate adjustments. From a realist Keynes-Minsky point of view this cannot be considered anything other than a belief resting on nothing but sheer hope. [Not to mention that more and more central banks actually choose not to follow Taylor-like policy rules.] The age-old belief that central banks control the money supply has more and more come to be questioned and replaced by an ”endogenous money” view, and I think the same will happen to the view that central banks determine ”the” rate of interest.

A further problem in the traditional loanable funds theory is that it assumes that saving and investment can be treated as independent entities. To Keynes this was seriously wrong:

The classical theory of the rate of interest [the loanable funds theory] seems to suppose that, if the demand curve for capital shifts or if the curve relating the rate of interest to the amounts saved out of a given income shifts or if both these curves shift, the new rate of interest will be given by the point of intersection of the new positions of the two curves. But this is a nonsense theory. For the assumption that income is constant is inconsistent with the assumption that these two curves can shift independently of one another. If either of them shift, then, in general, income will change; with the result that the whole schematism based on the assumption of a given income breaks down … In truth, the classical theory has not been alive to the relevance of changes in the level of income or to the possibility of the level of income being actually a function of the rate of investment.

There are always (at least) two parties to an economic transaction. Savers and investors have different liquidity preferences and face different choices — and their interactions usually take place only when intermediated by financial institutions. This, importantly, also means that there is no ”direct and immediate” automatic interest mechanism at work in modern monetary economies. What this ultimately boils down to is — again — that what happens at the microeconomic level, both in and out of equilibrium, is not always compatible with the macroeconomic outcome. The fallacy of composition (Keynes’s ”atomistic fallacy”) has many faces — loanable funds is one of them.

Contrary to the loanable funds theory, finance in the world of Keynes and Minsky precedes investment and saving. Highlighting the loanable funds fallacy, Keynes wrote in ”The Process of Capital Formation” (1939):

Increased investment will always be accompanied by increased saving, but it can never be preceded by it. Dishoarding and credit expansion provides not an alternative to increased saving, but a necessary preparation for it. It is the parent, not the twin, of increased saving.

So, by way of conclusion: what I think both Bernanke and Summers ”forget” when they hold to the loanable funds theory and the Wicksellian concept of a ”natural” rate of interest is the Keynes-Minsky insight that finance — in all its different shapes — has its own dimension and, if taken seriously, must modify the whole theoretical system, not just be added as an unsystematic appendage. Finance is fundamental to our understanding of modern economies, and acting like the baker’s apprentice who, having forgotten to add yeast to the dough, throws it into the oven afterwards, simply isn’t enough.

I may be too bold, but I’m willing to take the risk, and so recommend that both Bernanke and Summers make the following addition to their reading lists …

It should be emphasized that the equality between savings and investment … will be valid under all circumstances. In particular, it will be independent of the level of the rate of interest which was customarily considered in economic theory to be the factor equilibrating the demand for and supply of new capital. In the present conception investment, once carried out, automatically provides the savings necessary to finance it. Indeed, in our simplified model, profits in a given period are the direct outcome of capitalists’ consumption and investment in that period. If investment increases by a certain amount, savings out of profits are pro tanto higher …

One important consequence of the above is that the rate of interest cannot be determined by the demand for and supply of new capital because investment ‘finances itself.’
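Michał Kalecki

Kalecki’s logic can be written out in two lines (my notation, for his simplified model in which workers spend what they earn and there is no government or foreign sector):

P = Ck + I (gross profits = capitalists’ consumption plus investment)

S = P − Ck = I (saving out of profits automatically equals investment)

The causality runs from right to left: capitalists’ decisions to consume and invest determine profits, and with them the saving that ‘finances’ the investment, at whatever rate of interest happens to rule.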

Nicholas Kaldor on putting the cart before the horse fallacy

10 April 2015 at 15:15 | Posted in Economics | 2 comments

Foreseeing the future is difficult. But sometimes it seems as though some people get it terribly right …


Some day the nations of Europe may be ready to merge their national identities and create a new European Union – the United States of Europe. If and when they do, a European Government will take over all the functions which the Federal government now provides in the U.S., or in Canada or Australia. This will involve the creation of a “full economic and monetary union”. But it is a dangerous error to believe that monetary and economic union can precede a political union or that it will act (in the words of the Werner report) “as a leaven for the evolvement of a political union which in the long run it will in any case be unable to do without”. For if the creation of a monetary union and Community control over national budgets generates pressures which lead to a breakdown of the whole system it will prevent the development of a political union, not promote it.

Nicholas Kaldor (1971)

What’s the difference between heterodox and orthodox economics?

9 April 2015 at 21:25 | Posted in Economics | 2 comments

[Table: the presuppositions of the heterodox/post-Keynesian and orthodox research programmes, from Marc Lavoie]

Marc Lavoie comes up with this table — in his new book Post-Keynesian Economics: New Foundations — when trying to identify the essential differences between these two research programmes in economics. Interesting and provocative.

Data mining — a Keynesian perspective

9 April 2015 at 18:03 | Posted in Statistics & Econometrics | Comments Off on Data mining — a Keynesian perspective

It will be remembered that the seventy translators of the Septuagint were shut up in seventy separate rooms with the Hebrew text and brought out with them, when they emerged, seventy identical translations. Would the same miracle be vouchsafed if seventy multiple correlators were shut up with the same statistical material? And anyhow, I suppose, if each had a different economist perched on his a priori, that would make a difference to the outcome.

J M Keynes
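Keynes’s worry is easy to reproduce. A minimal sketch (simulated noise, no real data): hand the same meaningless data set to seventy ‘multiple correlators’, let each try his own regressor, and someone will come home with an impressively ‘significant’ fit.

```python
import numpy as np

# A minimal sketch of the seventy correlators: one pure-noise dependent
# variable, seventy unrelated candidate regressors, and a search for the
# most "significant" fit. Everything here is simulated.
rng = np.random.default_rng(0)
n, k = 100, 70
y = rng.normal(size=n)            # the "dependent" variable: pure noise
X = rng.normal(size=(n, k))       # seventy unrelated candidate regressors

best_t = 0.0
for j in range(k):                # each correlator tries his own regressor
    x = X[:, j]
    beta = x @ y / (x @ x)        # OLS slope (no intercept, for brevity)
    resid = y - beta * x
    se = np.sqrt(resid @ resid / (n - 1) / (x @ x))
    best_t = max(best_t, abs(beta / se))

print(f"largest |t| among {k} noise regressors: {best_t:.2f}")  # often > 2.5
```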

What is interest?

9 April 2015 at 09:01 | Posted in Economics | Comments Off on What is interest?

Last night Swedish Radio P1 broadcast a programme about interest. Yours truly took part and tried to sort out why we have interest, what determines how high or low it is, and what effects on the economy today’s negative interest rates may give rise to.

On randomness and information (wonkish)

8 April 2015 at 08:23 | Posted in Statistics & Econometrics | Comments Off on On randomness and information (wonkish)

 

Random walk simulation (wonkish)

7 April 2015 at 13:05 | Posted in Economics | Comments Off on Random walk simulation (wonkish)
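For anyone who wants to run the simulation themselves, a minimal sketch of a random walk, the cumulative sum of fair coin-toss steps (illustrative only):

```python
import numpy as np
import matplotlib.pyplot as plt

# A minimal random walk simulation: five sample paths, each the cumulative
# sum of 1,000 fair coin-toss steps (+1 or -1). Note how far individual
# paths drift from zero even though every step has mean zero.
rng = np.random.default_rng(7)
steps = rng.choice([-1, 1], size=(5, 1000))  # five walks, 1000 tosses each
paths = steps.cumsum(axis=1)                 # running totals = the walks

plt.plot(paths.T)
plt.xlabel("step")
plt.ylabel("position")
plt.title("Five realizations of a simple random walk")
plt.show()
```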

 

Modern macroeconomics — an intellectually lazy ideology

6 April 2015 at 15:10 | Posted in Economics | 4 comments

I would defend using the assumption of rational choice as long as one realises that it is not a description of reality.

But there is one area where for 30 years economists – and others – have been making that mistake. That is unfortunately, of course, in the financial markets. Practitioners and policy makers acted as if the strong form of the Efficient Markets Hypothesis held true – in other words that prices instantly reflect all relevant information about the future – even though this evidently defies reality …

I think an honest conventionally-trained economist has to at least acknowledge that we grew intellectually lazy about this. Although we all knew at some level that the rational choice assumption was being made to bear too much weight, very few economists openly challenged its everyday use in justifying public policy decisions. Very few of us put this weight on it in our own work. But not all that many economists challenged its pervasive use in the public policy world …

The financial and economic crisis also spells a crisis for certain areas of economics, or approaches to economics. Financial economics and macroeconomics are particularly vulnerable. They are the subject areas where the consequences of the standard assumptions have been most damaging, because they are actually least valid. Financial market traders are not remotely like Star Trek’s Mr Spock, making rational calculations unaffected by emotion or by the decisions of other people. Macroeconomics – the study of how millions of individual decisions aggregate into economy-wide measures – is essentially ideological. How macroeconomists answer a question like ‘What will be the effect of cutting the budget deficit on growth next year?’ depends on their political views. This is not remotely a scientific area of the discipline. The consensus about macroeconomics during what’s been described as ‘the Great Moderation’ of the 1990s has entirely broken down.

Diane Coyle

On the consistency of microfounded macromodels

4 April 2015 at 18:16 | Posted in Economics | 1 comment

”New Keynesian” macroeconomist Simon Wren-Lewis has a post up on his blog, trying to answer a question posed by Brad DeLong, on why microfounded models dominate modern macro:

Brad DeLong asks why the New Keynesian (NK) model, which was originally put forth as simply a means of demonstrating how sticky prices within an RBC framework could produce Keynesian effects, has managed to become the workhorse of modern macro, despite its many empirical deficiencies …

Why are microfounded models so dominant? From my perspective this is a methodological question, about the relative importance of ‘internal’ (theoretical) versus ‘external’ (empirical) consistency …

I think this has two implications for those who want to question the microfoundations hegemony. The first is that the discussion needs to be about methodology, rather than individual models. Deficiencies with particular microfounded models, like the NK model, are generally well understood, and from a microfoundations point of view simply provide an agenda for more research. Second, lack of familiarity with methodology means that this discussion cannot presume knowledge that is not there … That makes discussion difficult, but I’m not sure it makes it impossible.

Indeed — this is certainly a question of methodology. And it shows the danger of neglecting methodological issues — issues mainstream economists regularly almost take pride in neglecting.

Being able to model a credible world, a world that somehow could be considered real or similar to the real world, is not the same as investigating the real world. Even though all theories are false, since they simplify, they may still possibly serve our pursuit of truth. But then they cannot be unrealistic or false in just any way: the falsehood or unrealisticness has to be qualified (in terms of resemblance, relevance, etc.). At the very least, the minimalist demand on models in terms of credibility has to give way to a stronger epistemic demand of appropriate similarity and plausibility. One could of course also ask for a sensitivity or robustness analysis, but the credible world, even after having been tested for sensitivity and robustness, can still be far away from reality — and unfortunately often in ways we know are important. Robustness of claims in a model does not per se give a warrant for exporting the claims to real-world target systems.

Questions of external validity are especially important when it comes to microfounded macromodels. It can never be enough that these models somehow are regarded as internally consistent. One always also has to pose questions of consistency with the data. Internal consistency without external validity is worth nothing.

Yours truly and people like Tony Lawson have for many years been urging economists to pay attention to the ontological foundations of their assumptions and models. Sad to say, economists have not paid much attention — and so modern economics has become increasingly irrelevant to the understanding of the real world.

Within mainstream economics internal validity is still everything and external validity nothing. Why anyone should be interested in that kind of theories and models is beyond imagination. As long as mainstream economists do not come up with any export licences for their theories and models to the real world in which we live, they really should not be surprised if people say that this is not science, but autism!

Since fully-fledged experiments on a societal scale as a rule are prohibitively expensive, ethically indefensible or unmanageable, economic theorists have to substitute experimenting with something else. To understand and explain relations between different entities in the real economy the predominant strategy is to build models and make things happen in these “analogue-economy models” rather than engineering things happening in real economies.

Formalistic deductive “Glasperlenspiel” can be very impressive and seductive. But in the realm of science it ought to be considered of little or no value to simply make claims about the model and lose sight of reality.

Neoclassical economics has long since given up on the real world and contents itself with proving things about thought-up worlds. Empirical evidence only plays a minor role in economic theory, where models largely function as a substitute for empirical evidence. Hopefully humbled by the manifest failure of its theoretical pretences, the one-sided, almost religious, insistence on axiomatic-deductivist modelling as the only scientific activity worthy of pursuing in economics will give way to methodological pluralism based on ontological considerations rather than formalistic tractability.

To have valid evidence is not enough. What economics needs is sound evidence. Why? Simply because the premises of a valid argument do not have to be true, whereas a sound argument is not only valid but also builds on premises that are true. Aiming only for validity, without soundness, sets the aspiration level of economics too low for developing a realist and relevant science.

Studying mathematics and logic is interesting and fun. It sharpens the mind. In pure mathematics and logic we do not have to worry about external validity. But economics is not pure mathematics or logic. It’s about society. The real world. Forgetting that, economics is really in dire straits.

Hicks abandoned IS-LM 35 years ago. Now it’s time for Krugman!

3 April 2015 at 18:35 | Posted in Economics | 3 comments

As we all know Paul Krugman is very fond of referring to and defending the old and dear IS-LM model.

John Hicks, the man who invented it in his 1937 Econometrica review of Keynes’s General Theory — ”Mr. Keynes and the ‘Classics’: A Suggested Interpretation” — returned to it in a 1980 article in the Journal of Post Keynesian Economics — ”IS-LM: An Explanation”. Self-critically he wrote:

I accordingly conclude that the only way in which IS-LM analysis usefully survives — as anything more than a classroom gadget, to be superseded, later on, by something better – is in application to a particular kind of causal analysis, where the use of equilibrium methods, even a drastic use of equilibrium methods, is not inappropriate. I have deliberately interpreted the equilibrium concept, to be used in such analysis, in a very stringent manner (some would say a pedantic manner) not because I want to tell the applied economist, who uses such methods, that he is in fact committing himself to anything which must appear to him to be so ridiculous, but because I want to ask him to try to assure himself that the divergences between reality and the theoretical model, which he is using to explain it, are no more than divergences which he is entitled to overlook. I am quite prepared to believe that there are cases where he is entitled to overlook them. But the issue is one which needs to be faced in each case.

When one turns to questions of policy, looking toward the future instead of the past, the use of equilibrium methods is still more suspect. For one cannot prescribe policy without considering at least the possibility that policy may be changed. There can be no change of policy if everything is to go on as expected — if the economy is to remain in what (however approximately) may be regarded as its existing equilibrium. It may be hoped that, after the change in policy, the economy will somehow, at some time in the future, settle into what may be regarded, in the same sense, as a new equilibrium; but there must necessarily be a stage before that equilibrium is reached …

I have paid no attention, in this article, to another weakness of IS-LM analysis, of which I am fully aware; for it is a weakness which it shares with General Theory itself. It is well known that in later developments of Keynesian theory, the long-term rate of interest (which does figure, excessively, in Keynes’ own presentation and is presumably represented by the r of the diagram) has been taken down a peg from the position it appeared to occupy in Keynes. We now know that it is not enough to think of the rate of interest as the single link between the financial and industrial sectors of the economy; for that really implies that a borrower can borrow as much as he likes at the rate of interest charged, no attention being paid to the security offered. As soon as one attends to questions of security, and to the financial intermediation that arises out of them, it becomes apparent that the dichotomy between the two curves of the IS-LM diagram must not be pressed too hard.

Back in 1937 John Hicks said that he was building a model of John Maynard Keynes’ General Theory. He wasn’t.

What Hicks acknowledged in 1980 is basically that his original review totally ignored the very core of Keynes’s theory — uncertainty. In doing this he actually put the train of macroeconomics on the wrong track for decades. It’s about time that neoclassical economists — such as Krugman, Mankiw, or what have you — set the record straight and stop promoting something that its creator himself admitted was a total failure. Why not study the real thing itself — General Theory — in full, and without looking the other way when it comes to non-ergodicity and uncertainty?

In a recent op-ed dated March 14, “John and Maynard’s Excellent Adventure”, Paul Krugman defends John Hicks’ original 1937 interpretation of Keynes’s General Theory that cast macroeconomics within a general equilibrium framework, but without the current insistence on the micro foundations that so concerns today’s general equilibrium macro theorists …

But while we agree with Krugman’s criticism of the hordes of “micro-foundation” revisionists that now dominate economics, we are curious why he did not mention Sir John Hicks’s recantation of IS-LM analysis (see “IS-LM: An Explanation”, Journal of Post Keynesian Economics, 3 (2) (Winter 1980-81)). Many of us remain deeply sceptical about the usefulness of the IS-LM framework for interpreting a real world characterized by uncertainty, crises, and institutional transformations that hardly bring the economy towards any equilibrium, never mind “general” equilibrium. But even if we abstract from these complications with the usual excuse of rendering the analysis simple for pedagogic purposes, the original Hicksian IS-LM model and its various textbook extensions (usually constructed with some sort of Phillips curve add-on) are extremely problematic. The difficulties have really little to do with the view that it’s too aggregative by representing only three markets: product, money, and bond markets – which is the criticism to which Krugman seems to be pre-emptively alluding in his article.

The first and obvious problem is that, even in the three-market aggregative model, there can never be such a thing, even at the conceptual level, called general equilibrium. To get that we must presume that there are independent functions of investment and saving and, at the same time, independent demand and supply functions for money. But one of the most basic criticisms that Keynes himself had come to recognize immediately after writing the General Theory is that the supply of money is not some exogenous variable that can be independently pitted against a distinct demand for money function. In a sophisticated monetary economy, the supply of money must be treated as a purely endogenous variable as many modern post-Keynesians and also neo-Wicksellians have come to recognize. Hence, the idea of money market equilibrium is meaningless, since one cannot conceptually ever be out of equilibrium when the two cannot be defined independently of one another …

Heterodox economists have traditionally rejected the IS-LM approach for many such reasons; but, even if one were to hold one’s nose, in an uncertain world in which investment is governed by animal spirits (and therefore I being interest inelastic unless inclusive of household spending) and in a world of endogenous money, at best, the IS curve can be represented by a vertical line (or a more elastic relation when including household spending, an IS’ curve). In much the same way, the LM curve can be represented by a horizontal line at any level of interest rates set by the central bank (as shown in the figure below). What insights can such a tool of analysis really offer economists? It suggests that an increase in autonomous spending will generate increases in output without any “crowding out” effect arising through higher interest rates. But one hardly needs an IS-LM framework whose truly central feature is the role played by interest rates to infer that! In our humble opinion, Hicksian IS-LM analysis cannot offer us very much and this is why even Sir John Hicks himself eventually abandoned it almost 35 years ago. And so should Paul Krugman!

Mario Seccareccia & Marc Lavoie
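Their vertical-IS/horizontal-LM point can be put in two lines (my sketch of their argument, not their model). With interest-inelastic investment and an exogenous policy rate r*:

Y = (C0 + I0)/(1 − c) (the IS ”curve”: a vertical line in (Y, r) space)

r = r* (the LM ”curve”: a horizontal line at the central bank’s rate)

An increase in autonomous spending C0 or I0 raises Y by the multiplier 1/(1 − c), with no crowding out working through r, which, as they say, hardly requires an IS-LM apparatus to see.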

Sarah

3 April 2015 at 11:15 | Posted in Varia | Comments Off on Sarah

 

Monetarist obsessions

3 April 2015 at 09:17 | Posted in Economics | 1 comment

Milton Friedman had an obsession with money, and some of us thought it sometimes went a little bit too far …

Everything reminds Milton of the money supply. Well, everything reminds me of sex, but I keep it out of my papers.

Robert Solow

 
