IS-LM is bad economics no matter what Krugman says

20 Mar, 2013 at 14:26 | Posted in Economics | 13 Comments

Paul Krugman has a post up on his blog once again defending “the whole enterprise of Keynes/Hicks macroeconomic theory” and especially his own somewhat idiosyncratic version of IS-LM.

The main problem is simpliciter that there is no such thing as a Keynes-Hicks macroeconomic theory!

So, let us get some things straight.

There is nothing in the post-General Theory writings of Keynes that suggests he considered Hicks’s IS-LM anywhere near a faithful rendering of his thought. In Keynes’s canonical statement of the essence of his theory, the 1937 QJE article, there is nothing to even suggest that Keynes would have regarded a Keynes-Hicks IS-LM theory as anything but pure nonsense. So of course there can’t be any “vindication for the whole enterprise of Keynes/Hicks macroeconomic theory” – simply because “Keynes/Hicks” never existed.

And it gets even worse!

John Hicks, the man who invented IS-LM in his 1937 Econometrica review of Keynes’s General Theory – “Mr. Keynes and the ‘Classics’: A Suggested Interpretation” – returned to it in a 1980 article – “IS-LM: An Explanation” – in the Journal of Post Keynesian Economics. Self-critically he wrote:

I accordingly conclude that the only way in which IS-LM analysis usefully survives — as anything more than a classroom gadget, to be superseded, later on, by something better – is in application to a particular kind of causal analysis, where the use of equilibrium methods, even a drastic use of equilibrium methods, is not inappropriate. I have deliberately interpreted the equilibrium concept, to be used in such analysis, in a very stringent manner (some would say a pedantic manner) not because I want to tell the applied economist, who uses such methods, that he is in fact committing himself to anything which must appear to him to be so ridiculous, but because I want to ask him to try to assure himself that the divergences between reality and the theoretical model, which he is using to explain it, are no more than divergences which he is entitled to overlook. I am quite prepared to believe that there are cases where he is entitled to overlook them. But the issue is one which needs to be faced in each case.

When one turns to questions of policy, looking toward the future instead of the past, the use of equilibrium methods is still more suspect. For one cannot prescribe policy without considering at least the possibility that policy may be changed. There can be no change of policy if everything is to go on as expected – if the economy is to remain in what (however approximately) may be regarded as its existing equilibrium. It may be hoped that, after the change in policy, the economy will somehow, at some time in the future, settle into what may be regarded, in the same sense, as a new equilibrium; but there must necessarily be a stage before that equilibrium is reached …

I have paid no attention, in this article, to another weakness of IS-LM analysis, of which I am fully aware; for it is a weakness which it shares with General Theory itself. It is well known that in later developments of Keynesian theory, the long-term rate of interest (which does figure, excessively, in Keynes’ own presentation and is presumably represented by the r of the diagram) has been taken down a peg from the position it appeared to occupy in Keynes. We now know that it is not enough to think of the rate of interest as the single link between the financial and industrial sectors of the economy; for that really implies that a borrower can borrow as much as he likes at the rate of interest charged, no attention being paid to the security offered. As soon as one attends to questions of security, and to the financial intermediation that arises out of them, it becomes apparent that the dichotomy between the two curves of the IS-LM diagram must not be pressed too hard.
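For readers who have never seen the apparatus Hicks is discussing, the standard textbook IS-LM system – a conventional rendering, not taken from Hicks’s article – consists of two equilibrium conditions in income Y and the interest rate r:

$$\text{IS:}\qquad Y = C(Y - T) + I(r) + G$$
$$\text{LM:}\qquad \frac{M}{P} = L(Y, r)$$

The IS curve traces the combinations of Y and r that clear the goods market, the LM curve those that clear the money market. Hicks’s self-criticism above is precisely that, once security and financial intermediation are taken seriously, the dichotomy between the two curves cannot be pressed too hard.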

 
The editor of JPKE, Paul Davidson, gives the background to Hicks’s article:

I originally published an article about Keynes’s finance motive — which in 1937 Keynes added to his other liquidity preference motives (transactions, precautionary, and speculative motives). I showed that adding this finance motive required Hicks’s IS and LM curves to be interdependent — and thus that when the IS curve shifted, so would the LM curve.
Hicks and I then discussed this when we met several times.
When I first started to think about the ergodic vs. nonergodic dichotomy, I sent Hicks some preliminary drafts of articles I would be writing about nonergodic processes. Then John and I met several times to discuss this matter further, and I finally convinced him to write the article — which I published in the Journal of Post Keynesian Economics — in which he renounces the IS-LM apparatus. Hicks then wrote me a letter in which he said he thought the word nonergodic was wonderful and that he wanted to label his approach to macroeconomics as nonergodic!

So – back in 1937 John Hicks said that he was building a model of John Maynard Keynes’s General Theory. In 1980 he openly admitted that he hadn’t been.

What Hicks acknowledged in 1980 is basically that his original review totally ignored the very core of Keynes’s theory – uncertainty. In doing so he put the train of macroeconomics on the wrong track for decades. It’s about time that neoclassical economists – Krugman, Mankiw, or whoever – set the record straight and stop promoting something that its creator himself admitted was a failure. Why not study the real thing – the General Theory – in full, without looking the other way when it comes to non-ergodicity and uncertainty?

Paul Krugman persists in talking about a Keynes-Hicks-IS-LM-model that really never existed. It’s deeply disappointing. You would expect more from a Nobel prize winner.

The Swedish school system – one of the most segregated in the world

20 Mar, 2013 at 10:29 | Posted in Education & School | Comments Off on The Swedish school system – one of the most segregated in the world

International research points to the dangers of full freedom of choice in the Swedish school system. Since the independent-school reform of the early 1990s, the Swedish tax-funded school system has developed into the most market-oriented and competition-exposed in the world. According to Henry Levin, professor of the economics of education at Teachers College, Columbia University in the USA, this carries great dangers, and he warns that the current Swedish school system is pushing segregation even further.

Listen to today’s P1 interview with Levin here.

Misunderstanding the p-value – here we go again

19 Mar, 2013 at 20:59 | Posted in Statistics & Econometrics | 6 Comments

A non-trivial part of teaching statistics consists of teaching students to perform significance testing. A problem I have noticed repeatedly over the years, however, is that no matter how careful you try to be in explicating what the probabilities generated by these statistical tests – p-values – really are, most students still misinterpret them.

Giving a statistics course for the Swedish National Research School in History, I asked the students at the exam to explain how one should correctly interpret p-values. Although the correct definition is p(data|null hypothesis), a majority of the students either misinterpreted the p-value as the likelihood of a sampling error (which of course is wrong, since the very computation of the p-value is based on the assumption that sampling errors are what cause the sample statistic not to coincide with the null hypothesis) or as the probability of the null hypothesis being true, given the data (which of course is also wrong, since that is p(null hypothesis|data) rather than the correct p(data|null hypothesis)).

This is not to be blamed on students’ ignorance, but rather on significance testing not being particularly transparent (conditional probability inference is difficult even for those of us who teach and practice it). A lot of researchers fall prey to the same mistakes. So – given that it is anyway very unlikely that any population parameter is exactly zero, and that, contrary to assumption, most samples in social science and economics are neither random nor of the right distributional shape – why continue to press students and researchers to do null hypothesis significance testing, testing that relies on a weird backward logic that students and researchers usually don’t understand?
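To see concretely what the conditional nature of the p-value amounts to, here is a minimal simulation sketch – my own illustration, not part of the original argument – in which the share of true null hypotheses and the effect size are arbitrary assumptions:

```python
# Minimal sketch (assumed numbers) of p(data | H0) versus p(H0 | data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1234)
n_obs, n_experiments = 30, 10_000
share_true_nulls = 0.5      # assumption: half of all studied effects are truly zero
effect_size = 0.3           # assumption: effect size when the null is false

null_true, significant = [], []
for _ in range(n_experiments):
    h0_true = rng.random() < share_true_nulls
    mu = 0.0 if h0_true else effect_size
    sample = rng.normal(mu, 1.0, size=n_obs)
    p_value = stats.ttest_1samp(sample, 0.0).pvalue
    null_true.append(h0_true)
    significant.append(p_value < 0.05)

null_true, significant = np.array(null_true), np.array(significant)

# Conditional on the null being true, about 5% of p-values fall below 0.05 ...
print("P(p < 0.05 | H0 true) =", significant[null_true].mean())
# ... but conditional on 'significance', the share of true nulls is another number entirely.
print("P(H0 true | p < 0.05) =", null_true[significant].mean())
```

Under these assumptions the first number hovers around 0.05 by construction, while the second depends on the assumed prior share of true nulls and on the power of the test – which is exactly why reading the p-value as the probability of the null hypothesis being true is wrong.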

That media often misunderstand what p-values and significance testing are all about is well-known. Andrew Gelman gives a recent telling example:

The New York Times has a feature in its Tuesday science section, Take a Number … Today’s column, by Nicholas Bakalar, is in error. The column begins:

“When medical researchers report their findings, they need to know whether their result is a real effect of what they are testing, or just a random occurrence. To figure this out, they most commonly use the p-value.”

This is wrong on two counts. First, whatever researchers might feel, this is something they’ll never know. Second, results are a combination of real effects and chance, it’s not either/or.

Perhaps the above is a forgivable simplification, but I don’t think so; I think it’s a simplification that destroys the reason for writing the article in the first place. But in any case I think there’s no excuse for this, later on:

“By convention, a p-value higher than 0.05 usually indicates that the results of the study, however good or bad, were probably due only to chance.”

This is the old, old error of confusing p(A|B) with p(B|A). I’m too rushed right now to explain this one, but it’s in just about every introductory statistics textbook ever written. For more on the topic, I recommend my recent paper, P Values and Statistical Practice, which begins:

“The casual view of the P value as posterior probability of the truth of the null hypothesis is false and not even close to valid under any reasonable model, yet this misunderstanding persists even in high-stakes settings … The formal view of the P value as a probability conditional on the null is mathematically correct but typically irrelevant to research goals (hence, the popularity of alternative—if wrong—interpretations) …”

I can’t get too annoyed at science writer Bakalar for garbling the point—it confuses lots and lots of people—but, still, I hate to see this error in the newspaper.

On the plus side, if a newspaper column runs 20 times, I guess it’s ok for it to be wrong once—we still have 95% confidence in it, right?

Statistical significance doesn’t say that something is important or true. And since there already are far better and more relevant tests that can be done (see, e.g., here and here), it is high time to give up on this statistical fetish.

The limits to probabilistic reasoning

19 Mar, 2013 at 17:40 | Posted in Statistics & Econometrics, Theory of Science & Methodology | Comments Off on The limits to probabilistic reasoning

Probabilistic reasoning in science – especially Bayesianism – reduces questions of rationality to questions of internal consistency (coherence) of beliefs, but – even granted this questionable reductionism – it is not self-evident that rational agents really have to be probabilistically consistent. There is no strong warrant for believing so. Rather, there is strong evidence that we run into huge problems if we let probabilistic reasoning become the dominant method for doing research in the social sciences on problems that involve risk and uncertainty.

In many of the situations that are relevant to economics one could argue that there is simply not enough adequate and relevant information to ground beliefs of a probabilistic kind, and that in those situations it is not really possible, in any relevant way, to represent an individual’s beliefs in a single probability measure.

Say you have come to learn (based on your own experience and tons of data) that the probability of you becoming unemployed in Sweden is 10%. Having moved to another country (where you have no experience of your own and no data), you have no information on unemployment and a fortiori nothing on which to base any probability estimate. A Bayesian would, however, argue that you would have to assign probabilities to the mutually exclusive alternative outcomes and that these have to add up to 1 if you are rational. That is, in this case – and based on symmetry – a rational individual would have to assign a probability of 10% to becoming unemployed and 90% to becoming employed.

That feels intuitively wrong, though, and I guess most people would agree. Bayesianism cannot distinguish symmetry-based probabilities grounded in information from symmetry-based probabilities grounded in an absence of information. In these kinds of situations most of us would rather say that it is simply irrational to be a Bayesian, and better to admit that we “simply do not know” or that we feel ambiguous and undecided. Arbitrary and ungrounded probability claims are more irrational than being undecided in the face of genuine uncertainty, so if there is not sufficient information to ground a probability distribution it is better to acknowledge that simpliciter, rather than pretending to possess a certitude that we simply do not possess.
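A small numerical sketch – my own illustration, not from the post – of the point about single probability numbers: two agents can report exactly the same probability of becoming unemployed even though one has no information whatsoever and the other has a large sample behind the number.

```python
# Sketch (assumed numbers): identical point probabilities, radically different evidence.
from scipy import stats

# Agent A: no data at all, pure principle-of-indifference prior.
agent_a = stats.beta(1, 1)
# Agent B: 500 observed unemployment spells in 1000 comparable cases.
agent_b = stats.beta(1 + 500, 1 + 500)

# Both report the same single probability of becoming unemployed ...
print(agent_a.mean(), agent_b.mean())                  # 0.5 and 0.5
# ... although the spread of belief behind that number is entirely different.
print(agent_a.interval(0.95), agent_b.interval(0.95))
```

The single number 0.5 is the same in both cases; whatever separates the two epistemic situations has to come from something other than that number – which is close to Keynes’s notion of the “weight” of arguments taken up below.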

I think this critique of Bayesianism is in accordance with the views of Keynes in A Treatise on Probability (1921) and the General Theory (1937). According to Keynes we live in a world permeated by unmeasurable uncertainty – not quantifiable stochastic risk – which often forces us to make decisions based on anything but rational expectations. Sometimes we “simply do not know.” Keynes would not have accepted the view of Bayesian economists, according to whom expectations “tend to be distributed, for the same information set, about the prediction of the theory.” Keynes, rather, thinks that we base our expectations on the confidence or “weight” we put on different events and alternatives. To Keynes expectations are a question of weighing probabilities by “degrees of belief”, beliefs that have precious little to do with the kind of stochastic probabilistic calculations made by the rational agents modeled by probabilistically reasoning Bayesian economists.

In an interesting article on his blog, John Kay shows that these strictures on probabilistic-reductionist reasoning apply not only to everyday life and science, but also to the law:

English law recognises two principal standards of proof. The criminal test is that a charge must be “beyond reasonable doubt”, while civil cases are decided on “the balance of probabilities”.

The meaning of these terms would seem obvious to anyone trained in basic statistics. Scientists think in terms of confidence intervals – they are inclined to accept a hypothesis if the probability that it is true exceeds 95 per cent. “Beyond reasonable doubt” appears to be a claim that there is a high probability that the hypothesis – the defendant’s guilt – is true. Perhaps criminal conviction requires a higher standard than the scientific norm – 99 per cent or even 99.9 per cent confidence is required to throw you in jail. “On the balance of probabilities” must surely mean that the probability the claim is well founded exceeds 50 per cent.

And yet a brief conversation with experienced lawyers establishes that they do not interpret the terms in these ways. One famous illustration supposes you are knocked down by a bus, which you did not see (that is why it knocked you down). Say Company A operates more than half the buses in the town. Absent other evidence, the probability that your injuries were caused by a bus belonging to Company A is more than one half. But no court would determine that Company A was liable on that basis.

A court approaches the issue in a different way. You must tell a story about yourself and the bus. Legal reasoning uses a narrative rather than a probabilistic approach, and when the courts are faced with probabilistic reasoning the result is often a damaging muddle …

When I have raised these issues with people with scientific training, they tend to reply that lawyers are mostly innumerate and with better education would learn to think in the same way as statisticians. Probabilistic reasoning has become the dominant method of structured thinking about problems involving risk and uncertainty – to such an extent that people who do not think this way are derided as incompetent and irrational …

It is possible – common, even – to believe something is true without being confident in that belief. Or to be sure that, say, a housing bubble will burst without being able to attach a high probability to any specific event, such as “house prices will fall 20 per cent in the next year”. A court is concerned to establish the degree of confidence in a narrative, not to measure a probability in a model.

Such narrative reasoning is the most effective means humans have developed of handling complex and ill-defined problems … Probabilistic thinking … often fails when we try to apply it to idiosyncratic events and open-ended problems. We cope with these situations by telling stories, and we base decisions on their persuasiveness. Not because we are stupid, but because experience has told us it is the best way to cope. That is why novels sell better than statistics texts.

Guess I must be doing something right

19 Mar, 2013 at 11:31 | Posted in Varia | Comments Off on Guess I must be doing something right

Yours truly launched this blog two years ago. The number of visitors has increased steadily. From having only a couple of hundred visits per month at the start, I now have almost 60,000 visits per month. A blog is certainly not a beauty contest, but given the rather “wonkish” character of the blog – with posts mostly on economic theory, statistics, econometrics, theory of science and methodology – it’s rather gobsmacking that so many are interested and take the time to read and comment on it. I am – of course – truly awed, honoured and delighted!

Insupportable equilibrium

19 Mar, 2013 at 10:57 | Posted in Economics, Theory of Science & Methodology | Comments Off on Insupportable equilibrium

Theoretical physicist Mark Buchanan has some interesting reflections on equilibrium thought in economics in his upcoming book Forecast: What Physics, Meteorology and the Natural Sciences Can Teach Us About Economics:

For several decades, academics have assumed that the economy is in a stable equilibrium. Distilled into a few elegant lines of mathematics by the economists Kenneth Arrow and Gerard Debreu back in the 1950s, the assumption has driven most thinking about business cycles and financial markets ever since. It informs the idea, still prevalent on Wall Street, that markets are efficient — that the greedy efforts of millions of individuals will inevitably push prices toward some true fundamental value.

Problem is, all efforts to show that a realistic economy might actually reach something like the Arrow-Debreu equilibrium have met with failure. Theorists haven’t been able to prove that even trivial, childlike models of economies with only a few commodities have stable equilibria. There is no reason to think that the equilibrium so prized by economists is anything more than a curiosity.

It’s as if mathematical meteorologists found beautiful equations for a glorious atmospheric state with no clouds or winds, no annoying rain or fog, just peaceful sunshine everywhere. In principle, such an atmospheric state might exist, but it tells us nothing about the reality we care about: our own weather …

We’ll never understand economies and markets until we get over the nutty idea that they alone — unlike almost every other complex system in the world — are inherently stable and have no internal weather. It’s time we began learning about the socioeconomic weather, categorizing its storms, and learning either how to prevent them or how to see them coming and protect ourselves against them.

Cuts – the wrong cure

19 Mar, 2013 at 08:28 | Posted in Economics, Politics & Society | Comments Off on Cuts – the wrong cure

 

Inequality continues to grow – even in Sweden

18 Mar, 2013 at 19:21 | Posted in Economics, Politics & Society | 4 Comments

Inequality continues to grow all over the world.
And in case you think it’s different in, say, Sweden, you should take a look at some new data from Statistics Sweden.

The Gini coefficient is a measure of inequality (where a higher number signifies greater inequality), and graphing the data with Gretl we get the following picture for the distribution of disposable income:
[Graph: Gini coefficient for Swedish disposable income, 1980–2011]

Sometimes a graph says more than a thousand words …
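For anyone who wants to check this kind of figure themselves, the computation behind a Gini coefficient is straightforward. The sketch below – my own illustration with made-up incomes, not the Gretl workflow behind the graph above – uses the standard formula based on ordered incomes:

```python
# Sketch of a Gini coefficient computation (made-up incomes, not the SCB data).
import numpy as np

def gini(incomes):
    """Gini coefficient: half the mean absolute income difference, divided by the mean."""
    x = np.sort(np.asarray(incomes, dtype=float))
    n = x.size
    cum = np.cumsum(x)
    # Equivalent closed form in terms of the cumulated ordered incomes.
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

print(gini([100, 100, 100, 100]))   # 0.0  - perfect equality
print(gini([0, 0, 0, 400]))         # 0.75 - one person has everything
```

A value of 0 means everyone has the same income; values closer to 1 mean incomes are concentrated at the top.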

I would say that what we see happening in Sweden is deeply disturbing. The rising inequality is outrageous – not least since it to a large extent reflects income and wealth increasingly being concentrated in the hands of a very small and privileged elite.

Societies where we allow the inequality of incomes and wealth to increase without bounds sooner or later implode. The cement that keeps us together erodes, and in the end we are left only with people dipped in the ice-cold water of egoism and greed. It’s high time to put an end to this, the worst Juggernaut of our time!

EconTalk transmogrifies Keynes

18 Mar, 2013 at 14:20 | Posted in Economics | Comments Off on EconTalk transmogrifies Keynes

Yesterday, on my way home by train after conferencing in Stockholm, I tried to beguile the journey by listening to an EconTalk podcast in which Garett Jones of George Mason University talked with EconTalk host Russ Roberts about Irving Fisher’s ideas on debt and deflation.

Jones’s thoughts on Fisher were thought-provoking and interesting, but in the middle of the discussion Roberts started to ask questions about the relation between Fisher’s ideas and those of Keynes, saying something more or less like “Keynes generated a lot of interest in his idea that the labour market doesn’t clear … because the price for labour does not adjust, i.e. wages are ‘sticky’ or ‘inflexible’.”

This is of course pure nonsense. For although Keynes in the General Theory devoted substantial attention to the subject of wage rigidities, he certainly did not hold the view that wage rigidity was the reason behind high unemployment and other macroeconomic problems. To Keynes, recessions, depressions and faltering labour markets were not basically a problem of “sticky wages.”

Since unions/workers, contrary to classical assumptions, make wage bargains in nominal terms, they will – according to Keynes – accept lower real wages caused by higher prices, but resist lower real wages caused by lower nominal wages. Keynes, however, held it incorrect to attribute “cyclical” unemployment to this asymmetric agent behaviour. During the depression money wages fell significantly and – as Keynes noted – unemployment still grew. Thus, even when nominal wages are lowered, they do not generally lower unemployment.

In any specific labour market, lower wages could, of course, raise the demand for labour. But a general reduction in money wages would leave real wages more or less unchanged. The reasoning of the classical economists was, according to Keynes, a flagrant example of the “fallacy of composition.” Assuming that since unions/workers in a specific labour market could negotiate real wage reductions via lowering nominal wages, unions/workers in general could do the same, the classics confused micro with macro.
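A hypothetical back-of-the-envelope illustration of the composition point – my own numbers, under the assumption that an economy-wide wage cut is fully passed through to prices:

```python
# Assumed numbers: a general 10% cut in money wages, fully passed through to prices.
nominal_wage = 100.0
price_level = 1.0

new_nominal_wage = nominal_wage * 0.9
new_price_level = price_level * 0.9

# The real wage - what matters for labour demand in the classical story - is unchanged.
print(nominal_wage / price_level, new_nominal_wage / new_price_level)   # 100.0 100.0
```

A single firm cutting wages lowers its relative labour cost; when everyone does it and prices follow, the real wage is left roughly where it was.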

Lowering nominal wages could not – according to Keynes – clear the labour market. Lowering wages – and possibly prices – could, perhaps, lower interest rates and increase investment. But to Keynes it would be much easier to achieve that effect by increasing the money supply. In any case, wage reductions were not seen by Keynes as a general substitute for an expansionary monetary or fiscal policy.

Even if lowering wages can have some positive effects, there are also negative effects that weigh more heavily – deteriorating relations between management and unions, expectations of continued wage cuts causing investment to be postponed, debt deflation, et cetera.

So what Keynes actually argued in the General Theory was that the classical proposition – that lowering wages would lower unemployment and ultimately take economies out of depressions – was ill-founded and basically wrong.

To Keynes, flexible wages would only make things worse by leading to erratic price-fluctuations. The basic explanation for unemployment is insufficient aggregate demand, and that is mostly determined outside the labor market.

To mainstream neoclassical theory the kind of unemployment that occurs is voluntary, since it merely reflects optimizing agents adjusting their hours of work in order to maximize their utility. Keynes, on the other hand, writes in the General Theory:

The classical school [maintains that] while the demand for labour at the existing money-wage may be satisfied before everyone willing to work at this wage is employed, this situation is due to an open or tacit agreement amongst workers not to work for less, and that if labour as a whole would agree to a reduction of money-wages more employment would be forthcoming. If this is the case, such unemployment, though apparently involuntary, is not strictly so, and ought to be included under the above category of ‘voluntary’ unemployment due to the effects of collective bargaining, etc …

The classical theory … is best regarded as a theory of distribution in conditions of full employment. So long as the classical postulates hold good, unemployment, which is in the above sense involuntary, cannot occur. Apparent unemployment must, therefore, be the result either of temporary loss of work of the ‘between jobs’ type or of intermittent demand for highly specialised resources or of the effect of a trade union ‘closed shop’ on the employment of free labour. Thus writers in the classical tradition, overlooking the special assumption underlying their theory, have been driven inevitably to the conclusion, perfectly logical on their assumption, that apparent unemployment (apart from the admitted exceptions) must be due at bottom to a refusal by the unemployed factors to accept a reward which corresponds to their marginal productivity …

Obviously, however, if the classical theory is only applicable to the case of full employment, it is fallacious to apply it to the problems of involuntary unemployment – if there be such a thing (and who will deny it?). The classical theorists resemble Euclidean geometers in a non-Euclidean world who, discovering that in experience straight lines apparently parallel often meet, rebuke the lines for not keeping straight – as the only remedy for the unfortunate collisions which are occurring. Yet, in truth, there is no remedy except to throw over the axiom of parallels and to work out a non-Euclidean geometry. Something similar is required to-day in economics. We need to throw over the second postulate of the classical doctrine and to work out the behaviour of a system in which involuntary unemployment in the strict sense is possible.

Unfortunately, Roberts’s statement is not the only example of this kind of utter nonsense on Keynes. Similar distortions of Keynes’s views can be found in, e.g., the economics textbooks of the “New Keynesian” – a grotesque misnomer – Greg Mankiw. How is this possible? Probably because these economists have but a very superficial acquaintance with Keynes’s own works, and instead rely on second-hand sources like Hansen, Samuelson, Hicks and the like.

Fortunately there is a solution to the problem. Keynes’s books are still in print. Read them!!

Inequality and well-being

16 Mar, 2013 at 11:42 | Posted in Economics, Politics & Society | Comments Off on Inequality and well-being

[Chart: inequality and well-being]
Source

How to argue with economists

15 Mar, 2013 at 08:57 | Posted in Economics | 1 Comment

In the increasingly contentious world of pop economics, you … may find yourself in an argument with an economist. And when this happens, you should be prepared, because many of the arguments that may seem at first blush to be very powerful and devastating are, in fact, pretty weak tea …
Principle 1: Credentials are not an argument.

Example: “You say Theory X is wrong…but don’t you know that Theory X is supported by Nobel Prize winners A, B, and C, not to mention famous and distinguished professors D, E, F, G, and H?”

Suggested Retort: Loud, barking laughter.

Alternative Suggested Retort: “Richard Feynman said that ‘Science is the belief in the ignorance of experts.’ And you’re not going to argue with HIM, are you?”

Reason You’re Right: Credentials? Gimme a break. Nobody accepts received wisdom from sages these days. Show me the argument!

Principle 2: “All theories are wrong” is false.

Example: “Sure, Theory X fails to forecast any variable of interest or match important features of the data. But don’t you know that all models are wrong? I mean, look at Newton’s Laws…THOSE ended up turning out to be wrong, ha ha ha.”

Suggested Retort: Empty an entire can of Silly String onto anyone who says this. (I carry Silly String expressly for this purpose.)

Alternative Suggested Retort: “Yeah, well, when your theory is anywhere near as useful as Newton’s Laws, come back and see me, K?”

Reason You’re Right: To say models are “wrong” is fatuous semantics; philosophically, models can only have degrees of predictive power within domains of validity. Newton’s Laws are only “wrong” if you are studying something very small or moving very fast. For most everyday applications, Newton’s Laws are very, very right.

Principle 3: “We have theories for that” is not good enough.

Example: “How can you say that macroeconomists have ignored Phenomenon X? We have theories in which X plays a role! Several, in fact!”

Suggested Retort: “Then how come no one was paying attention to those theories before Phenomenon X emerged and slapped us upside the head?”

Reason You’re Right: Actually, there are two reasons. Reason 1 is that it is possible to make many models to describe any phenomenon, and thus there is no guarantee that Phenomenon X is correctly described by Theory Y rather than some other theory, unless there is good solid evidence that Theory Y is right, in which case economists should be paying a lot more attention to Theory Y. Reason 2 is that if the profession doesn’t have a good way to choose which theories to apply and when, then simply having a bunch of theories sitting around gathering dust is a little pointless.

Principle 4: Argument by accounting identity almost never works.

Example: “But your theory is wrong, because Y = C + I + G!”

Suggested Retort: “If my theory violates an accounting identity, wouldn’t people have noticed that before? Wouldn’t this fact be common knowledge?”

Reason You’re Right: Accounting identities are mostly just definitions. Very rarely do definitions tell us anything useful about the behavior of variables in the real world. The only exception is when you have a very good understanding of the behavior of all but one of the variables in an accounting identity, in which case of course it is useful. But that is a very rare situation indeed.

Principle 5: The Efficient Markets Hypothesis does not automatically render all models useless.

Example: “But if your model could predict financial crises, then people could use it to conduct a riskless arbitrage; therefore, by the EMH, your model cannot predict financial crises.”

Suggested Retort: “By your logic, astrophysics can never predict when an asteroid is going to hit the Earth.”

Reason You’re Right: Conditional predictions are different than unconditional predictions. A macro model that is useful for making policy will not say “Tomorrow X will happen.” It will say “Tomorrow X will happen unless you do something to stop it.” If policy is taken to be exogenous to a model (a “shock”), then the EMH does not say anything about whether you can see an event coming and do something about it.

Principle 6: Models that only fit one piece of the data are not very good models.

Example: “Sure, this model doesn’t fit facts A, B, and C, but it does fit fact D, and therefore it is a ‘laboratory’ that we can use to study the impact of changes in the factors that affect D.”

Suggested Retort: “Nope!”

Reason You’re Right: Suppose you make a different model to fit each phenomenon. Only if all your models don’t interact will you be able to use each different model to study its own phenomenon. And this is highly unlikely to happen. Also, it’s generally pretty easy to make a large number of different models that fit any one given fact, but very hard to make models that fit a whole bunch of facts at once. For these reasons, many philosophers of science claim that science theories should explain a whole bunch of phenomena in terms of some smaller, simpler subset of underlying phenomena. Or, in other words, wrong theories are wrong.

Principle 7: The message is not the messenger.

Example: “Well, that argument is being made by Person X, who is obviously just angry/a political hack/ignorant/not a real economist/a commie/stupid/corrupt.”

Suggested Retort: “Well, now it’s me making the argument! So what are you going to say about me?”

Reason You’re Right: This should be fairly obvious, but people seem to forget it. Even angry hackish ignorant stupid communist corrupt non-economists can make good cogent correct arguments (or, at least, repeat them from some more reputable source!). Arguments should be argued on the merits. This is the converse of Principle 1.

There are, of course, a lot more principles than these … The set of silly things that people can and will say to try to beat an interlocutor down is, well, very large. But I think these seven principles will guard you against much of the worst of the silliness.

Noah Smith

Lönesänkarna

13 Mar, 2013 at 16:11 | Posted in Economics, Politics & Society | Comments Off on Lönesänkarna

After the much-discussed SVT documentary Lönesänkarna (see here), one of Svenska Dagbladet’s editorial writers, Ivar Arpi, wrote that in the “1970s the capital share, compared with the wage share, was almost zero” and that since the 1970s “the great majority have become better off – including those with the lowest incomes.”

And this splashing of frogs and paddling of ducks one has to read in the year 2013.

And as if that were not enough, Per Krusell – member of the Royal Swedish Academy of Sciences and, since 2004, an ordinary member of the Committee for the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel – writes that he has seen the programme and that the horror he felt was about “the feeling of, as an economist, having missed something in our analysis of the economy.” After pondering for a while, however, the professor felt considerably better, since he had realised that “there is very little to the main thesis of the programme, i.e. that an ‘agreement’ to lower the wage share was reached, perhaps with the aim of increasing employment, but that the increase in employment then failed to materialise.”

Talk is cheap, but it is still better to know what one is talking about and to be able to back up one’s statements. Because this is what things actually look like:

Source: SWIID 3.0

(The lower the Gini coefficient, the more equal the income distribution. Since 1981 income inequality in Sweden has been on a steep upward trend.)

Source: IFN, Roine and Waldenström 2008
Graphics: Idégrafik

In the economic-policy debate one often hears the advocates of market fundamentalism, like Svenska Dagbladet’s editorial writers, say that inequality is not a problem. Two main reasons are usually given.

Pro primo – the horse-shit theorem – according to which tax cuts and increased prosperity for the rich will eventually trickle down to the poor anyway. Feed the horse and the birds can eat their fill from the droppings.

Pro secundo – that as long as everyone has the same chance of becoming rich, inequality is unproblematic.

Extensive research already during the Thatcher-Reagan era showed that the horse-shit theorem (the “trickle-down effect”) belongs to the world of myth.

And a few years ago Alan Krueger – professor of economics at Princeton University – showed with his Gatsby curve that the second attempted defence of inequality also belongs to the world of fairy tales:

[The vertical axis shows how much a one per cent increase in your father’s income affects your expected income (the higher the number, the lower the expected social mobility), and the horizontal axis shows the Gini coefficient, which measures inequality (the higher the number, the greater the inequality)]

It could hardly be clearer that egalitarian countries are also the ones with the greatest social mobility – and that it is therefore high time to tackle the growing income and wealth gaps. This also holds for Sweden, where newly revised data from Statistics Sweden (SCB) show how disposable income per consumption unit (excluding capital gains, by decile, all persons 1995-2010, means in thousands of SEK per consumption unit at 2010 prices) has developed in recent years:

Source: Statistics Sweden (SCB) and own calculations

And it is even worse if one looks at the development of wealth.

Inequality is increasing in Sweden. That this is so is to a large extent the result of political decisions – such as the cuts in unemployment and sickness insurance. But it is also the expression of an ideological shift that over thirty years has transformed Sweden from a pioneering country in terms of equality into one of the countries where income and wealth gaps are growing fastest in the world.

For those who, like Ivar Arpi and some more or less prominent economists, do not believe that things are that bad in Sweden when it comes to the distribution of income and wealth, I suggest taking a closer look at the diagram below of how average incomes (expressed at 2009 price levels) for the top 0.1% and the bottom 90% have developed in Sweden since 1980:

Source: The World Top Incomes Database

It is high time to put a stop to the new class society. It is high time to make sure that the gaps stop growing in Swedish society. It is high time for the neoliberal system shift in Sweden to come to an end!

Mästarens återkomst

12 Mar, 2013 at 22:40 | Posted in Economics | 4 Comments

For anyone who wants to understand and be able to explain financial and economic crises, Robert Skidelsky’s Mästarens återkomst (Karneval förlag, 2011) is an excellent starting point. By bringing out Keynes’s theories on the role of uncertainty and expectations, Skidelsky provides a useful counter-image to the unrealistic, model-bound picture of the economy offered by the prevailing economic theory.
Skidelsky convincingly shows that the standard-bearers of neoclassical macro theory – be it a Robert Lucas, a Thomas Sargent or a Greg Mankiw – have not only failed to predict or explain the current crisis. With their theories and models they have in fact actively contributed to it. In his preface the author writes:

Some reviewers have accused me of giving a vulgarised version of the orthodox theories that I regard as the very origin of the crisis. It has been claimed that I have not taken sufficient account of the qualifications and exceptions to the efficient-markets theory acknowledged by its own academic champions, nor of the many differing opinions that exist within the economics profession. My defence against the latter accusation is that it is the Chicago School’s theories that have dominated over the last 30 years, and that those who have thought differently have been marginalised within the profession. As to the first charge, it is always in their vulgar version that theories are applied, and it ought to be a test of good economic theory that its vulgarisation does not lead to disastrous policy.

Sporadic blogging

12 Mar, 2013 at 11:48 | Posted in Varia | Comments Off on Sporadic blogging

Touring again. This time conferencing in the most beautiful capital in the world – Stockholm.
Regular blogging will be resumed early next week.
 
[Photo: Södermalm, Stockholm]

Naked and self-revealing

11 Mar, 2013 at 17:54 | Posted in Varia | Comments Off on Naked and self-revealing

 

 
Naked and self-revealing music.
A straight right to the solar plexus.
The darkness of the heart.
Peter LeMarc impressed twenty-five years ago. He still does.
