Statistical inference

30 January, 2014 at 15:40 | Posted in Statistics & Econometrics | Comments Off on Statistical inference

Sampling distributions are the key to understanding inferential statistics. Once you’ve grasped how we use sampling distributions for making hypothesis testing possible, well, then you’ve understood the most important part of the logic of statistical inference — and the rest is really just a piece of cake!
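
To make the logic concrete, here is a minimal simulation sketch (assuming Python with NumPy; the population parameters and the observed sample mean of 106 are made-up numbers): draw many samples, look at how the sample means are distributed, and ask how unusual a given result would be under the null.

```python
# Minimal sketch of a sampling distribution and the hypothesis-testing logic
# built on it. All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
pop_mean, pop_sd, n = 100.0, 15.0, 25

# 10,000 samples of size n; the distribution of their means is the
# sampling distribution of the mean
sample_means = rng.normal(pop_mean, pop_sd, size=(10_000, n)).mean(axis=1)

print("mean of the sampling distribution:", round(sample_means.mean(), 2))
print("standard error (theory: 15/sqrt(25) = 3):", round(sample_means.std(), 2))

# the hypothesis-testing logic: how unusual is a sample mean of 106
# if the population mean really is 100?
print("simulated one-sided p-value:", (sample_means >= 106).mean())
```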

We Praise Thee

30 January, 2014 at 14:36 | Posted in Varia | Comments Off on We Praise Thee

 

Preferences that make economic models explode

30 January, 2014 at 08:45 | Posted in Economics | 4 Comments

Commenting on experiments performed by experimental economist David Eil — experiments showing that framing effects can switch time preferences — Noah Smith writes:

Now, here’s the thing…it gets worse … I’ve heard whispers that a number of researchers have done experiments in which choices can be re-framed in order to obtain the dreaded negative time preferences, where people actually care more about the future than the present! Negative time preferences would cause most of our economic models to explode, and if these preferences can be created with simple re-framing, then it bodes ill for the entire project of trying to model individuals’ choices over time.

This matters a lot for finance research. One of the big questions facing finance researchers is why asset prices bounce around so much. The two most common answers are A) time-varying risk premia, and B) behavioral “sentiment”. But Eil’s result, and other results like it, could be bad news for both efficient-market theory and behavioral finance. Because if aggregate preferences themselves are unstable due to a host of different framing effects, then time-varying risk premia can’t be modeled in any intelligible way, nor can behavioral sentiment be measured. In other words, the behavior of asset prices may truly be inexplicable (since we can’t observe all the multitude of things that might cause framing effects).

It’s a scary thought to contemplate, but to dismiss the results of experiments like Eil’s would be a mistake! It may turn out that the whole way modern economics models human behavior is good only in some situations, and not in others.

Bad news indeed. But hardly new.

In neoclassical theory preferences are standardly expressed in the form of a utility function. But although the expected utility theory has been known for a long time to be both theoretically and descriptively inadequate, neoclassical economists all over the world gladly continue to use it, as though its deficiencies were unknown or unheard of.

What most of them try to do in the face of the obvious theoretical and behavioural inadequacies of expected utility theory is to marginally mend it. But that cannot be the right attitude when facing scientific anomalies. When models are plainly wrong, you’d better replace them! As Matthew Rabin and Richard Thaler have it in Risk Aversion:

It is time for economists to recognize that expected utility is an ex-hypothesis, so that we can concentrate our energies on the important task of developing better descriptive models of choice under uncertainty.

In his modern classic Risk Aversion and Expected-Utility Theory: A Calibration Theorem Matthew Rabin writes:

Using expected-utility theory, economists model risk aversion as arising solely because the utility function over wealth is concave. This diminishing-marginal-utility-of-wealth theory of risk aversion is psychologically intuitive, and surely helps explain some of our aversion to large-scale risk: We dislike vast uncertainty in lifetime wealth because a dollar that helps us avoid poverty is more valuable than a dollar that helps us become very rich.

Yet this theory also implies that people are approximately risk neutral when stakes are small … While most economists understand this formal limit result, fewer appreciate that the approximate risk-neutrality prediction holds not just for negligible stakes, but for quite sizable and economically important stakes. Economists often invoke expected-utility theory to explain substantial (observed or posited) risk aversion over stakes where the theory actually predicts virtual risk neutrality. While not broadly appreciated, the inability of expected-utility theory to provide a plausible account of risk aversion over modest stakes has become oral tradition among some subsets of researchers, and has been illustrated in writing in a variety of different contexts using standard utility functions …

Expected-utility theory is manifestly not close to the right explanation of risk attitudes over modest stakes. Moreover, when the specific structure of expected-utility theory is used to analyze situations involving modest stakes — such as in research that assumes that large-stake and modest-stake risk attitudes derive from the same utility-for-wealth function — it can be very misleading.
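
Rabin’s calibration point is easy to reproduce numerically. The sketch below (assuming Python with SciPy, a CRRA utility function, and invented wealth and stake sizes, so treat it as an illustration rather than Rabin’s own construction) finds the curvature needed to turn down a 50-50 lose $100/gain $110 gamble, and then shows that the same utility function turns down a 50-50 lose $2,000 gamble no matter how large the potential gain.

```python
# Illustrative sketch of Rabin's calibration argument with CRRA utility.
# The wealth level, stakes and functional form are assumptions made for
# the example, not Rabin's own numbers.
from scipy.optimize import brentq

W = 20_000.0  # hypothetical initial wealth

def u(w, gamma):
    # CRRA utility, normalised by W so the numbers stay well scaled
    return ((w / W) ** (1.0 - gamma)) / (1.0 - gamma)

def small_gamble_gain(gamma):
    # expected-utility gain from accepting a 50-50 lose 100 / gain 110 gamble
    return 0.5 * u(W - 100, gamma) + 0.5 * u(W + 110, gamma) - u(W, gamma)

# curvature at which the agent just turns down the small gamble
gamma_star = brentq(small_gamble_gain, 1.001, 100.0)
print(f"curvature at which lose-100/gain-110 is turned down: gamma = {gamma_star:.1f}")

# the same agent now rejects a 50-50 lose 2,000 / gain G gamble for any G
for G in (5_000, 50_000, 5_000_000):
    eu = 0.5 * u(W - 2_000, gamma_star) + 0.5 * u(W + G, gamma_star)
    print(f"gain {G:>9,}: accepts? {eu > u(W, gamma_star)}")
```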

In a similar vein, Daniel Kahneman writes — in Thinking, Fast and Slow — that expected utility theory is seriously flawed since it doesn’t take into consideration the basic fact that people’s choices are influenced by changes in their wealth. Where standard microeconomic theory assumes that preferences are stable over time, Kahneman and other behavioural economists have again and again forcefully shown that preferences aren’t fixed, but vary with different reference points. How can a theory that doesn’t allow for people having different reference points from which they consider their options have an almost axiomatic status within economic theory?

The mystery is how a conception of the utility of outcomes that is vulnerable to such obvious counterexamples survived for so long. I can explain it only by a weakness of the scholarly mind … I call it theory-induced blindness: once you have accepted a theory and used it as a tool in your thinking it is extraordinarily difficult to notice its flaws … You give the theory the benefit of the doubt, trusting the community of experts who have accepted it … But they did not pursue the idea to the point of saying, “This theory is seriously wrong because it ignores the fact that utility depends on the history of one’s wealth, not only present wealth.”
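
The reference-point dependence Kahneman describes is usually captured with a value function defined over gains and losses rather than over final wealth. A minimal sketch in Python (the 0.88 curvature and 2.25 loss-aversion parameters are the commonly cited Tversky-Kahneman estimates, used here only for illustration):

```python
# Minimal sketch of a reference-dependent (prospect-theory style) value
# function: outcomes are valued as gains or losses relative to a reference
# point, and losses loom larger than gains.
def value(x, alpha=0.88, loss_aversion=2.25):
    """Value of a change x relative to the reference point."""
    return x ** alpha if x >= 0 else -loss_aversion * (-x) ** alpha

# Ending up with the same final wealth feels very different depending on
# the reference point: a 500 gain versus a 500 loss.
print("framed as a gain :", round(value(500), 1))
print("framed as a loss :", round(value(-500), 1))
```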

The works of people like Rabin, Thaler and Kahneman show that expected utility theory is indeed transmogrifying truth. It’s an “ex-hypothesis” — or as Monty Python has it:

This parrot is no more! He has ceased to be! ‘E’s expired and gone to meet ‘is maker! ‘E’s a stiff! Bereft of life, ‘e rests in peace! If you hadn’t nailed ‘im to the perch ‘e’d be pushing up the daisies! ‘Is metabolic processes are now ‘istory! ‘E’s off the twig! ‘E’s kicked the bucket, ‘e’s shuffled off ‘is mortal coil, run down the curtain and joined the bleedin’ choir invisible!! THIS IS AN EX-PARROT!!

Encounters with R. A. Fisher

29 January, 2014 at 19:46 | Posted in Statistics & Econometrics | Comments Off on Encounters with R. A. Fisher

 

Is calibration really a scientific advance? I’ll be dipped!

29 January, 2014 at 12:47 | Posted in Economics | Comments Off on Is calibration really a scientific advance? I’ll be dipped!

Noah Smith had a post up yesterday lamenting Nobel laureate Ed Prescott:

The 2004 prize went partly to Ed Prescott, the inventor of Real Business Cycle theory. That theory assumes that monetary policy doesn’t have an effect on GDP. Since RBC theory came out in 1982, a number of different people have added “frictions” to the model to make it so that monetary policy does have real effects. But Prescott has stayed true to the absolutist view that no such effects exist. In an email to a New York Times reporter, he very recently wrote the following:

“It is an established scientific fact that monetary policy has had virtually no effect on output and employment in the U.S. since the formation of the Fed,” Professor Prescott, also on the faculty of Arizona State University, wrote in an email. Bond buying [by the Fed], he wrote, “is as effective in bringing prosperity as rain dancing is in bringing rain.”

Wow! Prescott definitely falls into the category of people whom Miles Kimball and I referred to as “purist” Freshwater macroeconomists. Prescott has made some…odd…claims in recent years, but these recent remarks were totally consistent with his prize-winning research.

Odd claims indeed. True. There are many kinds of useless economics held in high regard within the mainstream economics establishment today. Few – if any – deserve that regard less than the macroeconomic theory/method — mostly connected with Nobel laureates Finn Kydland, Robert Lucas, Edward Prescott and Thomas Sargent — called calibration.

Interviewed by Seppo Honkapohja and George Evans — in Macroeconomic Dynamics (2005, vol. 9) — Thomas Sargent answered the question whether calibration was an advance in macroeconomics:

In many ways, yes … The unstated case for calibration was that it was a way to continue the process of acquiring experience in matching rational expectations models to data by lowering our standards relative to maximum likelihood, and emphasizing those features of the data that our models could capture. Instead of trumpeting their failures in terms of dismal likelihood ratio statistics, celebrate the features that they could capture and focus attention on the next unexplained feature that ought to be explained. One can argue that this was a sensible response… a sequential plan of attack: let’s first devote resources to learning how to create a range of compelling equilibrium models to incorporate interesting mechanisms. We’ll be careful about the estimation in later years when we have mastered the modelling technology…

But is the Lucas-Kydland-Prescott-Sargent calibration really an advance?

Let’s see what two eminent econometricians have to say. In the Journal of Economic Perspectives (1996, vol. 10) Lars Peter Hansen and James J. Heckman write:

It is only under very special circumstances that a micro parameter such as the inter-temporal elasticity of substitution or even a marginal propensity to consume out of income can be ‘plugged into’ a representative consumer model to produce an empirically concordant aggregate model … What credibility should we attach to numbers produced from their ‘computational experiments’, and why should we use their ‘calibrated models’ as a basis for serious quantitative policy evaluation? … There is no filing cabinet full of robust micro estimates ready to use in calibrating dynamic stochastic equilibrium models … The justification for what is called ‘calibration’ is vague and confusing.

Error-probabilistic statistician Aris Spanos — in Error and Inference (Mayo & Spanos, 2010, p. 240) — is no less critical:

Given that “calibration” purposefully forsakes error probabilities and provides no way to assess the reliability of inference, how does one assess the adequacy of the calibrated model? …

The idea that it should suffice that a theory “is not obscenely at variance with the data” (Sargent, 1976, p. 233) is to disregard the work that statistical inference can perform in favor of some discretional subjective appraisal … it hardly recommends itself as an empirical methodology that lives up to the standards of scientific objectivity

And this is the verdict of Paul Krugman:

The point is that if you have a conceptual model of some aspect of the world, which you know is at best an approximation, it’s OK to see what that model would say if you tried to make it numerically realistic in some dimensions.

But doing this gives you very little help in deciding whether you are more or less on the right analytical track. I was going to say no help, but it is true that a calibration exercise is informative when it fails: if there’s no way to squeeze the relevant data into your model, or the calibrated model makes predictions that you know on other grounds are ludicrous, something was gained. But no way is calibration a substitute for actual econometrics that tests your view about how the world works.
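
To see what is at stake in Krugman’s “actual econometrics” remark, here is a toy contrast (assuming Python with NumPy; the AR(1) setup and all numbers are invented): “calibrating” a parameter means picking it to match one feature of the data, with no standard error and no test, whereas estimation at least delivers a measure of how well the model confronts the data.

```python
# Toy contrast between calibration (match one moment, no inference) and
# estimation (maximum likelihood with a standard error). The AR(1) model
# and the simulated 'data' are purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(size=400).cumsum() * 0.1 + rng.normal(size=400)  # fake data

# calibration: set rho equal to the sample first-order autocorrelation
rho_calibrated = np.corrcoef(y[:-1], y[1:])[0, 1]

# estimation: conditional MLE of a Gaussian AR(1) (= OLS of y_t on y_{t-1}),
# which comes with a standard error one can actually confront the data with
x, z = y[:-1], y[1:]
rho_hat = (x @ z) / (x @ x)
resid = z - rho_hat * x
se = np.sqrt(resid.var(ddof=1) / (x @ x))

print(f"calibrated rho: {rho_calibrated:.3f}")
print(f"estimated  rho: {rho_hat:.3f}  (std. error {se:.3f})")
```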

In physics it may possibly not be straining credulity too much to model processes as ergodic – where time and history do not really matter – but in social and historical sciences it is obviously ridiculous. If societies and economies were ergodic worlds, why do econometricians fervently discuss things such as structural breaks and regime shifts? That they do is an indication of how unrealistic it is to treat open systems as analyzable with ergodic concepts.
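
The point can be illustrated with a small simulation (Python with NumPy, arbitrary parameter values): for a stationary, ergodic process the time average of a single path is a good guide to the ensemble average, but for a non-ergodic process such as a random walk the time averages of different paths disagree wildly.

```python
# Illustration of (non-)ergodicity: compare how much the time averages of
# individual paths disagree across 500 simulated paths. Parameters arbitrary.
import numpy as np

rng = np.random.default_rng(1)
T, N = 10_000, 500
eps = rng.normal(size=(N, T))

# stationary AR(1): every path's time average ends up close to zero
ar1 = np.zeros((N, T))
for t in range(1, T):
    ar1[:, t] = 0.5 * ar1[:, t - 1] + eps[:, t]

# random walk: each path wanders off on its own
rw = eps.cumsum(axis=1)

print("spread of time averages, AR(1):      ", round(ar1.mean(axis=1).std(), 2))
print("spread of time averages, random walk:", round(rw.mean(axis=1).std(), 2))
```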

The future is not reducible to a known set of prospects. It is not like sitting at the roulette table and calculating what the future outcomes of spinning the wheel will be. Reading Sargent and other calibrationists one comes to think of Robert Clower’s apt remark that

much economics is so far removed from anything that remotely resembles the real world that it’s often difficult for economists to take their own subject seriously.

Instead of assuming calibration and rational expectations to be right, one ought to confront the hypothesis with the available evidence. It is not enough to construct models. Anyone can construct models. To be seriously interesting, models have to come with an aim. They have to have an intended use. If the intention of calibration and rational expectations is to help us explain real economies, they have to be evaluated from that perspective. A model or hypothesis without a specific applicability does not really deserve our interest.

To say, as Edward Prescott does, that

one can only test if some theory, whether it incorporates rational expectations or, for that matter, irrational expectations, is or is not consistent with observations

is not enough. Without strong evidence all kinds of absurd claims and nonsense may pretend to be science. We have to demand more of a justification than this rather watered-down version of “anything goes” when it comes to rationality postulates. If one proposes rational expectations one also has to support its underlying assumptions. None is given, which makes it rather puzzling how rational expectations has become the standard modeling assumption made in much of modern macroeconomics. Perhaps the reason is, as Paul Krugman has it, that economists often mistake

beauty, clad in impressive looking mathematics, for truth.

But I think Prescott’s view is also the reason why calibration economists are not particularly interested in empirical examinations of how real choices and decisions are made in real economies. In the hands of Lucas, Prescott and Sargent, rational expectations has been transformed from an – in principle – testable hypothesis to an irrefutable proposition. Irrefutable propositions may be comfortable – like religious convictions or ideological dogmas – but they are not science.

Or as Noah Smith puts it:

But OK, suppose for a moment – just imagine – that somewhere, on some other planet, there was a group of alien macroeconomists who made a bunch of theories that were completely wrong, and were not even close to anything that could actually describe the business cycles on that planet. And suppose that the hypothetical aliens kept comparing their nonsense theories to data, and they kept getting rejected by the data, but the aliens still found the nonsense theories very cool and very a priori convincing, and they kept at it, finding “puzzles”, estimating parameter values, making slightly different nonsense models, etc., in a never-ending cycle of brilliant non-discovery.

Now tell me: In principle, how should those aliens tell the difference between their situation, and our own? That’s the question that I think we need to be asking, and that a number of people on the “periphery” of macro are now asking out loud.

The deregulated railway system has broken down — appoint a crisis commission!

28 January, 2014 at 12:41 | Posted in Politics & Society | 2 Comments

Yours truly has an article in Göteborgs-Posten today — together with, among others, Jan Du Rietz, Sven Jernberg and Hans Albin Larsson — on the sorry state of the Swedish railways:

The state has “invested” a couple of billion kronor in today’s money in closing down and tearing up lines, triangle junctions and passing loops, which has reduced the railway’s capacity and produced a rigid and disruption-prone traffic system. Together with often poorly maintained and unsuitable trains and insufficient redundancy in technical support systems, this has led to cancelled and delayed trains, which is costly for passengers, business and society. On top of this, all the problems mean that more and more customers are abandoning rail travel.

The creation of Trafikverket threatens to be the death blow for the railways. When it comes to choosing between road and rail, the agency has been handed a political task that is impossible for a public authority. Nor has Trafikverket been given the task of restoring the railway system, and it moreover lacks the competence to do so.

A crisis commission with extraordinary powers that can concentrate on the railways’ problems is needed. Among other things, the government should call in experts from well-functioning “railway countries” to carry out the necessary restructuring of the entire railway system.

Ode à la patrie

26 January, 2014 at 16:28 | Posted in Varia | 2 Comments

 

The failure of DSGE macroeconomics

24 January, 2014 at 10:56 | Posted in Economics | 5 Comments

As 2014 begins, it’s clear enough that any theory in which mass unemployment or (in the US case) withdrawal from the labour force can only occur in the short run is inconsistent with the evidence. Given that unions are weaker than they have been for a century or so, and that severe cuts to social welfare benefits have been imposed in most countries, the traditional rightwing explanation that labour market inflexibility [arising from minimum wage laws or unions] is the cause of unemployment appeals only to ideologues (who are, unfortunately, plentiful) …

After the Global Financial Crisis, it became clear that the concessions made by the New Keynesians were ill-advised in both theoretical and political terms. In theoretical terms, the DSGE models developed during the spurious “Great Moderation” were entirely inconsistent with the experience of the New Depression. The problem was not just a failure of prediction: the models simply did not allow for depressions that permanently shift the economy from its previous long term growth path. In political terms, it turned out that the seeming convergence with the New Classical school was an illusion. Faced with the need to respond to the New Depression, most of the New Classical school retreated to pre-Keynesian positions based on versions of Say’s Law (supply creates its own demand) that Say himself would have rejected, and advocated austerity policies in the face of overwhelming evidence that they were not working …

Relative to DSGE, the key point is that there is no unique long-run equilibrium growth path, determined by technology and preferences, to which the economy is bound to return. In particular, the loss of productive capacity, skills and so on in the current depression is, for all practical purposes, permanent. But if there is no exogenously determined (though maybe still stochastic) growth path for the economy, economic agents (workers and firms) can’t make the kind of long-term plans required of them in standard life-cycle models. They have to rely on heuristics and rules of thumb … This is, in my view, the most important point made by post-Keynesians and ignored by Old Old Keynesians.

John Quiggin/Crooked Timber
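
The difference between an economy that is bound to return to a unique growth path and one where a slump leaves a permanent scar can be seen in a toy simulation (Python with NumPy; the trend, shock size and persistence parameters are all made up):

```python
# Toy simulation: a trend-stationary economy recovers its old growth path
# after a one-off slump, a unit-root economy carries the loss forever.
# All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(2)
T = 200
shock = np.zeros(T)
shock[100] = -10.0               # a one-off 'depression' shock
trend = 0.5 * np.arange(T)       # underlying growth trend

# trend-stationary: deviations from trend die out
dev = np.zeros(T)
for t in range(1, T):
    dev[t] = 0.8 * dev[t - 1] + shock[t] + rng.normal(scale=0.1)
y_trend_stationary = trend + dev

# unit root: the shock is never unwound
y_unit_root = trend + np.cumsum(shock + rng.normal(scale=0.1, size=T))

print("gap to the old path at the end, trend-stationary:",
      round(y_trend_stationary[-1] - trend[-1], 2))
print("gap to the old path at the end, unit root:       ",
      round(y_unit_root[-1] - trend[-1], 2))
```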

On DSGE and the art of using absolutely ridiculous modeling assumptions

23 January, 2014 at 23:08 | Posted in Economics, Theory of Science & Methodology | 4 Comments

Reading some of the comments — by Noah Smith, David Andolfatto and others — on my post Why Wall Street shorts economists and their DSGE models, I — as usual — get the feeling that mainstream economists when facing anomalies think that there is always some further “technical fix” that will get them out of the quagmire. But are these elaborations and amendments on something basically wrong really going to solve the problem? I doubt it. Acting like the baker’s apprentice who, having forgotten to add yeast to the dough, throws it into the oven afterwards, simply isn’t enough.

When criticizing the basic workhorse DSGE model for its inability to explain involuntary unemployment, some DSGE defenders maintain that later elaborations — e.g. newer search models — manage to do just that. I strongly disagree. One of the more conspicuous problems with those “solutions” is that they — as e.g. Pissarides’ ”Loss of Skill during Unemployment and the Persistence of Unemployment Shocks” QJE (1992) — are as a rule constructed without seriously trying to warrant that the model-immanent assumptions and results are applicable in the real world. External validity is more or less a non-existent problematique, sacrificed on the altar of model derivations. This is not by chance. For how could one even imagine empirically testing assumptions such as Pissarides’ ”model 1″ assumptions of reality being adequately represented by ”two overlapping generations of fixed size”, ”wages determined by Nash bargaining”, ”actors maximizing expected utility”, ”endogenous job openings” and ”job matching describable by a probability distribution”, without coming to the conclusion that this is — in terms of realism and relevance — nothing but nonsense on stilts?

The whole strategy reminds me not a little of the following tale:

Time after time you hear people speaking in baffled terms about mathematical models that somehow didn’t warn us in time, that were too complicated to understand, and so on. If you have somehow missed such public displays of throwing the model (and quants) under the bus, stay tuned below for examples.

But this is far from the case – most of the really enormous failures of models are explained by people lying …

A common response to these problems is to call for those models to be revamped, to add features that will cover previously unforeseen issues, and generally speaking, to make them more complex.

For a person like myself, who gets paid to “fix the model,” it’s tempting to do just that, to assume the role of the hero who is going to set everything right with a few brilliant ideas and some excellent training data.

Unfortunately, reality is staring me in the face, and it’s telling me that we don’t need more complicated models.

If I go to the trouble of fixing up a model, say by adding counterparty risk considerations, then I’m implicitly assuming the problem with the existing models is that they’re being used honestly but aren’t mathematically up to the task.

If we replace okay models with more complicated models, as many people are suggesting we do, without first addressing the lying problem, it will only allow people to lie even more. This is because the complexity of a model itself is an obstacle to understanding its results, and more complex models allow more manipulation …

I used to work at Riskmetrics, where I saw first-hand how people lie with risk models. But that’s not the only thing I worked on. I also helped out building an analytical wealth management product. This software was sold to banks, and was used by professional “wealth managers” to help people (usually rich people, but not mega-rich people) plan for retirement.

We had a bunch of bells and whistles in the software to impress the clients – Monte Carlo simulations, fancy optimization tools, and more. But in the end, the banks and their wealth managers put in their own market assumptions when they used it. Specifically, they put in the forecast market growth for stocks, bonds, alternative investing, etc., as well as the assumed volatility of those categories and indeed the entire covariance matrix representing how correlated the market constituents are to each other.

The result is this: no matter how honest I would try to be with my modeling, I had no way of preventing the model from being misused and misleading to the clients. And it was indeed misused: wealth managers put in absolutely ridiculous assumptions of fantastic returns with vanishingly small risk.

Cathy O’Neil

Soros & the theory of reflexivity

23 January, 2014 at 10:17 | Posted in Theory of Science & Methodology | 2 Comments

The Journal of Economic Methodology has published a special issue on the theory of reflexivity developed by George Soros.

The issue includes a new article by Soros and responses and critiques from 18 leading scholars in economics and the history and philosophy of science.

The issue can be accessed free of charge here.

Why Wall Street shorts economists and their DSGE models

22 January, 2014 at 11:15 | Posted in Economics | 34 Comments

Blogger Noah Smith recently did an informal survey to find out if financial firms actually use the “dynamic stochastic general equilibrium” models that encapsulate the dominant thinking about how the economy works. The result? Some do pay a little attention, because they want to predict the actions of central banks that use the models. In their investing, however, very few Wall Street firms find the DSGE models useful …

This should come as no surprise to anyone who has looked closely at the models. Can an economy of hundreds of millions of individuals and tens of thousands of different firms be distilled into just one household and one firm, which rationally optimize their risk-adjusted discounted expected returns over an infinite future? There is no empirical support for the idea. Indeed, research suggests that the models perform very poorly …

Why does the profession want so desperately to hang on to the models? I see two possibilities. Maybe they do capture some deep understanding about how the economy works … More likely, economists find the models useful not in explaining reality, but in telling nice stories that fit with established traditions and fulfill the crucial goal of getting their work published in leading academic journals …

Knowledge really is power. I know of at least one financial firm in London that has a team of meteorologists running a bank of supercomputers to gain a small edge over others in identifying emerging weather patterns. Their models help them make good profits in the commodities markets. If economists’ DSGE models offered any insight into how economies work, they would be used in the same way. That they are not speaks volumes.

Mark Buchanan/Bloomberg

[h/t Jan Milch]

Splendid article!

The unsellability of DSGE — private-sector firms do not pay lots of money to use DSGE models — is a strong argument against DSGE. But it is not the most damning critique of it.

In the basic DSGE models the labour market is always cleared – responding to a changing interest rate, expected lifetime incomes, or real wages, the representative agent maximizes the utility function by varying her labour supply, money holding and consumption over time. Most importantly – if the real wage somehow deviates from its “equilibrium value,” the representative agent adjusts her labour supply, so that when the real wage is higher than its “equilibrium value,” labour supply is increased, and when the real wage is below its “equilibrium value,” labour supply is decreased.

In this model world, unemployment is always an optimal response to changes in labour market conditions. Hence, unemployment is totally voluntary. To be unemployed is something one optimally chooses to be.
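
What “optimal choice” means here is just the intratemporal first-order condition of the representative household: hours are set so that the marginal rate of substitution between consumption and leisure equals the real wage. A minimal sketch in Python (the functional form and parameter values are standard textbook choices, picked only for illustration):

```python
# Minimal sketch of the 'voluntary unemployment' logic: hours satisfy
#     chi * n**(1/nu) = w / c,
# the condition that the marginal rate of substitution equals the real wage,
# so a lower real wage simply means the household *chooses* fewer hours.
# Parameter values are made up for illustration.
def chosen_hours(real_wage, consumption=1.0, chi=1.0, nu=1.0):
    return (real_wage / (chi * consumption)) ** nu

for w in (1.0, 0.8, 0.5):
    print(f"real wage {w:.1f} -> hours 'chosen': {chosen_hours(w):.2f}")
```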

Although this picture of unemployment as a kind of self-chosen optimality strikes most people as utterly ridiculous, there are also, unfortunately, a lot of neoclassical economists out there who still think that price and wage rigidities are the prime movers behind unemployment. DSGE models basically explain variations in employment (and a fortiori output) by assuming that nominal wages are more flexible than prices – disregarding the lack of empirical evidence for this rather counterintuitive assumption.

Lowering nominal wages would not clear the labour market. Lowering wages – and possibly prices – could, perhaps, lower interest rates and increase investment. It would be much easier to achieve that effect by increasing the money supply. In any case, wage reductions were not seen as a general substitute for an expansionary monetary or fiscal policy. And even if potentially positive impacts of lowering wages exist, there are also negative impacts that weigh more heavily – management-union relations deteriorating, expectations of ongoing wage cuts causing investment to be delayed, debt deflation et cetera.

The classical proposition that lowering wages would lower unemployment and ultimately take economies out of depressions was ill-founded and basically wrong. Flexible wages would probably only make things worse by leading to erratic price-fluctuations. The basic explanation for unemployment is insufficient aggregate demand, and that is mostly determined outside the labour market.

Obviously it’s rather embarrassing that the kind of DSGE models “modern” macroeconomists use cannot incorporate such a basic fact of reality as involuntary unemployment. Of course, given that these are representative-agent models, this should come as no surprise. The kind of unemployment that occurs is voluntary, since it is only adjustments of the hours of work that these optimizing agents make to maximize their utility.

To me, this — the inability to explain involuntary unemployment — is the most damning critique of DSGE.

Added 23:00 GMT: Paul Davidson writes in a lovely comment on this article:

In explaining why Samuelson’s “old” neoclassical synthesis Keynesianism and New Keynesianism theories have nothing to do with Keynes’s General Theory of Employment, I have continually quoted Keynes [from page 257 of The General Theory] who wrote “For the Classical Theory has been accustomed to rest the supposedly self-adjusting character of the economic system on the assumed fluidity of money-wages; and when there is rigidity, to lay on this rigidity the blame for maladjustment … My difference from this theory is primarily a difference of analysis”. This is in a chapter entitled “Changes in Money Wages” where Keynes explains why changes in money wages cannot guarantee full employment.

When, in a published debate with Milton Friedman in the JPKE – later published as a book entitled Milton Friedman’s Monetary Framework: A Debate with His Critics – I pointed out this chapter of the General Theory to Milton, his response was that Davidson refers to many chapters in the back of the General Theory that have some interesting and relevant comments – but are not part of Keynes’s theory, while fixity of wages and prices is essential to understanding Keynes.

In a verbal discussion with Paul Samuelson many years ago, I pointed out this chapter to Samuelson. His response was he found the General Theory “unpalatable” but liked the policy implications and therefore he [Samuelson] merely assumed the General Theory was a Walrasian system with fixity of wages and prices!

Neoclassical figments of the imagination

22 January, 2014 at 09:41 | Posted in Economics | Comments Off on Neoclassical figments of the imagination

Someone is seen beating a dusty rug with a big stick.

From this one draws the conclusion that big sticks are well suited for cleaning dusty things, such as, say, windows. Most people would regard that conclusion as mad.

Why?

Presumably because we know enough about the nature of sticks and windows both to see the problem with this particular cleaning technique and to experiment with alternative ways of reaching the goal: a clean window.

The parable above is presented by the British economist Tony Lawson. He continues: let us now imagine a situation in which the stick-cleans-window model is tested over and over again, producing nothing but shards of glass. What if the conclusion then drawn is that one simply has to “try a little harder”?

Perhaps the wrong windows were used in the trials, perhaps success is just around the corner. Adherents of such a theory could undeniably be called dogmatists.

In any case, they would not be kept on as janitors for very long. But this, Lawson notes, is unfortunately a workable analogy for the state of contemporary orthodox economics. Well, except for the bit about the janitor getting sacked.

Lawson wrote this in the mid-1990s. Since then, rather little has happened. That is, as regards the premises of the economics that is taught at the universities, that dominates the commanding heights of the academic discipline, and that supplies the premises for political and administrative decisions.

In reality, quite a lot has happened: among other things, recurring financial and currency crises around the world, speculative “innovations” that helped inflate bubbles which burst and splattered hundreds of millions of people, and ongoing “crisis management” that pushes whole societies into long-lasting poverty and stagnation.

All this while leading economists have been writing out prescriptions built on fantasies about self-correcting markets. Stick meets window. Entire departments discuss swing techniques.

Ali Esbati/Magasinet Arena

Chant d’exil

21 January, 2014 at 13:14 | Posted in Varia | Comments Off on Chant d’exil

 

The dead shall not be silent but speak

20 January, 2014 at 19:43 | Posted in Politics & Society, Varia | Comments Off on The dead shall not be silent but speak
For Fadime Sahindal, born 2 April 1975 in Turkey, murdered 21 January 2002 in Sweden

In Sweden we have long uncritically embraced an unspecified and undefined multiculturalism. If by multiculturalism we mean that there are several different cultures in our society, this causes no problems. Then we are all multiculturalists. But if by multiculturalism we mean that cultural belonging and identity also carry with them specific moral, ethical and political rights and obligations, we are talking about something entirely different. Then we are talking about normative multiculturalism. And accepting normative multiculturalism also means tolerating unacceptable intolerance, since normative multiculturalism implies that the rights of specific cultural groups may be given higher standing than the universal human rights of the individual citizen – and thereby indirectly becomes a defence of those groups’ (possible) intolerance. In a normatively multiculturalist society, institutions and rules can be used to restrict people’s freedom on the basis of unacceptable and intolerant cultural values.

Normative multiculturalism means that individuals are unacceptably reduced to passive members of culture-bearing or identity-bearing groups. But tolerance does not mean that we must adopt a value-relativist attitude towards identity and culture. Those who in our society show by their actions that they do not respect other people’s rights cannot expect us to be tolerant towards them.

If we are to safeguard the achievements of modern democratic society, society cannot embrace a normative multiculturalism. In a modern democratic society the rule of law must apply – and apply to everyone!

Towards those in our society who want to force others to live according to their own religious, cultural or ideological beliefs and taboos, society should be intolerant. Towards those who want to force society to adapt its laws and rules to their own religion’s, culture’s or group’s interpretations, society should be intolerant.


THE DEAD

The dead shall not be silent but speak.
Scattered torment shall find its voice,
and when the rats of the cells and the murderers’ rifle butts
have turned into ashes and age-old dust,
the comet’s parabola and the gamble of the stars
shall still bear witness to those who fell against their wall:
washed in fire but not burned down to embers,
trampled and beaten but without a wound on their bodies,
and eyes that stared in terror shall open in peace,
and the dead shall not be silent but speak.

Of the dead there shall be no silence but speech.
Though maimed and strangled in the cell of power,
glassy-eyed and derided in cynical waiting rooms
where death has pasted up its propaganda of peace,
they shall rest long in the showcases of conscience,
embalmed by truth and washed in fire,
and those who have already fallen shall not be broken,
and whoever begged for mercy in a moment of oblivion
shall rise and bear witness to that which cannot be broken,
for the dead shall not be silent but speak.

No, the dead shall not be silent but speak.
Those who felt triumph on their necks shall raise their heads,
and those who were choked by smoke shall see clearly,
those who were tortured into madness shall flow like springs,
those who fell to their opposite shall themselves strike down,
those who were slain with lead shall slay with fire,
those who were hurled down by waves shall themselves become storm.
And the dead shall not be silent but speak.

                                           Erik Lindegren

The pernicious impact of the widening wealth gap

20 January, 2014 at 18:19 | Posted in Politics & Society | 1 Comment

The 85 richest people on the planet have accumulated as much wealth between them as half of the world’s population, political and financial leaders have been warned ahead of their annual gathering in the Swiss resort of Davos.

The tiny elite of multibillionaires, who could fit into a double-decker bus, have piled up fortunes equivalent to the wealth of the world’s poorest 3.5bn people, according to a new analysis by Oxfam. The charity condemned the “pernicious” impact of the steadily growing gap between a small group of the super-rich and hundreds of millions of their fellow citizens, arguing it could trigger social unrest.

It released the research on the eve of the World Economic Forum, starting on Wednesday, which brings together many of the most influential figures in international trade, business, finance and politics including David Cameron and George Osborne. Disparities in income and wealth will be high on its agenda, along with driving up international health standards and mitigating the impact of climate change.

Oxfam said the world’s richest 85 people boast a collective worth of $1.7trn (£1trn). Top of the pile is Carlos Slim Helu, the Mexican telecommunications mogul, whose family’s net wealth is estimated by Forbes business magazine at $73bn. He is followed by Bill Gates, the Microsoft founder and philanthropist, whose worth is put at $67bn and is one of 31 Americans on the list.

Nigel Morris/The Independent

[h/t Jan Milch]

Walked-out Harvard economist and George Bush advisor Greg Mankiw wrote an article last year on the “just deserts” of the one percent, arguing that a market economy is some kind of moral free zone where, if left undisturbed, people get what they “deserve.”

This should come as no surprise. Most neoclassical economists actually have a more or less Panglossian view of unfettered markets. Add to that the neoliberal philosophy of a Robert Nozick or a David Gauthier, and you get Mankiwian nonsense about growing inequality.

A society where we allow the inequality of incomes and wealth to increase without bounds sooner or later implodes. The cement that keeps us together erodes, and in the end we are left with nothing but people dipped in the ice-cold water of egoism and greed.

Top 30 Heterodox Economics Blogs

19 January, 2014 at 17:48 | Posted in Varia | 2 Comments

I. Post Keynesian Blogs
(1) Debt Deflation, Steve Keen
http://www.debtdeflation.com/blogs/

(2) Post Keynesian Economics Study Group
http://www.postkeynesian.net/
This is not strictly a blog, but it is a great resource!

(3) Real-World Economics Review Blog
http://rwer.wordpress.com/

(4) Naked Keynesianism
http://nakedkeynesianism.blogspot.com/

(5) Lars P. Syll’s Blog
https://larspsyll.wordpress.com/
Lars P. Syll’s blog is an excellent resource, and the posts are wide-ranging and frequent.

(6) Philip Pilkington, Fixing the Economists
http://fixingtheeconomists.wordpress.com/
Philip Pilkington (of Nakedcapitalism.com) has started blogging here again. A great blog.

(7) Thoughts on Economics, Robert Vienneau
http://robertvienneau.blogspot.com/
Robert Vienneau’s blog has lots of advanced posts on Post Keynesian economic theory.

(8) Unlearningeconomics Blog
http://unlearningeconomics.wordpress.com/
I believe “Unlearningeconomics” has wound down the blog recently, which is a pity because it was a great blog.

(9) Social Democracy for the 21st Century
http://socialdemocracy21stcentury.blogspot.com/

(10) Ramanan, The Case For Concerted Action
http://www.concertedaction.com/

(11) Yanis Varoufakis, Thoughts for the Post-2008 World
http://yanisvaroufakis.eu/

(12) Dr. Thomas Palley, PhD. in Economics (Yale University)
http://www.thomaspalley.com/
Unfortunately, Thomas Palley only has new posts infrequently, but it is a good read.

(13) Debtonation.org, Ann Pettifor blog
http://www.debtonation.org/

II. Modern Monetary Theory (MMT)/Neochartalism 
(14) Billy Blog, Bill Mitchell
http://bilbo.economicoutlook.net/blog/

(15) New Economic Perspectives
http://neweconomicperspectives.org/

(16) Mike Norman Economics Blog
http://mikenormaneconomics.blogspot.com/

(17) Warren Mosler, The Center of the Universe
http://moslereconomics.com/

(18) Centre of Full Employment and Equity (CofFEE)
http://e1.newcastle.edu.au/coffee/

III. Other Heterodox Blogs and Resources
(19) Prime, Policy Research in Macroeconomics
http://www.primeeconomics.org/

(20) Michael Hudson
http://michael-hudson.com/

(21) New Economics Foundation
http://www.neweconomics.org/

(22) Heteconomist.com
http://heteconomist.com/

(23) Econospeak Blog
http://econospeak.blogspot.com/

(24) James Galbraith
http://utip.gov.utexas.edu/JG/publications.html

(25) Robert Skidelsky’s Official Website
http://www.skidelskyr.com/

(26) The Other Canon
http://www.othercanon.org/

(27) Levy Economics Institute of Bard College
http://www.levyinstitute.org/

(28) Multiplier Effect, Levy Economics Institute Blog
http://www.multiplier-effect.org/

(29) John Quiggin
http://johnquiggin.com/

(30) The Progressive Economics Forum
http://www.progressive-economics.ca/

Lord Keynes

The Minsky moment of the Swedish housing bubble

17 January, 2014 at 20:20 | Posted in Economics | 1 Comment

The Swedish housing market and the Swedish economy have become much more fragile because of recent (post-1999) developments in the Swedish housing market. Houses have increasingly become assetized. Home ownership as well as mortgage indebtedness of Swedish households has increased, while the house price level (about +125%) has risen dramatically compared with either the wage level (about +45% for a 1.33-jobs, two-children family, courtesy of Eurostat) or the consumer price level (about +27%). This means that more households will experience larger declines in net worth when prices go down, while the probability of such an occurrence has increased exactly because of the high level of prices and the increase in household indebtedness. Which means that the remarks of Lars Svensson that the Swedish housing market is sound because house prices can be explained by fundamentals are beside the point. It’s not about a bubble, yes or no. It’s all about fragility …

And fragility has increased. A ‘Minsky moment’ can change the relation between fundamentals (incomes, the price level, interest rates, demographics, taxes) and the amount which households are allowed to lend overnight …

When, after such a moment, house price decreases lower the perceived collateral value of other houses, this can easily lead to a deflationary house price spiral which will have larger consequences when mortgage debt levels of households are higher and more households have mortgage debt. The Netherlands post Lehman are a perfect example … Dutch economists had an excuse. They had probably not yet read Minsky while the Reinhart and Rogoff ‘this time is different’ book, which spells out the inherent historical instability of our monetary system, still had to appear. But Svensson has not. And surely not as such events have happened before and even as recently as around 1990, in Sweden.

Merijn Knibbe

On limiting model assumptions in econometrics (wonkish)

17 January, 2014 at 11:05 | Posted in Statistics & Econometrics | 3 Comments

In Andrew Gelman’s and Jennifer Hill’s Data Analysis Using Regression and Multilevel/Hierarchical Models, the authors list the assumptions of the linear regression model. At the top of the list are validity and additivity/linearity, followed by different assumptions pertaining to error characteristics.

Yours truly can’t but concur, especially on the “decreasing order of importance” of the assumptions. But then, of course, one really has to wonder why econometrics textbooks — almost invariably — turn this order of importance upside-down and don’t have more thorough discussions of the overriding importance of Gelman and Hill’s first two points …

Since econometrics doesn’t content itself with only making “optimal predictions,” but also aspires to explain things in terms of causes and effects, econometricians need loads of assumptions — and the most important of these are validity and additivity.
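
A toy example (simulated data in Python with NumPy) of why the ordering matters: when the additivity/linearity assumption fails, the regression does not merely lose a little precision, it reports an “effect” that misrepresents the data-generating process, no matter how well behaved the error term is.

```python
# Toy illustration: fit a straight line to data generated by a non-linear,
# non-additive relationship. The simulated setup is invented for the example.
import numpy as np

rng = np.random.default_rng(3)
n = 1_000
x = rng.uniform(-2, 2, n)
y = 1.0 + 2.0 * x**2 + rng.normal(size=n)   # true relation is quadratic

X = np.column_stack([np.ones(n), x])
intercept, slope = np.linalg.lstsq(X, y, rcond=None)[0]

# the linear model reports essentially no effect of x, even though the true
# marginal effect (4x) ranges from -8 to +8 over the sample
print("estimated linear 'effect' of x:", round(slope, 3))
```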

Let me take the opportunity to cite one of my favourite introductory statistics textbooks on one further reason these assumptions are made — and why they ought to be much more argued for on both epistemological and ontological grounds when used (emphasis added):

In a hypothesis test … the sample comes from an unknown population. If the population is really unknown, it would suggest that we do not know the standard deviation, and therefore, we cannot calculate the standard error. To solve this dilemma, we have made an assumption. Specifically, we assume that the standard deviation for the unknown population (after treatment) is the same as it was for the population before treatment.

Actually this assumption is the consequence of a more general assumption that is part of many statistical procedures. The general assumption states that the effect of the treatment is to add a constant amount to … every score in the population … You should also note that this assumption is a theoretical ideal. In actual experiments, a treatment generally does not show a perfect and consistent additive effect.

A standard view among econometricians is that their models — and the causality they may help us to detect — are only in the mind. From a realist point of view, this is rather untenable. The reason we as scientists are interested in causality is that it’s a part of the way the world works. We represent the workings of causality in the real world by means of models, but that doesn’t mean that causality isn’t a fact pertaining to relations and structures that exist in the real world. If it was only “in the mind,” most of us couldn’t care less.

The econometricians’ nominalist-positivist view of science and models is the belief that science can only deal with observable regularity patterns of a more or less lawlike kind. Only data matters, and trying to (ontologically) go beyond observed data in search of the underlying real factors and relations that generate the data is not admissible. Everything has to take place in the econometric mind’s model, since the real factors and relations, according to the econometric (epistemologically based) methodology, are beyond reach: they allegedly are both unobservable and unmeasurable. This also means that instead of treating the model-based findings as interesting clues for digging deeper into real structures and mechanisms, they are treated as the end points of the investigation.

The critique put forward here is in line with what mathematical statistician David Freedman writes in Statistical Models and Causal Inference (2010):

In my view, regression models are not a particularly good way of doing empirical work in the social sciences today, because the technique depends on knowledge that we do not have. Investigators who use the technique are not paying adequate attention to the connection – if any – between the models and the phenomena they are studying. Their conclusions may be valid for the computer code they have created, but the claims are hard to transfer from that microcosm to the larger world …

Given the limits to present knowledge, I doubt that models can be rescued by technical fixes. Arguments about the theoretical merit of regression or the asymptotic behavior of specification tests for picking one version of a model over another seem like the arguments about how to build desalination plants with cold fusion as the energy source. The concept may be admirable, the technical details may be fascinating, but thirsty people should look elsewhere …

Causal inference from observational data presents many difficulties, especially when underlying mechanisms are poorly understood. There is a natural desire to substitute intellectual capital for labor, and an equally natural preference for system and rigor over methods that seem more haphazard. These are possible explanations for the current popularity of statistical models.

Indeed, far-reaching claims have been made for the superiority of a quantitative template that depends on modeling – by those who manage to ignore the far-reaching assumptions behind the models. However, the assumptions often turn out to be unsupported by the data. If so, the rigor of advanced quantitative methods is a matter of appearance rather than substance.

Econometrics is basically a deductive method. Given the assumptions (such as manipulability, transitivity, separability, additivity, linearity etc) it delivers deductive inferences. The problem, of course, is that we will never completely know when the assumptions are right. Real target systems are seldom epistemically isomorphic to axiomatic-deductive models/systems, and even if they were, we still have to argue for the external validity of the conclusions reached from within these epistemically convenient models/systems. Causal evidence generated by statistical/econometric procedures like regression analysis may be valid in “closed” models, but what we usually are interested in, is causal evidence in the real target system we happen to live in.

Most advocates of econometrics and regression analysis want to have deductively automated answers to fundamental causal questions. Econometricians think – as David Hendry expressed it in Econometrics – alchemy or science? (1980) – they “have found their Philosophers’ Stone; it is called regression analysis and is used for transforming data into ‘significant results!'” But as David Freedman poignantly notes in Statistical Models: “Taking assumptions for granted is what makes statistical techniques into philosophers’ stones.” To apply “thin” methods we have to have “thick” background knowledge of what’s going on in the real world, and not in idealized models. Conclusions can only be as certain as their premises – and that also applies to the quest for causality in econometrics and regression analysis.

Without requirements of depth, explanations most often do not have practical significance. Only if we search for and find fundamental structural causes, can we hopefully also take effective measures to remedy problems like e.g. unemployment, poverty, discrimination and underdevelopment. A social science must try to establish what relations exist between different phenomena and the systematic forces that operate within the different realms of reality. If econometrics is to progress, it has to abandon its outdated nominalist-positivist view of science and the belief that science can only deal with observable regularity patterns of a more or less law-like kind. Scientific theories ought to do more than just describe event-regularities and patterns – they also have to analyze and describe the mechanisms, structures, and processes that give birth to these patterns and eventual regularities.

Limiting model assumptions in economic science always have to be closely examined. If the mechanisms or causes that we isolate and handle in our models are to be of more than limited value for our understanding, explanations or predictions of real economic systems, we have to be able to show that they are stable in the sense that they do not change when we “export” them to our “target systems”, and that they hold not only under ceteris paribus conditions. As the always eminently quotable Keynes writes (emphasis added) in Treatise on Probability (1921):

The kind of fundamental assumption about the character of material laws, on which scientists appear commonly to act, seems to me to be [that] the system of the material universe must consist of bodies … such that each of them exercises its own separate, independent, and invariable effect, a change of the total state being compounded of a number of separate changes each of which is solely due to a separate portion of the preceding state … Yet there might well be quite different laws for wholes of different degrees of complexity, and laws of connection between complexes which could not be stated in terms of laws connecting individual parts … If different wholes were subject to different laws qua wholes and not simply on account of and in proportion to the differences of their parts, knowledge of a part could not lead, it would seem, even to presumptive or probable knowledge as to its association with other parts … These considerations do not show us a way by which we can justify induction … [p. 427] No one supposes that a good induction can be arrived at merely by counting cases. The business of strengthening the argument chiefly consists in determining whether the alleged association is stable, when accompanying conditions are varied … [p. 468] In my judgment, the practical usefulness of those modes of inference … on which the boasted knowledge of modern science depends, can only exist … if the universe of phenomena does in fact present those peculiar characteristics of atomism and limited variety which appears more and more clearly as the ultimate result to which material science is tending.

Econometrics may be an informative tool for research. But if its practitioners do not investigate and make an effort to provide a justification for the credibility of the assumptions on which they erect their building, it will not fulfill its tasks. There is a gap between its aspirations and its accomplishments, and without more supportive evidence to substantiate its claims, critics will continue to consider its ultimate argument a mixture of rather unhelpful metaphors and metaphysics. Maintaining that economics is a science in the “true knowledge” business, yours truly remains a skeptic of the pretences and aspirations of econometrics. So far, I cannot really see that it has yielded very much in terms of relevant, interesting economic knowledge.

The marginal return on its ever higher technical sophistication in no way makes up for the lack of serious under-labouring of its deeper philosophical and methodological foundations that Keynes already complained about. The rather one-sided emphasis on usefulness and its concomitant instrumentalist justification cannot hide the fact that neither Haavelmo nor the legions of probabilistic econometricians following in his footsteps give supportive evidence for their considering it “fruitful to believe” in the possibility of treating unique economic data as the observable results of random drawings from an imaginary sampling of an imaginary population. After having analyzed some of its ontological and epistemological foundations, I cannot but conclude that econometrics on the whole has not delivered “truth”. And I doubt if it has ever been the intention of its main protagonists.

Our admiration for technical virtuosity should not blind us to the fact that we have to have a cautious attitude towards probabilistic inferences in economic contexts. Science should help us penetrate to “the true process of causation lying behind current events” and disclose “the causal forces behind the apparent facts” [Keynes 1971-89 vol XVII:427]. We should look out for causal relations, but econometrics can never be more than a starting point in that endeavour, since econometric (statistical) explanations are not explanations in terms of mechanisms, powers, capacities or causes. Firmly stuck in an empiricist tradition, econometrics is only concerned with the measurable aspects of reality. But there is always the possibility that there are other variables – of vital importance and, although perhaps unobservable and non-additive, not necessarily epistemologically inaccessible – that were not considered for the model. The variables that were included can hence never be guaranteed to be more than potential causes, not real causes. A rigorous application of econometric methods in economics really presupposes that the phenomena of our real world economies are ruled by stable causal relations between variables. A perusal of the leading econom(etr)ic journals shows that most econometricians still concentrate on fixed parameter models and that parameter values estimated in specific spatio-temporal contexts are presupposed to be exportable to totally different contexts. To warrant this assumption one, however, has to convincingly establish that the targeted acting causes are stable and invariant so that they maintain their parametric status after the bridging. The endemic lack of predictive success of the econometric project indicates that this hope of finding fixed parameters is a hope for which there really is no other ground than hope itself.
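
A toy example of the exportability problem (simulated data in Python with NumPy, all numbers invented): a parameter estimated in one “regime” simply stops being the parameter once the regime shifts, and forecasts built on the exported value go badly wrong.

```python
# Toy illustration of parameter non-invariance across regimes. The 'structural'
# coefficient is 0.9 in regime 1 and 0.2 in regime 2; both values are invented.
import numpy as np

rng = np.random.default_rng(4)
n = 500
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y1 = 0.9 * x1 + rng.normal(scale=0.5, size=n)   # regime 1
y2 = 0.2 * x2 + rng.normal(scale=0.5, size=n)   # regime 2, after a break

beta_regime1 = (x1 @ y1) / (x1 @ x1)            # estimated on regime 1 only
exported_errors = y2 - beta_regime1 * x2        # then 'exported' to regime 2

print("parameter estimated in regime 1:", round(beta_regime1, 2))
print("forecast RMSE in regime 2 using the exported parameter:",
      round(np.sqrt((exported_errors ** 2).mean()), 2))
print("forecast RMSE using the true regime-2 parameter:",
      round(np.sqrt(((y2 - 0.2 * x2) ** 2).mean()), 2))
```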

Real-world social systems are not governed by stable causal mechanisms or capacities. The kinds of “laws” and relations that econometrics has established are laws and relations about entities in models that presuppose causal mechanisms being atomistic and additive. When causal mechanisms operate in real-world social target systems, they do so only in ever-changing and unstable combinations where the whole is more than a mechanical sum of parts. If economic regularities obtain, they do so (as a rule) only because we engineered them for that purpose. Outside man-made “nomological machines” they are rare, or even non-existent. Unfortunately, that also makes most of the achievements of econometrics – as most contemporary endeavours of mainstream economic theoretical modeling – rather useless.

Forecasting and prediction — the illusion of control

16 January, 2014 at 19:23 | Posted in Economics, Statistics & Econometrics | 1 Comment

 

Yours truly on microfoundations in Real-World Economics Review

15 January, 2014 at 14:20 | Posted in Economics | 1 Comment

Yours truly has a paper on microfoundations — Micro versus Macro — in the latest issue of Real-World Economics Review (January 2014).

To read the other papers of the new issue of RWER — click here.

Paul Krugman finally admits there is a housing bubble in Sweden

15 January, 2014 at 10:47 | Posted in Economics | 2 Comments

In a post up on his blog a couple of months ago, Paul Krugman wrote about “Mysterious Swedes” (emphasis added):

The Riksbank raised rates sharply even though inflation was below target and falling, and has only partially reversed the move even though the country is now flirting with Japanese-style deflation. Why? Because it fears a housing bubble.

This kind of fits the H.L. Mencken definition of Puritanism: “The haunting fear that someone, somewhere, may be happy.” … The underlying deficiency of demand will call for pedal-to-the-metal monetary policy as a norm. But bubbles will happen — and central bankers, always looking for reasons to snatch away punch bowls, will use them as excuses to tighten.

Now Paul Krugman — interviewed (the video starts after a short commercial) while attending a conference in Sweden last week — admits that we have a housing bubble in Sweden. Good. Or as Keynes had it: “When the facts change, I change my mind. What do you do, sir?”

The core assumption of ‘modern’ macro — totally FUBAR

14 January, 2014 at 23:15 | Posted in Economics | 5 Comments

A couple of months ago yours truly had a post up criticizing sorta-kinda “New Keynesian” Paul Krugman for arguing that the problem with the academic profession is that some macro-economists aren’t “bothered to actually figure out” how the New Keynesian model with its Euler conditions —  “based on the assumption that people have perfect access to capital markets, so that they can borrow and lend at the same rate” — really works. According to Krugman, this shouldn’t  be hard at all — “at least it shouldn’t be for anyone with a graduate training in economics.”

If people (not the representative agent) at least sometimes can’t help being off their labour supply curve — as in the real world — then what good are all those Euler equations that you find ad nauseam in “New Keynesian” macro models going to do us?

Noah Smith now has an extremely interesting piece up on his blog that essentially corroborates yours truly’s disdain for the DSGE modelers’ obsession with Euler equations. As with so many other assumptions in ‘modern’ macroeconomics, the Euler equations don’t fit reality:

For the uninitiated, the Consumption Euler Equation is sort of like the Flux Capacitor that powers all modern “DSGE” macro models … Basically, it says that how much you decide to consume today vs. tomorrow is determined by the interest rate (which is how much you get paid to put off your consumption til tomorrow), the time preference rate (which is how impatient you are) and your expected marginal utility of consumption (which is your desire to consume in the first place). When the equation appears in a macro model, “you” typically means “the entire economy”.

This equation underlies every DSGE model you’ll ever see, and drives much of modern macro’s idea of how the economy works. So why is Eichenbaum, one of the deans of modern macro, pooh-poohing it?

Simple: Because it doesn’t fit the data. The thing is, we can measure people’s consumption, and we can measure interest rates. If we make an assumption about people’s preferences, we can just go see if the Euler Equation is right or not!

[Martin] Eichenbaum was kind enough to refer me to the literature that tries to compare the Euler Equation to the data. The classic paper is Hansen and Singleton (1982), which found little support for the equation. But Eichenbaum also pointed me to this 2006 paper by Canzoneri, Cumby, and Diba of Georgetown (published version here), which provides simpler but more damning evidence against the Euler Equation …

[T]he Euler Equation says that if interest rates are high, you put off consumption more. That makes sense, right? Money markets basically pay you not to consume today. The more they pay you, the more you should keep your money in the money market and wait to consume until tomorrow.

But what Canzoneri et al. show is that this is not how people behave. The times when interest rates are high are times when people tend to be consuming more, not less.

OK, but what about that little assumption that we know people’s preferences? What if we’ve simply put the wrong utility function into the Euler Equation? Could this explain why people consume more during times when interest rates are high?

Well, Canzoneri et al. try out other utility functions that have become popular in recent years. The most popular alternative is habit formation … But when Canzoneri et al. put in habit formation, they find that the Euler Equation still contradicts the data …

Canzoneri et al. experiment with other types of preferences, including the other most popular alternative … No matter what we assume that people want, their behavior is not consistent with the Euler Equation …

If this paper is right … then essentially all modern DSGE-type macro models currently in use are suspect. The consumption Euler Equation is an important part of nearly any such model, and if it’s just wrong, it’s hard to see how those models will work.

Amen.
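
To make the equation Smith describes a bit more concrete, here is a minimal sketch — my own illustration, assuming CRRA utility and ignoring variance terms, not anything taken from the cited papers — of what the consumption Euler equation claims about consumption growth and interest rates:

```python
import numpy as np

# Consumption Euler equation with CRRA utility u(c) = c**(1 - gamma) / (1 - gamma):
#
#     c_t**(-gamma) = beta * (1 + r_t) * E_t[ c_{t+1}**(-gamma) ]
#
# Log-linearising (and ignoring variance/covariance terms) gives
#
#     E_t[ d log c_{t+1} ] ~ (log(1 + r_t) + log(beta)) / gamma
#
# i.e. expected consumption growth should be higher when real interest rates are
# high -- exactly the implication that Canzoneri et al. find at odds with the data.
beta, gamma = 0.99, 2.0          # hypothetical discount factor and risk aversion

def euler_implied_growth(r):
    """Consumption growth implied by the (linearised) Euler equation."""
    return (np.log1p(r) + np.log(beta)) / gamma

for r in [0.00, 0.02, 0.05]:
    print(f"real rate {r:.2%} -> implied consumption growth {euler_implied_growth(r):.2%}")
```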

Why economic forecasting is such a worthless waste of time

14 January, 2014 at 19:10 | Posted in Economics, Statistics & Econometrics | 1 Comment

Mainstream neoclassical economists often maintain – usually with reference to the methodological instrumentalism of Milton Friedman – that it doesn’t matter whether the assumptions of the models they use are realistic or not. What matters is whether the predictions are right.

If so, then the only conclusion we can draw is – throw away the garbage! Because, oh dear, oh dear, how wrong they have been!

When Simon Potter a couple of years ago analyzed the predictions that the Federal Reserve Bank of New York had made of the development of real GDP and unemployment for the years 2007-2010, it turned out that the forecasts were off by 5.9 and 4.4 percentage points respectively – the latter equivalent to more than 6 million unemployed:

Economic forecasters never expect to predict precisely. One way of measuring the accuracy of their forecasts is against previous forecast errors. When judged by forecast error performance metrics from the macroeconomic quiescent period that many economists have labeled the Great Moderation, the New York Fed research staff forecasts, as well as most private sector forecasts for real activity before the Great Recession, look unusually far off the mark.

One source for such metrics is a paper by Reifschneider and Tulip (2007). They analyzed the forecast error performance of a range of public and private forecasters over 1986 to 2006 (that is, roughly the period that most economists associate with the Great Moderation in the United States).

On the basis of their analysis, one could have expected that an October 2007 forecast of real GDP growth for 2008 would be within 1.3 percentage points of the actual outcome 70 percent of the time. The New York Fed staff forecast at that time was for growth of 2.6 percent in 2008. Based on the forecast of 2.6 percent and the size of forecast errors over the Great Moderation period, one would have expected that 70 percent of the time, actual growth would be within the 1.3 to 3.9 percent range. The current estimate of actual growth in 2008 is -3.3 percent, indicating that our forecast was off by 5.9 percentage points.

Using a similar approach to Reifschneider and Tulip but including forecast errors for 2007, one would have expected that 70 percent of the time the unemployment rate in the fourth quarter of 2009 should have been within 0.7 percentage point of a forecast made in April 2008. The actual forecast error was 4.4 percentage points, equivalent to an unexpected increase of over 6 million in the number of unemployed workers. Under the erroneous assumption that the 70 percent projection error band was based on a normal distribution, this would have been a 6 standard deviation error, a very unlikely occurrence indeed.

In other words – the “rigorous” and “precise” mathematical-statistical forecasting models were wrong. And the rest of us have to pay for it.
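
The “6 standard deviation” arithmetic in the quote is easy to check. A quick sketch, assuming (as the quote does, for the sake of argument) that the 70 percent error band comes from a normal distribution:

```python
from statistics import NormalDist

# A central 70% band of +/- 0.7 percentage points pins down the implied sigma.
z70 = NormalDist().inv_cdf(0.5 + 0.70 / 2)   # z such that P(-z < Z < z) = 0.70
sigma = 0.7 / z70                            # implied forecast-error standard deviation

error = 4.4                                  # actual unemployment forecast error, in pp
print(f"implied sigma ~ {sigma:.2f} pp; the error was ~ {error / sigma:.1f} sigmas")
# Roughly six-and-a-half standard deviations -- an event that should essentially
# never happen under normality, which is rather the point: the distributional
# assumption itself is what fails.
```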

Potter is not the only one who lately has criticized the forecasting business. British economist John Kay comes to essentially the same conclusion when scrutinizing it from a somewhat more theoretical angle:

The analysis of probability originates in games of chance, in which the rules are sufficiently simple and well-defined that the game can be repeated in more or less identical form over and over again. If you toss a fair coin repeatedly, it will come up heads about 50 per cent of the time. If you can be bothered, you can verify that fact empirically. Perhaps more strikingly, the theory of probability tells you that if you repeatedly toss that coin 50 times you will get 23 or more heads about 67 per cent of the time, and you can verify that prediction empirically too.

It is a stretch, but perhaps not a very long stretch, to extend this analysis of frequency to single events, and to say that the probability that England will win the toss in the fifth Ashes cricket test against Australia is 50 per cent. And that the probability the home side will win the toss at least 23 times in a decade of five-test match series is 67 per cent. Tosses to start sporting contests are repeated at similar events, and theory and experience validate the probabilistic approach.


Perhaps one could stretch the approach further and apply it to the probability of rain … Despite difficulties – and popular derision comparable to that experienced by economic forecasters – weather forecasters do rather well …

In contrast, severe recessions, property bubbles and bank failures are relatively infrequent, and calibration by economists has come to mean tweaking models to better explain the past rather than revising them to better predict the future – a particularly dangerous methodology when there are many reasons to think that the underlying structure of the economy is in a state of constant flux.

The further one moves from mechanisms that are well understood and events that are frequently repeated, the less appropriate is the use of probabilistic language. What does it mean to say: “I am 90 per cent certain that the extinction of the dinosaurs was caused by an object hitting the earth at Yucatán?” Not, I think, that on 90 per cent of occasions on which the dinosaurs were wiped out, the cause was an asteroid landing in what is now Mexico. There is a difference – often elided – between a probability and a degree of confidence in a forecast. It is one reason why we are better at avoiding drizzle than financial crises.
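
Kay’s coin-tossing frequencies are easy to verify by direct computation. A minimal sketch using the exact binomial distribution (for what it is worth, the roughly two-thirds figure corresponds to getting more than 23 heads in 50 tosses):

```python
from math import comb

def prob_at_least(k, n=50, p=0.5):
    """Exact probability of at least k heads in n tosses of a coin with heads probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(f"P(at least 23 heads in 50 tosses) = {prob_at_least(23):.3f}")   # ~0.76
print(f"P(at least 24 heads in 50 tosses) = {prob_at_least(24):.3f}")   # ~0.66 -- Kay's two-thirds
```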

In a similar vein John Mingers writes:

It is clearly the case that experienced modellers could easily come up with significantly different models based on the same set of data thus undermining claims to researcher-independent objectivity. This has been demonstrated empirically by Magnus and Morgan (1999) who conducted an experiment in which an apprentice had to try to replicate the analysis of a dataset that might have been carried out by three different experts (Leamer, Sims, and Hendry) following their published guidance. In all cases the results were different from each other, and different from that which would have been produced by the expert, thus demonstrating the importance of tacit knowledge in statistical analysis.

Magnus and Morgan conducted a further experiment which involved eight expert teams, from different universities, analysing the same sets of data each using their own particular methodology. The data concerned the demand for food in the US and in the Netherlands and was based on a classic study by Tobin (1950) augmented with more recent data. The teams were asked to estimate the income elasticity of food demand and to forecast per capita food consumption. In terms of elasticities, the lowest estimates were around 0.38 whilst the highest were around 0.74 – clearly vastly different especially when remembering that these were based on the same sets of data. The forecasts were perhaps even more extreme – from a base of around 4000 in 1989 the lowest forecast for the year 2000 was 4130 while the highest was nearly 18000!

The empirical and theoretical evidence is clear. Predictions and forecasts are inherently difficult to make in a socio-economic domain where genuine uncertainty and unknown unknowns often rule the roost. The real processes that underlie the time series that economists use to make their predictions and forecasts do not conform to the assumptions made in the applied statistical and econometric models. A fortiori, much less is predictable than is standardly — and uncritically — assumed. The forecasting models fail to a large extent because the kind of uncertainty that faces humans and societies makes the models, strictly speaking, inapplicable. The future is inherently unknowable — and using statistics, econometrics, decision theory or game theory does not in the least overcome this ontological fact. The economic future is not something that we normally can predict in advance. Better, then, to accept that as a rule “we simply do not know.”

This also further underlines how important it is in social sciences — and economics in particular — to incorporate Keynes’s far-reaching and incisive analysis of induction and evidential weight in his seminal A Treatise on Probability (1921).

According to Keynes we live in a world permeated by unmeasurable uncertainty – not quantifiable stochastic risk – which often forces us to make decisions based on anything but “rational expectations.” Keynes rather thinks that we base our expectations on the confidence or “weight” we put on different events and alternatives. To Keynes, expectations are a question of weighing probabilities by “degrees of belief,” beliefs that often have precious little to do with the kind of stochastic probabilistic calculations made by the rational agents as modeled by “modern” social sciences. And often we “simply do not know.”

How strange that social scientists and mainstream economists, as a rule, do not even touch upon these aspects of scientific methodology that seem so fundamental and important for anyone trying to understand how we learn and orient ourselves in an uncertain world. An educated guess as to why this is so would be that Keynes’s concepts cannot be squeezed into a single calculable numerical “probability.” In the quest for measurable quantities, one turns a blind eye to qualities and looks the other way.

So why do companies and governments continue with this expensive, but obviously worthless, activity?

A couple of weeks ago yours truly was interviewed by a public radio journalist working on a series on Great Economic Thinkers. We were discussing the monumental failures of the predictions-and-forecasts business. But — the journalist asked — if these cocksure economists with their “rigorous” and “precise” mathematical-statistical-econometric models are so wrong again and again — why do they persist in wasting time on it?

In a discussion on uncertainty and the hopelessness of accurately modeling what will happen in the real world — in M. Szenberg’s Eminent Economists: Their Life Philosophies — Nobel laureate Kenneth Arrow comes up with what is probably the most plausible reason:

It is my view that most individuals underestimate the uncertainty of the world. This is almost as true of economists and other specialists as it is of the lay public. To me our knowledge of the way things work, in society or in nature, comes trailing clouds of vagueness … Experience during World War II as a weather forecaster added the news that the natural world was also unpredictable. An incident illustrates both uncertainty and the unwillingness to entertain it. Some of my colleagues had the responsibility of preparing long-range weather forecasts, i.e., for the following month. The statisticians among us subjected these forecasts to verification and found they differed in no way from chance. The forecasters themselves were convinced and requested that the forecasts be discontinued. The reply read approximately like this: ‘The Commanding General is well aware that the forecasts are no good. However, he needs them for planning purposes.’

Statistical inference and the poverty of econometric assumptions

14 January, 2014 at 14:50 | Posted in Statistics & Econometrics | 1 Comment

[T]he authors take as their text a principle of Haavelmo that every testable economic theory should provide a precise formulation of the joint probability distribution of all observable variables to which it refers. It can be argued, however, that Haavelmo’s principle is sounder than the program for realizing it worked out in this book. For, as noted above, what we are asked to assume is that the precept can be carried out in economics by techniques which are established for linear systems, serially independent disturbances, error-free observations, and samples of a size not generally obtainable in economic time series today. In view of such limitations, anyone using these techniques must find himself appealing at every stage less to what theory is saying to him than to what solvability requirements demand of him. Certain it is that the empirical work of this school yields numerous instances in which open questions of economics are resolved in a way that saves a mathematical theorem.
Still, there are doubtless many who will be prepared to make the assumptions required by this theory on pragmatic grounds. We cannot know in advance how well or badly they will work, and they commend themselves on the practical test of convenience. Moreover, as the authors point out, a great many models are compatible with what we know in economics – that is to say, do not violate any matters on which economists are agreed. Attractive as this view is, it fails to draw a necessary distinction between what is assumed and what is merely proposed as hypothesis. This distinction is forced upon us by an obvious but neglected fact of statistical theory: the matters “assumed” are put wholly beyond test, and the entire edifice of conclusions (e.g., about identifiability, optimum properties of the estimates, their sampling distributions, etc.) depends absolutely on the validity of these assumptions. The great merit of modern statistical inference is that it makes exact and efficient use of what we know about reality to forge new tools of discovery, but it teaches us painfully little about the efficacy of these tools when their basis of assumptions is not satisfied. It may be that the approximations involved in the present theory are tolerable ones; only repeated attempts to use them can decide that issue. Evidence exists that trials in this empirical spirit are finding a place in the work of the econometric school, and one may look forward to substantial changes in the methodological presumptions that have dominated this field until now.

Millard Hastay

Dumb and dumber in DN on school competition and independent schools

13 January, 2014 at 21:44 | Posted in Education & School | Comments Off on Dumb and dumber in DN on school competition and independent schools

 
Gabriel Heller Sahlgren, Philip Booth and Henrik Jordahl write today on DN’s op-ed page, apropos the PISA report:

At the beginning of December last year it emerged that Swedish pupils continue to fall like stones in the international PISA survey … At the same time, voices in the debate demand that we adopt a “best practice” approach and copy recipes for success from other countries …

These proposals are often based on a chapter in the OECD’s PISA report that is said to explain why some education systems perform better than others. This may seem reasonable. Surely the report that ranks the countries should also have answers as to what explains their successes and failures?

But unfortunately it is not that simple. The chapter in question consists largely of simple correlation analyses. And just because there is a statistical association, there need not be a causal relationship. This is also pointed out by the authors … To determine what causes what, far better methods are required than those used in the OECD report.

One example of how wrong things can go concerns the effects of school choice and competition. In the PISA report we read that there is no relation between countries’ results and the share of pupils in independent schools. The same conclusion is drawn by Andreas Schleicher, deputy head of education at the OECD, who recently claimed that the absence of such a relation shows that competition does not raise pupils’ achievement. Swedish educationalists and commentators on the left have gone a step further and claimed that school choice lies behind the fall in knowledge levels in international surveys. One of them is Magnus Oskarsson, project leader for PISA in Sweden, even though the parent organisation, the OECD, thus finds no support for this.

At the same time, both of these claims are contradicted by the economics research on schools … The research methods used are not entirely beyond objection, but they are far better than those used in the OECD’s own analyses.

Why, then, do commentators and politicians keep referring to the OECD report and to the particular features of high- and low-performing countries in their reform proposals? One reason is that they probably do not know better. Analysing research requires, first and foremost, an understanding of what is good and what is bad – and rigorous studies, in economics and other disciplines, are unfortunately not particularly easy to digest.

It also takes time to immerse oneself in what many consider boring and complicated analyses of data. It is quicker and more fun to read simple descriptions of the results and look at graphs than to study methodology and regression tables. But the latter is necessary to get at the causal relationships.

Let me begin by stating that I fully share the authors’ view as regards our limited possibilities of drawing causal conclusions from pure correlations.

So far I am with them.

But — once again we basically get to hear the same old self-congratulatory tune — the economics research on schools “shows” (hedged with the non-committal remark that the research methods used are said to be not “entirely beyond objection”) that more independent schools lead to better results. The problem remains, for at bottom what is being said — the invoked “rigorous studies” notwithstanding — is just as questionable as the “left-wing” interpretations of the PISA results that the authors criticize!

Let me explain why I think that what the invoked “economics research on schools” says about school competition and independent schools is just as “wrong” as the “left-wing interpretations” — and at the same time try to sort out what research and data really do say about the effects of school competition and independent schools on school and pupil performance.

When Sweden carried out its independent-school reform in 1992, families were thereby, on the whole, given greater opportunity to choose for themselves where to send their children to school. In line with the school voucher advocated by Milton Friedman as early as the 1950s, the establishment of independent schools was substantially facilitated.

As a result of this reform, independent schools have – not least in recent years – markedly increased their share of the school market. Today more than 10% of the country’s compulsory-school pupils are educated at an independent school, and almost 25% of upper-secondary pupils receive their education at independent schools.

Geographically, however, the expansion of independent schools has been very uneven. Today slightly more than a third of the municipalities lack independent schools at the compulsory level, and two thirds of the municipalities lack independent schools at the upper-secondary level. And on average, pupils at independent schools have parents with higher levels of education and income than pupils at municipal schools.

Against this background, among others, it has become interesting for researchers, education providers, politicians and others to try to investigate what consequences the independent-school reform has had.

Such an assessment is, of course, not altogether easy to make, given how multifaceted and wide-ranging the goals set for schooling in Sweden are.

One common goal that has been focused on is pupil achievement in the form of attaining various knowledge levels. When the reform was carried out, one of the frequently advanced arguments was that independent schools would raise pupils’ knowledge levels, both in the independent schools themselves (“the direct effect”) and – via competitive pressure – in the municipal schools (“the indirect effect”). The quantitative measures used for these evaluations are invariably grades and/or results on national tests.

At first glance it may seem trivial to carry out such investigations. Surely – one might think – it is just a matter of pulling out the data and running the necessary statistical tests and regressions. It is not quite that simple. In fact, it is very difficult to obtain unambiguous causal answers to this type of question.

To be able to show unambiguously that there are effects, and that these are the result of precisely the introduction of independent schools – and nothing else – one has to identify and then control for the influence of all “confounding background variables” such as parents’ education, socio-economic status, ethnicity, place of residence, religion, etc. – so that we can be sure that it is not differences in these variables that are, in a fundamental sense, the real underlying causal explanations of any average differences in outcomes.

Ideally, in order really to be able to carry out such a causal analysis, we would want to conduct an experiment in which we pick out a group of pupils, let them attend independent schools, and after some time evaluate the effects on their knowledge levels. Then we would turn back the clock and let the same group of pupils instead attend municipal schools, and after some time evaluate the effects on their knowledge levels. By being able, in this experimental fashion, to isolate and manipulate the variables under investigation so that we can really pin down the unique effect of independent schools – and nothing else – we would obtain an exact answer to our question.

Since the arrow of time only runs in one direction, everyone realizes that this experiment can never be carried out in reality.

The next-best alternative would instead be to randomly divide pupils into groups: one with pupils who attend independent schools (“treatment”) and one with pupils who attend municipal schools (“control”). Through the randomization, the background variables are assumed to be, on average, identically distributed in the two groups (so that the pupils in the two groups do not, on average, differ in either observable or unobservable respects), thereby making possible a causal analysis in which any average differences between the groups can be attributed to (“explained by”) whether one attended an independent or a municipal school.

Among researchers who advocate randomized studies (“randomized controlled trials”) – RCTs – it is often emphasized that the introduction of a new policy or intervention – grading systems, school vouchers, etc. – should be guided by the best possible evidence, and that RCTs provide just that. An ideal RCT proves that the intervention causally contributed to a certain outcome, in a certain group, in a certain population. If the conditions for an ideal RCT are fulfilled, it follows with deductive necessity that the intervention causally contributed to the outcome for at least some of the units in the study. The very design of the study guarantees that the results are reliable, without any need to make the causal background and support factors explicit. The randomization guarantees that these background and support factors are “equally distributed” across both the “treatment group” and the “control group”, which means that one does not need to know what these causal background and support factors are. One does not even need to know whether they exist at all.

Underlying RCTs is the idea that one can (given a number of simplifying assumptions that we shall not problematize here) describe the underlying causal principle for implementing policies or interventions of various kinds as follows:

Yi ⇐ A1 + A2Y0i + A3BiXi + A4Zi,

where ⇐ denotes a causal influence of the quantities on the right-hand side on the quantity on the left-hand side, Yi is the outcome, Xi is the policy variable, the A’s are constants indicating how large an effect the variables they multiply have on Yi, Y0i is the “base level” of the outcome variable for unit i, Bi stands for all the various factors that contribute to Xi causally producing an effect on Yi, and Zi represents all other factors that, in addition to Xi, contribute additively to Yi.

As is well known, there are many different sources of misjudgement here when we set out to implement a policy on the basis of this causal model. The belief that one can manipulate Xi in order to change the outcome Yi can go wrong because the implementation itself affects the supposedly stable underlying causal structure (here mainly represented by Bi and Zi). Xi interacts with other variables in ways that may mean that the policy implementation de facto gives rise to a new structure in which the previously existing relations simply no longer hold (unchanged).

Normally, those responsible for policy changes are primarily interested in what the change contributes, on average, to the outcome in the studied population. The preconditions for making such an assessment depend critically on the possibility of somehow handling (controlling for) the interaction between the policy variable and the causal background and support factors.

RCTs (ideally) solve this, as we have seen, by dividing the population, via randomization, into a treatment group and a control group, thereby more or less guaranteeing that the distributions of Y0, Bi and Zi are the same in the two groups. If, after an (ideal) implementation of the new policy, there is a difference in Yi between the two groups, there must be a genuine causal cause-and-effect relation for at least one or some of the individuals in the population. The point, then, is that even though we do not know what goes into Bi and Zi, we can still make statements about the policy variable’s impact on the outcome in causal terms.

Let us assume that we have succeeded in carrying out an ideal RCT and thus can be sure that the only causal influence present is that between the policy variable X and its impact on the outcome variable Y. What we have then managed to establish is that in a specific investigated situation, in a certain population, the average treatment effect equals the difference between the outcomes of the treatment and control groups (which means that a treatment may make many people much “worse off” and a few “better off”, yet on average things get “better”). The “treatment effect” W can then be written as

W = A3E[Bi](XT – XK),

where E[] is an expectations operator (an average) and XT and XK are the values of the treatment variable in the treatment and control groups, respectively.

For whom is this relevant? If we implement X here, for us – can we really be sure that we will get the same average effect? No. Since E[Bi] is an average over all the various factors that contribute to Xi causally producing an effect on Yi, we also need to know how these factors are distributed in the new population. There is no a priori reason whatsoever to assume that the distribution of this kind of background and support factors looks the same here, for us, as it did there, for them, in the original RCT population.
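
A minimal simulated sketch (made-up numbers, using the notation above rather than any actual study) of why the exported average effect can mislead: the effect of X works through the support factors Bi, so a target population with a different distribution of Bi gets a very different average effect than the RCT population did.

```python
import numpy as np

rng = np.random.default_rng(1)
a1, a2, a3, a4 = 1.0, 1.0, 2.0, 1.0                 # the constants A1..A4 in the causal model

def average_effect(b_mean, n=100_000):
    """Difference in mean outcomes between treated and controls in a population
    whose interactive support factors Bi are centred on b_mean."""
    y0 = rng.normal(size=n)                          # base levels Y0i
    z = rng.normal(size=n)                           # other additive factors Zi
    b = rng.normal(loc=b_mean, scale=0.2, size=n)    # support factors Bi
    x = rng.integers(0, 2, size=n)                   # randomized policy variable Xi
    y = a1 + a2 * y0 + a3 * b * x + a4 * z           # the causal model Yi <= A1 + A2*Y0i + A3*Bi*Xi + A4*Zi
    return y[x == 1].mean() - y[x == 0].mean()       # estimates W = A3*E[Bi]*(XT - XK)

print(f"RCT population    (E[Bi] around 1.0): estimated effect {average_effect(1.0):.2f}")
print(f"target population (E[Bi] around 0.1): estimated effect {average_effect(0.1):.2f}")
```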

This means that one can question whether RCTs are evidentially relevant when we export the results from the “experimental situation” to a new target population. With other constellations of background and support factors, the average effect of a treatment variable in an RCT probably does not tell us much, and can therefore not, to any great extent, guide us on the question of whether or not to implement a new policy or intervention.

RCTs simply do not guarantee that a proposed policy is generally applicable. Not even if one can give good reasons for regarding the policy variable as structurally stable, since the stability requirement must primarily apply to BiXi and not to Xi.

Advocates of RCTs usually invoke an assumption that the target population must be “similar” to the original RCT population in order to justify the “export licence”. But such an invocation does not take us very far, since it is seldom specified in which dimensions and to what extent the “similarity” must hold.

So even if one has succeeded in carrying out an ideal RCT, this does not mean that one thereby has any reason to believe that the results are externally valid, in the sense that they unconditionally provide a bridge from “it worked in population A” to “it will also work in population B”.

When one carries out an RCT one “loads” the dice, so to speak. But if one is to implement an intervention in a population other than the one in which the RCT was carried out (throwing other dice), this helps us little. We must ask ourselves how and why the policy or intervention works. That it works in one context does not guarantee that it works in another, and questions of how and why can then take us a good way towards understanding why an intervention that works in population A does not work in population B. Not least when it comes to social and economic interventions, causal background and support factors often play a decisive role. Without knowledge of these it is well-nigh impossible to understand why and how an intervention works – and therefore RCTs do not, in reality, take us as far as their advocates would have us believe.

In closed systems or clinical experiments it may perhaps be acceptable to assume that one is operating under close-to-ideal experimental conditions, but in open systems or social contexts, believing that one has close-to-full control over all causal background and support variables is usually nothing but a belief. And when things then turn out not to work, RCTs give us no guidance.

It is like when the dishwasher at home in the kitchen stops working. Normally it works without problems. And we know that millions of other people have dishwashers that also work. But when they do not work, we have to call a repairman or examine the machine ourselves to see if we can find the fault. We try to localize where in the machinery things have got stuck, which mechanisms are failing, and so on. Perhaps we just forgot to switch on the power. Or perhaps the motor has broken down because of poor ventilation and maintenance. In any case, it helps us little to know that the machine works under ideal conditions. Here we have to start thinking for ourselves and not just rely on the fact that the machine usually works when it leaves the production line (which, after all, is designed precisely so that the machines will work). That the manufacturer takes random samples to ensure statistically acceptable margins of error does not help me when my machine has packed up.

When the intervention turns out not to work in the way the RCT has given us reason to believe, its advocates have nothing more to offer than perhaps proposing yet more RCTs. It is then probably more fruitful to think for oneself and reflect on what has gone wrong, rather than trusting that more ideal randomizations will somehow magically solve the problem. Because they will not. No matter how many times you drop the chalk at the blackboard, it will never fall to the floor if there is a table in the way. Then it is better to think for oneself about why and how. Then we can move the table and show that the force of gravity does in fact make the chalk fall to the floor.

RCTs can never be anything but a possible starting point for making relevant assessments of whether a policy or intervention that worked there, for them, will be effective here, for us. RCTs are no trump card. They constitute no “gold standard” for answering causal policy questions.

To be able to give good arguments that what works there, for them, will also work here, for us, we need empirical evidence and knowledge about the causal variables that contribute to generating the sought-after outcome. Otherwise we cannot adequately judge whether the RCT results there, for them, are relevant here, for us.

So – studies of this kind are certainly possible to carry out, but in practice they are hard to bring about and, moreover, often costly. In practice one often has to settle for experiments in which pupils in one group are “matched” against pupils in another group – in such a way that each individual in the first group is paired with an individual in the second group who is as “identical” as possible to the former with respect to all known background variables, so that any differences in outcomes can, as far as possible, be attributed to the independent-school/municipal-school variable.

To this should be added that even where it is possible to carry out these kinds of randomization and matching experiments, their value is problematic, since the study populations are invariably relatively small and the artificial setting means that the possibilities of “exporting” the results (“external validity”) to populations other than the one studied are often fairly limited. Moreover – when it comes more specifically to education – education is a multidimensional and heterogeneous activity that is hard to measure and evaluate with simple operationalizable criteria and instruments, which further undermines the possibility of claiming on firm grounds that one is justified in exporting research results from one context to another (as, for example, Cartwright & Hardie (2012) emphasize with some well-chosen examples from precisely the field of education). The elusive quality aspects of this kind of activity also mean that there are constant incentives for actors to take the route of quality deterioration and all sorts of manipulation in some areas, in order to devote time and resources to reaching targets in other, more easily measured areas.

By far the most common research procedure is – as the op-ed authors point out – to carry out a traditional multiple regression analysis based on so-called ordinary least squares (OLS) or maximum likelihood (ML) estimation on observational data, in which one tries to “hold constant” a number of specified background variables in order, if possible, to interpret the regression coefficients in causal terms. Since we know there is a risk of a “selection problem” – the pupils who attend independent schools often differ from those who attend municipal schools with respect to several important background variables – we cannot simply compare the knowledge levels of the two school forms and draw secure causal conclusions from that. There is an imminent risk that any differences we find, and believe can be explained by the school form, in fact depend wholly or partly on differences in the underlying variables (e.g. residential area, ethnicity, parents’ education, etc.).

If one tries to summarize the regression analyses that have been carried out, the result is that the causal effects of independent schools on pupil achievement that researchers believe they have been able to identify are consistently small (and often not even statistically significant at conventional significance levels). To this should be added the uncertainty as to whether all relevant background variables really could be held constant, which means that the estimates made are often, in practice, burdened with untested assumptions and a non-negligible uncertainty and “bias” that make it difficult to give a reasonably unambiguous assessment of the weight and relevance of the research results. Put simply, one could say that many – perhaps most – of the effect studies of this kind have failed to create sufficiently comparable groups, and that – since this, strictly speaking, is absolutely necessary for the statistical analyses actually carried out to be interpretable in the way they are – the value of the analyses is therefore hard to establish. This also means – and here one must also weigh in the possibility of better alternative model specifications (especially as regards the “group constructions” in the samples used) – that the “sensitivity analyses” researchers in the field routinely carry out do not give any reliable guidance, either, as to how “robust” the regression estimates really are. Furthermore, there is a great risk that the latent, underlying, unspecified variables representing unmeasured characteristics (intelligence, attitude, motivation, etc.) are correlated with the independent variables included in the regression equations, and thereby give rise to a problem of endogeneity.
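
A small simulation (made-up numbers, not any actual data) of the selection problem just described: when an unmeasured characteristic such as motivation both raises test scores and makes attendance at an independent school more likely, a naive comparison of means “finds” a school effect even though the true causal effect is zero.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

motivation = rng.normal(size=n)                    # unmeasured pupil characteristic
# More motivated pupils are more likely to end up in an independent school ...
p_independent = 1 / (1 + np.exp(-motivation))
independent = rng.random(n) < p_independent
# ... and motivation also raises test scores; the true school effect is set to zero.
score = 50 + 5 * motivation + rng.normal(scale=5, size=n)

naive_gap = score[independent].mean() - score[~independent].mean()
print(f"naive independent-vs-municipal score gap: {naive_gap:.2f} points (true effect: 0)")
```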

In a study by Anders Böhlmark and Mikael Lindahl (2012) – Har den växande friskolesektorn varit bra för elevernas utbildningsresultat på kort och lång sikt? [Has the growing independent-school sector been good for pupils’ educational outcomes in the short and long run?] – the authors, drawing mainly on multiple regression analyses of the kind described above, claim to be able to show, among other things, that the independent-school reform has meant – first and foremost owing to “spill-over and competition effects” – that average results over time for all pupils – not only for those attending independent schools – have increased most in the municipalities where the share of pupils attending independent schools has increased a lot, relative to municipalities where that share has increased less or perhaps not at all.

In short – an increase in the share of independent-school pupils in a municipality has, on average, positive effects on pupils’ educational results. The results show, however, that the effect for the individual pupil of attending an independent school rather than a municipal school accounts for only a small part of the total effect. The lion’s share is judged to be a positive externality in the form of increased competition benefiting all pupils. The regression analysis does not, however, make it possible to rule out that there is also a segregation and sorting effect, in the sense that the reform has made the pupil groups at the various schools more “homogeneous”, and that this, in various ways, may have affected pupil performance in a positive direction.

The results have been interpreted, both by the researchers themselves and by others, as evidence that the independent-school reform and the increased competition are good for the Swedish school system as a whole. Earlier Swedish “economics research on schools” has shown similar results.

Two prominent American researchers who have studied voucher schools for several decades write, in a review of what the American research in the field shows (L. Barrow & C. E. Rouse (2008), “School vouchers: Recent findings and unanswered questions”, Economic Perspectives No. 3), that it is not obvious that the “voucher researchers”, with the research methods they use, have adequately been able to take into account or neutralize the significance of the differences that actually exist between pupils in voucher schools and in public schools. Indeed, they even go so far as to argue that most of the small effects found in the research “are not statistically significantly different from zero and may therefore in fact be purely random results.”

Perhaps the foremost American evaluator in the field reaches a similar conclusion in an American evaluation of voucher schools (P. Wolf et al. (2010), “Evaluation of the DC Opportunity Scholarship Program: Final Report”, U.S. Department of Education): “the effects have been small and uncertain.”

And in a recent review of the consequences of the Swedish independent-school experiment, Henry M. Levin – “distinguished economist and director of the National Center for the Study of Privatization in Education” at Teachers College, Columbia University – writes the following:

  • On the criterion of productive efficiency, the research studies show virtually no difference in achievement between public and independent schools for comparable students. Measures of the extent of competition in local areas also show a trivial relation to achievement. The best study measures the potential choices, public and private, within a particular geographical area. For a 10 percent increase in choices, the achievement difference is about one-half of a percentile. Even this result must be understood within the constraint that the achievement measure is not based upon standardized tests, but upon teacher grades. The so-called national examination result that is also used in some studies is actually administered and graded by the teacher with examination copies available to the school principal and teachers well in advance of the “testing”. Another study found no difference in these achievement measures between public and private schools, but an overall achievement effect for the system of a few percentiles. Even this author agreed that the result was trivial …
  • With respect to equity, a comprehensive, national study sponsored by the government found that socio-economic stratification had increased as well as ethnic and immigrant segregation. This also affected the distribution of personnel where the better qualified educators were drawn to schools with students of higher socio-economic status and native students. The international testing also showed rising variance or inequality in test scores among schools. No evidence existed to challenge the rising inequality. Accordingly, I rated the Swedish voucher system as negative on equity.

All in all, the only reasonable conclusion seems to be that the research has not, in general, been able to establish that the introduction of independent schools and increased school competition has led to any major efficiency gains or noticeably higher knowledge levels among pupils at large. The measured effects are small and depend to a large extent on how the models used are specified, how the included variables are measured, and which of them are “held constant”. Nor is it therefore possible to establish that the effects one believes one has been able to detect in terms of improved results in independent schools are due to the independent schools as such. Methodologically, it has proved difficult to construct robust and good quality measures and instruments that allow an adequate handling of all the various factors – observable and unobservable – that affect the competition between the school forms and give rise to any differences in pupil performance between them. The consequence is that the small effects that (in some studies) have been found rarely carry any high degree of evidential “warrant”. Much of the research rests on both untested and, at bottom, untestable model assumptions (e.g. regarding linearity, homogeneity, additivity, absence of interaction relations, independence, background-contextual neutrality, etc.). The results are invariably of a tentative character, and the conclusions that researchers, politicians and opinion-makers can draw from them should therefore be reflected in a “degree of belief” commensurate with their epistemological status.

In other words: the evidence that the competition brought about by the independent-school reform has contributed to raising quality in schools appears to be extremely uncertain and, in terms of effect size, close to non-existent, at least if by quality we mean what pupils learn. This also appears to be in line with what large parts of the international research literature find. To this one may add that the studies that have been made can only speak to what holds on average. Behind a high average there may – as noted earlier – lurk several poorly performing individual schools that are offset by a few high-performing ones.

Why is there a Nobel Prize in Economics?

12 January, 2014 at 09:54 | Posted in Economics | 1 Comment

The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel, usually — incorrectly — referred to as the Nobel Prize in Economics, is an award for outstanding contributions to the field of economics. The Prize in Economics was established and endowed by Sweden’s central bank Sveriges Riksbank in 1968 on the occasion of the bank’s 300th anniversary. The first award was given in 1969. The award is presented in Stockholm at an annual ceremony on December 10.

As of 2012 the prize had been given to 71 individuals. Of all laureates, 56 have been (by birth or by naturalisation) US citizens — that is: 79 %. The University of Chicago has had 26 affiliated laureates — that is 37 %. Only 5 laureates have come from outside North America or Western Europe — that is: 7 %. Only 1 woman has got the prize — that is: 1.4 %. The world is really a small place when it comes to economics …

Looking at whom the prize is given to, says quite a lot about what kind of prize this is. But looking at whom the prize is not given to, says perhaps even more.

The great Romanian-American mathematical statistician and economist Nicholas Georgescu-Roegen (1906-1994) argued in his epochal The Entropy Law and the Economic Process (1971) that the economy was actually a giant thermodynamic system in which entropy increases inexorably and our material basis disappears. If we choose to continue to produce with the techniques we have developed, then our society and earth will disappear faster than if we introduce small-scale production, resource-saving technologies and limited consumption.

Following Georgescu-Roegen, ecological economists have argued that industrial society inevitably leads to increased environmental pollution, energy crisis and an unsustainable growth.

Georgescu-Roegen and the ecological economists have turned against neoclassical theory’s obsession with purely monetary factors. This monetary reductionism makes it easy to ignore other factors that have a bearing on human interaction with the environment.

I wonder if this isn’t the crux of the matter. To assert such a thing is tantamount to blaspheming in the church of the neoclassical establishment, and it nullifies any chance of getting the prestigious prize.

Twenty years ago, after a radio debate with one of the members of the prize committee, I asked why Georgescu-Roegen hadn’t got the prize. The answer was – mirabile dictu – that he “never founded a school.” I was surprised, to say the least, and wondered if he had possibly heard of the environmental movement. Well, he had – but it was “the wrong kind of school”! Can it be stated much more clearly than this what it’s all about? If you haven’t worked within the neoclassical paradigm, then you are more or less excluded a priori from being eligible for the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel!

Walrasian theory — little better than nonsense

11 January, 2014 at 18:04 | Posted in Economics | Comments Off on Walrasian theory — little better than nonsense

You enquire whether or not Walras was supposing that exchanges actually take place at the prices originally proposed when the prices are not equilibrium prices. The footnote which you quote convinces me that he assuredly supposed that they did not take place except at the equilibrium prices … All the same, I shall hope to convince you some day that Walras’ theory and all the others along those lines are little better than nonsense!

Letter from J. M. Keynes to N. Georgescu-Roegen, December 9, 1934

Why DSGE is such a spectacularly useless waste of time

10 January, 2014 at 18:54 | Posted in Economics | 5 Comments

Noah Smith has a nice piece up today on what he considers the most “damning critique of DSGE:”

If DSGE models work, why don’t people use them to get rich?

When I studied macroeconomics in grad school, I was told something along these lines:

“DSGE models are useful for policy advice because they (hopefully) pass the Lucas Critique. If all you want to do is forecast the economy, you don’t need to pass the Lucas Critique, so you don’t need a DSGE model.”

This is usually what I hear academic macroeconomists say when asked to explain the fact that essentially no one in the private sector uses DSGE models. Private-sector people can’t set economic policy, the argument goes, so they don’t need Lucas Critique-robust models.

The problem is, this argument is wrong. If you have a model that both A) satisfies the Lucas Critique and B) is a decent model of the economy, you can make huge amounts of money. This is because although any old spreadsheet can be used to make unconditional forecasts of the economy, you need Lucas-robust models to make good policy-conditional forecasts …

So now let’s get to the point of this post. As far as I’m aware, private-sector firms don’t hire anyone to make DSGE models, implement DSGE models, or even scan the DSGE literature. There are a lot of firms that make macro bets in the finance industry – investment banks, macro hedge funds, bond funds. To my knowledge, none of these firms spends one thin dime on DSGE. I’ve called and emailed everyone I could think of who knows what financial-industry macroeconomists do, and they’re all unanimous – they’ve never heard of anyone in finance using a DSGE model …

So maybe they’re just using the wrong DSGE models? Maybe they’re using Williamson (2012) instead of Williamson (2013) … But if finance-industry people can’t know which DSGE model to use, how can policymakers or policy advisors?

In other words, DSGE models … have failed a key test of usefulness. Their main selling point – satisfying the Lucas critique – should make them very valuable to industry. But industry shuns them.

Many economic technologies pass the industry test. Companies pay people lots of money to use auction theory. Companies pay people lots of money to use vector autoregressions. Companies pay people lots of money to use matching models. But companies do not, as far as I can tell, pay people lots of money to use DSGE to predict the effects of government policy …

As I see it, this is currently the most damning critique of the whole DSGE paradigm.

Although I think the unsellability of DSGE — private-sector firms do not pay money to use DSGE models — is a strong argument against DSGE, it is not a magic bullet, nor is it the most damning critique of it.


In the basic DSGE models the labour market is always cleared – responding to a changing interest rate, expected lifetime income, or real wages, the representative agent maximizes the utility function by varying her labour supply, money holding and consumption over time. Most importantly – if the real wage somehow deviates from its “equilibrium value,” the representative agent adjusts her labour supply, so that when the real wage is higher than its “equilibrium value,” labour supply is increased, and when the real wage is below its “equilibrium value,” labour supply is decreased.

In this model world, unemployment is always an optimal response to changes in labour market conditions. Hence, unemployment is totally voluntary. To be unemployed is something one optimally chooses to be.
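
A minimal sketch of the mechanism just described — assuming, purely for illustration, quasi-linear preferences U = c − χn^(1+φ)/(1+φ) and a budget constraint c = w·n — showing how the representative agent’s hours respond smoothly to the real wage, so that any level of (non-)employment is by construction an optimal, voluntary choice:

```python
# Representative agent with quasi-linear utility U = c - chi * n**(1 + phi) / (1 + phi)
# and budget constraint c = w * n. The first-order condition is w = chi * n**phi.
chi, phi = 1.0, 1.0              # hypothetical preference parameters

def optimal_hours(w):
    """Hours of work solving the first-order condition w = chi * n**phi."""
    return (w / chi) ** (1.0 / phi)

for w in [0.8, 1.0, 1.2]:        # real wage below / at / above its "equilibrium value"
    print(f"real wage {w:.1f} -> hours worked {optimal_hours(w):.2f}")
```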

Although this picture of unemployment as a kind of self-chosen optimality strikes most people as utterly ridiculous, there are also, unfortunately, a lot of neoclassical economists out there who still think that price and wage rigidities are the prime movers behind unemployment. DSGE models basically explain variations in employment (and a fortiori output) by assuming that nominal wages are more flexible than prices – disregarding the lack of empirical evidence for this rather counterintuitive assumption.

Keynes held a completely different view. Since unions/workers, contrary to classical assumptions, make wage-bargains in nominal terms, they will accept lower real wages caused by higher prices, but resist lower real wages caused by lower nominal wages. However, Keynes held it incorrect to attribute “cyclical” unemployment to this diversified agent behaviour. During the depression money wages fell significantly and unemployment still grew. Thus, even when nominal wages are lowered, they do not generally lower unemployment.

In any specific labour market, lower wages could, of course, raise the demand for labour. But a general reduction in money wages would leave real wages more or less unchanged. The reasoning of the classical economists was, according to Keynes, a flagrant example of the fallacy of composition. Assuming that since unions/workers in a specific labour market could negotiate real wage reductions via lowering nominal wages, unions/workers in general could do the same, the classics confused micro with macro.

Lowering nominal wages could not – according to Keynes – clear the labour market. Lowering wages – and possibly prices – could, perhaps, lower interest rates and increase investment. But it would be much easier to achieve that effect by increasing the money supply. In any case, wage reductions were not seen as a general substitute for an expansionary monetary or fiscal policy. And even if lowering wages has some potentially positive impacts, there are also weightier negative ones – deteriorating management-union relations, expectations of continued wage cuts causing investment to be postponed, debt deflation et cetera.

The classical proposition that lowering wages would lower unemployment and ultimately take economies out of depressions was ill-founded and basically wrong. To Keynes, flexible wages would only make things worse by leading to erratic price fluctuations. The basic explanation for unemployment is insufficient aggregate demand, and that is mostly determined outside the labour market.

People calling themselves “New Keynesians” ought to be rather embarrassed by the fact that the kind of DSGE models they use cannot incorporate such a basic fact of reality as involuntary unemployment. Of course, given that these models are built around a representative agent, this should come as no surprise. The only kind of unemployment that can occur is voluntary, since it is only adjustments of hours of work that these optimizing agents make in order to maximize their utility.

To me, this — the inability to explain involuntary unemployment — is the most damning critique of DSGE.

Michael Sandel on the commodification of society

10 January, 2014 at 16:53 | Posted in Politics & Society | 5 Comments

 

The Scandinavian Housing Bubble

10 January, 2014 at 00:01 | Posted in Economics | 10 Comments

Where do housing bubbles come from? There are of course many different explanations, but one of the more fundamental mechanisms at work is easy to explain with the following arbitrage argument:

Assume you have a sum of money (A) that you want to invest. You can put the money in a bank and receive a yearly interest (r) on it. Disregarding — for the sake of simplicity — risks, asset depreciation and transaction costs that may possibly be present, rA should equal the income you would alternatively receive if you instead buy a house with a down-payment (A) and let it out for a year at a rent (h) plus the change in the house price (dhp) — i.e.

rA = h + dhp

Dividing both sides of the equation by the house price (hp) and solving for hp, we get

hp = h/[r(A/hp) – (dhp/hp)]

From this equation it’s clear that if you expect house prices (hp) to increase, house prices will increase. It’s this kind of self-generating cumulative process à la Wicksell-Myrdal that is the core of the housing bubble. Unlike in the usual commodities markets, where demand curves usually slope downwards, on asset markets they often slope upwards, and therefore give rise to this kind of instability. And the greater the leverage (the lower A/hp), the greater the increase in prices.
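To see just how sensitive the implied price is to expectations and leverage, here is a minimal numerical sketch of the arbitrage condition above. The rent, interest rate, down-payment ratios and expected appreciation rates are illustrative numbers of my own choosing, not estimates for any actual market:

```python
# A minimal numerical sketch of the arbitrage condition above,
# hp = h / (r * a - g), where a = A/hp (down-payment ratio) and
# g = dhp/hp (expected house-price appreciation).
# All parameter values are purely illustrative.
def implied_house_price(h, r, a, g):
    denom = r * a - g
    if denom <= 0:
        return float("inf")   # expected appreciation swamps the financing cost
    return h / denom

h = 12_000          # yearly rent
r = 0.05            # bank interest rate

for a in (0.5, 0.2):              # lower a = higher leverage
    for g in (0.00, 0.01, 0.02):  # expected price appreciation
        hp = implied_house_price(h, r, a, g)
        print(f"A/hp = {a:.1f}, expected dhp/hp = {g:.0%} -> hp = {hp:,.0f}")

# Higher expected appreciation and higher leverage both push the implied
# price up sharply -- the self-reinforcing mechanism behind the bubble.
```

Raising expected appreciation from zero to a couple of per cent, or lowering the down-payment ratio, multiplies the implied price several times over; and once expected appreciation approaches the financing cost, the condition has no finite solution at all, which is exactly the kind of instability described above.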

In case you think this is just barren theoretical speculation without bearing on reality, I suggest you take a look at this graph:
 
[Graph: The Scandinavian Housing Bubble]
 
The Scandinavian housing markets are living on borrowed time. It’s really high time to take away the punch bowl.
