The crisis of teacher education

28 February, 2013 at 20:09 | Posted in Education & School | 1 Comment

Yours truly has an article today – Kunskap och lärarutbildningens kris – in the online journal Skola och samhälle. Read it!


Phelps smacks down Lucas' rational expectations putsch

28 February, 2013 at 14:49 | Posted in Economics | 2 Comments

Question: In a new volume with Roman Frydman, “Rethinking Expectations: The Way Forward for Macroeconomics,” you say the vast majority of macroeconomic models over the last four decades derailed your “microfoundations” approach. Can you explain what that is and how it differs from the approach that became widely accepted by the profession?

Answer: In the expectations-based framework that I put forward around 1968, we didn't pretend we had a correct and complete understanding of how firms or employees formed expectations about prices or wages elsewhere. We turned to what we thought was a plausible and convenient hypothesis. For example, if the prices of a company's competitors were last reported to be higher than in the past, it might be supposed that the company will expect their prices to be higher this time, too, but not that much. This is called "adaptive expectations": you adapt your expectations to new observations but don't throw out the past. If inflation went up last month, it might be supposed that inflation will again be high, but not that high.

Q: So how did adaptive expectations morph into rational expectations?

A: The “scientists” from Chicago and MIT came along to say, we have a well-established theory of how prices and wages work. Before, we used a rule of thumb to explain or predict expectations: Such a rule is picked out of the air. They said, let’s be scientific. In their mind, the scientific way is to suppose price and wage setters form their expectations with every bit as much understanding of markets as the expert economist seeking to model, or predict, their behavior. The rational expectations approach is to suppose that the people in the market form their expectations in the very same way that the economist studying their behavior forms her expectations: on the basis of her theoretical model.

Q: And what’s the consequence of this putsch?

A: Craziness for one thing. You’re not supposed to ask what to do if one economist has one model of the market and another economist a different model. The people in the market cannot follow both economists at the same time. One, if not both, of the economists must be wrong. Another thing: It’s an important feature of capitalist economies that they permit speculation by people who have idiosyncratic views and an important feature of a modern capitalist economy that innovators conceive their new products and methods with little knowledge of whether the new things will be adopted — thus innovations. Speculators and innovators have to roll their own expectations. They can’t ring up the local professor to learn how. The professors should be ringing up the speculators and aspiring innovators. In short, expectations are causal variables in the sense that they are the drivers. They are not effects to be explained in terms of some trumped-up causes.

Q: So rather than live with variability, write a formula in stone!

A: What led to rational expectations was a fear of the uncertainty and, worse, the lack of understanding of how modern economies work. The rational expectationists wanted to bottle all that up and replace it with deterministic models of prices, wages, even share prices, so that the math looked like the math in rocket science. The rocket’s course can be modeled while a living modern economy’s course cannot be modeled to such an extreme. It yields up a formula for expectations that looks scientific because it has all our incomplete and not altogether correct understanding of how economies work inside of it, but it cannot have the incorrect and incomplete understanding of economies that the speculators and would-be innovators have.

Q: One of the issues I have with rational expectations is the assumption that we have perfect information, that there is no cost in acquiring that information. Yet the economics profession, including Federal Reserve policy makers, appears to have been hijacked by Robert Lucas.

A: You're right that people are grossly uninformed, which is a far cry from what the rational expectations models suppose. Why are they misinformed? I think they don't pay much attention to the vast information out there because they wouldn't know what to do with it if they had it. The fundamental fallacy on which rational expectations models are based is that everyone knows how to process the information they receive according to the one and only right theory of the world. The problem is that we don't have a "right" model that could be certified as such by the National Academy of Sciences. And as long as we operate in a modern economy, there can never be such a model.

Bloomberg

 

Phelps’ critique is much in line with the one yours truly put forward in his Real-World Economics Review article Rational expectations – a fallacious foundation for macroeconomics in a non-ergodic world (2012).
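For readers who want to see the mechanics, the adaptive-expectations hypothesis Phelps describes boils down to a one-line recursion: this period's expectation is last period's expectation, adjusted part of the way towards the latest observation. A minimal Python sketch (the inflation numbers and the adjustment parameter lam are illustrative assumptions of mine, not anything from the interview):

```python
# Adaptive expectations: revise last period's expectation part of the way
# towards the latest observation, so the past is never thrown out entirely.
#   E_t = E_{t-1} + lam * (x_t - E_{t-1}),  with 0 < lam <= 1

def adaptive_expectations(observations, lam=0.5, initial=0.0):
    """Return the sequence of adaptively formed expectations."""
    expectation = initial
    path = []
    for x in observations:
        expectation += lam * (x - expectation)  # partial adjustment to news
        path.append(expectation)
    return path

# Illustrative inflation readings: a jump to 5% is only gradually believed.
inflation = [2.0, 2.0, 5.0, 5.0, 5.0]
print(adaptive_expectations(inflation, lam=0.5, initial=2.0))
# [2.0, 2.0, 3.5, 4.25, 4.625] – expectations chase, but lag, the data
```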

Inference to the Best Explanation

28 February, 2013 at 09:18 | Posted in Theory of Science & Methodology | Leave a comment

 

If you only have time to read one book on IBE, this is the one:
Peter Lipton, Inference to the Best Explanation, 2nd edition, Routledge, 2004.

Too big to fail?

28 February, 2013 at 08:52 | Posted in Economics | Leave a comment

 

Ergodicity for Dummies

25 February, 2013 at 18:03 | Posted in Statistics & Econometrics, Theory of Science & Methodology | 3 Comments

Why are election polls often inaccurate? Why is racism wrong? Why are your assumptions often mistaken? The answers to all these questions and to many others have a lot to do with the non-ergodicity of human ensembles. Many scientists agree that ergodicity is one of the most important concepts in statistics. So, what is it?

Suppose you want to determine which parks in a city are the most visited. One idea is to take a momentary snapshot: to see how many people are in park A at this moment, how many are in park B, and so on. Another idea is to look at one individual (or a few of them) and follow him for a certain period of time, e.g. a year. You then observe how often the individual goes to park A, how often he goes to park B, and so on.

Thus, you obtain two different results: one statistical analysis over the entire ensemble of people at a certain moment in time, and one statistical analysis of one person over a certain period of time. The first may not be representative of a longer period of time, while the second may not be representative of all the people. The idea is that an ensemble is ergodic if the two types of statistics give the same result. Many ensembles, like human populations, are not ergodic.

The importance of ergodicity becomes manifest when you think about how we all infer various things, how we draw conclusions about something while having information about something else. For example, someone goes to a restaurant once and likes the fish, and the next time he goes to the same restaurant he orders the chicken, confident that the chicken will be good. Why is he confident? Or one observes that a newspaper has printed some inaccurate information at one point in time and infers that the newspaper is going to publish inaccurate information in the future. Why are these inferences OK, while others, such as "more crimes are committed by black persons than by white persons, therefore each individual black person is not to be trusted", are not?

The answer is that the ensemble of articles published in a newspaper is more or less ergodic, while the ensemble of black people is not at all ergodic. If one counts how many mistakes appear in an entire newspaper in one issue, and then counts how many mistakes one news editor makes over time, one finds the two results almost identical (not exactly, but approximately equal). However, if one takes the number of crimes committed by black people on a certain day divided by the total number of black people, and then follows one randomly picked black individual over his life, one would not find that this individual commits crimes at, say, the same monthly rate as the crime rate determined over the entire ensemble. Thus, one cannot use ensemble statistics to properly infer what a certain individual is or is not likely to do.

Vlad Tarko
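Tarko's park example is easy to simulate. In the minimal sketch below (the distributions and parameters are illustrative assumptions of mine), an i.i.d. ensemble is ergodic – the snapshot average across many people and the time average for one person agree – while giving each person a permanent idiosyncratic level breaks ergodicity:

```python
import random

random.seed(1)
N, T = 10_000, 10_000  # ensemble size, time horizon

# Ergodic case: every observation is an independent draw from the same
# distribution, whether across people or across time.
snapshot = [random.gauss(0, 1) for _ in range(N)]   # N people, one moment
one_life = [random.gauss(0, 1) for _ in range(T)]   # one person, T moments
print(sum(snapshot) / N, sum(one_life) / T)         # both close to 0

# Non-ergodic case: each person carries a permanent idiosyncratic level a_i.
levels = [random.gauss(0, 1) for _ in range(N)]
snapshot = [a + random.gauss(0, 1) for a in levels]  # mixes everyone's level
one_life = [levels[0] + random.gauss(0, 1) for _ in range(T)]
print(sum(snapshot) / N, sum(one_life) / T)
# The ensemble mean is still near 0, but the time average converges to
# levels[0] – the two kinds of statistics no longer agree.
```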

On Bayesianism, uncertainty and consistency in “large worlds”

25 February, 2013 at 14:36 | Posted in Theory of Science & Methodology | Leave a comment

The view that Bayesian decision theory is only genuinely valid in a small world was asserted very firmly by Leonard Savage when laying down the principles of the theory in his path-breaking Foundations of Statistics. He makes the distinction between small and large worlds in a folksy way by quoting the proverbs "Look before you leap" and "Cross that bridge when you come to it". You are in a small world if it is feasible always to look before you leap. You are in a large world if there are some bridges that you cannot cross before you come to them.

As Savage comments, when proverbs conflict, it is proverbially true that there is some truth in both – that they apply in different contexts. He then argues that some decision situations are best modeled in terms of a small world, but others are not. He explicitly rejects the idea that all worlds can be treated as small as both "ridiculous" and "preposterous" … Frank Knight draws a similar distinction between making decisions under risk or under uncertainty …

Bayesianism is understood [here] to be the philosophical principle that Bayesian methods are always appropriate in all decision problems, regardless of whether the relevant set of states in the relevant world is large or small. For example, the world in which financial economics is set is obviously large in Savage’s sense, but the suggestion that there might be something questionable about the standard use of Bayesian updating in financial models is commonly greeted with incredulity or laughter.

Someone who acts as if Bayesianism were correct will be said to be a Bayesianite. It is important to distinguish a Bayesian like myself—someone convinced by Savage’s arguments that Bayesian decision theory makes sense in small worlds—from a Bayesianite. In particular, a Bayesian need not join the more extreme Bayesianites in proceeding as though:

• All worlds are small.
• Rationality endows agents with prior probabilities.
• Rational learning consists simply in using Bayes' rule to convert a set of prior probabilities into posterior probabilities after registering some new data.

Bayesianites are often understandably reluctant to make an explicit commitment to these principles when they are stated so baldly, because it then becomes evident that they are implicitly claiming that David Hume was wrong to argue that the principle of scientific induction cannot be justified by rational argument …

Bayesianites believe that the subjective probabilities of Bayesian decision theory can be reinterpreted as logical probabilities without any hassle. They therefore hold that Bayes' rule is the solution to the problem of scientific induction. No support for such a view is to be found in Savage's theory – nor in the earlier theories of Ramsey, de Finetti, or von Neumann and Morgenstern. Savage's theory is entirely and exclusively a consistency theory. It says nothing about how decision-makers come to have the beliefs ascribed to them; it asserts only that, if their decisions are consistent (in a sense made precise by a list of axioms), then they act as though maximizing expected utility relative to a subjective probability distribution …

A reasonable decision-maker will presumably wish to avoid inconsistencies. A Bayesianite therefore assumes that it is enough to assign prior beliefs to a decision-maker, and then to forget the problem of where beliefs come from. Consistency then forces any new data that may appear to be incorporated into the system via Bayesian updating. That is, a posterior distribution is obtained from the prior distribution using Bayes' rule.

The naiveté of this approach doesn’t consist in using Bayes’ rule, whose validity as a piece of algebra isn’t in question. It lies in supposing that the problem of where the priors came from can be quietly shelved.

Savage did argue that his descriptive theory of rational decision-making could be of practical assistance in helping decision-makers form their beliefs, but he didn't argue that the decision-maker's problem was simply that of selecting a prior from a limited stock of standard distributions with little or nothing in the way of soul-searching. His position was rather that one comes to a decision problem with a whole set of subjective beliefs derived from one's previous experience that may or may not be consistent …

But why should we wish to adjust our gut-feelings using Savage’s methodology? In particular, why should a rational decision-maker wish to be consistent? After all, scientists aren’t consistent, on the grounds that it isn’t clever to be consistently wrong. When surprised by data that shows current theories to be in error, they seek new theories that are inconsistent with the old theories. Consistency, from this point of view, is only a virtue if the possibility of being surprised can somehow be eliminated. This is the reason for distinguishing between large and small worlds. Only in the latter is consistency an unqualified virtue.

Ken Binmore
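The "piece of algebra" whose validity Binmore does not question is easily written down. A minimal sketch of Bayesian updating over two rival models (the prior and the likelihoods are illustrative assumptions of mine – and, as Binmore stresses, the rule itself is silent on where the prior comes from):

```python
# Bayes' rule over a discrete set of hypotheses:
#   P(h | data) = P(data | h) * P(h) / sum over h' of P(data | h') * P(h')

def bayes_update(prior, likelihood):
    """prior: {hypothesis: P(h)}; likelihood: {hypothesis: P(data | h)}."""
    unnormalised = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalised.values())
    return {h: p / total for h, p in unnormalised.items()}

# Two rival 'models of the world' for a coin: fair, or biased towards heads.
# Where the 50-50 prior comes from is exactly what the rule cannot tell us.
prior = {"fair": 0.5, "biased": 0.5}
likelihood_heads = {"fair": 0.5, "biased": 0.8}

posterior = bayes_update(prior, likelihood_heads)  # after observing one head
print(posterior)  # {'fair': 0.385..., 'biased': 0.615...}
```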

Lecturing Wall Street Bankers

20 February, 2013 at 18:20 | Posted in Varia | 1 Comment

 

Ich glaub' das zu träumen, die Mauer im Rücken war kalt

20 February, 2013 at 13:51 | Posted in Varia | Leave a comment

 

Sporadic blogging from Berlin

16 February, 2013 at 16:39 | Posted in Varia | 2 Comments

Berlin has lately become a second hometown to me. Time for a new sojourn there. Regular blogging will be resumed late next week.

Winter is not my season. I’m already longing for when the view from my library once again looks like this:


Frogs' plops and ducks' splashes in Svenska Dagbladet

16 February, 2013 at 10:57 | Posted in Economics, Politics & Society | 3 Comments

Svenska Dagbladet is a good newspaper – if only one were spared the buffoons who write on its editorial page.

Following the much-discussed SVT documentary Lönesänkarna (which I wrote a post about yesterday), one of the paper's editorial writers, Ivar Arpi, writes today that in "the 1970s the capital share, compared to the wage share, was almost zero" and that since the 1970s "the vast majority have become better off – even those with the lowest incomes."

And this kind of frogs' plops and ducks' splashes we are expected to read in the year 2013. Good grief!

Talk is cheap, but it is still better to know what one is talking about. Because this is what things actually look like:

Source: SWIID 3.0

(The lower the Gini coefficient, the more equal the income distribution. Since 1981, income inequality in Sweden has trended sharply upwards.)
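(For the curious: the Gini coefficient itself is straightforward to compute from the mean absolute difference of incomes. A minimal sketch, with made-up income vectors:)

```python
def gini(incomes):
    """Gini coefficient via the mean absolute difference:
    G = sum_ij |x_i - x_j| / (2 * n^2 * mean(x))."""
    n = len(incomes)
    mean = sum(incomes) / n
    total_diff = sum(abs(x - y) for x in incomes for y in incomes)
    return total_diff / (2 * n * n * mean)

print(gini([1, 1, 1, 1]))    # 0.0  -- everyone earns the same
print(gini([0, 0, 0, 100]))  # 0.75 -- one person takes everything
                             #         (the maximum for n = 4)
```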

Source: IFN, Roine and Waldenström 2008
Graphics: Idégrafik

In the economic-policy debate one often hears advocates of market fundamentalism, like Svenska Dagbladet's editorial writers, say that inequality is not a problem. Two main reasons are usually given.

Pro primo – the horse-shit theorem – according to which tax cuts and increased wealth for the rich will eventually trickle down to the poor anyway. Feed the horse, and the birds can eat their fill from the droppings.

Pro secundo – that as long as everyone has the same chance of getting rich, inequality is unproblematic.

Extensive research already during the Thatcher-Reagan era showed that the horse-shit theorem (the "trickle-down effect") belongs to the world of myth.

And a few years ago Alan Krueger – professor of economics at Princeton University – showed with his Gatsby curve that the second attempted defence of inequality also belongs to the world of fairy tales:

[The vertical axis shows how much a one per cent increase in your father's income affects your expected income (the higher the number, the lower the expected social mobility), and the horizontal axis shows the Gini coefficient, which measures inequality (the higher the number, the greater the inequality).]

It could hardly be clearer that equal countries are also the ones with the greatest social mobility – and that it is therefore high time to do something about the widening income and wealth gaps. This holds for Sweden too, where newly revised data from Statistics Sweden (SCB) show how disposable income per consumption unit (excluding capital gains, by decile, all persons 1995-2010, means in thousands of SEK per consumption unit at 2010 prices) has developed in recent years:

Source: SCB and own calculations

And things look even worse if one considers the development of wealth.

Inequality is increasing in Sweden. To a large extent this is the result of political decisions – such as the cuts in unemployment and sickness insurance. But it is also the expression of an ideological shift that over thirty years has transformed Sweden from a pioneer of equality into one of the countries in the world where income and wealth gaps are growing the fastest.

For those who, like Ivar Arpi, do not believe that things are that bad in Sweden when it comes to the distribution of income and wealth, I suggest taking a closer look at the diagram below, showing how average incomes (at 2009 prices) for the top 0.1% and the bottom 90% have developed in Sweden since 1980:


Source: The World Top Incomes Database

It is high time to put a stop to the new class society. It is high time to make sure that the gaps in Swedish society stop growing. It is high time for the neoliberal regime shift in Sweden to come to an end!

