The crisis of teacher education

28 February, 2013 at 20:09 | Posted in Education & School | 1 Comment

Yours truly has an article out today – Kunskap och lärarutbildningens kris – in the online journal Skola och samhälle. Read it!


Phelps smacks down on Lucas’ rational expectations putsch

28 February, 2013 at 14:49 | Posted in Economics | 2 Comments

Question: In a new volume with Roman Frydman, “Rethinking Expectations: The Way Forward for Macroeconomics,” you say the vast majority of macroeconomic models over the last four decades derailed your “microfoundations” approach. Can you explain what that is and how it differs from the approach that became widely accepted by the profession?

Answer: In the expectations-based framework that I put forward around 1968, we didn’t pretend we had a correct and complete understanding of how firms or employees formed expectations about prices or wages elsewhere. We turned to what we thought was a plausible and convenient hypothesis. For example, if the prices of a company’s competitors were last reported to be higher than in the past, it might be supposed that the company will expect their prices to be higher this time, too, but not that much. This is called “adaptive expectations”: You adapt your expectations to new observations but don’t throw out the past. If inflation went up last month, it might be supposed that inflation will again be high but not that high.

Q: So how did adaptive expectations morph into rational expectations?

A: The “scientists” from Chicago and MIT came along to say, we have a well-established theory of how prices and wages work. Before, we used a rule of thumb to explain or predict expectations: Such a rule is picked out of the air. They said, let’s be scientific. In their mind, the scientific way is to suppose price and wage setters form their expectations with every bit as much understanding of markets as the expert economist seeking to model, or predict, their behavior. The rational expectations approach is to suppose that the people in the market form their expectations in the very same way that the economist studying their behavior forms her expectations: on the basis of her theoretical model.

Q: And what’s the consequence of this putsch?

A: Craziness for one thing. You’re not supposed to ask what to do if one economist has one model of the market and another economist a different model. The people in the market cannot follow both economists at the same time. One, if not both, of the economists must be wrong. Another thing: It’s an important feature of capitalist economies that they permit speculation by people who have idiosyncratic views and an important feature of a modern capitalist economy that innovators conceive their new products and methods with little knowledge of whether the new things will be adopted — thus innovations. Speculators and innovators have to roll their own expectations. They can’t ring up the local professor to learn how. The professors should be ringing up the speculators and aspiring innovators. In short, expectations are causal variables in the sense that they are the drivers. They are not effects to be explained in terms of some trumped-up causes.

Q: So rather than live with variability, write a formula in stone!

A: What led to rational expectations was a fear of the uncertainty and, worse, the lack of understanding of how modern economies work. The rational expectationists wanted to bottle all that up and replace it with deterministic models of prices, wages, even share prices, so that the math looked like the math in rocket science. The rocket’s course can be modeled while a living modern economy’s course cannot be modeled to such an extreme. It yields up a formula for expectations that looks scientific because it has all our incomplete and not altogether correct understanding of how economies work inside of it, but it cannot have the incorrect and incomplete understanding of economies that the speculators and would-be innovators have.

Q: One of the issues I have with rational expectations is the assumption that we have perfect information, that there is no cost in acquiring that information. Yet the economics profession, including Federal Reserve policy makers, appears to have been hijacked by Robert Lucas.

A: You’re right that people are grossly uninformed, which is a far cry from what the rational expectations models suppose. Why are they misinformed? I think they don’t pay much attention to the vast information out there because they wouldn’t know what to do with it if they had it. The fundamental fallacy on which rational expectations models are based is that everyone knows how to process the information they receive according to the one and only right theory of the world. The problem is that we don’t have a “right” model that could be certified as such by the National Academy of Sciences. And as long as we operate in a modern economy, there can never be such a model.

Bloomberg

 

Phelps’ critique is much in line with the one yours truly put forward in his Real-World Economics Review article Rational expectations – a fallacious foundation for macroeconomics in a non-ergodic world (2012).
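As a concrete illustration of the “adaptive expectations” rule Phelps describes at the start of the interview, here is a minimal Python sketch. The partial-adjustment weight and the inflation figures are purely hypothetical, and the function name is mine, not anything from the interview or the literature:

```python
# A minimal sketch of an adaptive-expectations rule: revise last period's
# forecast part of the way towards the newest observation, without throwing
# out the past. The weight lam and the inflation path are illustrative
# assumptions, not estimates.

def adaptive_forecast(observations, lam=0.3, initial_forecast=2.0):
    """Return the sequence of one-step-ahead forecasts."""
    forecast = initial_forecast
    forecasts = []
    for observed in observations:
        forecasts.append(forecast)
        # partial adjustment towards the latest observation
        forecast = forecast + lam * (observed - forecast)
    return forecasts

inflation = [2.0, 2.1, 2.5, 3.0, 2.8, 2.6]   # hypothetical data
print(adaptive_forecast(inflation))
```

Under rational expectations the forecast is not built up from past observations like this at all; the agent is simply assumed to hold the model’s own conditional expectation, which is precisely the move Phelps objects to.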

Inference to the Best Explanation

28 February, 2013 at 09:18 | Posted in Theory of Science & Methodology | Leave a comment

 

If you only have time to read one book on IBE, this is the one:
Peter Lipton, Inference to the Best Explanation, 2nd edition, Routledge, 2004.

Too big to fail?

28 February, 2013 at 08:52 | Posted in Economics | Leave a comment

 

Bayesian decision theory – a critique

26 February, 2013 at 18:35 | Posted in Economics | Leave a comment

A farmer is offered a choice between, on the one hand, getting a horse if it is raining tomorrow and a cow if it is not raining and, on the other hand, a cow if it is raining and a horse if it is not. He prefers getting a horse to getting a cow; this is a ‘pure preference’. But which of the offered alternatives does he prefer? Assume that he professes to be indifferent as between them. How shall we then understand his attitude?

To this question there is an answer, first proposed by F. P. Ramsey, which has later come to play a great role in so-called Bayesian decision theory …
 
Ramsey thought that an attitude of indifference here means that the person rates the two events, ‘rain’ and ‘not rain’, as equally probable. Accepting this, one can then proceed as follows:

Assume that our farmer is next presented with this option: On the one hand a horse if it is raining and a sheep if it is not raining and, on the other hand, a cow if it is raining and a hog if it is not raining. Again he says he is indifferent. This, on Ramsey’s view, means that the value to him of a cow is as much less than the value of the horse as the value of a sheep is less than that of a hog. With this the way is open to a metrization of value and the introduction of utility functions. This done, one can use attitudes of indifference in other, more complex, conditional options for defining arbitrary degrees of (subjective) probability. The product of the value of a good and the probability of its materialization is called expected utility. Attitudes of preference in options aim at maximizing this quantity.

Ramsey’s method is elegant and ingenious. Nevertheless, it seems to rest on a mistake. It ignores the distinction between two senses of ‘indifference’.

The farmer who, when presented with the first of the above two options, professes an attitude of indifference can do so for one of two reasons. Either he ‘simply has no idea’ about the chances of rainfall for tomorrow and therefore cannot make up his mind about which alternative is more to his advantage. This does not mean that he thinks rain and not-rain equally likely; he simply suspends judgement. Or, he considers them equally likely and therefore judges the two alternatives to be equally advantageous. He could, for example, support his attitude with the argument that if he repeatedly opted for one of the alternatives, no matter which one, on average half the number of times he would ‘probably’ get a horse, which is to his advantage, and half the number of times a cow, which is to his disadvantage. So, therefore, he is indifferent as between the alternatives. It is, in other words, not his judgement of indifference which gives meaning to the probabilities for him; it is rather his prior estimate of the probabilities which determines his attitude of indifference.

Georg Henrik von Wright
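Spelling out von Wright’s summary of Ramsey’s construction in symbols may help. In the sketch below, u(·) is a hypothetical utility function introduced only for illustration; the propositions themselves are the ones stated in the quote:

```latex
% Indifference in the first option (horse if rain / cow if not, vs. the
% reverse) is read by Ramsey as equal subjective probabilities:
\[
  p(\text{rain}) = p(\text{not rain}) = \tfrac{1}{2}.
\]
% Given that, indifference in the second option (horse/sheep vs. cow/hog) means
\[
  \tfrac{1}{2}\,u(\text{horse}) + \tfrac{1}{2}\,u(\text{sheep})
  = \tfrac{1}{2}\,u(\text{cow}) + \tfrac{1}{2}\,u(\text{hog})
  \quad\Longrightarrow\quad
  u(\text{horse}) - u(\text{cow}) = u(\text{hog}) - u(\text{sheep}),
\]
% i.e. the cow falls short of the horse by exactly as much as the sheep
% falls short of the hog. Expected utility is then the probability-weighted value:
\[
  E[u] = \sum_i p_i\, u(x_i).
\]
```

Von Wright’s objection is aimed at the very first step: the move from professed indifference to p = 1/2 only works if the indifference expresses an actual probability judgement rather than a suspension of judgement.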

Ergodicity for Dummies

25 February, 2013 at 18:03 | Posted in Statistics & Econometrics, Theory of Science & Methodology | 3 Comments

Why are election polls often inaccurate? Why is racism wrong? Why are your assumptions often mistaken? The answers to all these questions and to many others have a lot to do with the non-ergodicity of human ensembles. Many scientists agree that ergodicity is one of the most important concepts in statistics. So, what is it?

Suppose you are concerned with determining what the most visited parks in a city are. One idea is to take a momentary snapshot: to see how many people are at this moment in park A, how many are in park B and so on. Another idea is to look at one individual (or a few of them) and to follow him for a certain period of time, e.g. a year. Then, you observe how often the individual goes to park A, how often he goes to park B and so on.

Thus, you obtain two different results: one statistical analysis over the entire ensemble of people at a certain moment in time, and one statistical analysis for one person over a certain period of time. The first one may not be representative for a longer period of time, while the second one may not be representative of all the people. The idea is that an ensemble is ergodic if the two types of statistics give the same result. Many ensembles, like human populations, are not ergodic.

The importance of ergodicity becomes manifest when you think about how we all infer various things, how we draw some conclusion about something while having information about something else. For example, one goes once to a restaurant and likes the fish, and next time he goes to the same restaurant and orders chicken, confident that the chicken will be good. Why is he confident? Or one observes that a newspaper has printed some inaccurate information at one point in time and infers that the newspaper is going to publish inaccurate information in the future. Why are these inferences OK, while others such as “more crimes are committed by black persons than by white persons, therefore each individual black person is not to be trusted” are not OK?

The answer is that the ensemble of articles published in a newspaper is more or less ergodic, while the ensemble of black people is not at all ergodic. If one counts how many mistakes appear in an entire newspaper in one issue, and then counts how many mistakes one news editor makes over time, one finds the two results almost identical (not exactly, but nonetheless approximately equal). However, if one takes the number of crimes committed by black people on a certain day divided by the total number of black people, and then follows one randomly picked black individual over his life, one would not find that, e.g. each month, this individual commits crimes at the same rate as the crime rate determined over the entire ensemble. Thus, one cannot use ensemble statistics to properly infer what a certain individual is or is not likely to do.

Vlad Tarko
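Tarko’s park example can be made concrete in a few lines of code. The following is a minimal sketch with purely made-up probabilities: in the ergodic case everyone shares the same propensity to visit the park, so the ensemble average (everyone, one day) and the time average (one person, many days) roughly coincide; in the non-ergodic case each person has a fixed personal propensity, and the two averages generally diverge.

```python
import random

random.seed(1)

# Illustrative assumptions only: in the ergodic case every person visits
# park A with the same probability 0.3; in the non-ergodic case each
# person has a personal visiting probability drawn once and kept fixed.
N_PEOPLE, N_DAYS = 10_000, 10_000

def same_for_all():
    return 0.3                              # ergodic ensemble

def personal():
    return random.betavariate(0.5, 0.5)     # non-ergodic: fixed personal taste

def ensemble_average(draw_propensity):
    # One snapshot: the fraction of the whole population in park A today.
    return sum(random.random() < draw_propensity() for _ in range(N_PEOPLE)) / N_PEOPLE

def time_average(draw_propensity):
    # One individual, with a fixed propensity, followed over many days.
    p = draw_propensity()
    return sum(random.random() < p for _ in range(N_DAYS)) / N_DAYS

print("ergodic:     ensemble %.3f  time %.3f" % (ensemble_average(same_for_all), time_average(same_for_all)))
print("non-ergodic: ensemble %.3f  time %.3f" % (ensemble_average(personal), time_average(personal)))
```

In the non-ergodic case the ensemble average hovers around the population mean of the assumed propensity distribution, while the time average equals whatever propensity the one sampled individual happens to have, which is exactly why ensemble statistics tell you little about what a given individual will do.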

On Bayesianism, uncertainty and consistency in “large worlds”

25 February, 2013 at 14:36 | Posted in Theory of Science & Methodology | Leave a comment

The view that Bayesian decision theory is only genuinely valid in a small world was asserted very firmly by Leonard Savage when laying down the principles of the theory in his path-breaking Foundations of Statistics. He makes the distinction between small and large worlds in a folksy way by quoting the proverbs ”Look before you leap” and ”Cross that bridge when you come to it”. You are in a small world if it is feasible always to look before you leap. You are in a large world if there are some bridges that you cannot cross before you come to them.

As Savage comments, when proverbs conflict, it is proverbially true that there is some truth in both—that they apply in different contexts. He then argues that some decision situations are best modeled in terms of a small world, but others are not. He explicitly rejects the idea that all worlds can be treated as small as both ”ridiculous” and ”preposterous” … Frank Knight draws a similar distinction between making decisions under risk or uncertainty …

Bayesianism is understood [here] to be the philosophical principle that Bayesian methods are always appropriate in all decision problems, regardless of whether the relevant set of states in the relevant world is large or small. For example, the world in which financial economics is set is obviously large in Savage’s sense, but the suggestion that there might be something questionable about the standard use of Bayesian updating in financial models is commonly greeted with incredulity or laughter.

Someone who acts as if Bayesianism were correct will be said to be a Bayesianite. It is important to distinguish a Bayesian like myself—someone convinced by Savage’s arguments that Bayesian decision theory makes sense in small worlds—from a Bayesianite. In particular, a Bayesian need not join the more extreme Bayesianites in proceeding as though:

• All worlds are small.
• Rationality endows agents with prior probabilities.
• Rational learning consists simply in using Bayes’ rule to convert a set of prior probabilities into posterior probabilities after registering some new data.

Bayesianites are often understandably reluctant to make an explicit commitment to these principles when they are stated so baldly, because it then becomes evident that they are implicitly claiming that David Hume was wrong to argue that the principle of scientific induction cannot be justified by rational argument …

Bayesianites believe that the subjective probabilities of Bayesian decision theory can be reinterpreted as logical probabilities without any hassle. Its adherents therefore hold that Bayes’ rule is the solution to the problem of scientific induction. No support for such a view is to be found in Savage’s theory—nor in the earlier theories of Ramsey, de Finetti, or von Neumann and Morgenstern. Savage’s theory is entirely and exclusively a consistency theory. It says nothing about how decision-makers come to have the beliefs ascribed to them; it asserts only that, if the decisions taken are consistent (in a sense made precise by a list of axioms), then they act as though maximizing expected utility relative to a subjective probability distribution …

A reasonable decision-maker will presumably wish to avoid inconsistencies. A Bayesianite therefore assumes that it is enough to assign prior beliefs to a decision-maker, and then forget the problem of where beliefs come from. Consistency then forces any new data that may appear to be incorporated into the system via Bayesian updating. That is, a posterior distribution is obtained from the prior distribution using Bayes’ rule.

The naiveté of this approach doesn’t consist in using Bayes’ rule, whose validity as a piece of algebra isn’t in question. It lies in supposing that the problem of where the priors came from can be quietly shelved.

Savage did argue that his descriptive theory of rational decision-making could be of practical assistance in helping decision-makers form their beliefs, but he didn’t argue that the decision-maker’s problem was simply that of selecting a prior from a limited stock of standard distributions with little or nothing in the way of soul-searching. His position was rather that one comes to a decision problem with a whole set of subjective beliefs derived from one’s previous experience that may or may not be consistent …

But why should we wish to adjust our gut-feelings using Savage’s methodology? In particular, why should a rational decision-maker wish to be consistent? After all, scientists aren’t consistent, on the grounds that it isn’t clever to be consistently wrong. When surprised by data that shows current theories to be in error, they seek new theories that are inconsistent with the old theories. Consistency, from this point of view, is only a virtue if the possibility of being surprised can somehow be eliminated. This is the reason for distinguishing between large and small worlds. Only in the latter is consistency an unqualified virtue.

Ken Binmore
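For concreteness, the piece of algebra whose validity Binmore does not question is just this. A minimal sketch in Python, with made-up prior weights and likelihoods; the hypothesis labels are mine and purely illustrative:

```python
# Bayes' rule over a discrete set of hypotheses:
#   posterior(h) is proportional to prior(h) * P(data | h)
# The numbers below are illustrative assumptions, not estimates.

def bayes_update(prior, likelihood):
    """prior: {hypothesis: probability}; likelihood: {hypothesis: P(data | hypothesis)}."""
    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: weight / total for h, weight in unnormalized.items()}

prior = {"bridge holds": 0.9, "bridge fails": 0.1}
likelihood = {"bridge holds": 0.2, "bridge fails": 0.7}   # P(observed creaking | h)

print(bayes_update(prior, likelihood))   # posterior shifts weight towards "bridge fails"
```

The update itself is mechanical; Binmore’s point is that nothing in it tells you where the prior came from, or whether treating a large world as if it were small in this way is legitimate in the first place.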

Keynes on mathematical economics

25 February, 2013 at 09:42 | Posted in Economics | 1 Comment

But I am unfamiliar with the methods involved and it may be that my impression that nothing emerges at the end which has not been introduced expressly or tacitly at the beginning is quite wrong … It seems to me essential in an article of this sort to put in the fullest and most explicit manner at the beginning the assumptions which are made and the methods by which the price indexes are derived; and then to state at the end what substantially novel conclusions have been arrived at … I cannot persuade myself that this sort of treatment of economic theory has anything significant to contribute. I suspect it of being nothing better than a contraption proceeding from premises which are not stated with precision to conclusions which have no clear application … [This creates] a mass of symbolism which covers up all kinds of unstated special assumptions.

Keynes to Frisch 28 November 1935

Lecturing Wall Street Bankers

20 February, 2013 at 18:20 | Posted in Varia | 1 Comment

 

Ich glaub’ das zu träumen, die Mauer im Rücken war kalt

20 February, 2013 at 13:51 | Posted in Varia | Leave a comment

 
