Modern macroeconomics – like Hamlet without the Prince

31 May, 2013 at 19:13 | Posted in Economics | Leave a comment

Something is rotten in the state of macroeconomics … 

And its miserable state should come as no surprise for those of you who regularly follow yours truly’s blog.

Forecasting is by its nature a hit-and-miss affair; economics is not—despite the apparent dogmatic certainty of some of its practitioners—an exact science. But the track record of the profession in recent years—and last year in particular—is dire. Few economists spotted the boom and most hopelessly underestimated the bust. And it’s not as if the profession’s troubles in 2012 were limited to longer-range forecasts; it was getting it wrong virtually in real time with most forecasters forced to slash their projections every few months as each quarter turned out worse than expected …

What the dismal science’s dismal record suggests is that there is something profoundly wrong with the mainstream economics profession’s understanding of how modern economies work. The models on which its forecasts are built are clearly badly flawed …

But the most important contribution to the debate is an essay by Claudio Borio, deputy head of the monetary and economics department at the Bank for International Settlements, published last month and titled: “The Financial Cycle and Macroeconomics: What have we learned?”

In Mr. Borio’s view, the “New Keynesian Dynamic Stochastic General Equilibrium” model used by most mainstream forecasters is flawed because it assumes the financial system is frictionless: Its role is simply to allocate resources and therefore can be ignored. Although many economists now accept these assumptions are wrong, efforts to modify their models amount to little more than tinkering. What is needed is a return to out-of-fashion insights influential before World War II and kept alive since by maverick economists such as Hyman Minsky and Charles Kindleberger that recognized the central importance of the financial cycle.

Mainstream economists have been so fixated on understanding ordinary business cycles that they ignored the role that years of rising asset prices and financial sector liberalization can play in fueling credit booms. They lost sight of the fact that the financial system does more than allocate resources: It creates money—and therefore purchasing power—every time it extends a loan.

“Macroeconomics without the financial cycle is like Hamlet without the Prince,” as Mr. Borio puts it.

Simon Nixon

My new book is out

30 May, 2013 at 07:48 | Posted in Theory of Science & Methodology | Leave a comment

Economics is a discipline with the avowed ambition to produce theory for the real world. But it fails in this ambition, Lars Pålsson Syll asserts in Chapter 12, at least as far as the dominant mainstream neoclassical economic theory is concerned. Overly confident in deductivistic Euclidian methodology, neoclassical economic theory lines up series of mathematical models that display elaborate internal consistency but lack clear counterparts in the real world. Such models are at best unhelpful, if not outright harmful, and it is time for economic theory to take a critical realist perspective and explain economic life in depth rather than merely modeling it axiomatically.

The state of economic theory is not as bad as Pålsson Syll describes, Fredrik Hansen retorts in Chapter 13. Looking outside the mainstream neoclassical tradition, one can find numerous economic perspectives that are open to other disciplines and manifest growing interest in methodological matters. He is confident that theoretical and methodological pluralism will be able to refresh the debate on economic theory, particularly concerning the nature of realism in economic theory, a matter about which Pålsson Syll and Hansen clearly disagree.

What is theory? consists of a multidisciplinary collection of essays that are tied together by a common effort to tell what theory is, and paired as dialogues between senior and junior researchers from the same or allied disciplines to add a trans-generational dimension to the book’s multidisciplinary approach.

The book has mainly been designed for master’s degree students and postgraduates in the social sciences and the humanities.

On the impossibility of predicting the future

28 May, 2013 at 20:02 | Posted in Statistics & Econometrics | 1 Comment

 

What is randomness?

28 May, 2013 at 08:38 | Posted in Statistics & Econometrics | 2 Comments

Modern probabilistic econometrics relies on the notion of probability. To be amenable to econometric analysis at all, economic observations allegedly have to be conceived of as random events.

But is it really necessary to model the economic system as a system where randomness can only be analyzed and understood when based on an a priori notion of probability?

In probabilistic econometrics, events and observations are as a rule interpreted as random variables as if generated by an underlying probability density function, and a fortiori – since probability density functions are only definable in a probability context – consistent with a probability. As Haavelmo (1944:iii) has it:

For no tool developed in the theory of statistics has any meaning – except, perhaps, for descriptive purposes – without being referred to some stochastic scheme.

When attempting to convince us of the necessity of founding empirical economic analysis on probability models, Haavelmo – building largely on the earlier Fisherian paradigm – actually forces econometrics to (implicitly) interpret events as random variables generated by an underlying probability density function.

This is at odds with reality. Randomness obviously is a fact of the real world. Probability, on the other hand, attaches to the world via intellectually constructed models, and a fortiori is only a fact of a probability-generating machine or a well-constructed experimental arrangement or “chance set-up”.

Just as there is no such thing as a “free lunch,” there is no such thing as a “free probability.” To be able to talk about probabilities at all, you have to specify a model. If there is no chance set-up or model that generates the probabilistic outcomes or events – in statistics one refers to any process of observation or measurement as an experiment (rolling a die) and to the results obtained as the outcomes or events of the experiment (the number of points rolled, e.g. 3 or 5) – then, strictly speaking, there is no event at all.
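The point is easy to illustrate with a trivial simulation (a sketch only – the seed, sample size and die are of course my own invented chance set-up). The relative frequency below is meaningful precisely because the generating model is fully specified beforehand:

```python
import random

# A fully specified "chance set-up": a fair six-sided die.
# Every probability statement below is relative to this model.
random.seed(1)

def roll_die(n):
    """Simulate n rolls of the die -- the nomological machine itself."""
    return [random.randint(1, 6) for _ in range(n)]

rolls = roll_die(100_000)
# Relative frequency of the event "a 3 is rolled" within the model:
freq_3 = rolls.count(3) / len(rolls)
print(round(freq_3, 3))  # close to the model probability 1/6
```

Take away the specification of the die and the number ceases to mean anything – which is precisely what is missing when probabilities are simply asserted for economic data.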

Probability is a relational element. It always must come with a specification of the model from which it is calculated. And then to be of any empirical scientific value it has to be shown to coincide with (or at least converge to) real data generating processes or structures – something seldom or never done!

And this is the basic problem with economic data. If you have a fair roulette-wheel, you can arguably specify probabilities and probability density distributions. But how do you conceive of the analogous nomological machines for prices, gross domestic product, income distribution etc? Only by a leap of faith. And that does not suffice. You have to come up with some really good arguments if you want to persuade people into believing in the existence of socio-economic structures that generate data with characteristics conceivable as stochastic events portrayed by probabilistic density distributions!

From a realistic point of view we really have to admit that the socio-economic states of nature that we talk of in most social sciences – and certainly in econometrics – are not amenable to analysis in terms of probabilities, simply because in the real-world open systems that the social sciences – including econometrics – study, there are no probabilities to be had!

The processes that generate socio-economic data in the real world cannot just be assumed to always be adequately captured by a probability measure. And, so, it cannot really be maintained – as in the Haavelmo paradigm of probabilistic econometrics – that it even should be mandatory to treat observations and data – whether cross-section, time series or panel data – as events generated by some probability model. The important activities of most economic agents do not usually include throwing dice or spinning roulette-wheels. Data generating processes – at least outside of nomological machines like dice and roulette-wheels – are not self-evidently best modeled with probability measures.

If we agree on this, we also have to admit that probabilistic econometrics lacks a sound justification. I would even go further and argue that there really is no justifiable rationale at all for this belief that all economically relevant data can be adequately captured by a probability measure. In most real world contexts one has to argue one’s case. And that is obviously something seldom or never done by practitioners of probabilistic econometrics.

Econometrics and probability are intermingled with randomness. But what is randomness?

In probabilistic econometrics it is often defined with the help of independent trials – two events are said to be independent if the occurrence or nonoccurrence of either one has no effect on the probability of the occurrence of the other – as drawing cards from a deck, picking balls from an urn, spinning a roulette wheel or tossing coins – trials which are only definable if somehow set in a probabilistic context.

But if we pick a sequence of prices – say 2, 4, 3, 8, 5, 6, 6 – that we want to use in an econometric regression analysis, how do we know that the sequence is random and, a fortiori, that it can be treated as generated by an underlying probability density function? How can we argue that the sequence is a sequence of probabilistically independent random prices? And are they really random in the sense most often applied in probabilistic econometrics – where X is called a random variable only if there is a sample space S with a probability measure and X is a real-valued function over the elements of S?

Bypassing the scientific challenge of going from describable randomness to calculable probability by just assuming it, is of course not an acceptable procedure. Since a probability density function is a “Gedanken” object that does not exist in a natural sense, it has to come with an export license to our real target system if it is to be considered usable.

Among those who at least honestly try to face the problem, the usual procedure is to refer to some artificial mechanism operating in some “games of chance” of the kind mentioned above which generates the sequence. But then we still have to show that the real sequence somehow coincides with the ideal sequence that defines independence and randomness within our – to speak with science philosopher Nancy Cartwright (1999) – “nomological machine”, our chance set-up, our probabilistic model.

As the originator of the Kalman filter, Rudolf Kalman (1994:143), notes:

Not being able to test a sequence for ‘independent randomness’ (without being told how it was generated) is the same thing as accepting that reasoning about an “independent random sequence” is not operationally useful.

So why should we define randomness in terms of probability? If we do, we have to accept that to speak of randomness we also have to presuppose the existence of nomological probability machines, since probabilities cannot be spoken of – and actually, to be strict, do not exist at all – without specifying such system-contexts (how many sides do the dice have, are the cards unmarked, etc.).

If we do adhere to the Fisher–Haavelmo paradigm of probabilistic econometrics, we also have to assume that all noise in our data is probabilistic and that errors are well-behaved – something that is hard to justifiably argue for as a real phenomenon, and not just an operationally and pragmatically tractable assumption.

Maybe Kalman’s (1994:147) verdict that

Haavelmo’s error that randomness = (conventional) probability is just another example of scientific prejudice

is, seen from this perspective, not far-fetched.

Accepting Haavelmo’s domain of probability theory and sample space of infinite populations – just as Fisher’s (1922:311) “hypothetical infinite population, of which the actual data are regarded as constituting a random sample”, von Mises’ “collective” or Gibbs’ “ensemble” – also implies that judgments are made on the basis of observations that are actually never made!

Infinitely repeated trials or samplings never take place in the real world. So that cannot be a sound inductive basis for a science with aspirations of explaining real-world socio-economic processes, structures or events. It’s not tenable.

As David Salsburg (2001:146) notes on probability theory:

[W]e assume there is an abstract space of elementary things called ‘events’ … If a measure on the abstract space of events fulfills certain axioms, then it is a probability. To use probability in real life, we have to identify this space of events and do so with sufficient specificity to allow us to actually calculate probability measurements on that space … Unless we can identify [this] abstract space, the probability statements that emerge from statistical analyses will have many different and sometimes contrary meanings.

Just as e.g. Keynes (1921) and Georgescu-Roegen (1971), Salsburg (2001:301f) is very critical of the way social scientists – including economists and econometricians – uncritically and without argument have come simply to assume that one can apply probability distributions from statistical theory to their own areas of research:

Probability is a measure of sets in an abstract space of events. All the mathematical properties of probability can be derived from this definition. When we wish to apply probability to real life, we need to identify that abstract space of events for the particular problem at hand … It is not well established when statistical methods are used for observational studies … If we cannot identify the space of events that generate the probabilities being calculated, then one model is no more valid than another … As statistical models are used more and more for observational studies to assist in social decisions by government and advocacy groups, this fundamental failure to be able to derive probabilities without ambiguity will cast doubt on the usefulness of these methods.

Some wise words that ought to be taken seriously by probabilistic econometricians are also offered by mathematical statistician Gunnar Blom (2004:389):

If the demands for randomness are not at all fulfilled, using statistical methods only damages your analysis. The analysis acquires an air of science that it does not at all deserve.

Richard von Mises (1957:103) noted that

Probabilities exist only in collectives … This idea, which is a deliberate restriction of the calculus of probabilities to the investigation of relations between distributions, has not been clearly carried through in any of the former theories of probability.

And obviously not in Haavelmo’s paradigm of probabilistic econometrics either. It would have been better if one had heeded von Mises’ warning (1957:172) that

the field of application of the theory of errors should not be extended too far.

This importantly also means that if you cannot show that your data satisfy all the conditions of the probabilistic nomological machine – including randomness – then the statistical inferences used lack sound foundations!

References

Blom, Gunnar et al (2004), Sannolikhetsteori och statistikteori med tillämpningar. Lund: Studentlitteratur.

Cartwright, Nancy (1999), The Dappled World. Cambridge: Cambridge University Press.

Fisher, Ronald (1922), On the mathematical foundations of theoretical statistics. Philosophical Transactions of The Royal Society A, 222.

Georgescu-Roegen, Nicholas (1971), The Entropy Law and the Economic Process. Harvard University Press.

Haavelmo, Trygve (1944), The probability approach in econometrics. Supplement to Econometrica 12:1-115.

Kalman, Rudolf (1994), Randomness Reexamined. Modeling, Identification and Control 3:141-151.

Keynes, John Maynard (1973 (1921)), A Treatise on Probability. Volume VIII of The Collected Writings of John Maynard Keynes, London: Macmillan.

Pålsson Syll, Lars (2007), John Maynard Keynes. Stockholm: SNS Förlag.

Salsburg, David (2001), The Lady Tasting Tea. Henry Holt.

von Mises, Richard (1957), Probability, Statistics and Truth. New York: Dover Publications.

Capturing causality in economics (wonkish)

27 May, 2013 at 14:12 | Posted in Economics, Theory of Science & Methodology | Leave a comment

A few years ago Armin Falk and James Heckman published an acclaimed article titled “Lab Experiments Are a Major Source of Knowledge in the Social Sciences” in the journal Science. The authors – both renowned economists – argued that field experiments and laboratory experiments basically face the same problems in terms of generalizability and external validity – and that, a fortiori, it is impossible to say that one is better than the other.

What strikes me when reading both Falk & Heckman and advocates of field experiments – such as John List and Steven Levitt – is that field studies and experiments are both very similar to theoretical models. They all share the same basic problem: they are built on rather artificial conditions and face a “trade-off” between internal and external validity. The more artificial the conditions, the higher the internal validity, but also the lower the external validity. The more we rig experiments/field studies/models to avoid “confounding factors”, the less the conditions are reminiscent of the real “target system”. To that extent, I also believe that Falk & Heckman are right in their comments on the field vs. lab discussion in terms of realism – the nodal issue is not realism per se, but basically how economists using different isolation strategies in different “nomological machines” attempt to learn about causal relationships. In contrast to Falk & Heckman and to advocates of field experiments such as List and Levitt, I doubt the generalizability of both research strategies, because the probability is high that causal mechanisms differ in different contexts, and lack of homogeneity/stability/invariance doesn’t give us warranted export licenses to the “real” societies or economies.

If you mainly conceive of experiments or field studies as heuristic tools, the dividing line between, say, Falk & Heckman and List or Levitt is probably difficult to perceive.

But if we see experiments or field studies as theory tests or models that ultimately aspire to say something about the real “target system”, then the problem of external validity is central (and was for a long time also a key reason why behavioural economists had trouble getting their research results published).

Assume that you have examined how the work performance of Chinese workers A is affected by B (“treatment”). How can we extrapolate/generalize to new samples outside the original population (e.g. to the US)? How do we know that any replication attempt “succeeds”? How do we know when these replicated experimental results can be said to justify inferences made in samples from the original population? If, for example, P(A|B) is the conditional density function for the original sample, and we are interested in making an extrapolative prediction of E[P(A|B)], how can we know that the new sample’s density function is identical with the original? Unless we can give some really good argument for this being the case, inferences built on P(A|B) do not really say anything about the target system’s P'(A|B).
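A toy simulation may make the worry concrete (all numbers are invented for illustration): estimate a treatment effect in a population where it exists, then note that nothing in the estimate itself tells us whether it travels to a target population governed by a different P'(A|B).

```python
import random
random.seed(42)

# Hypothetical sketch: the conditional distribution estimated in the
# original population P need not coincide with P'(A|B) in the target
# population, so an estimated treatment effect need not travel.
N = 50_000

def mean_outcome(treated, effect):
    # outcome = standard-normal baseline + effect if treated
    return sum(random.gauss(0, 1) + (effect if treated else 0.0)
               for _ in range(N)) / N

# Original population P: the treatment raises outcomes by 1.0 on average
ate_P = mean_outcome(True, 1.0) - mean_outcome(False, 1.0)
# Target population P': the same treatment does nothing
ate_Pprime = mean_outcome(True, 0.0) - mean_outcome(False, 0.0)
print(round(ate_P, 2), round(ate_Pprime, 2))
```

Both estimates are internally valid; only background knowledge about why the two populations differ – not the statistics themselves – can license the export.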

As I see it, this is the heart of the matter. External validity/extrapolation/generalization is founded on the assumption that we can make inferences based on P(A|B) that are exportable to other populations for which P'(A|B) applies. Sure, if one can convincingly show that P and P' are similar enough, the problems are perhaps surmountable. But arbitrarily just introducing functional specification restrictions of the type invariance/stability/homogeneity is, at least for an epistemological realist, far from satisfactory. And often it is – unfortunately – exactly this that I see when I read neoclassical economists’ models/experiments/field studies.

By this I do not mean to say that empirical methods per se are so problematic that they can never be used. On the contrary, I am basically – though not without reservations – in favour of the increased use of experiments and field studies within economics. Not least as an alternative to completely barren “bridge-less” axiomatic-deductive theory models. My criticism is more about aspiration levels and what we believe that we can achieve with our mediational epistemological tools and methods in the social sciences.

Many ‘experimentalists’ claim that it is easy to replicate experiments under different conditions and therefore, a fortiori, easy to test the robustness of experimental results. But is it really that easy? If, in the example given above, we run a test and find that our predictions were not correct – what can we conclude? That B “works” in China but not in the US? Or that B “works” in a backward agrarian society, but not in a post-modern service society? That B “worked” in the field study conducted in 2008 but not in 2012? Population selection is almost never simple. Had the problem of external validity only been about inference from sample to population, this would be no critical problem. But the really interesting inferences are those we try to make from specific labs/experiments/fields to the specific real-world situations/institutions/structures that we are interested in understanding or (causally) explaining. And then the population problem is more difficult to tackle.

Evidence-based theories and policies are highly valued nowadays. Randomization is supposed to best control for bias from unknown confounders. The received opinion is that evidence based on randomized experiments therefore is the best.

More and more economists have also lately come to advocate randomization as the principal method for ensuring valid causal inferences.

Renowned econometrician Ed Leamer has responded to these claims, maintaining that randomization is not sufficient, and that the hopes of a better empirical and quantitative macroeconomics are to a large extent illusory. Randomization – just like econometrics – promises more than it can deliver, basically because it requires assumptions that in practice are not possible to maintain:

We economists trudge relentlessly toward Asymptopia, where data are unlimited and estimates are consistent, where the laws of large numbers apply perfectly and where the full intricacies of the economy are completely revealed. But it’s a frustrating journey, since, no matter how far we travel, Asymptopia remains infinitely far away. Worst of all, when we feel pumped up with our progress, a tectonic shift can occur, like the Panic of 2008, making it seem as though our long journey has left us disappointingly close to the State of Complete Ignorance whence we began.

The pointlessness of much of our daily activity makes us receptive when the Priests of our tribe ring the bells and announce a shortened path to Asymptopia … We may listen, but we don’t hear, when the Priests warn that the new direction is only for those with Faith, those with complete belief in the Assumptions of the Path. It often takes years down the Path, but sooner or later, someone articulates the concerns that gnaw away in each of us and asks if the Assumptions are valid … Small seeds of doubt in each of us inevitably turn to despair and we abandon that direction and seek another …

Ignorance is a formidable foe, and to have hope of even modest victories, we economists need to use every resource and every weapon we can muster, including thought experiments (theory), and the analysis of data from nonexperiments, accidental experiments, and designed experiments. We should be celebrating the small genuine victories of the economists who use their tools most effectively, and we should dial back our adoration of those who can carry the biggest and brightest and least-understood weapons. We would benefit from some serious humility, and from burning our “Mission Accomplished” banners. It’s never gonna happen.

Part of the problem is that we data analysts want it all automated. We want an answer at the push of a button on a keyboard … Faced with the choice between thinking long and hard versus pushing the button, the single button is winning by a very large margin.

Let’s not add a “randomization” button to our intellectual keyboards, to be pushed without hard reflection and thought.

Especially when it comes to questions of causality, randomization is nowadays considered some kind of “gold standard”. Everything has to be evidence-based, and the evidence has to come from randomized experiments.

But just like econometrics, randomization is basically a deductive method. Given the assumptions (such as manipulability, transitivity, Reichenbach probability principles, separability, additivity, linearity, etc.), these methods deliver deductive inferences. The problem, of course, is that we will never completely know when the assumptions are right. [And although randomization may contribute to controlling for confounding, it does not guarantee it, since genuine randomness presupposes infinite experimentation and we know all real experimentation is finite. And even if randomization may help to establish average causal effects, it says nothing of individual effects unless homogeneity is added to the list of assumptions.] Real target systems are seldom epistemically isomorphic to our axiomatic-deductive models/systems, and even if they were, we would still have to argue for the external validity of the conclusions reached from within these epistemically convenient models/systems. Causal evidence generated by randomization procedures may be valid in “closed” models, but what we usually are interested in is causal evidence in the real target system we happen to live in.
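The bracketed point about average versus individual effects is easy to illustrate (a sketch with invented numbers): a perfectly randomized experiment can deliver a positive average treatment effect even when the treatment harms a large minority of individuals.

```python
import random
random.seed(7)

# Hypothetical sketch: heterogeneous individual treatment effects.
# Randomization identifies the *average* effect, but the average can
# conceal that the treatment harms a sizeable minority.
n = 100_000
# 70% of individuals gain +2 from treatment, 30% lose -2
individual_effects = [2 if random.random() < 0.7 else -2 for _ in range(n)]

ate = sum(individual_effects) / n
harmed_share = sum(1 for e in individual_effects if e < 0) / n
print(round(ate, 2), round(harmed_share, 2))  # ATE near 0.8, ~30% harmed
```

Without a homogeneity assumption added to the list, an average effect of about 0.8 is compatible with almost any distribution of individual effects.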

When does a conclusion established in population X hold for target population Y? Only under very restrictive conditions!

Ideally controlled experiments (still the benchmark even for natural and quasi experiments) tell us with certainty what causes what effects – but only given the right “closures”. Making appropriate extrapolations from (ideal, accidental, natural or quasi) experiments to different settings, populations or target systems is not easy. “It works there” is no evidence for “it will work here”. Causes deduced in an experimental setting still have to show that they come with an export-warrant to the target population/system. The causal background assumptions made have to be justified, and without licenses to export, the value of “rigorous” and “precise” methods is despairingly small.

Here I think Leamer’s “button” metaphor is appropriate. Many advocates of randomization want to have deductively automated answers to fundamental causal questions. But to apply “thin” methods we have to have “thick” background knowledge of what’s going on in the real world, and not in (ideally controlled) experiments. Conclusions can only be as certain as their premises – and that also goes for methods based on randomized experiments.

Harvard Statistics 110 – some classic probability problems

26 May, 2013 at 11:44 | Posted in Statistics & Econometrics | Leave a comment

 

New Keynesianism and DSGE – intellectually bankrupt enterprises

26 May, 2013 at 09:16 | Posted in Economics | Leave a comment

In modern neoclassical macroeconomics – Dynamic Stochastic General Equilibrium (DSGE), New Synthesis, New Classical and “New Keynesian” – variables are treated as if drawn from a known “data-generating process” that unfolds over time and on which we therefore allegedly have access to heaps of historical time-series.

Modern macroeconomics obviously did not anticipate the enormity of the problems that unregulated “efficient” financial markets created. Why? Because it builds on the myth of us knowing the “data-generating process” and that we can describe the variables of our evolving economies as drawn from an urn containing stochastic probability functions with known means and variances.

In the end this is what it all boils down to. We all know that many activities, relations, processes and events are genuinely uncertain. The data do not unequivocally single out one decision as the only “rational” one. Neither the economist, nor the deciding individual, can fully pre-specify how people will decide when facing uncertainties and ambiguities that are ontological facts of the way the world works.

Some macroeconomists, however, still want to be able to use their hammer. So they decide to pretend that the world looks like a nail, and pretend that uncertainty can be reduced to risk. So they construct their mathematical models on that assumption.

Fortunately – when you’ve grown tired of the kind of gobsmacking macroeconomics apologetics produced by so-called “New Keynesian” macroeconomists – there still are some real Keynesian macroeconomists to read!

One of them, Axel Leijonhufvud, I last met a couple of years ago in Copenhagen, where we were invited keynote speakers at the conference “Keynes 125 Years – What Have We Learned?” Axel’s speech was later published as Keynes and the crisis and contains some very profound insights and antidotes to DSGE modeling and New “Keynesianism”:

So far I have argued that recent events should force us to re-examine recent monetary policy doctrine. Do we also need to reconsider modern macroeconomic theory in general? I should think so. Consider briefly a few of the issues.

The real interest rate … The problem is that the real interest rate does not exist in reality but is a constructed variable. What does exist is the money rate of interest from which one may construct a distribution of perceived real interest rates given some distribution of inflation expectations over agents. Intertemporal non-monetary general equilibrium (or finance) models deal in variables that have no real world counterparts. Central banks have considerable influence over money rates of interest as demonstrated, for example, by the Bank of Japan and now more recently by the Federal Reserve …

The representative agent. If all agents are supposed to have rational expectations, it becomes convenient to assume also that they all have the same expectation and thence tempting to jump to the conclusion that the collective of agents behaves as one. The usual objection to representative agent models has been that it fails to take into account well-documented systematic differences in behaviour between age groups, income classes, etc. In the financial crisis context, however, the objection is rather that these models are blind to the consequences of too many people doing the same thing at the same time, for example, trying to liquidate very similar positions at the same time. Representative agent models are peculiarly subject to fallacies of composition. The representative lemming is not a rational expectations intertemporal optimising creature. But he is responsible for the fat tail problem that macroeconomists have the most reason to care about …

For many years now, the main alternative to Real Business Cycle Theory has been a somewhat loose cluster of models given the label of New Keynesian theory. New Keynesians adhere on the whole to the same DSGE modeling technology as RBC macroeconomists but differ in the extent to which they emphasise inflexibilities of prices or other contract terms as sources of short-term adjustment problems in the economy. The “New Keynesian” label refers back to the “rigid wages” brand of Keynesian theory of 40 or 50 years ago. Except for this stress on inflexibilities this brand of contemporary macroeconomic theory has basically nothing Keynesian about it.

The obvious objection to this kind of return to an earlier way of thinking about macroeconomic problems is that the major problems that have had to be confronted in the last twenty or so years have originated in the financial markets – and prices in those markets are anything but “inflexible”. But there is also a general theoretical problem that has been festering for decades with very little in the way of attempts to tackle it. Economists talk freely about “inflexible” or “rigid” prices all the time, despite the fact that we do not have a shred of theory that could provide criteria for judging whether a particular price is more or less flexible than appropriate to the proper functioning of the larger system. More than seventy years ago, Keynes already knew that a high degree of downward price flexibility in a recession could entirely wreck the financial system and make the situation infinitely worse. But the point of his argument has never come fully to inform the way economists think about price inflexibilities …

I began by arguing that there are three things we should learn from Keynes … The third was to ask whether events proved that existing theory needed to be revised. On that issue, I conclude that dynamic stochastic general equilibrium theory has shown itself an intellectually bankrupt enterprise. But this does not mean that we should revert to the old Keynesian theory that preceded it (or adopt the New Keynesian theory that has tried to compete with it). What we need to learn from Keynes, instead, are these three lessons about how to view our responsibilities and how to approach our subject.

Are economists rational?

25 May, 2013 at 15:48 | Posted in Economics | 1 Comment

Now consider what happened in November 2007. It was just one month before the Great Recession officially began …

Economists in the Survey of Professional Forecasters, a quarterly poll put out by the Federal Reserve Bank of Philadelphia, nevertheless foresaw a recession as relatively unlikely. Instead, they expected the economy to grow at a just slightly below average rate of 2.4 percent in 2008 … This was a very bad forecast: GDP actually shrank by 3.3 percent once the financial crisis hit. What may be worse is that the economists were extremely confident in their prediction. They assigned only a 3 percent chance to the economy’s shrinking by any margin over the whole of 2008 …

Indeed, economists have for a long time been much too confident in their ability to predict the direction of the economy … Their predictions have not just been overconfident but also quite poor in a real-world sense … Economic forecasters get more feedback than people in most other professions, but they haven’t chosen to correct for their bias toward overconfidence.

Eleni Karaindrou

24 May, 2013 at 13:29 | Posted in Varia | Leave a comment

 

Breathtaking masterpiece

24 May, 2013 at 11:07 | Posted in Varia | Leave a comment

Eleni Karaindrou’s breathtakingly beautiful “Prayer” from Theo Angelopoulos’s masterpiece The Weeping Meadow. It breaks my heart every time.
 
