Econ 101 theory of labour markets — not very scientific

3 December, 2016 at 18:39 | Posted in Economics | 1 Comment

OK, so what are some empirical things we know about labor markets? Here are two stylized facts that, while not completely uncontroversial, are pretty one-sided in the literature:

1. A surge of immigration does not have a big immediate negative impact on wages.

2. Modest minimum wage hikes do not have a big immediate negative impact on employment.

The first fact alone does not falsify the Econ 101 theory of labor markets. It could be the case that short-run labor demand is simply very elastic …

BUT, this is impossible to reconcile with the second stylized fact. If labor demand is very elastic, minimum wage should have big noticeable negative effects on employment.

By the same token, if you try to explain the second stylized fact by making both labor supply and demand very inelastic, then you contradict the first stylized fact. You just can’t explain both of these facts at the same time with this theory. It cannot be done.

So the Econ 101 theory of labor supply and labor demand has been falsified. It’s just not a useful theory for explaining labor markets in the short term (the long term might be a different story). It’s not a good approximation. It doesn’t give good qualitative intuition. And it’s especially bad for explaining the market for low-wage labor, which is the market that most of the aforementioned studies concentrate on.

What is a better theory of the labor market? Maybe general equilibrium. Maybe search and matching theory. Maybe a theory with very heterogeneous types of labor. Maybe something else.

But this theory, this simple Econ 101 short-run partial-equilibrium price theory of undifferentiated labor, has been falsified. If econ pundits, policy advisors, and other public-facing econ folks were scientifically minded, we’d stop using this model in our discussions of labor markets. We’d stop casually throwing out terms like “labor demand” without first thinking very carefully about how that concept should be applied. We’d stop using this framework to think about other policies, like overtime rules, that might affect the labor market.

Sadly, though, I bet that we will not. We will continue using this falsified theory to “organize our thoughts” – i.e., we’ll keep treating it as if it were true. So we will continue to make highly questionable policy recommendations. The fact that this theory is such a simple, clear, well-understood tool – so good for “organizing our thinking”, even if it doesn’t match reality – will keep it in use long after its sell-by date. That’s what James Kwak calls “economism”, and I call “101ism”. Whatever it’s called, it’s not very scientific.

Noah Smith
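
To see just how tight the bind Smith describes is, it may help to put numbers on it. The little sketch below is my own illustration — not Smith’s calculation — using an invented log-linear supply-and-demand model with made-up elasticities:

```python
# A minimal sketch of the Econ 101 comparative statics (my own illustration,
# not Smith's): log-linear demand ln L_d = a - e_d*ln w and supply
# ln L_s = b + e_s*ln w, with invented elasticities e_d and e_s.

def equilibrium(a, b, e_d, e_s):
    """Market-clearing log wage and log employment."""
    ln_w = (a - b) / (e_d + e_s)
    ln_L = a - e_d * ln_w
    return ln_w, ln_L

def immigration_effect(a, b, e_d, e_s, shift=0.10):
    """Shift log labour supply out by 10% and report the resulting changes."""
    w0, L0 = equilibrium(a, b, e_d, e_s)
    w1, L1 = equilibrium(a, b + shift, e_d, e_s)
    return w1 - w0, L1 - L0              # log changes ~ approximate % changes

def min_wage_effect(a, b, e_d, e_s, hike=0.10):
    """Impose a wage floor 10% above the market-clearing wage."""
    w0, L0 = equilibrium(a, b, e_d, e_s)
    return (a - e_d * (w0 + hike)) - L0  # employment is read off the demand curve

for label, e_d, e_s in [("very elastic demand      ", 5.0, 0.5),
                        ("inelastic demand & supply", 0.3, 0.3)]:
    dw, dL = immigration_effect(1.0, 0.0, e_d, e_s)
    dL_mw = min_wage_effect(1.0, 0.0, e_d, e_s)
    print(f"{label}: immigration -> wages {dw:+.1%}, employment {dL:+.1%}; "
          f"minimum wage -> employment {dL_mw:+.1%}")
```

With very elastic demand the model absorbs a 10% supply surge at almost unchanged wages — consistent with the first stylized fact — but then predicts that a 10% wage floor destroys roughly half of employment, contradicting the second. Making demand and supply inelastic reverses the problem. That is exactly the inconsistency Smith points to.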

Lovely to see that at least some mainstream economists have the courage and intellectual guts to admit that they have been wrong.

But — sad to say — many economists will probably continue to peddle their falsified theories. It’s hard to kill your darlings …

The inverse relationship between quantity demanded and price is the core proposition in economic science, which embodies the pre-supposition that human choice behavior is sufficiently rational to allow predictions to be made. Just as no physicist would claim that “water runs uphill,” no self-respecting economist would claim that increases in the minimum wage increase employment. Such a claim, if seriously advanced, becomes equivalent to a denial that there is even minimal scientific content in economics, and that, in consequence, economists can do nothing but write as advocates for ideological interests. Fortunately, only a handful of economists are willing to throw over the teaching of two centuries; we have not yet become a bevy of camp-following whores.

James M. Buchanan in Wall Street Journal (April 25, 1996)

NAIRU — a false hypothesis

2 December, 2016 at 20:32 | Posted in Economics | Leave a comment

The natural rate hypothesis (NRH) is the idea that unemployment has an inherent tendency to return to some special “natural rate” that is a property of the available technology for finding jobs. It is a fact of nature, a bit like the gravitational constant in celestial mechanics. The NRH has been taught to every economist in every top economics department for the past thirty years. As part of the package, economists learn that the natural rate cannot be influenced by fiscal or monetary policy …

Even today, the NRH is a central component of New Keynesian economics and, with very few exceptions, central bankers, politicians, and economic talking heads use the theory of the natural rate of unemployment to explain their views on the appropriate stance of monetary policy. I believe that the NRH is false, and this fact has important consequences. If central bankers are working with a false theory, they are likely to make bad decisions that affect all of our lives.

Farmer has always — as did e.g. Wicksell and Keynes — made a point of the fact that equilibrium and optimality are not the same thing. An economy can thus be in equilibrium and still exhibit high and persistent unemployment. Farmer uses a search-theoretical approach to underpin this view. Although yours truly does not share his faiblesse for the Mortensen-Pissarides-Diamond modelling of Keynesian ideas on labour markets and unemployment, it is well worth following his argument for this view in the book.

Three suggestions to ‘save’ econometrics

29 November, 2016 at 11:33 | Posted in Economics, Statistics & Econometrics | 5 Comments

Reading an applied econometrics paper could leave you with the impression that the economist (or any social science researcher) first formulated a theory, then built an empirical test based on the theory, then tested the theory. But in my experience what generally happens is more like the opposite: with some loose ideas in mind, the econometrician runs a lot of different regressions until they get something that looks plausible, then tries to fit it into a theory (existing or new) … Statistical theory itself tells us that if you do this for long enough, you will eventually find something plausible by pure chance!

This is bad news because as tempting as that final, pristine-looking causal effect is, readers have no way of knowing how it was arrived at. There are several ways I’ve seen to guard against this:

(1) Use a multitude of empirical specifications to test the robustness of the causal links, and pick the one with the best predictive power …

(2) Have researchers submit their paper for peer review before they carry out the empirical work, detailing the theory they want to test, why it matters and how they’re going to do it. Reasons for inevitable deviations from the research plan should be explained clearly in an appendix by the authors and (re-)approved by referees.

(3) Insist that the paper be replicated. Firstly, by having the authors submit their data and code and seeing if referees can replicate it (think this is a low bar? Most empirical research in ‘top’ economics journals can’t even manage it). Secondly — in the truer sense of replication — wait until someone else, with another dataset or method, gets the same findings in at least a qualitative sense. The latter might be too much to ask of researchers for each paper, but it is a good thing to have in mind as a reader before you are convinced by a finding.

All three of these should, in my opinion, be a prerequisite for research that uses econometrics …

Naturally, this would result in a lot more null findings and probably a lot less research. Perhaps it would also result in fewer papers that attempt to tell the entire story: that is, which go all the way from building a new model to finding (surprise!) that even the most rigorous empirical methods support it.

Unlearning Economics
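
The first point in the passage — that running enough regressions will sooner or later produce something ‘plausible’ by pure chance — is easy to verify with a toy simulation. The sketch below is my own illustration, not taken from the quoted post:

```python
# A toy simulation of the specification search described above: the outcome
# and all candidate regressors are pure noise, yet scanning enough candidates
# almost always yields at least one "significant" relationship at the 5% level.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_obs, n_candidates, n_studies = 100, 40, 1000

lucky = 0
for _ in range(n_studies):
    y = rng.normal(size=n_obs)                  # outcome: pure noise
    X = rng.normal(size=(n_obs, n_candidates))  # 40 unrelated candidate regressors
    p_values = [stats.pearsonr(X[:, j], y)[1] for j in range(n_candidates)]
    if min(p_values) < 0.05:                    # report only the "best" regression
        lucky += 1

# With 40 independent tries, P(at least one p < 0.05) is roughly 1 - 0.95**40 ≈ 0.87
print(f"share of pure-noise 'studies' with a significant result: {lucky / n_studies:.0%}")
```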

Good suggestions, but unfortunately there are many more deep problems with econometrics that have to be ‘solved.’

In econometrics one often gets the feeling that many of its practitioners think of it as a kind of automatic inferential machine: input data and out comes causal knowledge. This is like pulling a rabbit from a hat. Great — but first you have to put the rabbit in the hat. And this is where assumptions come into the picture. The assumption of imaginary ‘superpopulations’ is one of the many dubious assumptions used in modern econometrics.

Misapplication of inferential statistics to non-inferential situations is a non-starter for doing proper science. And when choosing which models to use in our analyses, we cannot get around the fact that the evaluation of our hypotheses, explanations, and predictions cannot be made without reference to a specific statistical model or framework. The probabilistic-statistical inferences we make from our samples depend decisively on what population we choose to refer to. The reference class problem shows that there are usually many such populations to choose from, and that the one we choose determines which probabilities we come up with and, a fortiori, which predictions we make. Not consciously contemplating the relativity that this choice of ‘nomological-statistical machine’ introduces is probably one of the reasons econometricians have a false sense of how much uncertainty really afflicts their models.
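
A deliberately trivial illustration of the reference class point — all numbers are invented for the purpose — shows how the ‘probability’ we attach to one and the same individual shifts with the population we treat her as a random draw from:

```python
# Toy illustration of the reference-class problem (all rates invented):
# the same worker gets a different "probability of unemployment" depending
# on which reference population we choose.
reference_classes = {
    "all workers":                 0.07,
    "workers in her region":       0.12,
    "workers with her education":  0.03,
    "young workers in her region": 0.21,
}
for population, unemployment_rate in reference_classes.items():
    print(f"P(unemployed | random draw from {population!r}) = {unemployment_rate:.0%}")
```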

As economists and econometricians we have to confront the all-important question of how to handle uncertainty and randomness. Should we define randomness with probability? If we do, we have to accept that to speak of randomness we also have to presuppose the existence of nomological probability machines, since probabilities cannot be spoken of – and actually, to be strict, do not at all exist – without specifying such system-contexts. Accepting Haavelmo’s domain of probability theory and sample space of infinite populations – just as Fisher’s ‘hypothetical infinite population,’ von Mises’s ‘collective’ or Gibbs’s ‘ensemble’ – also implies that judgments are made on the basis of observations that are actually never made! Infinitely repeated trials or samplings never take place in the real world. So that cannot be a sound inductive basis for a science with aspirations of explaining real-world socio-economic processes, structures or events. It’s not tenable.

Economists — and econometricians — have (uncritically and often without argument) come to simply assume that one can apply probability distributions from statistical theory to their own area of research. However, fundamental problems arise when you try to apply statistical models outside overly simple nomological machines like coin tossing and roulette wheels.

Of course one could arguably treat our observational or experimental data as random samples from real populations. But probabilistic econometrics does not content itself with such populations. Instead it creates imaginary populations of ‘parallel universes’ and assumes that our data are random samples from them. This is actually nothing but hand-waving! When doing econometrics it is always wise to remember C. S. Peirce’s remark that universes are not as common as peanuts …

The Economist — Economics prone to fads and methodological crazes

27 November, 2016 at 18:49 | Posted in Economics | 2 Comments

When a hot new tool arrives on the scene, it should extend the frontiers of economics and pull previously unanswerable questions within reach. What might seem faddish could in fact be economists piling in to help shed light on the discipline’s darkest corners. Some economists, however, argue that new methods also bring new dangers; rather than pushing economics forward, crazes can lead it astray, especially in their infancy …

A paper by Angus Deaton, a Nobel laureate and expert data digger, and Nancy Cartwright, an economist (sic!) at Durham University, argues that randomised control trials, a current darling of the discipline, enjoy misplaced enthusiasm. RCTs involve randomly assigning a policy to some people and not to others, so that researchers can be sure that differences are caused by the policy. Analysis is a simple comparison of averages between the two. Mr Deaton and Ms Cartwright have a statistical gripe; they complain that researchers are not careful enough when calculating whether two results are significantly different from one another. As a consequence, they suspect that a sizeable portion of published results in development and health economics using RCTs are “unreliable”.

With time, economists should learn when to use their shiny new tools. But there is a deeper concern: that fashions and fads are distorting economics, by nudging the profession towards asking particular questions, and hiding bigger ones from view. Mr Deaton’s and Ms Cartwright’s fear is that RCTs yield results while appearing to sidestep theory, and that “without knowing why things happen and why people do things, we run the risk of worthless causal (‘fairy story’) theorising, and we have given up on one of the central tasks of economics.” Another fundamental worry is that by offering alluringly simple ways of evaluating certain policies, economists lose sight of policy questions that are not easily testable using RCTs, such as the effects of institutions, monetary policy or social norms.

The Economist

For my own take on the RCT fad — here, here and here.

On economic knowledge — Fronesis no. 54-55

26 November, 2016 at 15:37 | Posted in Economics | Leave a comment

Since the global financial crisis of 2008, economics has come under the spotlight. Student movements and heterodox economists have criticised the dominant economic paradigm and demanded greater pluralism. Recent political developments have exposed the shortcomings of neoliberalism and made the question of its connection to economic science topical again. In Fronesis no. 54-55 we take a closer look at the conditions of economic knowledge.

The left has long directed its social-theoretical and political interest towards cultural and symbolic aspects of power and domination, but has more or less left the economic field to one side. With Fronesis no. 54-55 we want to move beyond a simplistic critique of economics and deepen the understanding of the conditions of economic knowledge. The issue introduces to a Swedish audience a number of key contemporary theorists who illuminate these questions from different perspectives.

Contents:

Kajsa Borgnäs and Anders Hylmö: Economic knowledge in transition
Anders Hylmö: Modern economics as a scientific style and discipline
Dimitris Milonakis: Lessons from the crisis
Marion Fourcade, Étienne Ollion and Yann Algan: The superiority of economists
Kajsa Borgnäs: Outside the box, or What is heterodox economics?
Lars Pålsson Syll: Tony Lawson and the critique of economic science – an introduction
Tony Lawson: The nature of heterodox economics
Josef Taalbi: Realistic economic theory?
Erik Bengtsson: The material conditions of heterodox economics
Julie A. Nelson: Gender metaphors and economics
Linda Nyberg: Neoliberalism, politics and economics
Philip Mirowski: The political movement that dared not speak its name
Jason Read: A genealogy of homo oeconomicus
Kajsa Borgnäs: The political power of scientific economics
Daniel Hirschman and Elizabeth Popp Berman: Do economists make policy?
Peter Gerlach, Marika Lindgren Åsbrink and Ola Pettersson, interviewed by Daniel Mathisen: All else equal

The use of mathematics in physics and economics

26 November, 2016 at 12:31 | Posted in Economics | Leave a comment

My idea is to examine the most well-known works of a selection of the most famous neoclassical economists in the period from 1945 to the present.


My survey of well-known works by four famous mathematical neoclassical economists (Samuelson, Arrow, Debreu, Prescott), who all won the Nobel Prize for economics, has not revealed any precise explanations or successful predictions. This supports my conjecture that the use of mathematics in mainstream (or neoclassical) economics has not produced any precise explanations or successful predictions. This, I would claim, is the main difference between neoclassical economics and physics, where both precise explanations and successful predictions have often been obtained by the use of mathematics.

Donald Gillies

Why IS-LM doesn’t capture Keynes’ approach to the economy

25 November, 2016 at 12:39 | Posted in Economics | 2 Comments

Suppose workers are unemployed. As a result, although willing to work even at lower wages, they are unable to buy consumption goods. As a result, firms are unable to sell those goods if they produced them. So they do not employ the workers who, as a consequence, do not have the wages to buy the consumption goods. The economy is caught in a vicious cycle of deficient demand. According to the IS/LM framework, this would lead to a fall in prices and wages, raise real balances and boost demand. But falling prices and wages might have the effect of both reducing effective demand and confidence, deepening rather than resolving the problem of unemployment.

These considerations raise serious doubts whether the IS/LM approach, despite being the standard representation, fully captures the Keynesian approach to the economy other than in name …

The appeal of the IS/LM lay not only in its formalisation of what is falsely taken to be Keynes’ specific contribution but also in compromising with a Walrasian approach to the economy.

Ben Fine and Ourania Dimakou have some further interesting references for those wanting to dwell upon the question of how much Keynes there really is in Hicks’s IS-LM model.

My own view is that IS-LM doesn’t adequately reflect the breadth and depth of Keynes’s insights into the workings of modern market economies, for the following six reasons:

1 Almost nothing in the post-General Theory writings of Keynes suggests that he considered Hicks’s IS-LM anywhere near a faithful rendering of his thought. In Keynes’s canonical statement of the essence of his theory — in the famous 1937 Quarterly Journal of Economics article — there is nothing to even suggest that Keynes would have thought the existence of a Keynes-Hicks IS-LM theory anything but pure nonsense. John Hicks, the man who invented IS-LM in his 1937 Econometrica review of Keynes’s General Theory — “Mr. Keynes and the ‘Classics’. A Suggested Interpretation” — returned to it in a 1980 article in the Journal of Post Keynesian Economics — “IS-LM: an explanation”. Self-critically he wrote that “the only way in which IS-LM analysis usefully survives — as anything more than a classroom gadget, to be superseded, later on, by something better — is in application to a particular kind of causal analysis, where the use of equilibrium methods, even a drastic use of equilibrium methods, is not inappropriate.” What Hicks acknowledged in 1980 is basically that his original IS-LM model ignored significant parts of Keynes’s theory. IS-LM is inherently a temporary general equilibrium model. However, much of the discussion we have in macroeconomics is about timing and the speed of relative adjustments of quantities, commodity prices and wages — on which IS-LM doesn’t have much to say.

2 IS-LM forces the analysis, to a large extent, into a static comparative-equilibrium setting that doesn’t in any substantial way reflect the processual nature of what takes place in historical time. To me, Keynes’s analysis is in fact inherently dynamic — at least in the sense that it was based on real historical time and not the logical-ergodic-non-entropic time concept used in most neoclassical model building. And as Niels Bohr used to say — thinking is not the same as just being logical …

3 IS-LM reduces the interaction between real and nominal entities to a rather constrained interest-rate mechanism, which is far too simplistic for analyzing complex, financialised modern market economies.

4 IS-LM gives no place for real money, but rather trivializes the role that money and finance play in modern market economies. As Hicks, commenting on his IS-LM construct, had it in 1980 — “one did not have to bother about the market for loanable funds.” From the perspective of modern monetary theory, it’s obvious that IS-LM to a large extent ignores the fact that money in modern market economies is created in the process of financing — and not, as IS-LM depicts it, something that central banks determine.

5 IS-LM is typically set in a current-values numéraire framework that definitely downgrades the importance of expectations and uncertainty — and a fortiori gives too large a role to interest rates as ruling the roost when it comes to investment and liquidity preferences. In this regard it is actually as bad as all the modern microfounded Neo-Walrasian-New-Keynesian models, where genuine Keynesian uncertainty and expectations aren’t really modelled. Especially the two-dimensionality of Keynesian uncertainty — both a question of probability and of “confidence” — has been impossible to incorporate into this framework, which basically presupposes people following the dictates of expected utility theory (a high probability may mean nothing if the agent has low “confidence” in it). Reducing uncertainty to risk — implicit in most analyses building on IS-LM models — is nothing but hand-waving. According to Keynes we live in a world permeated by unmeasurable uncertainty — not quantifiable stochastic risk — which often forces us to make decisions based on anything but “rational expectations.” Keynes rather thinks that we base our expectations on the “confidence” or “weight” we put on different events and alternatives. To Keynes, expectations are a question of weighing probabilities by “degrees of belief,” beliefs that often have preciously little to do with the kind of stochastic probabilistic calculations made by the rational agents as modelled by “modern” social science. And often we “simply do not know.”

6 IS-LM not only ignores genuine uncertainty, but also the essentially complex and cyclical character of economies and investment activities, speculation, endogenous money, labour market conditions, and the importance of income distribution. And as Axel Leijonhufvud so eloquently notes on IS-LM economics — “one doesn’t find many inklings of the adaptive dynamics behind the explicit statics.” Most of the insights on dynamic coordination problems that made Keynes write the General Theory are lost in the translation into the IS-LM framework.

Given this, it’s difficult not to agree with Fine and Dimakou. The IS/LM approach doesn’t capture Keynes’s approach to the economy other than in name.

Endogenous growth theory — a crash course

24 November, 2016 at 17:11 | Posted in Economics | 2 Comments

 

What is ergodicity?

23 November, 2016 at 10:27 | Posted in Economics | 2 Comments

Why are election polls often inaccurate? Why is racism wrong? Why are your assumptions often mistaken? The answers to all these questions and to many others have a lot to do with the non-ergodicity of human ensembles. Many scientists agree that ergodicity is one of the most important concepts in statistics. So, what is it?

Suppose you are concerned with determining which are the most visited parks in a city. One idea is to take a momentary snapshot: to see how many people are in park A at this moment, how many are in park B and so on. Another idea is to look at one individual (or a few of them) and to follow them for a certain period of time, e.g. a year. Then you observe how often the individual goes to park A, how often to park B and so on.

Thus, you obtain two different results: one statistical analysis over the entire ensemble of people at a certain moment in time, and one statistical analysis for one person over a certain period of time. The first one may not be representative of a longer period of time, while the second one may not be representative of all the people.

The idea is that an ensemble is ergodic if the two types of statistics give the same result. Many ensembles, like human populations, are not ergodic.

The importance of ergodicity becomes manifest when you think about how we all infer various things, how we draw some conclusion about something while having information about something else. For example, one goes once to a restaurant and likes the fish and next time he goes to the same restaurant and orders chicken, confident that the chicken will be good. Why is he confident? Or one observes that a newspaper has printed some inaccurate information at one point in time and infers that the newspaper is going to publish inaccurate information in the future. Why are these inferences ok, while others such as “more crimes are committed by black persons than by white persons, therefore each individual black person is not to be trusted” are not ok?

The answer is that the ensemble of articles published in a newspaper is more or less ergodic, while the ensemble of black people is not at all ergodic. If one counts how many mistakes appear in an entire newspaper in one issue, and then how many mistakes one news editor makes over time, one finds the two results almost identical (not exactly, but nonetheless approximately equal). However, if one takes the number of crimes committed by black people on a certain day divided by the total number of black people, and then follows one randomly picked black individual over his life, one would not find that, e.g. each month, this individual commits crimes at the same rate as the crime rate determined over the entire ensemble. Thus, one cannot use ensemble statistics to properly infer what a certain individual is or is not likely to do.

Or take an even clearer example: In an election each party gets some percentage of votes, party A gets a%, party B gets b% and so on. However, this does not mean that over the course of their lives each individual votes with party A in a% of elections, with B in b% of elections and so on …

A similar problem is faced by scientists in general when they are trying to infer some general statement from various particular experiments. When is a generalization correct and when isn’t it? The answer concerns ergodicity. If the generalization is made over an ergodic ensemble, then it has a good chance of being correct.

Vlad Tarko

Paul Samuelson once famously claimed that the “ergodic hypothesis” is essential for advancing economics from the realm of history to the realm of science. But is it really tenable to assume — as Samuelson and most other mainstream economists do — that ergodicity is essential to economics?

In this video Ole Peters shows why ergodicity is such an important concept for understanding the deep fundamental flaws of mainstream economics:

Sometimes ergodicity is mistaken for stationarity. But although all ergodic processes are stationary, the two concepts are not equivalent.

Let’s say we have a stationary process. That does not guarantee that it is also ergodic. The long-run time average of a single output function of the stationary process may not converge to the expectation of the corresponding variables — and so the long-run time average may not equal the probabilistic (expectational) average.

Say we have two coins, where coin A has a probability of 1/2 of coming up heads and coin B has a probability of 1/4 of coming up heads. We pick either of these coins with a probability of 1/2 and then toss the chosen coin over and over again. Now let H1, H2, … be either one or zero as the coin comes up heads or tails. This process is obviously stationary, but the time averages — (H1 + … + Hn)/n — converge to 1/2 if coin A is chosen and to 1/4 if coin B is chosen. Each of these time averages occurs with probability 1/2, so their expectational average is 1/2 × 1/2 + 1/2 × 1/4 = 3/8, which obviously is not equal to 1/2 or 1/4. The time averages depend on which coin you happen to choose, while the probabilistic (expectational) average is calculated for the whole “system” consisting of both coin A and coin B.
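
A quick simulation — a sketch only, not a proof — makes the point tangible: every realization’s time average settles on the chosen coin’s bias, and none of them approaches the ensemble average of 3/8:

```python
# Simulate the two-coin process: stationary but not ergodic. Each realization's
# long-run time average locks on to the chosen coin's bias (1/2 or 1/4) and
# never approaches the ensemble (expectational) average of 3/8.
import numpy as np

rng = np.random.default_rng(42)
n_tosses = 100_000

for i in range(6):
    p = rng.choice([0.5, 0.25])          # pick coin A or coin B once ...
    heads = rng.random(n_tosses) < p     # ... then toss that same coin forever
    print(f"realization {i}: coin with p={p}, time average = {heads.mean():.3f}")

print("ensemble (expectational) average = 0.5*0.5 + 0.5*0.25 =", 0.5 * 0.5 + 0.5 * 0.25)
```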

In an ergodic system time is irrelevant and has no direction. Nothing changes in any significant way; at most you will see some short-lived fluctuations. An ergodic system is indifferent to its initial conditions: if you re-start it, after a little while it always falls into the same equilibrium behavior.

For example, say I gave 1,000 people one die each, had them roll their die once, added all the points rolled, and divided by 1,000. That would be a finite-sample average, approaching the ensemble average as I include more and more people.

Now say I rolled a die 1,000 times in a row, added all the points rolled and divided by 1,000. That would be a finite-time average, approaching the time average as I keep rolling that die.

One implication of ergodicity is that ensemble averages will be the same as time averages. In the first case, it is the size of the sample that eventually removes the randomness from the system. In the second case, it is the time that I’m devoting to rolling that removes randomness. But both methods give the same answer, within errors. In this sense, rolling dice is an ergodic system.

I say “in this sense” because if we bet on the results of rolling a die, wealth does not follow an ergodic process under typical betting rules. If I go bankrupt, I’ll stay bankrupt. So the time average of my wealth will approach zero as time passes, even though the ensemble average of my wealth may increase.

A precondition for ergodicity is stationarity, so there can be no growth in an ergodic system. Ergodic systems are zero-sum games: things slosh around from here to there and back, but nothing is ever added, invented, created or lost. No branching occurs in an ergodic system, no decision has any consequences because sooner or later we’ll end up in the same situation again and can reconsider. The key is that most systems of interest to us, including finance, are non-ergodic.

Ole Peters
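
Peters’ contrast between rolling dice and betting on them is easy to reproduce in a few lines. The sketch below is my own; the dice part follows his example, while the particular betting rule — wealth multiplied by 1.5 on heads and by 0.6 on tails — is a standard illustration in this literature rather than his specific numbers:

```python
# Die rolls are ergodic (ensemble and time averages agree); wealth under a
# multiplicative bet is not (its expectation grows while the typical
# trajectory decays towards zero).
import numpy as np

rng = np.random.default_rng(1)

# 1) 1,000 people rolling once vs one person rolling 1,000 times.
print("ensemble average:", rng.integers(1, 7, size=1000).mean())
print("time average:    ", rng.integers(1, 7, size=1000).mean())

# 2) Multiplicative coin-flip bet: wealth x1.5 on heads, x0.6 on tails.
#    Per-round expectation is 0.5*1.5 + 0.5*0.6 = 1.05 (ensemble average grows),
#    but the time-average growth factor is sqrt(1.5*0.6) ≈ 0.95 per round,
#    so almost every individual trajectory shrinks.
n_rounds, n_players = 1_000, 1_000
wealth = rng.choice([1.5, 0.6], size=(n_players, n_rounds)).prod(axis=1)
print("share of players ending below their starting wealth:", (wealth < 1).mean())
print("ensemble-average final wealth (exact):", 1.05 ** n_rounds)
```

The ensemble average of wealth grows by 5% per round, yet virtually every individual player ends up poorer — which is precisely the sense in which wealth under such betting rules is non-ergodic.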

Mainstream economics — sacrificing realism at the altar of mathematical purity

22 November, 2016 at 18:05 | Posted in Economics | 6 Comments

Economists are too detached from the real world and have failed to learn from the financial crisis, insisting on using mathematical models which do not reflect reality, according to the Bank of England’s chief economist Andy Haldane.

The public has lost faith in economists since the credit crunch, he said, but the profession has failed to thoroughly re-examine its failings to come up with a new model of operating.

“The various reports into the economic costs of the UK leaving the EU most likely fell at the same hurdle. They are written, in the main, by the elite for the elite,” said Mr Haldane, writing the foreword to a new book, called ‘The Econocracy: the perils of leaving economics to the experts’ …

The chief economist said that the Great Depression of the 1930s resulted in a major overhaul of economic thinking, led by John Maynard Keynes, who emerged “as the most influential economist of the twentieth century”.

But the recent financial crisis and slow recovery have not yet prompted this great re-thinking …

For now, economists need to focus on reviewing their models, accepting a diversity of thought rather than one solid orthodoxy, and on communicating more clearly.

Economists should focus on other disciplines as well as maths, he said.

“Mainstream economic models have sacrificed too much realism at the altar of mathematical purity. Their various simplifying assumptions have served aesthetic rather than practical ends,” Mr Haldane wrote.

“As a profession, economics has become too much of a methodological monoculture. And that lack of intellectual diversity cost the profession dear when the single crop failed spectacularly during the crisis.”

Tim Wallace/The Telegraph

