Krugman and Stiglitz — nothing but neoliberal alibis

31 January, 2017 at 20:40 | Posted in Economics | Comments Off on Krugman and Stiglitz — nothing but neoliberal alibis

Mirowski’s concern to disabuse his readers of the notion that the wing of neoliberal doctrine disseminated by neoclassical economists could ever be reformed produces some of the best sections of the book. His portrait of an economics profession in haggard disarray in the aftermath of the crisis is both comic and tragic …

Little in the discipline has changed in the wake of the crisis. Mirowski thinks that this is at least in part a result of the impotence of the loyal opposition — those economists such as Joseph Stiglitz or Paul Krugman who attempt to oppose the more viciously neoliberal articulations of economic theory from within the camp of neoclassical economics. Though Krugman and Stiglitz have attacked concepts like the efficient markets hypothesis … Mirowski argues that their attempt to do so while retaining the basic theoretical architecture of neoclassicism has rendered them doubly ineffective.

First, their adoption of the battery of assumptions that accompany most neoclassical theorizing — about representative agents, treating information like any other commodity, and so on — makes it nearly impossible to conclusively rebut arguments like the efficient markets hypothesis. Instead, they end up tinkering with it, introducing a nuance here or a qualification there. This tinkering causes their arguments to be more or less ignored in neoclassical pedagogy, as economists more favorably inclined toward hard neoliberal arguments can easily ignore such revisions and hold that the basic thrust of the theory is still correct. Stiglitz’s and Krugman’s arguments, while receiving circulation through the popular press, utterly fail to transform the discipline.

Paul Heideman


Neoliberalism — a threat to democracy

31 January, 2017 at 19:02 | Posted in Politics & Society | 3 Comments

 

Perhaps the most dangerous impact of neoliberalism is not the economic crises it has caused, but the political crisis. As the domain of the state is reduced, our ability to change the course of our lives through voting also contracts. Instead, neoliberal theory asserts, people can exercise choice through spending. But some have more to spend than others: in the great consumer or shareholder democracy, votes are not equally distributed. The result is a disempowerment of the poor and middle. As parties of the right and former left adopt similar neoliberal policies, disempowerment turns to disenfranchisement. Large numbers of people have been shed from politics.
Chris Hedges remarks that “fascist movements build their base not from the politically active but the politically inactive, the ‘losers’ who feel, often correctly, they have no voice or role to play in the political establishment”. When political debate no longer speaks to us, people become responsive instead to slogans, symbols and sensation. To the admirers of Trump, for example, facts and arguments appear irrelevant.

George Monbiot

On the non-applicability of statistical theory

30 January, 2017 at 19:57 | Posted in Statistics & Econometrics | 1 Comment

Eminent statistician David Salsburg is rightfully very critical of the way social scientists — including economists and econometricians — have come, uncritically and without argument, to simply assume that they can apply probability distributions from statistical theory to their own areas of research:

We assume there is an abstract space of elementary things called ‘events’ … If a measure on the abstract space of events fulfills certain axioms, then it is a probability. To use probability in real life, we have to identify this space of events and do so with sufficient specificity to allow us to actually calculate probability measurements on that space … Unless we can identify [this] abstract space, the probability statements that emerge from statistical analyses will have many different and sometimes contrary meanings …

Kolmogorov established the mathematical meaning of probability: Probability is a measure of sets in an abstract space of events. All the mathematical properties of probability can be derived from this definition. When we wish to apply probability to real life, we need to identify that abstract space of events for the particular problem at hand … It is not well established when statistical methods are used for observational studies … If we cannot identify the space of events that generate the probabilities being calculated, then one model is no more valid than another … As statistical models are used more and more for observational studies to assist in social decisions by government and advocacy groups, this fundamental failure to be able to derive probabilities without ambiguity will cast doubt on the usefulness of these methods.

Wise words well worth pondering on.

As long as economists and statisticians cannot really identify their statistical theories with real-world phenomena there is no real warrant for taking their statistical inferences seriously.

Just as there is no such thing as a ‘free lunch,’ there is no such thing as a ‘free probability.’ To be able to talk about probabilities at all, you have to specify a model. If there is no chance set-up or model that generates the probabilistic outcomes or events — in statistics, any process you observe or measure is called an experiment (rolling a die), and the results obtained are called the outcomes or events of the experiment (the number of points rolled with the die, e.g. 3 or 5) — then, strictly seen, there is no event at all.

Probability is — as strongly argued by Keynes — a relational element. It must always come with a specification of the model from which it is calculated. And to be of any empirical scientific value, it then has to be shown to coincide with (or at least converge to) real data-generating processes or structures — something seldom or never done!

And this is the basic problem with economic data. If you have a fair roulette wheel, you can arguably specify probabilities and probability density distributions. But how do you conceive of the analogous ‘nomological machines’ for prices, gross domestic product, income distribution, etc.? Only by a leap of faith. And that does not suffice. You have to come up with some really good arguments if you want to persuade people into believing in the existence of socio-economic structures that generate data with characteristics conceivable as stochastic events portrayed by probabilistic density distributions!
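To make the identification point concrete, here is a minimal simulation sketch (in Python, assuming numpy is available; illustrative only). For a die, the chance set-up is fully specified in advance, so empirical frequencies converge to well-defined probabilities; for GDP or income distributions there is no analogous generator we can write down.

```python
import numpy as np

rng = np.random.default_rng(42)

# A die is a fully specified chance set-up: the space of events
# {1, ..., 6} and the generating mechanism are known in advance.
rolls = rng.integers(1, 7, size=100_000)
freqs = np.bincount(rolls, minlength=7)[1:] / rolls.size
print(freqs)  # each relative frequency is close to 1/6 ~ 0.1667

# For prices, GDP or income distributions there is no analogous
# 'rng' we could write down; that is exactly the identification
# problem Salsburg points to.
```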

The origins of MMT

30 January, 2017 at 10:53 | Posted in Economics | 2 Comments

Many mainstream economists seem to think the idea behind Modern Monetary Theory is new and originates from economic cranks.

New? Cranks? How about reading one of the great founders of neoclassical economics – Knut Wicksell. This is what Wicksell wrote in 1898 on ‘pure credit systems’ in Interest and Prices (Geldzins und Güterpreise), 1936 (1898), p. 68f:

It is possible to go even further. There is no real need for any money at all if a payment between two customers can be accomplished by simply transferring the appropriate sum of money in the books of the bank …

A pure credit system has not yet … been completely developed in this form. But here and there it is to be found in the somewhat different guise of the banknote system …

We intend therefore, as a basis for the following discussion, to imagine a state of affairs in which money does not actually circulate at all, neither in the form of coin … nor in the form of notes, but where all domestic payments are effected by means of the Giro system and bookkeeping transfers. A thorough analysis of this purely imaginary case seems to me to be worth while, for it provides a precise antithesis to the equally imaginary case of a pure cash system, in which credit plays no part whatever [the exact equivalent of the often used neoclassical model assumption of “cash in advance” – LPS] …

For the sake of simplicity, let us then assume that the whole monetary system of a country is in the hands of a single credit institution, provided with an adequate number of branches, at which each independent economic individual keeps an account on which he can draw cheques.

What Modern Monetary Theory (MMT) basically does is exactly what Wicksell tried to do more than a hundred years ago. The difference is that today the ‘pure credit economy’ is a reality and not just a theoretical curiosity — MMT describes a fiat currency system that almost every country in the world is operating under.

And here’s another well-known economist with early ideas of the MMT variety:

[Bendixen says the] old ‘metallist’ view of money is superstitious, and Dr. Bendixen trounces it with the vigour of a convert. Money is the creation of the State; it is not true to say that gold is international currency, for international contracts are never made in terms of gold, but always in terms of some national monetary unit; there is no essential or important distinction between notes and metallic money; money is the measure of value, but to regard it as having value itself is a relic of the view that the value of money is regulated by the value of the substance of which it is made, and is like confusing a theatre ticket with the performance. With the exception of the last, the only true interpretation of which is purely dialectical, these ideas are undoubtedly of the right complexion. It is probably true that the old ‘metallist’ view and the theories of regulation of note issue based on it do greatly stand in the way of currency reform, whether we are thinking of economy and elasticity or of a change in the standard; and a gospel which can be made the basis of a crusade on these lines is likely to be very useful to the world, whatever its crudities or terminology.

J. M. Keynes, “Theorie des Geldes und der Umlaufsmittel. by Ludwig von Mises; Geld und Kapital. by Friedrich Bendixen” (review), Economic Journal, 1914

In modern times legal currencies are totally based on fiat. Currencies no longer have intrinsic value (as gold and silver had). What gives them value is basically the legal status given to them by government and the simple fact that you have to pay your taxes with them. That also enables governments to run a kind of monopoly business that can never run out of money. Hence spending becomes the prime mover, and taxing and borrowing are degraded to following acts. If we have a depression, the solution, then, is not austerity. It is spending. Budget deficits are not the major problem, since fiat money means that governments can always create more money.

Financing quantitative easing, fiscal expansion, and other similar operations is made possible by simply crediting a bank account and thereby — by a single keystroke — actually creating money. One of the most important reasons why so many countries are still stuck in depression-like economic quagmires is that people in general — including most mainstream economists — simply don’t understand the workings of modern monetary systems. The result is totally and utterly wrong-headed austerity policies, emanating from a groundless fear of creating inflation via central banks printing money, in a situation where we should rather fear deflation and inadequate effective demand.
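As a toy illustration only, and emphatically not a description of any actual settlement system, the ‘keystroke’ point can be put in a few lines of code: spending credits balances into existence, taxing debits them back out, and nothing in the mechanics requires a pre-existing ‘pot’ of money.

```python
# A toy single-ledger sketch of fiat 'keystroke' money.
# Purely illustrative: real payment systems are vastly more complex.
balances = {"treasury": 0, "household": 0}

def government_spend(amount: int) -> None:
    # The currency issuer simply marks up the recipient's account;
    # no prior stock of money is needed, so it can never 'run out'.
    balances["household"] += amount
    balances["treasury"] -= amount  # a bookkeeping entry, not a constraint

def tax(amount: int) -> None:
    balances["household"] -= amount
    balances["treasury"] += amount

government_spend(100)  # spending comes first ...
tax(40)                # ... taxing follows, draining part of it back
print(balances)        # {'treasury': -60, 'household': 60}
```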

Regression to the mean

30 January, 2017 at 00:28 | Posted in Statistics & Econometrics | Comments Off on Regression to the mean

Regression to the mean is nothing but the universal consequence of imperfect correlation: whenever the correlation between two scores is less than perfect, extreme values on the first score will, on average, be paired with less extreme values on the second.
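A minimal simulation sketch (Python with numpy; the correlation value is an arbitrary choice for illustration) makes the phenomenon visible:

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho = 100_000, 0.6  # rho: assumed (imperfect) test-retest correlation

# Two standardized scores with correlation rho, e.g. a test and a retest.
x = rng.standard_normal(n)
y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)

top = x > np.quantile(x, 0.9)  # the top decile on the first score
print(x[top].mean())  # ~1.75: far above the mean on the first score
print(y[top].mean())  # ~1.05: closer to the mean on the second score
```

The second group mean is pulled toward zero by roughly the factor rho, which is all regression to the mean amounts to.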

The hazards of willfully ignoring uncertainty

29 January, 2017 at 13:16 | Posted in Economics | Comments Off on The hazards of willfully ignoring uncertainty

We forget – or willfully ignore – that our models are simplifications of the world …

One of the pervasive risks that we face in the information age … is that even if the amount of knowledge in the world is increasing, the gap between what we know and what we think we know may be widening. This syndrome is often associated with very precise-seeming predictions that are not at all accurate … This is like claiming you are a good shot because your bullets always end up in about the same place — even though they are nowhere near the target …

Financial crises – and most other failures of prediction – stem from this false sense of confidence. Precise forecasts masquerade as accurate ones, and some of us get fooled and double down on our bets.

One of the best examples of this ‘masquerading’ is the following statement by Robert Lucas (Wall Street Journal, September 19, 2007):

I am skeptical about the argument that the subprime mortgage problem will contaminate the whole mortgage market, that housing construction will come to a halt, and that the economy will slip into a recession. Every step in this chain is questionable and none has been quantified. If we have learned anything from the past 20 years it is that there is a lot of stability built into the real economy.

 

Milton Friedman’s pet theory finally shown to be wrong

28 January, 2017 at 15:36 | Posted in Economics | 7 Comments

Milton Friedman’s Permanent Income Hypothesis (PIH) says that people’s consumption isn’t affected by short-term fluctuations in incomes, since people only spend more money when they think that their lifetime incomes change. Believing Friedman is right, mainstream economists have for decades argued that Keynesian fiscal policies therefore are ineffectual.
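In stylized form (a toy sketch under textbook assumptions, not the model of the paper quoted below), the contrast at stake looks like this: a PIH consumer facing a temporary income drop barely changes spending, while a hand-to-mouth consumer tracks current income one for one.

```python
# Stylized monthly income path with a temporary unemployment spell.
income = [100] * 6 + [50] * 6 + [100] * 6

# PIH: consume (roughly) average lifetime income, so a short-lived
# income drop barely moves spending.
pih_c = [sum(income) / len(income)] * len(income)

# Hand-to-mouth: spending simply tracks current income.
h2m_c = income[:]

print(pih_c[7], h2m_c[7])  # during the spell: ~83.3 vs 50
```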

As shown over and over again for the last three decades, empirical facts totally disconfirm Friedman’s hypothesis. The final nail in the coffin is new research from Harvard:

Unemployment is a particularly good setting for testing alternative models of consumption because it causes such a large change in family income. A literature starting with Akerlof and Yellen (1985), Mankiw (1985) and Cochrane (1989) has argued that because ignoring small price changes has a second-order impact on utility, a rule of thumb such as setting spending changes equal to income changes may be “near-rational.” More recently, many researchers have documented evidence of an immediate increase in spending in response to tax rebates and similar one-time payments …

We compare the path of spending during unemployment in the data to three benchmark models and find that the buffer stock model fits better than a permanent income model or a hand-to-mouth model …

To summarize, we find that families do relatively little self-insurance when unemployed as spending is quite sensitive to current monthly income. We built a new dataset to study the spending of unemployed families using anonymized bank account records from JPMCI. Using rich category-level expenditure data, we find that work-related expenses explain only a modest portion of the spending drop during unemployment. The overall path of spending for a seven-month unemployment spell is consistent with a buffer stock model where agents hold assets equal to less than one month of income at the onset of unemployment. Because unemployment is such a large shock to income, our finding that spending is highly sensitive to income overcomes the near-rationality critique applied to prior work. Finally, we document a puzzling drop in spending of 11% in the month UI benefits exhaust, suggesting that families do not prepare for benefit exhaustion.

Peter Ganong & Pascal Noel

So — now we know that consumer behaviour is influenced by short-term fluctuations in incomes and that this is true even if consumers know that their situation may well change in the future.

Since almost all modern mainstream macroeconomic theories are based on PIH — standardly used in formulating the consumption Euler equations that make up a vital part of ‘modern’ New Classical and New Keynesian macro models — these devastating findings are extremely problematic.

Many modern macroeconomics textbooks explicitly adopt a ‘New Keynesian’ framework, adding price rigidities and a financial system to the usual neoclassical macroeconomic set-up. Elaborating on these macro models, one soon arrives at specifying the demand side with the help of the Friedmanian Permanent Income Hypothesis and its Euler equations.

But if people — not the representative agent — at least sometimes can’t help being off their labour supply curve — as in the real world — then how are the hordes of Euler equations that you find ad nauseam in these ‘New Keynesian’ macro models going to help us?

My doubts about macroeconomic modelers’ obsession with Euler equations stem basically from the fact that, as with so many other assumptions in ‘modern’ macroeconomics, Euler equations — and the PIH that they build on — don’t fit reality.

In the standard neoclassical consumption model, people are basically portrayed as treating time as a dichotomous phenomenon — today and the future — when contemplating making decisions and acting. How much should one consume today and how much in the future? The Euler equation used implies that the representative agent (consumer) is indifferent between consuming one more unit today and consuming it tomorrow. Further, in the Euler equation there is only one interest rate, equated to the money market rate as set by the central bank. The crux, however, is that — given almost any specification of the utility function — the two rates are actually often found to be strongly negatively correlated in the empirical literature!
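For reference, the consumption Euler equation at issue, in its standard CRRA form, reads

$$
c_t^{-\gamma} = \beta\,(1 + r_{t+1})\,\mathbb{E}_t\!\left[c_{t+1}^{-\gamma}\right],
$$

where $\beta$ is the subjective discount factor, $\gamma$ the coefficient of relative risk aversion, and $r_{t+1}$ the single interest rate of the model: at the optimum, the agent cannot gain by shifting one unit of consumption between today and tomorrow.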

From a methodological perspective, yours truly has to conclude that these kinds of microfounded macroeconomic models are a rather unimpressive attempt at legitimizing the use of fictitious idealizations — such as PIH and Euler equations — for reasons that have more to do with model tractability than with a genuine interest in understanding and explaining features of real economies. Mainstream economists usually do not want to get hung up on the assumptions that their models build on. But it is still an undeniable fact that theoretical models built on piles of known-to-be-false assumptions — such as PIH and the Euler equations that build on it — in no way even get close to being scientific explanations. On the contrary. They are untestable and hence totally worthless from the point of view of scientific relevance.

Ganong’s and Noel’s research finally shows that mainstream macroeconomics, building on the standard neoclassical consumption model with its Permanent Income Hypothesis and Euler equations, has to be replaced with something else. Preferably with something that is both real and relevant, and not only chosen for reasons of mathematical tractability or for more or less openly market fundamentalist ideological reasons.

‘Alternative facts’ and voter fraud

27 January, 2017 at 18:53 | Posted in Politics & Society | Comments Off on ‘Alternative facts’ and voter fraud


Oh, horrible, oh, horrible, most horrible!
What a tragedy — and what shame all those Americans with more than two brain cells must feel today. I do suffer with them through this nightmare.

Smoke on the water

27 January, 2017 at 16:51 | Posted in Varia | Comments Off on Smoke on the water

 

The unsurpassed rock MASTERPIECE.

BTO

27 January, 2017 at 16:28 | Posted in Varia | 1 Comment

 

Successive approximations

27 January, 2017 at 16:20 | Posted in Economics | 1 Comment

In The World in the Model Mary Morgan characterizes the modelling tradition of economics as one concerned with “thin men acting in small worlds” and writes:

Strangely perhaps, the most obvious element in the inference gap for models … lies in the validity of any inference between two such different media – forward from the real world to the artificial world of the mathematical model and back again from the model experiment to the real material of the economic world. The model is at most a parallel world. The parallel quality does not seem to bother economists. But materials do matter: it matters that economic models are only representations of things in the economy, not the things themselves.

Now, a salient feature of modern mainstream economics is the idea of science advancing through the use of ‘successive approximations.’ Is this really a feasible methodology? I think not.

Most models in science are representations of something else. Models ‘stand for’ or ‘depict’ specific parts of a ‘target system’ (usually the real world). All theories and models have to use sign vehicles to convey some kind of content that may be used for saying something of the target system. But purpose-built assumptions made solely to secure a way of reaching deductively validated results in mathematical models – like ‘rational expectations’ or ‘representative actors’ — are of little value if they cannot be validated outside of the model.

All empirical sciences use simplifying or unrealistic assumptions in their modeling activities. That is not the issue – as long as the assumptions made are not unrealistic in the wrong way or for the wrong reasons.

The obvious ontological shortcoming of a basically epistemic — rather than ontological — approach such as ‘successive approximations’ is that ‘similarity’ or ‘resemblance’ tout court do not guarantee that the correspondence between model and target is interesting, relevant, revealing or somehow adequate in terms of mechanisms, causal powers, capacities or tendencies. No matter how many convoluted refinements of concepts are made in the model, if the ‘successive approximations’ do not result in models similar to reality in the appropriate respects (such as structure, isomorphism, etc.), the surrogate system becomes a substitute system that does not bridge to the world but rather misses its target.

So, I have to conclude that constructing ‘minimal’ economic models — or using microfounded macroeconomic models as ‘stylized facts’ or ‘stylized pictures’ somehow ‘successively approximating’ macroeconomic reality — is a rather unimpressive attempt at legitimizing the use of fictitious idealizations for reasons that have more to do with model tractability than with a genuine interest in understanding and explaining features of real economies.

Many of the model assumptions standardly made in mainstream economics are restrictive rather than harmless and cannot in any sensible sense be considered approximations at all. Or as May Brodbeck had it:

Model ships appear frequently in bottles; model boys in heaven only.

Why Minsky matters

26 January, 2017 at 20:11 | Posted in Economics | 5 Comments

In an often cynical world, standard financial and macroeconomic quantitative models give people the benefit of the doubt. Fundamental economic theory assumes the best of us, supposing that human beings are perfectly rational, know all the facts of a given situation, understand the risks, and optimize our behavior and portfolios accordingly. Reality, of course, is quite different. While a significant portion of individual and market behavior can be modeled reasonably well, the human emotions that drive cycles of fear and greed are not predictable and can often defy historical precedent. As a result, quantitative models sometimes fail to anticipate major macroeconomic turning points. The ongoing debt crisis in Europe is the most recent example of an extreme event shattering historical norms.

Once an extreme event occurs, standard models offer limited insight as to how the ensuing crisis could play out and how it should be managed, which is why policy responses can seem disjointed. The latest policy responses to the European crisis have been no exception. To understand and respond to a crisis like the one in Europe, perhaps we need to consider some new models that include the “human factor.” Economic historian Charles Kindleberger can offer some insight. In his book Manias, Panics, and Crashes, Kindleberger explores the anatomy of a typical financial crisis and provides a framework that considers the impact of the powerful human dynamics of fear and greed. Kindleberger’s descriptive process of the boom and bust liquidity cycle can help shed light on the current European sovereign debt saga, and perhaps illuminate whether we have in fact turned the corner on this financial crisis.

Kindleberger analyzed hundreds of financial crises dating back centuries and found them to share a common sequence of events, one that followed monetary theorist Hyman Minsky’s model of the instability of a credit system. Fundamentally, the more stable and prosperous an economic structure appears, the more leverage and speculative financing will build within the system, eventually making it highly vulnerable to a surprising, extreme collapse. Kindleberger provided the qualitative (as opposed to quantitative!) description of the Minsky Model, shown below, which is a useful snapshot of the liquidity cycle. It can be applied to Europe and any potential boom/bust candidate, including Chinese real estate, commodity prices, or investors’ recent love affair with emerging markets. Kindleberger famously dubbed this sequence a “hardy perennial,” probably because the galvanizing human conditions of fear and greed are more often than not prone to overshoot fundamental values compared to the behavior of a rational individual, which exists only in macroeconomic theory.

[Figure: Kindleberger’s qualitative description of the Minsky model of the boom and bust liquidity cycle]

Loomis Sayles

For more on Minsky, listen to BBC 4 where Duncan Weldon explains in what way Hyman Minsky’s thoughts on banking and finance offer a radical challenge to mainstream economic theory.

As a young research fellow in the U.S. thirty years ago, yours truly had the great pleasure and privilege of having Hyman Minsky as a teacher.

He was a great inspiration at the time.

He still is.

Yours truly lectures on neoliberalism

26 January, 2017 at 11:12 | Posted in Politics & Society | Comments Off on Yours truly lectures on neoliberalism


Smålands Nation
Kastanjegatan 7, Lund
2 February, 18.00-20.00

Don’t let me be misunderstood

26 January, 2017 at 09:39 | Posted in Varia | Comments Off on Don’t let me be misunderstood

 

Donald Trump’s running war on reality

25 January, 2017 at 15:51 | Posted in Politics & Society | Comments Off on Donald Trump’s running war on reality

 

Trump and free trade

25 January, 2017 at 15:30 | Posted in Economics | 3 Comments

Dear President Trump,

Plenty of people will try to convince you that globalization and free trade could benefit everyone, if only the gains were more fairly shared …

This belief is shared by almost all politicians in both parties, and it’s an article of faith for the economics profession.

You are right to reject it …

It’s a fallacy based on a fantasy, and it has been ever since David Ricardo dreamed up the idea of “Comparative Advantage and the Gains from Trade” two centuries ago. The best way to prove that (apart from looking at the bitter experience of the millions of once-were-factory-workers who voted for you) is to apply real-world scepticism to the original argument in favour of free trade …

Ricardo’s model assumed that you could produce wine or cloth with only labour, but of course you can’t. You need machines as well, and machinery is specific to each industry. The essential machinery for making wine can’t be used to make anything else. If its use becomes unprofitable, it is either scrapped, sold at a large loss, or shipped overseas. Ditto a spinning jenny, or a steel mill: if making steel becomes unprofitable, the capital involved in its production is effectively destroyed …

Ricardo’s little shell and pea trick is therefore like most conventional economic theory: it’s neat, plausible, and wrong. It’s the product of armchair thinking by people who never put foot in the factories that their economic theories turned into rust buckets.

Steve Keen

As always with Keen — thought-provoking and interesting. But I think he misses the most powerful argument against the Ricardian paradigm — what counts today is not comparative advantage, but absolute advantage.

In 1817 David Ricardo presented — in his Principles — a theory that was meant to explain why countries trade and, based on the concept of opportunity cost, how the pattern of exports and imports is governed by countries exporting goods in which they have a comparative advantage and importing goods in which they have a comparative disadvantage.
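A minimal sketch of the opportunity-cost logic, using the illustrative labour-cost numbers from Ricardo’s Principles:

```python
# Labour needed (man-years per unit), from Ricardo's Principles (1817).
labour = {
    "England":  {"cloth": 100, "wine": 120},
    "Portugal": {"cloth": 90,  "wine": 80},  # absolute advantage in both
}

for country, cost in labour.items():
    # Opportunity cost of one unit of wine, measured in cloth forgone.
    oc_wine = cost["wine"] / cost["cloth"]
    print(f"{country}: 1 wine costs {oc_wine:.2f} cloth")

# England: 1 wine costs 1.20 cloth; Portugal: 0.89 cloth.
# Portugal's comparative advantage is in wine, England's in cloth,
# even though Portugal is absolutely more productive in both goods.
```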

Ricardo’s theory of comparative advantage, however, didn’t explain why comparative advantage was the way it was. In the beginning of the 20th century, two Swedish economists — Eli Heckscher and Bertil Ohlin — presented a theory/model/theorem according to which comparative advantages arise from differences in factor endowments between countries. Countries have a comparative advantage in producing goods that use the production factors that are most abundant in them, and would mostly export goods that use those abundant factors while importing goods that mostly use factors of production that are scarce.

The Heckscher-Ohlin theorem — as do the elaborations on it by e.g. Vanek, Stolper and Samuelson — builds on a series of restrictive and unrealistic assumptions. The most critically important — besides the standard market-clearing equilibrium assumptions — are:

(1) Countries use identical production technologies.

(2) Production takes place with a constant returns to scale technology.

(3) Within countries the factor substitutability is more or less infinite.

(4) Factor-prices are equalised (the Stolper-Samuelson extension of the theorem).

These assumptions are, as almost all empirical testing of the theorem has shown, totally unrealistic. That is, they are empirically false.

That being so, one could indeed wonder why on earth anyone should be interested in applying this theorem to real-world situations. Like so many other mainstream mathematical models taught to economics students today, this theorem has very little to do with the real world.

What has changed since Ricardo’s days is that the assumption of internationally immobile factors of production has become totally untenable in our globalised world. When our modern corporations maximize their profits they do it by moving capital and technologies to wherever it is cheapest to produce.

So we’re actually in a situation today where absolute — not comparative — advantage rules the roost when it comes to free trade.

And in that world, what is good for corporations is not necessarily good for nations.

Poverty — the Dumb and Dumber version

25 January, 2017 at 10:20 | Posted in Economics | Comments Off on Poverty — the Dumb and Dumber version

A few years ago, two economics professors, Quamrul Ashraf and Oded Galor, published a paper, “The ‘Out of Africa’ Hypothesis, Human Genetic Diversity, and Comparative Economic Development,” that drew inferences about poverty and genetics based on a statistical pattern …

When the paper by Ashraf and Galor came out, I criticized it from a statistical perspective, questioning what I considered its overreach in making counterfactual causal claims … I argued (and continue to believe) that the problems in that paper reflect a more general issue in social science: There is an incentive to make strong and dramatic claims to get published in a top journal …

I continue to think that Ashraf and Galor’s paper is essentially an analysis of three data points (sub-Saharan Africa, remote Andean countries and Eurasia). It offered little more than the already-known stylized fact that sub-Saharan African countries are very poor, Amerindian countries are somewhat poor, and countries with Eurasians and their descendants tend to have middle or high incomes.

Andrew Gelman

How to do econometrics properly

25 January, 2017 at 10:16 | Posted in Statistics & Econometrics | Comments Off on How to do econometrics properly


  1. Always, but always, plot your data.
  2. Remember that data quality is at least as important as data quantity.
  3. Always ask yourself, “Do these results make economic/common sense?”
  4. Check whether your “statistically significant” results are also “numerically/economically significant”.
  5. Be sure that you know exactly what assumptions are used/needed to obtain the results relating to the properties of any estimator or test that you use.
  6. Just because someone else has used a particular approach to analyse a problem that looks like yours, that doesn’t mean they were right!
  7. “Test, test, test”! (David Hendry). But don’t forget that “pre-testing” raises some important issues of its own.
  8. Don’t assume that the computer code that someone gives to you is relevant for your application, or that it even produces correct results.
  9. Keep in mind that published results will represent only a fraction of the results that the author obtained but did not publish.
  10. Don’t forget that “peer-reviewed” does NOT mean “correct results”, or even “best practices were followed”.

Dave Giles
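As a minimal, hypothetical illustration of points 1 and 4: in the sketch below the summary statistics look impressive, while a single scatter plot gives the game away, since one gross outlier drives the whole ‘significant’ relationship.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
x = rng.normal(size=50)
y = rng.normal(size=50)  # y is unrelated to x ...
x[0], y[0] = 10.0, 10.0  # ... except for one gross outlier

slope, intercept = np.polyfit(x, y, 1)
r = np.corrcoef(x, y)[0, 1]
print(f"slope = {slope:.2f}, correlation = {r:.2f}")  # looks impressive

plt.scatter(x, y)  # the plot instantly reveals the outlier
plt.plot(np.sort(x), intercept + slope * np.sort(x))
plt.show()
```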

London Calling

24 January, 2017 at 20:35 | Posted in Varia | Comments Off on London Calling

 

Old loves die hard

Quand on n’a que l’amour

24 January, 2017 at 19:34 | Posted in Varia | 1 Comment


A great song of hope that still touches my heart.

Public debt and economic growth

24 January, 2017 at 11:43 | Posted in Economics | 6 Comments

Towering debts, rapidly rising taxes, constant and expensive wars, a debt burden surpassing 200% of GDP. What are the chances that a country with such characteristics would grow rapidly? Almost anyone would probably say ‘none’.

And yet, these are exactly the conditions under which the Industrial Revolution took place in Britain. Britain’s government debt went from 5% of GDP in 1700 to over 200% in 1820, it fought a war in one year out of three … and taxes increased rapidly but not enough to keep pace with the rise in spending …

[Figure: Britain’s government debt as a share of GDP, 1700-1820 (Ventura and Voth)]

Until now, scholars mostly thought of the effect of government borrowing on growth as either neutral or negative…

In a recent paper, we argue that Britain’s borrowing binge was actually good for growth (Ventura and Voth 2015). To understand why massive debt accumulation may have accelerated the Industrial Revolution, we first consider what should have happened in an economy where entrepreneurs suddenly start to exploit a new technology with high returns. Typically, we would expect capital to chase these investment opportunities – anyone with money should have tried to put their savings into new cotton factories, iron foundries and ceramics manufacturers. Where they didn’t have the expertise to invest directly, banks and stock companies should have recycled funds to direct savings to where returns were highest.

This is not what happened. Financial intermediation was woefully inadequate – it failed to send the money where it should have gone …

By issuing bonds on a massive scale, the government effectively pioneered a way – unintentionally – to put money in the pockets of entrepreneurs in the new sectors …

The shift from investing in liming, marling, draining, and enclosure into government debt liberated resources – labour that could no longer be profitably employed in the countryside had to look for employment elsewhere. Because so much of English agricultural labour was provided by wage labourers, the switch to government debt pushed workers off the land. Unsurprisingly, wages failed to keep pace with output; real wages, adjusted for urban disamenities, probably fell over the period 1750-1830. What made life miserable for the workers, as eloquently described by Engels amongst others, was a boon to the capitalists. Their profit rates continued to rise as capital received an ever-larger share of the pie – while the share of national income going to labour and land contracted. Higher profits spelled more investment in new industries, and Britain’s industrial growth accelerated.

Jaume Ventura & Joachim Voth

The golden rule of public debt

23 January, 2017 at 16:57 | Posted in Economics | 2 Comments

Nation states borrow to provide public capital: for example, rail networks, road systems, airports and bridges. These are examples of large expenditure items that are more efficiently provided by government than by private companies.

The benefits of public capital expenditures are enjoyed not only by the current generation of people, who must sacrifice consumption to pay for them, but also by future generations who will travel on the rail networks, drive on the roads, fly to and from the airports and drive over the bridges that were built by previous generations. Interest on the government debt is a payment from current taxpayers, who enjoy the fruits of public capital, to past generations, who sacrificed consumption to provide that capital.

To maintain the roads, railways, airports and bridges, the government must continue to invest in public infrastructure. And public investment should be financed by borrowing, not from current tax revenues …

The debt raised by a private sector company should be strictly less than the value of assets, broadly defined. That principle does not apply to a nation state. Even if government provided no capital services, the value of its assets or liabilities should not be zero except by chance.

National treasuries have the power to transfer resources from one generation to another. By buying and selling assets in the private markets, government creates opportunities for those of us alive today to transfer resources to or from those who are yet to be born. If government issues less debt than the value of public capital, there will be an implicit transfer from current to future generations. If it issues more debt, the implicit transfer is in the other direction.

The optimal value of debt, relative to public capital, is a political decision. Public economics suggests that the welfare of the average citizen will be greatest when the growth rate is equal to the interest rate. Economists call that principle the golden rule. Democratic societies may, or may not, choose to follow the golden rule. Whatever principle the government does choose to fund its expenditure, the optimal value of public sector borrowing will not be zero, except by chance.

Roger Farmer

Today there seems to be a rather widespread consensus that public debt is acceptable as long as it doesn’t increase too much and too fast. If the public debt-to-GDP ratio becomes higher than X %, the likelihood of debt crisis and/or lower growth increases.
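The standard debt arithmetic behind such threshold claims is worth writing out. With $b_t$ the debt-to-GDP ratio, $r$ the nominal interest rate, $g$ the nominal growth rate, and $p_t$ the primary surplus as a share of GDP,

$$
b_{t+1} = \frac{1+r}{1+g}\,b_t - p_t
\qquad\Longrightarrow\qquad
\Delta b_t \approx (r - g)\,b_t - p_t .
$$

At Farmer’s golden rule, $r = g$, the debt ratio is stable whenever the primary budget is balanced, whatever the level of debt. No particular threshold X follows from the arithmetic itself.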

But in discussing within which margins public debt is feasible, the focus is solely on the upper limit of indebtedness, and very few ask whether there might also be a problem if public debt becomes too low.

The government’s ability to conduct an ‘optimal’ public debt policy may be negatively affected if public debt becomes too small. To guarantee a well-functioning secondary market in bonds, it is essential that the government maintains a functioning primary market. If turnover and liquidity in the secondary market become too small, increased volatility and uncertainty will in the long run lead to an increase in borrowing costs. Ultimately there is even a risk that market makers would disappear, leaving bond market trading to be operated solely through brokered deals. As a kind of precautionary measure against this eventuality it may be argued — especially in times of financial turmoil and crises — that it is necessary to increase government borrowing and debt to ensure — in a longer run — good borrowing preparedness and a sustained (government) bond market.

The question whether public debt is good, and whether we may actually have too little of it, is one of our time’s biggest questions. Giving the wrong answer to it will be costly.

One of the most effective ways of clearing up this most serious of all semantic confusions is to point out that private debt differs from national debt in being external. It is owed by one person to others. That is what makes it burdensome … But this does not hold for national debt which is owed by the nation to citizens of the same nation. There is no external creditor. We owe it to ourselves.

A variant of the false analogy is the declaration that national debt puts an unfair burden on our children, who are thereby made to pay for our extravagances. Very few economists need to be reminded that if our children or grandchildren repay some of the national debt these payments will be made to our children or grandchildren and to nobody else. Taking them altogether they will no more be impoverished by making the repayments than they will be enriched by receiving them.

Abba Lerner, “The Burden of the National Debt” (1948)

Crazy

23 January, 2017 at 16:21 | Posted in Varia | Comments Off on Crazy

 

Take your epsilon and shove it!

22 January, 2017 at 17:54 | Posted in Economics | Comments Off on Take your epsilon and shove it!

Given that a normative theory is defined as a theory prescribing how a rational agent should act, neoclassical economic theory certainly has to be considered a normative theory. The problem — besides the fact that it standardly assumes not only rationality and selfishness, but also e.g. common knowledge of people’s utility functions — is that loads of research show that people almost never act in accordance with the theory:

There is a tendency among economists to think of themselves, and the agents in their models, as having hard hearts … Homo economicus is usually assumed to care about wealth more than such issues as fairness and justice. In contrast, many economists think of other social scientists (and the agents in their models) as “softies.” The research on ultimatum games belies such easy characterisations. There is a “soft” tendency among the Allocators to choose 50-50 allocations, even when the risk of rejection is eliminated. Yet the behavior of Recipients, while inconsistent with economic models, is remarkably hard-nosed. They say, in effect, “Take your offer of epsilon and shove it!”

This shouldn’t come as a surprise. Why should people accept what monkeys don’t?
Watch what happens when you pay two monkeys unequally:

The inequality gap — five sickening facts

22 January, 2017 at 17:18 | Posted in Politics & Society | 2 Comments

1 Just eight men own the same wealth as the 3.6 billion people who make up the poorest half of humanity. Although some of them have earned their fortune through talent or hard work, over half the world’s billionaires either inherited their wealth or accumulated it through industries prone to corruption and cronyism.

2 Seven out of 10 people live in a country that has seen a rise in inequality in the last 30 years.

3 The richest are accumulating wealth at such an astonishing rate that the world could see its first trillionaire in just 25 years. To put that figure in perspective, you would need to spend $1 million every day for 2738 years to spend $1 trillion.

4 Extreme inequality across the globe is having a tremendous impact on women’s lives. Employed women, who face high levels of discrimination in the workplace and take on a disproportionate amount of unpaid care work, often find themselves at the bottom of the pile. On current trends, it will take 170 years for women to be paid the same as men.

5 Corporate tax dodging costs poor countries at least $100 billion every year. This is enough money to provide an education for the 124 million children who aren’t in school and to fund health care services that could prevent the deaths of at least six million children.

Oxfam

January 20, 2017 — a date that will live in infamy

19 January, 2017 at 18:26 | Posted in Politics & Society | 9 Comments

How do you grieve for a nation? I don’t know.

But one thing I do know is that January 20 will be one of the saddest days I and all my American friends have ever experienced.

That a country that has given us presidents like George Washington, Thomas Jefferson, Abraham Lincoln, and Franklin D. Roosevelt, is going to be run by a witless clown like Donald Trump is an absolute disgrace.


Neoliberal a(u)ction

19 January, 2017 at 11:06 | Posted in Politics & Society | 5 Comments

 

Economic models are getting more and more sophisticated — and totally useless

18 January, 2017 at 18:27 | Posted in Economics | Comments Off on Economic models are getting more and more sophisticated — and totally useless

 

Those of us in the economics community who are impolite enough to dare question the preferred methods and models applied in mainstream economics are as a rule met with disapproval. But although people seem to get very agitated and upset by the critique — just read the commentaries on this blog if you don’t believe me — defenders of “received theory” always say that the critique is “nothing new”, that they have always been “well aware” of the problems, and so on, and so on.

So, for the benefit of all mindless practitioners of mainstream economic modeling — who defend mainstream economics with arguments like “the speed with which macro has put finance at the center of its theories of the business cycle has been nothing less than stunning,” and re the patently ridiculous representative-agent modeling, maintain that there “have been efforts to put heterogeneity into big DSGE-type models” but that these models “didn’t get quite as far, because this kind of thing is very technically difficult to model,” and as for rational expectations admit that “so far, macroeconomists are still very timid about abandoning this pillar of the Lucas/Prescott Revolution,” but that “there’s no clear alternative” — and who don’t want to be disturbed in their doings, eminent mathematical statistician David Freedman has put together a very practical list of vacuous responses to criticism that can be freely used to save your peace of mind:

We know all that. Nothing is perfect … The assumptions are reasonable. The assumptions don’t matter. The assumptions are conservative. You can’t prove the assumptions are wrong. The biases will cancel. We can model the biases. We’re only doing what everybody else does. Now we use more sophisticated techniques. If we don’t do it, someone else will. What would you do? The decision-maker has to be better off with us than without us … The models aren’t totally useless. You have to do the best you can with the data. You have to make assumptions in order to make progress. You have to give the models the benefit of the doubt. Where’s the harm?

Keynes’ critique of econometrics — as valid today as it was in 1939

17 January, 2017 at 16:40 | Posted in Statistics & Econometrics | 3 Comments

Renowned ‘error-statistician’ Aris Spanos maintains — in a comment on this blog a couple of weeks ago — that Keynes’ criticisms of econometrics and of the reliability of inferences made when it is applied “have been addressed or answered.”

One could, of course, say that, but the valuation of the statement hinges completely on what we mean by a question or critique being ‘addressed’ or ‘answered’. As I will argue below, Keynes’ critique is still valid and unanswered in the sense that the problems he pointed at are still with us today and ‘unsolved.’ Ignoring them — the most common practice among applied econometricians — is not to solve them.

To apply statistical and mathematical methods to the real-world economy, the econometrician has to make some quite strong assumptions. In a review of Tinbergen’s econometric work — published in The Economic Journal in 1939 — Keynes gave a comprehensive critique, focussing on the limiting and unreal character of the assumptions that econometric analyses build on:

Completeness: Where Tinbergen attempts to specify and quantify which different factors influence the business cycle, Keynes maintains there has to be a complete list of all the relevant factors to avoid misspecification and spurious causal claims. Usually this problem is ‘solved’ by econometricians assuming that they somehow have a ‘correct’ model specification. Keynes is, to put it mildly, unconvinced:

It will be remembered that the seventy translators of the Septuagint were shut up in seventy separate rooms with the Hebrew text and brought out with them, when they emerged, seventy identical translations. Would the same miracle be vouchsafed if seventy multiple correlators were shut up with the same statistical material? And anyhow, I suppose, if each had a different economist perched on his a priori, that would make a difference to the outcome.

J M Keynes

Homogeneity: To make inductive inferences possible — and to be able to apply econometrics — the system we try to analyse has to have a large degree of ‘homogeneity.’ According to Keynes most social and economic systems — especially viewed from the perspective of real historical time — lack that ‘homogeneity.’ As he had argued already in Treatise on Probability (ch. 22), it wasn’t always possible to take repeated samples from a fixed population when analysing real-world economies. In many cases there simply are no reasons at all to assume the samples to be homogenous. Lack of ‘homogeneity’ makes the principle of ‘limited independent variety’ non-applicable, and hence makes inductive inferences, strictly seen, impossible since one of their fundamental logical premisses is not satisfied. Without “much repetition and uniformity in our experience” there is no justification for placing “great confidence” in our inductions (TP ch. 8).

And then, of course, there is also the ‘reverse’ variability problem of non-excitation: factors that do not change significantly during the period analysed, can still very well be extremely important causal factors.

Stability: Tinbergen assumes there is a stable spatio-temporal relationship between the variables his econometric models analyze. But as Keynes had argued already in his Treatise on Probability it was not really possible to make inductive generalisations based on correlations in one sample. As later studies of ‘regime shifts’ and ‘structural breaks’ have shown us, it is exceedingly difficult to find and establish the existence of stable econometric parameters for anything but rather short time series.

Measurability: Tinbergen’s model assumes that all relevant factors are measurable. Keynes questions if it is possible to adequately quantify and measure things like expectations and political and psychological factors. And more than anything, he questioned — both on epistemological and ontological grounds — that it was always and everywhere possible to measure real-world uncertainty with the help of probabilistic risk measures. Thinking otherwise can, as Keynes wrote, “only lead to error and delusion.”

Independence: Tinbergen assumes that the variables he treats are independent (still a standard assumption in econometrics). Keynes argues that in such a complex, organic and evolutionary system as an economy, independence is a deeply unrealistic assumption to make. Building econometric models on that kind of simplistic and unrealistic assumption risks producing nothing but spurious correlations and causalities. Real-world economies are organic systems for which the statistical methods used in econometrics are ill-suited, or even, strictly seen, inapplicable. Mechanical probabilistic models have little leverage when applied to non-atomic evolving organic systems — such as economies.
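How badly the independence assumption can mislead is easy to demonstrate with the classic spurious-regression experiment of Granger and Newbold: regress one pure random walk on another, completely independent one, and the conventional t-test will routinely report a highly ‘significant’ relationship. A minimal sketch, assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200

# Two completely independent random walks.
x = np.cumsum(rng.standard_normal(n))
y = np.cumsum(rng.standard_normal(n))

# OLS of y on x, with the conventional t-statistic for the slope.
X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta
s2 = resid @ resid / (n - 2)
se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
print(f"t-statistic: {beta[1] / se:.1f}")  # typically far beyond +/-2

# The 'significance' is spurious: it reflects the trending, dependent
# character of the series, not any causal link between them.
```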

Building econometric models can’t be a goal in itself. Good econometric models are means that make it possible for us to infer things about the real-world systems they ‘represent.’ If we can’t show that the mechanisms or causes that we isolate and handle in our econometric models are ‘exportable’ to the real-world, they are of limited value to our understanding, explanations or predictions of real-world economic systems.

The kind of fundamental assumption about the character of material laws, on which scientists appear commonly to act, seems to me to be much less simple than the bare principle of uniformity. They appear to assume something much more like what mathematicians call the principle of the superposition of small effects, or, as I prefer to call it, in this connection, the atomic character of natural law. The system of the material universe must consist, if this kind of assumption is warranted, of bodies which we may term (without any implication as to their size being conveyed thereby) legal atoms, such that each of them exercises its own separate, independent, and invariable effect, a change of the total state being compounded of a number of separate changes each of which is solely due to a separate portion of the preceding state. We do not have an invariable relation between particular bodies, but nevertheless each has on the others its own separate and invariable effect, which does not change with changing circumstances, although, of course, the total effect may be changed to almost any extent if all the other accompanying causes are different. Each atom can, according to this theory, be treated as a separate cause and does not enter into different organic combinations in each of which it is regulated by different laws …

The scientist wishes, in fact, to assume that the occurrence of a phenomenon which has appeared as part of a more complex phenomenon, may be some reason for expecting it to be associated on another occasion with part of the same complex. Yet if different wholes were subject to laws qua wholes and not simply on account of and in proportion to the differences of their parts, knowledge of a part could not lead, it would seem, even to presumptive or probable knowledge as to its association with other parts. Given, on the other hand, a number of legally atomic units and the laws connecting them, it would be possible to deduce their effects pro tanto without an exhaustive knowledge of all the coexisting circumstances.

Linearity: To make his models tractable, Tinbergen assumes the relationships between the variables he studies to be linear. This is still standard procedure today, but as Keynes writes:

It is a very drastic and usually improbable postulate to suppose that all economic forces are of this character, producing independent changes in the phenomenon under investigation which are directly proportional to the changes in themselves; indeed, it is ridiculous.

To Keynes it was a ‘fallacy of reification’ to assume that all quantities are additive (an assumption closely linked to independence and linearity).

The unpopularity of the principle of organic unities shows very clearly how great is the danger of the assumption of unproved additive formulas. The fallacy, of which ignorance of organic unity is a particular instance, may perhaps be mathematically represented thus: suppose f(x) is the goodness of x and f(y) is the goodness of y. It is then assumed that the goodness of x and y together is f(x) + f(y) when it is clearly f(x + y) and only in special cases will it be true that f(x + y) = f(x) + f(y). It is plain that it is never legitimate to assume this property in the case of any given function without proof.

J. M. Keynes “Ethics in Relation to Conduct” (1903)

And as even one of the founding fathers of modern econometrics — Trygve Haavelmo — wrote:

What is the use of testing, say, the significance of regression coefficients, when maybe, the whole assumption of the linear regression equation is wrong?

Real-world social systems are usually not governed by stable causal mechanisms or capacities. The kinds of ‘laws’ and relations that econometrics has established are laws and relations about entities in models that presuppose causal mechanisms and variables — and the relationships between them — being linear, additive, homogenous, stable, invariant and atomistic. But when causal mechanisms operate in the real world, they only do so in ever-changing and unstable combinations where the whole is more than a mechanical sum of parts. Since statisticians and econometricians — as far as I can see — haven’t been able to convincingly warrant their assumptions of homogeneity, stability, invariance, independence and additivity as being ontologically isomorphic to real-world economic systems, Keynes’ critique is still valid. As long as — as Keynes writes in a letter to Frisch in 1935 — “nothing emerges at the end which has not been introduced expressively or tacitly at the beginning,” I remain doubtful of the scientific aspirations of econometrics.

In his critique of Tinbergen, Keynes points us to the fundamental logical, epistemological and ontological problems of applying statistical methods to a basically unpredictable, uncertain, complex, unstable, interdependent, and ever-changing social reality. Methods designed to analyse repeated sampling in controlled experiments under fixed conditions are not easily extended to an organic and non-atomistic world where time and history play decisive roles.

Econometric modeling should never be a substitute for thinking. From that perspective it is really depressing to see how much of Keynes’ critique of the pioneering econometrics in the 1930s-1940s is still relevant today.

The general line you take is interesting and useful. It is, of course, not exactly comparable with mine. I was raising the logical difficulties. You say in effect that, if one was to take these seriously, one would give up the ghost in the first lap, but that the method, used judiciously as an aid to more theoretical enquiries and as a means of suggesting possibilities and probabilities rather than anything else, taken with enough grains of salt and applied with superlative common sense, won’t do much harm. I should quite agree with that. That is how the method ought to be used.

Keynes, letter to E.J. Broster, December 19, 1939

Calvo pricing — a ‘New Keynesian’ fairytale

16 January, 2017 at 18:50 | Posted in Economics | Comments Off on Calvo pricing — a ‘New Keynesian’ fairytale

Thus your standard New Keynesian model will use Calvo pricing and model the current inflation rate as tightly coupled to the present value of expected future output gaps. Is this a requirement anyone really wants to put on the model intended to help us understand the world that actually exists out there? Thus your standard New Keynesian model will calculate the expected path of consumption as the solution to some Euler equation plus an intertemporal budget constraint, with current wealth and the projected real interest rate path as the only factors that matter. This is fine if you want to demonstrate that the model can produce macroeconomic pathologies. But is it a not-stupid thing to do if you want your model to fit reality?

I remember attending the first lecture in Tom Sargent’s evening macroeconomics class back when I was an undergraduate: a very smart man from whom I have learned an enormous amount, and well deserving of his Nobel Prize. But…

He said … we were going to build a rigorous, microfounded model of the demand for money: We would assume that everyone lived for two periods, worked in the first period when they were young and sold what they produced to the old, held money as they aged, and then when they were old used their money to buy the goods newly produced by the new generation of young. Tom called this “microfoundations” and thought it gave powerful insights into the demand for money that you could not get from money-in-the-utility-function models.

I thought that it was a just-so story, and that whatever insights it purchased for you were probably not things you really wanted to buy. I thought it was dangerous to presume that you understood something because you had “microfoundations” when those microfoundations were wrong. After all, Ptolemaic astronomy had microfoundations: Mercury moved more rapidly than Saturn because the Angel of Mercury beat his wings more rapidly than the Angel of Saturn and because Mercury was lighter than Saturn…

Brad DeLong

Brad DeLong is of course absolutely right here, and one could only wish that other ‘New Keynesian’ macroeconomists would take a similar critical approach to their own modeling endeavours …
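For reference, the Calvo-pricing result alluded to above is the New Keynesian Phillips curve which, solved forward, ties current inflation to the discounted sum of expected future output gaps:

$$
\pi_t = \beta\,\mathbb{E}_t\pi_{t+1} + \kappa x_t
\qquad\Longrightarrow\qquad
\pi_t = \kappa \sum_{k=0}^{\infty} \beta^k\,\mathbb{E}_t x_{t+k},
$$

where $\pi_t$ is inflation, $x_t$ the output gap, $\beta$ the discount factor, and the slope $\kappa$ is determined by the assumed frequency of Calvo price resetting.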

