Mainstream economics — sacrificing realism at the altar of mathematical purity

22 Nov, 2016 at 18:05 | Posted in Economics | 6 Comments

Economists are too detached from the real world and have failed to learn from the financial crisis, insisting on using mathematical models which do not reflect reality, according to the Bank of England’s chief economist Andy Haldane.

The public has lost faith in economists since the credit crunch, he said, but the profession has failed to thoroughly re-examine its failings to come up with a new model of operating.

“The various reports into the economic costs of the UK leaving the EU most likely fell at the same hurdle. They are written, in the main, by the elite for the elite,” said Mr Haldane, writing the foreword to a new book, called ‘The Econocracy: the perils of leaving economics to the experts’ …

The chief economist said that the Great Depression of the 1930s resulted in a major overhaul of economic thinking, led by John Maynard Keynes, who emerged “as the most influential economist of the twentieth century”.

But the recent financial crisis and slow recovery have not yet prompted this great re-thinking …

For now, economists need to focus on reviewing their models, accepting a diversity of thought rather than one solid orthodoxy, and on communicating more clearly.

Economists should focus on other disciplines as well as maths, he said.

“Mainstream economic models have sacrificed too much realism at the altar of mathematical purity. Their various simplifying assumptions have served aesthetic rather than practical ends,” Mr Haldane wrote.

“As a profession, economics has become too much of a methodological monoculture. And that lack of intellectual diversity cost the profession dear when the single crop failed spectacularly during the crisis.”

Tim Wallace/The Telegraph

The rebel who blew up macroeconomics

22 Nov, 2016 at 15:18 | Posted in Economics | 1 Comment

Paul Romer says he really hadn’t planned to trash macroeconomics as a math-obsessed pseudoscience. Or infuriate countless colleagues. It just sort of happened …

The upshot was “The Trouble With Macroeconomics,” a scathing critique that landed among Romer’s peers like a grenade. In a time of febrile politics, with anti-establishment revolts breaking out everywhere, faith in economists was already ebbing: They got blamed for failing to see the Great Recession coming and, later, to suggest effective remedies. Then, along came one of the leading practitioners of his generation, to say that the skeptics were onto something.

“For more than three decades, macroeconomics has gone backwards,” the paper began. Romer closed out his argument, some 20 pages later, by accusing a cohort of economists of drifting away from science, more interested in preserving reputations than testing their theories against reality, “more committed to friends than facts.” In between, he offers a wicked parody of a modern macro argument: “Assume A, assume B, … blah blah blah … and so we have proven that P is true.” …

Romer said he hopes at least to have set an example, for younger economists, of how scientific inquiry should proceed — on Enlightenment lines. No authority-figures should command automatic deference, or be placed above criticism, and voices from outside the like-minded group shouldn’t be ignored. He worries that those principles are at risk, well beyond his own field … And at the deepest level, he thinks it’s a misunderstanding of science that has sent so many economists down the wrong track. “Essentially, their belief was that math could tell you the deep secrets of the universe,” he said.


‘Post-real’ macroeconomics — three decades of intellectual regress

22 Nov, 2016 at 10:40 | Posted in Economics | 1 Comment

Macroeconomists got comfortable with the idea that fluctuations in macroeconomic aggregates are caused by imaginary shocks, instead of actions that people take, after Kydland and Prescott (1982) launched the real business cycle (RBC) model …

In response to the observation that the shocks are imaginary, a standard defence invokes Milton Friedman’s (1953) methodological assertion from unnamed authority that “the more significant the theory, the more unrealistic the assumptions.” More recently, “all models are false” seems to have become the universal hand-wave for dismissing any fact that does not conform to the model that is the current favourite.

The noncommittal relationship with the truth revealed by these methodological evasions and the “less than totally convinced …” dismissal of fact goes so far beyond post-modern irony that it deserves its own label. I suggest “post-real.”

Paul Romer

There are many kinds of useless economics held in high regard within the mainstream economics establishment today. Few deserve it less than the post-real macroeconomic theory — mostly connected with Finn Kydland, Robert Lucas, Edward Prescott and Thomas Sargent — called RBC.

Chicago economics cultivates the view that scientific theories have nothing to do with truth. Constructing theories and building models is not even considered an activity with the intent of approximating truth. For Chicago economists it is only an endeavour to organize their thoughts in a ‘useful’ manner.

What a handy view of science!

What these defenders of scientific storytelling ‘forget’ is that potential explanatory power achieved in thought experimental models is not enough for attaining real explanations. Model explanations are at best conjectures, and whether they do or do not explain things in the real world is something we have to test. To just believe that you understand or explain things better with thought experiments is not enough.

Without a warranted export certificate to the real world, model explanations are pretty worthless. Proving things in models is not enough — not even after having put ‘New Keynesian’ sticky-price DSGE lipstick on the RBC pig.

Truth is an important concept in real science — and models based on meaningless calibrated ‘facts’ and ‘assumptions’ with unknown truth value are poor substitutes.

Public debt should not be zero. Ever!

22 Nov, 2016 at 09:26 | Posted in Economics | 3 Comments

Nation states borrow to provide public capital: For example, rail networks, road systems, airports and bridges. These are examples of large expenditure items that are more efficiently provided by government than by private companies.

The benefits of public capital expenditures are enjoyed not only by the current generation of people, who must sacrifice consumption to pay for them, but also by future generations who will travel on the rail networks, drive on the roads, fly to and from the airports and drive over the bridges that were built by previous generations. Interest on the government debt is a payment from current taxpayers, who enjoy the fruits of public capital, to past generations, who sacrificed consumption to provide that capital.

To maintain the roads, railways, airports and bridges, the government must continue to invest in public infrastructure. And public investment should be financed by borrowing, not from current tax revenues.

Investment in public infrastructure was, on average, equal to 4.3% of GDP in the period from 1948 through 1983. It has since fallen to 1.6% of GDP. There is a strong case to be made for increasing investment in public infrastructure. First, the public capital that was constructed in the post WWII period must be maintained in order to allow the private sector to function effectively. Second, there is a strong case for the construction of new public infrastructure to promote and facilitate future private sector growth.

The debt raised by a private sector company should be strictly less than the value of assets, broadly defined. That principle does not apply to a nation state. Even if government provided no capital services, the value of its assets or liabilities should not be zero except by chance.

National treasuries have the power to transfer resources from one generation to another. By buying and selling assets in the private markets, government creates opportunities for those of us alive today to transfer resources to or from those who are yet to be born. If government issues less debt than the value of public capital, there will be an implicit transfer from current to future generations. If it issues more debt, the implicit transfer is in the other direction.

The optimal value of debt, relative to public capital, is a political decision … Whatever principle the government does choose to fund its expenditure, the optimal value of public sector borrowing will not be zero, except by chance.

Roger Farmer

Today there seems to be a rather widespread consensus that public debt is acceptable as long as it doesn’t increase too much or too fast. If the public debt-to-GDP ratio becomes higher than X per cent, the likelihood of a debt crisis and/or lower growth increases.

But in discussing within which margins public debt is feasible, the focus is solely on the upper limit of indebtedness, and very few ask whether there may also be a problem if public debt becomes too low.

The government’s ability to conduct an ‘optimal’ public debt policy may be impaired if public debt becomes too small. A well-functioning secondary market in bonds requires a sufficient volume of outstanding government debt. If turnover and liquidity in the secondary market become too small, increased volatility and uncertainty will in the long run lead to higher borrowing costs. Ultimately there is even a risk that market makers would disappear, leaving bond market trading to be conducted solely through brokered deals. As a kind of precautionary measure against this eventuality it may be argued – especially in times of financial turmoil and crisis – that it is necessary to increase government borrowing and debt to ensure, in the longer run, good borrowing preparedness and a sustained (government) bond market.

The question whether public debt is good, and whether we may actually have too little of it, is one of our time’s biggest questions. Giving the wrong answer to it will be costly.

One of the most effective ways of clearing up this most serious of all semantic confusions is to point out that private debt differs from national debt in being external. It is owed by one person to others. That is what makes it burdensome. Because it is interpersonal the proper analogy is not to national debt but to international debt…. But this does not hold for national debt which is owed by the nation to citizens of the same nation. There is no external creditor. We owe it to ourselves.

A variant of the false analogy is the declaration that national debt puts an unfair burden on our children, who are thereby made to pay for our extravagances. Very few economists need to be reminded that if our children or grandchildren repay some of the national debt these payments will be made to our children or grandchildren and to nobody else. Taking them altogether they will no more be impoverished by making the repayments than they will be enriched by receiving them.

Abba Lerner The Burden of the National Debt (1948)

Slim by chocolate — a severe case of goofed p-hacking

21 Nov, 2016 at 17:12 | Posted in Statistics & Econometrics | 2 Comments

Frank randomly assigned the subjects to one of three diet groups. One group followed a low-carbohydrate diet. Another followed the same low-carb diet plus a daily 1.5 oz. bar of dark chocolate. And the rest, a control group, were instructed to make no changes to their current diet. They weighed themselves each morning for 21 days, and the study finished with a final round of questionnaires and blood tests …

Both of the treatment groups lost about 5 pounds over the course of the study, while the control group’s average body weight fluctuated up and down around zero. But the people on the low-carb diet plus chocolate? They lost weight 10 percent faster. Not only was that difference statistically significant, but the chocolate group had better cholesterol readings and higher scores on the well-being survey.

I know what you’re thinking. The study did show accelerated weight loss in the chocolate group—shouldn’t we trust it? Isn’t that how science works?

Here’s a dirty little science secret: If you measure a large number of things about a small number of people, you are almost guaranteed to get a “statistically significant” result. Our study included 18 different measurements—weight, cholesterol, sodium, blood protein levels, sleep quality, well-being, etc.—from 15 people. (One subject was dropped.) That study design is a recipe for false positives.

Think of the measurements as lottery tickets. Each one has a small chance of paying off in the form of a “significant” result that we can spin a story around and sell to the media. The more tickets you buy, the more likely you are to win. We didn’t know exactly what would pan out—the headline could have been that chocolate improves sleep or lowers blood pressure—but we knew our chances of getting at least one “statistically significant” result were pretty good.

Whenever you hear that phrase, it means that some result has a small p value. The letter p seems to have totemic power, but it’s just a way to gauge the signal-to-noise ratio in the data. The conventional cutoff for being “significant” is 0.05, which means that there is just a 5 percent chance that your result is a random fluctuation. The more lottery tickets, the better your chances of getting a false positive. So how many tickets do you need to buy?

P(winning) = 1 – (1 – p)^n

With our 18 measurements, we had a 60% chance of getting some “significant” result with p < 0.05. (The measurements weren’t independent, so it could be even higher.) The game was stacked in our favor.
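Bohannon’s lottery-ticket arithmetic is easy to check. Here is a minimal sketch (my own illustration, using the post’s numbers p = 0.05 and n = 18):

```python
# Chance of at least one "significant" result among n independent tests,
# each with false-positive probability p (the lottery-ticket formula above).
def p_any_false_positive(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

# With 18 measurements at the conventional 0.05 cutoff:
print(round(p_any_false_positive(0.05, 18), 3))  # 0.603, roughly the 60% quoted
```

Since the 18 measurements were not actually independent, this is only an approximation; as Bohannon notes, the real chance could be even higher.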

It’s called p-hacking—fiddling with your experimental design and data to push p under 0.05—and it’s a big problem. Most scientists are honest and do it unconsciously. They get negative results, convince themselves they goofed, and repeat the experiment until it “works.” Or they drop “outlier” data points.

John Bohannon

Statistical inferences depend on both what actually happens and what might have happened. And Bohannon’s (in)famous chocolate con more than anything else underscores the dangers of confusing the model with reality. Or as W.V.O. Quine had it: “Confusion of sign and object is the original sin.”

There are no such things as free-standing probabilities – simply because probabilities are, strictly speaking, only defined relative to chance set-ups – probabilistic nomological machines like flipped coins or roulette wheels. And even these machines can be tricky to handle. Although prob(fair coin lands heads | I toss it) = prob(fair coin lands heads & I toss it)/prob(I toss it) may be well-defined, it’s not certain we can use it, since we cannot define the probability that I will toss the coin, given that I am not a nomological machine producing coin tosses.

No nomological machine – no probability.

Econometrics — science built on beliefs and untestable assumptions

21 Nov, 2016 at 12:39 | Posted in Statistics & Econometrics | 1 Comment

What is distinctive about structural models, in contrast to forecasting models, is that they are supposed to be – when successfully supported by observation – informative about the impact of interventions in the economy. As such, they carry causal content about the structure of the economy. Therefore, structural models do not model mere functional relations supported by correlations, their functional relations have causal content which support counterfactuals about what would happen under certain changes or interventions.

This suggests an important question: just what is the causal content attributed to structural models in econometrics? And, from the more restricted perspective of this paper, what does this imply with respect to the interpretation of the error term? What does the error term represent causally in structural equation models in econometrics? And finally, what constraints are imposed on the error term for successful causal inference? …

I now consider briefly a key constraint that may be necessary for the error term to meet for using the model for causal inference. To keep the discussion simple, I look only at the simplest model

y = αx + u

The obvious experiment that comes to mind is to vary x, to see by how much y changes as a result. This sounds straightforward: one changes x, y changes, and one calculates α as follows.

α = ∆y/∆x

Everything seems straightforward. However, there is a concern since u is unobservable: how does one know that u has not also changed in changing x? Suppose that u does change, so that hidden in the change in y there is a change in u, that is, the change in y is incorrectly measured by

∆y_false = ∆y + ∆u

And thus α is falsely measured as

α_false = ∆y_false/∆x = ∆y/∆x + ∆u/∆x = α + ∆u/∆x

Therefore, in order for the experiment to give the correct measurement of α, one needs either to know that u has not also changed, or to know by how much it has changed (if it has). Since u is unobservable, it is not known by how much u has changed. This leaves as the only option the need to know that, in changing x, u has not also been unwittingly changed. Intuitively, this requires that it is known that whatever cause(s) of x are used to change x are not also causes of any of the factors hidden in u …
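Fennell’s argument can be made concrete with a toy calculation (my own sketch; α = 2 and the 0.25 shift in u are invented numbers for illustration):

```python
# Toy version of y = alpha*x + u: the measured slope, delta-y/delta-x,
# recovers alpha only if the intervention on x leaves the unobservable u alone.
ALPHA = 2.0

def outcome(x: float, u: float) -> float:
    return ALPHA * x + u

x0, x1 = 1.0, 2.0
u0 = 0.5

# Clean experiment: u is unchanged, so delta-y/delta-x = alpha.
slope_clean = (outcome(x1, u0) - outcome(x0, u0)) / (x1 - x0)

# Confounded experiment: the cause used to move x also shifts u by 0.25,
# so the measured slope is alpha + delta-u/delta-x.
slope_confounded = (outcome(x1, u0 + 0.25) - outcome(x0, u0)) / (x1 - x0)

print(slope_clean)       # 2.0
print(slope_confounded)  # 2.25
```

The second experiment looks just as clean from the outside; nothing in the observed data reveals that 0.25 of the measured slope belongs to the hidden change in u.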

More generally, the example above shows a need to constrain the error term in the equation in a non-simultaneous structural equation model as follows. It requires that each right hand variable have a cause that causes y but not via any factor hidden in the error term. This imposes a limit on the common causes the factors in the error term can have with those factors explicitly modelled …

Consider briefly the testability of the two key assumptions brought to light in this section: (i) that the error term denotes the net impact of a set of omitted causal factors and (ii) that each error term has at least one cause which does not cause the error term. Given that these assumptions directly involve the factors omitted in the error term, testing them empirically seems impossible without information about what is hidden in the error term. This places the modeller in a difficult situation: how to know that something important has not been hidden. In practice, there will always be an element of faith in the assumptions about the error term, assuming that assumptions like (i) and (ii) have been met, even if it is impossible to test these conclusively.

Damien Fennell

In econometrics textbooks it is often said that the error term in the regression models used represents the effect of the variables that were omitted from the model. The error term is somehow thought to be a ‘cover-all’ term representing omitted content in the model and necessary to include to ‘save’ the assumed deterministic relation between the other random variables included in the model. Error terms are usually assumed to be orthogonal (uncorrelated) to the explanatory variables. But since they are unobservable, they are also impossible to empirically test. And without justification of the orthogonality assumption, there is as a rule nothing to ensure identifiability:
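A small simulation (entirely my own construction, not from the texts quoted here) shows what the orthogonality failure does in practice: when an omitted factor z drives both the regressor and the error term, OLS absorbs part of z’s effect into the estimated coefficient.

```python
import random

random.seed(0)
n = 10_000
z = [random.gauss(0, 1) for _ in range(n)]     # omitted common cause
x = [zi + random.gauss(0, 1) for zi in z]      # regressor depends on z
# True model: y = 2x + u with u = 1.5z + noise, so u is NOT orthogonal to x.
y = [2.0 * xi + 1.5 * zi + random.gauss(0, 1) for xi, zi in zip(x, z)]

def ols_slope(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    var = sum((a - mx) ** 2 for a in xs)
    return cov / var

# Theoretical bias: cov(x, u)/var(x) = 1.5/2 = 0.75, so the estimate
# lands near 2.75 rather than the true 2.0.
print(round(ols_slope(x, y), 2))
```

Nothing in the fitted regression signals the problem: the residuals look well-behaved, and only knowledge of the data-generating process (which the econometrician never has) reveals the bias.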

With enough math, an author can be confident that most readers will never figure out where a FWUTV (facts with unknown truth value) is buried. A discussant or referee cannot say that an identification assumption is not credible if they cannot figure out what it is and are too embarrassed to ask.

Distributional assumptions about error terms are a good place to bury things because hardly anyone pays attention to them. Moreover, if a critic does see that this is the identifying assumption, how can she win an argument about the true expected value of the level of aether? If the author can make up an imaginary variable, “because I say so” seems like a pretty convincing answer to any question about its properties.

Paul Romer

Follies and fallacies of Chicago economics

20 Nov, 2016 at 13:28 | Posted in Economics | 2 Comments

Every dollar of increased government spending must correspond to one less dollar of private spending. Jobs created by stimulus spending are offset by jobs lost from the decline in private spending. We can build roads instead of factories, but fiscal stimulus can’t help us to build more of both. This form of “crowding out” is just accounting, and doesn’t rest on any perceptions or behavioral assumptions.

John Cochrane

And the tiny little problem? It’s utterly and completely wrong!

What Cochrane is reiterating here is nothing but Say’s law, basically saying that savings equal investments, and that if the state increases investments, private investments have to come down (‘crowding out’). As an accounting identity there is of course nothing to say against the law, but as such it is also totally uninteresting from an economic point of view. As some of my Swedish forerunners — Gunnar Myrdal and Erik Lindahl — stressed more than 80 years ago, it is really a question of ex ante and ex post adjustments. And as further stressed by a famous English economist around the same time, what happens when ex ante savings and investments differ is that we basically get output adjustments. GDP changes and thereby makes savings and investments equal ex post. And this, nota bene, says nothing at all about the success or failure of fiscal policies!
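The ex ante/ex post mechanism can be sketched with the simplest textbook multiplier model (the consumption function C = 0.75·Y and the spending figures are my own illustrative assumptions, not anything Cochrane or Keynes specifies):

```python
# Simple Keynesian income determination: Y = C + I + G with C = c*Y,
# so equilibrium output is Y = (I + G) / (1 - c). Output, not private
# spending, adjusts until ex post saving matches investment plus the deficit.
def equilibrium_income(investment: float, gov_spending: float, c: float = 0.75) -> float:
    return (investment + gov_spending) / (1 - c)

I, G = 100.0, 0.0
y_before = equilibrium_income(I, G)        # 400.0
y_after = equilibrium_income(I, G + 20.0)  # 480.0: GDP rises, nothing is crowded out
saving_after = (1 - 0.75) * y_after        # 120.0 = I + G ex post
print(y_before, y_after, saving_after)
```

In this toy economy the accounting identity still holds every period; what the one-for-one crowding-out claim misses is that it is income, not the composition of spending, that does the adjusting.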

Government borrowing is supposed to “crowd out” private investment.

The current reality is that on the contrary, the expenditure of the borrowed funds (unlike the expenditure of tax revenues) will generate added disposable income, enhance the demand for the products of private industry, and make private investment more profitable. As long as there are plenty of idle resources lying around, and monetary authorities behave sensibly (instead of trying to counter the supposedly inflationary effect of the deficit), those with a prospect for profitable investment can be enabled to obtain financing. Under these circumstances, each additional dollar of deficit will in the medium long run induce two or more additional dollars of private investment. The capital created is an increment to someone’s wealth and ipso facto someone’s saving. “Supply creates its own demand” fails as soon as some of the income generated by the supply is saved, but investment does create its own saving, and more. Any crowding out that may occur is the result, not of underlying economic reality, but of inappropriate restrictive reactions on the part of a monetary authority in response to the deficit.

William Vickrey Fifteen Fatal Fallacies of Financial Fundamentalism

In a lecture on the US recession, Robert Lucas gave an outline of what the new classical school of macroeconomics today thinks about the latest downturns in the US economy and its future prospects.

Lucas starts by showing that real US GDP has grown at an average yearly rate of 3 per cent since 1870, with one big dip during the Depression of the 1930s and a big – but smaller – dip in the recent recession.

After stating his view that the US recession that started in 2008 was basically caused by a run for liquidity, Lucas then goes on to discuss the prospect of recovery from where the US economy is today, maintaining that past experience would suggest an “automatic” recovery, if the free market system is left to repair itself to equilibrium unimpeded by social welfare activities of the government.

As could be expected there is no room for any Keynesian-type considerations on possible shortages of aggregate demand discouraging the recovery of the economy. No, as usual in the new classical macroeconomic school’s explanations and prescriptions, the blame game points to the government and its lack of supply-side policies.

Lucas is convinced that what might arrest the recovery are higher taxes on the rich, greater government involvement in the medical sector and tougher regulations of the financial sector. But – if left to run its course unimpeded by European-type welfare state activities – the free market will fix it all.

In a rather cavalier manner – without a hint of argument or presentation of empirical facts – Lucas dismisses even the possibility of a shortfall of demand. For someone who already 30 years ago proclaimed Keynesianism dead – “people don’t take Keynesian theorizing seriously anymore; the audience starts to whisper and giggle to one another” – this is of course only what could be expected. Demand considerations are simply ruled out on whimsical theoretical-ideological grounds, much like we have seen other neo-liberal economists do over and over again in their attempts to explain away the fact that the latest economic crisis shows how markets have failed to deliver. If there is a problem with the economy, the true cause has to be government.

Chicago economics is a dangerous pseudo-scientific zombie ideology that ultimately relies on the poor having to pay for the mistakes of the rich. Trying to explain business cycles in terms of rational expectations has failed blatantly. Maybe it would be asking too much of freshwater economists like Lucas and Cochrane to concede that, but it is still a fact that ought to be embarrassing. My rational expectation is that 30 years from now, no one will know who Robert Lucas or John Cochrane was. John Maynard Keynes, on the other hand, will still be known as one of the masters of economics.

Top 10 critiques of econometrics

20 Nov, 2016 at 11:01 | Posted in Statistics & Econometrics | Comments Off on Top 10 critiques of econometrics


•Achen, Christopher (1982). Interpreting and using regression. SAGE

•Berk, Richard (2004). Regression Analysis: A Constructive Critique. SAGE

•Freedman, David (1991). ‘Statistical Models and Shoe Leather’. Sociological Methodology

•Kennedy, Peter (2002). ‘Sinning in the Basement: What are the Rules? The Ten Commandments of Applied Econometrics’. Journal of Economic Surveys

•Keynes, John Maynard (1939). ‘Professor Tinbergen’s method’. Economic Journal

•Klees, Steven (2016). ‘Inferences from regression analysis: are they valid?’ Real-World Economics Review

•Lawson, Tony (1989). ‘Realism and instrumentalism in the development of econometrics’. Oxford Economic Papers

•Leamer, Edward (1983). ‘Let’s take the con out of econometrics’. American Economic Review

•Lieberson, Stanley (1987). Making it count: the improvement of social research and theory. University of California Press

•Zaman, Asad (2012). ‘Methodological Mistakes and Econometric Consequences’. International Econometric Review

Is Dani Rodrik really a pluralist?

20 Nov, 2016 at 10:59 | Posted in Economics | Comments Off on Is Dani Rodrik really a pluralist?

Unlearning economics has a well-written and interesting review of Dani Rodrik’s book Economics Rules up on Pieria. Although the reviewer thinks there is much in the book to like and appreciate, there are also things he strongly objects to. Such as Rodrik’s stance on the issue of pluralism:

If Rodrik is at his strongest when discussing particular neoclassical models and their applications, he’s at his weakest when discussing non-neoclassical and non-economic approaches. Just as his discussion of the former benefits from a broad array of concrete examples, his discussion of the latter suffers from a failure to discuss any examples, coupled with a series of sweeping, unsubstantiated assertions about what is wrong with them. Thus, despite the fact that Rodrik considers himself “well exposed to…different traditions within the social sciences”, one is forced to conclude that he is almost completely ignorant of what they – as well as non-neoclassical economics – have to offer.

For example, Rodrik dismisses calls for methodological pluralism in economics with the bizarre claim that “no academic discipline is permissive of approaches that diverge too much from prevailing practices”. This is patently untrue: pluralism is simply par for the course in every other social science and almost every other discipline. As Alan Freeman once commented, economics is actually “more committed to the unity of its doctrines than theology, whose benchmark simply states that ‘Much of the excitement of the discipline lies in its contested nature.’” There is even a case that hard sciences like physics are more pluralist than economics. The insistence on a particular, rigid, deductive mathematical framework for approaching issues is a characteristic of economics and economics alone.

Most strangely, the case for economic pluralism follows directly from the defining theme of Rodrik’s book: that economics is about choosing the best model for the best situation. If this is so, how can he dismiss non-mainstream models outright? This would entail either a convincing methodological demonstration of why these models can’t be used in general, or else a series of case studies of why popular non-neoclassical models are inferior to neoclassical ones. Without engaging with pluralist literature – model-based or otherwise – Rodrik has no clear reasons as to why he can reject such approaches. Personally, I would love to see some engagement from mainstream economists with ideas like Minsky’s Financial Instability Hypothesis; capacity utilisation models; agent based models; cost-plus pricing; endogenous money. Note here that ‘engagement’ does not mean building one model which superficially claims to incorporate these ideas, then carrying on as normal.

Similar weakness is evident in Rodrik’s discussion of other social sciences and his (largely unfavourable) comparison of them to economics. He makes another bizarre claim: that it would be “truly rare” for junior researchers to challenge more senior researchers in other disciplines, with his only citation as the Sokal affair, where a nonsense paper was accepted into a sociology journal. It is not clear how this is directly relevant, but presumably Rodrik’s argument is that the editors simply accepted the paper because it referenced and agreed with them. But similar controversies have happened in many areas such as physics and electrical engineering. Are these just woolly, “interpretative” disciplines too? This really says more about the modern peer-review process than it does about any particular discipline. Besides, research suggests that economics is one of the most hierarchical disciplines in the social sciences.

Yours truly can’t but agree. For my own take on Rodrik’s book, see my review in Real-World Economics Review.

Econometric inferences — idiosyncratic, unstable and inconsistent

19 Nov, 2016 at 11:49 | Posted in Statistics & Econometrics | 1 Comment

The impossibility of proper specification is true generally in regression analyses across the social sciences, whether we are looking at the factors affecting occupational status, voting behavior, etc. The problem is that as implied by the three conditions for regression analyses to yield accurate, unbiased estimates, you need to investigate a phenomenon that has underlying mathematical regularities – and, moreover, you need to know what they are. Neither seems true. I have no reason to believe that the way in which multiple factors affect earnings, student achievement, and GNP have some underlying mathematical regularity across individuals or countries. More likely, each individual or country has a different function, and one that changes over time. Even if there was some constancy, the processes are so complex that we have no idea of what the function looks like.

Researchers recognize that they do not know the true function and seem to treat, usually implicitly, their results as a good-enough approximation. But there is no basis for the belief that the results of what is run in practice is anything close to the underlying phenomenon, even if there is an underlying phenomenon. This just seems to be wishful thinking. Most regression analysis research doesn’t even pay lip service to theoretical regularities. But you can’t just regress anything you want and expect the results to approximate reality. And even when researchers take somewhat seriously the need to have an underlying theoretical framework – as they have, at least to some extent, in the examples of studies of earnings, educational achievement, and GNP that I have used to illustrate my argument – they are so far from the conditions necessary for proper specification that one can have no confidence in the validity of the results.

Steven J. Klees

Most work in econometrics and regression analysis rests on the assumption that the researcher has a theoretical model that is ‘true.’ On the belief that one has a correct specification for an econometric model or regression, one proceeds as if the only problems remaining to be solved have to do with measurement and observation.

The problem is that there is precious little to support the perfect-specification assumption. Looking around in social science and economics, we don’t find a single regression or econometric model that lives up to the standards set by the ‘true’ theoretical model — and there is nothing that gives us reason to believe things will be different in the future.

To think that we are able to construct a model in which all relevant variables are included and the functional relationships between them are correctly specified is not only a belief with little support, but a belief impossible to support.

The theories we work with when building our econometric regression models are insufficient. No matter what we study, there are always some variables missing, and we don’t know the correct way to functionally specify the relationships between the variables.

Every regression model constructed is misspecified. There is always an endless list of possible variables to include, and endless possible ways to specify the relationships between them. So every applied econometrician comes up with his or her own specification and ‘parameter’ estimates. The econometric Holy Grail of consistent and stable parameter values is nothing but a dream.

The theoretical conditions that have to be fulfilled for regression analysis and econometrics really to work are nowhere near being met in reality. Making outlandish statistical assumptions does not provide a solid ground for doing relevant social science and economics. Although regression analysis and econometrics have become the most widely used quantitative methods in the social sciences and economics today, the fact remains that the inferences made from them are invalid.
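The specification problem is easy to demonstrate. The following sketch (my own illustration, not from the text) simulates data with a known data-generating process and shows how two equally plausible specifications of the same data yield very different ‘parameter’ estimates, a simple case of omitted-variable bias:

```python
# Illustrative sketch: two plausible specifications, two very different
# estimates for the 'same' parameter.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(size=n)       # x2 is correlated with x1
y = 2.0 * x1 + 3.0 * x2 + rng.normal(size=n)

def ols(X, y):
    """Ordinary least squares fit."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Specification A: the 'full' model, y ~ x1 + x2
beta_full = ols(np.column_stack([x1, x2]), y)

# Specification B: x2 omitted, y ~ x1
beta_short = ols(x1[:, None], y)

print(beta_full[0])    # close to the true value 2.0
print(beta_short[0])   # biased: close to 2.0 + 3.0 * 0.8 = 4.4
```

With x2 omitted, the coefficient on x1 absorbs part of x2’s effect, even though both specifications fit the data perfectly respectably — which is exactly why each econometrician’s own specification yields its own ‘parameter’ estimates.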

Friedman’s methodological folly

19 Nov, 2016 at 10:59 | Posted in Theory of Science & Methodology | 2 Comments

Friedman enters this scene arguing that all we need to do is predict successfully, that this can be done even without realistic theories, and that unrealistic theories are to be preferred to realistic ones, essentially because they can usually be more parsimonious.

The first thing to note about this response is that Friedman is attempting to turn inevitable failure into a virtue. In the context of economic modelling, the need to produce formulations in terms of systems of isolated atoms, where these are not characteristic of social reality, means that unrealistic formulations are more or less unavoidable. Arguing that they are to be preferred to realistic ones in this context belies the fact that there is not a choice …

My own response to Friedman’s intervention is that it was mostly an irrelevancy, but one that has been opportunistically grasped by some as a supposed defence of the profusion of unrealistic assumptions in economics. This would work if successful prediction were possible. But usually it is not.

Tony Lawson

If scientific progress in economics – as Robert Lucas and other latter-day followers of Milton Friedman seem to think – lies in our ability to tell ‘better and better stories,’ one would of course expect economics journals to be filled with articles supporting the stories with empirical evidence confirming the predictions. However, the journals still show a striking and embarrassing paucity of empirical studies that (try to) substantiate these predictive claims. Equally amazing is how little is said about the relationship between the models and their real-world target systems. It is as though explicit discussion, argumentation and justification on the subject aren’t considered to be required.

If the ultimate criterion of a model’s success is the extent to which it predicts and coheres with (parts of) reality, modern mainstream economics seems to be a hopeless misallocation of scientific resources. To focus scientific endeavours on proving things in models is a gross misapprehension of what economic theory ought to be about. Deductivist models and methods disconnected from reality are of no help in predicting, explaining or understanding real-world economies.

There can be no theory without assumptions since it is the assumptions embodied in a theory that provide, by way of reason and logic, the implications by which the subject matter of a scientific discipline can be understood and explained. These same assumptions provide, again, by way of reason and logic, the predictions that can be compared with empirical evidence to test the validity of a theory. It is a theory’s assumptions that are the premises in the logical arguments that give a theory’s explanations meaning, and to the extent those assumptions are false, the explanations the theory provides are meaningless no matter how logically powerful or mathematically sophisticated those explanations based on false assumptions may seem to be.

George Blackford

Happy meal

19 Nov, 2016 at 10:32 | Posted in Varia | Comments Off on Happy meal


(h/t Jeanette Meyer)

Economics — an academic discipline gone badly wrong

18 Nov, 2016 at 18:30 | Posted in Economics | 2 Comments

Three economics graduates have savaged the way the subject is taught at many universities.

Joe Earle, Cahal Moran and Zach Ward-Perkins are all, they write in The Econocracy: The Perils of Leaving Economics to the Experts, “of the generation that came of age in the maelstrom of the 2008 global financial crisis” and embarked on economics degrees at the University of Manchester in 2011 precisely in order to better “understand and shape the world”.

By their second year, they were sufficiently disillusioned to start “a campaign to reform economics education”, and they became active members of the Rethinking Economics network, which currently has over 40 groups in 13 countries. Their book sets out to examine how far their own experience is symptomatic of a more general malaise.

In order to do so, the authors carried out, they believe for the first time, “an in-depth curriculum review of seven [elite British] Russell Group universities…the course guides and examinations”, making a total of 174 economics modules in all.

What emerged was “a remarkable similarity in the content and structure of economics courses”, where “students only learn one particular type of [neoclassical] economics” and “are taught to accept this type of economics in an uncritical manner”.

In developing these arguments, the authors point to “the central role textbooks play” in economics degrees, even though in other social sciences students are expected to “engag[e] critically with the discipline’s academic literature”, in order to gain “an understanding of the debates and research agendas” and to “engage independently and critically with academics while improving research and review skills”.

Economics, in contrast, is treated as “a closed, fixed body of knowledge…It is as if students must be initiated into neoclassical economics in a way that shields them entirely from the real world, potential criticism or alternatives.”

Matthew Reisz

NHST — a case of statistical pseudoscience

18 Nov, 2016 at 18:13 | Posted in Statistics & Econometrics | Comments Off on NHST — a case of statistical pseudoscience


NHST is an incompatible amalgamation of the theories of Fisher and of Neyman and Pearson (Gigerenzer, 2004). Curiously, it is an amalgamation that is technically reassuring despite it being, philosophically, pseudoscience. More interestingly, the numerous critiques raised against it for the past 80 years have not only failed to debunk NHST from the researcher’s statistical toolbox, they have also failed to be widely known, to find their way into statistics manuals, to be edited out of journal submission requirements, and to be flagged up by peer-reviewers (e.g., Gigerenzer, 2004). NHST effectively negates the benefits that could be gained from Fisher’s and from Neyman-Pearson’s theories; it also slows scientific progress (Savage, 1957; Carver, 1978, 1993) and may be fostering pseudoscience. The best option would be to ditch NHST altogether and revert to the theories of Fisher and of Neyman-Pearson as—and when—appropriate. For everything else, there are alternative tools, among them exploratory data analysis (Tukey, 1977), effect sizes (Cohen, 1988), confidence intervals (Neyman, 1935), meta-analysis (Rosenthal, 1984), Bayesian applications (Dienes, 2014) and, chiefly, honest critical thinking (Fisher, 1960).

Jose Perezgonzalez
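Perezgonzalez’s case for effect sizes over mechanical significance testing can be illustrated with a short simulation (my own sketch, not from his text): with a large enough sample, even a practically negligible true effect produces a vanishingly small p-value, so ‘significance’ alone tells us almost nothing about whether an effect matters.

```python
# Sketch: a trivially small effect becomes 'highly significant' at scale.
import math
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
true_effect = 0.02                  # Cohen's d of 0.02: practically negligible
x = rng.normal(loc=true_effect, scale=1.0, size=n)

# One-sample z statistic and two-sided p-value (normal approximation)
z = x.mean() / (x.std(ddof=1) / math.sqrt(n))
p = math.erfc(abs(z) / math.sqrt(2))

print(f"effect size d = {x.mean():.3f}, p = {p:.2e}")  # tiny d, yet p far below 0.001
```

The observed effect remains around 0.02 standard deviations, an effect of no practical consequence in most settings, yet NHST stamps it ‘significant’ — which is precisely why effect sizes and confidence intervals belong in the toolbox.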

Making It Count

17 Nov, 2016 at 23:00 | Posted in Statistics & Econometrics | 2 Comments

Modern econometrics is fundamentally based on the assumption — usually made without any explicit justification — that we can gain causal knowledge by considering independent variables that may have an impact on the variation of a dependent variable. This is, however, far from self-evident. Often the fundamental causes are constant forces that are not amenable to the kind of analysis econometrics supplies us with. As Stanley Lieberson has it in his modern classic Making It Count:

One can always say whether, in a given empirical context, a given variable or theory accounts for more variation than another. But it is almost certain that the variation observed is not universal over time and place. Hence the use of such a criterion first requires a conclusion about the variation over time and place in the dependent variable. If such an analysis is not forthcoming, the theoretical conclusion is undermined by the absence of information …

Moreover, it is questionable whether one can draw much of a conclusion about causal forces from simple analysis of the observed variation … To wit, it is vital that one have an understanding, or at least a working hypothesis, about what is causing the event per se; variation in the magnitude of the event will not provide the answer to that question.

Trygve Haavelmo made a somewhat similar point back in 1941, when criticizing the treatment of the interest variable in Tinbergen’s regression analyses. That the regression coefficient of the interest rate variable was zero was, according to Haavelmo, not sufficient for inferring that “variations in the rate of interest play only a minor role, or no role at all, in the changes in investment activity.” Interest rates may very well play a decisive indirect role by influencing other causally effective variables. And:

the rate of interest may not have varied much during the statistical testing period, and for this reason the rate of interest would not “explain” very much of the variation in net profit (and thereby the variation in investment) which has actually taken place during this period. But one cannot conclude that the rate of interest would be inefficient as an autonomous regulator, which is, after all, the important point.

Causality in economics — and other social sciences — can never be solely a question of statistical inference. Causality entails more than predictability, and really explaining social phenomena in depth requires theory. Analysis of variation — the foundation of all econometrics — can never in itself reveal how these variations are brought about. Only when we are able to tie actions, processes or structures to the statistical relations detected can we say that we are getting at relevant explanations of causation. Too much in love with axiomatic-deductive modelling, neoclassical economists especially tend to forget that accounting for causation — how causes bring about their effects — demands deep subject-matter knowledge and acquaintance with intricate fabrics and contexts. As Keynes already argued in his A Treatise on Probability, statistics and econometrics should not primarily be seen as means of inferring causality from observational data, but rather as descriptions of patterns of associations and correlations that we may use as suggestions of possible causal relations.
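Haavelmo’s warning is easy to reproduce in a simulation (a hedged sketch of my own construction, not from Haavelmo): a variable with a strong structural coefficient that happens to barely vary during the sample period will ‘explain’ almost none of the observed variation, even though it may be a powerful autonomous regulator.

```python
# Sketch: causal strength versus in-sample explained variation.
import numpy as np

rng = np.random.default_rng(1)
n = 5_000
r = rng.normal(scale=0.01, size=n)   # 'interest rate': nearly constant in-sample
x = rng.normal(scale=1.0, size=n)    # another driver with ordinary variation
# Structural model: r is causally strong (coefficient 5.0)
y = 5.0 * r + 1.0 * x + rng.normal(scale=0.5, size=n)

var_y = y.var()
share_r = (5.0 * r).var() / var_y    # share of y's variance attributable to r
share_x = (1.0 * x).var() / var_y    # share attributable to x

print(f"share from r: {share_r:.4%}")   # tiny, despite the strong coefficient
print(f"share from x: {share_x:.2%}")   # dominant
```

An analysis of variation alone would dismiss r as unimportant; only structural knowledge of the data-generating process tells us it is, in Haavelmo’s words, an efficient autonomous regulator.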
