The best economics article of 2016

31 Dec, 2016 at 11:00 | Posted in Economics | 1 Comment

The best economics article of 2016 in my opinion was Paul Romer’s extremely well-written and brave frontal attack on the theories that have put macroeconomics on a path of ‘intellectual regress’ for three decades now:

Macroeconomists got comfortable with the idea that fluctuations in macroeconomic aggregates are caused by imaginary shocks, instead of actions that people take, after Kydland and Prescott (1982) launched the real business cycle (RBC) model …

In response to the observation that the shocks are imaginary, a standard defence invokes Milton Friedman’s (1953) methodological assertion from unnamed authority that “the more significant the theory, the more unrealistic the assumptions.” More recently, “all models are false” seems to have become the universal hand-wave for dismissing any fact that does not conform to the model that is the current favourite.

The noncommittal relationship with the truth revealed by these methodological evasions and the “less than totally convinced …” dismissal of fact goes so far beyond post-modern irony that it deserves its own label. I suggest “post-real.”

Paul Romer

There are many kinds of useless ‘post-real’ economics held in high regard within the mainstream economics establishment today. Few — if any — deserve that regard less than the macroeconomic theory/method — mostly connected with Nobel laureates Finn Kydland, Robert Lucas, Edward Prescott and Thomas Sargent — called calibration.


Paul Romer and yours truly are certainly not the only ones having doubts about the scientific value of calibration. In the Journal of Economic Perspectives (1996, vol. 10), Nobel laureates Lars Peter Hansen and James J. Heckman write:

It is only under very special circumstances that a micro parameter such as the inter-temporal elasticity of substitution or even a marginal propensity to consume out of income can be ‘plugged into’ a representative consumer model to produce an empirically concordant aggregate model … What credibility should we attach to numbers produced from their ‘computational experiments’, and why should we use their ‘calibrated models’ as a basis for serious quantitative policy evaluation? … There is no filing cabinet full of robust micro estimates ready to use in calibrating dynamic stochastic equilibrium models … The justification for what is called ‘calibration’ is vague and confusing.

Mathematical statistician Aris Spanos — in Error and Inference (Mayo & Spanos, 2010, p. 240) — is no less critical:

Given that “calibration” purposefully forsakes error probabilities and provides no way to assess the reliability of inference, how does one assess the adequacy of the calibrated model? …

The idea that it should suffice that a theory “is not obscenely at variance with the data” (Sargent, 1976, p. 233) is to disregard the work that statistical inference can perform in favor of some discretional subjective appraisal … it hardly recommends itself as an empirical methodology that lives up to the standards of scientific objectivity.

In physics it may possibly not be straining credulity too much to model processes as ergodic – where time and history do not really matter – but in social and historical sciences it is obviously ridiculous. If societies and economies were ergodic worlds, why do econometricians fervently discuss things such as structural breaks and regime shifts? That they do is an indication of how unrealistic it is to treat open systems as analyzable with ergodic concepts.

The future is not reducible to a known set of prospects. It is not like sitting at the roulette table and calculating what the future outcomes of spinning the wheel will be. Reading Lucas, Sargent, Prescott, Kydland and other calibrationists one comes to think of Robert Clower’s apt remark that

much economics is so far removed from anything that remotely resembles the real world that it’s often difficult for economists to take their own subject seriously.

As Romer says:

Math cannot establish the truth value of a fact. Never has. Never will.

So instead of assuming calibration and rational expectations to be right, one ought to confront the hypothesis with the available evidence. It is not enough to construct models. Anyone can construct models. To be seriously interesting, models have to come with an aim. They have to have an intended use. If the intention of calibration and rational expectations is to help us explain real economies, they have to be evaluated from that perspective. A model or hypothesis without a specific applicability does not really deserve our interest.

To say, as Edward Prescott does, that

one can only test if some theory, whether it incorporates rational expectations or, for that matter, irrational expectations, is or is not consistent with observations

is not enough. Without strong evidence, all kinds of absurd claims and nonsense may pretend to be science. We have to demand more of a justification than this rather watered-down version of “anything goes” when it comes to rationality postulates. If one proposes rational expectations one also has to support its underlying assumptions. No such support is given, which makes it rather puzzling how rational expectations has become the standard modeling assumption in much of modern macroeconomics. Perhaps the reason is that economists often mistake mathematical beauty for truth.

But I think Prescott’s view is also the reason why calibration economists are not particularly interested in empirical examinations of how real choices and decisions are made in real economies. In the hands of Lucas, Prescott and Sargent, rational expectations has been transformed from an – in principle – testable hypothesis to an irrefutable proposition. Believing in a set of irrefutable propositions may be comfortable – like religious convictions or ideological dogmas – but it is not science.

So where does this all lead us? What is the trouble ahead for economics? Putting a sticky-price DSGE lipstick on the RBC pig sure won’t do. Neither will — as Paul Romer notes — just looking the other way and pretending it’s raining:

The trouble is not so much that macroeconomists say things that are inconsistent with the facts. The real trouble is that other economists do not care that the macroeconomists do not care about the facts. An indifferent tolerance of obvious error is even more corrosive to science than committed advocacy of error.

Reformation in economics

30 Dec, 2016 at 17:52 | Posted in Economics | 4 Comments

This is a book about human limitations and the difficulty of gaining true insight into the world around us. There is, in truth, no way of separating these two things from one another. To try to discuss economics without understanding the difficulty of applying it to the real world is to consign oneself to dealing with pure makings of our imaginations. Much of economics at the time of writing is of this sort, although it is unclear whether such modes of thought should be called ‘economics’ and whether future generations will see them as such. There is every chance that the backward-looking eye of posterity will see much of what today’s economic departments produce in the same way as we now see phrenology: a highly technical, but ultimately ridiculous pseudoscience constructed rather unconsciously to serve the political needs of the era.

Highly recommended reading for everyone interested in making economics a relevant and realist science.

Con te partirò (personal)

30 Dec, 2016 at 16:07 | Posted in Varia | Comments Off on Con te partirò (personal)

Living next to an Opera house sure has its benefits if you love opera. Just twenty minutes ago two lovely opera singers performed Con te partirò on the little piazza in front of my house. Awesome!

This version is pretty good too …

For-profit schools — a total disaster

29 Dec, 2016 at 17:53 | Posted in Economics, Education & School | Comments Off on For-profit schools — a total disaster

To make education more like a private good, [voucher advocates] tried to change the conditions of both supply and demand. On the demand side, the central proposal was that of education ‘vouchers’, put forward most notably by the Nobel Prize-winning economist at the University of Chicago, Milton Friedman. The idea was that, rather than funding schools, government should provide funding directly to parents in the form of vouchers that could be used at whichever school the parents preferred, and topped up, if necessary, by additional fee payments.

As is typically the case, voucher advocates ignored the implications of their proposals for the distribution of income. In large measure, vouchers represent a simple cash transfer, going predominantly from the poor to the rich. The biggest beneficiaries would be those, mostly well-off, who were already sending their children to private schools, for whom the voucher would be a simple cash transfer. Those whose children remained at the same public school as before would gain nothing …

The most notable entrant in the US school sector was Edison Schools. Edison Schools was founded in 1992 and was widely viewed as representing the future of school education … For-profit schools were also introduced in Chile and Sweden …

The story was much the same everywhere: an initial burst of enthusiasm and high profits, followed by the exposure of poor practices and outcomes, and finally collapse, with governments being left to pick up the pieces …

Sweden introduced voucher-style reforms in 1992, and opened the market to for-profit schools. Initially favorable assessments were replaced by disillusionment as the performance of the school system as a whole deteriorated … By 2015, the majority of the public favoured banning for-profit schools. The Minister for Education described the system as a ‘political failure.’ Other critics described it in harsher terms (The Swedish for-profit ‘free’ school disaster).

Although a full analysis has not yet been undertaken, it seems likely that the for-profit schools engaged in ‘cream-skimming’, admitting able and well-behaved students, while pushing more problematic students back into the public system. The rules under which the reform was introduced included ‘safeguards’ to prevent cream-skimming, but such safeguards have historically proved ineffectual in the face of the profits to be made by evading them …

Why has market-oriented reform of education been such a failure?  …

Education is characterized by market failure, by potentially inequitable initial allocations and, most importantly, by the fact that the relationship between the education ‘industry’ and its ‘consumers’, that is between educational institutions and teachers on the one hand and students on the other, cannot be reduced to a market transaction.

The critical problem with this simple model is that students, by definition, cannot know in advance what they are going to learn, or make an informed judgement about what they are learning. They have to rely, to a substantial extent, on their teachers to select the right topics of study and to teach them appropriately …

The result is that education does not rely on market competition to any significant extent to sort good teachers and institutions from bad ones. Rather, education depends on a combination of sustained institutional standards and individual professional ethics to maintain their performance.

The implications for education policy are clear, at least at the school level. School education should be publicly funded and provided either by public schools or by non-profits with a clear educational mission, as opposed to corporate ‘school management organisations’.

John Quiggin/Crooked Timber

Neo-liberals and libertarians have always provided a lot of ideologically founded ideas and ‘theories’ to underpin their Panglossian view of markets. But when these are tested against reality they usually turn out to be wrong. The promised results are simply not to be found. And that goes for for-profit schools too.

New study shows marginal productivity theory has only a ‘negligible’ link to reality

29 Dec, 2016 at 16:59 | Posted in Economics | Comments Off on New study shows marginal productivity theory has only a ‘negligible’ link to reality

The correlation between high executive pay and good performance is “negligible”, a new academic study has found, providing reformers with fresh evidence that a shake-up of Britain’s corporate remuneration systems is overdue.

Although big company bosses enjoyed pay rises of more than 80 per cent in a decade, performance as measured by economic returns on invested capital was less than 1 per cent over the period, the paper by Lancaster University Management School says.

“Our findings suggest a material disconnect between pay and fundamental value generation for, and returns to, capital providers,” the authors of the report said.

In a study of more than a decade of data on the pay and performance of Britain’s 350 biggest listed companies, Weijia Li and Steven Young found that remuneration had increased 82 per cent in real terms over the 11 years to 2014 … The research found that the median economic return on invested capital, a preferable measure, was less than 1 per cent over the same period.

Patrick Jenkins/Financial Times

Mainstream economics textbooks usually refer to the interrelationship between technological development and education as the main causal force behind increased inequality. If the educational system (supply) develops at the same pace as technology (demand), there should be no increase, ceteris paribus, in the ratio between high-income (highly educated) groups and low-income (low education) groups. In the race between technology and education, the proliferation of skilled-biased technological change has, however, allegedly increased the premium for the highly educated group.

Another prominent explanation is that globalization – in accordance with Ricardo’s theory of comparative advantage and the Wicksell-Heckscher-Ohlin-Stolper-Samuelson factor price theory – has benefited capital in the advanced countries and labour in the developing countries. The problem with these theories is that they explicitly assume full employment and international immobility of the factors of production. Globalization means more than anything else that capital and labour have to a large extent become mobile across country borders. These mainstream trade theories are really not applicable in the world of today, and they are certainly not able to explain the international trade pattern that has developed during the last decades. Although it seems as though capital in the developed countries has benefited from globalization, it is difficult to detect a similar positive effect on workers in the developing countries.

There are, however, also some other quite obvious problems with these kinds of inequality explanations. The World Top Incomes Database shows that the increase in incomes has been concentrated especially in the top 1%. If education were the main reason behind the increasing income gap, one would expect a much broader group of people in the upper echelons of the distribution to be taking part in this increase. It is dubious, to say the least, to try to explain, for example, the high wages in the finance sector with a marginal productivity argument. High-end wages seem to be more a result of pure luck or membership of the same ‘club’ as those who decide on the wages and bonuses, than of ‘marginal productivity.’

Mainstream economics, with its technologically determined marginal productivity theory, seems to be difficult to reconcile with reality. Although card-carrying neoclassical apologists like Greg Mankiw want to recall John Bates Clark’s (1899) argument that marginal productivity results in an ethically just distribution, that is not something – even if it were true – that we could confirm empirically, since it is impossible realiter to separate out what is the marginal contribution of any factor of production. The hypothetical ceteris paribus addition of only one factor in a production process is often invoked in textbooks, but never seen in reality.

When reading mainstream economists like Mankiw who argue for the ‘just desert’ of the 0.1 %, one gets a strong feeling that they are ultimately trying to argue that a market economy is some kind of moral free zone where, if left undisturbed, people get what they ‘deserve.’ To most social scientists that probably smacks more of an evasive attempt to explain away a very disturbing structural ‘regime shift’ that has taken place in our societies. A shift that has very little to do with ‘stochastic returns to education.’ Those returns were also in place 30 or 40 years ago. At that time they meant that a top corporate manager perhaps earned 10–20 times more than ‘ordinary’ people earned. Today it means that they earn 100–200 times more than ‘ordinary’ people earn. A question of education? Hardly. It is probably more a question of greed and a lost sense of a common project of building a sustainable society.

Since the race between technology and education does not seem to explain the new growing income gap – and even if technological change has become more and more capital augmenting, it is also quite clear that not only the wages of low-skilled workers have fallen, but also the overall wage share – mainstream economists increasingly refer to ‘meritocratic extremism,’ ‘winners-take-all markets’ and ‘super star-theories’ for explanation. But this is also highly questionable.

Fans may want to pay extra to watch top-ranked athletes or movie stars performing on television and film, but corporate managers are hardly the stuff that people’s dreams are made of – and they seldom appear on television and in the movie theaters.

Everyone may prefer to employ the best corporate manager there is, but a corporate manager, unlike a movie star, can only provide his services to a limited number of customers. From the perspective of ‘super-star theories,’ a good corporate manager should only earn marginally more than an average corporate manager. The average earnings of corporate managers of the 50 biggest Swedish companies today are equivalent to the wages of 46 blue-collar workers.

It is difficult to see the takeoff of the top executives as anything other than a reward for being a member of the same illustrious club. That these earnings should be equivalent to indispensable and fair productive contributions – marginal products – strains credulity too far. That so many corporate managers and top executives make fantastic earnings today is strong evidence that the theory is patently wrong and basically functions as a legitimizing device for indefensible and growing inequalities.

No one ought to doubt that the idea that capitalism is an expression of impartial market forces of supply and demand bears but little resemblance to actual reality. Wealth and income distribution, both individual and functional, in a market society is to an overwhelmingly high degree influenced by institutionalized political and economic norms and power relations, things that have relatively little to do with marginal productivity in complete and profit-maximizing competitive market models – not to mention how extremely difficult, if not outright impossible, it is to empirically disentangle and measure different individuals’ contributions in the typical teamwork production that characterizes modern societies; or, especially when it comes to ‘capital,’ what it is supposed to mean and how to measure it. Remunerations do not necessarily correspond to any marginal product of different factors of production – or to ‘compensating differentials’ due to non-monetary characteristics of different jobs, natural ability, effort or chance.

Put simply – highly paid workers and corporate managers are not always highly productive workers and corporate managers, and less highly paid workers and corporate managers are not always less productive. History has over and over again disconfirmed the close connection between productivity and remuneration postulated in mainstream income distribution theory.

Neoclassical marginal productivity theory is obviously a collapsed theory from both a historical and a theoretical point of view, as shown already by Sraffa in the 1920s, and in the Cambridge capital controversy in the 1960s and 1970s.

When a theory is impossible to reconcile with facts there is only one thing to do — scrap it!

Observational studies vs. RCTs

29 Dec, 2016 at 14:12 | Posted in Theory of Science & Methodology | Comments Off on Observational studies vs. RCTs

 

Murray Rothbard on Adam Smith

28 Dec, 2016 at 16:03 | Posted in Economics | 4 Comments

Adam Smith (1723-90) is a mystery in a puzzle wrapped in an enigma. The mystery is the enormous and unprecedented gap between Smith’s exalted reputation and the reality of his dubious contribution to economic thought …

The problem is not simply that Smith was not the founder of economics.

The problem is that he originated nothing that was true, and that whatever he originated was wrong; that, even in an age that had fewer citations or footnotes than our own, Adam Smith was a shameless plagiarist, acknowledging little or nothing and stealing large chunks …

Even though an inveterate plagiarist, Smith had a Columbus complex, accusing close friends incorrectly of plagiarizing him. And even though a plagiarist, he plagiarized badly, adding new fallacies to the truths he lifted … Smith not only contributed nothing of value to economic thought; his economics was a grave deterioration from his predecessors …

The historical problem is this: how could this phenomenon have taken place with a book so derivative, so deeply flawed, so much less worthy than its predecessors?

The answer is surely not any lucidity or clarity of style or thought. For the much revered Wealth of Nations is a huge, sprawling, inchoate, confused tome, rife with vagueness, ambiguity and deep inner contradictions.

Not exactly the standard textbook presentation of the founding father of economics …

Probability calculus is no excuse for forgetfulness

28 Dec, 2016 at 14:16 | Posted in Theory of Science & Methodology | Comments Off on Probability calculus is no excuse for forgetfulness

When we cannot accept that the observations, along the time-series available to us, are independent, or cannot by some device be divided into groups that can be treated as independent, we get into much deeper water. For we have then, in strict logic, no more than one observation, all of the separate items having to be taken together. For the analysis of that the probability calculus is useless; it does not apply. We are left to use our judgement, making sense of what has happened as best we can, in the manner of the historian. Applied economics does then come back to history, after all.

I am bold enough to conclude, from these considerations, that the usefulness of ‘statistical’ or ‘stochastic’ methods in economics is a good deal less than is now conventionally supposed. We have no business to turn to them automatically; we should always ask ourselves, before we apply them, whether they are appropriate to the problem at hand. Very often they are not. Thus it is not at all sensible to take a small number of observations (sometimes no more than a dozen observations) and to use the rules of probability to deduce from them a ‘significant’ general law. For we are assuming, if we do so, that the variations from one to another of the observations are random, so that if we had a larger sample (as we do not) they would by some averaging tend to disappear. But what nonsense this is when the observations are derived, as not infrequently happens, from different countries, or localities, or industries — entities about which we may well have relevant information, but which we have deliberately decided, by our procedure, to ignore. By all means let us plot the points on a chart, and try to explain them; but it does not help in explaining them to suppress their names. The probability calculus is no excuse for forgetfulness.

John Hicks’ Causality in economics is an absolute masterpiece. It ought to be on the reading list of every course in economic methodology.
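Hicks’ point about ‘suppressing the names’ is easy to see in a small simulation. The sketch below uses purely hypothetical data (my own illustration, not Hicks’): a dozen observations drawn from three different countries in which the two variables are unrelated within every country, yet a pooled regression that ignores the country labels produces an impressive-looking slope.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: a dozen observations from three 'countries'. Within each
# country y does not depend on x at all; the countries just differ in level.
country = np.repeat([0, 1, 2], 4)
x = country * 2.0 + rng.normal(0, 1.0, 12)
y = country * 3.0 + rng.normal(0, 0.1, 12)

# Pooled regression that 'suppresses the names' of the countries:
print(f"pooled slope: {np.polyfit(x, y, 1)[0]:.2f}")

# Within each country the apparent 'general law' is just noise around zero:
for k in (0, 1, 2):
    m = country == k
    print(f"country {k} slope: {np.polyfit(x[m], y[m], 1)[0]:.2f}")
```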

Just idag är jag stark

28 Dec, 2016 at 10:07 | Posted in Varia | Comments Off on Just idag är jag stark

 

The search for heavy balls in economics

27 Dec, 2016 at 16:58 | Posted in Theory of Science & Methodology | 3 Comments

One of the limitations with economics is the restricted possibility to perform experiments, forcing it to mainly rely on observational studies for knowledge of real-world economies.

But still — the idea of performing laboratory experiments holds a firm grip on our wish to discover (causal) relationships between economic ‘variables.’ If only we could isolate and manipulate variables in controlled environments, we would probably find ourselves in a situation where we, with greater ‘rigour’ and ‘precision,’ could describe, predict, or explain economic happenings in terms of ‘structural’ causes, ‘parameter’ values of relevant variables, and economic ‘laws.’

Galileo Galilei’s experiments are often held up as exemplary of how to perform experiments to learn something about the real world. Galileo’s experiments were, according to Nancy Cartwright (Hunting Causes and Using Them, p. 223),

designed to find out what contribution the motion due to the pull of the earth will make, with the assumption that the contribution is stable across all the different kinds of situations falling bodies will get into … He eliminated (as far as possible) all other causes of motion on the bodies in his experiment so that he could see how they move when only the earth affects them. That is the contribution that the earth’s pull makes to their motion.

Galileo’s heavy balls dropping from the tower of Pisa confirmed that the distance an object falls is proportional to the square of time, and that this law (empirical regularity) of falling bodies could be applicable outside a vacuum tube when, e.g., air resistance is negligible.

The big problem is to decide or find out exactly for which objects air resistance (and other potentially ‘confounding’ factors) is ‘negligible.’ In the case of heavy balls, air resistance is obviously negligible, but how about feathers or plastic bags?

One possibility is to take the all-encompassing-theory road and find out all about possible disturbing/confounding factors — not only air resistance — influencing the fall, and build that into one great model delivering accurate predictions on what happens when the object that falls is not only a heavy ball, but feathers and plastic bags. This usually amounts to ultimately stating some kind of ceteris paribus interpretation of the ‘law.’

Another road to take would be to concentrate on the negligibility assumption and to specify the domain of applicability to be only heavy compact bodies. The price you have to pay for this is that (1) ‘negligibility’ may be hard to establish in open real-world systems, (2) the generalisation you can make from ‘sample’ to ‘population’ is heavily restricted, and (3) you actually have to use some ‘shoe leather’ and empirically try to find out how large is the ‘reach’ of the ‘law.’
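A minimal numerical sketch of the negligibility point, with made-up parameters and a simple linear-drag approximation (my own illustration): for a heavy, compact ball the distance predicted by Galileo’s law d = gt²/2 is almost exact, while for a feather-like object air resistance swamps the ‘law.’

```python
import numpy as np

def fall_distance(mass_kg, drag_coeff, t_end=2.0, dt=1e-3, g=9.81):
    """Distance fallen with simple linear air drag: m*dv/dt = m*g - k*v."""
    v = d = 0.0
    for _ in range(int(t_end / dt)):
        v += (g - (drag_coeff / mass_kg) * v) * dt
        d += v * dt
    return d

ideal = 0.5 * 9.81 * 2.0**2                                  # d = g*t^2/2 in a vacuum
print(f"Galileo's law: {ideal:.2f} m")
print(f"heavy ball:    {fall_distance(5.0, 0.05):.2f} m")    # drag negligible
print(f"feather-like:  {fall_distance(0.005, 0.05):.2f} m")  # drag dominates
```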

In mainstream economics one has usually settled for the ‘theoretical’ road (and in case you think the present ‘natural experiments’ hype has changed anything, remember that to mimic real experiments, exceedingly stringent special conditions have to obtain).

In the end, it all boils down to one question — are there any heavy balls to be found in economics, so that we can indisputably establish the existence of economic laws operating in real-world economies?

As far as I can see there are some heavy balls out there, but not even one single real economic law.

Economic factors/variables are more like feathers than heavy balls — non-negligible factors (like air resistance and chaotic turbulence) are hard to rule out as having no influence on the object studied.

Galilean experiments are hard to carry out in economics, and the theoretical ‘analogue’ models economists construct and in which they perform their ‘thought experiments’ build on assumptions that are far away from the kind of idealized conditions under which Galileo performed his experiments. The ‘nomological machines’ that Galileo and other scientists have been able to construct have no real analogues in economics. The stability, autonomy, modularity, and interventional invariance that we may find between entities in nature simply are not there in real-world economies. That’s a real-world fact, and contrary to the beliefs of most mainstream economists, it won’t go away simply by applying deductive-axiomatic economic theory with tons of more or less unsubstantiated assumptions.

By this I do not mean to say that we have to discard all (causal) theories/laws building on modularity, stability, invariance, etc. But we have to acknowledge the fact that outside the systems that possibly fulfil these requirements/assumptions, they are of little substantial value. Running paper-and-pen experiments on artificial ‘analogue’ model economies is a sure way of ‘establishing’ (causal) economic laws or solving intricate econometric problems of autonomy, identification, invariance and structural stability — in the model world. But they are poor substitutes for the real thing and they don’t have much bearing on what goes on in real-world open social systems. Setting up convenient circumstances for conducting Galilean experiments may tell us a lot about what happens under those kinds of circumstances. But few, if any, real-world social systems are ‘convenient.’ So most of those systems, theories and models are irrelevant for letting us know what we really want to know.

To solve, understand, or explain real-world problems you actually have to know something about them – logic, pure mathematics, data simulations or deductive axiomatics don’t take you very far. Most econometrics and economic theories/models are splendid logic machines. But — applying them to the real world is a totally hopeless undertaking! The assumptions one has to make in order to successfully apply these deductive-axiomatic theories/models/machines are devastatingly restrictive and mostly empirically untestable — and hence make their real-world scope ridiculously narrow. To fruitfully analyse real-world phenomena with models and theories, you cannot build on assumptions that are patently absurd and known to be so.

No matter how much you would like the world to entirely consist of heavy balls, the world is not like that. The world also has its fair share of feathers and plastic bags.

Economic growth and the size of the ‘private sector’

26 Dec, 2016 at 20:32 | Posted in Statistics & Econometrics | 6 Comments

Economic growth has long interested economists. Not least, the question of which factors are behind high growth rates has been in focus. The factors usually pointed at are mainly economic, social and political variables. In an interesting study from the University of Helsinki, Tatu Westling has expanded the potential causal variables to also include biological and sexual variables. In the report Male Organ and Economic Growth: Does Size Matter (2011), he has — based on the ‘cross-country’ data of Mankiw et al (1992), Summers and Heston (1988), Polity IV Project data on political regime types and a data set on average penis size in 76 non-oil-producing countries (www.everyoneweb.com/worldpenissize) — been able to show that the level and growth of GDP per capita between 1960 and 1985 vary with penis size. Replicating Westling’s study — I have used my favourite program Gretl — we obtain the following two charts:

[Two charts: GDP level and GDP growth plotted against average penis size]
The Solow-based model estimates show that maximum GDP is achieved with a penis size of about 13.5 cm, and that the male reproductive organ (OLS without control variables) is negatively correlated with — and able to explain 20% of the variation in — GDP growth.

Even with reservations for problems such as endogeneity and confounders, one cannot but agree with Westling’s final assessment that “the ‘male organ hypothesis’ is worth pursuing in future research” and that it “clearly seems that the ‘private sector’ deserves more credit for economic development than is typically acknowledged.” Or? …
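For readers who want to see how such a replication could be set up outside Gretl, here is a minimal Python sketch using statsmodels. The data below are hypothetical placeholders (the actual Westling dataset is not reproduced here), generated only so that the two regressions mimic the reported results:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)

# Hypothetical placeholder data standing in for Westling's 76-country sample.
size = rng.uniform(9, 18, 76)                        # average organ size (cm)
log_gdp = 8 + 0.9 * size - 0.033 * size**2 + rng.normal(0, 0.3, 76)
growth = 4 - 0.15 * size + rng.normal(0, 0.8, 76)    # GDP growth, per cent p.a.

# Level regression with a quadratic term: an inverted U whose maximum,
# at -b1/(2*b2), mimics the roughly 13.5 cm 'optimum' reported in the paper.
level_fit = sm.OLS(log_gdp, sm.add_constant(np.column_stack([size, size**2]))).fit()
b1, b2 = level_fit.params[1], level_fit.params[2]
print(f"implied GDP-maximising size: {-b1 / (2 * b2):.1f} cm")

# Growth regression without controls: a negative slope and an R^2 in the
# neighbourhood of the 20% of variation 'explained'.
growth_fit = sm.OLS(growth, sm.add_constant(size)).fit()
print(f"growth slope: {growth_fit.params[1]:.2f}, R^2: {growth_fit.rsquared:.2f}")
```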

Milton Friedman’s critique of econometrics

26 Dec, 2016 at 15:28 | Posted in Statistics & Econometrics | 2 Comments

Tinbergen’s results cannot be judged by ordinary tests of statistical significance. The reason is that the variables with which he winds up, the particular series measuring these variables, the leads and lags, and various other aspects of the equations besides the particular values of the parameters (which alone can be tested by the usual statistical technique) have been selected after an extensive process of trial and error because they yield high coefficients of correlation. Tinbergen is seldom satisfied with a correlation coefficient less than 0.98. But these attractive correlation coefficients create no presumption that the relationships they describe will hold in the future. The multiple regression equations which yield them are simply tautological reformulations of selected economic data. Taken at face value, Tinbergen’s work “explains” the errors in his data no less than their real movements; for although many of the series employed in the study would be accorded, even by their compilers, a margin of error in excess of 5 per cent, Tinbergen’s equations “explain” well over 95 per cent of the observed variation.

As W. C. Mitchell put it some years ago, “a competent statistician, with sufficient clerical assistance and time at his command, can take almost any pair of time series for a given period and work them into forms which will yield coefficients of correlation exceeding ±.9 …. So work of [this] sort … must be judged, not by the coefficients of correlation obtained within the periods for which they have manipulated the data, but by the coefficients which they get in earlier or later periods to which their formulas may be applied.” But Tinbergen makes no attempt to determine whether his equations agree with data other than those which they translate …

The methods used by Tinbergen do not and cannot provide an empirically tested explanation of business cycle movements.

Milton Friedman

It is usual in macroeconomic (media) discourses to portray Keynes and Friedman as bitter enemies. But on some issues they are in fact very close to each other. Econometrics, and especially its limited applicability, is — as can be seen by comparing Friedman’s critique above with Keynes’ critique — one such prominent case.
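Mitchell’s test, judging an equation by the periods to which it was not fitted, is easy to illustrate. A small sketch with hypothetical data: pick whichever of 200 unrelated random-walk series fits best over the estimation period, and the in-sample fit looks impressive, while the same equation applied to the later period it never saw typically falls apart.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical yearly data: a trending target series and 200 unrelated
# random-walk 'regressors' to search over, in the spirit of the critique.
n_fit, n_test = 20, 10
y = 0.5 * np.arange(n_fit + n_test) + rng.normal(0, 1, n_fit + n_test)
x_all = rng.normal(0, 1, (n_fit + n_test, 200)).cumsum(axis=0)

def fit_r2(x):
    """In-sample R^2 of a straight-line fit over the estimation period."""
    coefs = np.polyfit(x[:n_fit], y[:n_fit], 1)
    resid = y[:n_fit] - np.polyval(coefs, x[:n_fit])
    return 1 - resid.var() / y[:n_fit].var()

best = max(range(200), key=lambda j: fit_r2(x_all[:, j]))   # specification search
x = x_all[:, best]
coefs = np.polyfit(x[:n_fit], y[:n_fit], 1)                 # the chosen 'equation'

resid_out = y[n_fit:] - np.polyval(coefs, x[n_fit:])        # later, unseen years
print(f"in-sample R^2:     {fit_r2(x):.2f}")
print(f"out-of-sample R^2: {1 - resid_out.var() / y[n_fit:].var():.2f}")
```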

Economists — nothing but a bunch of idiots savants

25 Dec, 2016 at 13:00 | Posted in Economics | 14 Comments

Let’s be honest: no one knows what is happening in the world economy today. Recovery from the collapse of 2008 has been unexpectedly slow …

Policymakers don’t know what to do. They press the usual (and unusual) levers and nothing happens. Quantitative easing was supposed to bring inflation “back to target.” It didn’t. Fiscal contraction was supposed to restore confidence. It didn’t …

Most economics students are not required to study psychology, philosophy, history, or politics. They are spoon-fed models of the economy, based on unreal assumptions, and tested on their competence in solving mathematical equations. They are never given the mental tools to grasp the whole picture …

Good economists have always understood that this method has severe limitations. They use their discipline as a kind of mental hygiene to protect against the grossest errors in thinking …

Today’s professional economists have studied almost nothing but economics. They don’t even read the classics of their own discipline. Economic history comes, if at all, from data sets. Philosophy, which could teach them about the limits of the economic method, is a closed book. Mathematics, demanding and seductive, has monopolized their mental horizons. The economists are the idiots savants of our time.

Robert Skidelsky

Yes indeed — modern economics has become increasingly irrelevant to the understanding of the real world. In his seminal book Economics and Reality (1997) Tony Lawson traced this irrelevance to the failure of economists to match their deductive-axiomatic methods with their subject. As shown by Skidelsky, it is as relevant today as it was twenty years ago.


It is still a fact that within mainstream economics internal validity is everything and external validity nothing. Why anyone should be interested in those kinds of theories and models is beyond my imagination. As long as mainstream economists do not come up with any export licenses for their theories and models to the real world in which we live, they really should not be surprised if people say that this is not science, but autism!

Studying mathematics and logic is interesting and fun. It sharpens the mind. In pure mathematics and logic we do not have to worry about external validity. But economics is not pure mathematics or logic. It’s about society. The real world. Forgetting that, economics is really in dire straits.

Already back in 1991, the JEL published a study by a commission — chaired by Anne Krueger and including people like Kenneth Arrow, Edward Leamer, and Joseph Stiglitz — focusing on “the extent to which graduate education in economics may have become too removed from real economic problems.” The commission members reported from their own experience “that it is an underemphasis on the ‘linkages’ between tools, both theory and econometrics, and ‘real world problems’ that is the weakness of graduate education in economics,” and that both students and faculty sensed “the absence of facts, institutional information, data, real-world issues, applications, and policy problems.” And in conclusion they wrote:

The commission’s fear is that graduate programs may be turning out a generation with too many idiot savants skilled in technique but innocent of real economic issues.

Not much is different today. Economics education is still in dire need of a remake. How about bringing economics back into some contact with reality?

Added (GMT 1215): And, of course, Paul Krugman and fellow ‘New Keynesians’ have been quick to tell us that although Skidelsky is absolutely right, ‘basic macro’ (read: IS-LM ‘New Keynesianism’) has done just fine, and that it is only RBC and New Classical DSGE macroeconomics that has faltered. But that is, sad to say, nothing but pure nonsense!

Econometrics textbooks — vague and confused causal analysis

25 Dec, 2016 at 12:17 | Posted in Statistics & Econometrics | 1 Comment

Econometric textbooks fall on all sides of this debate. Some explicitly ascribe causal meaning to the structural equation while others insist that it is nothing more than a compact representation of the joint probability distribution. Many fall somewhere in the middle – attempting to provide the econometric model with sufficient power to answer economic problems but hesitant to anger traditional statisticians with claims of causal meaning. The end result for many textbooks is that the meaning of the econometric model and its parameters are vague and at times contradictory …

The purpose of this report is to examine the extent to which these and other advances in causal modeling have benefited education in econometrics. Remarkably, we find that not only have they failed to penetrate the field, but even basic causal concepts lack precise definitions and, as a result, continue to be confused with their statistical counterparts.

Judea Pearl & Bryant Chen

Pearl and Chen’s article addresses two very important questions about the teaching of modern econometrics and its textbooks – how causality is treated in general, and, more specifically, to what extent a distinct causal notation is used.

The authors have for years been part of an extended effort to advance explicit causal modeling (especially graphical models) in the applied sciences, and this is a first examination of the extent to which these endeavours have found their way into econometrics textbooks.

Although the text is partly of a rather demanding ‘technical’ nature, I would definitely recommend reading it, especially for social scientists with an interest in these issues.

Pearl’s seminal contribution to this research field is well-known and indisputable. But on the ‘taming’ and ‘resolve’ of the issues, I have to admit that — under the influence of especially David Freedman — I still have some doubts about the reach, especially in terms of realism and relevance, of these solutions for the social sciences in general and economics in particular. And with regard to the present article, I think that since the distinction between the ‘interventionist’ E[Y|do(X)] and the more traditional ‘conditional expectationist’ E[Y|X] is so crucial for the subsequent argumentation, a more elaborate presentation would have been of value, not least because the authors could then have explained more fully why the first is so important, and if/why it (in yours truly’s and Freedman’s view) can be exported from ‘engineering’ contexts, where it arguably applies easily and universally, to ‘socio-economic’ contexts, where ‘manipulability’ and ‘modularity’ are perhaps not so universally at hand.
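For readers unfamiliar with the notation, here is a toy simulation of my own (not taken from Pearl and Chen) of why E[Y|X] and E[Y|do(X)] can come apart when a confounder is present:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# A toy structural model of my own (not from the article): a confounder C
# drives both X and Y, and X has NO causal effect on Y at all.
c = rng.normal(size=n)
x = c + rng.normal(size=n)
y = 2 * c + rng.normal(size=n)

# E[Y | X ≈ 1]: condition by selecting the observations where X happens to be 1.
near_one = np.abs(x - 1) < 0.05
print(f"E[Y | X = 1]     ≈ {y[near_one].mean():.2f}")   # about 1, purely via C

# E[Y | do(X = 1)]: intervene by *setting* X to 1 for everyone and re-running
# the mechanism for Y, which in this toy model does not involve X at all.
y_do = 2 * c + rng.normal(size=n)
print(f"E[Y | do(X = 1)] ≈ {y_do.mean():.2f}")          # about 0
```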

A popular idea in quantitative social sciences is to think of a cause (C) as something that increases the probability of its effect or outcome (O). That is:

P(O|C) > P(O|-C)

However, as is also well known, a correlation between two variables, say A and B, does not necessarily imply that one is a cause of the other, or the other way around, since they may both be an effect of a common cause, C.

In statistics and econometrics we usually solve this ‘confounder’ problem by ‘controlling for’ C, i.e. by holding C fixed. This means that we actually look at different ‘populations’ – those in which C occurs in every case, and those in which C doesn’t occur at all. Within each of these populations, knowing the value of A does not influence the probability of C [P(C|A) = P(C)]. So if there still exists a correlation between A and B in either of these populations, there has to be some other cause operating. But if all other possible causes have also been ‘controlled for’, and there is still a correlation between A and B, we may safely conclude that A is a cause of B, since by ‘controlling for’ all other possible causes, the correlation between the putative cause A and all the other possible causes (D, E, F …) is broken.

This is of course a very demanding prerequisite, since we may never actually be sure to have identified all putative causes. Even in scientific experiments, the number of uncontrolled causes may be innumerable. Since nothing less will do, we all understand how hard it is to actually get from correlation to causality. This also means that relying only on statistics or econometrics is not enough to deduce causes from correlations.
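A small numerical illustration of ‘controlling for’ a common cause (again a toy model of my own): the raw correlation between A and B is real, but it vanishes within each level of C.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Toy model: a common cause C raises the probability of both A and B,
# while A has no effect on B.
c = rng.binomial(1, 0.5, n)
a = rng.binomial(1, 0.2 + 0.6 * c)
b = rng.binomial(1, 0.2 + 0.6 * c)

print(f"corr(A, B), C ignored: {np.corrcoef(a, b)[0, 1]:.2f}")   # clearly positive
for value in (0, 1):
    stratum = c == value
    print(f"corr(A, B) | C = {value}: {np.corrcoef(a[stratum], b[stratum])[0, 1]:.2f}")  # roughly 0
```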

Some people think that randomization may solve the empirical problem. By randomizing we are getting different ‘populations’ that are homogeneous with regard to all variables except the one we think is a genuine cause. In that way we are supposedly able to do without actually knowing what all these other factors are.

If you succeed in performing an ideal randomization with different treatment groups and control groups, that is attainable. But it presupposes that you really have been able to establish – and not just assume – that the probability of all other causes but the putative one (A) has the same distribution in the treatment and control groups, and that the probability of assignment to treatment or control groups is independent of all other possible causal variables.

Unfortunately, real experiments and real randomizations seldom or never achieve this. So, yes, we may do without knowing all causes, but it takes ideal experiments and ideal randomizations to do that, not real ones.

That means that in practice we do have to have sufficient background knowledge to deduce causal knowledge. Without old knowledge, we can’t get new knowledge – and, no causes in, no causes out.

Econometrics is basically a deductive method. Given the assumptions (such as manipulability, transitivity, Reichenbach probability principles, separability, additivity, linearity, etc.) it delivers deductive inferences. The problem, of course, is that we will never completely know when the assumptions are right. Real target systems are seldom epistemically isomorphic to axiomatic-deductive models/systems, and even if they were, we still have to argue for the external validity of the conclusions reached from within these epistemically convenient models/systems. Causal evidence generated by statistical/econometric procedures may be valid in ‘closed’ models, but what we usually are interested in is causal evidence in the real target system we happen to live in.

Advocates of econometrics want to have deductively automated answers to fundamental causal questions. But to apply ‘thin’ methods we have to have ‘thick’ background knowledge of what’s going on in the real world, and not in idealized models. Conclusions can only be as certain as their premises – and that also applies to the quest for causality in econometrics.

Proper use of regression analysis

25 Dec, 2016 at 10:25 | Posted in Statistics & Econometrics | Comments Off on Proper use of regression analysis

Level I regression analysis does not require any assumptions about how the data were generated. If one wants more from the data analysis, assumptions are required. For a Level II regression analysis, the added feature is statistical inference: estimation, hypothesis tests and confidence intervals. When the data are produced by probability sampling from a well-defined population, estimation, hypothesis tests and confidence intervals are on the table.

A random sample of inmates from the set of all inmates in a state’s prison system might be properly used to estimate, for example, the number of gang members in the state’s overall prison system. Hypothesis tests and confidence intervals might also be usefully employed. In addition, one might estimate, for instance, the distribution of in-prison misconduct committed by men compared to the in-prison misconduct committed by women, holding age fixed. Hypothesis tests or confidence intervals could again follow naturally. The key assumption is that each inmate in the population has a known probability of selection. If the probability sampling is implemented largely as designed, statistical inference can rest on reasonably sound footing. Note that there is no talk of causal effects and no causal model. Description is combined with statistical inference.

In the absence of probability sampling, the case for Level II regression analysis is far more difficult to make …

The goal in a Level III regression analysis is to supplement Level I description and Level II statistical inference with causal inference. In conventional regression, for instance, one needs a nearly right model [a model is the ‘right’ model when it accurately represents how the data on hand were generated ], but one must also be able to argue credibly that manipulation of one or more regressors alters the expected conditional distribution of the response. Moreover, any given causal variable can be manipulated independently of any other causal variable and independently of the disturbances. There is nothing in the data itself that can speak to these requirements. The case will rest on how the data were actually produced. For example, if there was a real intervention, a good argument for manipulability might well be made. Thus, an explicit change in police patrolling practices ordered by the local Chief will perhaps pass the manipulability sniff test. Changes in the demographic mix of a neighborhood will probably not …

Level III regression analysis adds to description and statistical inference, causal inference. One requires not just a nearly right model of how the data were generated, but good information justifying any claims that all causal variables are independently manipulable. In the absence of a nearly right model and one or more regressors whose values can be “set” independently of other regressors and the disturbances, causal inferences cannot make much sense.

The implications for practice in criminology are clear but somewhat daunting. With rare exceptions, regression analyses of observational data are best undertaken at Level I. With proper sampling, a Level II analysis can be helpful. The goal is to characterize associations in the data, perhaps taking uncertainty into account … Reviewers and journal editors typically equate proper statistical practice with Level III.

Richard Berk
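To make the three levels concrete, here is a minimal sketch with made-up data (my own illustration, not Berk’s), showing what each level licenses one to report:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)

# Made-up inmate-style data: age and a misconduct score (illustrative only).
age = rng.uniform(18, 60, 500)
misconduct = np.clip(5 - 0.07 * age + rng.normal(0, 1, 500), 0, None)

fit = sm.OLS(misconduct, sm.add_constant(age)).fit()

# Level I: pure description of the association in the data at hand.
print(f"descriptive slope: {fit.params[1]:.3f}")

# Level II: the confidence interval is only meaningful if these 500 cases were
# drawn by probability sampling from a well-defined population.
print("95% CI for the slope:", fit.conf_int()[1])

# Level III: a causal reading would additionally require a nearly right model
# and an age variable that could be manipulated independently of the other
# regressors and disturbances -- nothing in the data itself certifies that.
```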
