The best economics article of 2016

31 December, 2016 at 11:00 | Posted in Economics | 1 Comment

The best economics article of 2016 was, in my opinion, Paul Romer’s extremely well-written and brave frontal attack on the theories that have put macroeconomics on a path of ‘intellectual regress’ for three decades now:

Macroeconomists got comfortable with the idea that fluctuations in macroeconomic aggregates are caused by imaginary shocks, instead of actions that people take, after Kydland and Prescott (1982) launched the real business cycle (RBC) model …

In response to the observation that the shocks are imaginary, a standard defence invokes Milton Friedman’s (1953) methodological assertion from unnamed authority that “the more significant the theory, the more unrealistic the assumptions.” More recently, “all models are false” seems to have become the universal hand-wave for dismissing any fact that does not conform to the model that is the current favourite.

The noncommittal relationship with the truth revealed by these methodological evasions and the “less than totally convinced …” dismissal of fact goes so far beyond post-modern irony that it deserves its own label. I suggest “post-real.”

Paul Romer

There are many kinds of useless ‘post-real’ economics held in high regard within the mainstream economics establishment today. Few — if any — deserve that regard less than the macroeconomic theory/method — mostly connected with Nobel laureates Finn Kydland, Robert Lucas, Edward Prescott and Thomas Sargent — called calibration.

Paul Romer and yours truly are certainly not the only ones having doubts about the scientific value of calibration. In the Journal of Economic Perspectives (1996, vol. 10), Nobel laureates Lars Peter Hansen and James J. Heckman write:

It is only under very special circumstances that a micro parameter such as the inter-temporal elasticity of substitution or even a marginal propensity to consume out of income can be ‘plugged into’ a representative consumer model to produce an empirically concordant aggregate model … What credibility should we attach to numbers produced from their ‘computational experiments’, and why should we use their ‘calibrated models’ as a basis for serious quantitative policy evaluation? … There is no filing cabinet full of robust micro estimates ready to use in calibrating dynamic stochastic equilibrium models … The justification for what is called ‘calibration’ is vague and confusing.

Mathematical statistician Aris Spanos — in  Error and Inference (Mayo & Spanos, 2010, p. 240) — is no less critical:

Given that “calibration” purposefully forsakes error probabilities and provides no way to assess the reliability of inference, how does one assess the adequacy of the calibrated model? …

The idea that it should suffice that a theory “is not obscenely at variance with the data” (Sargent, 1976, p. 233) is to disregard the work that statistical inference can perform in favor of some discretional subjective appraisal … it hardly recommends itself as an empirical methodology that lives up to the standards of scientific objectivity

In physics it may possibly not be straining credulity too much to model processes as ergodic – where time and history do not really matter – but in the social and historical sciences it is obviously ridiculous. If societies and economies were ergodic worlds, why do econometricians fervently discuss things such as structural breaks and regime shifts? That they do is an indication of how unrealistic it is to treat open systems as if they were analyzable with ergodic concepts.

The future is not reducible to a known set of prospects. It is not like sitting at the roulette table and calculating what the future outcomes of spinning the wheel will be. Reading Lucas, Sargent, Prescott, Kydland and other calibrationists one comes to think of Robert Clower’s apt remark that

much economics is so far removed from anything that remotely resembles the real world that it’s often difficult for economists to take their own subject seriously.

As Romer says:

Math cannot establish the truth value of a fact. Never has. Never will.

So instead of assuming calibration and rational expectations to be right, one ought to confront the hypothesis with the available evidence. It is not enough to construct models. Anyone can construct models. To be seriously interesting, models have to come with an aim. They have to have an intended use. If the intention of calibration and rational expectations is to help us explain real economies, it has to be evaluated from that perspective. A model or hypothesis without a specific applicability does not really deserve our interest.

To say, as Edward Prescott does, that

one can only test if some theory, whether it incorporates rational expectations or, for that matter, irrational expectations, is or is not consistent with observations

is not enough. Without strong evidence, all kinds of absurd claims and nonsense may pretend to be science. We have to demand more of a justification than this rather watered-down version of “anything goes” when it comes to rationality postulates. If one proposes rational expectations one also has to support its underlying assumptions. No such support is given, which makes it rather puzzling how rational expectations has become the standard modeling assumption in much of modern macroeconomics. Perhaps the reason is that economists often mistake mathematical beauty for truth.

But I think Prescott’s view is also the reason why calibration economists are not particularly interested in empirical examinations of how real choices and decisions are made in real economies. In the hands of Lucas, Prescott and Sargent, rational expectations has been transformed from an – in principle – testable hypothesis into an irrefutable proposition. Believing in a set of irrefutable propositions may be comfortable – like religious convictions or ideological dogmas – but it is not science.

So where does this all lead us? What is the trouble ahead for economics? Putting sticky-price DSGE lipstick on the RBC pig sure won’t do. Neither will — as Paul Romer notes — just looking the other way and pretending it’s raining:

The trouble is not so much that macroeconomists say things that are inconsistent with the facts. The real trouble is that other economists do not care that the macroeconomists do not care about the facts. An indifferent tolerance of obvious error is even more corrosive to science than committed advocacy of error.

Reformation in economics

30 December, 2016 at 17:52 | Posted in Economics | 4 Comments

This is a book about human limitations and the difficulty of gaining true insight into the world around us. There is, in truth, no way of separating these two things from one another. To try to discuss economics without understanding the difficulty of applying it to the real world is to consign oneself to dealing with the pure makings of our imaginations. Much of economics at the time of writing is of this sort, although it is unclear whether such modes of thought should be called ‘economics’ and whether future generations will see them as such. There is every chance that the backward-looking eye of posterity will see much of what today’s economics departments produce in the same way as we now see phrenology: a highly technical, but ultimately ridiculous pseudoscience constructed rather unconsciously to serve the political needs of the era.

Highly recommended reading for everyone interested in making economics a relevant and realist science.

Con te partirò (personal)

30 December, 2016 at 16:07 | Posted in Varia | Comments Off on Con te partirò (personal)

Living next to an Opera house sure has its benefits if you love opera. Just twenty minutes ago two lovely opera singers performed Con te partirò on the little piazza in front of my house. Awesome!

This version is pretty good too …

For-profit schools — a total disaster

29 December, 2016 at 17:53 | Posted in Economics, Education & School | Comments Off on For-profit schools — a total disaster

To make education more like a private good, [voucher advocates] tried to change the conditions of both supply and demand. On the demand side, the central proposal was that of education ‘vouchers’, put forward most notably by the Nobel Prize-winning economist at the University of Chicago, Milton Friedman. The idea was that, rather than funding schools, government should provide funding directly to parents in the form of vouchers that could be used at whichever school the parents preferred, and topped up, if necessary, by additional fee payments.

As is typically the case, voucher advocates ignored the implications of their proposals for the distribution of income. In large measure, vouchers represent a simple cash transfer, going predominantly from the poor to the rich. The biggest beneficiaries would be those, mostly well-off, who were already sending their children to private schools, for whom the voucher would be a simple cash transfer. Those whose children remained at the same public school as before would gain nothing …

The most notable entrant in the US school sector was Edison Schools. Edison Schools was founded in 1992 and was widely viewed as representing the future of school education … For-profit schools were also introduced in Chile and Sweden …

The story was much the same everywhere: an initial burst of enthusiasm and high profits, followed by the exposure of poor practices and outcomes, and finally collapse, with governments being left to pick up the pieces …

Sweden introduced voucher-style reforms in 1992, and opened the market to for-profit schools. Initially favorable assessments were replaced by disillusionment as the performance of the school system as a whole deteriorated … By 2015, the majority of the public favoured banning for-profit schools. The Minister for Education described the system as a ‘political failure.’ Other critics described it in harsher terms (The Swedish for-profit ‘free’ school disaster).

Although a full analysis has not yet been undertaken, it seems likely that the for-profit schools engaged in ‘cream-skimming’, admitting able and well-behaved students, while pushing more problematic students back into the public system. The rules under which the reform was introduced included ‘safeguards’ to prevent cream-skimming, but such safeguards have historically proved ineffectual in the face of the profits to be made by evading them …

Why has market-oriented reform of education been such a failure?  …

Education is characterized by market failure, by potentially inequitable initial allocations and, most importantly, by the fact that the relationship between the education ‘industry’ and its ‘consumers’, that is between educational institutions and teachers on the one hand and students on the other, cannot be reduced to a market transaction.

The critical problem with this simple model is that students, by definition, cannot know in advance what they are going to learn, or make an informed judgement about what they are learning. They have to rely, to a substantial extent, on their teachers to select the right topics of study and to teach them appropriately …

The result is that education does not rely on market competition to any significant extent to sort good teachers and institutions from bad ones. Rather, education depends on a combination of sustained institutional standards and individual professional ethics to maintain their performance.

The implications for education policy are clear, at least at the school level. School education should be publicly funded and provided either by public schools or by non-profits with a clear educational mission, as opposed to corporate ‘school management organisations’.

John Quiggin/Crooked Timber

Neo-liberals and libertarians have always provided a lot of ideologically founded ideas and ‘theories’ to underpin their Panglossian view of markets. But when these are tested against reality, they usually turn out to be wrong. The promised results are simply not to be found. And that goes for for-profit schools too.

New study shows marginal productivity theory has only a ‘negligible’ link to reality

29 December, 2016 at 16:59 | Posted in Economics | Comments Off on New study shows marginal productivity theory has only a ‘negligible’ link to reality

The correlation between high executive pay and good performance is “negligible”, a new academic study has found, providing reformers with fresh evidence that a shake-up of Britain’s corporate remuneration systems is overdue.

Although big company bosses enjoyed pay rises of more than 80 per cent in a decade, performance as measured by economic returns on invested capital was less than 1 per cent over the period, the paper by Lancaster University Management School says.

“Our findings suggest a material disconnect between pay and fundamental value generation for, and returns to, capital providers,” the authors of the report said.

In a study of more than a decade of data on the pay and performance of Britain’s 350 biggest listed companies, Weijia Li and Steven Young found that remuneration had increased 82 per cent in real terms over the 11 years to 2014 … The research found that the median economic return on invested capital, a preferable measure, was less than 1 per cent over the same period.

Patrick Jenkins/Financial Times

Mainstream economics textbooks usually refer to the interrelationship between technological development and education as the main causal force behind increased inequality. If the educational system (supply) develops at the same pace as technology (demand), there should be no increase, ceteris paribus, in the ratio between high-income (highly educated) groups and low-income (low-education) groups. In the race between technology and education, the proliferation of skill-biased technological change has, however, allegedly increased the premium for the highly educated group.
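One common way to formalise this ‘race’ (not spelled out in the textbook story above, so treat it as an illustrative sketch) is a CES production function in high- and low-educated labour, under which the log skill premium rises with relative skill demand and falls with relative skill supply. The toy calculation below, with all parameter values assumed, simply shows that the premium stays flat when the relative supply of educated labour keeps pace with demand and rises when it lags:

```python
# Illustrative sketch only: a CES 'race between education and technology'
# with assumed parameter values, not estimates from the text above.
sigma = 1.6       # assumed elasticity of substitution between skill groups
g_demand = 0.02   # assumed growth rate of relative skill demand, ln(AH/AL)

def log_skill_premium(g_supply, t):
    """CES relative demand: ln(wH/wL) = ((sigma-1)/sigma)*ln(AH/AL) - (1/sigma)*ln(H/L)."""
    return ((sigma - 1) / sigma) * g_demand * t - (1 / sigma) * g_supply * t

for t in range(0, 41, 10):
    keeps_pace = log_skill_premium((sigma - 1) * g_demand, t)  # supply grows enough to hold the premium flat
    lags = log_skill_premium(0.01, t)                          # supply lags demand
    print(f"t={t:2d}  premium if supply keeps pace: {keeps_pace:+.3f}   if supply lags: {lags:+.3f}")
```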

Another prominent explanation is that globalization – in accordance with Ricardo’s theory of comparative advantage and the Wicksell-Heckscher-Ohlin-Stolper-Samuelson factor price theory – has benefited capital in the advanced countries and labour in the developing countries. The problem with these theories is that they explicitly assume full employment and international immobility of the factors of production. Globalization means more than anything else that capital and labour have to a large extent become mobile across country borders. These mainstream trade theories are really not applicable in the world of today, and they are certainly not able to explain the international trade pattern that has developed during the last decades. Although it seems as though capital in the developed countries has benefited from globalization, it is difficult to detect a similar positive effect on workers in the developing countries.

There are, however, also some other quite obvious problems with these kinds of inequality explanations. The World Top Incomes Database shows that the increase in incomes has been concentrated especially in the top 1%. If education were the main reason behind the increasing income gap, one would expect a much broader group of people in the upper echelons of the distribution taking part in this increase. It is dubious, to say the least, to try to explain, for example, the high wages in the finance sector with a marginal productivity argument. High-end wages seem to be more a result of pure luck or membership of the same ‘club’ as those who decide on the wages and bonuses, than of ‘marginal productivity.’

Mainstream economics, with its technologically determined marginal productivity theory, seems to be difficult to reconcile with reality. Although card-carrying neoclassical apologists like Greg Mankiw want to recall John Bates Clark’s (1899) argument that marginal productivity results in an ethically just distribution, that is not something – even if it were true – we could confirm empirically, since it is impossible realiter to separate out what the marginal contribution of any factor of production is. The hypothetical ceteris paribus addition of only one factor in a production process is often heard of in textbooks, but never seen in reality.

When reading mainstream economists like Mankiw, who argue for the ‘just deserts’ of the 0.1 %, one gets a strong feeling that they are ultimately trying to argue that a market economy is some kind of moral free zone where, if left undisturbed, people get what they ‘deserve.’ To most social scientists that probably smacks more of an evasive action trying to explain away a very disturbing structural ‘regime shift’ that has taken place in our societies. A shift that has very little to do with ‘stochastic returns to education.’ Those were in place also 30 or 40 years ago. At that time they meant that perhaps a top corporate manager earned 10–20 times more than ‘ordinary’ people earned. Today it means that they earn 100–200 times more than ‘ordinary’ people earn. A question of education? Hardly. It is probably more a question of greed and a lost sense of a common project of building a sustainable society.

Since the race between technology and education does not seem to explain the new growing income gap – and even if technological change has become more and more capital-augmenting, it is also quite clear that not only the wages of low-skilled workers have fallen, but also the overall wage share – mainstream economists increasingly refer to ‘meritocratic extremism,’ ‘winner-takes-all markets’ and ‘superstar theories’ for explanation. But this is also highly questionable.

Fans may want to pay extra to watch top-ranked athletes or movie stars performing on television and film, but corporate managers are hardly the stuff that people’s dreams are made of – and they seldom appear on television and in the movie theaters.

Everyone may prefer to employ the best corporate manager there is, but a corporate manager, unlike a movie star, can only provide his services to a limited number of customers. From the perspective of ‘superstar theories,’ a good corporate manager should only earn marginally more than an average corporate manager. The average earnings of the corporate managers of the 50 biggest Swedish companies today are equivalent to the wages of 46 blue-collar workers.

It is difficult to see the takeoff of the top executives as anything other than a reward for being members of the same illustrious club. That these earnings should be equivalent to indispensable and fair productive contributions – marginal products – is straining credulity too far. That so many corporate managers and top executives make fantastic earnings today is strong evidence that the theory is patently wrong and basically functions as a legitimizing device for indefensible and growing inequalities.

No one ought to doubt that the idea that capitalism is an expression of impartial market forces of supply and demand bears but little resemblance to actual reality. Wealth and income distribution, both individual and functional, in a market society is to an overwhelmingly high degree influenced by institutionalized political and economic norms and power relations, things that have relatively little to do with marginal productivity in complete and profit-maximizing competitive market models – not to mention how extremely difficult, if not outright impossible, it is to empirically disentangle and measure different individuals’ contributions in the typical teamwork production that characterizes modern societies; or, especially when it comes to ‘capital,’ what it is supposed to mean and how to measure it. Remunerations do not necessarily correspond to any marginal product of different factors of production – or to ‘compensating differentials’ due to non-monetary characteristics of different jobs, natural ability, effort or chance.

Put simply – highly paid workers and corporate managers are not always highly productive workers and corporate managers, and less highly paid workers and corporate managers are not always less productive. History has over and over again disconfirmed the close connection between productivity and remuneration postulated in mainstream income distribution theory.

Neoclassical marginal productivity theory is obviously a collapsed theory from both a historical and a theoretical point of view, as shown already by Sraffa in the 1920s, and in the Cambridge capital controversy in the 1960s and 1970s.

When a theory is impossible to reconcile with facts there is only one thing to do — scrap it!

Observational studies vs. RCTs

29 December, 2016 at 14:12 | Posted in Theory of Science & Methodology | Comments Off on Observational studies vs. RCTs

 

Murray Rothbard on Adam Smith

28 December, 2016 at 16:03 | Posted in Economics | 4 Comments

Adam Smith (1723-90) is a mystery in a puzzle wrapped in an enigma. The mystery is the enormous and unprecedented gap between Smith’s exalted reputation and the reality of his dubious contribution to economic thought …

The problem is not simply that Smith was not the founder of economics.

The problem is that he originated nothing that was true, and that whatever he originated was wrong; that, even in an age that had fewer citations or footnotes than our own, Adam Smith was a shameless plagiarist, acknowledging little or nothing and stealing large chunks …

Even though an inveterate plagiarist, Smith had a Columbus complex, accusing close friends incorrectly of plagiarizing him. And even though a plagiarist, he plagiarized badly, adding new fallacies to the truths he lifted … Smith not only contributed nothing of value to economic thought; his economics was a grave deterioration from his predecessors …

The historical problem is this: how could this phenomenon have taken place with a book so derivative, so deeply flawed, so much less worthy than its predecessors?

The answer is surely not any lucidity or clarity of style or thought. For the much revered Wealth of Nations is a huge, sprawling, inchoate, confused tome, rife with vagueness, ambiguity and deep inner contradictions.

Not exactly the standard textbook presentation of the founding father of economics …

Probability calculus is no excuse for forgetfulness

28 December, 2016 at 14:16 | Posted in Theory of Science & Methodology | Comments Off on Probability calculus is no excuse for forgetfulness

When we cannot accept that the observations, along the time-series available to us, are independent, or cannot by some device be divided into groups that can be treated as independent, we get into much deeper water. For we have then, in strict logic, no more than one observation, all of the separate items having to be taken together. For the analysis of that the probability calculus is useless; it does not apply. We are left to use our judgement, making sense of what has happened as best we can, in the manner of the historian. Applied economics does then come back to history, after all.

I am bold enough to conclude, from these considerations that the usefulness of ‘statistical’ or ‘stochastic’ methods in economics is a good deal less than is now conventionally supposed. We have no business to turn to them automatically; we should always ask ourselves, before we apply them, whether they are appropriate to the problem at hand. Very often they are not. Thus it is not at all sensible to take a small number of observations (sometimes no more than a dozen observations) and to use the rules of probability to deduce from them a ‘significant’ general law. For we are assuming, if we do so, that the variations from one to another of the observations are random, so that if we had a larger sample (as we do not) they would by some averaging tend to disappear. But what nonsense this is when the observations are derived, as not infrequently happens, from different countries, or localities, or industries — entities about which we may well have relevant information, but which we have deliberately decided, by our procedure, to ignore. By all means let us plot the points on a chart, and try to explain them; but it does not help in explaining them to suppress their names. The probability calculus is no excuse for forgetfulness.

John Hicks’ Causality in Economics is an absolute masterpiece. It ought to be on the reading list of every course in economic methodology.
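Hicks’ point about suppressing the names of the observations is easy to illustrate with a small simulation of my own (made-up numbers, nothing from Hicks): a dozen pooled observations drawn from three named ‘countries’ can deliver a highly ‘significant’ slope even though, within every single country, there is no relationship at all.

```python
# My own illustration of Hicks's warning, with made-up numbers.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(1)

# Twelve observations from three 'countries' that differ systematically in
# both x and y for reasons unrelated to any x -> y mechanism.
x, y = [], []
for shift_x, shift_y in [(0.0, 0.0), (2.0, 3.0), (4.0, 6.0)]:
    for _ in range(4):
        x.append(shift_x + rng.normal(0, 0.3))
        y.append(shift_y + rng.normal(0, 0.3))   # no within-country link to x

pooled = linregress(x, y)
print(f"pooled:    slope={pooled.slope:.2f}, p-value={pooled.pvalue:.5f}")

# Within each named country the 'general law' vanishes.
for name, start in zip("ABC", (0, 4, 8)):
    within = linregress(x[start:start + 4], y[start:start + 4])
    print(f"country {name}: slope={within.slope:.2f}, p-value={within.pvalue:.2f}")
```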

Just idag är jag stark

28 December, 2016 at 10:07 | Posted in Varia | Comments Off on Just idag är jag stark

 

The search for heavy balls in economics

27 December, 2016 at 16:58 | Posted in Theory of Science & Methodology | 3 Comments

One of the limitations with economics is the restricted possibility to perform experiments, forcing it to mainly rely on observational studies for knowledge of real-world economies.

But still — the idea of performing laboratory experiments holds a firm grip on our wish to discover (causal) relationships between economic ‘variables.’ If only we could isolate and manipulate variables in controlled environments, we would probably find ourselves in a situation where we with greater ‘rigour’ and ‘precision’ could describe, predict, or explain economic happenings in terms of ‘structural’ causes, ‘parameter’ values of relevant variables, and economic ‘laws.’

Galileo Galilei’s experiments are often held up as exemplary of how to perform experiments to learn something about the real world. Galileo’s experiments were, according to Nancy Cartwright (Hunting Causes and Using Them, p. 223),

designed to find out what contribution the motion due to the pull of the earth will make, with the assumption that the contribution is stable across all the different kinds of situations falling bodies will get into … He eliminated (as far as possible) all other causes of motion on the bodies in his experiment so that he could see how they move when only the earth affects them. That is the contribution that the earth’s pull makes to their motion.

Galileo’s heavy balls dropping from the tower of Pisa confirmed that the distance an object falls is proportional to the square of time, and that this law (empirical regularity) of falling bodies could be applicable outside a vacuum tube when, e.g., air resistance is negligible.

The big problem is to decide or find out exactly for which objects air resistance (and other potentially ‘confounding’ factors) is ‘negligible.’ In the case of heavy balls, air resistance is obviously negligible, but how about feathers or plastic bags?
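To make the negligibility question concrete, here is a rough numerical sketch of my own (illustrative masses and drag coefficients, nothing measured): it compares the ideal law d = 0.5·g·t² with a fall subject to quadratic air drag, for a heavy-ball-like object and a feather-like object.

```python
# Rough illustration with assumed masses and drag coefficients (not measured data).
g = 9.81  # m/s^2

def fall_distance(mass, drag_coeff, t_end, dt=0.001):
    """Euler integration of dv/dt = g - (drag_coeff/mass) * v**2."""
    v = d = t = 0.0
    while t < t_end:
        v += (g - (drag_coeff / mass) * v * v) * dt
        d += v * dt
        t += dt
    return d

t = 2.0
print(f"ideal law 0.5*g*t^2 : {0.5 * g * t**2:6.2f} m")
print(f"heavy ball          : {fall_distance(5.0, 0.01, t):6.2f} m  (drag negligible)")
print(f"feather-like object : {fall_distance(0.005, 0.01, t):6.2f} m  (drag dominates)")
```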

One possibility is to take the all-encompassing-theory road and find out all about the possible disturbing/confounding factors — not only air resistance — influencing the fall, and build that into one great model delivering accurate predictions on what happens when the object that falls is not a heavy ball but a feather or a plastic bag. This usually amounts to ultimately stating some kind of ceteris paribus interpretation of the ‘law.’

Another road to take would be to concentrate on the negligibility assumption and to specify the domain of applicability to be only heavy compact bodies. The price you have to pay for this is that (1) ‘negligibility’ may be hard to establish in open real-world systems, (2) the generalisation you can make from ‘sample’ to ‘population’ is heavily restricted, and (3) you actually have to use some ‘shoe leather’ and empirically try to find out how large the ‘reach’ of the ‘law’ is.

In mainstream economics one has usually settled for the ‘theoretical’ road (and in case you think the present ‘natural experiments’ hype has changed anything, remember that to mimic real experiments, exceedingly stringent special conditions have to obtain).

In the end, it all boils down to one question — are there any heavy balls to be found in economics, so that we can indisputably establish the existence of economic laws operating in real-world economies?

As far as I can see there are some heavy balls out there, but not even one single real economic law.

Economic factors/variables are more like feathers than heavy balls — non-negligible factors (like air resistance and chaotic turbulence) are hard to rule out as having no influence on the object studied.

Galilean experiments are hard to carry out in economics, and the theoretical ‘analogue’ models economists construct and in which they perform their ‘thought experiments’ build on assumptions that are far away from the kind of idealized conditions under which Galileo performed his experiments. The ‘nomological machines’ that Galileo and other scientists have been able to construct have no real analogues in economics. The stability, autonomy, modularity, and interventional invariance that we may find between entities in nature simply are not there in real-world economies. That’s a real-world fact, and contrary to the beliefs of most mainstream economists, it won’t go away simply by applying deductive-axiomatic economic theory with tons of more or less unsubstantiated assumptions.

By this I do not mean to say that we have to discard all (causal) theories/laws building on modularity, stability, invariance, etc. But we have to acknowledge the fact that outside the systems that possibly fulfil these requirements/assumptions, they are of little substantial value. Running paper-and-pen experiments on artificial ‘analogue’ model economies is a sure way of ‘establishing’ (causal) economic laws or solving intricate econometric problems of autonomy, identification, invariance and structural stability — in the model world. But they are poor substitutes for the real thing and they don’t have much bearing on what goes on in real-world open social systems. Setting up convenient circumstances for conducting Galilean experiments may tell us a lot about what happens under those kinds of circumstances. But few, if any, real-world social systems are ‘convenient.’ So most of those systems, theories and models are irrelevant for letting us know what we really want to know.

To solve, understand, or explain real-world problems you actually have to know something about them – logic, pure mathematics, data simulations or deductive axiomatics don’t take you very far. Most econometrics and economic theories/models are splendid logic machines. But applying them to the real world is a totally hopeless undertaking! The assumptions one has to make in order to successfully apply these deductive-axiomatic theories/models/machines are devastatingly restrictive and mostly empirically untestable – and hence make their real-world scope ridiculously narrow. To fruitfully analyse real-world phenomena with models and theories, you cannot build on assumptions that are patently absurd and known to be so.

No matter how much you would like the world to entirely consist of heavy balls, the world is not like that. The world also has its fair share of feathers and plastic bags.

Economic growth and the size of the ‘private sector’

26 December, 2016 at 20:32 | Posted in Statistics & Econometrics | 6 Comments

Economic growth has long interested economists. Not least, the question of which factors lie behind high growth rates has been in focus. The factors usually pointed at are mainly economic, social and political variables. In an interesting study from the University of Helsinki, Tatu Westling has expanded the potential causal variables to also include biological and sexual variables. In the report Male Organ and Economic Growth: Does Size Matter? (2011), he has — based on the ‘cross-country’ data of Mankiw et al (1992), Summers and Heston (1988), Polity IV Project data on political regime types and a data set on average penis size in 76 non-oil-producing countries (www.everyoneweb.com/worldpenissize) — been able to show that the level and growth of GDP per capita between 1960 and 1985 vary with penis size. Replicating Westling’s study — using my favourite program Gretl — we obtain the following two charts:


The Solow-based model estimates show that the maximum GDP is achieved at a penis size of about 13.5 cm, and that the male reproductive organ (OLS without control variables) is negatively correlated with — and able to explain 20% of the variation in — GDP growth.
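For readers who want to see what such a specification looks like in code, here is a minimal sketch in Python rather than Gretl, run on made-up data calibrated only to mimic the reported pattern (a level maximum near 13.5 cm and roughly 20% explained variation in growth). It is not Westling’s dataset or his exact estimation:

```python
# Replication-style sketch on made-up data (not Westling's dataset).
import numpy as np

rng = np.random.default_rng(42)
n = 76
size = rng.normal(14.5, 1.8, n)            # hypothetical average 'organ size' (cm)

# Hypothetical log GDP level with an inverted-U in size, plus noise
log_gdp = 8 + 0.6 * size - 0.022 * size**2 + rng.normal(0, 0.3, n)
b2, b1, b0 = np.polyfit(size, log_gdp, 2)  # quadratic OLS fit
print(f"fitted GDP maximum at {-b1 / (2 * b2):.1f} cm")

# Hypothetical growth regression: simple OLS without control variables
growth = 0.03 - 0.004 * (size - size.mean()) + rng.normal(0, 0.012, n)
slope, _ = np.polyfit(size, growth, 1)
r2 = np.corrcoef(size, growth)[0, 1] ** 2
print(f"OLS slope on growth: {slope:.4f}, R^2 = {r2:.2f}")
```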

Even with reservations for problems such as endogeneity and confounding, one cannot but agree with Westling’s final assessment that “the ‘male organ hypothesis’ is worth pursuing in future research” and that it “clearly seems that the ‘private sector’ deserves more credit for economic development than is typically acknowledged.” Or? …

Milton Friedman’s critique of econometrics

26 December, 2016 at 15:28 | Posted in Statistics & Econometrics | 2 Comments

Tinbergen’s results cannot be judged by ordinary tests of statistical significance. The reason is that the variables with which he winds up, the particular series measuring these variables, the leads and lags, and various other aspects of the equations besides the particular values of the parameters (which alone can be tested by the usual statistical technique) have been selected after an extensive process of trial and error because they yield high coefficients of correlation. Tinbergen is seldom satisfied with a correlation coefficient less than 0.98. But these attractive correlation coefficients create no presumption that the relationships they describe will hold in the future. The multiple regression equations which yield them are simply tautological reformulations of selected economic data. Taken at face value, Tinbergen’s work “explains” the errors in his data no less than their real movements; for although many of the series employed in the study would be accorded, even by their compilers, a margin of error in excess of 5 per cent, Tinbergen’s equations “explain” well over 95 per cent of the observed variation.

As W. C. Mitchell put it some years ago, “a competent statistician, with sufficient clerical assistance and time at his command, can take almost any pair of time series for a given period and work them into forms which will yield coefficients of correlation exceeding ±.9 …. So work of [this] sort … must be judged, not by the coefficients of correlation obtained within the periods for which they have manipulated the data, but by the coefficients which they get in earlier or later periods to which their formulas may be applied.” But Tinbergen makes no attempt to determine whether his equations agree with data other than those which they translate …

The methods used by Tinbergen do not and cannot provide an empirically tested explanation of business cycle movements.

Milton Friedman

It is usual in macroeconomic (media) discourses to portray Keynes and Friedman as bitter enemies. But on some issues they are in fact very close to each other. Econometrics, and especially its limited applicability, is — as can be seen by comparing Friedman’s critique above with Keynes’ critique — one such prominent case.
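Friedman’s point about correlations arrived at by trial and error is easy to illustrate with a small simulation of my own (pure noise series, nothing to do with Tinbergen’s actual data): if you search enough candidate regressors for the best in-sample fit, you will always find an impressive correlation, and it tells you nothing about later periods.

```python
# My own illustration of in-sample 'trial and error' fits, not Tinbergen's data.
import numpy as np

rng = np.random.default_rng(0)
T = 30
target = rng.normal(size=T)              # a pure-noise 'business cycle' series
candidates = rng.normal(size=(200, T))   # 200 unrelated candidate regressors

fit_window, test_window = slice(0, 20), slice(20, T)

# Pick the candidate with the best in-sample correlation (the 'extensive
# process of trial and error' Friedman describes).
corrs = [np.corrcoef(c[fit_window], target[fit_window])[0, 1] for c in candidates]
best = int(np.argmax(np.abs(corrs)))

out_of_sample = np.corrcoef(candidates[best][test_window], target[test_window])[0, 1]
print(f"best in-sample correlation : {corrs[best]: .2f}")
print(f"same series out of sample  : {out_of_sample: .2f}")
```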

Economists — nothing but a bunch of idiots savants

25 December, 2016 at 13:00 | Posted in Economics | 14 Comments

Let’s be honest: no one knows what is happening in the world economy today. Recovery from the collapse of 2008 has been unexpectedly slow …

Policymakers don’t know what to do. They press the usual (and unusual) levers and nothing happens. Quantitative easing was supposed to bring inflation “back to target.” It didn’t. Fiscal contraction was supposed to restore confidence. It didn’t …

Most economics students are not required to study psychology, philosophy, history, or politics. They are spoon-fed models of the economy, based on unreal assumptions, and tested on their competence in solving mathematical equations. They are never given the mental tools to grasp the whole picture …

Good economists have always understood that this method has severe limitations. They use their discipline as a kind of mental hygiene to protect against the grossest errors in thinking …

Today’s professional economists have studied almost nothing but economics. They don’t even read the classics of their own discipline. Economic history comes, if at all, from data sets. Philosophy, which could teach them about the limits of the economic method, is a closed book. Mathematics, demanding and seductive, has monopolized their mental horizons. The economists are the idiots savants of our time.

Robert Skidelsky

Yes indeed — modern economics has become increasingly irrelevant to the understanding of the real world. In his seminal book Economics and Reality (1997) Tony Lawson traced this irrelevance to the failure of economists to match their deductive-axiomatic methods with their subject. As shown by Skidelsky, it is as relevant today as it was twenty years ago.

It is still a fact that within mainstream economics internal validity is everything and external validity nothing. Why anyone should be interested in those kinds of theories and models is beyond my imagination. As long as mainstream economists do not come up with any export licenses for their theories and models to the real world in which we live, they really should not be surprised if people say that this is not science, but autism!

Studying mathematics and logic is interesting and fun. It sharpens the mind. In pure mathematics and logic we do not have to worry about external validity. But economics is not pure mathematics or logic. It’s about society. The real world. Forgetting that, economics is really in dire straits.

Already back in 1991, the JEL published a study by a commission — chaired by Anne Krueger and including people like Kenneth Arrow, Edward Leamer, and Joseph Stiglitz — focusing on “the extent to which graduate education in economics may have become too removed from real economic problems.” The commission members reported from their own experience “that it is an underemphasis on the ‘linkages’ between tools, both theory and econometrics, and ‘real world problems’ that is the weakness of graduate education in economics,” and that both students and faculty sensed “the absence of facts, institutional information, data, real-world issues, applications, and policy problems.” And in conclusion they wrote:

The commission’s fear is that graduate programs may be turning out a generation with too many idiot savants skilled in technique but innocent of real economic issues.

Not much is different today. Economics education is still in dire need of a remake. How about bringing economics back into some contact with reality?

Added (GMT 1215): And, of course, Paul Krugman and fellow ‘New Keynesians’ have been quick to tell us that although Skidelsky is absolutely right, ‘basic macro’ (read: IS-LM ‘New Keynesianism’) has done just fine, and that it is only RBC and New Classical DSGE macroeconomics that has faltered. But that is, sad to say, nothing but pure nonsense!

Econometrics textbooks — vague and confused causal analysis

25 December, 2016 at 12:17 | Posted in Statistics & Econometrics | 1 Comment

Econometric textbooks fall on all sides of this debate. Some explicitly ascribe causal meaning to the structural equation while others insist that it is nothing more than a compact representation of the joint probability distribution. Many fall somewhere in the middle – attempting to provide the econometric model with sufficient power to answer economic problems but hesitant to anger traditional statisticians with claims of causal meaning. The end result for many textbooks is that the meaning of the econometric model and its parameters are vague and at times contradictory …

The purpose of this report is to examine the extent to which these and other advances in causal modeling have benefited education in econometrics. Remarkably, we find that not only have they failed to penetrate the field, but even basic causal concepts lack precise definitions and, as a result, continue to be confused with their statistical counterparts.

Judea Pearl & Bryant Chen

Pearl and Chen’s article addresses two very important questions about the teaching of modern econometrics and its textbooks – how causality is treated in general and, more specifically, to what extent the textbooks use a distinct causal notation.

The authors have for years been part of an extended effort to advance explicit causal modeling (especially graphical models) in the applied sciences, and this is a first examination of the extent to which these endeavours have found their way into econometrics textbooks.

Although the text is partly of a rather demanding ‘technical’ nature, I would definitely recommend reading it, especially for social scientists with an interest in these issues.

Pearl’s seminal contribution to this research field is well-known and indisputable. But on the ‘taming’ and ‘resolve’ of the issues, I have to admit that — under the influence of especially David Freedman — I still have some doubts about the reach, especially in terms of realism and relevance, of these solutions for the social sciences in general and economics in particular. With regard to the present article, I think that since the distinction between the ‘interventionist’ E[Y|do(X)] and the more traditional ‘conditional expectationist’ E[Y|X] is so crucial for the subsequent argumentation, a more elaborate presentation would have been of value. Not least because the authors could then also more fully explain why the former is so important, and if/why it (in yours truly’s and Freedman’s view) can be exported from ‘engineering’ contexts, where it arguably applies easily and universally, to ‘socio-economic’ contexts, where ‘manipulativity’ and ‘modularity’ are perhaps not so universally at hand.

A popular idea in quantitative social sciences is to think of a cause (C) as something that increases the probability of its effect or outcome (O). That is:

P(O|C) > P(O|-C)

However, as is also well known, a correlation between two variables, say A and B, does not necessarily imply that one is a cause of the other, or the other way around, since they may both be effects of a common cause, C.

In statistics and econometrics we usually solve this ‘confounder’ problem by ‘controlling for’ C, i.e., by holding C fixed. This means that we actually look at different ‘populations’ – those in which C occurs in every case, and those in which C doesn’t occur at all. Holding C fixed in this way means that knowing the value of A does not influence the probability of C [P(C|A) = P(C)]. So if there then still exists a correlation between A and B in either of these populations, there has to be some other cause operating. But if all other possible causes have been ‘controlled for’ too, and there is still a correlation between A and B, we may safely conclude that A is a cause of B, since by ‘controlling for’ all other possible causes, the correlation between the putative cause A and all the other possible causes (D, E, F …) is broken.
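A tiny simulation of my own (made-up binary variables, purely to illustrate the logic above) shows how a common cause C produces a correlation between A and B that disappears once we look within populations where C is held fixed:

```python
# Made-up binary variables, purely to illustrate 'controlling for' a common cause.
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

C = rng.binomial(1, 0.5, n)           # common cause
A = rng.binomial(1, 0.2 + 0.6 * C)    # A depends only on C
B = rng.binomial(1, 0.1 + 0.7 * C)    # B depends only on C

print(f"corr(A, B) overall     : {np.corrcoef(A, B)[0, 1]:.3f}")
for c in (0, 1):
    mask = C == c
    print(f"corr(A, B) given C = {c} : {np.corrcoef(A[mask], B[mask])[0, 1]:.3f}")
```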

This is of course a very demanding prerequisite, since we may never actually be sure of having identified all putative causes. Even in scientific experiments the number of uncontrolled causes may be innumerable. Since nothing less will do, we all understand how hard it is to actually get from correlation to causality. This also means that only relying on statistics or econometrics is not enough to deduce causes from correlations.

Some people think that randomization may solve the empirical problem. By randomizing we obtain different ‘populations’ that are homogeneous with regard to all variables except the one we think is a genuine cause. In that way we are supposedly able to avoid having to know what all these other factors are.

If you succeed in performing an ideal randomization with different treatment and control groups, that is attainable. But it presupposes that you really have been able to establish – and not just assume – that all other causes but the putative one (A) have the same probability distribution in the treatment and control groups, and that assignment to treatment or control is independent of all other possible causal variables.

Unfortunately, real experiments and real randomizations seldom or never achieve this. So, yes, we may do without knowing all causes, but it takes ideal experiments and ideal randomizations to do that, not real ones.
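One way to see the gap between ideal and real randomization is a small simulation of my own (an assumed unobserved covariate and illustrative sample sizes): in small samples, random assignment can still leave treatment and control groups noticeably unbalanced on a background cause; only as the sample grows does the balance become dependable.

```python
# My own illustration: covariate imbalance under real (finite-sample) randomization.
import numpy as np

rng = np.random.default_rng(3)

def worst_imbalance(n_subjects, n_trials=1_000):
    """Largest treatment/control gap in the mean of an unobserved covariate
    across repeated random assignments."""
    worst = 0.0
    for _ in range(n_trials):
        covariate = rng.normal(size=n_subjects)           # unobserved background cause
        treated = rng.permutation(n_subjects) < n_subjects // 2
        gap = abs(covariate[treated].mean() - covariate[~treated].mean())
        worst = max(worst, gap)
    return worst

for n in (20, 200, 20_000):
    print(f"n = {n:6d}: worst covariate gap over 1000 assignments = {worst_imbalance(n):.2f}")
```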

That means that in practice we do have to have sufficient background knowledge to deduce causal knowledge. Without old knowledge, we can’t get new knowledge – and, no causes in, no causes out.

Econometrics is basically a deductive method. Given the assumptions (such as manipulability, transitivity, Reichenbach probability principles, separability, additivity, linearity, etc.) it delivers deductive inferences. The problem, of course, is that we will never completely know when the assumptions are right. Real target systems are seldom epistemically isomorphic to axiomatic-deductive models/systems, and even if they were, we would still have to argue for the external validity of the conclusions reached from within these epistemically convenient models/systems. Causal evidence generated by statistical/econometric procedures may be valid in ‘closed’ models, but what we are usually interested in is causal evidence about the real target system we happen to live in.

Advocates of econometrics want to have deductively automated answers to fundamental causal questions. But to apply ‘thin’ methods we have to have ‘thick’ background knowledge of what’s going on in the real world, and not in idealized models. Conclusions can only be as certain as their premises – and that also applies to the quest for causality in econometrics.

Proper use of regression analysis

25 December, 2016 at 10:25 | Posted in Statistics & Econometrics | Comments Off on Proper use of regression analysis

Level I regression analysis does not require any assumptions about how the data were generated. If one wants more from the data analysis, assumptions are required. For a Level II regression analysis, the added feature is statistical inference: estimation, hypothesis tests and confidence intervals. When the data are produced by probability sampling from a well-defined population, estimation, hypothesis tests and confidence intervals are on the table.

A random sample of inmates from the set of all inmates in a state’s prison system might be properly used to estimate, for example, the number of gang members in the state’s overall prison system. Hypothesis tests and confidence intervals might also be usefully employed. In addition, one might estimate, for instance, the distribution of in-prison misconduct committed by men compared to the in-prison misconduct committed by women, holding age fixed. Hypothesis tests or confidence intervals could again follow naturally. The key assumption is that each inmate in the population has a known probability of selection. If the probability sampling is implemented largely as designed, statistical inference can rest on reasonably sound footing. Note that there is no talk of causal effects and no causal model. Description is combined with statistical inference.

In the absence of probability sampling, the case for Level II regression analysis is far more difficult to make …

The goal in a Level III regression analysis is to supplement Level I description and Level II statistical inference with causal inference. In conventional regression, for instance, one needs a nearly right model [a model is the ‘right’ model when it accurately represents how the data on hand were generated], but one must also be able to argue credibly that manipulation of one or more regressors alters the expected conditional distribution of the response. Moreover, any given causal variable can be manipulated independently of any other causal variable and independently of the disturbances. There is nothing in the data itself that can speak to these requirements. The case will rest on how the data were actually produced. For example, if there was a real intervention, a good argument for manipulability might well be made. Thus, an explicit change in police patrolling practices ordered by the local Chief will perhaps pass the manipulability sniff test. Changes in the demographic mix of a neighborhood will probably not …

Level III regression analysis adds to description and statistical inference, causal inference. One requires not just a nearly right model of how the data were generated, but good information justifying any claims that all causal variables are independently manipulable. In the absence of a nearly right model and one or more regressors whose values can be “set” independently of other regressors and the disturbances, causal inferences cannot make much sense.

The implications for practice in criminology are clear but somewhat daunting. With rare exceptions, regression analyses of observational data are best undertaken at Level I. With proper sampling, a Level II analysis can be helpful. The goal is to characterize associations in the data, perhaps taking uncertainty into account … Reviewers and journal editors typically equate proper statistical practice with Level III.

Richard Berk
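As a concrete illustration of the Level II idea, here is a minimal sketch with toy numbers of my own (not Berk’s): with a genuine probability sample from a well-defined population, plain description plus a confidence interval stands on solid ground, and nothing causal needs to be claimed.

```python
# Toy numbers of my own: description plus inference from a probability sample.
import math
import random

random.seed(11)

# Hypothetical prison population with an unknown-to-us share of gang members
population = [1] * 4_000 + [0] * 16_000     # true share: 20%
sample = random.sample(population, 500)     # simple random sample

p_hat = sum(sample) / len(sample)
se = math.sqrt(p_hat * (1 - p_hat) / len(sample))
print(f"estimated share of gang members: {p_hat:.3f}")
print(f"approximate 95% CI             : {p_hat - 1.96 * se:.3f} to {p_hat + 1.96 * se:.3f}")
```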

Christmas greetings

24 December, 2016 at 10:31 | Posted in Varia | Comments Off on Christmas greetings

 

Twist in my sobriety

24 December, 2016 at 00:06 | Posted in Varia | Comments Off on Twist in my sobriety

 

Keynes betrayed

23 December, 2016 at 11:43 | Posted in Economics | 5 Comments

To complete the reconciliation of Keynesian economics with general equilibrium theory, Paul Samuelson introduced the neoclassical synthesis in 1955 …

In this view of the world, high unemployment is a temporary phenomenon caused by the slow adjustment of money wages and money prices. In Samuelson’s vision, the economy is Keynesian in the short run, when some wages and prices are sticky. It is classical in the long run when all wages and prices have had time to adjust….

Although Samuelson’s neoclassical synthesis was tidy, it did not have much to do with the vision of the General Theory …

In Keynes’ vision, there is no tendency for the economy to self-correct. Left to itself, a market economy may never recover from a depression and the unemployment rate may remain too high forever. In contrast, in Samuelson’s neoclassical synthesis, unemployment causes money wages and prices to fall. As the money wage and the money price fall, aggregate demand rises and full employment is restored, even if government takes no corrective action. By slipping wage and price adjustment into his theory, Samuelson reintroduced classical ideas by the back door—a sleight of hand that did not go unnoticed by Keynes’ contemporaries in Cambridge, England. Famously, Joan Robinson referred to Samuelson’s approach as ‘bastard Keynesianism.’

The New Keynesian agenda is the child of the neoclassical synthesis and, like the IS-LM model before it, New Keynesian economics inherits the mistakes of the bastard Keynesians. It misses two key Keynesian concepts: (1) there are multiple equilibrium unemployment rates and (2) beliefs are fundamental.

Not that long ago Paul Krugman had a post up on his blog telling us that what he and many others do is “sorta-kinda neoclassical because it takes the maximization-and-equilibrium world as a starting point” and that “New Keynesian models are intertemporal maximization modified with sticky prices and a few other deviations.”

Being myself sorta-kinda Keynesian, I side with Farmer and remain a skeptic of the pretences and aspirations of ‘New Keynesian’ macroeconomics.

Where ‘New Keynesian’ economists think that they can rigorously deduce the aggregate effects of (representative) actors with their reductionist microfoundational methodology, they have to turn a blind eye to the emergent properties that characterize all open social systems. The interaction between animal spirits, trust, confidence, institutions, etc., cannot be deduced or reduced to a question answerable at the individual level.

So, I cannot concur with Krugman – and other sorta-kinda ‘New Keynesians’ – when they try to reduce Keynesian economics to “intertemporal maximization modified with sticky prices and a few other deviations.”

The purported strength of New Classical and ‘New Keynesian’ macroeconomics is that they have firm anchorage in preference-based microeconomics, and especially the decisions taken by inter-temporal utility maximizing ‘forward-looking’ individuals.

To some of us, however, this has come at too high a price. The almost quasi-religious insistence that macroeconomics has to have microfoundations – without ever presenting either ontological or epistemological justifications for this claim – has turned a blind eye to the weakness of the whole enterprise of trying to depict a complex economy based on an all-embracing representative actor equipped with superhuman knowledge, forecasting abilities and forward-looking rational expectations. It is as if – after having swallowed the sour grapes of the Sonnenschein-Mantel-Debreu theorem – these economists want to resurrect the omniscient Walrasian auctioneer in the form of all-knowing representative actors equipped with rational expectations and assumed to somehow know the true structure of our model of the world.

And then there is also the fact that ‘New Keynesians’ share the New Classical economists’ extraterrestrial view of unemployment as voluntary.

The ‘New Keynesian’ microfounded dynamic stochastic general equilibrium models do not incorporate such a basic fact of reality as involuntary unemployment. Of course, working with microfounded representative-agent models, this should come as no surprise. If one representative agent is employed, all representative agents are. The kind of unemployment that occurs is voluntary, since it consists only of adjustments of hours of work that these optimizing agents make to maximize their utility. In this model world, unemployment is always an optimal response to changes in labour market conditions. Hence, unemployment is totally voluntary. To be unemployed is something one optimally chooses to be.

To Keynes it was an obvious and sad fact of the world that not all unemployment is voluntary. But obviously not so to New Classical and ‘New Keynesian’ economists.

Mainstream economics — nothing but an assumption-making Nintendo game

23 December, 2016 at 09:51 | Posted in Economics | Comments Off on Mainstream economics — nothing but an assumption-making Nintendo game

In advanced economics the question would be: ‘What besides mathematics should be in an economics lecture?’ In physics the familiar spirit is Archimedes the experimenter. But in economics, as in mathematics itself, it is theorem-proving Euclid who paces the halls …

Economics … has become a mathematical game. The science has been drained out of economics, replaced by a Nintendo game of assumption-making …

Most thoughtful economists think that the games on the blackboard and the computer have gone too far, absurdly too far. It is time to bring economic observation, economic history, economic literature, back into the teaching of economics.

Economists would be less arrogant, and less dangerous as experts, if they had to face up to the facts of the world. Perhaps they would even become as modest as the physicists.

D. McCloskey

PISA results and econometric model specifications

22 December, 2016 at 18:10 | Posted in Education & School | Comments Off on PISA results and econometric model specifications

It is not often that differences between econometric model specifications make headlines in the media, but when the PISA survey was released a few weeks ago that is exactly what happened. The reason was that the OECD claimed that Swedish independent schools (friskolor) performed worse than municipal schools, while the Swedish National Agency for Education (Skolverket) had concluded that the difference between public and private providers was small and statistically insignificant. Since the same data are used, it may be worth clarifying what the difference between the OECD’s and Skolverket’s analyses consists of.

In terms of raw PISA scores, independent schools show better results than municipal ones (column 1 below). Differences in results between schools, however, mainly reflect which pupils attend them, and the student intake differs markedly between independent and municipal compulsory schools. To say anything about deeper differences in school quality, the student intake must therefore be taken into account. The PISA data contain an index of pupils’ social background (ESCS) as well as indicators of possible immigrant background that can be used for this purpose. An adjustment using such indicators is better than nothing, but they hardly capture all relevant differences between pupils; both the children of well-established diplomats and refugee children with patchy schooling behind them count as having an ‘immigrant background’. Even among pupils with a similar social background there is a wide dispersion in ability, motivation and interest.

Skolverket adjusts for pupil background by taking the above-mentioned indicators into account at the individual level. The conceptual question one then tries to answer is whether PISA results differ between independent and municipal schools for comparable pupils (granted that they really are comparable, which is doubtful). The answer is that the differences are small and not statistically significant (column 2). One interpretation is that a pupil can be expected to achieve the same results at a representative municipal school as at a representative independent school.

In its analysis the OECD does not stop at the pupil-level background indicators but adds the school averages of the same indicators. When these are added to the model, the independent schools do worse than the municipal ones (column 3). The municipal schools’ advantage in this specification is almost as large as their disadvantage in the raw, unadjusted data. The estimated difference is roughly the same size for PISA’s three subject areas but statistically significant only for mathematics.

The conceptual question behind the OECD’s analysis is whether comparable pupils perform better or worse at independent or municipal schools with comparable social conditions. An alternative way of reading the result is therefore that the independent schools in the PISA survey are worse than the municipal ones at making the most of comparable pupil intakes.

Jonas Vlachos

For researchers, education providers and politicians it has naturally become interesting to try to investigate what consequences the independent-school reform of 1992 has had.

But this is not entirely easy to do, given how multifaceted and wide-ranging the goals set for schooling in Sweden are.

One goal commonly focused on is pupil performance in the form of attained knowledge levels, grades and test results. When the independent-school reform was carried out, one of the arguments frequently put forward was that the independent schools would raise pupils’ knowledge levels. The quantitative measures used for these assessments are invariably grades and/or results on national tests and international tests such as PISA.

At first glance it may seem trivial to carry out such studies. Surely, one might think, it is just a matter of pulling out the data and running the requisite statistical tests and regressions. It is not quite that simple. In fact it is, as Vlachos points out, very difficult to obtain unambiguous causal answers to this kind of question.

To show unambiguously that there are effects, and that these are the result of the introduction of independent schools specifically and nothing else, one must identify and then control for the influence of all ‘confounding background variables’ such as parental education, socioeconomic status, ethnicity, place of residence, religion and so on, so that we can be sure that it is not differences in these variables that are, in a fundamental sense, the real underlying causal explanations of any average differences in effects.

Ideally, to really be able to carry out such a causal analysis, we would want to run an experiment in which we pick out a group of pupils, let them attend independent schools, and after a certain time evaluate the effects on their knowledge levels. Then we would turn back the clock and let the same group of pupils attend municipal schools instead, and after the same time evaluate the effects on their knowledge levels. By isolating and manipulating the variables under study in this experimental way, so that we could really pin down the unique effect of independent schools and nothing else, we would get an exact answer to our question. Unfortunately, such experiments can never be carried out in reality. So what does one do?
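In the notation of the potential-outcomes framework (my shorthand, not part of the original post), the thought experiment amounts to wanting both outcomes for the same pupil, only one of which is ever observed:

```latex
\[
\tau_i \;=\; Y_i(\text{independent}) - Y_i(\text{municipal}),
\qquad
Y_i^{\text{obs}} \;=\; D_i\, Y_i(\text{independent}) + (1 - D_i)\, Y_i(\text{municipal}),
\]
```

so only averages of the effect can ever be targeted, and only under assumptions that stand in for the impossible ‘turning back of the clock’.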

By far the most common procedure is, as in the case of the OECD and Skolverket, to run a traditional multiple regression analysis based on ‘least squares’ or ‘maximum likelihood’ estimates from observational data, trying to ‘hold constant’ a number of specified background variables so that, if possible, the regression coefficients can be interpreted in causal terms. Since we know there is a risk of a ‘selection problem’, because pupils attending independent schools often differ from those attending municipal schools on several important background variables, we cannot simply compare the knowledge levels of the two school types and draw any firm causal conclusions. The risk is acute that whatever differences we find, and believe can be explained by school type, are in fact wholly or partly due to differences in the underlying variables (e.g. neighbourhood, ethnicity, parental education and so on).
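To make the contrast between the specifications concrete, here is a minimal sketch in Python. The data are synthetic and the variable names (score, escs, independent, school_id) are my own illustrative inventions, not the actual OECD or Skolverket code; only one background indicator is used.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for pupil-level PISA data (illustrative only): pupils sort
# into independent schools partly on social background (escs), which is what
# drives the gap between the three specifications.
rng = np.random.default_rng(0)
n_schools, pupils_per_school = 200, 30
school_id = np.repeat(np.arange(n_schools), pupils_per_school)
school_escs = rng.normal(0, 0.6, n_schools)                    # school-level intake
independent = (school_escs + rng.normal(0, 0.3, n_schools) > 0.3).astype(int)
escs = school_escs[school_id] + rng.normal(0, 0.8, len(school_id))
score = 500 + 35 * escs - 5 * independent[school_id] + rng.normal(0, 60, len(escs))
pisa = pd.DataFrame({"score": score, "escs": escs,
                     "independent": independent[school_id], "school_id": school_id})

# School average of the background indicator (the OECD's extra control).
pisa["escs_school"] = pisa.groupby("school_id")["escs"].transform("mean")

cluster = {"cov_type": "cluster", "cov_kwds": {"groups": pisa["school_id"]}}
m1 = smf.ols("score ~ independent", data=pisa).fit(**cluster)                       # column 1: raw gap
m2 = smf.ols("score ~ independent + escs", data=pisa).fit(**cluster)                # column 2: pupil-level controls
m3 = smf.ols("score ~ independent + escs + escs_school", data=pisa).fit(**cluster)  # column 3: + school averages

for label, m in [("raw", m1), ("pupil-level controls", m2), ("plus school averages", m3)]:
    print(f"{label:22s} independent-school coefficient: {m.params['independent']:6.1f}"
          f"  (p = {m.pvalues['independent']:.3f})")
```

The point of the sketch is only that the sign and size of the ‘school effect’ coefficient can move around substantially as one changes which background controls are held constant, exactly the situation behind the OECD–Skolverket disagreement.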

If one tries to sum up the regression analyses that have been carried out, the result is that the causal effects of independent schools on pupil performance that researchers believe they have identified are consistently small and often not even statistically significant at conventional significance levels. Added to this is the uncertainty about whether all relevant background variables really have been held constant, which means that the estimates are in practice often burdened with untested assumptions and a non-negligible uncertainty and bias, making it hard to give a reasonably unambiguous assessment of the weight and relevance of the research results. Put simply, many, perhaps most, of the ‘effect studies’ of this kind have failed to create sufficiently comparable groups, and since this is, strictly speaking, absolutely necessary for the statistical analyses actually performed to be interpretable in the way they are, the value of the analyses is hard to establish. This also means, bearing in mind the possibility of other and better alternative model specifications, that the ‘sensitivity analyses’ researchers in the field routinely carry out give no reliable guidance as to how ‘robust’ the regression estimates really are. Furthermore, there is a considerable risk that latent, underlying, unspecified variables representing unmeasured characteristics (intelligence, attitude, motivation and so on) are correlated with the independent variables included in the regression equations, thereby creating a problem of endogeneity. To this one can add that most of the studies carried out can only say something about what holds on average. Behind a high average may hide several low-performing individual schools (pupils) offset by a few high-performing ones.
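A compact way of stating the omitted-variables worry in the paragraph above (standard textbook notation, not Vlachos’s): if the outcome also depends on an unmeasured characteristic w that is correlated with the included regressor x (school type), then in the simplest one-regressor case

```latex
\[
y = \beta_0 + \beta_1 x + \beta_2 w + u
\quad\Longrightarrow\quad
\operatorname{plim}\hat{\beta}_1 \;=\; \beta_1 + \beta_2\,
\frac{\operatorname{Cov}(x, w)}{\operatorname{Var}(x)},
\]
```

so the estimated ‘school effect’ absorbs whatever part of motivation, ability or home environment moves together with school choice.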

Probability and confidence — Keynes vs. Bayes

22 December, 2016 at 15:29 | Posted in Economics | 13 Comments

An alternative possibility is to accept the consequences of the apparent fact that the central prediction of the Bayesian model in its descriptive capacity, that people’s choices are or are ‘as if’ they are informed by real-valued subjective probabilities, is, in general, false …

According to Keynes’s decision theory it is rational to prefer to be guided by probabilities determined on the basis of greater evidential ‘weight’, the amount or completeness, in some sense, of the relevant evidence on which a judgement of probability is based … Keynes later suggests a link between weight and confidence, distinguishing between the ‘best estimates we can make of probabilities and the confidence with which we make them’ … The distinction between judgement of probability and the confidence with which it is made has no place in the world of a committed Bayesian because it drives a wedge into the link between choices and degrees of belief on which it is founded.

Jochen Runde


According to Keynes we live in a world permeated by unmeasurable uncertainty – not quantifiable stochastic risk – which often forces us to make decisions based on anything but ‘rational expectations.’ Keynes rather thinks that we base our expectations on the confidence or ‘weight’ we put on different events and alternatives. To Keynes expectations are a question of weighing probabilities by ‘degrees of belief,’ beliefs that often have preciously little to do with the kind of stochastic probabilistic calculations made by rational agents as modeled by mainstream economics.


On analytical statistics and critical realism

22 December, 2016 at 12:17 | Posted in Theory of Science & Methodology | 1 Comment

In this paper we began by describing the position of those critical realists who are sceptical about multi-variate statistics … Some underlying assumptions of this sceptical argument were shown to be false. Then a positive case in favour of using analytical statistics as part of a mixed-methods methodology was developed. An example of the interpretation of logistic regression was used to show that the interpretation need not be atomistic or reductionist. However, we also argued that the data underlying such interpretations are ‘ficts’, i.e. are not true in themselves, and cannot be considered to be accurate or true descriptions of reality. Instead, the validity of the interpretations of such data are what social scientists should argue about. Therefore what matters is how warranted arguments are built by the researcher who uses statistics. Our argument supports seeking surprising findings; being aware of the caveat that demi-regularities do not necessarily reveal laws; and otherwise following advice given from the ‘sceptical’ school. However the capacity of multi-variate statistics to provide a grounding for warranted arguments implies that their use cannot be rejected out of hand by serious social researchers.

Wendy Olsen & Jamie Morgan

Econometric forecasting and mathematical ‘rigour’

21 December, 2016 at 20:32 | Posted in Statistics & Econometrics | 3 Comments

There have been over four decades of econometric research on business cycles … The formalization has undeniably improved the scientific strength of business cycle measures …

But the significance of the formalization becomes more difficult to identify when it is assessed from the applied perspective, especially when the success rate in ex-ante forecasts of recessions is used as a key criterion. The fact that the onset of the 2008 financial-crisis-triggered recession was predicted by only a few ‘Wise Owls’ … while missed by regular forecasters armed with various models serves us as the latest warning that the efficiency of the formalization might be far from optimal. Remarkably, not only has the performance of time-series data-driven econometric models been off the track this time, so has that of the whole bunch of theory-rich macro dynamic models developed in the wake of the rational expectations movement, which derived its fame mainly from exploiting the forecast failures of the macro-econometric models of the mid-1970s recession …

These observations indicate … that econometric methods are limited by their statistical approach in analysing and forecasting business cycles, and moreover, that the explanatory power of generalised and established theoretical relationships is highly limited when applied to particular economies during particular periods alone, that is, if none of the local and institution-specific factors are taken into serious consideration …

The wide conviction of the superiority of the methods of the science has converted the econometric community largely to a group of fundamentalist guards of mathematical rigour … So much so that the relevance of the research to business cycles is reduced to empirical illustrations. To that extent, probabilistic formalisation has trapped econometric business cycle research in the pursuit of means at the expense of ends.

The limits of econometric forecasting have, as noted by Qin, been critically pointed out many times before. Trygve Haavelmo, with the completion (in 1958) of the twenty-fifth volume of Econometrica, assessed the role of econometrics in the advancement of economics, and although mainly positive about the “repair work” and “clearing-up work” done, Haavelmo also found some grounds for despair:

There is the possibility that the more stringent methods we have been striving to develop have actually opened our eyes to recognize a plain fact: viz., that the “laws” of economics are not very accurate in the sense of a close fit, and that we have been living in a dream-world of large but somewhat superficial or spurious correlations.

And Ragnar Frisch also shared some of Haavelmo’s doubts on the applicability of econometrics:

I have personally always been skeptical of the possibility of making macroeconomic predictions about the development that will follow on the basis of given initial conditions … I have believed that the analytical work will give higher yields – now and in the near future – if they become applied in macroeconomic decision models where the line of thought is the following: “If this or that policy is made, and these conditions are met in the period under consideration, probably a tendency to go in this or that direction is created”.

Ragnar Frisch

Maintaining that economics is a science in the ‘true knowledge’ business, I remain a skeptic of the pretences and aspirations of econometrics. The marginal return on its ever higher technical sophistication in no way makes up for the lack of serious under-labouring of its deeper philosophical and methodological foundations that Keynes already complained about. The rather one-sided emphasis on usefulness and its concomitant instrumentalist justification cannot hide the fact that the legions of probabilistic econometricians who give supportive evidence for considering it ‘fruitful to believe’ in the possibility of treating unique economic data as the observable results of random drawings from an imaginary sampling of an imaginary population are skating on thin ice.

A rigorous application of econometric methods in economics really presupposes that the phenomena of our real-world economies are ruled by stable causal relations between variables. The endemic lack of predictive success of the econometric project indicates that this hope of finding fixed parameters is a hope for which there really is no other ground than hope itself.
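As a toy illustration of the point about fixed parameters (entirely synthetic data, not an estimate of anything real), the sketch below generates a relationship whose slope shifts halfway through the sample and then fits the single constant-parameter regression that standard econometric forecasting presupposes:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
x = rng.normal(0, 1, n)
beta = np.where(np.arange(n) < n // 2, 2.0, -1.0)   # the 'structural break': slope flips sign mid-sample
y = 1.0 + beta * x + rng.normal(0, 1, n)

fit = sm.OLS(y, sm.add_constant(x)).fit()            # one 'stable' parameter imposed on both regimes
print(f"estimated 'stable' slope: {fit.params[1]:.2f}  (true slope: 2.0 then -1.0)")

# Pseudo-forecast of the post-break regime with the pooled parameter.
y_hat = fit.params[0] + fit.params[1] * x[n // 2:]
rmse = np.sqrt(np.mean((y[n // 2:] - y_hat) ** 2))
print(f"post-break RMSE with the pooled slope: {rmse:.2f} (noise std is 1.0)")
```

The pooled estimate lands somewhere between the two regimes and forecasts the later regime badly, which is the mechanical version of the regime-shift worry raised above.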

O Magnum Mysterium (personal)

21 December, 2016 at 11:27 | Posted in Varia | 1 Comment


Breathtaking and absolutely magnificent!

The pretence of knowledge — ‘New Keynesian’ DSGE models

21 December, 2016 at 10:48 | Posted in Economics | Comments Off on The pretence of knowledge — ‘New Keynesian’ DSGE models

The centre-piece of Paul Romer’s scathing attack on these models is on the ‘pretence of knowledge’ (Romer 2016) … He is critical of the incredible identifying assumptions and ‘pretence of knowledge’ in both Bayesian estimation and the calibration of parameters in DSGE models … A milder critique by Olivier Blanchard (2016) points to a number of failings of DSGE models and recommends greater openness to more eclectic approaches …

An even deeper problem, not seriously addressed by Romer or Blanchard, lies in the unrealistic micro-foundations for the behaviour of households embodied in the ‘rational expectations permanent income’ model of consumption, an integral component of these DSGE models. Consumption is fundamental to macroeconomics both in DSGE models and in the consumption functions of general equilibrium macro-econometric models such as the Federal Reserve’s FRB-US. At the core of representative agent DSGE models is the Euler equation for consumption, popularised in the highly influential paper by Hall (1978). It connects the present with the future, and is essential to the iterative forward solutions of these models. The equation is based on the assumption of inter-temporal optimising by consumers and that every consumer faces the same linear period-to-period budget constraint, linking income, wealth, and consumption. Maximising expected life-time utility subject to the constraint results in the optimality condition that links expected marginal utility in the different periods. Under approximate ‘certainty equivalence’, this translates into a simple relationship between consumption at time t and planned consumption at t+1 and in periods further into the future …

However, consumers actually face idiosyncratic (household-specific) and uninsurable income uncertainty, and uncertainty interacts with credit or liquidity constraints. The asymmetric information revolution in economics in the 1970s … and a new generation of heterogeneous agent models … imply that household horizons then tend to be both heterogeneous and shorter – with ‘hand-to-mouth’ behaviour even by quite wealthy households, contradicting the textbook permanent income model, and hence DSGE models. A second reason for the failure of these DSGE models is that aggregate behaviour does not follow that of a ‘representative agent’ … A third reason is that structural breaks … and radical uncertainty further invalidate DSGE models … The failure of the representative agent Euler equation to fit aggregate data is further empirical evidence against the assumptions underlying the DSGE models, while evidence on financial illiteracy … is a problem for the assumption that all consumers optimise.

John Muellbauer
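For reference, the consumption Euler equation Muellbauer refers to can be written in standard textbook form (my rendering, not quoted from his article):

```latex
\[
u'(c_t) \;=\; \beta\,(1+r)\, E_t\!\left[\, u'(c_{t+1}) \,\right],
\qquad\text{and with quadratic utility and }\beta(1+r)=1:\qquad
c_t \;=\; E_t\!\left[\, c_{t+1} \,\right].
\]
```

The second equality is the ‘certainty equivalence’ random-walk property behind Hall (1978): consumption changes should be unpredictable, which is precisely the implication that the heterogeneous-agent and credit-constraint evidence cited in the quotation keeps rejecting.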

Everybody hurts (personal)

21 December, 2016 at 10:13 | Posted in Varia | Comments Off on Everybody hurts (personal)


David Bowie, Leonard Cohen, Umberto Eco, Alan Rickman, Gene Wilder, Andrzej Wajda.
RIP

Keynes’ critique of econometrics — the nodal point

20 December, 2016 at 17:40 | Posted in Statistics & Econometrics | 2 Comments

In my judgment, the practical usefulness of those modes of inference, here termed Universal and Statistical Induction, on the validity of which the boasted knowledge of modern science depends, can only exist—and I do not now pause to inquire again whether such an argument must be circular—if the universe of phenomena does in fact present those peculiar characteristics of atomism and limited variety which appear more and more clearly as the ultimate result to which material science is tending …

The physicists of the nineteenth century have reduced matter to the collisions and arrangements of particles, between which the ultimate qualitative differences are very few …

The validity of some current modes of inference may depend on the assumption that it is to material of this kind that we are applying them … Professors of probability have been often and justly derided for arguing as if nature were an urn containing black and white balls in fixed proportions. Quetelet once declared in so many words—“l’urne que nous interrogeons, c’est la nature.” But again in the history of science the methods of astrology may prove useful to the astronomer; and it may turn out to be true—reversing Quetelet’s expression—that “La nature que nous interrogeons, c’est une urne”.

Professors of probability and statistics, yes. And more or less every mainstream economist!

The standard view in statistics – and the axiomatic probability theory underlying it – is to a large extent based on the rather simplistic idea that ‘more is better.’ But as Keynes argues – ‘more of the same’ is not what is important when making inductive inferences. It’s rather a question of ‘more but different.’

Variation, not replication, is at the core of induction. Finding that p(x|y) = p(x|y & w) doesn’t make w ‘irrelevant.’ Knowing that the probability is unchanged when w is present gives p(x|y & w) a different evidential weight (‘weight of argument’). Running 10 replicative experiments does not make you as ‘sure’ of your inductions as running 10 000 varied experiments – even if the probability values happen to be the same.
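A minimal numerical illustration of the point (my own, with a uniform Beta(1, 1) prior; the numbers are arbitrary): 5 successes in 10 trials and 5 000 successes in 10 000 trials give practically the same posterior probability for the next case, but very different posterior spread – the ‘weight’ behind the number.

```python
from scipy.stats import beta

prior_a, prior_b = 1, 1  # uniform Beta(1, 1) prior

for successes, trials in [(5, 10), (5_000, 10_000)]:
    a, b = prior_a + successes, prior_b + trials - successes
    post = beta(a, b)  # posterior over the unknown probability
    print(f"n = {trials:6d}: P(next case) = {post.mean():.3f}, "
          f"95% interval [{post.ppf(0.025):.3f}, {post.ppf(0.975):.3f}]")
```

The point estimate is the same in both cases, but the interval shrinks from roughly [0.23, 0.77] to roughly [0.49, 0.51]; a single probability number cannot carry that difference in ‘weight’.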

According to Keynes we live in a world permeated by unmeasurable uncertainty – not quantifiable stochastic risk – which often forces us to make decisions based on anything but ‘rational expectations.’ Keynes rather thinks that we base our expectations on the confidence or ‘weight’ put on different events and alternatives. To Keynes expectations are a question of weighing probabilities by ‘degrees of belief,’ beliefs that often have preciously little to do with the kind of stochastic probabilistic calculations made by the rational agents as modeled by ‘modern’ social sciences. And often we ‘simply do not know.’

Science according to Keynes should help us penetrate to “the true process of causation lying behind current events” and disclose “the causal forces behind the apparent facts.” Models can never be more than a starting point in that endeavour. He further argued that it was inadmissible to project history on the future. Consequently we cannot presuppose that what has worked before, will continue to do so in the future. That statistical models can get hold of correlations between different ‘variables’ is not enough. If they cannot get at the causal structure that generated the data, they are not really ‘identified.’

In his critique of Tinbergen, Keynes comes back to these fundamental logical, epistemological and ontological problems of applying statistical methods — based on probabilistic axiomatics — to a basically unpredictable, uncertain, complex, unstable, interdependent, and ever-changing  social reality. Methods designed to analyse repeated sampling in controlled experiments under fixed conditions are not easily extended to an organic and non-atomistic world where time and history play decisive roles.

The naive pretence that we as social scientists can just walk into the library and grab a statistical model from the shelf and apply it to an open and mutable social reality has to be forsaken. As social scientists we always have to argue for and justify the belief in the appropriateness of choosing to work with specific methods and models in a given spatio-temporal context — and why we should believe in the results produced.

The inductive and causal claims that can be made when applying methods appropriate to intrinsically timeless, closed ‘nomological machines’ to open social systems moving in real historical time are indeed limited.

Mathematical economics — a contraption of empty formalism

20 December, 2016 at 15:09 | Posted in Economics | 1 Comment

My impression [is] that nothing emerges at the end which has not been introduced expressly or tacitly at the beginning is quite wrong … It seems to me essential in an article of this sort to put in the fullest and most explicit manner at the beginning the assumptions which are made and the methods by which the price indexes are derived; and then to state at the end what substantially novel conclusions has been arrived at …


I cannot persuade myself that this sort of treatment of economic theory has anything significant to contribute. I suspect it of being nothing better than a contraption proceeding from premises which are not stated with precision to conclusions which have no clear application … [This creates] a mass of symbolism which covers up all kinds of unstated special assumptions.

Letter from Keynes to Frisch 28 November 1935

The non-existence of Paul Krugman’s ‘Keynes/Hicks macroeconomic theory’

20 December, 2016 at 13:59 | Posted in Economics | 1 Comment

Paul Krugman has in numerous posts on his blog tried to defend “the whole enterprise of Keynes/Hicks macroeconomic theory” and especially his own somewhat idiosyncratic version of IS-LM.

The main problem is simpliciter that there is no such thing as a Keynes-Hicks macroeconomic theory!

So, let us get some things straight.

There is nothing in the post-General Theory writings of Keynes that suggests him considering Hicks’s IS-LM anywhere near a faithful rendering of his thought. In Keynes’s canonical statement of the essence of his theory in the 1937 QJE-article there is nothing to even suggest that Keynes would have thought the existence of a Keynes-Hicks-IS-LM-theory anything but pure nonsense. So of course there can’t be any “vindication for the whole enterprise of Keynes/Hicks macroeconomic theory” – simply because “Keynes/Hicks” never existed.

And it gets even worse!

John Hicks, the man who invented IS-LM in his 1937 Econometrica review of Keynes’ General Theory – ‘Mr. Keynes and the ‘Classics’. A Suggested Interpretation’ – returned to it in an article in 1980 – ‘IS-LM: an explanation’ – in Journal of Post Keynesian Economics. Self-critically he wrote:

I accordingly conclude that the only way in which IS-LM analysis usefully survives — as anything more than a classroom gadget, to be superseded, later on, by something better – is in application to a particular kind of causal analysis, where the use of equilibrium methods, even a drastic use of equilibrium methods, is not inappropriate. I have deliberately interpreted the equilibrium concept, to be used in such analysis, in a very stringent manner (some would say a pedantic manner) not because I want to tell the applied economist, who uses such methods, that he is in fact committing himself to anything which must appear to him to be so ridiculous, but because I want to ask him to try to assure himself that the divergences between reality and the theoretical model, which he is using to explain it, are no more than divergences which he is entitled to overlook. I am quite prepared to believe that there are cases where he is entitled to overlook them. But the issue is one which needs to be faced in each case.

When one turns to questions of policy, looking toward the future instead of the past, the use of equilibrium methods is still more suspect. For one cannot prescribe policy without considering at least the possibility that policy may be changed. There can be no change of policy if everything is to go on as expected – if the economy is to remain in what (however approximately) may be regarded as its existing equilibrium. It may be hoped that, after the change in policy, the economy will somehow, at some time in the future, settle into what may be regarded, in the same sense, as a new equilibrium; but there must necessarily be a stage before that equilibrium is reached …

I have paid no attention, in this article, to another weakness of IS-LM analysis, of which I am fully aware; for it is a weakness which it shares with General Theory itself. It is well known that in later developments of Keynesian theory, the long-term rate of interest (which does figure, excessively, in Keynes’ own presentation and is presumably represented by the r of the diagram) has been taken down a peg from the position it appeared to occupy in Keynes. We now know that it is not enough to think of the rate of interest as the single link between the financial and industrial sectors of the economy; for that really implies that a borrower can borrow as much as he likes at the rate of interest charged, no attention being paid to the security offered. As soon as one attends to questions of security, and to the financial intermediation that arises out of them, it becomes apparent that the dichotomy between the two curves of the IS-LM diagram must not be pressed too hard.

The editor of JPKE, Paul Davidson, gives the background to Hicks’s article:

I originally published an article about Keynes’s finance motive — which in 1937 Keynes added to his other liquidity preference motives (transactions, precautionary, speculative motives). I showed that adding this finance motive required Hicks’s IS and LM curves to be interdependent — and thus when the IS curve shifted so would the LM curve.
Hicks and I then discussed this when we met several times.
When I first started to think about the ergodic vs. nonergodic dichotomy, I sent Hicks some preliminary drafts of articles I would be writing about nonergodic processes. Then John and I met several times to discuss this matter further and I finally convinced him to write the article — which I published in the Journal of Post Keynesian Economics — in which he renounces the IS-LM apparatus. Hicks then wrote me a letter in which he thought the word nonergodic was wonderful and said he wanted to label his approach to macroeconomics as nonergodic!

So – back in 1937 John Hicks said that he was building a model of John Maynard Keynes’ General Theory. In 1980 he openly admits he wasn’t.

What Hicks acknowledges in 1980 is basically that his original review totally ignored the very core of Keynes’ theory – uncertainty. In doing this he actually turned the train of macroeconomics onto the wrong track for decades. It’s about time that neoclassical economists – such as Krugman, Mankiw, or what have you – set the record straight and stop promoting something that the creator himself admits was a total failure. Why not study the real thing itself – General Theory – in full and without looking the other way when it comes to non-ergodicity and uncertainty?

Paul Krugman persists in talking about a Keynes-Hicks-IS-LM-model that really never existed. It’s deeply disappointing. You would expect more from a Nobel prize winner.

In his 1937 paper Hicks actually elaborates four different models (where Hicks uses I to denote Total Income and Ix to denote Investment):

1) “Classical”: M = kI   Ix = C(i)   Ix = S(i, I)

2) Keynes’ “special” theory: M = L(i)   Ix = C(i)   Ix = S(I)

3) Keynes’ “general” theory: M = L(I, i)   Ix = C(i)   Ix = S(I)

4) The “generalized general” theory: M = L(I, i)   Ix = C(I, i)   Ix = S(I, i)

It is obvious from the way Krugman draws his IS-LM curves that he is thinking in terms of model number 4 – a model that Hicks himself did not even consider a Keynes model (those are models 2 and 3)! It is basically a loanable funds model that belongs in the “classical” camp and that you find reproduced in most mainstream textbooks. Hicksian IS-LM? Maybe. Keynes? No way!
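For concreteness, here is one linear parameterization of Hicks’s fourth system, solved for income I and the interest rate i; the functional forms and all numbers are purely illustrative choices of mine, not Hicks’s or Krugman’s.

```python
import numpy as np

# Illustrative linear versions of Hicks's 'generalized general' model (no. 4):
#   money market:  L(I, i) = k*I - h*i = M
#   goods market:  C(I, i) = c0 + c1*I - c2*i  equals  S(I, i) = s0 + s1*I + s2*i
k, h = 0.25, 10.0               # money demand parameters
c0, c1, c2 = 50.0, 0.10, 8.0    # investment (C in Hicks's notation) parameters
s0, s1, s2 = -20.0, 0.20, 2.0   # saving parameters
M = 100.0                       # money supply

# Stack the two equilibrium conditions as A @ [I, i] = b and solve the linear system.
A = np.array([[k,       -h],
              [c1 - s1, -(c2 + s2)]])
b = np.array([M, s0 - c0])
income, interest = np.linalg.solve(A, b)
print(f"equilibrium income I = {income:.1f}, interest rate i = {interest:.2f}")
```

The sketch only shows what solving such a two-equation equilibrium system involves; it says nothing, of course, about whether the equilibrium framing is an appropriate rendering of Keynes, which is the very point at issue above.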

Bayesianism — a dangerous superficiality

19 December, 2016 at 12:34 | Posted in Theory of Science & Methodology | 4 Comments

The bias toward the superficial and the response to extraneous influences on research are both examples of real harm done in contemporary social science by a roughly Bayesian paradigm of statistical inference as the epitome of empirical argument. For instance the dominant attitude toward the sources of black-white differential in United States unemployment rates (routinely the rates are in a two to one ratio) is “phenomenological.” The employment differences are traced to correlates in education, locale, occupational structure, and family background. The attitude toward further, underlying causes of those correlations is agnostic … Yet on reflection, common sense dictates that racist attitudes and institutional racism must play an important causal role. People do have beliefs that blacks are inferior in intelligence and morality, and they are surely influenced by these beliefs in hiring decisions … Thus, an overemphasis on Bayesian success in statistical inference discourages the elaboration of a type of account of racial disadvantages that almost certainly provides a large part of their explanation.

For all scholars seriously interested in questions on what makes up a good scientific explanation, Richard Miller’s Fact and Method is a must read. His incisive critique of Bayesianism is still unsurpassed.

One of my favourite “problem situating lecture arguments” against Bayesianism goes something like this: Assume you’re a Bayesian turkey and hold a nonzero probability belief in the hypothesis H that “people are nice vegetarians who do not eat turkeys,” and that every day you survive confirms this belief. For every day you survive, you update your belief according to Bayes’ Rule

P(H|e) = [P(e|H)P(H)]/P(e),

where evidence e stands for “not being eaten” and P(e|H) = 1. Given that there do exist other hypotheses than H, P(e) is less than 1 and a fortiori P(H|e) is greater than P(H). Every day you survive increases your probability belief that you will not be eaten. This is totally rational according to the Bayesian definition of rationality. Unfortunately — as Bertrand Russell famously noticed — for every day that goes by, the traditional Christmas dinner also gets closer and closer …
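A minimal sketch of the turkey’s daily updating (my own illustration; the survival probability under the alternative hypotheses is an arbitrary assumption):

```python
# Bayesian turkey: daily update of P(H), where H = "people never eat turkeys".
# P(survive | H) = 1; under the alternatives, survival is assumed (illustratively)
# to have probability 0.95 per day.
p_h = 0.5                  # the turkey's initial belief in H
p_e_given_not_h = 0.95     # P(survive | not H), an arbitrary illustrative number

for day in range(1, 101):
    p_e = 1.0 * p_h + p_e_given_not_h * (1 - p_h)   # total probability of surviving today
    p_h = 1.0 * p_h / p_e                           # Bayes' rule with P(e | H) = 1
    if day in (1, 10, 50, 100):
        print(f"day {day:3d}: P(H | survived so far) = {p_h:.3f}")
```

The belief in H climbs steadily toward one, right up until the day the premise fails.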

For more on my own objections to Bayesianism:
Bayesianism — a patently absurd approach to science
Bayesianism — preposterous mumbo jumbo
One of the reasons I’m a Keynesian and not a Bayesian
