A couple of days ago yours truly had a piece on why mainstream economists have tended to go astray in their tool-sheds and actually thereby have contributed to causing today’s economic crisis rather than to solving it.
J Bradford DeLong – professor of economics at Berkeley – writes on a related theme on Project Syndicate:
It is the scale of the catastrophe that astonishes me. But what astonishes me even more is the apparent failure of academic economics to take steps to prepare itself for the future. “We need to change our hiring patterns,” I expected to hear economics departments around the world say in the wake of the crisis.
The fact is that we need fewer efficient-markets theorists and more people who work on microstructure, limits to arbitrage, and cognitive biases. We need fewer equilibrium business-cycle theorists and more old-fashioned Keynesians and monetarists. We need more monetary historians and historians of economic thought and fewer model-builders …
Yet that is not what economics departments are saying nowadays.
Perhaps I am missing what is really going on. Perhaps economics departments are reorienting themselves after the Great Recession in a way similar to how they reoriented themselves in a monetarist direction after the inflation of the 1970’s. But if I am missing some big change that is taking place, I would like somebody to show it to me.
Perhaps academic economics departments will lose mindshare and influence to others – from business schools and public-policy programs to political science, psychology, and sociology departments. As university chancellors and students demand relevance and utility, perhaps these colleagues will take over teaching how the economy works and leave academic economists in a rump discipline that merely teaches the theory of logical choice.
Or perhaps economics will remain a discipline that forgets most of what it once knew and allows itself to be continually distracted, confused, and in denial. If that were to happen, we would all be worse off.
One of the main functions of System 2 is to monitor and control thoughts and actions “suggested” by System 1 … For an example, here is a simple puzzle. Do not try to solve it but listen to your intuition:
A bat and ball cost $1.10.
The bat costs one dollar more than the ball.
How much does the ball cost?
A number came to your mind. The number, of course, is 10: 10 cents. The distinctive mark of this easy puzzle is that it evokes an answer that is intuitive, appealing, and wrong … The right answer is 5 cents.
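Just to spell out the arithmetic behind the "counterintuitive" answer (my own check, not part of Kahneman's text): if the ball costs b and the bat costs one dollar more, then

\[
b + (b + 1.00) = 1.10 \;\Rightarrow\; 2b = 0.10 \;\Rightarrow\; b = 0.05 .
\]

With the intuitive answer of 10 cents, the bat would cost $1.10 and the pair $1.20 – ten cents too much.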
According to a Sifo poll commissioned by Aftonbladet, confidence in Centre Party leader Annie Lööf is now plummeting.
Now only one voter in five has great or very great confidence in her.
A year ago it was almost one in three.
Surprising? Hardly – few Swedes are turned on by what Mrs Lööf has argued and tabled motions for in recent years:
Introduce a flat tax (lower taxes for high-income earners)
Abolish the Employment Protection Act
Sell off SvT and SR
Sweden should join NATO
Expand nuclear power
With a political agenda like that, it is only natural that the Centre Party's entire electorate will soon fit on Stureplan.
Reality is now catching up with our very own Margaret Thatcher. An awakening is approaching from the neoliberal nightmare into which this career politician and cliché-monger has managed to drag the once so proud Centre Party …
Stunningly beautiful and almost unbearably, painfully moving.
Stefan Nilsson wrote the music for the film adaptation of Göran Tunström's epic masterpiece Juloratoriet (The Christmas Oratorio).
In the article The Scientific Model of Causality renowned econometrician and Nobel laureate James Heckman writes (emphasis added):
A model is a set of possible counterfactual worlds constructed under some rules. The rules may be laws of physics, the consequences of utility maximization, or the rules governing social interactions … A model is in the mind. As a consequence, causality is in the mind.
Even though this is a standard view among econometricians, it's – at least from a realist point of view – rather untenable. The reason we as scientists are interested in causality is that it's part of the way the world works. We represent the workings of causality in the real world by means of models, but that doesn't mean that causality isn't a fact pertaining to relations and structures that exist in the real world. If it were only "in the mind," most of us couldn't care less.
The reason behind Heckman's – and most other econometricians' – nominalist-positivist view of science and models is the belief that science can only deal with observable regularity patterns of a more or less lawlike kind. Only data matters, and trying to (ontologically) go beyond observed data in search of the underlying real factors and relations that generate the data is not admissible. Everything has to take place within the model in the econometric mind, since according to the econometric (epistemologically based) methodology the real factors and relations are beyond reach – allegedly both unobservable and unmeasurable. This also means that instead of treating the model-based findings as interesting clues for digging deeper into real structures and mechanisms, they are treated as the end points of the investigation. Or as Asad Zaman puts it in Methodological Mistakes and Econometric Consequences:
Instead of taking it as a first step, as a clue to explore, conventional econometric methodology terminates at the discovery of a good fit … Conventional econometric methodology is a failure because it is merely an attempt to find patterns in the data, without any tools to assess whether or not the given pattern reflects some real forces which shape the data.
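Just how easy it is to obtain a splendid fit that reflects no real force at all can be seen in a little simulation – a sketch of my own with made-up variables, not anything taken from Zaman's paper. Here x has, by construction, no causal effect whatsoever on y, yet regressing y on x yields a precisely estimated, highly "significant" coefficient, simply because both variables are driven by an unobserved common cause z:

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)
n = 5_000

z = rng.normal(size=n)                # unobserved common cause (never enters the regression)
x = 2.0 * z + rng.normal(size=n)      # x is driven by z; it has no effect on y
y = 3.0 * z + rng.normal(size=n)      # y is driven by z as well

fit = linregress(x, y)
print(f"estimated 'effect' of x on y: {fit.slope:.2f} "
      f"(R^2 = {fit.rvalue**2:.2f}, p-value = {fit.pvalue:.1e})")
# A beautiful fit and a strongly 'significant' slope (about 1.2 with these numbers),
# although no force whatsoever runs from x to y - only the hidden z shaping both.
```

Nothing in the regression output itself warns us that the pattern is an artefact of the omitted z; that assessment has to come from knowledge about the real structures generating the data.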
The critique put forward here is in line with what mathematical statistician David Freedman writes in Statistical Models and Causal Inference (2010):
In my view, regression models are not a particularly good way of doing empirical work in the social sciences today, because the technique depends on knowledge that we do not have. Investigators who use the technique are not paying adequate attention to the connection – if any – between the models and the phenomena they are studying. Their conclusions may be valid for the computer code they have created, but the claims are hard to transfer from that microcosm to the larger world …
Given the limits to present knowledge, I doubt that models can be rescued by technical fixes. Arguments about the theoretical merit of regression or the asymptotic behavior of specification tests for picking one version of a model over another seem like the arguments about how to build desalination plants with cold fusion as the energy source. The concept may be admirable, the technical details may be fascinating, but thirsty people should look elsewhere …
Causal inference from observational data presents many difficulties, especially when underlying mechanisms are poorly understood. There is a natural desire to substitute intellectual capital for labor, and an equally natural preference for system and rigor over methods that seem more haphazard. These are possible explanations for the current popularity of statistical models.
Indeed, far-reaching claims have been made for the superiority of a quantitative template that depends on modeling – by those who manage to ignore the far-reaching assumptions behind the models. However, the assumptions often turn out to be unsupported by the data. If so, the rigor of advanced quantitative methods is a matter of appearance rather than substance.
Econometrics is basically a deductive method. Given the assumptions (such as manipulability, transitivity, Reichenbach probability principles, separability, additivity, linearity, etc.) it delivers deductive inferences. The problem, of course, is that we will never completely know when the assumptions are right. Real target systems are seldom epistemically isomorphic to axiomatic-deductive models/systems, and even if they were, we would still have to argue for the external validity of the conclusions reached from within these epistemically convenient models/systems. Causal evidence generated by statistical/econometric procedures like regression analysis may be valid in "closed" models, but what we are usually interested in is causal evidence in the real target system we happen to live in.
Most advocates of econometrics and regression analysis want to have deductively automated answers to fundamental causal questions. Econometricians think – as David Hendry expressed it in Econometrics – alchemy or science? (1980) – they “have found their Philosophers’ Stone; it is called regression analysis and is used for transforming data into ‘significant results!'” But as David Freedman poignantly notes in Statistical Models: “Taking assumptions for granted is what makes statistical techniques into philosophers’ stones.” To apply “thin” methods we have to have “thick” background knowledge of what’s going on in the real world, and not in idealized models. Conclusions can only be as certain as their premises – and that also applies to the quest for causality in econometrics and regression analysis.
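Hendry's own parody regressed the price level on cumulative rainfall, but the philosophers' stone is easy enough to replicate with simulated data. In this toy sketch of mine, two random walks are generated completely independently of one another and then regressed on each other, over and over:

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(42)
T, reps = 200, 1_000
significant = 0

for _ in range(reps):
    walk_a = np.cumsum(rng.normal(size=T))   # one random walk
    walk_b = np.cumsum(rng.normal(size=T))   # another one, independent of the first
    if linregress(walk_a, walk_b).pvalue < 0.05:
        significant += 1

print(f"'significant' relations found in {significant / reps:.0%} of the regressions")
# Typically well over half of these nonsense regressions clear the conventional 5%
# significance bar - data duly transformed into 'significant results'.
```

The "significance" is an artefact of trending, autocorrelated data violating the regression assumptions – exactly the kind of taken-for-granted premise Freedman warns about.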
Without requirements of depth, explanations most often do not have practical significance. Only if we search for and find fundamental structural causes, can we hopefully also take effective measures to remedy problems like e.g. unemployment, poverty, discrimination and underdevelopment. A social science must try to establish what relations exist between different phenomena and the systematic forces that operate within the different realms of reality. If econometrics is to progress, it has to abandon its outdated nominalist-positivist view of science and the belief that science can only deal with observable regularity patterns of a more or less law-like kind. Scientific theories ought to do more than just describe event-regularities and patterns – they also have to analyze and describe the mechanisms, structures, and processes that give birth to these patterns and eventual regularities.
Christmas is here again – and with five kids in the family, blogging can’t have top priority. Regular blogging will be resumed late next week.
Winter is not my season, so I’m already longing for when the view from my library once again looks like this:
Neoclassical economics nowadays usually assumes that agents that have to make choices under conditions of uncertainty behave according to Bayesian rules (preferably the ones axiomatized by Ramsey (1931), de Finetti (1937) or Savage (1954)) – that is, they maximize expected utility with respect to some subjective probability measure that is continually updated according to Bayes' theorem. If not, they are supposed to be irrational, and ultimately – via some "Dutch book" or "money pump" argument – susceptible to being ruined by some clever "bookie".
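For concreteness, here is what that textbook recipe amounts to – a minimal sketch with states, likelihoods and payoffs entirely of my own invention, not a claim about any particular model in the literature. A subjective prior is revised by Bayes' theorem when evidence arrives, and the "rational" agent then simply picks the act with the highest expected utility under the updated measure:

```python
def bayes_update(prior, likelihoods):
    """Posterior over states, given P(evidence | state) for each state."""
    joint = {s: prior[s] * likelihoods[s] for s in prior}
    total = sum(joint.values())
    return {s: p / total for s, p in joint.items()}

# Subjective prior over two states of the world (numbers purely illustrative).
prior = {"boom": 0.5, "recession": 0.5}
# Probability of observing 'rising orders' in each state (also made up).
likelihoods = {"boom": 0.8, "recession": 0.3}
posterior = bayes_update(prior, likelihoods)      # {'boom': ~0.73, 'recession': ~0.27}

# Expected utility of two acts under the updated measure (payoffs illustrative).
payoffs = {"invest": {"boom": 10, "recession": -5},
           "wait":   {"boom": 2,  "recession": 1}}
expected_utility = {act: sum(posterior[s] * u for s, u in outcome.items())
                    for act, outcome in payoffs.items()}
print(max(expected_utility, key=expected_utility.get))   # 'invest', with these numbers
```

On the Bayesian view this is all there is to rational choice under uncertainty; the question raised below is whether the probabilities fed into such a calculation can always be meaningfully grounded in the first place.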
Bayesianism reduces questions of rationality to questions of internal consistency (coherence) of beliefs, but – even granted this questionable reductionism – do rational agents really have to be Bayesian? As I have been arguing elsewhere (e.g. here and here) there is no strong warrant for believing so, but in this post I want to make a point about the informational requirement that the economic ilk of Bayesianism presupposes.
In many of the situations that are relevant to economics, one could argue that there simply is not enough adequate and relevant information to ground beliefs of a probabilistic kind, and that in those situations it is not really possible, in any relevant way, to represent an individual's beliefs in a single probability measure.
Say you have come to learn (based on your own experience and tons of data) that the probability of you becoming unemployed in Sweden is 10%. Having moved to another country (where you have no experience of your own and no data), you have no information on unemployment and a fortiori nothing on which to base any probability estimate. A Bayesian would, however, argue that if you are rational you have to assign probabilities to the mutually exclusive alternative outcomes, and that these have to add up to 1. That is, in this case – with nothing but symmetry to go on – a rational individual would have to assign probability 50% to becoming unemployed and 50% to becoming employed.
That feels intuitively wrong, though, and I guess most people would agree. Bayesianism cannot distinguish symmetry-based probabilities grounded in information from symmetry-based probabilities that merely reflect an absence of information. In these kinds of situations most of us would rather say that it is simply irrational to be a Bayesian, and better instead to admit that we "simply do not know" or that we feel ambiguous and undecided. Arbitrary and ungrounded probability claims are more irrational than being undecided in the face of genuine uncertainty, so if there is not sufficient information to ground a probability distribution it is better to acknowledge that simpliciter, rather than pretending to possess a certitude that we simply do not possess.
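The point can be made concrete with a small numerical sketch (my own illustration, with hypothetical numbers). One agent assigns a probability of 0.5 out of sheer ignorance, another assigns 0.5 after having seen a mountain of (made-up) data – and the single number that Bayesian decision theory tells them to act on is exactly the same in both cases:

```python
from scipy.stats import beta

# Agent A: uniform Beta(1, 1) prior and no observations at all.
# Agent B: hypothetically has observed 500 unemployed and 500 employed workers.
agents = {"A (no information)": beta(1, 1),
          "B (lots of data)":   beta(1 + 500, 1 + 500)}

for name, posterior in agents.items():
    lo, hi = posterior.interval(0.95)
    print(f"Agent {name}: P(unemployed) = {posterior.mean():.2f}, "
          f"95% credible interval = ({lo:.2f}, {hi:.2f})")
# Both agents end up acting on P(unemployed) = 0.50. The evidential grounds behind
# the two numbers are utterly different (the intervals show it), but the probability
# an expected-utility maximizer acts on is identical in both cases.
```

The posterior spread does differ, of course, but as soon as beliefs are boiled down to the single probability that enters the expected-utility calculation, the difference between knowing and not knowing has vanished.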
I think this critique of Bayesianism is in accordance with the views of John Maynard Keynes's A Treatise on Probability (1921) and General Theory (1937). According to Keynes we live in a world permeated by unmeasurable uncertainty – not quantifiable stochastic risk – which often forces us to make decisions based on anything but rational expectations. Sometimes we "simply do not know." Keynes would not have accepted the view of Bayesian economists, according to whom expectations "tend to be distributed, for the same information set, about the prediction of the theory." Keynes, rather, thinks that we base our expectations on the confidence or "weight" we put on different events and alternatives. To Keynes expectations are a question of weighing probabilities by "degrees of belief" – beliefs that have precious little to do with the kind of stochastic probabilistic calculations made by the rational agents modeled by Bayesian economists.