In practice Prof. Tinbergen seems to be entirely indifferent whether or not his basic factors are independent of one another … But my mind goes back to the days when Mr. Yule sprang a mine under the contraptions of optimistic statisticians by his discovery of spurious correlation. In plain terms, it is evident that if what is really the same factor is appearing in several places under various disguises, a free choice of regression coefficients can lead to strange results. It becomes like those puzzles for children where you write down your age, multiply, add this and that, subtract something else, and eventually end up with the number of the Beast in Revelation.
Prof. Tinbergen explains that, generally speaking, he assumes that the correlations under investigation are linear … I have not discovered any example of curvilinear correlation in this book, and he does not tell us what kind of evidence would lead him to introduce it. If, as he suggests above, he were in such cases to use the method of changing his linear coefficients from time to time, it would certainly seem that quite easy manipulation on these lines would make it possible to fit any explanation to any facts. Am I right in thinking that the uniqueness of his results depends on his knowing beforehand that the correlation curve must be a particular kind of function, whether linear or some other kind?
Apart from this, one would have liked to be told emphatically what is involved in the assumption of linearity. It means that the quantitative effect of any causal factor on the phenomenon under investigation is directly proportional to the factor’s own magnitude … But it is a very drastic and usually improbable postulate to suppose that all economic forces are of this character, producing independent changes in the phenomenon under investigation which are directly proportional to the changes in themselves; indeed, it is ridiculous. Yet this is what Prof. Tinbergen is throughout assuming …
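Spelled out, the postulate Keynes objects to is the fixed-coefficient linear regression equation (a standard textbook formulation, not Tinbergen’s own notation):

$$y_t = \alpha + \beta_1 x_{1t} + \beta_2 x_{2t} + \cdots + \beta_k x_{kt} + \varepsilon_t$$

with every $\beta_i$ a constant, so that each factor’s effect on $y$ is directly proportional to the factor’s own magnitude, and the factors’ contributions simply add up.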
Keynes’ comprehensive critique of econometrics and the assumptions it is built around — completeness, measurability, independence, homogeneity, and linearity — is still valid today.
Most work in econometrics is done on the assumption that the researcher has a theoretical model that is ‘true.’ But to think that we are able to construct a model in which all relevant variables are included and the functional relationships between them are correctly specified is not only a belief without support; it is a belief impossible to support.
The theories we work with when building our econometric regression models are insufficient. No matter what we study, there are always some variables missing, and we don’t know the correct way to functionally specify the relationships between the variables.
Every econometric model constructed is misspecified. There is always an endless list of possible variables to include, and endless possible ways to specify the relationships between them. So every applied econometrician comes up with his own specification and ‘parameter’ estimates. The econometric Holy Grail of consistent and stable parameter values is nothing but a dream.
A rigorous application of econometric methods in economics really presupposes that the phenomena of our real-world economies are ruled by stable causal relations between variables. Parameter values estimated in specific spatio-temporal contexts are presupposed to be exportable to totally different contexts. To warrant this assumption, however, one has to convincingly establish that the targeted acting causes are stable and invariant, so that they maintain their parametric status after the bridging. The endemic lack of predictive success of the econometric project indicates that this hope of finding fixed parameters is a hope for which there really is no other ground than hope itself.
The theoretical conditions that have to be fulfilled for econometrics to really work are nowhere close to being met in reality. Making outlandish statistical assumptions does not provide a solid ground for doing relevant social science and economics. Although econometric methods have become the most widely used quantitative methods in economics today, it is still a fact that the inferences made from them are as a rule invalid.
Econometrics is basically a deductive method. Given the assumptions it delivers deductive inferences. The problem, of course, is that we will never completely know when the assumptions are right. Conclusions can only be as certain as their premises — and that also applies to econometrics.
Reading an applied econometrics paper could leave you with the impression that the economist (or any social science researcher) first formulated a theory, then built an empirical test based on the theory, then tested the theory. But in my experience what generally happens is more like the opposite: with some loose ideas in mind, the econometrician runs a lot of different regressions until something looks plausible, then tries to fit it into a theory (existing or new) … Statistical theory itself tells us that if you do this for long enough, you will eventually find something plausible by pure chance!
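The ‘by pure chance’ point is easy to demonstrate by simulation. A minimal sketch (all data are pure noise, so every ‘finding’ is spurious by construction):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_obs, n_specs = 100, 200

# An outcome and 200 candidate regressors, all independent pure noise:
y = rng.standard_normal(n_obs)
X = rng.standard_normal((n_obs, n_specs))

significant = 0
for j in range(n_specs):
    # Bivariate regression of y on the j-th noise variable.
    result = stats.linregress(X[:, j], y)
    if result.pvalue < 0.05:
        significant += 1

print(f"'Significant' regressors found: {significant} of {n_specs}")
# Roughly 5% (about 10 here) clear p < 0.05 by chance alone.
```

Run enough specifications and something ‘plausible’ always turns up.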
This is bad news because, as tempting as that final, pristine-looking causal effect is, readers have no way of knowing how it was arrived at. There are several ways I’ve seen to guard against this:
(1) Use a multitude of empirical specifications to test the robustness of the causal links, and pick the one with the best predictive power (a minimal sketch of this follows the list below) …
(2) Have researchers submit their paper for peer review before they carry out the empirical work, detailing the theory they want to test, why it matters and how they’re going to do it. Reasons for inevitable deviations from the research plan should be explained clearly in an appendix by the authors and (re-)approved by referees.
(3) Insist that the paper be replicated. Firstly, by having the authors submit their data and code and seeing if referees can replicate it (think this is a low bar? Most empirical research in ‘top’ economics journals can’t even manage it). Secondly — in the truer sense of replication — wait until someone else, with another dataset or method, gets the same findings in at least a qualitative sense. The latter might be too much to ask of researchers for each paper, but it is a good thing to have in mind as a reader before you are convinced by a finding.
All three of these should, in my opinion, be a prerequisite for research that uses econometrics …
Naturally, this would result in a lot more null findings and probably a lot less research. Perhaps it would also result in fewer papers that attempt to tell the entire story: that is, papers that go all the way from building a new model to finding (surprise!) that even the most rigorous empirical methods support it.
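To make suggestion (1) concrete, here is a minimal sketch of comparing two candidate specifications by out-of-sample predictive power via cross-validation; the data, the variable names, and both specifications are hypothetical:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 300

# Hypothetical data: the outcome depends on both x1 and x2.
x1 = rng.standard_normal(n)
x2 = rng.standard_normal(n)
y = 1.0 + 2.0 * x1 + 0.5 * x2 + rng.standard_normal(n)

specs = {"spec A (x1 only)": np.column_stack([x1]),
         "spec B (x1 and x2)": np.column_stack([x1, x2])}

for name, X in specs.items():
    # 5-fold cross-validated predictive performance (mean squared error).
    mse = -cross_val_score(LinearRegression(), X, y,
                           scoring="neg_mean_squared_error", cv=5).mean()
    print(f"{name}: out-of-sample MSE = {mse:.3f}")
```

The specification that predicts better out of sample (here spec B) is the one a robustness-first workflow would keep.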
Good suggestions, but unfortunately there are many more deep problems with econometrics that have to be ‘solved.’
In econometrics one often gets the feeling that many of its practitioners think of it as a kind of automatic inferential machine: input data and out comes causal knowledge. This is like pulling a rabbit from a hat. Great — but first you have to put the rabbit in the hat. And this is where assumptions come into the picture. The assumption of imaginary ‘superpopulations’ is one of the many dubious assumptions used in modern econometrics.
Misapplication of inferential statistics to non-inferential situations is a non-starter for doing proper science. And when choosing which models to use in our analyses, we cannot get around the fact that the evaluation of our hypotheses, explanations, and predictions cannot be made without reference to a specific statistical model or framework. The probabilistic-statistical inferences we make from our samples depend decisively on what population we choose to refer to. The reference class problem shows that there usually are many such populations to choose from, and that the one we choose decides which probabilities we come up with and a fortiori which predictions we make. Not consciously contemplating the relativity effects this choice of ‘nomological-statistical machines’ has is probably one of the reasons econometricians have a false sense of the amount of uncertainty that really afflicts their models.
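A toy illustration of the reference class problem (all numbers synthetic): one and the same individual is assigned quite different probabilities depending on which population we choose to embed her in.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Synthetic population: the event risk depends on two traits.
smoker = rng.random(n) < 0.3
over60 = rng.random(n) < 0.25
p_event = 0.02 + 0.05 * smoker + 0.04 * over60
event = rng.random(n) < p_event

# The 'probability' for a smoker over 60 shifts with the reference class:
print(f"P(event | everyone):         {event.mean():.4f}")
print(f"P(event | smokers):          {event[smoker].mean():.4f}")
print(f"P(event | smokers over 60):  {event[smoker & over60].mean():.4f}")
```

None of the three numbers is ‘the’ probability; each is relative to a chosen nomological-statistical machine.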
As economists and econometricians we have to confront the all-important question of how to handle uncertainty and randomness. Should we define randomness with probability? If we do, we have to accept that to speak of randomness we also have to presuppose the existence of nomological probability machines, since probabilities cannot be spoken of – and actually, to be strict, do not at all exist – without specifying such system-contexts. Accepting Haavelmo’s domain of probability theory and sample space of infinite populations – just as Fisher’s ‘hypothetical infinite population,’ von Mises’s ‘collective’ or Gibbs’s ‘ensemble’ – also implies that judgments are made on the basis of observations that are actually never made! Infinitely repeated trials or samplings never take place in the real world. So that cannot be a sound inductive basis for a science with aspirations of explaining real-world socio-economic processes, structures or events. It’s not tenable.
Economists — and econometricians — have (uncritically and often without argument) come to simply assume that one can apply probability distributions from statistical theory to their own area of research. However, fundamental problems arise when you try to apply statistical models outside overly simple nomological machines like coin tossing and roulette wheels.
Of course one could arguably treat our observational or experimental data as random samples from real populations. But probabilistic econometrics does not content itself with such populations. Instead it creates imaginary populations of ‘parallel universes’ and assumes that our data are random samples from them. This is actually nothing but hand-waving! Doing econometrics, it is always wise to remember C. S. Peirce’s remark that universes are not as common as peanuts …
It is worth noting that the decline in knowledge seen in the international surveys is not at all reflected in the results on the national tests (I am aware that the axes in the figure below are not optimally scaled). One interpretation of this is that teaching today is so geared towards the tests that pupils manage fairly well on them despite a falling underlying level of knowledge. When pupils are confronted with new types of tasks, however, they fall short. If so, this would suggest that the design of the tests makes them easy to see through and to learn to perform well on, without pupils having acquired any deeper subject knowledge or understanding.
These figures show what ought to be common knowledge in educational research: the evaluation system shapes what actually goes on in schools to a far higher degree than steering documents and general objectives do … Pupils tend to get better at precisely the type of test that is used as the evaluation instrument, but not necessarily at other types of tests. Even if a larger element of ‘teaching to the test’ is not bad by definition, the Swedish experience does not suggest that it is an obvious route to better results.
It is of course possible that the development would have been even worse without the increased importance of the national tests, but at the same time there is an obvious possibility that the opposite is true.
The results reported here suggest that an exam school education produces only scattered gains for applicants, even among students with baseline scores close to or above the mean in the target school. Because the exam school experience is associated with sharp increases in peer achievement, these results weigh against the importance of peer effects in the education production function …
Of course, test scores and peer effects are only part of the exam school story. It may be that preparation for exam school entrance is itself worthwhile … The many clubs and activities found at some exam schools may expose students to ideas and concepts not easily captured by achievement tests or our post-secondary outcomes. It is also possible that exam school graduates earn higher wages, a question we plan to explore in future work. Still, the estimates reported here suggest that any labor market gains are likely to come through channels other than peer composition and increased cognitive achievement …
Our results are also relevant to the economic debate around school quality and school choice … As with the jump in house prices at school district boundaries, heavy rates of exam school oversubscription suggest that parents believe peer composition matters a great deal for their children’s welfare. The fact that we find little support for causal peer effects suggests that parents either mistakenly equate attractive peers with high value added, or that they value exam schools for reasons other than their impact on learning. Both of these scenarios reduce the likelihood that school choice in and of itself has strong salutary demand-side effects in education production.
Results based on one of the latest fads in econometrics — regression discontinuity design. If unfamiliar with the ‘technique,’ here’s a video giving some of the basics:
[h/t Eric Schüldt]
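For readers who prefer code to video, here is a minimal sketch of a sharp regression discontinuity estimate on simulated data; the admission-cutoff story, the bandwidth, and all the numbers are hypothetical:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000

# Hypothetical setup: an entrance-exam score decides admission at a cutoff.
score = rng.uniform(-1, 1, n)          # running variable, centred at the cutoff
admitted = (score >= 0).astype(float)  # treatment: admitted iff score >= 0
true_jump = 0.4                        # the discontinuity we build in
outcome = 2.0 + 1.5 * score + true_jump * admitted + 0.5 * rng.standard_normal(n)

# Sharp RD: regress the outcome on treatment, the running variable, and
# their interaction, keeping only observations near the cutoff (fixed bandwidth).
bw = 0.5
keep = np.abs(score) < bw
X = sm.add_constant(np.column_stack([admitted[keep], score[keep],
                                     admitted[keep] * score[keep]]))
fit = sm.OLS(outcome[keep], X).fit()
print(f"Estimated jump at the cutoff: {fit.params[1]:.3f} (true value: {true_jump})")
```

The whole identification strategy rests on the assumption that nothing else jumps at the cutoff — exactly the kind of assumption the discussion above urges us to scrutinise.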
When a hot new tool arrives on the scene, it should extend the frontiers of economics and pull previously unanswerable questions within reach. What might seem faddish could in fact be economists piling in to help shed light on the discipline’s darkest corners. Some economists, however, argue that new methods also bring new dangers; rather than pushing economics forward, crazes can lead it astray, especially in their infancy …
A paper by Angus Deaton, a Nobel laureate and expert data digger, and Nancy Cartwright, an economist (sic!) at Durham University, argues that randomised control trials, a current darling of the discipline, enjoy misplaced enthusiasm. RCTs involve randomly assigning a policy to some people and not to others, so that researchers can be sure that differences are caused by the policy. Analysis is a simple comparison of averages between the two. Mr Deaton and Ms Cartwright have a statistical gripe; they complain that researchers are not careful enough when calculating whether two results are significantly different from one another. As a consequence, they suspect that a sizeable portion of published results in development and health economics using RCTs are “unreliable”.
With time, economists should learn when to use their shiny new tools. But there is a deeper concern: that fashions and fads are distorting economics, by nudging the profession towards asking particular questions, and hiding bigger ones from view. Mr Deaton’s and Ms Cartwright’s fear is that RCTs yield results while appearing to sidestep theory, and that “without knowing why things happen and why people do things, we run the risk of worthless causal (‘fairy story’) theorising, and we have given up on one of the central tasks of economics.” Another fundamental worry is that by offering alluringly simple ways of evaluating certain policies, economists lose sight of policy questions that are not easily testable using RCTs, such as the effects of institutions, monetary policy or social norms.
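The ‘simple comparison of averages’ the article describes is, in code, just a two-sample test, and the significance calculation is where things can go wrong. One well-known pitfall, unequal variances and group sizes across arms, is sketched below on simulated data (my illustration, not necessarily the specific issue Deaton and Cartwright have in mind):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical RCT: outcomes for randomly assigned treatment and control arms.
treated = 1.0 + 2.0 * rng.standard_normal(100)   # small, high-variance arm
control = 0.8 + 0.5 * rng.standard_normal(900)   # large, low-variance arm

diff = treated.mean() - control.mean()
# Welch's t-test does not assume equal variances across arms; the textbook
# pooled-variance test (equal_var=True) can badly misstate the significance
# when both the variances and the sample sizes differ.
_, p_welch = stats.ttest_ind(treated, control, equal_var=False)
_, p_pooled = stats.ttest_ind(treated, control, equal_var=True)
print(f"difference in means: {diff:.3f}")
print(f"Welch p-value:  {p_welch:.4f}")
print(f"pooled p-value: {p_pooled:.4f}")
```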
Since the global financial crisis of 2008, economics has come under the spotlight. Student movements and heterodox economists have criticised the dominant economic paradigm and demanded greater pluralism. Recent political developments have laid bare the shortcomings of neoliberalism and raised the question of its connection to economic science. In Fronesis no. 54–55 we take a closer look at the conditions of economic knowledge.
The left has long directed its social-theoretical and political attention towards cultural and symbolic aspects of power and domination, but has more or less abandoned the economic field. With Fronesis no. 54–55 we want to move beyond a simple critique of economics and deepen the understanding of the conditions of economic knowledge. The issue introduces to a Swedish audience a number of key contemporary theorists who shed light on these questions from different perspectives.
Kajsa Borgnäs and Anders Hylmö: Economic knowledge in transition
Anders Hylmö: Modern economics as a scientific style and discipline
Dimitris Milonakis: Lessons from the crisis
Marion Fourcade, Étienne Ollion and Yann Algan: The superiority of economists
Kajsa Borgnäs: Outside the box, or What is heterodox economics?
Lars Pålsson Syll: Tony Lawson and the critique of economic science – an introduction
Tony Lawson: The nature of heterodox economics
Josef Taalbi: Realistic economic theory?
Erik Bengtsson: The material conditions of heterodox economics
Julie A. Nelson: Gender metaphors and economics
Linda Nyberg: Neoliberalism, politics and economics
Philip Mirowski: The political movement that dared not speak its own name
Jason Read: A genealogy of homo oeconomicus
Kajsa Borgnäs: The political power of scientific economics
Daniel Hirschman and Elizabeth Popp Berman: Do economists make policy?
Peter Gerlach, Marika Lindgren Åsbrink and Ola Pettersson, interviewed by Daniel Mathisen: All else equal
My idea is to examine the most well-known works of a selection of the most famous neoclassical economists in the period from 1945 to the present.
My survey of well-known works by four famous mathematical neoclassical economists (Samuelson, Arrow, Debreu, Prescott), all of whom won the Nobel Prize in economics, has not revealed any precise explanations or successful predictions. This supports my conjecture that the use of mathematics in mainstream (or neoclassical) economics has not produced any precise explanations or successful predictions. This, I would claim, is the main difference between neoclassical economics and physics, where both precise explanations and successful predictions have often been obtained by the use of mathematics.
In science, courage is to follow the motto of the Enlightenment and Kant’s dictum — Sapere aude! To use your own understanding, having the courage to think for yourself and question ‘received opinion,’ authority, or orthodoxy.
In our daily lives, courage is the capability to confront fear: when facing the powerful and mighty, not to step back but to stand up for one’s right not to be humiliated or abused in any way by the rich and powerful.
Dignity, a better life, justice, and the rule of law are things worth fighting for. Refusing to step back creates courageous acts that stay in our memories and mean something. As when Rosa Parks, sixty years ago on December 1, 1955, in Montgomery, Alabama, refused to give up her seat to make room for a white passenger.
Courage is to do the right thing in spite of danger and fear. To keep on even when given opportunities to turn back. Like in the great stories. The ones where people have lots of chances to turn back — but don’t.
As when Sir Nicholas Winton organised the rescue of 669 children destined for Nazi concentration camps during World War II.
Or as when Ernest Shackleton, in April 1916, aboard the small boat ‘James Caird’, spent 16 days crossing 1,300 km of ocean to reach South Georgia, then trekked across the island to a whaling station, and finally was able to rescue the remaining men of the crew of ‘Endurance’ left on Elephant Island.
Not a single member of the expedition died.
What we do in life echoes in eternity.