The perils of calling your pet cat a dog …

27 October, 2016 at 11:05 | Posted in Statistics & Econometrics | Leave a comment

Since econometrics doesn’t content itself with only making optimal predictions, but also aspires to explain things in terms of causes and effects, econometricians need loads of assumptions — the most important of which are additivity and linearity. Important, simply because if they are not true, your model is invalid and descriptively incorrect. And when the model is wrong — well, then it’s wrong.

The assumption of additivity and linearity means that the outcome variable is, in reality, linearly related to any predictors … and that if you have several predictors then their combined effect is best described by adding their effects together …

This assumption is the most important because if it is not true then even if all other assumptions are met, your model is invalid because you have described it incorrectly. It’s a bit like calling your pet cat a dog: you can try to get it to go in a kennel, or to fetch sticks, or to sit when you tell it to, but don’t be surprised when its behaviour isn’t what you expect because even though you’ve called it a dog, it is in fact a cat. Similarly, if you have described your statistical model inaccurately it won’t behave itself and there’s no point in interpreting its parameter estimates or worrying about significance tests of confidence intervals: the model is wrong.

Andy Field
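
To make the quoted point concrete, here is a minimal sketch with simulated data (hypothetical numbers, not from Field’s book): the outcome is generated by a pure interaction between two predictors, but the researcher fits the usual additive linear model.

```python
# A minimal sketch with simulated (hypothetical) data: the outcome is driven
# purely by an interaction between two predictors, but we fit the standard
# additive linear model anyway.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1_000
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 2 * x1 * x2 + rng.normal(size=n)          # the true 'effect' is an interaction only

X_additive = sm.add_constant(np.column_stack([x1, x2]))
fit = sm.OLS(y, X_additive).fit()

print(fit.params)     # both slope estimates hover around zero
print(fit.rsquared)   # close to zero, although x1 and x2 jointly determine y
```

The parameter estimates and their significance tests are perfectly computable, but because the model is described incorrectly they say nothing about how y is actually generated.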

Swedish university education in free fall (II)

26 October, 2016 at 18:02 | Posted in Education & School | Leave a comment

I recently came across a compilation of gender-studies doctoral dissertations from 2014. Plenty of gems. But one of them was particularly remarkable, namely number 19 in the list. It concerns the doctoral dissertation “Rum, rytm och resande” from Linköping University (pdf). The compilation summarizes it as follows:

“The dissertation examines railway stations as physical places and social spaces from a gender perspective. Kimstad commuter rail station, Norrköping railway station and Stockholm Central Station are included in the study. The results show that the railway stations reproduce the gendered power order and that this affects both the men and the women who spend time there.”

So a doctoral student has spent at least four or five years, and several million kronor of taxpayers’ money, visiting railway stations and concluding that “the railway stations reproduce the gendered power order.” The doctoral student’s supervisor planned this work, and the supervisor’s superiors approved it. On top of that, an examining committee with external reviewers has assessed the dissertation and certified that it is up to standard.

Well, I’ll be darned. But it gets worse.

The dissertation is summarized in English. The summary begins:

“Results from the study show that individuals in different ways are affected by gendered power relations that dwell in rhythms of collective believes and in shape of materialized objects that encounter the commuters when visiting the railway station. While the rhythms of masculine seriality contains believes of males as potentially violent, as defenders and as bread winners, the rhythms of female seriality contains believes of women as primary mothers and housewives, of women as primary victim of sexual violence and of objectification of women’s bodies as either decent or as sexually available to heterosexual men”.

Rhythms of the gendered power order. How poetic.


Faced with this kind of self-satisfied pseudo-scientific drivel, one can only bury one’s face in one’s hands.

My own favourite example of this ‘scientific’ charlatanry comes from an issue of Pedagogisk Forskning i Sverige (2-3 2014), in which the author of the article “En pedagogisk relation mellan människa och häst. På väg mot en pedagogisk filosofisk utforskning av mellanrummet” offers the following interesting ‘programmatic statement’:

With a posthumanist approach, I illuminate and reflect on how both human and horse transcend their modes of being and how this opens up an in-between space with dimensions of subjectivity, corporeality and mutuality.

And then they say that Swedish university education is in crisis. I wonder why …

What it takes to make economics a real science

26 October, 2016 at 09:23 | Posted in Economics | 1 Comment

What is science? One brief definition runs: “A systematic knowledge of the physical or material world.” Most definitions emphasize the two elements in this definition: (1) “systematic knowledge” about (2) the real world. Without pushing this definitional question to its metaphysical limits, I merely want to suggest that if economics is to be a science, it must not only develop analytical tools but must also apply them to a world that is now observable or that can be made observable through improved methods of observation and measurement. Or in the words of the Hungarian mathematical economist Janos Kornai, “In the real sciences, the criterion is not whether the proposition is logically true and tautologically deducible from earlier assumptions. The criterion of ‘truth’ is, whether or not the proposition corresponds to reality” …


One of our most distinguished historians of economic thought, George Stigler, has stated that: “The dominant influence upon the working range of economic theorists is the set of internal values and pressures of the discipline. The subjects of study are posed by the unfolding course of scientific developments.” He goes on to add: “This is not to say that the environment is without influence …” But, he continues, “whether a fact or development is significant depends primarily on its relevance to current economic theory.” What a curious relating of rigor to relevance! Whether the real world matters depends presumably on “its relevance to current economic theory.” Many if not most of today’s economic theorists seem to agree with this ordering of priorities …

Today, rigor competes with relevance in macroeconomic and monetary theory, and in some lines of development macro and monetary theorists, like many of their colleagues in micro theory, seem to consider relevance to be more or less irrelevant … The theoretical analysis in much of this literature rests on assumptions that also fly in the face of the facts … Another related recent development in which theory proceeds with impeccable logic from unrealistic assumptions to conclusions that contradict the historical record, is the recent work on rational expectations …

I have scolded economists for what I think are the sins that too many of them commit, and I have tried to point the way to at least partial redemption. This road to salvation will not be an easy one for those who have been seduced by the siren of mathematical elegance or those who all too often seek to test unrealistic models without much regard for the quality or relevance of the data they feed into their equations. But let us all continue to worship at the altar of science. I ask only that our credo be: “relevance with as much rigor as possible,” and not “rigor regardless of relevance.” And let us not be afraid to ask — and to try to answer the really big questions.

Robert A. Gordon

Swedish university education in free fall

25 October, 2016 at 17:48 | Posted in Education & School | Leave a comment

While the hollowing-out of higher education continues, there is political pressure for more student places. But one-sided investment in more places benefits neither society, the universities, nor the students who receive an education of dubious quality. In the current situation, higher education’s limited budget must be used to strengthen the quality of education, not to expand the system further …

The quality of higher education is in question today. The doubtful quality shows up, among other places, in the Swedish Higher Education Authority’s quality evaluations, in which almost one programme in five failed. A contributing cause of the lack of quality may be that the number of students has grown faster than resources allow. On top of that, resources have been eroded, since per-student funding is not adjusted for actual cost increases and is further reduced by a productivity deduction.

At the same time, IFAU (the Institute for Evaluation of Labour Market and Education Policy) shows that the expansion of higher education has brought lower student quality. This is a natural consequence of the fact that the transition to higher studies is already high among young people in the top grade brackets. There is no surplus of high-performing students to admit when more places are added, so students with ever weaker prior knowledge are admitted as higher education grows. Moreover, the quality of compulsory and upper-secondary schooling is questioned across the board. Teachers must spend more and more time helping students through their programmes, which takes resources from other teaching. Nor can it be ruled out that the demands placed on students are being lowered. Higher throughput means more resources for the universities, and today’s funding system unfortunately gives universities an incentive to pass students who do not meet the requirements.

Göran Arrius  Håkan Regnér  Linda Simonsen

Well roared! Swedish universities and colleges are today wrestling with many problems. Two of the more acute ones are how to handle a situation of shrinking finances, and the fact that more and more students arrive poorly prepared for university studies.

Why have things turned out this way? Yours truly has on repeated occasions been approached by the media about these questions, and has then, beyond ‘the usual suspects’, also tried to raise an issue that, for fear of not being ‘politically correct’, is seldom raised in the debate.

Over the past fifty years we have seen a veritable explosion of new student groups going on to university and college studies. In one way this is clearly gratifying. Today we have as many doctoral students in our education system as we had upper-secondary pupils in the 1950s. But this educational expansion has unfortunately come largely at the price of diminished opportunities for students to meet the competence requirements of higher education. Many programmes have given way and lowered their standards.

Unfortunately, the students who come to our universities and colleges are, on the whole, ever less well equipped for their studies. The restructuring of the school system in the form of decentralization, deregulation and management by objectives has, contrary to political promises, not delivered. In step with the post-secondary expansion, a corresponding contraction of knowledge among large groups of students has taken place. The school policies that have led to this situation hit hardest those they claim to protect: those with little or no ‘cultural capital’ in their baggage from home.

Against this background, it is actually remarkable that what the educational explosion itself may lead to has not been problematized to a greater extent.

Since fifty years ago our universities educated only a small fraction of the population, it is no bold guess, assuming that ‘aptitude’ in a population is at least approximately normally distributed, that the lion’s share of those students lay, aptitude-wise, to the right of the midpoint of the normal curve. If today we admit five times as many students to our colleges and universities, we can, under the same assumption, hardly expect an equally large share of them to be individuals to the right of the midpoint of the normal curve. Reasonably, this should, ceteris paribus, mean that as the proportion of the population going on to college and university increases, so do the difficulties many of them face in meeting traditionally high academic standards.
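
A rough back-of-the-envelope calculation under the same normality assumption makes the point concrete (the admitted shares below, 5 and 25 percent, are purely illustrative numbers chosen to mirror a fivefold expansion):

```python
# Mean 'aptitude' (in standard deviations above the population mean) of the
# admitted group, if admission simply takes the top fraction of a standard
# normal distribution. Illustrative assumption, not data.
from scipy.stats import norm

def mean_of_admitted(admitted_fraction):
    """E[X | X > z] = phi(z) / (1 - Phi(z)) for a standard normal X."""
    z = norm.ppf(1 - admitted_fraction)        # admission cut-off
    return norm.pdf(z) / admitted_fraction     # truncated-normal mean

for frac in (0.05, 0.25):
    print(f"admit top {frac:.0%}: mean aptitude = {mean_of_admitted(frac):.2f} sd")
# admit top 5%:  mean aptitude = 2.06 sd
# admit top 25%: mean aptitude = 1.27 sd
```

Under these stylized assumptions the average preparedness of the admitted group falls markedly even though nothing has changed in the underlying population, which is exactly the ceteris paribus point made above.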

If so, the state would have yet another strong reason to increase resources to colleges and universities, instead of, as today, running programmes on starvation rations with few teacher-led lectures in record-sized student groups. With new categories of students, increasingly recruited from homes unaccustomed to study, it is hard to see how, with ever tighter resource frames, we are to solve the dilemma of raising demands on students whose academic credentials keep getting weaker.

Statistical significance tests do not validate models

25 October, 2016 at 00:20 | Posted in Economics, Statistics & Econometrics | 1 Comment

The word ‘significant’ has a special place in the world of statistics, thanks to a test that researchers use to avoid jumping to conclusions from too little data. Suppose a researcher has what looks like an exciting result: She gave 30 kids a new kind of lunch, and they all got better grades than a control group that didn’t get the lunch. Before concluding that the lunch helped, she must ask the question: If it actually had no effect, how likely would I be to get this result? If that probability, or p-value, is below a certain threshold — typically set at 5 percent — the result is deemed ‘statistically significant.’

Clearly, this statistical significance is not the same as real-world significance — all it offers is an indication of whether you’re seeing an effect where there is none. Even this narrow technical meaning, though, depends on where you set the threshold at which you are willing to discard the ‘null hypothesis’ — that is, in the above case, the possibility that there is no effect. I would argue that there’s no good reason to always set it at 5 percent. Rather, it should depend on what is being studied, and on the risks involved in acting — or failing to act — on the conclusions …

This example illustrates three lessons. First, researchers shouldn’t blindly follow convention in picking an appropriate p-value cutoff. Second, in order to choose the right p-value threshold, they need to know how the threshold affects the probability of a Type II error. Finally, they should consider, as best they can, the costs associated with the two kinds of errors.

Statistics is a powerful tool. But, like any powerful tool, it can’t be used the same way in all situations.

Narayana Kocherlakota
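
A small numerical sketch of the trade-off Kocherlakota points to. The effect size and group sizes below are invented for illustration (a true effect of 0.3 standard deviations, 30 children per group, one-sided z-test): lowering the significance threshold protects against Type I errors but sharply raises the probability of a Type II error, i.e. of missing a real effect.

```python
# Approximate power calculation for a one-sided two-sample z-test.
# All numbers are illustrative assumptions, not taken from the quoted example.
from scipy.stats import norm

effect = 0.3                                   # true difference, in sd units
n_per_group = 30
se = (2 / n_per_group) ** 0.5                  # std. error of the mean difference

for alpha in (0.20, 0.10, 0.05, 0.01):
    z_crit = norm.ppf(1 - alpha)               # rejection threshold
    power = 1 - norm.cdf(z_crit - effect / se)
    print(f"alpha = {alpha:.2f} -> Type II error = {1 - power:.2f}")
```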

Good lessons indeed — underlining how important it is not to equate science with statistical calculation. All science entails human judgement, and using statistical models doesn’t relieve us of that necessity. When we work with misspecified models, the scientific value of significance testing is actually zero, even though the statistical inferences we make may be formally valid. Statistical models and concomitant significance tests are no substitutes for doing science.

In its standard form, a significance test is not the kind of ‘severe test’ that we are looking for in our search for ways to confirm or disconfirm empirical scientific hypotheses. This is problematic for many reasons, one being that there is a strong tendency to accept the null hypothesis when it cannot be rejected at the standard 5% significance level. In their standard form, significance tests bias against new hypotheses by making it hard to disconfirm the null hypothesis.

And as shown over and over again when it is applied, people have a tendency to read “not disconfirmed” as ‘probably confirmed.’ Standard scientific methodology tells us that when there is only say a 10 % probability that pure sampling error could account for the observed difference between the data and the null hypothesis, it would be more ‘reasonable’ to conclude that we have a case of disconfirmation. Especially if we perform many independent tests of our hypothesis and they all give about the same 10 % result as our reported one, I guess most researchers would count the hypothesis as even more disconfirmed.

We should never forget that the underlying parameters we use when performing significance tests are model constructions. Our p-values mean next to nothing if the model is wrong. And most importantly — statistical significance tests DO NOT validate models!

In journal articles a typical regression equation will have an intercept and several explanatory variables. The regression output will usually include an F-test, with p - 1 degrees of freedom in the numerator and n - p in the denominator. The null hypothesis will not be stated. The missing null hypothesis is that all the coefficients vanish, except the intercept.

If F is significant, that is often thought to validate the model. Mistake. The F-test takes the model as given. Significance only means this: if the model is right and the coefficients are 0, it is very unlikely to get such a big F-statistic. Logically, there are three possibilities on the table:
i) An unlikely event occurred.
ii) Or the model is right and some of the coefficients differ from 0.
iii) Or the model is wrong.
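
A short simulation (hypothetical data, in the spirit of the passage above) shows why a significant F proves nothing about the model: here the data are generated by a nonlinear process, so the fitted straight-line model is wrong, and yet the F-test is overwhelmingly ‘significant’.

```python
# The F-test takes the linear model as given; it cannot tell us the model is wrong.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = rng.normal(size=500)
y = np.exp(x) + rng.normal(size=500)           # truth: a nonlinear relationship

fit = sm.OLS(y, sm.add_constant(x)).fit()      # fit the (misspecified) straight line

print(fit.fvalue)     # huge F-statistic ...
print(fit.f_pvalue)   # ... and a vanishingly small p-value, for a wrong model
```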

Why p-values cannot be taken at face value

24 October, 2016 at 09:00 | Posted in Economics, Statistics & Econometrics | 5 Comments

A researcher is interested in differences between Democrats and Republicans in how they perform in a short mathematics test when it is expressed in two different contexts, either involving health care or the military. The research hypothesis is that context matters, and one would expect Democrats to do better in the health-care context and Republicans in the military context … At this point there is a huge number of possible comparisons that can be performed—all consistent with the data. For example, the pattern could be found (with statistical significance) among men and not among women—explicable under the theory that men are more ideological than women. Or the pattern could be found among women but not among men—explicable under the theory that women are more sensitive to context, compared to men … A single overarching research hypothesis—in this case, the idea that issue context interacts with political partisanship to affect mathematical problem-solving skills—corresponds to many different possible choices of the decision variable.

At one level, these multiplicities are obvious. And it would take a highly unscrupulous researcher to perform test after test in a search for statistical significance … Given a particular data set, it is not so difficult to look at the data and construct completely reasonable rules for data exclusion, coding, and data analysis that can lead to statistical significance—thus, the researcher needs only perform one test, but that test is conditional on the data … A researcher when faced with multiple reasonable measures can reason (perhaps correctly) that the one that produces a significant result is more likely to be the least noisy measure, but then decide (incorrectly) to draw inferences based on that one only.

Andrew Gelman & Eric Loken

Mainstream economists dissing people that want to rethink economics

22 October, 2016 at 20:02 | Posted in Economics | 9 Comments

There’s a lot of commenting on the blog now, after yours truly put up a post in which Cambridge economist Pontus Rendahl, in an interview, compared heterodox economics to ‘creationism’ and ‘alternative medicine,’ and totally dissed students who want to see the economics curriculum move in a more pluralist direction.
Sad to say, Rendahl is not the only mainstream economist who has monumental problems when trying to argue with people challenging the ruling orthodoxy.

A couple of years ago Paul Krugman felt a similar urge to defend mainstream neoclassical economics against the critique from students asking for more relevance, realism and pluralism in the teaching of economics. According to Krugman, the students and people like yours truly are wrong to blame mainstream economics for not being relevant and for failing to foresee crises. To Krugman there is nothing wrong with ‘standard theory’ and ‘economics textbooks.’ If only policy makers and economists stuck to ‘standard economic analysis’, everything would be just fine.

I’ll be dipped! If there’s anything the last couple of years have shown us, it is that economists have gone astray. Krugman’s ‘standard theory’ — mainstream neoclassical economics — has contributed to causing today’s economic crisis rather than to solving it. Reading Krugman, I guess a lot of the young economics students who are looking for alternatives to mainstream neoclassical theory today are deeply disappointed. Rightly so. But although Krugman, especially on his blog, certainly tries to present himself as a kind of radical and anti-establishment economics guy, when it really counts he shows what he is: a die-hard, teflon-coated mainstream neoclassical economist.

Perhaps this becomes less perplexing when one considers what Krugman said in a 1996 speech (emphasis added):

I like to think that I am more open-minded about alternative approaches to economics than most, but I am basically a maximization-and-equilibrium kind of guy. Indeed, I am quite fanatical about defending the relevance of standard economic models in many situations …

Personally, I consider myself a proud neoclassicist. By this I clearly don’t mean that I believe in perfect competition all the way. What I mean is that I prefer, when I can, to make sense of the world using models in which individuals maximize and the interaction of these individuals can be summarized by some concept of equilibrium … I have seen the propensity of those who try to do economics without those organizing devices to produce sheer nonsense when they imagine they are freeing themselves from some confining orthodoxy.

So, all you young economics students who want to see real change in economics and the way it is taught: now you know where people like Rendahl and Krugman stand. If you really want something other than the same old mainstream neoclassical catechism, if you really don’t want to be force-fed mainstream neoclassical theories and models, you have to look elsewhere.

Econometrics and the bridge between model and reality

21 October, 2016 at 23:28 | Posted in Statistics & Econometrics | 1 Comment

Trygve Haavelmo, the “father” of modern probabilistic econometrics, wrote that he and other econometricians could not “build a complete bridge between our models and reality” by logical operations alone, but finally had to make “a non-logical jump” [‘Statistical testing of business-cycle theories,’ 1943:15]. A part of that jump consisted in the fact that econometricians “like to believe … that the various a priori possible sequences would somehow cluster around some typical time shapes, which if we knew them, could be used for prediction” [1943:16]. But since we do not know the true distribution, one has to look for the mechanisms (processes) that “might rule the data” and that hopefully persist so that predictions may be made. Of the possible hypotheses on different time sequences (“samples” in Haavelmo’s somewhat idiosyncratic vocabulary), most had to be ruled out a priori “by economic theory”, although “one shall always remain in doubt as to the possibility of some … outside hypothesis being the true one” [1943:18].

The explanations we can give of economic relations and structures based on econometric models are, according to Haavelmo, “not hidden truths to be discovered” but rather our own “artificial inventions”. Models are consequently perceived not as true representations of the Data Generating Process, but rather instrumentally conceived “as if”-constructs. Their “intrinsic closure” is realized by searching for parameters showing “a great degree of invariance” or relative autonomy and the “extrinsic closure” by hoping that the “practically decisive” explanatory variables are relatively few, so that one may proceed “as if … natural limitations of the number of relevant factors exist” [‘The probability approach in econometrics,’ 1944:29].

But why the “logically conceivable” really should turn out to be the case is difficult to see. At least if we are not satisfied by sheer hope. In real economies it is unlikely that we find many “autonomous” relations and events. And one could of course also raise the objection that invoking a probabilistic approach to econometrics presupposes, e.g., that we are able to describe the world in terms of risk rather than genuine uncertainty.

And that is exactly what Haavelmo [1944:48] does: “To make this a rational problem of statistical inference we have to start out by an axiom, postulating that every set of observable variables has associated with it one particular ‘true’, but unknown, probability law.”

But to use this “trick of our own” and just assign “a certain probability law to a system of observable variables” cannot, any more than hoping, build a firm bridge between model and reality. Treating phenomena as if they essentially were stochastic processes is not the same as showing that they essentially are stochastic processes. As Hicks so neatly puts it in Causality in Economics [1979:120-21]:

Things become more difficult when we turn to time-series … The econometrist, who works in that field, may claim that he is not treading on very shaky ground. But if one asks him what he is really doing, he will not find it easy, even here, to give a convincing answer … [H]e must be treating the observations known to him as a sample of a larger “population”; but what population? … I am bold enough to conclude, from these considerations that the usefulness of “statistical” or “stochastic” methods in economics is a good deal less than is now conventionally supposed. We have no business to turn to them automatically; we should always ask ourselves, before we apply them, whether they are appropriate to the problem in hand.

And as if this wasn’t enough, one could also seriously wonder what kind of “populations” these statistical and econometric models ultimately are based on. Why should we as social scientists – and not as pure mathematicians working with formal-axiomatic systems without the urge to confront our models with real target systems – unquestioningly accept Haavelmo’s “infinite population”, Fisher’s “hypothetical infinite population”, von Mises’s “collective” or Gibbs’s ”ensemble”?

Of course one could treat our observational or experimental data as random samples from real populations. I have no problem with that. But modern (probabilistic) econometrics does not content itself with such populations. Instead it creates imaginary populations of “parallel universes” and assumes that our data are random samples from them.

But this is actually nothing but hand-waving! And it is inadequate for real science. As David Freedman writes in Statistical Models and Causal Inference [2010:105-111]:

With this approach, the investigator does not explicitly define a population that could in principle be studied, with unlimited resources of time and money. The investigator merely assumes that such a population exists in some ill-defined sense. And there is a further assumption, that the data set being analyzed can be treated as if it were based on a random sample from the assumed population. These are convenient fictions … Nevertheless, reliance on imaginary populations is widespread. Indeed regression models are commonly used to analyze convenience samples … The rhetoric of imaginary populations is seductive because it seems to free the investigator from the necessity of understanding how data were generated.

Econometricians should know better than to treat random variables, probabilities and expected values as anything other than constructs that, strictly speaking, pertain only to statistical models. If they want us to take the leap of faith from mathematics into the empirical world when applying the theory, they have to really argue for and justify this leap by showing that their neat mathematical assumptions (which, to make things worse, are often left implicit, e.g. independence and additivity) do not collide with the ugly reality. The set of mathematical assumptions is no validation in itself of the adequacy of the application.

A crucial ingredient to any economic theory that wants to use probabilistic models should be a convincing argument for the view that “there can be no harm in considering economic variables as stochastic variables” [Haavelmo 1943:13]. In most cases no such arguments are given.

Of course you are entitled — like Haavelmo and his modern probabilistic followers — to express a hope “at a metaphysical level” that there are invariant features of reality to uncover and that also show up at the empirical level of observations as some kind of regularities.

But is it a justifiable hope? I have serious doubts. The kind of regularities we may hope to find in society are not to be found in the domain of surface phenomena, but rather at the level of causal mechanisms, powers and capacities. Persistence and generality have to be looked for at an underlying, deeper level. Most econometricians do not want to visit that playground. They are content with setting up theoretical models that give us correlations and possibly ‘mimic’ existing causal properties.

We have to accept that reality has no “correct” representation in an economic or econometric model. There is no such thing as a “true” model that can capture an open, complex and contextual system in a set of equations with parameters stable over space and time, and exhibiting invariant regularities. To just “believe”, “hope” or “assume” that such a model possibly could exist is not enough. It has to be justified in relation to the ontological conditions of social reality.

In contrast to those who want to give up on (fallible, transient and transformable) “truth” as a relation between theory and reality and content themselves with “truth” as a relation between a model and a probability distribution, I think we should seriously scrutinize whether this latter attitude is really feasible. To abandon the quest for truth and replace it with sheer positivism would indeed be a sad fate for econometrics.

On the proper use of mathematics in economics

21 October, 2016 at 10:01 | Posted in Economics | 1 Comment

One must, of course, beware of expecting from this method more than it can give. Out of the crucible of calculation comes not an atom more truth than was put in. The assumptions being hypothetical, the results obviously cannot claim more than a very limited validity. The mathematical expression ought to facilitate the argument, clarify the results, and so guard against possible faults of reasoning — that is all.

It is, by the way, evident that the economic aspects must be the determining ones everywhere: economic truth must never be sacrificed to the desire for mathematical elegance.

Econometrics and the axiom of correct specification

20 October, 2016 at 17:22 | Posted in Statistics & Econometrics | 4 Comments

Most work in econometrics and regression analysis is — still — carried out on the assumption that the researcher has a theoretical model that is ‘true.’ Based on this belief of having a correct specification for an econometric model or a regression, one proceeds as if the only problems remaining to solve have to do with measurement and observation.

When things sound too good to be true, they usually aren’t. And that goes for econometric wet dreams too. The snag is, of course, that there is precious little to support the perfect-specification assumption. Looking around in social science and economics, we don’t find a single regression or econometric model that lives up to the standards set by the ‘true’ theoretical model — and there is little that gives us reason to believe things will be different in the future.

To think that we are able to construct a model in which all relevant variables are included and the functional relationships between them are correctly specified is not only a belief without support, but a belief impossible to support.

The theories we work with when building our econometric regression models are insufficient. No matter what we study, there are always some variables missing, and we don’t know the correct way to functionally specify the relationships between the variables.

Every regression model constructed is misspecified. There is always an endless list of possible variables to include, and endless possible ways to specify the relationships between them. So every applied econometrician comes up with his or her own specification and ‘parameter’ estimates. The econometric Holy Grail of consistent and stable parameter values is nothing but a dream.

In order to draw inferences from data as described by econometric texts, it is necessary to make whimsical assumptions. The professional audience consequently and properly withholds belief until an inference is shown to be adequately insensitive to the choice of assumptions. The haphazard way we individually and collectively study the fragility of inferences leaves most of us unconvinced that any inference is believable. If we are to make effective use of our scarce data resource, it is therefore important that we study fragility in a much more systematic way. If it turns out that almost all inferences from economic data are fragile, I suppose we shall have to revert to our old methods …

Ed Leamer
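
A minimal simulation (invented data, purely illustrative) of the fragility Leamer has in mind: the estimated ‘effect’ of the same variable changes sign depending on which of several defensible control sets happens to be included, so the inference is anything but insensitive to the choice of assumptions.

```python
# Three defensible-looking specifications, three very different 'effects' of x.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 1_000
z = rng.normal(size=n)                         # confounder
w = rng.normal(size=n)                         # irrelevant but plausible control
x = 0.8 * z + rng.normal(size=n)
y = -0.5 * x + 2.0 * z + rng.normal(size=n)    # the true effect of x is -0.5

specs = {"no controls": [], "control for w": [w], "control for z": [z]}
for name, controls in specs.items():
    X = sm.add_constant(np.column_stack([x] + controls) if controls else x)
    b = sm.OLS(y, X).fit().params[1]           # coefficient on x
    print(f"{name:14s} -> estimated effect of x: {b:+.2f}")
```

Only the specification that happens to control for the right confounder recovers something close to the true value, and nothing in the regression output itself tells the researcher which column that is.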

A rigorous application of econometric methods in economics really presupposes that the phenomena of our real-world economies are ruled by stable causal relations between variables. Parameter values estimated in specific spatio-temporal contexts are presupposed to be exportable to totally different contexts. To warrant this assumption one has to convincingly establish that the targeted acting causes are stable and invariant, so that they maintain their parametric status after the bridging. The endemic lack of predictive success of the econometric project indicates that this hope of finding fixed parameters is a hope for which there really is no ground other than hope itself.

That models should correspond to reality is, after all, a useful but not totally straightforward idea – with some history to it. Developing appropriate models is a serious problem in statistics; testing the connection to the phenomena is even more serious …

In our days, serious arguments have been made from data. Beautiful, delicate theorems have been proved, although the connection with data analysis often remains to be established. And an enormous amount of fiction has been produced, masquerading as rigorous science.

The theoretical conditions that have to be fulfilled for regression analysis and econometrics to really work are nowhere even closely met in reality. Making outlandish statistical assumptions does not provide a solid ground for doing relevant social science and economics. Although regression analysis and econometrics have become the most used quantitative methods in the social sciences and economics today, it’s still a fact that the inferences made from them are invalid.

Regression models have some serious weaknesses. Their ease of estimation tends to suppress attention to features of the data that matching techniques force researchers to consider, such as the potential heterogeneity of the causal effect and the alternative distributions of covariates across those exposed to different levels of the cause. Moreover, the traditional exogeneity assumption of regression … often befuddles applied researchers … As a result, regression practitioners can too easily accept their hope that the specification of plausible control variables generates as-if randomized experiment.

Econometrics — and regression analysis — is basically a deductive method. Given the assumptions (such as manipulability, transitivity, separability, additivity, linearity, etc.) it delivers deductive inferences. The problem, of course, is that we will never completely know when the assumptions are right. Conclusions can only be as certain as their premises — and that also applies to econometrics and regression analysis.
