Hunting for causes (wonkish)

30 Apr, 2021 at 11:37 | Posted in Theory of Science & Methodology | Comments Off on Hunting for causes (wonkish)

There are three fundamental differences between statistical and causal assumptions. First, statistical assumptions, even untested, are testable in principle, given sufficiently large sample and sufficiently fine measurements. Causal assumptions, in contrast, cannot be verified even in principle, unless one resorts to experimental control. This difference is especially accentuated in Bayesian analysis. Though the priors that Bayesians commonly assign to statistical parameters are untested quantities, the sensitivity to these priors tends to diminish with increasing sample size. In contrast, sensitivity to priors of causal parameters … remains non-zero regardless of (non-experimental) sample size.

Second, statistical assumptions can be expressed in the familiar language of probability calculus, and thus assume an aura of scholarship and scientific respectability. Causal assumptions, as we have seen before, are deprived of that honor, and thus become immediate suspect of informal, anecdotal or metaphysical thinking. Again, this difference becomes illuminated among Bayesians, who are accustomed to accepting untested, judgmental assumptions, and should therefore invite causal assumptions with open arms—they don’t. A Bayesian is prepared to accept an expert’s judgment, however esoteric and untestable, so long as the judgment is wrapped in the safety blanket of a probability expression. Bayesians turn extremely suspicious when that same judgment is cast in plain English, as in “mud does not cause rain” …

The third resistance to causal (vis-a-vis statistical) assumptions stems from their intimidating clarity. Assumptions about abstract properties of density functions or about conditional independencies among variables are, cognitively speaking, rather opaque, hence they tend to be forgiven, rather than debated. In contrast, assumptions about how variables cause one another are shockingly transparent, and tend therefore to invite counter-arguments and counter-hypotheses.

Judea Pearl

Pearl’s seminal contributions to this research field are well-known and indisputable. But when it comes to the ‘taming’ and ‘resolving’ of these issues, yours truly has to admit that — under the influence of especially David Freedman and Nancy Cartwright — he still has some doubts about the reach, especially in terms of realism and relevance, of Pearl’s ‘do-calculus solutions’ for the social sciences in general and economics in particular (see here, here, here and here). The distinction between the causal — ‘interventionist’ — E[Y|do(X)] and the more traditional statistical — ‘conditional expectationist’ — E[Y|X] is crucial, but Pearl and his associates, although they have fully explained why the former is so important, still have to convince us that it can (in a relevant way) be exported from ‘engineering’ contexts, where it arguably applies easily and universally, to socio-economic contexts where ‘surgery’, ‘hypothetical minimal interventions’, ‘manipulativity’, ‘faithfulness’, ‘stability’, and ‘modularity’ are perhaps not so universally at hand.
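To make the distinction concrete, here is a minimal simulation sketch (my own illustration with invented coefficients, not Pearl's machinery): a confounder Z drives both X and Y, so the slope of E[Y|X] differs from the slope of E[Y|do(X)], which we obtain by 'surgically' setting X independently of Z.

```python
# Minimal sketch: conditioning vs. intervening under confounding.
# All structural coefficients below are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Structural model: Z confounds X and Y; the true causal effect of X on Y is 1.
Z = rng.normal(size=n)
X = 2 * Z + rng.normal(size=n)
Y = 1 * X + 3 * Z + rng.normal(size=n)

# 'Conditional expectationist' answer: the regression slope of Y on X.
slope_conditional = np.cov(X, Y)[0, 1] / np.var(X)

# 'Interventionist' answer: set X by fiat ('surgery'), cutting the Z -> X arrow,
# and observe how Y responds.
X_do = rng.normal(size=n)                      # X no longer depends on Z
Y_do = 1 * X_do + 3 * Z + rng.normal(size=n)
slope_do = np.cov(X_do, Y_do)[0, 1] / np.var(X_do)

print(f"E[Y|X] slope     ~ {slope_conditional:.2f}")  # ~ 2.2 (confounded)
print(f"E[Y|do(X)] slope ~ {slope_do:.2f}")           # ~ 1.0 (the causal effect)
```

In the simulation the 'surgery' is trivial to perform; the point of the reservations above is precisely that such surgery-like interventions are rarely available, or even well-defined, in socio-economic settings.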

What capacity a treatment has to contribute to an effect for an individual depends on the underlying structures – physiological, material, psychological, cultural and economic – that makes some causal pathways possible for that individual and some not, some likely and some unlikely. This is a well recognised problem when it comes to making inferences from model organisms to people. But it is equally a problem in making inferences from one person to another or from one population to another. Yet in these latter cases it is too often downplayed. When the problem is explicitly noted, it is often addressed by treating the underlying structures as moderators in the potential outcomes equation: give a name to a structure-type – men/women, old/young, poor/well off, from a particular ethnic background, member of a particular religious or cultural group, urban/rural, etc. Then introduce a yes-no moderator variable for it. Formally this can be done, and sometimes it works well enough. But giving a name to a structure type does nothing towards telling us what the details of the structure are that matter nor how to identify them. In particular, the usual methods for hunting moderator variables, like subgroup analysis, are of little help in uncovering what the aspects of a structure are that afford the causal pathways of interest. Getting a grip on what structures support similar causal pathways is central to using results from one place as evidence about another, and a casual treatment of them is likely to lead to mistaken inferences. The methodology for how to go about this is under developed, or at best under articulated, in EBM, possibly because it cannot be well done with familiar statistical methods and the ways we use to do it are not manualizable. It may be that medicine has fewer worries here than do social science and social policy, due to the relative stability of biological structures and disease processes. But this is no excuse for undefended presumptions about structural similarity.

Nancy Cartwright
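Cartwright's point about named moderators can be illustrated with a toy simulation (mine, not hers; the 'structural capacity' S and the urban/rural moderator are invented for the example): a yes-no moderator chosen by name tells us nothing about the structure that actually carries the causal pathway, and subgroup analysis on it finds nothing.

```python
# Toy illustration: subgroup analysis on a named moderator vs. the structure
# that actually affords the causal pathway. All quantities are invented.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

S = rng.random(n) < 0.5          # unobserved structural capacity that matters
urban = rng.random(n) < 0.5      # the named, observed yes/no 'moderator'
treated = rng.random(n) < 0.5    # randomised treatment

effect = np.where(S, 2.0, 0.0)   # the causal pathway exists only where S holds
Y = effect * treated + rng.normal(size=n)

def subgroup_ate(mask):
    """Difference in mean outcomes between treated and untreated within a subgroup."""
    return Y[mask & treated].mean() - Y[mask & ~treated].mean()

print(f"ATE urban: {subgroup_ate(urban):.2f}")   # ~ 1.0
print(f"ATE rural: {subgroup_ate(~urban):.2f}")  # ~ 1.0  (the named moderator is silent)
print(f"ATE S=1:   {subgroup_ate(S):.2f}")       # ~ 2.0
print(f"ATE S=0:   {subgroup_ate(~S):.2f}")      # ~ 0.0  (the structure does all the work)
```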

Natural experiments in economics

28 Apr, 2021 at 15:35 | Posted in Economics | 1 Comment

Thad Dunning’s book Natural Experiments in the Social Sciences (CUP 2012) is a very useful guide for economists interested in research methodology in general and natural experiments in particular. Dunning argues that since random or as-if random assignment in natural experiments obviates the need to control for potential confounders, this kind of “simple and transparent” design-based research method is preferable to more traditional multivariate regression analysis, where the controlling only comes in ex post via statistical modelling.

But — there is always a but …

The point of making a randomized experiment is often said to be that it ‘ensures’ that any correlation between a supposed cause and effect indicates a causal relation. This is believed to hold since randomization (allegedly) ensures that a supposed causal variable does not correlate with other variables that may influence the effect.

The problem with this simplistic view of randomization is that the claims made are exaggerated and sometimes even false:

• Even if you manage to make the assignment to treatment and control groups ideally random, the sample selection is — except in extremely rare cases — certainly not random. Even with a properly randomized assignment, if we apply the results to a biased sample there is always the risk that the experimental findings will not carry over. What works ‘there’ does not necessarily work ‘here.’ Randomization hence does not ‘guarantee’ or ‘ensure’ that we make the right causal claim. Although randomization may help us rule out certain possible causal claims, randomization per se does not guarantee anything!

• Even if both sampling and assignment are made in an ideal random way, standard randomized experiments only give you averages. The problem is that although we may get an estimate of the ‘true’ average causal effect, this may ‘mask’ important heterogeneous causal effects. Even if we get the right answer that the average causal effect is 0, some of those ‘treated’ may have causal effects equal to -100 while others have causal effects equal to 100 (see the sketch after this list). Contemplating being treated or not, most people would probably be interested in knowing about this underlying heterogeneity and would not consider the average effect particularly enlightening.

• There is almost always a trade-off between bias and precision. In real-world settings, greater precision may often be worth a little bias. And — most importantly — if the population is markedly heterogeneous, the average treatment effect in the sample may differ substantially from the average treatment effect in the population. If so, the value of any extrapolating inferences made from trial samples to other populations is highly questionable.

• Since most real-world experiments and trials build on a single randomization, knowing what would happen if you kept on randomizing forever does not ‘ensure’ or ‘guarantee’ that you avoid false causal conclusions in the one particular randomized experiment you actually perform. It is indeed difficult to see why thinking about what you know you will never do would make you happy about what you actually do.

• And then there is also the problem that ‘Nature’ may not always supply us with the random experiments we are most interested in. If we are interested in X, why should we study Y only because design dictates that? Method should never be prioritized over substance!
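As a minimal sketch of the heterogeneity point in the list above (all numbers invented): an ideal randomized experiment can return an average treatment effect of roughly zero even though every single individual is strongly affected.

```python
# Minimal sketch: an average treatment effect of ~0 masking large individual effects.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Half the population is helped (+100), half is harmed (-100) by treatment.
individual_effect = np.where(rng.random(n) < 0.5, 100.0, -100.0)
baseline = rng.normal(size=n)

treated = rng.random(n) < 0.5                 # ideal random assignment
Y = baseline + individual_effect * treated

ate = Y[treated].mean() - Y[~treated].mean()
print(f"Estimated average treatment effect ~ {ate:.1f}")  # ~ 0
print(f"Share with |individual effect| = 100: "
      f"{np.mean(np.abs(individual_effect) == 100):.0%}")  # 100%
```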

Randomization is not a panacea. It is not the best method for all questions and circumstances. Proponents of randomization make claims about its ability to deliver causal knowledge that are simply wrong. There are good reasons to be sceptical of the now popular — and ill-informed — view that randomization is the only valid and the best method on the market. It is not.

The tools economists use

25 Apr, 2021 at 09:55 | Posted in Economics | 4 Comments

In their quest for statistical “identification” of a causal effect, economists often have to resort to techniques that answer either a narrower or a somewhat different version of the question that motivated the research.

Results from randomized social experiments carried out in particular regions of, say, India or Kenya may not apply to other regions or countries. A research design exploiting variation across space may not yield the correct answer to a question that is essentially about changes over time …

Economists’ research can rarely substitute for more complete works of synthesis, which consider a multitude of causes, weigh likely effects, and address spatial and temporal variation of causal mechanisms. Work of this kind is more likely to be undertaken by historians and non-quantitatively oriented social scientists …

Economists would not even know where to start without the work of historians, ethnographers, and other social scientists who provide rich narratives of phenomena and hypothesize about possible causes, but do not claim causal certainty.

Economists can be justifiably proud of the power of their statistical and analytical methods. But they need to be more self-conscious about these tools’ limitations. Ultimately, our understanding of the social world is enriched by both styles of research. Economists and other scholars should embrace the diversity of their approaches instead of dismissing or taking umbrage at work done in adjacent disciplines.

Dani Rodrik

As Rodrik notes, ‘ideally controlled experiments’ tell us with certainty what causes what effects — but only given the right ‘closures.’ Making appropriate extrapolations from (ideal, accidental, natural or quasi) experiments to different settings, populations or target systems is not easy. Causes deduced in an experimental setting still have to show that they come with an export-warrant to their target populations.

The almost religious belief with which its propagators — like 2019’s ‘Nobel prize’ winners Duflo, Banerjee and Kremer — portray the method cannot hide the fact that randomized controlled trials (RCTs) cannot be taken for granted to give generalisable results. That something works somewhere is no warrant for believing that it will work for us here, or even that it works generally.

Believing there is only one really good evidence-based method on the market — and that randomisation is the only way to achieve scientific validity — blinds people to searching for and using other methods that in many contexts are better. Insisting on using only one tool often means using the wrong tool.

‘Randomistas’ like Duflo et consortes think that economics should be based on evidence from randomised experiments and field studies. They want to give up on ‘big ideas’ like political economy and institutional reform and instead go for solving more manageable problems the way plumbers do. But that modern-day ‘marginalist’ approach surely can’t be the right way to move economics forward and make it a relevant and realist science. A plumber can fix minor leaks in your system, but if the whole system is rotten, something more than good old-fashioned plumbing is needed. The big social and economic problems we face today are not going to be solved by plumbers performing RCTs.

All Along The Watchtower (personal)

24 Apr, 2021 at 09:48 | Posted in Varia | 1 Comment


In loving memory of my brother Peter, a big Jimi Hendrix fan.

La vita davanti a sé

23 Apr, 2021 at 22:28 | Posted in Varia | Comments Off on La vita davanti a sé


Econometrics — science based on whimsical assumptions

22 Apr, 2021 at 14:36 | Posted in Statistics & Econometrics | 3 Comments

It is often said that the error term in a regression equation represents the effect of the variables that were omitted from the equation. This is unsatisfactory …

There is no easy way out of the difficulty. The conventional interpretation for error terms needs to be reconsidered. At a minimum, something like this would need to be said:

The error term represents the combined effect of the omitted variables, assuming that

(i) the combined effect of the omitted variables is independent of each variable included in the equation,
(ii) the combined effect of the omitted variables is independent across subjects,
(iii) the combined effect of the omitted variables has expectation 0.

This is distinctly harder to swallow.

David Freedman

Yes, indeed, that is harder to swallow.

Those conditions on the error term actually mean that we are able to construct a model in which all relevant variables are included and the functional relationships between them are correctly specified.

But that is actually impossible to fully manage in reality!

The theories we work with when building our econometric regression models are insufficient. No matter what we study, there are always some variables missing, and we do not know the correct functional form of the relationships between the variables (usually we just assume linearity).

Every regression model constructed is misspecified. There is always an endless list of possible variables to include, and endless possible ways to specify the relationships between them. So every applied econometrician comes up with his own specification and ‘parameter’ estimates. No wonder that the econometric Holy Grail of consistent and stable parameter-values is still nothing but a dream.
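A minimal simulation of the omitted-variables problem discussed above (my own toy numbers, not Freedman's): a relevant variable W is left out of the fitted model, its influence is shoved into the 'error term', and because W is correlated with the included regressor X the estimated coefficient is badly biased.

```python
# Minimal sketch: omitted-variable bias. Coefficients are invented.
import numpy as np

rng = np.random.default_rng(3)
n = 50_000

W = rng.normal(size=n)                        # relevant variable, omitted from the model
X = 0.8 * W + rng.normal(size=n)              # included regressor, correlated with W
Y = 1.0 * X + 2.0 * W + rng.normal(size=n)    # true coefficient on X is 1.0

# Fitted (misspecified) model: Y = a + b*X + 'error', where the 'error'
# now contains 2.0*W, which is anything but independent of X.
b_hat = np.cov(X, Y)[0, 1] / np.var(X)
print(f"Estimated coefficient on X ~ {b_hat:.2f}")  # ~ 1.98, far from 1.0
```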

In order to draw inferences from data as described by econometric texts, it is necessary to make whimsical assumptions. The professional audience consequently and properly withholds belief until an inference is shown to be adequately insensitive to the choice of assumptions. The haphazard way we individually and collectively study the fragility of inferences leaves most of us unconvinced that any inference is believable. If we are to make effective use of our scarce data resource, it is therefore important that we study fragility in a much more systematic way. If it turns out that almost all inferences from economic data are fragile, I suppose we shall have to revert to our old methods …

Ed Leamer

A rigorous application of econometric methods in economics really presupposes that the phenomena of our real-world economies are ruled by stable causal relations between variables. Parameter values estimated in specific spatio-temporal contexts are presupposed to be exportable to totally different contexts. To warrant this assumption, however, one has to convincingly establish that the targeted acting causes are stable and invariant, so that they maintain their parametric status after the bridging. The endemic lack of predictive success of the econometric project indicates that this hope of finding fixed parameters is a hope for which there really is no other ground than hope itself.
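Leamer's plea to study fragility more systematically can be illustrated with a rough sketch (invented data; this is in the spirit of, not identical to, his extreme-bounds analysis): re-estimate the coefficient of interest under every combination of candidate control variables and see how far it moves.

```python
# Rough sketch of a fragility check: how much does the coefficient on X move
# across alternative specifications? The data-generating process is invented.
import itertools
import numpy as np

rng = np.random.default_rng(4)
n = 5_000

Z = rng.normal(size=(n, 4))                                   # candidate controls
X = Z @ np.array([0.6, -0.4, 0.0, 0.3]) + rng.normal(size=n)
Y = 0.5 * X + Z @ np.array([1.0, 0.8, -0.5, 0.0]) + rng.normal(size=n)

def coef_on_x(controls):
    """OLS coefficient on X when regressing Y on X plus the chosen controls."""
    design = np.column_stack([np.ones(n), X] + [Z[:, j] for j in controls])
    beta, *_ = np.linalg.lstsq(design, Y, rcond=None)
    return beta[1]

estimates = [coef_on_x(c)
             for k in range(5)
             for c in itertools.combinations(range(4), k)]

print(f"Coefficient on X across {len(estimates)} specifications: "
      f"min ~ {min(estimates):.2f}, max ~ {max(estimates):.2f}")
# A wide range signals a fragile inference in Leamer's sense.
```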

Real-world social systems are not governed by stable causal mechanisms or capacities. As Keynes noticed when he launched his attack against econometrics and inferential statistics as early as the 1920s:

The atomic hypothesis which has worked so splendidly in Physics breaks down in Psychics. We are faced at every turn with the problems of Organic Unity, of Discreteness, of Discontinuity – the whole is not equal to the sum of the parts, comparisons of quantity fails us, small changes produce large effects, the assumptions of a uniform and homogeneous continuum are not satisfied. Thus the results of Mathematical Psychics turn out to be derivative, not fundamental, indexes, not measurements, first approximations at the best; and fallible indexes, dubious approximations at that, with much doubt added as to what, if anything, they are indexes or approximations of.

The kinds of laws and relations that econom(etr)ics has established are laws and relations about entities in models that presuppose causal mechanisms to be atomistic and additive. When causal mechanisms operate in real-world social target systems they only do so in ever-changing and unstable combinations where the whole is more than a mechanical sum of the parts. If economic regularities obtain, they do so (as a rule) only because we engineered them for that purpose. Outside man-made “nomological machines” they are rare, or even non-existent. Unfortunately, that also makes most of the achievements of econometrics – like most of the contemporary endeavours of economic theoretical modelling – rather useless.

Regression models are widely used by social scientists to make causal inferences; such models are now almost a routine way of demonstrating counterfactuals. However, the “demonstrations” generally turn out to depend on a series of untested, even unarticulated, technical assumptions … Developing appropriate models is a serious problem in statistics; testing the connection to the phenomena is even more serious …

In our days, serious arguments have been made from data. Beautiful, delicate theorems have been proved, although the connection with data analysis often remains to be established. And an enormous amount of fiction has been produced, masquerading as rigorous science.

The theoretical conditions that have to be fulfilled for regression analysis and econometrics to really work are nowhere even closely met in reality. Making outlandish statistical assumptions does not provide a solid ground for doing relevant social science and economics. Although regression analysis and econometrics have become the most used quantitative methods in social sciences and economics today, it’s still a fact that most of the inferences made from them are invalid.

Tony Lawson on economics and social ontology

22 Apr, 2021 at 14:27 | Posted in Economics | 3 Comments


Modern economics has become increasingly irrelevant to the understanding of the real world. In his seminal book Economics and Reality (1997), Tony Lawson traced this irrelevance to the failure of economists to match their deductive-axiomatic methods with their subject.

It is — sad to say — as relevant today as it was twenty-five years ago.

It is still a fact that within mainstream economics internal validity is everything and external validity nothing. Why anyone should be interested in those kinds of theories and models is beyond imagination. As long as mainstream economists do not come up with any export licenses for their theories and models to the real world in which we live, they really should not be surprised if people say that this is not science!

In pure mathematics and logic, we do not have to worry about external validity. But economics is not pure mathematics or logic. It’s about society. The real world.

Economics and Reality was a great inspiration to yours truly twenty-five years ago. It still is.

The government’s austerity policy rests on a series of misunderstandings

21 Apr, 2021 at 14:58 | Posted in Economics | 1 Comment

When Magdalena Andersson warns that Sweden could end up in the same position as the crisis-ridden euro countries, she shows a lack of insight into the basic conditions of fiscal policy. As long as Sweden borrows in its own currency without a fixed exchange rate, the Riksbank can avert such crises. There is no invisible line that Sweden can suddenly cross and become like Greece just because we increase our public debt. The difference between our currency regime and that of the euro countries is not quantitative but qualitative. It is not a difference of degree but a difference of kind.

The finance minister also warned that a high public debt could lead to a rerun of the Swedish crisis of the 1990s. But that crisis was not caused by the public debt; it was caused by a housing bubble that burst, followed by drastic interest-rate hikes in an attempt to defend the krona’s fixed exchange rate. As Magdalena Andersson pointed out on P1, the 1990s crisis hit the weakest in society hardest, but not because the public debt was too high. On the contrary, the belt-tightening policy made the debt shrink far too fast, so that it could not act as a shock absorber for the private sector’s collapsing purchasing power …

The fiscal policy framework sets the limits for all policy in Sweden. The question of whether or not we should keep it deserves to be discussed on the basis of facts, not misunderstandings.

Max Jerneck / Dagens Industri

Pourquoi la dette publique n’est pas un problème (Why public debt is not a problem)

21 Apr, 2021 at 08:09 | Posted in Economics | Comments Off on Pourquoi la dette publique n’est pas un problème


Why yours truly and other Swedes love paying our taxes

20 Apr, 2021 at 15:57 | Posted in Politics & Society | 10 Comments


Who’s afraid of MMT?

19 Apr, 2021 at 19:17 | Posted in Economics | 7 Comments

As anyone who has ever been responsible for legislative oversight of central bankers knows, they do not like to have their authority challenged. Most of all, they will defend their mystique – that magical aura that hovers over their words, shrouding a slushy mix of banality and baloney in a mist of power and jargon …

In our day, the voices of Modern Monetary Theory perturb the sleep not only of present central bankers, but even of those retired from the role. They prowl the corridors like Lady Macbeth, shouting “Out damn spot!”

Two fresh cases are Raghuram G. Rajan, a former governor of the Reserve Bank of India, and Mervyn King, a former governor of the BOE. In recently published commentaries, each combines bluster and condescension (in roughly equal measure) in a statement of trite truths with which one can, for the most part, hardly disagree.

But Rajan and King each confront MMT only in the abstract. Neither cites or quotes from a single source, and neither names a single person associated with MMT …

What, then, is MMT? Contrary to the claims of King and Rajan, it is not a policy slogan. Rather, it is a body of theory in Keynes’s monetary tradition, which includes such eminent thinkers as the American economist Hyman Minsky and Wynne Godley of the UK Treasury and the University of Cambridge. MMT describes how “modern” governments and central banks actually work, and how changes in their balance sheets are mirrored by changes in the balance sheets of the public – an application of double-entry bookkeeping to economic thought. Thus, as Kelton writes in the plainest English, the deficit of the government is the surplus of the private sector, and vice versa.

MMT shares Keynes’s view that a proper goal of economic policy in a sovereign and developed country is to achieve full employment, buttressed by a guarantee of jobs to all who may need them. This is a goal that I helped write into law in the US under the Humphrey-Hawkins Full Employment and Balanced Growth Act of 1978, along with balanced growth and reasonable price stability. With occasional successes in practice, this policy objective, known as the “dual mandate,” has been the law of the land in the US ever since.

In short, as an example of good economics made popular, accessible, and democratic, MMT represents what central bankers have always feared – as well they might.

James K. Galbraith

Many countries today have deficits. That’s true. But the problem is not the budget deficit. The real deficits are in the climate, healthcare and infrastructure. How do we tackle those deficits? By spending!
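The double-entry point in the Galbraith quote above can be made concrete with a back-of-the-envelope sketch (invented numbers, and a closed economy without a foreign sector assumed for simplicity): the government's deficit shows up, unit for unit, as the non-government sector's surplus.

```python
# Back-of-the-envelope sectoral balances in a closed economy (toy numbers).
government_spending = 1_200   # flows from government into the private sector
taxes = 1_000                 # flows from the private sector back to government

government_balance = taxes - government_spending           # -200: a deficit
non_government_balance = government_spending - taxes       # +200: a surplus

assert government_balance + non_government_balance == 0    # double-entry: they mirror each other
print(f"Government balance:     {government_balance}")
print(f"Non-government balance: {non_government_balance}")
```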

MMT rejects the traditional Phillips-curve inflation-unemployment trade-off and has a less positive evaluation of traditional policy measures to reach full employment. Instead of a general increase in aggregate demand, it usually prefers more ‘structural’ and directed demand measures with less risk of producing increased inflation. At full employment, deficit spending will often be inflationary, but that is not what should decide the fiscal position of the government. The size of public debt and deficits is not — as Abba Lerner already argued with his ‘functional finance’ theory in the 1940s — a policy objective. Public debt and deficits are what they are when we try to fulfil our basic economic objectives — full employment and price stability.

Governments can spend whatever amount of money they want. That does not mean that MMT says they ought to — that is something our politicians have to decide. No MMTer denies that too much government spending can be inflationary. What is questioned is that government deficits are necessarily inflationary.

What is “forgotten” in mainstream macro modelling is the insight that finance — in all its different shapes — has its own dimension and, if taken seriously, its effect on an analysis must modify the whole theoretical system and not just be added as an unsystematic appendage. Finance is fundamental to our understanding of modern economies, and acting like the baker’s apprentice who, having forgotten to add yeast to the dough, throws it into the oven afterwards simply isn’t enough.

All real economic activities nowadays depend on a functioning financial machinery. But institutional arrangements, states of confidence, fundamental uncertainties, asymmetric expectations, the banking system, financial intermediation, loan granting processes, default risks, liquidity constraints, aggregate debt, cash flow fluctuations, etc., etc. — things that play decisive roles in channeling money/savings/credit — are more or less left in the dark in modern macro theoretical formalizations.

Erdogans Abstandsregel … (Erdogan’s distancing rule …)

19 Apr, 2021 at 18:42 | Posted in Varia | Comments Off on Erdogans Abstandsregel …


A tragedy of statistical theory

19 Apr, 2021 at 16:05 | Posted in Statistics & Econometrics | Comments Off on A tragedy of statistical theory

Methodologists (including myself) can at times exhibit poor judgment about which of their new discoveries, distinctions, and methods are of practical importance, and which are charitably described as ‘academic’ … Weighing the costs and benefits of proposed formalizations is crucial for allocating scarce resources for research and teaching, and can depend heavily on the application …

Much benefit can accrue from thinking a problem through within these models, as long as the formal logic is recognized as an allegory for a largely unknown reality. A tragedy of statistical theory is that it pretends as if mathematical solutions are not only sufficient but ‘‘optimal’’ for dealing with analysis problems when the claimed optimality is itself deduced from dubious assumptions … Likewise, we should recognize that mathematical sophistication seems to imbue no special facility for causal inference in the soft sciences, as witnessed for example by Fisher’s attacks on the smoking-lung cancer link.

Sander Greenland

Is a high public debt — really — the problem?

17 Apr, 2021 at 10:49 | Posted in Economics | Comments Off on Is a high public debt — really — the problem?


When the depression hit the industrial world of the 1930s, economic theory turned out to be of little help in finding a way out of the situation. The English economist John Maynard Keynes saw the need to develop a new theory that broke with the established truth. In The General Theory of Employment, Interest and Money (1936) he presented his alternative.

What is needed now is enlightened action grounded in relevant and realistic economic theory of the kind Keynes stands for.

The imminent danger is that we fail to get consumption and lending going again. Confidence and effective demand must be restored.

One of the fundamental errors in today’s discussion of public debt and budget deficits is the failure to distinguish between one kind of debt and another. Even though, at the macro level, debts and assets necessarily balance each other, it is far from irrelevant who holds the assets and who holds the debts.

For a long time there has been a reluctance to increase public debt, since economic crises are still largely perceived as being caused by too much debt. But this is where the distribution of debt comes in. If the state, in a situation with a risk of debt deflation and liquidity traps, ‘borrows’ money to expand railways, schools and health care, the social costs of doing so are minimal, since the resources would otherwise have lain idle. Once the wheels start turning, both public and private debts can be paid off. And even if this is not fully achieved, the economic situation improves, because borrowers with weak balance sheets are replaced by those with stronger ones.

Instead of ‘safeguarding public finances’, we should make sure to safeguard the future of society. The problem with public debt in a situation of near-negative interest rates is not that it is too large, but that it is too small.

Econometrics — formal modelling that has failed miserably

16 Apr, 2021 at 14:23 | Posted in Statistics & Econometrics | Comments Off on Econometrics — formal modelling that has failed miserably

An ongoing concern is that excessive focus on formal modeling and statistics can lead to neglect of practical issues and to overconfidence in formal results … Analysis interpretation depends on contextual judgments about how reality is to be mapped onto the model, and how the formal analysis results are to be mapped back into reality. But overconfidence in formal outputs is only to be expected when much labor has gone into deductive reasoning. First, there is a need to feel the labor was justified, and one way to do so is to believe the formal deduction produced important conclusions. Second, there seems to be a pervasive human aversion to uncertainty, and one way to reduce feelings of uncertainty is to invest faith in deduction as a sufficient guide to truth. Unfortunately, such faith is as logically unjustified as any religious creed, since a deduction produces certainty about the real world only when its assumptions about the real world are certain …

Unfortunately, assumption uncertainty reduces the status of deductions and statistical computations to exercises in hypothetical reasoning – they provide best-case scenarios of what we could infer from specific data (which are assumed to have only specific, known problems). Even more unfortunate, however, is that this exercise is deceptive to the extent it ignores or misrepresents available information, and makes hidden assumptions that are unsupported by data …

Despite assumption uncertainties, modelers often express only the uncertainties derived within their modeling assumptions, sometimes to disastrous consequences. Econometrics supplies dramatic cautionary examples in which complex modeling has failed miserably in important applications …

Much time should be spent explaining the full details of what statistical models and algorithms actually assume, emphasizing the extremely hypothetical nature of their outputs relative to a complete (and thus nonidentified) causal model for the data-generating mechanisms. Teaching should especially emphasize how formal ‘‘causal inferences’’ are being driven by the assumptions of randomized (‘‘ignorable’’) system inputs and random observational selection that justify the ‘‘causal’’ label.

Sander Greenland

Yes, indeed, econometrics fails miserably over and over again. One reason why it does — besides those discussed by Greenland — is that the error term in the regression models used is thought of as representing the effect of the variables that were omitted from the models. The error term is somehow thought to be a ‘cover-all’ term representing omitted content in the model, necessary to include in order to ‘save’ the assumed deterministic relation between the other random variables in the model. Error terms are usually assumed to be orthogonal (uncorrelated) to the explanatory variables. But since error terms are unobservable, the orthogonality assumption is impossible to test empirically. And without justification of the orthogonality assumption, there is as a rule nothing to ensure identifiability.
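A small sketch of why that orthogonality assumption cannot be checked against data (my own toy numbers): OLS residuals are orthogonal to the included regressors by construction, so they look perfectly 'well behaved' even when the true error is strongly correlated with the regressor and the estimate is badly biased.

```python
# Minimal sketch: estimated residuals cannot reveal a violated orthogonality assumption.
import numpy as np

rng = np.random.default_rng(5)
n = 20_000

X = rng.normal(size=n)
true_error = 1.5 * X + rng.normal(size=n)   # deliberately violates orthogonality
Y = 2.0 * X + true_error                    # true coefficient is 2.0

design = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(design, Y, rcond=None)
residuals = Y - design @ beta

print(f"corr(true error, X)   ~ {np.corrcoef(true_error, X)[0, 1]:.2f}")  # ~ 0.83
print(f"corr(OLS residual, X) ~ {np.corrcoef(residuals, X)[0, 1]:.2f}")   # ~ 0.00 by construction
print(f"estimated coefficient ~ {beta[1]:.2f}")                           # ~ 3.5, not 2.0
```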

Without sound justification of the assumptions made, the formal models used in econometric analysis are of questionable value. Failing to take unmodelled uncertainty (as opposed to stochastic risk) seriously has made most econometricians ridiculously overconfident about the reach of the (causal) inferences they make.

