Some methodological perspectives on causal modeling in economics

6 May, 2021 at 17:55 | Posted in Economics | 3 Comments

Causal modeling attempts to maintain this deductive focus within imperfect research by deriving models for observed associations from more elaborate causal (‘structural’) models with randomized inputs … But in the world of risk assessment … the causal-inference process cannot rely solely on deductions from models or other purely algorithmic approaches. Instead, when randomization is doubtful or simply false (as in typical applications), an honest analysis must consider sources of variation from uncontrolled causes with unknown, nonrandom interdependencies. Causal identification then requires nonstatistical information in addition to information encoded as data or their probability distributions …

This need raises questions of to what extent inference can be codified or automated (which is to say, formalized) in ways that do more good than harm. In this setting, formal models – whether labeled ‘‘causal’’ or ‘‘statistical’’ – serve a crucial but limited role in providing hypothetical scenarios that establish what would be the case if the assumptions made were true and the input data were both trustworthy and the only data available. Those input assumptions include all the model features and prior distributions used in the scenario, and supposedly encode all information being used beyond the raw data file (including information about the embedding context as well as the study design and execution).

Overconfident inferences follow when the hypothetical nature of these inputs is forgotten and the resulting outputs are touted as unconditionally sound scientific inferences instead of the tentative suggestions that they are (however well informed) …

The practical limits of formal models become especially apparent when attempting to integrate diverse information sources. Neither statistics nor medical science begins to capture the uncertainty attendant in this process, and in fact both encourage pernicious overconfidence by failing to make adequate allowance for unmodeled uncertainty sources. Instead of emphasizing the uncertainties attending field research, statistics and other quantitative methodologies tend to focus on mathematics and often fall prey to the satisfying – and false – sense of logical certainty that it brings to population inferences. Meanwhile, medicine focuses on biochemistry and physiology, and the satisfying – and false – sense of mechanistic certainty about results those bring to individual events.

Sander Greenland

Wise words from a renowned epidemiologist.

As long as economists and statisticians cannot identify their statistical theories with real-world phenomena there is no real warrant for taking their statistical inferences seriously.

Just as there is no such thing as a ‘free lunch,’ there is no such thing as a ‘free probability.’ To be able at all to talk about probabilities, you have to specify a model. If there is no chance set-up or model that generates the probabilistic outcomes or events — in statistics one refers to any process you observe or measure as an experiment (rolling a die) and to the results obtained as the outcomes or events of the experiment (the number of points rolled with the die, e.g. 3 or 5) — then, strictly speaking, there is no event at all.

Probability is a relational element. It must always come with a specification of the model from which it is calculated. And to be of any empirical scientific value it has to be shown to coincide with (or at least converge to) real data-generating processes or structures — something seldom or never done!

And this is the basic problem with economic data. If you have a fair roulette wheel, you can arguably specify probabilities and probability density distributions. But how do you conceive of the analogous ‘nomological machines’ for prices, gross domestic product, income distribution, etc.? Only by a leap of faith. And that does not suffice. You have to come up with some really good arguments if you want to persuade people into believing in the existence of socio-economic structures that generate data with characteristics conceivable as stochastic events portrayed by probabilistic density distributions!

The tool of statistical inference becomes available as the result of a self-imposed limitation of the universe of discourse. It is assumed that the available observations have been generated by a probability law or stochastic process about which some incomplete knowledge is available a priori …

It should be kept in mind that the sharpness and power of these remarkable tools of inductive reasoning are bought by willingness to adopt a specification of the universe in a form suitable for mathematical analysis.

Yes indeed — to use statistics and econometrics to make inferences, you have to make lots of (mathematical) tractability assumptions. And especially since econometrics aspires to explain things in terms of causes and effects, it needs loads of assumptions, such as invariance, additivity and linearity.

Limiting model assumptions in economic science always have to be closely examined. If we are to show that the mechanisms or causes we isolate and handle in our models are stable — in the sense that they do not change when we ‘export’ them to our ‘target systems’ — we have to be able to show that they hold not only under ceteris paribus conditions. If they do not, they are of limited value to our explanations and predictions of real economic systems.

Unfortunately, real-world social systems are usually not governed by stable causal mechanisms or capacities. The kinds of ‘laws’ and relations that econometrics has established are laws and relations about entities in models that presuppose causal mechanisms being invariant, atomistic and additive. But when causal mechanisms operate in the real world, they mostly do so in ever-changing and unstable ways. If economic regularities obtain, they do so as a rule only because we engineered them for that purpose. Outside man-made ‘nomological machines’ they are rare, or even non-existent.

So — if we want to explain and understand real-world economies we should perhaps be a little bit more cautious with using universe specifications “suitable for mathematical analysis.”

It should be kept in mind, when we evaluate the application of statistics and econometrics, that the sharpness and power of these remarkable tools of inductive reasoning are bought by willingness to adopt a specification of the universe in a form suitable for mathematical analysis.

As emphasised by Greenland, causality in the social sciences — and in economics — can never be solely a question of statistical inference. Causality entails more than predictability, and really explaining social phenomena in depth requires theory. Analysis of variation — the foundation of all econometrics — can never in itself reveal how these variations are brought about. Only when we are able to tie actions, processes or structures to the statistical relations detected can we say that we are getting at relevant explanations of causation.

Most facts have many different, possible, alternative explanations, but we want to find the best of all contrastive explanations (since all real explanation takes place relative to a set of alternatives). So which is the best explanation? Many scientists, influenced by statistical reasoning, think that the likeliest explanation is the best explanation. But the likelihood of x is not in itself a strong argument for thinking it explains y. I would rather argue that what makes one explanation better than another are things like aiming for and finding powerful, deep, causal features and mechanisms that we have warranted and justified reasons to believe in. Statistical reasoning — especially the variety based on a Bayesian epistemology — generally has no room for these kinds of explanatory considerations. The only thing that matters is the probabilistic relation between evidence and hypothesis.

Some statisticians and data scientists think that algorithmic formalisms somehow give them access to causality. That is, however, simply not true. Assuming ‘convenient’ things like faithfulness or stability is not to give proofs. It is to assume what has to be proven. Deductive-axiomatic methods used in statistics do not produce evidence for causal inferences. The real causality we are searching for is the one existing in the real world around us. If there is no warranted connection between axiomatically derived theorems and the real world, well, then we haven’t really obtained the causation we are looking for.

What kind of evidence do RCTs provide?

6 May, 2021 at 14:01 | Posted in Theory of Science & Methodology | 3 Comments

Perhaps it is supposed that the assumptions for an RCT are generally more often met (or meetable) than those for other methods. What justifies that? Especially given that the easiest assumption to feel secure about for RCTs—that the assignment is done “randomly”—is far from enough to support orthogonality, which is itself only one among the assumptions that need support. I sometimes hear, “Only the RCT can control for unknown unknowns.” But nothing can control for unknowns that we know nothing about. There is no reason to suppose that, for a given conclusion, the causal knowledge that it takes to stop post‐randomization correlations in an RCT is always, or generally, more available or more reliable than the knowledge required for one or another of the other methods to be reliable.

It is also essential to be clear what the conclusion is. As with any study method, RCTs can only draw conclusions about the objects studied—for the RCT, the population enrolled in the trial, which is seldom the one we are interested in. The RCT method can be expanded of course to include among its assumptions that the trial population is a representative sample of the target. Then it follows deductively that the difference in mean outcomes between treatment and control groups is an unbiased estimate of the ATE of the target population. How often are we warranted in assuming that, though, and on what grounds? Without this assumption, an RCT is just a voucher for claims about any except the trial population. What then justifies placing it above methods that are clinchers for claims we are really interested in—about target populations?

Nancy Cartwright
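
Cartwright's point about trial versus target populations can be made concrete with a small simulation. The following is only a hypothetical sketch — the numbers and the enrolment story are invented, not taken from her text: even with flawless random assignment inside the trial, the estimate clinches a claim only about the enrolled population, and carrying it over to a differently composed target population goes badly wrong.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical target population: 80% respond to treatment with effect +1,
# 20% are strong responders with effect +10.
n = 1_000_000
effect = np.where(rng.random(n) < 0.8, 1.0, 10.0)
target_ate = effect.mean()                                  # about 2.8

# Suppose the trial enrols only strong responders (a non-representative sample).
strong = np.where(effect == 10.0)[0]
trial_effect = effect[rng.choice(strong, size=5_000, replace=False)]

treated = rng.integers(0, 2, size=trial_effect.size).astype(bool)   # ideal random assignment
outcome = rng.normal(size=trial_effect.size) + treated * trial_effect

trial_ate = outcome[treated].mean() - outcome[~treated].mean()
print(f"trial estimate ≈ {trial_ate:.1f}, target-population ATE ≈ {target_ate:.1f}")
# roughly: trial estimate ≈ 10.0, target-population ATE ≈ 2.8
```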

Sex and the problem with interventionist definitions of causation

5 May, 2021 at 12:28 | Posted in Theory of Science & Methodology | Leave a comment

We suggest that “causation” is not univocal. There is a counterfactual/interventionist notion of causation—of use when one is designing a public policy to intervene and solve a problem—and an historical, or more exactly, etiological notion—often of use when one is identifying a problem to solve …

Consider sex: Susan did not get the job she applied for because the prejudiced employer took her to be a woman; she presented as a woman because she was raised as a girl; she was raised as a girl because she was biologically female; and so on. The causation is palpable—Susan’s sex caused her not to get the job she applied for. The counterfactual, if Susan were male and had applied for the job, she would have gotten it, suggests a vague, miraculous transformation of Susan into some unspecified male (maybe one with the same qualifications, provided Susan did not attend any all-female schools)— but it makes no literal sense as a practical intervention. Suppose, however, a past intervention to make Susan male, say one of her X chromosomes was to be changed to some Y in utero.

To make the counterfactual come out true, the intervention must be expanded to also bring it about that in the course of life as an adult male she applies for the job. Pretty much all of the world history that would interact with her in the course of her male life would have to be intervened upon to bring it about that she, as a male, applied for the job. That would be a remarkably prescient intervention indeed and certainly not a reasonable one. The counterfactual, if Susan had been made a male in utero, Susan would have gotten the job, is almost certainly not true. Etiological causation does not direct us to practical interventions—for that, we need to focus on other causes that are feasibly and ethically manipulable. But it can provide us with a rationale for wanting to change outcomes: Susan did not get the job because of a biological fact about her that is irrelevant to her qualifications, and we think that is unjust.

Glymour & Glymour

Was the Swedish corona strategy a success?

5 May, 2021 at 09:31 | Posted in Politics & Society | Leave a comment


Any scientific discussion about whether all or some versions of treatment lead to the same causal conclusion rests, again, on expert consensus and judgement. Because experts are fallible, the best we can do is to make these discussions—and our assumptions—as transparent as possible, so that others can directly challenge our arguments.

Miguel Hernán

Which causal inference method is the best one?

4 May, 2021 at 17:32 | Posted in Statistics & Econometrics | Leave a comment


Ô Solitude

4 May, 2021 at 14:45 | Posted in Varia | Leave a comment


Graphical causal models and collider bias

3 May, 2021 at 16:53 | Posted in Statistics & Econometrics | Leave a comment

Why would two independent variables suddenly become dependent when we condition on their common effect? To answer this question, we return again to the definition of conditioning as filtering by the value of the conditioning variable. When we condition on Z, we limit our comparisons to cases in which Z takes the same value. But remember that Z depends, for its value, on X and Y. So, when comparing cases where Z takes some value, any change in value of X must be compensated for by a change in the value of Y — otherwise, the value of Z would change as well.

The reasoning behind this attribute of colliders — that conditioning on a collision node produces a dependence between the node’s parents — can be difficult to grasp at first. In the most basic situation where Z = X + Y, and X and Y are independent variables, we have the following logic: If I tell you that X = 3, you learn nothing about the potential value of Y, because the two numbers are independent. On the other hand, if I start by telling you that Z = 10, then telling you that X = 3 immediately tells you that Y must be 7. Thus, X and Y are dependent, given that Z = 10.

Students usually find this collider attribute rather perplexing. Why? My guess is that most students — wrongly — think there can be no correlation without causation.
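
The Z = X + Y case lends itself to a quick simulation. This is a minimal sketch with invented numbers, just to make the collider effect visible: X and Y are generated independently, Z is their common effect, and the correlation between X and Y is computed both unconditionally and within a narrow slice of Z.

```python
import numpy as np

rng = np.random.default_rng(42)

# X and Y are generated independently; Z = X + Y is their common effect (a collider).
x = rng.normal(size=100_000)
y = rng.normal(size=100_000)
z = x + y

# Unconditionally, X and Y are (close to) uncorrelated ...
print(np.corrcoef(x, y)[0, 1])                 # roughly 0

# ... but conditioning on the collider -- keeping only cases where Z lies in a
# narrow band -- induces a strong negative association between X and Y.
band = np.abs(z - 1.0) < 0.1
print(np.corrcoef(x[band], y[band])[0, 1])     # close to -1
```

Within the band, knowing that X is large forces Y to be correspondingly small — exactly the dependence-without-causation that students find so perplexing.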

Deconstructing postmodernism

3 May, 2021 at 12:37 | Posted in Varia | Leave a comment

xkcd #451: ‘Impostor’

Responsible science

1 May, 2021 at 14:48 | Posted in Economics | 1 Comment


Hunting for causes (wonkish)

30 Apr, 2021 at 11:37 | Posted in Theory of Science & Methodology | Leave a comment

There are three fundamental differences between statistical and causal assumptions. First, statistical assumptions, even untested, are testable in principle, given sufficiently large sample and sufficiently fine measurements. Causal assumptions, in contrast, cannot be verified even in principle, unless one resorts to experimental control. This difference is especially accentuated in Bayesian analysis. Though the priors that Bayesians commonly assign to statistical parameters are untested quantities, the sensitivity to these priors tends to diminish with increasing sample size. In contrast, sensitivity to priors of causal parameters … remains non-zero regardless of (non-experimental) sample size.

Second, statistical assumptions can be expressed in the familiar language of probability calculus, and thus assume an aura of scholarship and scientific respectability. Causal assumptions, as we have seen before, are deprived of that honor, and thus become immediate suspect of informal, anecdotal or metaphysical thinking. Again, this difference becomes illuminated among Bayesians, who are accustomed to accepting untested, judgmental assumptions, and should therefore invite causal assumptions with open arms—they don’t. A Bayesian is prepared to accept an expert’s judgment, however esoteric and untestable, so long as the judgment is wrapped in the safety blanket of a probability expression. Bayesians turn extremely suspicious when that same judgment is cast in plain English, as in “mud does not cause rain” …

The third resistance to causal (vis-a-vis statistical) assumptions stems from their intimidating clarity. Assumptions about abstract properties of density functions or about conditional independencies among variables are, cognitively speaking, rather opaque, hence they tend to be forgiven, rather than debated. In contrast, assumptions about how variables cause one another are shockingly transparent, and tend therefore to invite counter-arguments and counter-hypotheses.

Judea Pearl

Pearl’s seminal contributions to this research field are well-known and indisputable. But on the ‘taming’ and ‘resolve’ of the issues, yours truly has to admit that — under the influence of especially David Freedman and Nancy Cartwright — he still has some doubts about the reach, especially in terms of realism and relevance, of Pearl’s ‘do-calculus solutions’ for the social sciences in general and economics in particular (see here, here, here and here). The distinction between the causal — ‘interventionist’ — E[Y|do(X)] and the more traditional statistical — ‘conditional expectationist’ — E[Y|X] is crucial, but Pearl and his associates, although they have fully explained why the first is so important, still have to convince us that it can (in a relevant way) be exported from ‘engineering’ contexts, where it arguably applies easily and universally, to socio-economic contexts where ‘surgery’, ‘hypothetical minimal interventions’, ‘manipulativity’, ‘faithfulness’, ‘stability’, and ‘modularity’ are perhaps not so universally at hand.
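
The distinction can be illustrated with a toy simulation. What follows is a minimal sketch built on an invented structural model (not Pearl's own example): an unobserved confounder drives both X and Y, so the conditional contrast E[Y|X] differs sharply from the interventional contrast E[Y|do(X)].

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Toy structural model with an unobserved confounder U affecting both X and Y.
u = rng.normal(size=n)
x = (u + rng.normal(size=n) > 0).astype(float)        # X depends on U
y = 2.0 * x + 3.0 * u + rng.normal(size=n)            # true causal effect of X on Y is 2

# 'Conditional expectationist' contrast E[Y|X=1] - E[Y|X=0]: confounded by U.
observational = y[x == 1].mean() - y[x == 0].mean()

# 'Interventionist' contrast E[Y|do(X=1)] - E[Y|do(X=0)]: X is set by fiat while
# U is left untouched -- which is what the do-operator describes.
y_do1 = 2.0 * 1.0 + 3.0 * u + rng.normal(size=n)
y_do0 = 2.0 * 0.0 + 3.0 * u + rng.normal(size=n)
interventional = y_do1.mean() - y_do0.mean()

print(f"E[Y|X=1] - E[Y|X=0]         ≈ {observational:.2f}")   # well above 2 (confounded)
print(f"E[Y|do(X=1)] - E[Y|do(X=0)] ≈ {interventional:.2f}")  # close to 2 (the causal effect)
```

The simulation only restates the formal distinction, of course; whether the ‘surgery’ it presupposes is actually available in socio-economic contexts is precisely the question raised above.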

What capacity a treatment has to contribute to an effect for an individual depends on the underlying structures – physiological, material, psychological, cultural and economic – that make some causal pathways possible for that individual and some not, some likely and some unlikely. This is a well recognised problem when it comes to making inferences from model organisms to people. But it is equally a problem in making inferences from one person to another or from one population to another. Yet in these latter cases it is too often downplayed. When the problem is explicitly noted, it is often addressed by treating the underlying structures as moderators in the potential outcomes equation: give a name to a structure-type – men/women, old/young, poor/well off, from a particular ethnic background, member of a particular religious or cultural group, urban/rural, etc. Then introduce a yes-no moderator variable for it. Formally this can be done, and sometimes it works well enough. But giving a name to a structure type does nothing towards telling us what the details of the structure are that matter nor how to identify them. In particular, the usual methods for hunting moderator variables, like subgroup analysis, are of little help in uncovering what the aspects of a structure are that afford the causal pathways of interest. Getting a grip on what structures support similar causal pathways is central to using results from one place as evidence about another, and a casual treatment of them is likely to lead to mistaken inferences. The methodology for how to go about this is underdeveloped, or at best underarticulated, in EBM, possibly because it cannot be well done with familiar statistical methods and the ways we use to do it are not manualizable. It may be that medicine has fewer worries here than do social science and social policy, due to the relative stability of biological structures and disease processes. But this is no excuse for undefended presumptions about structural similarity.

Nancy Cartwright

Natural experiments in economics

28 Apr, 2021 at 15:35 | Posted in Economics | 1 Comment

Thad Dunning’s book Natural Experiments in the Social Sciences (CUP 2012) is a very useful guide for economists interested in research methodology in general and natural experiments in particular. Dunning argues that since random or as-if random assignment in natural experiments obviates the need for controlling potential confounders, this kind of “simple and transparent” design-based research method is preferable to more traditional multivariate regression analysis, where the controlling only comes in ex post via statistical modelling.

But — there is always a but …

The point of making a randomized experiment is often said to be that it ‘ensures’ that any correlation between a supposed cause and effect indicates a causal relation. This is believed to hold since randomization (allegedly) ensures that a supposed causal variable does not correlate with other variables that may influence the effect.

The problem with that simplistic view on randomization is that the claims made are exaggerated and sometimes even false:

• Even if you manage to make the assignment to treatment and control groups ideally random, the sample selection certainly is — except in extremely rare cases — not random. Even if we make a proper randomized assignment, if we apply the results to a biased sample, there is always the risk that the experimental findings will not apply. What works ‘there’ does not work ‘here.’ Randomization hence does not ‘guarantee’ or ‘ensure’ that we make the right causal claim. Although randomization may help us rule out certain possible causal claims, randomization per se does not guarantee anything!

• Even if both sampling and assignment are made in an ideal random way, performing standard randomized experiments gives you only averages. The problem here is that although we may get an estimate of the ‘true’ average causal effect, this may ‘mask’ important heterogeneous effects of a causal nature. Although we get the right answer that the average causal effect is 0, those who are ‘treated’ may have causal effects equal to -100 and those ‘not treated’ may have causal effects equal to 100 (see the simulation sketch after this list). Contemplating being treated or not, most people would probably be interested in knowing about this underlying heterogeneity and would not consider the average effect particularly enlightening.

• There is almost always a trade-off between bias and precision. In real-world settings, a little bias often does not overtrump greater precision. And — most importantly — in case we have a population with sizeable heterogeneity, the average treatment effect of the sample may differ substantially from the average treatment effect in the population. If so, the value of any extrapolating inferences made from trial samples to other populations is highly questionable.

• Since most real-world experiments and trials build on performing a single randomization, knowing what would happen if you kept on randomizing forever does not help you to ‘ensure’ or ‘guarantee’ that you do not make false causal conclusions in the one particular randomized experiment you actually do perform. It is indeed difficult to see why thinking about what you know you will never do would make you happy about what you actually do.

• And then there is also the problem that ‘Nature’ may not always supply us with the random experiments we are most interested in. If we are interested in X, why should we study Y only because design dictates that? Method should never be prioritized over substance!
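
To make the heterogeneity point in the second bullet concrete, here is a minimal simulation sketch with invented numbers: the estimated average treatment effect comes out close to zero even though every individual is strongly affected, half of them positively and half negatively.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical population: half are helped (+100), half are harmed (-100) by treatment.
individual_effect = rng.choice([100.0, -100.0], size=n)

treated = rng.integers(0, 2, size=n).astype(bool)     # ideal random assignment
outcome = rng.normal(size=n) + treated * individual_effect

# The experiment recovers the average treatment effect ...
ate = outcome[treated].mean() - outcome[~treated].mean()
print(f"estimated ATE ≈ {ate:.1f}")                   # close to 0

# ... which says nothing about the underlying split into +100 and -100 responders.
```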

Randomization is not a panacea. It is not the best method for all questions and circumstances. Proponents of randomization make claims about its ability to deliver causal knowledge that are simply wrong. There are good reasons to be sceptical of the now popular — and ill-informed — view that randomization is the only valid and the best method on the market. It is not.

The tools economists use

25 Apr, 2021 at 09:55 | Posted in Economics | 4 Comments

In their quest for statistical “identification” of a causal effect, economists often have to resort to techniques that answer either a narrower or a somewhat different version of the question that motivated the research.

Results from randomized social experiments carried out in particular regions of, say, India or Kenya may not apply to other regions or countries. A research design exploiting variation across space may not yield the correct answer to a question that is essentially about changes over time …

Economists’ research can rarely substitute for more complete works of synthesis, which consider a multitude of causes, weigh likely effects, and address spatial and temporal variation of causal mechanisms. Work of this kind is more likely to be undertaken by historians and non-quantitatively oriented social scientists …

Economists would not even know where to start without the work of historians, ethnographers, and other social scientists who provide rich narratives of phenomena and hypothesize about possible causes, but do not claim causal certainty.

Economists can be justifiably proud of the power of their statistical and analytical methods. But they need to be more self-conscious about these tools’ limitations. Ultimately, our understanding of the social world is enriched by both styles of research. Economists and other scholars should embrace the diversity of their approaches instead of dismissing or taking umbrage at work done in adjacent disciplines.

Dani Rodrik

As Rodrik notes, ‘ideally controlled experiments’ tell us with certainty what causes what effects — but only given the right ‘closures.’ Making appropriate extrapolations from (ideal, accidental, natural or quasi) experiments to different settings, populations or target systems, is not easy. Causes deduced in an experimental setting still have to show that they come with an export-warrant to their target populations.

The almost religious belief with which its propagators — like 2019’s ‘Nobel prize’ winners Duflo, Banerjee and Kremer — portray it cannot hide the fact that randomized controlled trials, RCTs, cannot be taken for granted to give generalisable results. That something works somewhere is no warranty for us to believe it to work for us here or even that it works generally.

Believing there is only one really good evidence-based method on the market — and that randomisation is the only way to achieve scientific validity — blinds people to searching for and using other methods that in many contexts are better. Insisting on using only one tool often means using the wrong tool.

‘Randomistas’ like Duflo et consortes think that economics should be based on evidence from randomised experiments and field studies. They want to give up on ‘big ideas’ like political economy and institutional reform and instead go for solving more manageable problems the way plumbers do. But that modern-day ‘marginalist’ approach surely cannot be the right way to move economics forward and make it a relevant and realist science. A plumber can fix minor leaks in your system, but if the whole system is rotten, something more than good old-fashioned plumbing is needed. The big social and economic problems we face today are not going to be solved by plumbers performing RCTs.

All Along The Watchtower (personal)

24 Apr, 2021 at 09:48 | Posted in Varia | 1 Comment


In loving memory of my brother Peter, a big Jimi Hendrix fan.

La vita davanti a sé

23 Apr, 2021 at 22:28 | Posted in Varia | Leave a comment


Econometrics — science based on whimsical assumptions

22 Apr, 2021 at 14:36 | Posted in Statistics & Econometrics | 3 Comments

It is often said that the error term in a regression equation represents the effect of the variables that were omitted from the equation. This is unsatisfactory …

There is no easy way out of the difficulty. The conventional interpretation for error terms needs to be reconsidered. At a minimum, something like this would need to be said:

The error term represents the combined effect of the omitted variables, assuming that

(i) the combined effect of the omitted variables is independent of each variable included in the equation,
(ii) the combined effect of the omitted variables is independent across subjects,
(iii) the combined effect of the omitted variables has expectation 0.

This is distinctly harder to swallow.

David Freedman

Yes, indeed, that is harder to swallow.

Those conditions on the error term actually mean that we have to be able to construct a model in which all relevant variables are included and the functional relationships between them are correctly specified.

But that is actually impossible to fully manage in reality!

The theories we work with when building our econometric regression models are insufficient. No matter what we study, there are always some variables missing, and we don’t know the correct way to functionally specify the relationships between the variables (usually just assuming linearity).

Every regression model constructed is misspecified. There is always an endless list of possible variables to include, and endless possible ways to specify the relationships between them. So every applied econometrician comes up with his own specification and ‘parameter’ estimates. No wonder that the econometric Holy Grail of consistent and stable parameter-values is still nothing but a dream.
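
Freedman's condition (i) is precisely what fails in the textbook omitted-variable situation. A minimal simulation sketch (with an invented data-generating process) shows how an omitted variable that is correlated with the included regressor ends up in the error term and biases the estimate:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50_000

# Hypothetical data-generating process: the omitted variable w is correlated
# with the included regressor x, so Freedman's condition (i) fails.
w = rng.normal(size=n)
x = 0.8 * w + rng.normal(size=n)
y = 1.0 * x + 2.0 * w + rng.normal(size=n)       # true coefficient on x is 1

# Regressing y on x alone: the omitted 2*w sits in the error term, which is
# therefore correlated with x, and the OLS estimate drifts away from 1.
beta_hat = np.cov(x, y)[0, 1] / np.var(x, ddof=1)
print(f"OLS estimate of the x-coefficient ≈ {beta_hat:.2f}")  # about 1 + 2*0.8/1.64 ≈ 1.98
```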

In order to draw inferences from data as described by econometric texts, it is necessary to make whimsical assumptions. The professional audience consequently and properly withholds belief until an inference is shown to be adequately insensitive to the choice of assumptions. The haphazard way we individually and collectively study the fragility of inferences leaves most of us unconvinced that any inference is believable. If we are to make effective use of our scarce data resource, it is therefore important that we study fragility in a much more systematic way. If it turns out that almost all inferences from economic data are fragile, I suppose we shall have to revert to our old methods …

Ed Leamer

A rigorous application of econometric methods in economics really presupposes that the phenomena of our real-world economies are ruled by stable causal relations between variables.  Parameter-values estimated in specific spatio-temporal contexts are presupposed to be exportable to totally different contexts. To warrant this assumption one, however, has to convincingly establish that the targeted acting causes are stable and invariant so that they maintain their parametric status after the bridging. The endemic lack of predictive success of the econometric project indicates that this hope of finding fixed parameters is a hope for which there really is no other ground than hope itself.

Real-world social systems are not governed by stable causal mechanisms or capacities. As Keynes noticed when he first launched his attack on econometrics and inferential statistics as early as the 1920s:

The atomic hypothesis which has worked so splendidly in Physics breaks down in Psychics. We are faced at every turn with the problems of Organic Unity, of Discreteness, of Discontinuity – the whole is not equal to the sum of the parts, comparisons of quantity fails us, small changes produce large effects, the assumptions of a uniform and homogeneous continuum are not satisfied. Thus the results of Mathematical Psychics turn out to be derivative, not fundamental, indexes, not measurements, first approximations at the best; and fallible indexes, dubious approximations at that, with much doubt added as to what, if anything, they are indexes or approximations of.

The kinds of laws and relations that econom(etr)ics has established are laws and relations about entities in models that presuppose causal mechanisms being atomistic and additive. When causal mechanisms operate in real-world social target systems, they only do so in ever-changing and unstable combinations where the whole is more than a mechanical sum of parts. If economic regularities obtain, they do so (as a rule) only because we engineered them for that purpose. Outside man-made “nomological machines” they are rare, or even non-existent. Unfortunately, that also makes most of the achievements of econometrics – like most contemporary endeavours in economic theoretical modelling – rather useless.

Regression models are widely used by social scientists to make causal inferences; such models are now almost a routine way of demonstrating counterfactuals. However, the “demonstrations” generally turn out to depend on a series of untested, even unarticulated, technical assumptions … Developing appropriate models is a serious problem in statistics; testing the connection to the phenomena is even more serious …

In our days, serious arguments have been made from data. Beautiful, delicate theorems have been proved, although the connection with data analysis often remains to be established. And an enormous amount of fiction has been produced, masquerading as rigorous science.

The theoretical conditions that have to be fulfilled for regression analysis and econometrics to really work are nowhere even closely met in reality. Making outlandish statistical assumptions does not provide a solid ground for doing relevant social science and economics. Although regression analysis and econometrics have become the most used quantitative methods in social sciences and economics today, it’s still a fact that most of the inferences made from them are invalid.
