Some methodological perspectives on statistical inference in economics

9 March, 2017 at 12:25 | Posted in Statistics & Econometrics | 2 Comments

Causal modeling attempts to maintain this deductive focus within imperfect research by deriving models for observed associations from more elaborate causal (‘structural’) models with randomized inputs … But in the world of risk assessment … the causal-inference process cannot rely solely on deductions from models or other purely algorithmic approaches. Instead, when randomization is doubtful or simply false (as in typical applications), an honest analysis must consider sources of variation from uncontrolled causes with unknown, nonrandom interdependencies. Causal identification then requires nonstatistical information in addition to information encoded as data or their probability distributions …

This need raises questions of to what extent inference can be codified or automated (which is to say, formalized) in ways that do more good than harm. In this setting, formal models – whether labeled 'causal' or 'statistical' – serve a crucial but limited role in providing hypothetical scenarios that establish what would be the case if the assumptions made were true and the input data were both trustworthy and the only data available. Those input assumptions include all the model features and prior distributions used in the scenario, and supposedly encode all information being used beyond the raw data file (including information about the embedding context as well as the study design and execution).

Overconfident inferences follow when the hypothetical nature of these inputs is forgotten and the resulting outputs are touted as unconditionally sound scientific inferences instead of the tentative suggestions that they are (however well informed) …

The practical limits of formal models become especially apparent when attempting to integrate diverse information sources. Neither statistics nor medical science begins to capture the uncertainty attendant in this process, and in fact both encourage pernicious overconfidence by failing to make adequate allowance for unmodeled uncertainty sources. Instead of emphasizing the uncertainties attending field research, statistics and other quantitative methodologies tend to focus on mathematics and often fall prey to the satisfying – and false – sense of logical certainty that brings to population inferences. Meanwhile, medicine focuses on biochemistry and physiology, and the satisfying – and false – sense of mechanistic certainty about results those bring to individual events.

Sander Greenland

Wise words from a renowned epidemiologist.

As long as economists and statisticians cannot identify their statistical theories with real-world phenomena there is no real warrant for taking their statistical inferences seriously.

Just as there is no such thing as a 'free lunch,' there is no such thing as a 'free probability.' To be able to talk about probabilities at all, you have to specify a model. In statistics, any process you observe or measure is referred to as an experiment (rolling a die), and the results obtained are the outcomes or events of the experiment (the number of points rolled with the die, e.g. 3 or 5). If there is no chance set-up or model that generates the probabilistic outcomes or events, then, strictly speaking, there is no event at all.

Probability is a relational element. It always must come with a specification of the model from which it is calculated. And then, to be of any empirical scientific value, it has to be shown to coincide with (or at least converge to) real data-generating processes or structures, something that is seldom or never done!

And this is the basic problem with economic data. If you have a fair roulette wheel, you can arguably specify probabilities and probability density distributions. But how do you conceive of analogous 'nomological machines' for prices, gross domestic product, income distribution, etc.? Only by a leap of faith. And that does not suffice. You have to come up with some really good arguments if you want to persuade people to believe in the existence of socio-economic structures that generate data with characteristics conceivable as stochastic events portrayed by probabilistic density distributions!
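The contrast can be made concrete with a minimal simulation (illustrative only; the 'economic' series below is a made-up random walk with an arbitrary regime shift, not actual data). For a genuine chance set-up the model is specified up front and relative frequencies converge to the model probabilities; for a nonstationary series there is no stable distribution for sample moments to estimate.

```python
import random

random.seed(42)

# A genuine chance set-up: a fair die. The model (six equiprobable
# outcomes) is specified in advance, and relative frequencies converge
# to the model probabilities as the experiment is repeated.
rolls = [random.randint(1, 6) for _ in range(100_000)]
freq_six = rolls.count(6) / len(rolls)
print(f"relative frequency of a six: {freq_six:.3f} (model says {1/6:.3f})")

# A crude stand-in for an economic series: a random walk with a
# structural break halfway through. No fixed distribution generates
# the observations, so frequencies computed from the data do not
# estimate any stable 'probability'.
x, path = 0.0, []
for t in range(100_000):
    drift = 0.001 if t < 50_000 else -0.002  # arbitrary regime shift
    x += drift + random.gauss(0, 1)
    path.append(x)
mean_first = sum(path[:50_000]) / 50_000
mean_second = sum(path[50_000:]) / 50_000
print(f"sample means, first vs second half: {mean_first:.1f} vs {mean_second:.1f}")
```

The die frequency settles near 1/6 because there is a chance set-up behind it; the random walk's sample means depend on which stretch of the series you happen to observe.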

The tool of statistical inference becomes available as the result of a self-imposed limitation of the universe of discourse. It is assumed that the available observations have been generated by a probability law or stochastic process about which some incomplete knowledge is available a priori …

It should be kept in mind that the sharpness and power of these remarkable tools of inductive reasoning are bought by willingness to adopt a specification of the universe in a form suitable for mathematical analysis.

Tjalling Koopmans

Yes indeed: to make inferences using statistics and econometrics, you have to make lots of (mathematical) tractability assumptions. And especially since econometrics aspires to explain things in terms of causes and effects, it needs loads of assumptions, such as invariance, additivity and linearity.

Limiting model assumptions in economic science always have to be closely examined. If the mechanisms or causes that we isolate and handle in our models are to be stable, in the sense that they do not change when we 'export' them to our 'target systems,' we have to be able to show that they hold not only under ceteris paribus conditions. If not, they are of limited value for explaining and predicting real economic systems.

Unfortunately, real-world social systems are usually not governed by stable causal mechanisms or capacities. The kinds of 'laws' and relations that econometrics has established are laws and relations about entities in models that presuppose causal mechanisms that are invariant, atomistic and additive. But when causal mechanisms operate in the real world, they mostly do so in ever-changing and unstable ways. If economic regularities obtain, they do so as a rule only because we engineered them for that purpose. Outside man-made 'nomological machines' they are rare, or even non-existent.

So — if we want to explain and understand real-world economies we should perhaps be a little bit more cautious with using universe specifications “suitable for mathematical analysis.”

It should be kept in mind, when we evaluate the application of statistics and econometrics, that the sharpness and power of these remarkable tools of inductive reasoning are bought by willingness to adopt a specification of the universe in a form suitable for mathematical analysis.

As emphasised by Greenland, causality in the social sciences, and in economics, can never solely be a question of statistical inference. Causality entails more than predictability, and really explaining social phenomena in depth requires theory. Analysis of variation, the foundation of all econometrics, can never in itself reveal how these variations are brought about. Only when we are able to tie actions, processes or structures to the statistical relations detected can we say that we are getting at relevant explanations of causation.

Most facts have many different, possible, alternative explanations, but we want to find the best of all contrastive explanations (since all real explanation takes place relative to a set of alternatives). So which is the best explanation? Many scientists, influenced by statistical reasoning, think that the likeliest explanation is the best explanation. But the likelihood of x is not in itself a strong argument for thinking it explains y. I would rather argue that what makes one explanation better than another is aiming for and finding powerful, deep causal features and mechanisms that we have warranted and justified reasons to believe in. Statistical reasoning, especially the variety based on a Bayesian epistemology, generally has no room for these kinds of explanatory considerations. The only thing that matters is the probabilistic relation between evidence and hypothesis.

Some statisticians and data scientists think that algorithmic formalisms somehow give them access to causality. That is, however, simply not true. Assuming 'convenient' things like faithfulness or stability is not giving proofs; it is assuming what has to be proven. The deductive-axiomatic methods used in statistics do not produce evidence for causal inferences. The real causality we are searching for is the one existing in the real world around us. If there is no warranted connection between axiomatically derived theorems and the real world, then we haven't really obtained the causation we are looking for.
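A small sketch of why causal-discovery algorithms need assumptions beyond the data (a simplified two-variable illustration, not any particular algorithm): data generated from the causal model X → Y fits the reverse regression exactly as well, so nothing in the joint distribution alone singles out the causal direction.

```python
import random

random.seed(0)

# Generate data from the causal model X -> Y (linear Gaussian).
n = 50_000
xs = [random.gauss(0, 1) for _ in range(n)]
ys = [0.8 * x + random.gauss(0, 0.6) for x in xs]

def r_squared(a, b):
    """Squared correlation, i.e. the R^2 of the simple regression of b on a."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    va = sum((u - ma) ** 2 for u in a)
    vb = sum((v - mb) ** 2 for v in b)
    return cov * cov / (va * vb)

# The observational fit is identical in both directions: the data alone
# cannot say whether X causes Y or Y causes X. Any directed conclusion
# rests on assumptions added on top of the data.
print(f"R^2 of Y on X: {r_squared(xs, ys):.4f}")
print(f"R^2 of X on Y: {r_squared(ys, xs):.4f}")
```

Both regressions report the same R², so picking a direction requires exactly the kind of extra, nonstatistical input the post is talking about.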




  1. Pure math and logic systems are built up from abstract symbols. When applying math and logic to scientific fields, we need to associate abstract symbols with field terms that carry correct data semantics.
    The fatal issues, as far as I can see, with applying statistics and econometrics to economic data are:

    (1) Treating economic time-series data as "random" variables.
    Macroeconomic variables are structurally defined in terms of components of production, income and financial accounts. If we do not respect economic data semantics, then regression equations or statistical inference networks built on the wrong underlying data semantics become meaningless for the actual economy. We cannot identify the applicable time periods or economic conditions from "timeless" equations. Thus, we cannot know how or when to use them to make assertions about the actual economy.

    (2) Hardwiring "causal" factors into statistical and other mathematical equations. Accounting identities relate the relevant "effect" factors within equations. Semantically, accounting identities are valid temporal-logic assertions about the actual economy, with a ∀ (for all) quantifier over time, and they can be proved mathematically. We get precise effective structures from accounting identities, but only vague "causal" factors from semantically examining the complete and relevant "effect" factors in those identities. In science this is called an "effective theory," and in Aristotle's philosophy a "formal cause."

    For example, we can use the accounting identity P = (1 - UR) * LFR * LPR to study the relationship between inflation (year-over-year change in P) and the unemployment rate (UR), and to understand why the Phillips curve and NAIRU were valid around 1960–1970.
    Chart is here.

    We will never get a clear picture from the regression equation P = a UR + b, since we miss the hidden factors (LFR, LPR). Particularly LPR (yellow line in the chart) has more weight than UR in inflation. This two-dimensional (P, UR) equation is just a poor and misleading approximation of the four-dimensional accounting identity.
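The omitted-variable problem the comment describes can be sketched with made-up numbers (x1 plays the role of UR, and x2 the role of the omitted LFR/LPR factors; none of these are actual economic statistics). When an omitted factor is correlated with the included one, the short regression silently attributes the omitted factor's effect to the included variable:

```python
import random

random.seed(1)

# Hypothetical data: the 'true' relation ties y to two factors, x1 and
# x2, but x2 is omitted from the regression, just as LFR and LPR are
# omitted from a simple P-on-UR regression.
n = 10_000
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [0.9 * a + random.gauss(0, 0.5) for a in x1]  # correlated with x1
y = [2.0 * a + 3.0 * b + random.gauss(0, 0.1) for a, b in zip(x1, x2)]

def ols_slope(x, yv):
    """Slope of the simple OLS regression of yv on x."""
    mx, my = sum(x) / len(x), sum(yv) / len(yv)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, yv))
    var = sum((a - mx) ** 2 for a in x)
    return cov / var

# The short regression absorbs x2's effect into x1's coefficient:
# roughly 2.0 + 3.0 * 0.9 = 4.7 instead of the true direct effect 2.0.
print(f"slope of y on x1 alone: {ols_slope(x1, y):.2f} (true direct effect: 2.0)")
```

The estimated coefficient is badly inflated, which is precisely why a two-variable regression can be a misleading projection of a higher-dimensional identity.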

