Understanding discrete random variables (student stuff)

22 July, 2014 at 11:13 | Posted in Statistics & Econometrics | Leave a comment

 

Ergodicity and parallel universes

13 July, 2014 at 16:37 | Posted in Statistics & Econometrics | 3 Comments

Consider the following experiment … Say you have an unbiased die and you ask your friend Sam to throw the die a thousand times. Now you compute the average of the thousand rolls (by counting how many times each face shows up, multiplying by the face value, summing these products and dividing by 1000). This value should converge to 3.5 by the law of large numbers. From a temporal perspective, this is one universe with Sam rolling the die sequentially. This is called the time average, since the rolls happen one after another in time.

Now let’s say you have the good fortune of having a thousand friends who exist in real life (and not on Facebook) who would be so kind as to help with the next piece of the experiment. You give them each a similar unbiased die and ask each of them to roll the die just once, for a cumulative 1000 rolls. You compute the average of the 1000 rolls and find that this average too converges to 3.5. This is the ensemble average.

This system is said to be ergodic because the time average is equal to the ensemble average. Easy right? Not so fast.

See, the problem is that most economists tend to view the world as being ergodic, when the real world is far from it. Ergodic systems are by definition zero-sum: everything moves around but over the long run usually cancels out. There can thus be no growth in an ergodic system.

Let’s look at the problem by considering how to treat the expected value of wealth in the real world. To do so let us do the following thought experiment …

I give you an unbiased die and ask you to roll it. If the die comes up with a 6, I will give you 10 times your total net worth. If you roll anything other than a 6, you give me all your assets (your house, car, investments etc. – just your assets, not your liabilities). Do you take the bet? If you’ve taken a statistics or finance class, you know that the expected value of the bet is 0.83W (that is, −1W·5/6 + 10W·1/6, where W is your initial wealth). Logic would dictate that you take the bet since the expected value is positive. But most people are hesitant when it comes to this bet in particular. Why is that? …

The problem lies with how we think about the averages. When we computed the ensemble average above, we implicitly assumed that we live in parallel universes – five universes where we go broke and one where we come out on top. We then mixed these universes together to get our ensemble average of 0.83W. And therein lies the problem – we don’t live in a world with parallel universes. We live in a world which is non-ergodic – where wealth can grow and where you can just as easily go broke.

In order to look at the problem the right way, we need to compute the time average of the returns. To do so, you don’t assume that you live in 6 parallel universes but rather in just one, where you roll the die 6 times, one after the other. To compute the time-average return, you take the six growth factors (the order doesn’t matter), multiply them together and take the 6th root of the product. The time average in this case is negative. Thus, the ensemble average in the first case hides the real fact that you could easily lose all your wealth, since the probability of loss is greater than the probability of coming out on top.

The difference between the time average and the ensemble average is generally small for systems where the volatility is small. But the effect becomes more and more pronounced as volatility increases. So the next time you think about computing expected values, think about whether you need the time average or the ensemble average. Also, consider whether the system you are dealing with is ergodic or non-ergodic.

Intellectual Entropy
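To make the quoted example concrete, here is a minimal simulation sketch (Python, with the assumption that a winning roll multiplies wealth by 11 – your original wealth plus ten times your net worth – while a losing roll multiplies it by 0):

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumption: a 6 adds ten times your net worth (wealth is multiplied by 11);
# any other face costs you all your assets (wealth is multiplied by 0).
def growth_factor(roll):
    return 11.0 if roll == 6 else 0.0

# Ensemble average: a thousand "parallel universes", one roll each.
rolls = rng.integers(1, 7, size=1000)
factors = np.array([growth_factor(r) for r in rolls])
print("ensemble-average growth factor:", factors.mean())      # about 11/6 ≈ 1.83

# Time average: one universe, six sequential rolls applied to the same wealth.
seq = [growth_factor(r) for r in rng.integers(1, 7, size=6)]
per_roll_growth = np.prod(seq) ** (1 / len(seq))               # geometric mean
print("time-average growth factor:", per_roll_growth)          # almost surely 0.0
```

The ensemble average of the growth factor comes out positive, while the time average of the very same bet is essentially zero – which is exactly the non-ergodicity the article is pointing at.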

Paul Samuelson once famously claimed that the “ergodic hypothesis” is essential for advancing economics from the realm of history to the realm of science. But is it really tenable to assume – as Samuelson and most other neoclassical economists do – that ergodicity is essential to economics? The answer can only be – as the article above shows and as I have argued here, here, here, here and here – NO WAY!

On ‘randomistas’ and causal inference from randomization (wonkish)

6 July, 2014 at 15:05 | Posted in Economics, Statistics & Econometrics | 3 Comments

Yesterday, Dwayne Woods was kind enough to direct me to an interesting new article by Stephen Ziliak & Edward Teather-Posadas on The Unprincipled Randomization Principle in Economics and Medicine:

Over the past decade randomized field experiments have gained prominence in the toolbox of economics and policy making. Yet enthusiasts for randomization have perhaps not paid enough attention to conceptual and ethical errors caused by complete randomization.

Many but by no means all of the randomized experiments are being conducted by economists on poor people in developing nations. The larger objective of the new development economics is to use randomized controlled trials to learn about behavior in the field and to eradicate poverty …


There are … prudential and other ethical implications of a practice that deliberately withholds already-known-to-be best practice treatments from one or more human subjects. Randomized trials often give nil placebo or no treatment at all to vulnerable individuals, withholding (in the name of science) best treatments from the control group.

Although I don’t want to trivialize the ethical aspects of randomization studies (there’s an interesting discussion on the issue here), I still don’t think that Ziliak & Teather-Posadas get at the heart of the problem with randomization as a research strategy.

Field studies and experiments face the same basic problem as theoretical models — they are built on rather artificial conditions and face a “trade-off” between internal and external validity. The more artificial the conditions, the greater the internal validity, but also the less the external validity. The more we rig experiments/field studies/models to avoid the “confounding factors”, the less the conditions are reminiscent of the real “target system”. You could of course discuss field studies vs. experiments vs. theoretical models in terms of realism — but the nodal issue is not that; it is basically how economists using different isolation strategies in different “nomological machines” attempt to learn about causal relationships. I have strong doubts about the generalizability of all three research strategies, because the probability is high that causal mechanisms differ across contexts and that lack of homogeneity/stability/invariance doesn’t give us warranted export licenses to the “real” societies or economies.

If we see experiments or field studies as theory tests or models that ultimately aspire to say something about the real “target system”, then the problem of external validity is central (and was for a long time also a key reason why behavioural economists had trouble getting their research results published).

Assume that you have examined how the work performance of Chinese workers (A) is affected by B (“treatment”). How can we extrapolate/generalize to new samples outside the original population (e.g. to the US)? How do we know that any replication attempt “succeeds”? How do we know when these replicated experimental results can be said to justify inferences made from samples of the original population? If, for example, P(A|B) is the conditional density function for the original sample, and we are interested in making an extrapolative prediction of E[P(A|B)], how can we know that the new sample’s density function is identical with the original one? Unless we can give some really good argument for this being the case, inferences built on P(A|B) are not really saying anything about the target system’s P′(A|B).

As I see it, this is the heart of the matter. External validity/extrapolation/generalization is founded on the assumption that we can make inferences based on P(A|B) that are exportable to other populations for which P′(A|B) applies. Sure, if one can convincingly show that P and P′ are similar enough, the problems are perhaps surmountable. But arbitrarily introducing functional specification restrictions of the invariance/stability/homogeneity type is, at least for an epistemological realist, far from satisfactory. And often it is – unfortunately – exactly this that I see when I look at neoclassical economists’ models/experiments/field studies.
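To illustrate the export-licence problem with made-up numbers, here is a small Python sketch (a purely hypothetical data-generating process) in which the same randomized “treatment” B is evaluated in two populations whose covariate distributions differ, so that P(A|B) estimated in one says little about P′(A|B) in the other:

```python
import numpy as np

rng = np.random.default_rng(1)

def estimated_effect(skill_mean, n=50_000):
    """Average effect of treatment B on outcome A, estimated by a randomized
    experiment in a population whose (hypothetical) skill covariate is centred
    at skill_mean."""
    skill = rng.normal(skill_mean, 1.0, size=n)
    treated = rng.integers(0, 2, size=n)                 # random assignment
    # Assumed data-generating process: the effect of B interacts with skill.
    effect = 0.5 + 1.5 * skill
    outcome = 2.0 + skill + treated * effect + rng.normal(0.0, 1.0, size=n)
    return outcome[treated == 1].mean() - outcome[treated == 0].mean()

print("estimated effect, original population:", estimated_effect(skill_mean=0.0))
print("estimated effect, target population  :", estimated_effect(skill_mean=1.0))
```

Both estimates are internally valid; neither licenses an inference about the other population unless one can argue that the two distributions really are similar enough.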

By this I do not mean to say that empirical methods per se are so problematic that they can never be used. On the contrary, I am basically – though not without reservations – in favour of the increased use of experiments and field studies within economics. Not least as an alternative to completely barren “bridge-less” axiomatic-deductive theory models. My criticism is more about aspiration levels and what we believe that we can achieve with our mediational epistemological tools and methods in the social sciences.

Many ‘experimentalists’ claim that it is easy to replicate experiments under different conditions and therefore a fortiori easy to test the robustness of experimental results. But is it really that easy? If, in the example given above, we run a test and find that our predictions were not correct – what can we conclude? That B “works” in China but not in the US? Or that B “works” in a backward agrarian society, but not in a post-modern service society? That B “worked” in the field study conducted in 2008 but not in 2014? Population selection is almost never simple. Had the problem of external validity only been about inference from sample to population, this would be no critical problem. But the really interesting inferences are those we try to make from specific labs/experiments/fields to the specific real-world situations/institutions/structures that we are interested in understanding or (causally) explaining. And then the population problem is more difficult to tackle.

Everyone – both “labs” and “experimentalists” – should consider the following lines from David Salsburg’s The Lady Tasting Tea (Henry Holt 2001:146):

In Kolmogorov’s axiomatization of probability theory, we assume there is an abstract space of elementary things called ‘events’ … If a measure on the abstract space of events fulfills certain axioms, then it is a probability. To use probability in real life, we have to identify this space of events and do so with sufficient specificity to allow us to actually calculate probability measurements on that space … Unless we can identify Kolmogorov’s abstract space, the probability statements that emerge from statistical analyses will have many different and sometimes contrary meanings.

Or as mathematical statistician David Freedman had it:

In our days, serious arguments have been made from data. Beautiful, delicate theorems have been proved, although the connection with data analysis often remains to be established. And an enormous amount of fiction has been produced, masquerading as rigorous science …

Indeed, far-reaching claims have been made for the superiority of a quantitative template that depends on modeling – by those who manage to ignore the far-reaching assumptions behind the models. However, the assumptions often turn out to be unsupported by data. If so, the rigor of advanced quantitative methods is a matter of appearance rather than substance …

Fisher’s “constitutional hypothesis” explained the association between smoking and disease on the basis of a gene that caused both. This idea is refuted not by making assumptions but by doing some empirical work.

 Statistical Models and Causal Inference

So, using Randomized Controlled Trials (RCTs) is not at all the “gold standard” they have lately often been portrayed as. But I don’t see the ethical issues as the biggest problem. To me the biggest problem with RCTs is that they usually do not provide evidence that their results are exportable to other target systems. The almost religious belief with which their propagators portray them cannot hide the fact that RCTs cannot be taken for granted to give generalizable results. That something works somewhere is no warranty that it will work for us, or even that it works generally.

Randomization and causal inference (wonkish)

5 July, 2014 at 14:13 | Posted in Statistics & Econometrics | 2 Comments


Evidence-based theories and policies are highly valued nowadays. As in the video above, randomization is supposed to be the best way to control for bias from unknown confounders. The received opinion is that evidence based on randomized experiments is therefore the best.

More and more economists have also lately come to advocate randomization as the principal method for ensuring valid causal inferences.

Renowned econometrician Ed Leamer has responded to these claims, maintaining that randomization is not sufficient, and that the hopes of a better empirical and quantitative macroeconomics are to a large extent illusory. Randomization – just like econometrics – promises more than it can deliver, basically because it requires assumptions that in practice are not possible to maintain:

We economists trudge relentlessly toward Asymptopia, where data are unlimited and estimates are consistent, where the laws of large numbers apply perfectly and where the full intricacies of the economy are completely revealed. But it’s a frustrating journey, since, no matter how far we travel, Asymptopia remains infinitely far away. Worst of all, when we feel pumped up with our progress, a tectonic shift can occur, like the Panic of 2008, making it seem as though our long journey has left us disappointingly close to the State of Complete Ignorance whence we began.

The pointlessness of much of our daily activity makes us receptive when the Priests of our tribe ring the bells and announce a shortened path to Asymptopia … We may listen, but we don’t hear, when the Priests warn that the new direction is only for those with Faith, those with complete belief in the Assumptions of the Path. It often takes years down the Path, but sooner or later, someone articulates the concerns that gnaw away in each of us and asks if the Assumptions are valid … Small seeds of doubt in each of us inevitably turn to despair and we abandon that direction and seek another …

Ignorance is a formidable foe, and to have hope of even modest victories, we economists need to use every resource and every weapon we can muster, including thought experiments (theory), and the analysis of data from nonexperiments, accidental experiments, and designed experiments. We should be celebrating the small genuine victories of the economists who use their tools most effectively, and we should dial back our adoration of those who can carry the biggest and brightest and least-understood weapons. We would benefit from some serious humility, and from burning our “Mission Accomplished” banners. It’s never gonna happen.

Part of the problem is that we data analysts want it all automated. We want an answer at the push of a button on a keyboard … Faced with the choice between thinking long and hard versus pushing the button, the single button is winning by a very large margin.

Let’s not add a “randomization” button to our intellectual keyboards, to be pushed without hard reflection and thought.

Especially when it comes to questions of causality, randomization is nowadays considered some kind of “gold standard”. Everything has to be evidence-based, and the evidence has to come from randomized experiments.

But just like econometrics, randomization is basically a deductive method. Given the assumptions (such as manipulability, transitivity, Reichenbach probability principles, separability, additivity, linearity etc.) these methods deliver deductive inferences. The problem, of course, is that we will never completely know when the assumptions are right. [And although randomization may contribute to controlling for confounding, it does not guarantee it, since genuine randomness presupposes infinite experimentation and we know all real experimentation is finite. And even if randomization may help to establish average causal effects, it says nothing of individual effects unless homogeneity is added to the list of assumptions.] Real target systems are seldom epistemically isomorphic to our axiomatic-deductive models/systems, and even if they were, we would still have to argue for the external validity of the conclusions reached from within these epistemically convenient models/systems. Causal evidence generated by randomization procedures may be valid in “closed” models, but what we are usually interested in is causal evidence about the real target system we happen to live in.
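The bracketed point about average versus individual effects can be illustrated with a hypothetical simulation (Python, invented numbers): randomization recovers the average causal effect, but that average is compatible with wildly different individual effects unless homogeneity is assumed:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Hypothetical heterogeneous effects: half the population is helped (+2),
# half is harmed (-2), so the true average causal effect is exactly zero.
individual_effect = np.where(rng.random(n) < 0.5, 2.0, -2.0)

treated = rng.integers(0, 2, size=n)                     # randomized assignment
outcome = rng.normal(0.0, 1.0, size=n) + treated * individual_effect

ate = outcome[treated == 1].mean() - outcome[treated == 0].mean()
print("estimated average causal effect:", round(ate, 3))        # close to 0
print("share of individuals actually harmed:", (individual_effect < 0).mean())
```

The experiment is perfectly randomized and the average effect is estimated correctly – and yet the average tells us nothing about how the treatment affects any particular individual.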

When does a conclusion established in population X hold for target population Y? Only under very restrictive conditions!

Ideally controlled experiments (still the benchmark even for natural and quasi experiments) tell us with certainty what causes what effects – but only given the right “closures”. Making appropriate extrapolations from (ideal, accidental, natural or quasi) experiments to different settings, populations or target systems, is not easy. “It works there” is no evidence for “it will work here”. Causes deduced in an experimental setting still have to show that they come with an export-warrant to the target population/system. The causal background assumptions made have to be justified, and without licenses to export, the value of “rigorous” and “precise” methods is despairingly small.

Here I think Leamer’s “button” metaphor is appropriate. Many advocates of randomization want to have deductively automated answers to fundamental causal questions. But to apply “thin” methods we have to have “thick” background knowledge of what’s going on in the real world, and not in (ideally controlled) experiments. Conclusions can only be as certain as their premises — and that also goes for methods based on randomization.

Frisch and Haavelmo on econometrics and statistics

4 July, 2014 at 09:51 | Posted in Statistics & Econometrics | 1 Comment


For the sake of balancing the overly rosy picture of econometric achievements given in this otherwise nice video, it may be interesting to see how Trygve Haavelmo – with the completion (in 1958) of the twenty-fifth volume of Econometrica – assessed the role of econometrics in the advancement of economics. Although mainly positive about the “repair work” and “clearing-up work” done, Haavelmo also found some grounds for despair:

We have found certain general principles which would seem to make good sense. Essentially, these principles are based on the reasonable idea that, if an economic model is in fact “correct” or “true,” we can say something a priori about the way in which the data emerging from it must behave. We can say something, a priori, about whether it is theoretically possible to estimate the parameters involved. And we can decide, a priori, what the proper estimation procedure should be … But the concrete results of these efforts have often been a seemingly lower degree of accuracy of the would-be economic laws (i.e., larger residuals), or coefficients that seem a priori less reasonable than those obtained by using cruder or clearly inconsistent methods.

There is the possibility that the more stringent methods we have been striving to develop have actually opened our eyes to recognize a plain fact: viz., that the “laws” of economics are not very accurate in the sense of a close fit, and that we have been living in a dream-world of large but somewhat superficial or spurious correlations.

And as the quote below shows, Frisch also shared some of Haavelmo’s — and Keynes’s — doubts on the applicability of econometrics:

I have personally always been skeptical of the possibility of making macroeconomic predictions about the development that will follow on the basis of given initial conditions … I have believed that the analytical work will give higher yields – now and in the near future – if they become applied in macroeconomic decision models where the line of thought is the following: “If this or that policy is made, and these conditions are met in the period under consideration, probably a tendency to go in this or that direction is created”.

Ragnar Frisch

If you only have time to read one statistics book — this is the one!

21 June, 2014 at 12:19 | Posted in Statistics & Econometrics | Leave a comment

Mathematical statistician David A. Freedman‘s Statistical Models and Causal Inference (Cambridge University Press, 2010) is a marvellous book. It ought to be mandatory reading for every serious social scientist – including economists and econometricians – who doesn’t want to succumb to ad hoc assumptions and unsupported statistical conclusions!

How do we calibrate the uncertainty introduced by data collection? Nowadays, this question has become quite salient, and it is routinely answered using well-known methods of statistical inference, with standard errors, t-tests, and P-values … These conventional answers, however, turn out to depend critically on certain rather restrictive assumptions, for instance, random sampling …

Thus, investigators who use conventional statistical technique turn out to be making, explicitly or implicitly, quite restrictive behavioral assumptions about their data collection process … More typically, perhaps, the data in hand are simply the data most readily available …

The moment that conventional statistical inferences are made from convenience samples, substantive assumptions are made about how the social world operates … When applied to convenience samples, the random sampling assumption is not a mere technicality or a minor revision on the periphery; the assumption becomes an integral part of the theory …

In particular, regression and its elaborations … are now standard tools of the trade. Although rarely discussed, statistical assumptions have major impacts on analytic results obtained by such methods.

Consider the usual textbook exposition of least squares regression. We have n observational units, indexed by i = 1, . . . , n. There is a response variable yi, conceptualized as μi + εi, where μi is the theoretical mean of yi while the disturbances or errors εi represent the impact of random variation (sometimes of omitted variables). The errors are assumed to be drawn independently from a common (Gaussian) distribution with mean 0 and finite variance. Generally, the error distribution is not empirically identifiable outside the model; so it cannot be studied directly—even in principle—without the model. The error distribution is an imaginary population and the errors εi are treated as if they were a random sample from this imaginary population—a research strategy whose frailty was discussed earlier.

Usually, explanatory variables are introduced and μi is hypothesized to be a linear combination of such variables. The assumptions about the μi and εi are seldom justified or even made explicit—although minor correlations in the εi can create major bias in estimated standard errors for coefficients …

Why do μi and εi behave as assumed? To answer this question, investigators would have to consider, much more closely than is commonly done, the connection between social processes and statistical assumptions …

We have tried to demonstrate that statistical inference with convenience samples is a risky business. While there are better and worse ways to proceed with the data at hand, real progress depends on deeper understanding of the data-generation mechanism. In practice, statistical issues and substantive issues overlap. No amount of statistical maneuvering will get very far without some understanding of how the data were produced.

More generally, we are highly suspicious of efforts to develop empirical generalizations from any single dataset. Rather than ask what would happen in principle if the study were repeated, it makes sense to actually repeat the study. Indeed, it is probably impossible to predict the changes attendant on replication without doing replications. Similarly, it may be impossible to predict changes resulting from interventions without actually intervening.
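Freedman’s remark that “minor correlations in the εi can create major bias in estimated standard errors” is easy to verify numerically. Here is a hypothetical sketch (Python, an invented AR(1) error process – not an example from the book) comparing the true sampling variability of an OLS slope with the textbook i.i.d. standard error:

```python
import numpy as np

rng = np.random.default_rng(42)
n, n_sim, rho = 100, 2000, 0.5

x = np.linspace(0.0, 1.0, n)
X = np.column_stack([np.ones(n), x])

slopes, textbook_se = [], []
for _ in range(n_sim):
    # AR(1) errors with correlation rho instead of the assumed i.i.d. draws.
    e = np.empty(n)
    e[0] = rng.normal()
    for t in range(1, n):
        e[t] = rho * e[t - 1] + np.sqrt(1 - rho**2) * rng.normal()
    y = 1.0 + 2.0 * x + e
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    s2 = resid @ resid / (n - 2)
    cov = s2 * np.linalg.inv(X.T @ X)
    slopes.append(beta[1])
    textbook_se.append(np.sqrt(cov[1, 1]))

print("true sampling std of the slope estimate :", np.std(slopes))
print("average textbook (i.i.d.) standard error:", np.mean(textbook_se))
```

Even with this modest autocorrelation, the conventional standard error noticeably understates the actual variability of the estimate – the “rigour” is in the assumptions, not in the data.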

Ramsey’s invalid criticisms of Keynes’s probability theory (wonkish)

19 June, 2014 at 16:17 | Posted in Statistics & Econometrics | 2 Comments

Neoclassical economics nowadays usually assumes that agents who have to make choices under conditions of uncertainty behave according to Bayesian rules, axiomatized by Ramsey (1931) and Savage (1954) – that is, they maximize expected utility with respect to some subjective probability measure that is continually updated according to Bayes theorem. If not, they are supposed to be irrational, and ultimately – via some “Dutch book” or “money pump” argument – susceptible to being ruined by some clever “bookie”.
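For readers unfamiliar with the “Dutch book” argument, a minimal sketch with made-up numbers: an agent whose degrees of belief in two mutually exclusive and exhaustive outcomes sum to more than one can be sold a package of bets, at his own prices, that loses money no matter what happens:

```python
# Hypothetical agent whose degrees of belief in two mutually exclusive and
# exhaustive outcomes ("unemployed", "employed") incoherently sum to 1.2.
p_unemployed, p_employed = 0.6, 0.6

stake = 1.0                                        # each bet pays 1 if its event occurs
price_paid = stake * (p_unemployed + p_employed)   # the agent's own "fair" prices: 1.20
payout = stake                                     # exactly one of the two events occurs

print("guaranteed loss, whatever happens:", round(price_paid - payout, 2))  # 0.2
```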

Bayesianism reduces questions of rationality to questions of internal consistency (coherence) of beliefs, but – even granted this questionable reductionism – do rational agents really have to be Bayesian? As I have been arguing elsewhere (e.g. here, here and here) there is no strong warrant for believing so.

In many of the situations that are relevant to economics one could argue that there is simply not enough adequate and relevant information to ground beliefs of a probabilistic kind, and that in those situations it is not really possible, in any relevant way, to represent an individual’s beliefs in a single probability measure.

Say you have come to learn (based on your own experience and tons of data) that the probability of you becoming unemployed in Sweden is 10%. Having moved to another country (where you have no experience of your own and no data) you have no information on unemployment and a fortiori nothing on which to base any probability estimate. A Bayesian would, however, argue that you would have to assign probabilities to the mutually exclusive alternative outcomes and that these have to add up to 1 if you are rational. That is, in this case – and based on symmetry – a rational individual would have to assign a probability of 50% to becoming unemployed and 50% to becoming employed.

That feels intuitively wrong though, and I guess most people would agree. Bayesianism cannot distinguish between symmetry-based probabilities grounded in information and symmetry-based probabilities grounded in an absence of information. In these kinds of situations most of us would rather say that it is simply irrational to be a Bayesian and better instead to admit that we “simply do not know” or that we feel ambiguous and undecided. Arbitrary and ungrounded probability claims are more irrational than being undecided in the face of genuine uncertainty, so if there is not sufficient information to ground a probability distribution it is better to acknowledge that simpliciter, rather than pretending to possess a certitude that we simply do not possess.
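A toy illustration (hypothetical numbers) of why a single probability number cannot register the difference: one agent’s 50% comes from thousands of observations, the other’s from sheer symmetry in the absence of information, yet the reported probabilities are indistinguishable:

```python
# Two agents report the same point probability of unemployment.
observations = [0, 1] * 5_000          # hypothetical record: 10,000 cases, 50% rate

p_from_data = sum(observations) / len(observations)   # grounded in the data
p_from_symmetry = 1 / 2                                # sheer absence of information

print(p_from_data == p_from_symmetry)                  # True: the numbers coincide
print("evidential weight:", len(observations), "observations versus none")
```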

I think this critique of Bayesianism is in accordance with the views of John Maynard Keynes’s A Treatise on Probability (1921) and General Theory (1937). According to Keynes we live in a world permeated by unmeasurable uncertainty – not quantifiable stochastic risk – which often forces us to make decisions based on anything but rational expectations. Sometimes we “simply do not know.” Keynes would not have accepted the view of Bayesian economists, according to whom expectations “tend to be distributed, for the same information set, about the prediction of the theory.” Keynes, rather, thinks that we base our expectations on the confidence or “weight” we put on different events and alternatives. To Keynes expectations are a question of weighing probabilities by “degrees of belief”, beliefs that have precious little to do with the kind of stochastic probabilistic calculations made by the rational agents modeled by Bayesian economists.

Stressing the importance of Keynes’s view on uncertainty, John Kay writes in the Financial Times:

Keynes believed that the financial and business environment was characterised by “radical uncertainty”. The only reasonable response to the question “what will interest rates be in 20 years’ time?” is “we simply do not know” …

For Keynes, probability was about believability, not frequency. He denied that our thinking could be described by a probability distribution over all possible future events, a statistical distribution that could be teased out by shrewd questioning – or discovered by presenting a menu of trading opportunities. In the 1920s he became engaged in an intellectual battle on this issue, in which the leading protagonists on one side were Keynes and the Chicago economist Frank Knight, opposed by a Cambridge philosopher, Frank Ramsey, and later by Jimmie Savage, another Chicagoan.

Keynes and Knight lost that debate, and Ramsey and Savage won, and the probabilistic approach has maintained academic primacy ever since. A principal reason was Ramsey’s demonstration that anyone who did not follow his precepts – anyone who did not act on the basis of a subjective assessment of probabilities of future events – would be “Dutch booked” … A Dutch book is a set of choices such that a seemingly attractive selection from it is certain to lose money for the person who makes the selection.

I used to tell students who queried the premise of “rational” behaviour in financial markets – where rational means based on Bayesian subjective probabilities – that people had to behave in this way because if they did not, people would devise schemes that made money at their expense. I now believe that observation is correct but does not have the implication I sought. People do not behave in line with this theory, with the result that others in financial markets do devise schemes that make money at their expense.

Although this on the whole gives a succinct and correct picture of Keynes’s view on probability, I think it’s necessary to somewhat qualify in what way and to what extent Keynes “lost” the debate with the Bayesians Frank Ramsey and Jim Savage.

In economics it’s an indubitable fact that few mainstream neoclassical economists work within the Keynesian paradigm. All more or less subscribe to some variant of Bayesianism. And some even say that Keynes acknowledged he was wrong when presented with Ramsey’s theory. This is a view that has unfortunately also been promulgated by Robert Skidelsky in his otherwise masterly biography of Keynes. But I think it’s fundamentally wrong. Let me elaborate on this point (the argumentation is more fully presented in my book John Maynard Keynes (SNS, 2007)).

It’s a debated issue in newer research on Keynes whether he, as some researchers maintain, fundamentally changed his view on probability after the critique Frank Ramsey levelled against his A Treatise on Probability. It has proved exceedingly difficult to present evidence for this being the case.

Ramsey’s critique was mainly that the kind of probability relations Keynes was speaking of in the Treatise actually don’t exist, and that Ramsey’s own procedure (betting) makes it much easier to find out the “degrees of belief” people have. I question this from both a descriptive and a normative point of view.

What Keynes says in his response to Ramsey is only that Ramsey “is right” in that people’s “degrees of belief” basically emanate from human nature rather than from formal logic.

Patrick Maher, former professor of philosophy at the University of Illinois, even suggests that Ramsey’s critique of Keynes’s probability theory in some regards is invalid:

Keynes’s book was sharply criticized by Ramsey. In a passage that continues to be quoted approvingly, Ramsey wrote:

“But let us now return to a more fundamental criticism of Mr. Keynes’ views, which is the obvious one that there really do not seem to be any such things as the probability relations he describes. He supposes that, at any rate in certain cases, they can be perceived; but speaking for myself I feel confident that this is not true. I do not perceive them, and if I am to be persuaded that they exist it must be by argument; moreover, I shrewdly suspect that others do not perceive them either, because they are able to come to so very little agreement as to which of them relates any two given propositions.” (Ramsey 1926, 161)

I agree with Keynes that inductive probabilities exist and we sometimes know their values. The passage I have just quoted from Ramsey suggests the following argument against the existence of inductive probabilities. (Here P is a premise and C is the conclusion.)

P: People are able to come to very little agreement about inductive probabilities.
C: Inductive probabilities do not exist.

P is vague (what counts as “very little agreement”?) but its truth is still questionable. Ramsey himself acknowledged that “about some particular cases there is agreement” (28) … In any case, whether complicated or not, there is more agreement about inductive probabilities than P suggests.

Ramsey continued:

“If … we take the simplest possible pairs of propositions such as “This is red” and “That is blue” or “This is red” and “That is red,” whose logical relations should surely be easiest to see, no one, I think, pretends to be sure what is the probability relation which connects them.” (162)

I agree that nobody would pretend to be sure of a numeric value for these probabilities, but there are inequalities that most people on reflection would agree with. For example, the probability of “This is red” given “That is red” is greater than the probability of “This is red” given “That is blue.” This illustrates the point that inductive probabilities often lack numeric values. It doesn’t show disagreement; it rather shows agreement, since nobody pretends to know numeric values here and practically everyone will agree on the inequalities.

Ramsey continued:

“Or, perhaps, they may claim to see the relation but they will not be able to say anything about it with certainty, to state if it is more or less than 1/3, or so on. They may, of course, say that it is incomparable with any numerical relation, but a relation about which so little can be truly said will be of little scientific use and it will be hard to convince a sceptic of its existence.” (162)

Although the probabilities that Ramsey is discussing lack numeric values, they are not “incomparable with any numerical relation.” Since there are more than three different colors, the a priori probability of “This is red” must be less than 1/3 and so its probability given “This is blue” must likewise be less than 1/3. In any case, the “scientific use” of something is not relevant to whether it exists. And the question is not whether it is “hard to convince a sceptic of its existence” but whether the sceptic has any good argument to support his position …

Ramsey concluded the paragraph I have been quoting as follows:

“Besides this view is really rather paradoxical; for any believer in induction must admit that between “This is red” as conclusion and “This is round” together with a billion propositions of the form “a is round and red” as evidence, there is a finite probability relation; and it is hard to suppose that as we accumulate instances there is suddenly a point, say after 233 instances, at which the probability relation becomes finite and so comparable with some numerical relations.” (162)

Ramsey is here attacking the view that the probability of “This is red” given “This is round” cannot be compared with any number, but Keynes didn’t say that and it isn’t my view either. The probability of “This is red” given only “This is round” is the same as the a priori probability of “This is red” and hence less than 1/3. Given the additional billion propositions that Ramsey mentions, the probability of “This is red” is high (greater than 1/2, for example) but it still lacks a precise numeric value. Thus the probability is always both comparable with some numbers and lacking a precise numeric value; there is no paradox here.

I have been evaluating Ramsey’s apparent argument from P to C. So far I have been arguing that P is false and responding to Ramsey’s objections to unmeasurable probabilities. Now I want to note that the argument is also invalid. Even if P were true, it could be that inductive probabilities exist in the (few) cases that people generally agree about. It could also be that the disagreement is due to some people misapplying the concept of inductive probability in cases where inductive probabilities do exist. Hence it is possible for P to be true and C false …

I conclude that Ramsey gave no good reason to doubt that inductive probabilities exist.

Ramsey’s critique made Keynes more strongly emphasize individuals’ own views as the basis for probability judgements, and put less stress on those beliefs being rational. But Keynes’s theory doesn’t stand or fall with his view of the basis of our “degrees of belief” as logical. The core of his theory – when and how we are able to measure and compare different probabilities – he didn’t change. Unlike Ramsey, he wasn’t at all sure that probabilities are always one-dimensional, measurable, quantifiable or even comparable entities.

Hendry and Mizon on the limited value of DSGE models

18 June, 2014 at 10:47 | Posted in Statistics & Econometrics | 5 Comments

In most aspects of their lives humans must plan forwards. They take decisions today that affect their future in complex interactions with the decisions of others. When taking such decisions, the available information is only ever a subset of the universe of past and present information, as no individual or group of individuals can be aware of all the relevant information. Hence, views or expectations about the future, relevant for their decisions, use a partial information set, formally expressed as a conditional expectation given the available information.

Moreover, all such views are predicated on there being no un-anticipated future changes in the environment pertinent to the decision. This is formally captured in the concept of ‘stationarity’. Without stationarity, good outcomes based on conditional expectations could not be achieved consistently. Fortunately, there are periods of stability when insights into the way that past events unfolded can assist in planning for the future.

The world, however, is far from completely stationary. Unanticipated events occur, and they cannot be dealt with using standard data-transformation techniques such as differencing, or by taking linear combinations, or ratios. In particular, ‘extrinsic unpredictability’ – unpredicted shifts of the distributions of economic variables at unanticipated times – is common. As we shall illustrate, extrinsic unpredictability has dramatic consequences for the standard macroeconomic forecasting models used by governments around the world – models known as ‘dynamic stochastic general equilibrium’ models – or DSGE models …

Many of the theoretical equations in DSGE models take a form in which a variable today, say income (denoted as y_t), depends inter alia on its ‘expected future value’… For example, y_t may be the log-difference between a de-trended level and its steady-state value. Implicitly, such a formulation assumes some form of stationarity is achieved by de-trending.

Unfortunately, in most economies, the underlying distributions can shift unexpectedly. This vitiates any assumption of stationarity. The consequences for DSGEs are profound. As we explain below, the mathematical basis of a DSGE model fails when distributions shift … This would be like a fire station automatically burning down at every outbreak of a fire. Economic agents are affected by, and notice such shifts. They consequently change their plans, and perhaps the way they form their expectations. When they do so, they violate the key assumptions on which DSGEs are built.

David Hendry & Grayham Mizon

A great article, confirming much of Keynes’s critique of econometrics and underlining that for understanding real-world “non-routine” decisions and unforeseeable changes in behaviour, stationary probability distributions are of no avail. In a world full of genuine uncertainty – where real historical time rules the roost – the probabilities that ruled the past are not those that will rule the future.
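A stylized sketch (Python, an invented series – not Hendry and Mizon’s example) of what an unanticipated location shift does to a forecast built on the assumption of stationarity:

```python
import numpy as np

rng = np.random.default_rng(3)

# Pre-break regime: a stationary series fluctuating around a mean of 1.0.
pre_break = 1.0 + rng.normal(0.0, 0.2, size=200)

# A model that assumes stationarity forecasts the historical mean forward.
forecast = pre_break.mean()

# Unanticipated location shift: the distribution's mean moves to -1.0.
post_break = -1.0 + rng.normal(0.0, 0.2, size=50)

print("forecast based on the past :", round(forecast, 2))            # about  1.0
print("average realized outcome   :", round(post_break.mean(), 2))   # about -1.0
print("mean forecast error        :", round((post_break - forecast).mean(), 2))
```

Nothing in the pre-break data warns of the shift, and the forecast errors are systematic, not mean-zero noise – which is the point Hendry and Mizon press against the stationarity assumptions underlying DSGE models.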

When we cannot accept that the observations, along the time-series available to us, are independent … we have, in strict logic, no more than one observation, all of the separate items having to be taken together. For the analysis of that the probability calculus is useless; it does not apply … I am bold enough to conclude, from these considerations that the usefulness of ‘statistical’ or ‘stochastic’ methods in economics is a good deal less than is now conventionally supposed … We should always ask ourselves, before we apply them, whether they are appropriate to the problem in hand. Very often they are not … The probability calculus is no excuse for forgetfulness.

John Hicks

Time is what prevents everything from happening at once. To simply assume that economic processes are stationary is not a sensible way of dealing with the kind of genuine uncertainty that permeates open systems such as economies.

Econometrics is basically a deductive method. Given the assumptions (such as manipulability, transitivity, Reichenbach probability principles, separability, additivity, linearity etc.) it delivers deductive inferences. The problem, of course, is that we will never completely know when the assumptions are right. Real target systems are seldom epistemically isomorphic to axiomatic-deductive models/systems, and even if they were, we would still have to argue for the external validity of the conclusions reached from within these epistemically convenient models/systems. Causal evidence generated by statistical/econometric procedures may be valid in “closed” models, but what we are usually interested in is causal evidence about the real target system we happen to live in.

Advocates of econometrics want to have deductively automated answers to fundamental causal questions. But to apply “thin” methods we have to have “thick” background knowledge of what’s going on in the real world, and not in idealized models. Conclusions can only be as certain as their premises – and that also applies to the quest for causality and forecasting predictability in econometrics.

Econometric model specification

30 May, 2014 at 08:11 | Posted in Statistics & Econometrics | Leave a comment

 

How to prove discrimination

29 May, 2014 at 18:40 | Posted in Statistics & Econometrics | Leave a comment

 

