What is a good model?

14 September 2015, 20:42 | Published in Theory of Science & Methodology | 1 comment

Whereas increasing the difference between a model and its target system may have the advantage that the model becomes easier to study, studying a model is ultimately aimed at learning something about the target system. Therefore, additional approximations come with the cost of making the correspondence between model and target system less straightforward. Ultimately, this makes the interpretation of results on the model in terms of the target system more problematic. We should keep in mind the advice of Whitehead: “Seek simplicity and distrust it.”

A ‘good model’ is to be understood as a model that achieves an equilibrium between being useful and not being too wrong. The usefulness of a model is clearly context-dependent; it may involve a combination of desired features such as being understandable (for students, researchers, or others), achieving computational tractability, and other criteria. ‘Not being too wrong’ is to be understood as ‘not being too different from reality’.

Sylvia Wenmackers & Danny Vanpoucke

An interesting article underlining the fact that all empirical sciences use simplifying or unrealistic assumptions in their modeling activities, and that that is not the issue – as long as the assumptions made are not unrealistic in the wrong way or for the wrong reasons.

Theories are difficult to directly confront with reality. Economists therefore build models of their theories. Those models are representations that are directly examined and manipulated to indirectly say something about the target systems.

But models do not only face theory. They also have to look to the world. Being able to model a ”credible world,” a world that somehow could be considered real or similar to the real world, is not the same as investigating the real world. Even though all theories are false, since they simplify, they may still possibly serve our pursuit of truth. But then they cannot be unrealistic or false in any way. The falsehood or unrealisticness has to be qualified.

Some of the standard assumptions made in neoclassical economic theory – on rationality, information handling and types of uncertainty – are not possible to make more realistic by ”de-idealization” or ”successive approximations” without altering the theory and its models fundamentally.

If we cannot show that the mechanisms or causes we isolate and handle in our models are stable, in the sense that when we export them from our models to our target systems they do not change from one situation to another, then they only hold under ceteris paribus conditions and a fortiori are of limited value for our understanding, explanation and prediction of our real-world target system.

No matter how many convoluted refinements of concepts are made in the model, if the ”successive approximations” do not result in models similar to reality in the appropriate respects (such as structure, isomorphism, etc.), the surrogate system becomes a substitute system that does not bridge to the world but rather misses its target.

Sir David Hendry on the inadequacies of DSGE models

13 September 2015, 19:31 | Published in Economics | Comments closed

In most aspects of their lives humans must plan forwards. They take decisions today that affect their future in complex interactions with the decisions of others. When taking such decisions, the available information is only ever a subset of the universe of past and present information, as no individual or group of individuals can be aware of all the relevant information. Hence, views or expectations about the future, relevant for their decisions, use a partial information set, formally expressed as a conditional expectation given the available information.

Moreover, all such views are predicated on there being no un-anticipated future changes in the environment pertinent to the decision. This is formally captured in the concept of ‘stationarity’. Without stationarity, good outcomes based on conditional expectations could not be achieved consistently. Fortunately, there are periods of stability when insights into the way that past events unfolded can assist in planning for the future.

The world, however, is far from completely stationary. Unanticipated events occur, and they cannot be dealt with using standard data-transformation techniques such as differencing, or by taking linear combinations, or ratios. In particular, ‘extrinsic unpredictability’ – unpredicted shifts of the distributions of economic variables at unanticipated times – is common. As we shall illustrate, extrinsic unpredictability has dramatic consequences for the standard macroeconomic forecasting models used by governments around the world – models known as ‘dynamic stochastic general equilibrium’ models – or DSGE models …

Many of the theoretical equations in DSGE models take a form in which a variable today, say income (denoted as y_t), depends inter alia on its ‘expected future value’… For example, y_t may be the log-difference between a de-trended level and its steady-state value. Implicitly, such a formulation assumes some form of stationarity is achieved by de-trending.

Unfortunately, in most economies, the underlying distributions can shift unexpectedly. This vitiates any assumption of stationarity. The consequences for DSGEs are profound. As we explain below, the mathematical basis of a DSGE model fails when distributions shift … This would be like a fire station automatically burning down at every outbreak of a fire. Economic agents are affected by, and notice such shifts. They consequently change their plans, and perhaps the way they form their expectations. When they do so, they violate the key assumptions on which DSGEs are built.

David Hendry & Grayham Mizon

A great article, confirming much of Keynes’s critique of econometrics and underlining that to understand real-world ”non-routine” decisions and unforeseeable changes in behaviour, stationary probability distributions are of no avail. In a world full of genuine uncertainty – where real historical time rules the roost – the probabilities that ruled the past are not those that will rule the future.

When we cannot accept that the observations, along the time-series available to us, are independent … we have, in strict logic, no more than one observation, all of the separate items having to be taken together. For the analysis of that the probability calculus is useless; it does not apply … I am bold enough to conclude, from these considerations that the usefulness of ‘statistical’ or ‘stochastic’ methods in economics is a good deal less than is now conventionally supposed … We should always ask ourselves, before we apply them, whether they are appropriate to the problem in hand. Very often they are not … The probability calculus is no excuse for forgetfulness.

John Hicks

Time is what prevents everything from happening at once. To simply assume that economic processes are stationary is not a sensible way of dealing with the kind of genuine uncertainty that permeates open systems such as economies.

Econometrics is basically a deductive method. Given the assumptions (such as manipulability, transitivity, Reichenbach probability principles, separability, additivity, linearity etc) it delivers deductive inferences. The problem, of course, is that we will never completely know when the assumptions are right. Real target systems are seldom epistemically isomorphic to axiomatic-deductive models/systems, and even if they were, we still have to argue for the external validity of the conclusions reached from within these epistemically convenient models/systems. Causal evidence generated by statistical/econometric procedures may be valid in “closed” models, but what we usually are interested in is causal evidence in the real target system we happen to live in.

Advocates of econometrics want to have deductively automated answers to fundamental causal questions. But to apply “thin” methods we have to have “thick” background knowledge of what’s going on in the real world, and not in idealized models. Conclusions can only be as certain as their premises – and that also applies to the quest for causality and forecasting predictability in econometrics.

And the waltz goes on

13 September 2015, 11:25 | Published in Varia | Comments closed

 

(h/t Jeanette Meyer)
Absolutely fabulous!!

I d’pluck a fair rose for my love (private)

12 September 2015, 22:45 | Published in Varia | Comments closed

 

Model selection and the reference class problem (wonkish)

12 September 2015, 16:20 | Published in Theory of Science & Methodology | Comments closed

The reference class problem arises when we want to assign a probability to a single proposition, X, which may be classified in various ways, yet its probability can change depending on how it is classified. (X may correspond to a sentence, or event, or an individual’s instantiating a given property, or the outcome of a random experiment, or a set of possible worlds, or some other bearer of probability.) X may be classified as belonging to set S1, or to set S2, and so on. Qua member of S1, its probability is p1; qua member of S2, its probability is p2, where p1 ≠ p2; and so on. And perhaps qua member of some other set, its probability does not exist at all …

Now, the bad news. Giving primacy to conditional probabilities does not so much rid us the epistemological reference class problem as give us another way of stating it. Which of the many conditional probabilities should guide us, should underpin our inductive reasonings and decisions? Our friend John Smith is still pondering his prospects of living at least eleven more years as he contemplates buying life insurance. It will not help him much to tell him of the many conditional probabilities that apply to him, each relativized to a different reference class: “conditional on your being an Englishman, your probability of living to 60 is x; conditional on your being consumptive, it is y; …”. (By analogy, when John Smith is pondering how far away is London, it will not help him much to tell him of the many distances that there are, each relative to a different reference frame.) If probability is to serve as a guide to life, it should in principle be possible to designate one of these conditional probabilities as the right one. To be sure, we could single out one conditional probability among them, and insist that that is the one that should guide him. But that is tantamount to singling out one reference class of the many to which he belongs, and claiming that we have solved the original reference class problem. Life, unfortunately, is not that easy—and neither is our guide to life.

Alan Hájek

When choosing which models to use in our analyses, we cannot get around the fact that the evaluation of our hypotheses, explanations, and predictions cannot be made without reference to a specific statistical model or framework. What Hájek so eloquently points out is that the probabilistic-statistical inferences we make from our samples decisively depend on what population we choose to refer to. The reference class problem shows that there usually are many such populations to choose from, and that the one we choose decides which probabilities we come up with and a fortiori which predictions we make. Not consciously contemplating the relativity effects this choice of ”nomological-statistical machines” has is probably one of the reasons economists have a false sense of the amount of uncertainty that really afflicts their models.
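Hájek's point can be made concrete with a small sketch. The reference classes and survival frequencies below are entirely hypothetical; the point is only that the single-case 'probability' changes with how the very same individual is classified.

```python
# Hypothetical eleven-year survival frequencies for reference classes to
# which the same John Smith belongs (all numbers are made up).
survival_rates = {
    "Englishmen": 0.85,
    "consumptives": 0.40,
    "non-smoking accountants": 0.92,
}

def survival_probability(reference_class):
    """The single-case 'probability' depends entirely on which reference
    class we condition on; that is exactly the reference class problem."""
    return survival_rates[reference_class]

probs = {rc: survival_probability(rc) for rc in survival_rates}
print(probs)  # one man, three incompatible 'probabilities'
```

Nothing in the probability calculus itself tells us which of the three numbers should guide John Smith's insurance decision; that choice is made outside the formalism.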

Arvo Pärt

12 September 2015, 14:01 | Published in Varia | 1 comment

 

 

The world’s greatest composer of contemporary classical music, Arvo Pärt, was 80 yesterday.

A day without listening to his music would be unimaginable.

The ‘bad luck’ theory of unemployment

11 September 2015, 15:46 | Published in Economics | 1 comment

As is well-known, New Classical Economists have never accepted Keynes’s distinction between voluntary and involuntary unemployment. According to New Classical übereconomist Robert Lucas, an unemployed worker can always instantaneously find some job. No matter how miserable the work options are, ”one can always choose to accept them”:

KLAMER: My taxi driver here is driving a taxi, even though he is an accountant, because he can’t find a job …

LUCAS: I would describe him as a taxi driver [laughing], if what he is doing is driving a taxi.

KLAMER: But a frustrated taxi driver.

LUCAS: Well, we draw these things out of urns, and sometimes we get good draws, sometimes we get bad draws.

Arjo Klamer

In New Classical Economics unemployment is seen as a kind of leisure that workers optimally select.

This is, of course, only what you would expect of New Classical Chicago economists.

But sadly enough this extraterrestrial view of unemployment is actually shared by so-called New Keynesians, whose microfounded dynamic stochastic general equilibrium models cannot even incorporate such a basic fact of reality as involuntary unemployment!

Of course, working with microfounded representative agent models, this should come as no surprise. If one representative agent is employed, all representative agents are. The kind of unemployment that occurs is voluntary, since it is only adjustments of the hours of work that these optimizing agents make to maximize their utility.

In the basic DSGE models used by most ‘New Keynesians’, the labour market is always cleared – responding to a changing interest rate, expected lifetime income, or real wages, the representative agent maximizes her utility function by varying her labour supply, money holdings and consumption over time. Most importantly, if the real wage somehow deviates from its “equilibrium value,” the representative agent adjusts her labour supply, so that when the real wage is higher than its “equilibrium value,” labour supply is increased, and when the real wage is below its “equilibrium value,” labour supply is decreased.

In this model world, unemployment is always an optimal response to changes in labour market conditions. Hence, unemployment is totally voluntary. To be unemployed is something one optimally chooses to be.
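The mechanism can be sketched with a deliberately simple example. The quasi-linear utility function below is my own choice for tractability, not any particular published model, but it reproduces the textbook behaviour: hours worked are always the agent's optimal response to the going real wage.

```python
def labour_supply(w, chi=1.0, phi=2.0):
    """Optimal hours for an agent maximizing
    u(c, h) = c - chi * h**(1 + phi) / (1 + phi)
    subject to the budget constraint c = w * h (w is the real wage).
    The first-order condition w = chi * h**phi gives h = (w / chi)**(1 / phi)."""
    return (w / chi) ** (1.0 / phi)

# Hours always move with the real wage: working less at a lower wage is
# itself the optimal choice, so 'unemployment' in this world is voluntary.
for w in (0.5, 1.0, 2.0):
    print(f"real wage {w:.1f} -> optimal hours {labour_supply(w):.2f}")
```

Whatever the wage, the agent is exactly on her labour supply curve; there is simply no slot in this setup for someone who wants to work at the going wage but cannot.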

It is extremely important to pose the question why mainstream economists choose to work with these kinds of models. It is not a harmless choice based solely on ‘internal’ scientific considerations. It is in fact also, and not to a trivial extent, a conscious choice motivated by ideology.

By employing these models one is actually to a significant degree absolving the structure of market economies from any responsibility in creating unemployment. When the focus is on the choices of individuals, the unemployment ‘problem’ is reduced to an individual ‘problem’, and not something that essentially has to do with the workings of market economies. A conscious methodological choice in this way comes to work as an apologetic device for not addressing or challenging given structures.

Not being able to explain unemployment, these models can’t help us to change the structures and institutions that produce the arguably greatest problem of our society.

Inequality and the poverty of atomistic reductionism

11 September 2015, 11:07 | Published in Economics | 2 comments

The essence of this critique of the market lies in insisting on the structural relations that hold among individuals. The classic conception of the market sees individuals atomistically and therefore maintains that an individual’s holding can be justified by looking only at that individual. This was the original appeal of the libertarian picture: that the validity of an agreement could be established by establishing A’s willingness, B’s willingness, and the fact that they are entitled to trade what they are trading. Justification could be carried out purely locally. But this is not the case … Whether or not A is being coerced into trading with B is a function, not just of the local properties of A and B, but of the overall distribution of holdings and the willingness of other traders to trade with A …

If what we are trying to explain is really a relational property, the process of explaining it individual by individual simply will not work. And most if not all of the interesting properties in social explanation are inherently relational: for example, the properties of being rich or poor, employed or unemployed …

For the liberal the problem of economic distribution is raised by a simple juxtaposition: some are poor while others are rich. These two states of affairs are compared, side by side, and then the utilitarian question of redistribution becomes relevant. We could say that the liberal critique of inequality is that some are poor while others are rich, but, by contrast, the radical critique is that some are poor because others are rich …

In weakly competitive situations individualistic explanations suffice, whereas they are inadequate to explain strongly competitive situations. If A defeats B in golf and the question arises Why did A win and B lose?, the answer is simply the logical sum of the two independent explanations of the score which A received and the score which B received. But if A defeats B in tennis there is no such thing as the independent explanations of why A defeated B on the one hand and why B lost to A on the other. There is only one, unified explanation of the outcome of the match …

If anything is clear it is that society is not weakly, but strongly competitive and the presence of strong competition ensures that there are internal relations among the individual destinies of the participants. Consequently, individualistic explanations will not suffice in such cases.

The limits of statistical inference

10 September 2015, 21:55 | Published in Statistics & Econometrics, Theory of Science & Methodology | 1 comment

Causality in social sciences — and economics — can never solely be a question of statistical inference. Causality entails more than predictability, and to really explain social phenomena in depth requires theory. Analysis of variation — the foundation of all econometrics — can never in itself reveal how these variations are brought about. Only when we are able to tie actions, processes or structures to the statistical relations detected can we say that we are getting at relevant explanations of causation.

Most facts have many different, possible, alternative explanations, but we want to find the best of all contrastive (since all real explanation takes place relative to a set of alternatives) explanations. So which is the best explanation? Many scientists, influenced by statistical reasoning, think that the likeliest explanation is the best explanation. But the likelihood of x is not in itself a strong argument for thinking it explains y. I would rather argue that what makes one explanation better than another are things like aiming for and finding powerful, deep, causal, features and mechanisms that we have warranted and justified reasons to believe in. Statistical reasoning — especially the variety based on a Bayesian epistemology — generally has no room for these kinds of explanatory considerations. The only thing that matters is the probabilistic relation between evidence and hypothesis. That is also one of the main reasons I find abduction — inference to the best explanation — a better description and account of what constitutes actual scientific reasoning and inference.
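A minimal sketch of Bayes' theorem makes the point concrete: the posterior is fixed by priors and likelihoods alone, so a shallow hypothesis rigged to fit the evidence can outscore a deeper causal one. The rival hypotheses and all the numbers below are, of course, made up for illustration.

```python
def posterior(priors, likelihoods):
    """Bayes' theorem: P(H|E) is proportional to P(H) * P(E|H).
    Nothing about causal depth or explanatory power enters the formula;
    only the probabilistic relation between hypothesis and evidence counts."""
    unnorm = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

# Two made-up rival explanations of the same evidence: a deep causal story
# and a shallow ad hoc one tuned to fit the data especially well.
priors = {"deep mechanism": 0.5, "ad hoc story": 0.5}
likelihoods = {"deep mechanism": 0.6, "ad hoc story": 0.9}

post = posterior(priors, likelihoods)
print(post)  # the ad hoc story wins on likelihood alone
```

Whatever explanatory virtues the deep mechanism may have, they can only enter this calculation if they are first squeezed into a prior or a likelihood, which is precisely the complaint.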

For more on these issues — see the chapter ”Capturing causality in economics and the limits of statistical inference” in my On the use and misuse of theories and models in economics.

Critical realism and mathematics in economics

9 September 2015, 11:00 | Published in Economics | 1 comment


Interesting lecture, but I think just listening to what Tony Lawson or yours truly have to say shows how unfounded and ridiculous is the idea, held by many mainstream economists, that because heterodox people often criticize the application of mathematics in mainstream economics, we are critical of math per se.

Indeed.

No, there is nothing wrong with mathematics per se.

No, there is nothing wrong with applying mathematics to economics.

amathMathematics is one valuable tool among other valuable tools for understanding and explaining things in economics.

What is, however, totally wrong, are the utterly simplistic beliefs that

• ”math is the only valid tool”

• ”math is always and everywhere self-evidently applicable”

• ”math is all that really counts”

• ”if it’s not in math, it’s not really economics”

• ”almost everything can be adequately understood and analyzed with math”

When it comes to the issue of mathematics in economics Roger Farmer also has some good advice well worth considering:

A common mistake amongst Ph.D. students is to place too much weight on the ability of mathematics to solve an economic problem. They take a model off the shelf and add a new twist. A model that began as an elegant piece of machinery designed to illustrate a particular economic issue goes through five or six amendments from one paper to the next. By the time it reaches the nth iteration it looks like a dog designed by committee.

Mathematics doesn’t solve economic problems. Economists solve economic problems. My advice: never formalize a problem with mathematics until you have already figured out the probable answer. Then write a model that formalizes your intuition and beat the mathematics into submission. That last part is where the fun begins because the language of mathematics forces you to make your intuition clear. Sometimes it turns out to be right. Sometimes you will realize your initial guess was mistaken. Always, it is a learning process.

And — of course — the always eminently quotable Keynes did also have some thoughts on the use of mathematics in economics …

But I am unfamiliar with the methods involved and it may be that my impression that nothing emerges at the end which has not been introduced expressly or tacitly at the beginning is quite wrong … It seems to me essential in an article of this sort to put in the fullest and most explicit manner at the beginning the assumptions which are made and the methods by which the price indexes are derived; and then to state at the end what substantially novel conclusions have been arrived at …


I cannot persuade myself that this sort of treatment of economic theory has anything significant to contribute. I suspect it of being nothing better than a contraption proceeding from premises which are not stated with precision to conclusions which have no clear application … [This creates] a mass of symbolism which covers up all kinds of unstated special assumptions.

Letter from Keynes to Frisch 28 November 1935

Are all models wrong?

8 September 2015, 13:44 | Published in Economics | 11 comments

If you say “All models are wrong” then the most important issue is to define the words. “All” is quite clear, “are” also is without much doubt. So, we are left with “models” and “wrong” …

The more interesting discussion is the one about the definition of truth. The philosopher Bertrand Russell wrote something on truth about a century ago in his book The Problems of Philosophy (ch. XII):

”It will be seen that minds do not create truth or falsehood. They create beliefs, but when once the beliefs are created, the mind cannot make them true or false, except in the special case where they concern future things which are within the power of the person believing, such as catching trains. What makes a belief true is a fact, and this fact does not (except in exceptional cases) in any way involve the mind of the person who has the belief.”

Truth, Russell says, is correspondence with facts. If minds do not create truth or falsehood, but only beliefs, then I would argue that models are beliefs. They can be true when they correspond to the facts. So, there is hope! Models can be right after all! … A model is an abstraction, but as such it can be right. Of course, there is no proof that a model that has been right today will be right tomorrow, but that only makes economics an art.

Dirk Ehnts

Interesting reading that — once again — shows that being able to model and investigate a credible world, a world that somehow could be considered real or similar to the real world, is not the same as investigating the real world. Even though all models are false, since they simplify, they may still possibly serve our pursuit of truth. But then they cannot be unrealistic or false in any way. The falsehood or unrealisticness has to be qualified (in terms of resemblance, relevance, etc.). At the very least, the minimalist demand on models in terms of credibility has to give way to a stronger epistemic demand of appropriate similarity and plausibility.

The predominant strategy in ‘modern’ economics is to build models and make things happen in these “analogue-economy models” rather than engineering things happening in real economies. And as a rule the modelers consider their work done when they have been able to convince themselves that the model is valid. But — really — to have valid evidence is not enough. What economics needs is sound evidence. Why? Simply because the premises of a valid argument do not have to be true, but a sound argument, on the other hand, is not only valid, but builds on premises that are true. Aiming only for validity, without soundness, is setting the aspiration level of economics too low for developing a realist and relevant science.

Validity is NOT enough

8 September 2015, 09:15 | Published in Economics | 1 comment

Mainstream economics today is in the story-telling business whereby economic theorists create make-believe analogue models of the target system – usually conceived as the real economic system. This modeling activity is considered useful and essential. Since fully-fledged experiments on a societal scale as a rule are prohibitively expensive, ethically indefensible or unmanageable, economic theorists have to substitute experimenting with something else. To understand and explain relations between different entities in the real economy the predominant strategy is to build models and make things happen in these “analogue-economy models” rather than engineering things happening in real economies.


Formalistic deductive “Glasperlenspiel” can be very impressive and seductive. But in the realm of science it ought to be considered of little or no value to simply make claims about the model and lose sight of reality. As Julian Reiss writes:

There is a difference between having evidence for some hypothesis and having evidence for the hypothesis relevant for a given purpose. The difference is important because scientific methods tend to be good at addressing hypotheses of a certain kind and not others: scientific methods come with particular applications built into them … The advantage of mathematical modelling is that its method of deriving a result is that of mathematical proof: the conclusion is guaranteed to hold given the assumptions. However, the evidence generated in this way is valid only in abstract model worlds while we would like to evaluate hypotheses about what happens in economies in the real world … The upshot is that valid evidence does not seem to be enough. What we also need is to evaluate the relevance of the evidence in the context of a given purpose.

Mainstream economics has long since given up on the real world and contents itself with proving things about thought-up worlds. Empirical evidence only plays a minor role in economic theory, where models largely function as a substitute for empirical evidence. Hopefully humbled by the manifest failure of its theoretical pretences, the one-sided, almost religious, insistence on axiomatic-deductivist modeling as the only scientific activity worthy of pursuing in economics will give way to methodological pluralism based on ontological considerations rather than formalistic tractability. To have valid evidence is not enough. What economics needs is sound evidence.

Abductive argumentation

7 September 2015, 21:21 | Published in Theory of Science & Methodology | 2 comments

 

And if you want to know more on science and inference, the one book you should read is Peter Lipton’s Inference to the Best Explanation (2nd ed, Routledge, 2004). A truly great book.

If you’re looking for a more comprehensive bibliography on Inference to the Best Explanation, ”Lord Keynes” has a good one here. [And for those who read Swedish, I self-indulgently recommend this.]


Utility — an almost vacuous concept

7 September 2015, 10:47 | Published in Economics | Comments closed

There is always a danger that, as you climb higher and higher, the principles become more and more general and harder and harder to translate into lower level operational principles …

The economic notion of Utility looks dangerously general in the hands of, for example, Gary Becker. Becker won the Nobel Prize for modeling great swathes of what we do in day-to-day life under the principles of market equilibrium and rational choice theory, from drug addiction to racial discrimination to crime and family relations. Becker supposes that the agents he models act so as to maximize their expected utility. At that level of generality, say Principle U, we people are really much the same at base, governed by the same motivations and the same principle of human nature. The difficulty, or the trick, is to determine just what, in the case under study, utility consists in, which can include anything from financial gains to serious illness or the joys of watching your spouse have a good time. What, in fact, are the principles that operate here? What does ‘utility’ mean here? This enterprise is relatively unconstrained, so that too much can count as utility. If (almost) anything goes, the principle gives very little help in the here and now.

Nancy Cartwright & Jeremy Hardie

Keeping one’s distance

6 September 2015, 18:25 | Published in Varia | Comments closed

Only by the recognition of distance in our neighbour is strangeness alleviated: accepted into consciousness. The presumption of undiminished nearness present from the first, however, the flat denial of strangeness, does the other supreme wrong, virtually negates him as a particular human being and therefore the humanity in him, ‘counts him in,’ incorporates him in the inventory of property. Wherever immediateness posits and entrenches itself, the bad mediateness of society is insidiously asserted.

Theodor Adorno, Minima Moralia

 
 
 

Policy evaluation

6 September 2015, 11:39 | Published in Theory of Science & Methodology | Comments closed

Razor-sharp intellects immediately go for the essentials. They have no time for bullshit. And neither should we.

In Evidence: For Policy, Nancy Cartwright has assembled her papers on how better to use evidence from the sciences ”to evaluate whether policies that have been tried have succeeded and to predict whether those we are thinking of trying will produce the outcomes we aim for.” Many of the collected papers center around what can and cannot be inferred from results in well-done randomised controlled trials (RCTs).

A must-read for everyone with an interest in the methodology of science.

Rational expectations — a theory without empirical content

6 September 2015, 11:23 | Published in Economics | 1 comment

Cassidy: What about the rational-expectations hypothesis, the other big theory associated with modern Chicago? How does that stack up now?

Heckman: I could tell you a story about my friend and colleague Milton Friedman. In the nineteen-seventies, we were sitting in the Ph.D. oral examination of a Chicago economist who has gone on to make his mark in the world. His thesis was on rational expectations. After he’d left, Friedman turned to me and said, “Look, I think it is a good idea, but these guys have taken it way too far.”

It became a kind of tautology that had enormously powerful policy implications, in theory. But the fact is, it didn’t have any empirical content. When Tom Sargent, Lars Hansen, and others tried to test it using cross equation restrictions, and so on, the data rejected the theories. There was a certain section of people that really got carried away. It became quite stifling.

Cassidy: What about Robert Lucas? He came up with a lot of these theories. Does he bear responsibility?

Heckman: Well, Lucas is a very subtle person, and he is mainly concerned with theory. He doesn’t make a lot of empirical statements. I don’t think Bob got carried away, but some of his disciples did. It often happens. The further down the food chain you go, the more the zealots take over.

John Cassidy/The New Yorker

[h/t Noah Smith]

My first love (private)

5 September 2015, 18:18 | Published in Varia | Comments off on My first love (private)

 

Do Re Mi

5 September 2015, 18:04 | Published in Varia | Comments off on Do Re Mi

 

Absolutely fabulous. This flashmob made my day!

Evidence-based economics

5 September 2015, 14:50 | Published in Economics | Comments off on Evidence-based economics

Evidence-based theories and policies are highly valued nowadays. Randomization is supposed to control for bias from unknown confounders. The received opinion is therefore that evidence based on randomized experiments is the best.

More and more economists have lately also come to advocate randomization as the principal method for ensuring valid causal inferences.

I would, however, rather argue that randomization, just like econometrics, promises more than it can deliver, basically because it requires assumptions that in practice are impossible to maintain.

Especially when it comes to questions of causality, randomization is nowadays considered some kind of ”gold standard”. Everything has to be evidence-based, and the evidence has to come from randomized experiments.

But just like econometrics, randomization is basically a deductive method. Given the assumptions (such as manipulability, transitivity, separability, additivity, linearity, etc.), these methods deliver deductive inferences. The problem, of course, is that we can never completely know whether the assumptions are right. And although randomization may contribute to controlling for confounding, it does not guarantee it, since genuine randomness presupposes infinite experimentation, and we know all real experimentation is finite. And even if randomization may help to establish average causal effects, it says nothing of individual effects unless homogeneity is added to the list of assumptions.

Real target systems are seldom epistemically isomorphic to our axiomatic-deductive models/systems, and even if they were, we would still have to argue for the external validity of the conclusions reached from within these epistemically convenient models/systems. Causal evidence generated by randomization procedures may be valid in ”closed” models, but what we are usually interested in is causal evidence about the real target system we happen to live in.

When does a conclusion established in population X hold for target population Y? Only under very restrictive conditions!

Ideally controlled experiments (still the benchmark even for natural and quasi experiments) tell us with certainty what causes what effects – but only given the right ”closures”. Making appropriate extrapolations from (ideal, accidental, natural or quasi) experiments to different settings, populations or target systems is not easy. ”It works there” is no evidence for ”it will work here”. Causes deduced in an experimental setting still have to show that they come with an export warrant to the target population/system. The causal background assumptions made have to be justified, and without licenses to export, the value of ”rigorous” and ”precise” methods is despairingly small.

Like us, you want evidence that a policy will work here, where you are. Randomized controlled trials (RCTs) do not tell you that. They do not even tell you that a policy works. What they tell you is that a policy worked there, where the trial was carried out, in that population. Our argument is that the changes in tense – from ”worked” to ”work” – are not just a matter of grammatical detail. To move from one to the other requires hard intellectual and practical effort. The fact that it worked there is indeed fact. But for that fact to be evidence that it will work here, it needs to be relevant to that conclusion. To make RCTs relevant you need a lot more information and of a very different kind. What kind? That’s what this book is about.

Nancy Cartwright & Jeremy Hardie
