Applying real-world filters to economic models

4 April, 2014 at 08:53 | Posted in Theory of Science & Methodology | 2 Comments

Whereas some theoretical models can be immensely useful in developing intuitions, in essence a theoretical model is nothing more than an argument that a set of conclusions follows from a given set of assumptions. Being logically correct may earn a place for a theoretical model on the bookshelf, but when a theoretical model is taken off the shelf and applied to the real world, it is important to question whether the model’s assumptions are in accord with what we know about the world. Is the story behind the model one that captures what is important or is it a fiction that has little connection to what we see in practice? Have important factors been omitted? Are economic agents assumed to be doing things that we have serious doubts they are able to do? These questions and others like them allow us to filter out models that are ill suited to give us genuine insights. To be taken seriously models should pass through the real world filter …


Although a model may be internally consistent, although it may be subtle and the analysis may be mathematically elegant, none of this carries any guarantee that it is applicable to the actual world. One might think that the applicability or “truth” of a theoretical model can always be established by formal empirical analysis that tests the model’s testable hypotheses, but this is a bit of a fantasy. Formal empirical testing should, of course, be vigorously pursued, but lack of data and lack of natural experiments limit our ability in many cases to choose among competing models. In addition, even if we are able to formally test some hypotheses of these competing models, the results of these tests may only allow us to reject some of the models, leaving several survivors that have different implications on issues that we are not able to test. The real world filters will be critical in all these cases.

Paul Pfleiderer

Economic models — chameleons and theoretical cherry picking

28 March, 2014 at 20:07 | Posted in Economics, Theory of Science & Methodology | 6 Comments

Chameleons arise and are often nurtured by the following dynamic. First a bookshelf model is constructed that involves terms and elements that seem to have some relation to the real world and assumptions that are not so unrealistic that they would be dismissed out of hand. The intention of the author, let’s call him or her “Q,” in developing the model may be to say something about the real world or the goal may simply be to explore the implications of making a certain set of assumptions. Once Q’s model and results become known, references are made to it, with statements such as “Q shows that X.” This should be taken as a short-hand way of saying “Q shows that under a certain set of assumptions it follows (deductively) that X,” but some people start taking X as a plausible statement about the real world. If someone skeptical about X challenges the assumptions made by Q, some will say that a model shouldn’t be judged by the realism of its assumptions, since all models have assumptions that are unrealistic. Another rejoinder made by those supporting X as something plausibly applying to the real world might be that the truth or falsity of X is an empirical matter and until the appropriate empirical tests or analyses have been conducted and have rejected X, X must be taken seriously. In other words, X is innocent until proven guilty. Now these statements may not be made in quite the stark manner that I have made them here, but the underlying notion still prevails that because there is a model for X, because questioning the assumptions behind X is not appropriate, and because the testable implications of the model supporting X have not been empirically rejected, we must take X seriously. Q’s model (with X as a result) becomes a chameleon that avoids the real world filters.

The best way to illustrate what chameleons are is to give some actual examples …

In April 2012 Harry DeAngelo and René Stulz circulated a paper entitled “Why High Leverage is Optimal for Banks.” The title of the paper is important here: it strongly suggests that the authors are claiming something about actual banks in the real world. In the introduction to this paper the authors explain what their model is designed to do:

“To establish that high bank leverage is the natural (distortion-free) result of intermediation focused on liquid-claim production, the model rules out agency problems, deposit insurance, taxes, and all other distortionary factors. By positing these idealized conditions, the model obviously ignores some important determinants of bank capital structure in the real world. However, in contrast to the MM framework – and generalizations that include only leverage-related distortions – it allows a meaningful role for banks as producers of liquidity and shows clearly that, if one extends the MM model to take that role into account, it is optimal for banks to have high leverage.” [emphasis added]

Their model, in other words, is designed to show that if we rule out many important things and just focus on one factor alone, we obtain the particular result that banks should be highly leveraged. This argument is for all intents and purposes analogous to the argument made in another paper entitled “Why High Alcohol Consumption is Optimal for Humans” by Bacardi and Mondavi. In the introduction to their paper Bacardi and Mondavi explain what their model does:

“To establish that high intake of alcohol is the natural (distortion-free) result of human liquid-drink consumption, the model rules out liver disease, DUIs, health benefits, spousal abuse, job loss and all other distortionary factors. By positing these idealized conditions, the model obviously ignores some important determinants of human alcohol consumption in the real world. However, in contrast to the alcohol-neutral framework – and generalizations that include only overconsumption-related distortions – it allows a meaningful role for humans as producers of that pleasant “buzz” one gets by consuming alcohol, and shows clearly that if one extends the alcohol-neutral model to take that role into account, it is optimal for humans to be drinking all of their waking hours.” [emphasis added]

DeAngelo and Stulz’s model is clearly a bookshelf theoretical model that would not pass through any reasonable filter if we want to take its results and apply them directly to the real world. In addition to ignoring much of what is important (agency problems, taxes, systemic risk, government guarantees, and other distortionary factors), the results of their main model are predicated on the assets of the bank being riskless and are based on a posited objective function that is linear in the percentage of assets funded with deposits. Given this, the authors naturally obtain a corner solution with assets 100% funded by deposits. (They have no explicit model addressing what happens when bank assets are risky, but they contend that bank leverage should still be “high” when risk is present) …
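(To see why linearity delivers a corner solution, consider a stylised sketch – purely illustrative, not the authors’ actual specification. If the bank chooses the deposit share $d \in [0,1]$ so as to maximize an objective that is linear in $d$,

$$V(d) = a + b\,d, \qquad b > 0,$$

then $V'(d) = b > 0$ at every interior point, so the optimum is the boundary value $d^{*} = 1$ – that is, 100% deposit funding – simply because nothing in the objective ever penalizes taking on more leverage.)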

The DeAngelo and Stulz paper is a good illustration of my claim that one can generally develop a theoretical model to produce any result within a wide range. Do you want a model that produces the result that banks should be 100% funded by deposits? Here is a set of assumptions and an argument that will give you that result. That such a model exists tells us very little. By claiming relevance without running it through the filter, it becomes a chameleon …

Whereas some theoretical models can be immensely useful in developing intuitions, in essence a theoretical model is nothing more than an argument that a set of conclusions follows from a given set of assumptions. Being logically correct may earn a place for a theoretical model on the bookshelf, but when a theoretical model is taken off the shelf and applied to the real world, it is important to question whether the model’s assumptions are in accord with what we know about the world. Is the story behind the model one that captures what is important or is it a fiction that has little connection to what we see in practice? Have important factors been omitted? Are economic agents assumed to be doing things that we have serious doubts they are able to do? These questions and others like them allow us to filter out models that are ill suited to give us genuine insights. To be taken seriously models should pass through the real world filter.

Chameleons are models that are offered up as saying something significant about the real world even though they do not pass through the filter. When the assumptions of a chameleon are challenged, various defenses are made (e.g., one shouldn’t judge a model by its assumptions, any model has equal standing with all other models until the proper empirical tests have been run, etc.). In many cases the chameleon will change colors as necessary, taking on the colors of a bookshelf model when challenged, but reverting back to the colors of a model that claims to apply to the real world when not challenged.

Paul Pfleiderer

Reading Pfleiderer’s absolutely fabulous gem of an article reminded me of what H. L. Mencken once famously said:

There is always an easy solution to every problem – neat, plausible and wrong.

Pfleiderer’s perspective may be applied to many of the issues involved when modeling complex and dynamic economic phenomena. Let me take just one example — simplicity.

When it comes to modeling I do see the point, emphatically made time after time by e.g. Paul Krugman, of simplicity — as long as it doesn’t impinge on our truth-seeking. “Simple” macroeconomic models may of course be an informative heuristic tool for research. But if practitioners of modern macroeconomics do not investigate and make an effort to provide a justification for the credibility of the simplicity assumptions on which they erect their building, those models will not fulfill their task. Maintaining that economics is a science in the “true knowledge” business, I remain a skeptic of the pretences and aspirations of “simple” macroeconomic models and theories. So far, I can’t really see that e.g. “simple” microfounded models have yielded very much in terms of realistic and relevant economic knowledge.

All empirical sciences use simplifying or unrealistic assumptions in their modeling activities. That is not the issue – as long as the assumptions made are not unrealistic in the wrong way or for the wrong reasons.

But models do not only face theory. They also have to look to the world. Being able to model a “credible world,” a world that somehow could be considered real or similar to the real world, is not the same as investigating the real world. Even though — as Pfleiderer acknowledges — all theories are false, since they simplify, they may still possibly serve our pursuit of truth. But then they cannot be unrealistic or false in just any way; the falsehood or unrealism has to be qualified.

Explanation, understanding and prediction of real-world phenomena, relations and mechanisms therefore cannot be grounded on assuming simplicity simpliciter. If we cannot show that the mechanisms or causes we isolate and handle in our models are stable – in the sense that, when we export them from our models to our target systems, they do not change from one situation to another – then they, considered “simple” or not, only hold under ceteris paribus conditions and are a fortiori of limited value for our understanding, explanation and prediction of our real-world target system.

The obvious ontological shortcoming of a basically epistemic – rather than ontological – approach is that “similarity” or “resemblance” tout court does not guarantee that the correspondence between model and target is interesting, relevant, revealing or somehow adequate in terms of mechanisms, causal powers, capacities or tendencies. No matter how many convoluted refinements of concepts are made in the model, if the simplifications made do not result in models similar to reality in the appropriate respects (such as structure, isomorphism, etc.), the surrogate system becomes a substitute system that does not bridge to the world but rather misses its target.

Constructing simple macroeconomic models that are somehow seen as “successively approximating” macroeconomic reality is a rather unimpressive attempt at legitimizing the use of fictitious idealizations for reasons that have more to do with model tractability than with a genuine interest in understanding and explaining features of real economies. Many of the model assumptions standardly made by neoclassical macroeconomics – simplicity being one of them – are restrictive rather than harmless and could a fortiori not in any sensible meaning be considered approximations at all.

If economists aren’t able to show that the mechanisms or causes that they isolate and handle in their “simple” models are stable – in the sense that they do not change when exported to their “target systems” – then those mechanisms only hold under ceteris paribus conditions and are a fortiori of limited value to our understanding, explanation or prediction of real economic systems.

That Newton’s theory in most regards is simpler than Einstein’s is of no avail. Today Einstein has replaced Newton. The ultimate arbiter of the scientific value of models cannot be simplicity.

As scientists we have to get our priorities right. Ontological under-labouring has to precede epistemology.

 

Footnote: And of course you understood that the Bacardi/Mondavi paper is fictional. Or did you?

What is an explanation?

23 March, 2014 at 14:27 | Posted in Theory of Science & Methodology | 7 Comments

One of the most important tasks of social sciences is to explain the events, processes, and structures that take place and act in society. But the researcher cannot stop at this. As a consequence of the relations and connections that the researcher finds, a will and a demand arise for critical reflection on the findings. To show that unemployment depends on rigid social institutions or on adaptations to European aspirations for economic integration, for instance, constitutes at the same time a critique of these conditions. It also entails an implicit critique of other explanations that one can show to be built on false beliefs. The researcher can never be satisfied with establishing that false beliefs exist but must go on to seek an explanation for why they exist. What is it that maintains and reproduces them? To show that something causes false beliefs – and to explain why – constitutes at the same time a critique of that thing.

This I think is something particular to the humanities and social sciences. There is no full equivalent in the natural sciences, since the objects of their study are not fundamentally created by human beings in the same sense as the objects of study in social sciences. We do not criticize apples for falling to earth in accordance with the law of gravitation.

The explanatory critique that constitutes all good social science thus has repercussions on the reflective person in society. To digest the explanations and understandings that social sciences can provide means a simultaneous questioning and critique of one’s self-understanding and the actions and attitudes it gives rise to. Science can play an important emancipating role in this way. Human beings can fulfill and develop themselves only if they do not base their thoughts and actions on false beliefs about reality. Fulfillment may also require changing fundamental structures of society. Understanding of the need for this change may issue from various sources like everyday praxis and reflection as well as from science.

Explanations of societal phenomena must be subject to criticism, and this criticism must be an essential part of the task of social science. Social science has to be an explanatory critique. The researcher’s explanations have to constitute a critical attitude toward the very object of research, society. Hopefully, the critique may result in proposals for how the institutions and structures of society can be constructed. The social scientist has a responsibility to try to elucidate possible alternatives to existing institutions and structures.

In a time when scientific relativism is expanding, it is important to uphold the claim that science cannot be reduced to a purely discursive level. We have to maintain the Enlightenment tradition of thinking of reality as principally independent of our views of it and of the main task of science as studying the structure of this reality. Perhaps the most important contribution a researcher can make is to reveal what this reality that is the object of science actually looks like.

Science is made possible by the fact that there are structures that are durable and are independent of our knowledge or beliefs about them. There exists a reality beyond our theories and concepts of it. It is this independent reality that our theories in some way deal with. Contrary to positivism, I cannot see that the main task of science is to detect event-regularities between observed facts. Rather, that task must be conceived as identifying the underlying structure and forces that produce the observed events.

The problem with positivist social science is not that it gives the wrong answers, but rather that in a strict sense it does not give answers at all. Its explanatory models presuppose that the social reality is “closed,” and since social reality is fundamentally “open,” models of that kind cannot explain anything of what happens in such a universe. Positivist social science has to postulate closed conditions to make its models operational and then – totally unrealistically – impute these closed conditions to society’s real structure.

In the face of the kind of methodological individualism and rational choice theory that dominate positivist social science we have to admit that even if knowing the aspirations and intentions of individuals is a necessary prerequisite for giving explanations of social events, it is far from sufficient. Even the most elementary “rational” actions in society presuppose the existence of social forms that it is not possible to reduce to the intentions of individuals.

The overarching flaw with methodological individualism and rational choice theory is basically that they reduce social explanations to purportedly individual characteristics. But many of the characteristics and actions of the individual originate in and are made possible only through society and its relations. Society is not reducible to individuals, since the social characteristics, forces, and actions of the individual are determined by pre-existing social structures and positions. Even though society is not a volitional individual, and the individual is not an entity given outside of society, the individual (actor) and the society (structure) have to be kept analytically distinct. They are tied together through the individual’s reproduction and transformation of already given social structures.

With a non-reductionist approach we avoid both determinism and voluntarism. For although the individual in society is formed and influenced by social structures that he does not construct himself, he can as an individual influence and change the given structures in another direction through his own actions. In society the individual is situated in roles or social positions that give limited freedom of action (through conventions, norms, material restrictions, etc.), but at the same time there is no necessity in principle that we must blindly follow or accept these limitations. However, as long as social structures and positions are reproduced (rather than transformed), the actions of the individual will have a tendency to go in a certain direction.

Searching for causality — statistics vs. history

17 March, 2014 at 11:37 | Posted in Theory of Science & Methodology | 4 Comments

History and statistics serve a common purpose: to understand the causal force of some phenomenon. It seems to me, moreover, that statistics is a simplifying tool to understand causality, whereas history is a more elaborate tool. And by “more elaborate” I mean that history usually attempts to take into account both more variables as well as fundamentally different variables in our quest to understand causality.

To make this point clear, think about what a statistical model is: it is a representation of some dependent variable as a function of one or more independent variables, which we think, perhaps because of some theory, have a causal influence on the dependent variable in question. A historical analysis is a similar type of model. For example, a historian typically starts by acknowledging some development, say a war, and then attempts to describe, in words, the events that led to the particular development. Now, it is true that historians typically delve deeply into the details of the events predating the development – e.g., by examining written correspondence between officials, by reciting historical news clippings to understand the public mood, etc. – but this simply means that the historian is examining more variables than the simplifying statistician. If the statistician added more variables to his regression, he would be on his way to producing a historical analysis.

There is, however, one fundamental way in which the historian’s model is different from the statistician’s: namely, the statistician is limited by the fact that he can only consider precisely quantified variables in his model. The historian, in contrast, can add whatever variables he wants to his model. Indeed, the historian’s model is non-numeric …

It is my view that what differentiates whether history or statistics will be successful relates to the subject area to which each tool is applied. In subjects where precisely quantified variables are all we need to confidently determine the causal force of some phenomenon, statistics will be preferable; in subjects where imprecisely quantified variables play an important causal role, we need to rely on history.

It seems to me, moreover, that the line dividing the subjects to which we apply our historical or statistical tools cuts along the same seam as does the line dividing the social sciences from the natural sciences. In the latter, we can ignore imprecisely quantified variables, such as human beliefs, as these variables don’t play an important causal role in the movement of natural phenomena. In the former, such imprecisely quantified variables play a central role in the construction and the stability of the laws that govern society at any given moment.

Econolosophy
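In symbols, the statistician’s model described above is just the familiar regression schema (my notation, purely illustrative):

$$y_i = \beta_0 + \beta_1 x_{1i} + \cdots + \beta_k x_{ki} + \varepsilon_i,$$

where $y_i$ is the dependent variable, the $x$’s are the independent variables we think – perhaps on the basis of some theory – have a causal influence on it, and $\varepsilon_i$ collects everything left out. On this reading, the historian works with the same schema, only with many more, and partly non-quantified, right-hand-side “variables”.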

On evidence and establishing scientific claims

8 March, 2014 at 17:54 | Posted in Theory of Science & Methodology | Leave a comment

In her interesting Pufendorf lectures Nancy Cartwright presents a theory of evidence and explains why randomized controlled trials (RCTs) are not at all the “gold standard” that they have lately often been portrayed as. As yours truly has repeatedly argued on this blog (e.g. here and here), RCTs usually do not provide evidence that their results are exportable to other target systems. The almost religious belief with which their advocates portray them cannot hide the fact that RCTs cannot be taken for granted to give generalizable results. That something works somewhere is no guarantee that it will work for us, or even that it works generally.

What science is all about — Inference to the Best Explanation

6 March, 2014 at 11:08 | Posted in Theory of Science & Methodology | 1 Comment

 

And if you want to know more on science and inference, the one book you should read is Peter Lipton’s Inference to the Best Explanation (2nd ed, Routledge, 2004). A truly great book that has influenced my own scientific thinking tremendously.

If you’re looking for a more comprehensive bibliography on Inference to the Best Explanation, Lord Keynes has a good one here. [And for those who read Swedish, I self-indulgently recommend this.]

Top 10 Theory of Science Books for Economists

5 March, 2014 at 23:08 | Posted in Theory of Science & Methodology | 2 Comments


• Archer, Margaret (1995). Realist social theory: the morphogenetic approach. Cambridge: Cambridge University Press

• Bhaskar, Roy (1978). A realist theory of science. Hassocks: Harvester Press

• Cartwright, Nancy (1989). Nature’s capacities and their measurement. Oxford: Clarendon Press

• Chalmers, Alan (2013). What is this thing called science? 4th ed. Buckingham: Open University Press

• Garfinkel, Alan (1981). Forms of explanation: rethinking the questions in social theory. New Haven: Yale University Press

• Harré, Rom (1960). An introduction to the logic of the sciences. London: Macmillan

• Lawson, Tony (1997). Economics and reality. London: Routledge

• Lieberson, Stanley (1987). Making it count: the improvement of social research and theory. Berkeley: University of California Press

• Lipton, Peter (2004). Inference to the best explanation. 2nd ed. London: Routledge

• Miller, Richard (1987). Fact and method: explanation, confirmation and reality in the natural and the social sciences. Princeton, N.J.: Princeton University Press

Causality and the limits of statistical inference

12 February, 2014 at 12:43 | Posted in Theory of Science & Methodology | Leave a comment

Causality in social sciences – and economics – can never solely be a question of statistical inference. Causality entails more than predictability, and really explaining social phenomena in depth requires theory. Analysis of variation – the foundation of all econometrics – can never in itself reveal how these variations are brought about. Only when we are able to tie actions, processes or structures to the statistical relations detected can we say that we are getting at relevant explanations of causation.
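A minimal simulation sketch (my own illustration, with made-up numbers) of why analysis of variation cannot by itself reveal how variations are brought about: two variables driven by a common cause are strongly correlated, and regressing one on the other yields a sizeable coefficient, even though neither has any causal influence on the other.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# A hidden common cause drives both x and y; x has no causal effect on y.
z = rng.normal(size=n)            # unobserved confounder
x = 2.0 * z + rng.normal(size=n)
y = 3.0 * z + rng.normal(size=n)

# OLS slope of y on x: cov(x, y) / var(x)
slope = np.cov(x, y)[0, 1] / np.var(x)
print(f"estimated 'effect' of x on y: {slope:.2f}")  # ~1.2, yet the true causal effect is 0
```

The regression faithfully summarizes the variation in the data; it simply cannot tell us whether that variation comes from x acting on y or from something else producing both.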

For more on these theory of science issues — see my article Capturing causality in economics and the limits of statistical inference in real-world economics review.

Why rigour is such a poor substitute for relevance

3 February, 2014 at 17:24 | Posted in Theory of Science & Methodology | 8 Comments

What is science? One brief definition runs: “A systematic knowledge of the physical or material world.” Most definitions emphasize the two elements in this definition: (1) “systematic knowledge” about (2) the real world. Without pushing this definitional question to its metaphysical limits, I merely want to suggest that if economics is to be a science, it must not only develop analytical tools but must also apply them to a world that is now observable or that can be made observable through improved methods of observation and measurement. Or in the words of the Hungarian mathematical economist Janos Kornai, “In the real sciences, the criterion is not whether the proposition is logically true and tautologically deducible from earlier assumptions. The criterion of ‘truth’ is, whether or not the proposition corresponds to reality” …

One of our most distinguished historians of economic thought, George Stigler, has stated that: “The dominant influence upon the working range of economic theorists is the set of internal values and pressures of the discipline. The subjects of study are posed by the unfolding course of scientific developments.” He goes on to add: “This is not to say that the environment is without influence …” But, he continues, “whether a fact or development is significant depends primarily on its relevance to current economic theory.” What a curious relating of rigor to relevance! Whether the real world matters depends presumably on “its relevance to current economic theory.” Many if not most of today’s economic theorists seem to agree with this ordering of priorities …

Today, rigor competes with relevance in macroeconomic and monetary theory, and in some lines of development macro and monetary theorists, like many of their colleagues in micro theory, seem to consider relevance to be more or less irrelevant … The theoretical analysis in much of this literature rests on assumptions that also fly in the face of the facts … Another related recent development in which theory proceeds with impeccable logic from unrealistic assumptions to conclusions that contradict the historical record, is the recent work on rational expectations …

I have scolded economists for what I think are the sins that too many of them commit, and I have tried to point the way to at least partial redemption. This road to salvation will not be an easy one for those who have been seduced by the siren of mathematical elegance or those who all too often seek to test unrealistic models without much regard for the quality or relevance of the data they feed into their equations. But let us all continue to worship at the altar of science. I ask only that our credo be: “relevance with as much rigor as possible,” and not “rigor regardless of relevance.” And let us not be afraid to ask — and to try to answer — the really big questions.

Robert A. Gordon

On DSGE and the art of using absolutely ridiculous modeling assumptions

23 January, 2014 at 23:08 | Posted in Economics, Theory of Science & Methodology | 4 Comments

Reading some of the comments — by Noah Smith, David Andolfatto and others — on my post Why Wall Street shorts economists and their DSGE models, I — as usual — get the feeling that mainstream economists, when facing anomalies, think that there is always some further “technical fix” that will get them out of the quagmire. But are these elaborations and amendments to something basically wrong really going to solve the problem? I doubt it. Acting like the baker’s apprentice who, having forgotten to add yeast to the dough, throws it into the oven afterwards, simply isn’t enough.

When criticizing the basic workhorse DSGE model for its inability to explain involuntary unemployment, some DSGE defenders maintain that later elaborations — e.g. newer search models — manage to do just that. I strongly disagree. One of the more conspicuous problems with those “solutions” is that they — as e.g. Pissarides’ “Loss of Skill during Unemployment and the Persistence of Unemployment Shocks” (QJE, 1992) — are as a rule constructed without seriously trying to warrant that the model-immanent assumptions and results are applicable in the real world. External validity is more or less a non-existent problematique, sacrificed on the altar of model derivations. This is not by chance. For how could one even imagine empirically testing assumptions such as Pissarides’ “model 1” assumptions of reality being adequately represented by “two overlapping generations of fixed size”, “wages determined by Nash bargaining”, “actors maximizing expected utility”, “endogenous job openings” and “job matching describable by a probability distribution”, without coming to the conclusion that this is — in terms of realism and relevance — nothing but nonsense on stilts?

The whole strategy reminds me not a little of the following tale:

Time after time you hear people speaking in baffled terms about mathematical models that somehow didn’t warn us in time, that were too complicated to understand, and so on. If you have somehow missed such public displays of throwing the model (and quants) under the bus, stay tuned below for examples.

But this is far from the case – most of the really enormous failures of models are explained by people lying …

A common response to these problems is to call for those models to be revamped, to add features that will cover previously unforeseen issues, and generally speaking, to make them more complex.

For a person like myself, who gets paid to “fix the model,” it’s tempting to do just that, to assume the role of the hero who is going to set everything right with a few brilliant ideas and some excellent training data.

Unfortunately, reality is staring me in the face, and it’s telling me that we don’t need more complicated models.

If I go to the trouble of fixing up a model, say by adding counterparty risk considerations, then I’m implicitly assuming the problem with the existing models is that they’re being used honestly but aren’t mathematically up to the task.

If we replace okay models with more complicated models, as many people are suggesting we do, without first addressing the lying problem, it will only allow people to lie even more. This is because the complexity of a model itself is an obstacle to understanding its results, and more complex models allow more manipulation …

I used to work at Riskmetrics, where I saw first-hand how people lie with risk models. But that’s not the only thing I worked on. I also helped out building an analytical wealth management product. This software was sold to banks, and was used by professional “wealth managers” to help people (usually rich people, but not mega-rich people) plan for retirement.

We had a bunch of bells and whistles in the software to impress the clients – Monte Carlo simulations, fancy optimization tools, and more. But in the end, the banks and their wealth managers put in their own market assumptions when they used it. Specifically, they put in the forecast market growth for stocks, bonds, alternative investing, etc., as well as the assumed volatility of those categories and indeed the entire covariance matrix representing how correlated the market constituents are to each other.

The result is this: no matter how honest I would try to be with my modeling, I had no way of preventing the model from being misused and misleading to the clients. And it was indeed misused: wealth managers put in absolutely ridiculous assumptions of fantastic returns with vanishingly small risk.

Cathy O’Neil
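To make the point concrete, here is a stripped-down sketch – my own, with invented numbers, not the actual software O’Neil describes – of the mechanism at work: the Monte Carlo engine is perfectly sound, but the projected outcome is driven almost entirely by the mean returns and covariance matrix the user types in.

```python
import numpy as np

def simulate_terminal_wealth(mu, cov, weights, years=30, n_paths=10_000, w0=100_000, seed=0):
    """Project terminal wealth by drawing annual asset returns from N(mu, cov)."""
    rng = np.random.default_rng(seed)
    rets = rng.multivariate_normal(mu, cov, size=(n_paths, years))  # per path, per year
    port = rets @ weights                      # portfolio return on each path/year
    return w0 * np.prod(1.0 + port, axis=1)    # compound to terminal wealth

weights = np.array([0.6, 0.4])                 # a 60/40 stock/bond portfolio

# Sober assumptions vs. the kind of rosy inputs a wealth manager might type in.
honest = simulate_terminal_wealth(mu=[0.05, 0.02],
                                  cov=[[0.18**2, 0.0], [0.0, 0.06**2]],
                                  weights=weights)
rosy = simulate_terminal_wealth(mu=[0.12, 0.06],
                                cov=[[0.05**2, 0.0], [0.0, 0.02**2]],
                                weights=weights)

print("median terminal wealth, sober inputs:", round(np.median(honest)))
print("median terminal wealth, rosy inputs: ", round(np.median(rosy)))
```

With the rosy inputs the median projection comes out several times higher than with the sober ones, even though the code, the simulation method and the portfolio are identical. The “bells and whistles” lend the output an air of rigour that the user-supplied assumptions do nothing to earn.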

