Contaminated data — the case of racial discrimination
31 May, 2021 at 18:26 | Posted in Statistics & Econometrics | Comments Off on Contaminated data — the case of racial discrimination
Is causality only in the mind?
31 May, 2021 at 10:35 | Posted in Theory of Science & Methodology | 3 Comments
I make two main points that are firmly anchored in the econometric tradition. The first is that causality is a property of a model of hypotheticals. A fully articulated model of the phenomena being studied precisely defines hypothetical or counterfactual states. A definition of causality drops out of a fully articulated model as an automatic by-product. A model is a set of possible counterfactual worlds constructed under some rules. The rules may be the laws of physics, the consequences of utility maximization, or the rules governing social interactions, to take only three of many possible examples. A model is in the mind. As a consequence, causality is in the mind.
So, according to this ‘Nobel prize’ winning econometrician, “causality is in the mind.” But is that a tenable view? Yours truly thinks not. If economists and social scientists were to subscribe to that view, there would be precious little reason to be interested in questions of causality at all. And it surely doesn’t suffice just to say that all science is predicated on assumptions. To most of us, models are ‘vehicles’ or ‘instruments’ by which we represent causal processes and structures that exist and operate in the real world. As we all know, models often do not succeed in representing or explaining these processes and structures, but if we considered them to be nothing but figments of our minds, then maybe we ought to reconsider why we are in the science business at all …
The world as we know it has limited scope for certainty and perfect knowledge. Its intrinsic and almost unlimited complexity and the interrelatedness of its parts rule out treating it as constituted by atoms with discretely distinct, separable and stable causal relations. Our knowledge accordingly has to be of a rather fallible kind. To search for deductive precision and rigour in such a world is self-defeating. The only way to defend such an endeavour is to restrict oneself to proving things in closed model-worlds. Why we should care about these, rather than asking questions of relevance, is hard to see. As scientists we have to get our priorities right. Ontological under-labouring has to precede epistemology.
The value of getting at precise and rigorous conclusions about causality based on ‘tractability’ conditions that are seldom met in real life is difficult to assess. Testing and constructing models is one thing, but we also need guidelines for evaluating in which situations and contexts they are applicable. Formalism may help us a bit down the road, but we have to make sure it somehow also fits the world if it is going to be really helpful in navigating that world. In all of science, conclusions are never more certain than the assumptions on which they are founded. But most epistemically convenient methods and models that work in ‘well-behaved’ systems do not come with warrants that they will work in other (real-world) contexts.
Exchangeability (student stuff)
30 May, 2021 at 12:48 | Posted in Statistics & Econometrics | 1 Comment
Debating identity politics and intersectionality
29 May, 2021 at 18:44 | Posted in Politics & Society | Comments Off on Debating identity politics and intersectionality
Islamo-gauchisme: false controversy or reality?
29 May, 2021 at 11:07 | Posted in Politics & Society | Comments Off on Islamo-gauchisme: false controversy or reality?
Econometrics — science built on untestable assumptions
27 May, 2021 at 22:31 | Posted in Statistics & Econometrics | 1 Comment
Just what is the causal content attributed to structural models in econometrics? And what does this imply with respect to the interpretation of the error term? …
Consider briefly the testability of the assumptions brought to light in this section. Given that these assumptions directly involve the factors omitted in the error term, testing them empirically seems impossible without information about what is hidden in the error term. But since the error term is unobservable, this places the modeller in a difficult situation: how to know that some important factor has not been left out of the model, undermining the desired inferences in some way? It also shows that there will always be an element of faith in the assumptions about the error term.
In econometrics textbooks it is often said that the error term in regression models represents the effect of the variables that were omitted from the model. The error term is somehow thought of as a ‘catch-all’ term representing the content omitted from the model, necessary to include to ‘save’ the assumed deterministic relation between the other random variables included in the model. Error terms are usually assumed to be orthogonal to (uncorrelated with) the explanatory variables. But since they are unobservable, this assumption is also impossible to test empirically. And without a justification of the orthogonality assumption, there is as a rule nothing to ensure identifiability.
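The point is easily demonstrated. Here is a minimal simulation (the data-generating process and all numbers are invented for illustration): when an omitted factor is correlated with the regressor, the OLS estimate is biased, and since the fitted residuals are orthogonal to the regressor by construction, nothing in the data itself can reveal that the orthogonality assumption has failed.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical data-generating process: y depends on x AND on an
# unobserved factor z that happens to be correlated with x.
z = rng.normal(size=n)                 # the omitted variable
x = 0.8 * z + rng.normal(size=n)       # regressor, correlated with z
y = x + z + rng.normal(size=n)         # the true effect of x on y is 1.0

# OLS of y on x alone (slope and intercept)
b1, b0 = np.polyfit(x, y, 1)
print(f"estimated effect of x: {b1:.2f}  (true effect: 1.00)")  # roughly 1.5

# The fitted residuals are orthogonal to x BY CONSTRUCTION, so no
# inspection of them can expose the violated assumption.
e = y - (b0 + b1 * x)
print(f"corr(x, residuals): {np.corrcoef(x, e)[0, 1]:.1e}")     # ~ 0
```

The estimate is off by some fifty per cent, yet every diagnostic computed from the observed data looks impeccable. That is exactly the ‘element of faith’ referred to above.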
Distributional assumptions about error terms are a good place to bury things because hardly anyone pays attention to them. Moreover, if a critic does see that this is the identifying assumption, how can she win an argument about the true expected value of the level of aether? If the author can make up an imaginary variable, “because I say so” seems like a pretty convincing answer to any question about its properties.
Why econometric models by necessity are endlessly misspecified
27 May, 2021 at 14:39 | Posted in Statistics & Econometrics | 1 Comment

The impossibility of proper specification is true generally in regression analyses across the social sciences, whether we are looking at the factors affecting occupational status, voting behavior, etc. The problem is that, as implied by the three conditions for regression analyses to yield accurate, unbiased estimates, you need to investigate a phenomenon that has underlying mathematical regularities – and, moreover, you need to know what they are. Neither seems true. I have no reason to believe that the way in which multiple factors affect earnings, student achievement, and GNP has some underlying mathematical regularity across individuals or countries. More likely, each individual or country has a different function, and one that changes over time. Even if there were some constancy, the processes are so complex that we have no idea of what the function looks like.
Researchers recognize that they do not know the true function and seem to treat, usually implicitly, their results as a good-enough approximation. But there is no basis for the belief that the results of what is run in practice are anything close to the underlying phenomenon, even if there is an underlying phenomenon. This just seems to be wishful thinking. Most regression analysis research doesn’t even pay lip service to theoretical regularities. But you can’t just regress anything you want and expect the results to approximate reality. And even when researchers take somewhat seriously the need for an underlying theoretical framework – as they have, at least to some extent, in the examples of studies of earnings, educational achievement, and GNP that I have used to illustrate my argument – they are so far from the conditions necessary for proper specification that one can have no confidence in the validity of the results.
Most work in econometrics and regression analysis rests on the assumption that the researcher has a theoretical model that is ‘true.’ Based on this belief of having a correct specification for an econometric model or a regression, one proceeds as if the only problems remaining to solve have to do with measurement and observation.
The problem is that there is precious little to support the perfect specification assumption. Looking around in social science and economics, we don’t find a single regression or econometric model that lives up to the standards set by the ‘true’ theoretical model — and there is nothing that gives us reason to believe things will be different in the future.
To think that we are able to construct a model in which all relevant variables are included and the functional relationships between them correctly specified is not only a belief with little support, but a belief impossible to support.
The theories we work with when building our econometric regression models are insufficient. No matter what we study, there are always some variables missing, and we don’t know the correct way to functionally specify the relationships between the variables.
Every regression model constructed is misspecified. There is always an endless list of possible variables to include, and endless possible ways to specify the relationships between them. So every applied econometrician comes up with his own specification and ‘parameter’ estimates. The econometric Holy Grail of consistent and stable parameter-values is nothing but a dream.
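A minimal sketch of what this means in practice (the non-linear data-generating process is, of course, invented for illustration): fit the standard linear specification to a relationship that is actually non-linear, and the estimated ‘parameter’ changes with every range of data it happens to be fitted on. There simply is no stable parameter to estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear_slope(x, y):
    """OLS slope of y on x (with intercept)."""
    return np.polyfit(x, y, 1)[0]

# Hypothetical non-linear 'true' process: y = x**2 + noise. Running the
# usual linear specification yields a different 'parameter' on every
# sample range (the slopes come out near 1, 5 and 15 below).
for lo, hi in [(0, 1), (0, 5), (5, 10)]:
    x = rng.uniform(lo, hi, size=50_000)
    y = x**2 + rng.normal(size=x.size)
    print(f"x in [{lo}, {hi}]: estimated linear 'effect' = {linear_slope(x, y):.2f}")
```

Each regression is internally unobjectionable; it is only the belief that the fitted slope is a stable structural constant that is unfounded.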
The theoretical conditions that have to be fulfilled for regression analysis and econometrics really to work are nowhere near being met in reality. Making outlandish statistical assumptions does not provide solid ground for doing relevant social science and economics. Although regression analysis and econometrics have become the most used quantitative methods in social science and economics today, the fact remains that the inferences made from them are of strongly questionable validity.
The econometric art as it is practiced at the computer … involves fitting many, perhaps thousands, of statistical models. … There can be no doubt that such a specification search invalidates the traditional theories of inference … All the concepts of traditional theory utterly lose their meaning by the time an applied researcher pulls from the bramble of computer output the one thorn of a model he likes best, the one he chooses to portray as a rose.
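Leamer’s point is easy to reproduce. A hedged sketch (the setup is invented and consists of pure noise throughout): generate regressors with no relation whatsoever to the outcome, search over specifications, and keep the best t-statistic. A ‘significant’ rose is duly pulled from the bramble.

```python
import numpy as np
from itertools import combinations
from math import comb

rng = np.random.default_rng(1)
n, k = 100, 12

X = rng.normal(size=(n, k))   # twelve candidate regressors: pure noise
y = rng.normal(size=n)        # the outcome: also noise, unrelated to X

best_t = 0.0
for cols in combinations(range(k), 3):            # every 3-variable model
    Z = np.column_stack([np.ones(n), X[:, list(cols)]])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)  # OLS fit
    e = y - Z @ beta
    s2 = (e @ e) / (n - Z.shape[1])               # residual variance
    se = np.sqrt(s2 * np.linalg.inv(Z.T @ Z).diagonal())
    best_t = max(best_t, np.abs(beta[1:] / se[1:]).max())

print(f"best |t| over {comb(k, 3)} specifications: {best_t:.2f}")
# Routinely far above the 1.96 'significance' threshold, despite the
# complete absence of any real relationship in the data.
```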
Elizabeth Warren roasting top bank CEOs
26 May, 2021 at 19:51 | Posted in Economics, Politics & Society | 1 Comment
That’s the right spirit, senator! Watching Warren give these baloney-talking guys a well-earned lecture on public decency is absolutely fabulous.
Why economic models do not explain
26 May, 2021 at 19:14 | Posted in Economics | 1 Comment
Analogue-economy models may picture Galilean thought experiments or they may describe credible worlds. In either case we have a problem in taking lessons from the model to the world. The problem is the venerable one of unrealistic assumptions, exacerbated in economics by the fact that the paucity of economic principles with serious empirical content makes it difficult to do without detailed structural assumptions. But the worry is not just that the assumptions are unrealistic; rather, they are unrealistic in just the wrong way.
One of the limitations of economics is the restricted possibility of performing experiments, forcing it to rely mainly on observational studies for knowledge of real-world economies.
But still — the idea of performing laboratory experiments holds a firm grip on our wish to discover (causal) relationships between economic ‘variables.’ If only we could isolate and manipulate variables in controlled environments, we would probably find ourselves in a situation where we, with greater ‘rigour’ and ‘precision,’ could describe, predict, or explain economic happenings in terms of ‘structural’ causes, ‘parameter’ values of relevant variables, and economic ‘laws.’
Galileo Galilei’s experiments are often held up as exemplary of how to perform experiments to learn something about the real world. Galileo’s heavy balls dropping from the tower of Pisa confirmed that the distance an object falls is proportional to the square of time, and that this law (empirical regularity) of falling bodies is applicable outside a vacuum tube when, e.g., air resistance is negligible.
The big problem is to decide or find out exactly for which objects air resistance (and other potentially ‘confounding’ factors) is ‘negligible.’ In the case of heavy balls, air resistance is obviously negligible, but what about feathers or plastic bags?
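One can put rough numbers on the question. A back-of-the-envelope sketch (the drag-to-mass ratios are invented and order-of-magnitude only), integrating a two-second fall with simple quadratic air resistance: for a heavy ball, Galileo’s vacuum law d = gt²/2 is an excellent approximation; for a feather-like object, it fails completely.

```python
import numpy as np

g = 9.81        # gravitational acceleration (m/s^2)
dt = 1e-4       # integration step (s)
t_end = 2.0     # simulate a two-second drop

def distance_fallen(drag_per_mass):
    """Distance fallen after t_end with quadratic drag (c/m) * v^2."""
    v, d = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        v += (g - drag_per_mass * v * v) * dt
        d += v * dt
    return d

vacuum = 0.5 * g * t_end**2   # Galileo's law: d = g t^2 / 2, about 19.6 m

# Illustrative drag-to-mass ratios (made up for the sketch):
for name, c_m in [("heavy ball", 0.001), ("feather", 5.0)]:
    print(f"{name:10s}: fell {distance_fallen(c_m):5.2f} m "
          f"(vacuum law predicts {vacuum:.2f} m)")
```

The heavy ball lands within about one per cent of the vacuum-law prediction; the feather falls less than a seventh of the predicted distance. ‘Negligibility’ is a property of the object, not of the law.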
One possibility is to take the all-encompassing-theory road and find out all about the possible disturbing/confounding factors — not only air resistance — influencing the fall, and build that into one great model delivering accurate predictions on what happens when the object that falls is not only a heavy ball but a feather or a plastic bag. This usually amounts to ultimately stating some kind of ceteris paribus interpretation of the ‘law.’
Another road to take would be to concentrate on the negligibility assumption and to specify the domain of applicability to be only heavy compact bodies. The price you have to pay for this is that (1) ‘negligibility’ may be hard to establish in open real-world systems, (2) the generalisation you can make from ‘sample’ to ‘population’ is heavily restricted, and (3) you actually have to use some ‘shoe leather’ and empirically try to find out how large the ‘reach’ of the ‘law’ is.
In mainstream economics, one has usually settled for the ‘theoretical’ road (and in case you think the present ‘natural experiments’ hype has changed anything, remember that to mimic real experiments, exceedingly stringent special conditions have to obtain).
In the end, it all boils down to one question — are there any Galilean ‘heavy balls’ to be found in economics, so that we can indisputably establish the existence of economic laws operating in real-world economies?
As far as I can see there are some heavy balls out there, but not a single real economic law.
Economic factors/variables are more like feathers than heavy balls — non-negligible factors (like air resistance and chaotic turbulence) are hard to rule out as having no influence on the object studied.
Galilean experiments are hard to carry out in economics, and the theoretical ‘analogue’ models economists construct and in which they perform their ‘thought experiments’ build on assumptions that are far away from the kind of idealized conditions under which Galileo performed his experiments. The ‘nomological machines’ that Galileo and other scientists have been able to construct have no real analogues in economics. The stability, autonomy, modularity, and interventional invariance that we may find between entities in nature simply are not there in real-world economies. That’s a real-world fact, and contrary to the beliefs of most mainstream economists, it won’t go away simply by applying deductive-axiomatic economic theory with tons of more or less unsubstantiated assumptions.
By this, I do not mean to say that we have to discard all (causal) theories/laws building on modularity, stability, invariance, etc. But we have to acknowledge the fact that outside the systems that possibly fulfil these requirements/assumptions, they are of little substantial value. Running paper-and-pen experiments on artificial ‘analogue’ model economies is a sure way of ‘establishing’ (causal) economic laws or solving intricate econometric problems of autonomy, identification, invariance and structural stability — in the model world. But they are poor substitutes for the real thing, and they don’t have much bearing on what goes on in real-world open social systems. Setting up convenient circumstances for conducting Galilean experiments may tell us a lot about what happens under those kinds of circumstances. But few, if any, real-world social systems are ‘convenient.’ So most of those systems, theories and models are irrelevant for letting us know what we really want to know.
To solve, understand, or explain real-world problems you actually have to know something about them — logic, pure mathematics, data simulations or deductive axiomatics don’t take you very far. Most econometrics and economic theories/models are splendid logic machines. But applying them to the real world is a totally hopeless undertaking! The assumptions one has to make in order to successfully apply these deductive-axiomatic theories/models/machines are devastatingly restrictive and mostly empirically untestable — and hence make their real-world scope ridiculously narrow. To fruitfully analyse real-world phenomena with models and theories, you cannot build on assumptions known to be patently and ridiculously absurd. No matter how much you would like the world to consist entirely of heavy balls, the world is not like that. The world also has its fair share of feathers and plastic bags.
The problem articulated by Cartwright is that most of the ‘idealizations’ we find in mainstream economic models are not ‘core’ assumptions, but rather structural ‘auxiliary’ assumptions. Without those supplementary assumptions, the core assumptions deliver next to nothing of interest. So to come up with interesting conclusions you have to rely heavily on those other — ‘structural’ — assumptions.
Whenever model-based causal claims are made, experimentalists quickly find that these claims do not hold under disturbances that were not written into the model. Our own stock example is from auction design – models say that open auctions are supposed to foster better information exchange leading to more efficient allocation. Do they do that in general? Or at least under any real world conditions that we actually know about? Maybe. But we know that introducing the smallest unmodelled detail into the setup, for instance complementarities between different items for sale, unleashes a cascade of interactive effects. Careful mechanism designers do not trust models in the way they would trust genuine Galilean thought experiments. Nor should they.
In physics, we have theories and centuries of experience and experiments that show how gravity makes bodies move. In economics, we know there is nothing equivalent. So instead mainstream economists necessarily have to load their theories and models with sets of auxiliary structural assumptions to get any results at all in their models.
So why do mainstream economists keep on pursuing this modelling project?
Testing causal explanations in economics
26 May, 2021 at 11:14 | Posted in Economics | Comments Off on Testing causal explanations in economics
Third, explanations fail by question (3.1) [“are the factors cited as possible causes of an event in fact aspects of the situation in which that event occurred?”] where the factors invoked as possible causes are idealisations. No doubt this claim will be considered contentious by some economists, accustomed as they are to explanations based on such dramatic assumptions as rational expectations, single-agent ‘economies’, and two-commodity ‘worlds’. The issue here turns on the distinction between abstraction (passing over or omitting to mention aspects of the causal history) and idealisation (invoking entities that exist only in the realm of ideas, such as most limit types and what Maki (1992) calls ‘theoretical isolations’). This distinction cannot be pursued here, but the general idea is that although every explanation involves abstraction insofar as we can never provide a complete list of the causes of any event, no genuine attempt at causal explanation can invoke as causes theoretical entities that have no existence other than in the minds and discourse of scientific investigators. For such entities cannot be aspects of real economic situations and are therefore ineligible as candidate causes. Explanations that invoke such entities therefore either fail, if offered as causal explanations in the sense I have described, or should be thought of as something other than causal explanations.
When it comes to modelling, yours truly does see the point often emphatically made for simplicity among economists and econometricians — but only as long as it doesn’t impinge on our truth-seeking. ‘Simple’ macroeconom(etr)ic models may be an informative heuristic tool for research. But if practitioners of modern macroeconom(etr)ics do not investigate and make an effort to provide a justification for the credibility of the simplicity-assumptions on which they erect their buildings, they will not fulfil their tasks. Maintaining that economics is a science in the ‘true knowledge’ business, yours truly remains a skeptic of the pretences and aspirations of ‘simple’ macroeconom(etr)ic models and theories building on unwarranted idealisations. So far, I can’t really see that e.g. microfounded macromodels have yielded very much in terms of realistic and relevant economic knowledge.
All empirical sciences use simplifying or unrealistic assumptions (abstractions) in their modelling activities. That is not the issue – as long as the assumptions made are not unrealistic in the wrong way or for the wrong reasons.
But models do not only face theory. They also have to look to the world. Being able to model a ‘credible world,’ a world that somehow could be considered real or similar to the real world, is not the same as investigating the real world. Even though all theories are false, since they simplify, they may still possibly serve our pursuit of truth. But then they cannot be unrealistic or false in just any way. The falsehood or unrealism has to be qualified.
Explanation, understanding and prediction of real-world phenomena, relations and mechanisms therefore cannot be grounded on simpliciter assuming the invoked isolations to be warranted. If we cannot show that the mechanisms or causes we isolate and handle in our models are stable — in the sense that when we export them from our models to our target systems they do not change from one situation to another — then they only hold under ceteris paribus conditions and are a fortiori of limited value for our understanding, explanation and prediction of our real-world target systems.
The obvious ontological shortcoming of a basically epistemic — rather than ontological — approach is that the use of idealisations tout court does not guarantee that the correspondence between model and target is interesting, relevant, revealing or somehow adequate in terms of mechanisms, causal powers, capacities or tendencies. No matter how many convoluted refinements of concepts are made in the model, if the ‘simplifying’ idealisations made do not result in models similar to reality in the appropriate respects (such as structure, isomorphism, etc.), the surrogate system becomes a substitute system that does not bridge to the world but rather misses its target.
Constructing macroeconomic models somehow seen as “successively approximating” macroeconomic reality is a rather unimpressive attempt at legitimising the use of fictitious idealisations for reasons that have more to do with model tractability than with a genuine interest in understanding and explaining features of real economies. Many of the model assumptions standardly made in mainstream macroeconomics are restrictive rather than harmless, and cannot, a fortiori, in any sensible sense be considered approximations at all.
If you — to ‘save’ your theory or model — have to invoke things that do not exist, well, then your theory or model is probably not adequate to deliver the causal explanations searched for.
Functional finance and Ricardian equivalence
25 May, 2021 at 08:30 | Posted in Economics | 6 Comments

According to Abba Lerner, the purpose of public debt is “to achieve a rate of interest which results in the most desirable level of investment.” He also maintained that an application of Functional Finance will have a tendency to balance the budget in the long run:
Finally, there is no reason for assuming that, as a result of the continued application of Functional Finance to maintain full employment, the government must always be borrowing more money and increasing the national debt. There are a number of reasons for this.
First, full employment can be maintained by printing the money needed for it, and this does not increase the debt at all. It is probably advisable, however, to allow debt and money to increase together in a certain balance, as long as one or the other has to increase.
Second, since one of the greatest deterrents to private investment is the fear that the depression will come before the investment has paid for itself, the guarantee of permanent full employment will make private investment much more attractive, once investors have gotten over their suspicion of the new procedure. The greater private investment will diminish the need for deficit spending.
Third, as the national debt increases, and with it the sum of private wealth, there will be an increasing yield from taxes on higher incomes and inheritances, even if the tax rates are unchanged. These higher tax payments do not represent reductions of spending by the taxpayers. Therefore the government does not have to use these proceeds to maintain the requisite rate of spending, and can devote them to paying the interest on the national debt.
Fourth, as the national debt increases it acts as a self-equilibrating force, gradually diminishing the further need for its growth and finally reaching an equilibrium level where its tendency to grow comes completely to an end. The greater the national debt the greater is the quantity of private wealth. The reason for this is simply that for every dollar of debt owed by the government there is a private creditor who owns the government obligations (possibly through a corporation in which he has shares), and who regards these obligations as part of his private fortune. The greater the private fortunes the less is the incentive to add to them by saving out of current income …
Fifth, if for any reason the government does not wish to see private property grow too much … it can check this by taxing the rich instead of borrowing from them, in its program of financing government spending to maintain full employment. The rich will not reduce their spending significantly, and thus the effects on the economy, apart from the smaller debt, will be the same as if the money had been borrowed from them.
Even if most of today’s mainstream economists do not understand Lerner, there once was one who certainly did:
I recently read an interesting article on deficit budgeting … His argument is impeccable.
John Maynard Keynes CW XXVII:320
According to the Ricardian equivalence hypothesis the public sector basically finances its expenditures through taxes or by issuing bonds, and bonds must sooner or later be repaid by raising taxes in the future.
If the public sector runs extra spending through deficits, taxpayers will, according to the hypothesis, anticipate that they will have to pay higher taxes in the future — and therefore increase their savings and reduce their current consumption to be able to do so. The consequence is that aggregate demand will be no different from what it would be if taxes were raised today.
Robert Barro attempted to give the proposition a firm theoretical foundation in the 1970s.
So let us get the facts straight from the horse’s mouth.
Describing the Ricardian Equivalence in 1989 Barro writes (emphasis added):
Suppose now that households’ demands for goods depend on the expected present value of taxes—that is, each household subtracts its share of this present value from the expected present value of income to determine a net wealth position. Then fiscal policy would affect aggregate consumer demand only if it altered the expected present value of taxes. But the preceding argument was that the present value of taxes would not change as long as the present value of spending did not change. Therefore, the substitution of a budget deficit for current taxes (or any other rearrangement of the timing of taxes) has no impact on the aggregate demand for goods. In this sense, budget deficits and taxation have equivalent effects on the economy — hence the term, “Ricardian equivalence theorem.” To put the equivalence result another way, a decrease in the government’s saving (that is, a current budget deficit) leads to an offsetting increase in desired private saving, and hence to no change in desired national saving.
Since desired national saving does not change, the real interest rate does not have to rise in a closed economy to maintain balance between desired national saving and investment demand. Hence, there is no effect on investment, and no burden of the public debt …
Ricardian equivalence basically means that financing government expenditures through taxes or debt is equivalent, since debt financing must sooner or later be repaid with interest, and agents — equipped with rational expectations — will increase their savings just enough to be able to pay the higher taxes in the future, thus leaving total expenditure unchanged.
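The underlying logic is a simple two-period budget-constraint exercise. In this stylised sketch, y₁ and y₂ denote the household’s income in the two periods and r the interest rate: a tax cut of ΔT today, financed by borrowing, must be matched by an extra tax of (1 + r)ΔT tomorrow, leaving the present value of the household’s lifetime resources, and hence (on the theory’s assumptions) its consumption, exactly unchanged:

```latex
(y_1 + \Delta T) + \frac{y_2 - (1+r)\,\Delta T}{1+r} \;=\; y_1 + \frac{y_2}{1+r}
```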
There is, of course, no reason for us to believe in that fairy-tale. Ricardo himself — mirabile dictu — didn’t believe in Ricardian equivalence. In “Essay on the Funding System” (1820) he wrote:
But the people who paid the taxes never so estimate them, and therefore do not manage their private affairs accordingly. We are too apt to think that the war is burdensome only in proportion to what we are at the moment called to pay for it in taxes, without reflecting on the probable duration of such taxes. It would be difficult to convince a man possessed of £20,000, or any other sum, that a perpetual payment of £50 per annum was equally burdensome with a single tax of £1000.
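Ricardo’s figures amount to a present-value calculation at the 5 per cent interest rate implicit in his example: a perpetual payment of £50 a year, discounted at 5 per cent, is worth exactly the £1,000 lump-sum tax:

```latex
PV \;=\; \sum_{t=1}^{\infty}\frac{50}{(1.05)^{t}} \;=\; \frac{50}{0.05} \;=\; 1000
```

His point is precisely that flesh-and-blood taxpayers “never so estimate” taxes: the equivalence holds on paper, not in how people actually manage their private affairs.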
Science manufacturing ignorance
25 May, 2021 at 08:16 | Posted in Theory of Science & Methodology | Comments Off on Science manufacturing ignorance