Proud father (personal)

23 Mar, 2023 at 20:42 | Posted in Varia | 2 Comments

My son David Syll was today elected a member of the Swedish Bar Association.

I know your mother Kristina and grandpa Erik — if they had still been with us — would have been so immensely proud of you for keeping up the family tradition.

Congratulations David!

 

Sadeness

22 Mar, 2023 at 15:36 | Posted in Economics | Leave a comment

[embedded video]

Assumption uncertainty

21 Mar, 2023 at 14:31 | Posted in Statistics & Econometrics | Leave a comment

An ongoing concern is that excessive focus on formal modeling and statistics can lead to neglect of practical issues and to overconfidence in formal results … Analysis interpretation depends on contextual judgments about how reality is to be mapped onto the model, and how the formal analysis results are to be mapped back into reality. But overconfidence in formal outputs is only to be expected when much labor has gone into deductive reasoning. First, there is a need to feel the labor was justified, and one way to do so is to believe the formal deduction produced important conclusions. Second, there seems to be a pervasive human aversion to uncertainty, and one way to reduce feelings of uncertainty is to invest faith in deduction as a sufficient guide to truth. Unfortunately, such faith is as logically unjustified as any religious creed, since a deduction produces certainty about the real world only when its assumptions about the real world are certain …

Unfortunately, assumption uncertainty reduces the status of deductions and statistical computations to exercises in hypothetical reasoning – they provide best-case scenarios of what we could infer from specific data (which are assumed to have only specific, known problems). Even more unfortunate, however, is that this exercise is deceptive to the extent it ignores or misrepresents available information, and makes hidden assumptions that are unsupported by data …

Despite assumption uncertainties, modelers often express only the uncertainties derived within their modeling assumptions, sometimes to disastrous consequences. Econometrics supplies dramatic cautionary examples in which complex modeling has failed miserably in important applications …

Much time should be spent explaining the full details of what statistical models and algorithms actually assume, emphasizing the extremely hypothetical nature of their outputs relative to a complete (and thus nonidentified) causal model for the data-generating mechanisms. Teaching should especially emphasize how formal ‘‘causal inferences’’ are being driven by the assumptions of randomized (‘‘ignorable’’) system inputs and random observational selection that justify the ‘‘causal’’ label.

Sander Greenland

Yes, indeed, complex modeling when applying statistics theory fails miserably over and over again. One reason why it does — prominent in econometrics — is that the error term in the regression models standardly used is thought of as representing the effect of the variables that were omitted from the models. The error term is somehow thought to be a ‘cover-all’ term representing omitted content in the model and necessary to include to ‘save’ the assumed deterministic relation between the other random variables included in the model. Error terms are usually assumed to be orthogonal (uncorrelated) to the explanatory variables. But since they are unobservable, they are also impossible to empirically test. And without justification of the orthogonality assumption, there is as a rule nothing to ensure identifiability:

With enough math, an author can be confident that most readers will never figure out where a FWUTV (facts with unknown truth value) is buried. A discussant or referee cannot say that an identification assumption is not credible if they cannot figure out what it is and are too embarrassed to ask.

Distributional assumptions about error terms are a good place to bury things because hardly anyone pays attention to them. Moreover, if a critic does see that this is the identifying assumption, how can she win an argument about the true expected value of the level of aether? If the author can make up an imaginary variable, “because I say so” seems like a pretty convincing answer to any question about its properties.

Paul Romer
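
Greenland’s and Romer’s points about buried assumptions can be made concrete with a minimal simulation sketch (my own illustration, with made-up numbers, not taken from either author): when an omitted variable that is correlated with the regressor is swept into the error term, the orthogonality assumption fails and OLS no longer recovers the ‘structural’ effect — and nothing in the observed data alone can warn us.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
beta_true = 1.0                       # true structural effect of X on Y

z = rng.normal(size=n)                # omitted variable, hidden in the 'error term'
x = 0.8 * z + rng.normal(size=n)      # X is correlated with Z -> orthogonality fails
y = beta_true * x + 2.0 * z + rng.normal(size=n)

# OLS slope: cov(X, Y) / var(X)
beta_ols = np.cov(x, y)[0, 1] / np.var(x, ddof=1)
print(f"true beta: {beta_true:.2f}   OLS estimate: {beta_ols:.2f}")
# The estimate is biased (here roughly 2 instead of 1), and no test on the
# observed data can tell us whether the orthogonality assumption holds.
```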

On the benefits — and dangers — of reading

20 Mar, 2023 at 18:11 | Posted in Varia | Leave a comment

As long as reading is for us the instigator whose magic keys have opened the door to those dwelling-places deep within us that we would not have known how to enter, its role in our lives is salutary. It becomes dangerous, on the other hand, when, instead of awakening us to the personal life of the mind, reading tends to take its place, when the truth no longer appears to us as an ideal which we can realize only by the intimate progress of our own thought and the efforts of our heart, but as something material, deposited between the leaves of books like a honey fully prepared by others and which we need only take the trouble to reach down from the shelves of libraries and then sample passively in a perfect repose of mind and body.

Using counterfactuals in causal inference

20 Mar, 2023 at 14:01 | Posted in Economics | 3 Comments

I have argued that there are four major problems in the way of using the counterfactual account for causal inference. Of the four, I argued that the fourth — the problem of indeterminacy — is likely to be the most damaging: To the extent that some of the causal principles that connect counterfactual antecedent and consequent are genuinely indeterministic, the counterfactual will be of the “might have been” and not the “would have been” kind …

The causal principles describing a situation of interest must be weak enough — that is, contain genuinely indeterministic relations so that the counterfactual antecedent can be implemented … At the same time, the principles must be strong enough — that is, contain enough deterministic relations so that the consequent follows from the antecedent together with the principles … What is required is enough indeterministic causal relations so that the antecedent can be implemented and enough deterministic relations so that the consequent (or its negation) follows.

Evidently, this is a tall order: Why would deterministic and indeterministic causal principles be distributed in just this way? Wouldn’t it seem likely that to the extent we are willing to believe that the antecedent event was contingent, we are also willing to believe that the outcome remained contingent given the antecedent event? …

A final argument in favor of counterfactuals even in the context of establishing causation is that there are no alternatives that are unequivocally superior. The main alternative to the counterfactual account is process tracking. But process tracking is itself not without problems … For all its difficulties, counterfactual speculation may sometimes be the only way to make causal inferences about singular events.

Julian Reiss

On the limited value of randomization

18 Mar, 2023 at 19:50 | Posted in Statistics & Econometrics | Leave a comment

In Social Science & Medicine (December 2017), Angus Deaton & Nancy Cartwright argue that Randomized Controlled Trials (RCTs) do not have any warranted special status. They are, simply, far from being the ‘gold standard’ they are usually portrayed as:

Contrary to frequent claims in the applied literature, randomization does not equalize everything other than the treatment in the treatment and control groups, it does not automatically deliver a precise estimate of the average treatment effect (ATE), and it does not relieve us of the need to think about (observed or unobserved) covariates. Finding out whether an estimate was generated by chance is more difficult than commonly believed. At best, an RCT yields an unbiased estimate, but this property is of limited practical value. Even then, estimates apply only to the sample selected for the trial, often no more than a convenience sample, and justification is required to extend the results to other groups, including any population to which the trial sample belongs, or to any individual, including an individual in the trial. Demanding ‘external validity’ is unhelpful because it expects too much of an RCT while undervaluing its potential contribution. RCTs do indeed require minimal assumptions and can operate with little prior knowledge. This is an advantage when persuading distrustful audiences, but it is a disadvantage for cumulative scientific progress, where prior knowledge should be built upon, not discarded. RCTs can play a role in building scientific knowledge and useful predictions but they can only do so as part of a cumulative program, combining with other methods, including conceptual and theoretical development, to discover not ‘what works’, but ‘why things work’.

In a comment on Deaton & Cartwright, statistician Stephen Senn argues that on several issues concerning randomization Deaton & Cartwright “simply confuse the issue,” that their views are “simply misleading and unhelpful” and that they make “irrelevant” simulations:

My view is that randomisation should not be used as an excuse for ignoring what is known and observed but that it does deal validly with hidden confounders. It does not do this by delivering answers that are guaranteed to be correct; nothing can deliver that. It delivers answers about which valid probability statements can be made and, in an imperfect world, this has to be good enough. Another way I sometimes put it is like this: show me how you will analyse something and I will tell you what allocations are exchangeable. If you refuse to choose one at random I will say, “why? Do you have some magical thinking you’d like to share?”

Contrary to Senn, Andrew Gelman shares Deaton and Cartwright’s view that randomized trials are often overrated:

There is a strange form of reasoning we often see in science, which is the idea that a chain of reasoning is as strong as its strongest link. The social science and medical research literature is full of papers in which a randomized experiment is performed, a statistically significant comparison is found, and then story time begins, and continues, and continues—as if the rigor from the randomized experiment somehow suffuses through the entire analysis …

One way to get a sense of the limitations of controlled trials is to consider the conditions under which they can yield meaningful, repeatable inferences. The measurement needs to be relevant to the question being asked; missing data must be appropriately modeled; any relevant variables that differ between the sample and population must be included as potential treatment interactions; and the underlying effect should be large. It is difficult to expect these conditions to be satisfied without good substantive understanding. As Deaton and Cartwright put it, “when little prior knowledge is available, no method is likely to yield well-supported conclusions.” Much of the literature in statistics, econometrics, and epidemiology on causal identification misses this point, by focusing on the procedures of scientific investigation—in particular, tools such as randomization and p-values which are intended to enforce rigor—without recognizing that rigor is empty without something to be rigorous about.

Many social scientists nowadays maintain that ‘imaginative empirical methods’ — such as natural experiments, field experiments, lab experiments, RCTs — can help us answer questions concerning the external validity of models used in the social sciences. In their view, such methods are more or less tests of ‘an underlying model’ that enable them to make the right selection from the ever-expanding ‘collection of potentially applicable models.’ When looked at carefully, however, there are in yours truly’s view few real reasons to share this optimism.

Many ‘experimentalists’ claim that it is easy to replicate experiments under different conditions and therefore a fortiori easy to test the robustness of experimental results. But is it really that easy? Population selection is almost never simple. Had the problem of external validity only been about inference from sample to population, this would be no critical problem. But the really interesting inferences are those we try to make from specific labs/experiments/fields to the specific real-world situations/institutions/structures that we are interested in understanding or (causally) explaining. And then the population problem is more difficult to tackle.

In randomized trials the researchers try to find out the causal effects that different variables of interest may have by changing circumstances randomly — a procedure somewhat (‘on average’) equivalent to the usual ceteris paribus assumption.

Besides the fact that ‘on average’ is not always ‘good enough,’ it amounts to nothing but hand waving to simpliciter assume, without argumentation, that it is tenable to treat social agents and relations as homogeneous and interchangeable entities.

Randomization is used to basically allow the econometrician to treat the population as consisting of ‘exchangeable’ and homogeneous groups (‘treatment’ and ‘control’). The regression models one arrives at by using randomized trials tell us the average effect that variations in variable X have on the outcome variable Y, without having to explicitly control for effects of other explanatory variables R, S, T, etc., etc. Everything is assumed to be essentially equal except the values taken by variable X. But as noted by Jerome Cornfield, even if one of the main functions of randomization is to generate a sample space, there are

reasons for questioning the basic role of the sample space, i.e., of variations from sample to sample, in statistical theory. In practice, certain unusual samples would ordinarily be modified, adjusted or entirely discarded, if they in fact were obtained, even though they are part of the basic description of sampling variation. Savage reports that Fisher, when asked what he would do with a randomly selected Latin Square that turned out to be a Knut Vik Square, replied that “he thought he would draw again and that, ideally, a theory explicitly excluding regular squares should be developed.” But this option is not available in clinical trials and undesired baseline imbalances between treated and control groups can occur. There is often no alternative to reweighting or otherwise adjusting for these imbalances.

In a usual regression context, one would apply an ordinary least squares estimator (OLS) in trying to get an unbiased and consistent estimate:

Y = α + βX + ε,

where α is a constant intercept, β a constant ‘structural’ causal effect and ε an error term.

The problem here is that although we may get an estimate of the ‘true’ average causal effect, this may ‘mask’ important heterogeneous effects of a causal nature. Although we get the right answer that the average causal effect is 0, those who are ‘treated’ (X=1) may have causal effects equal to -100 and those ‘not treated’ (X=0) may have causal effects equal to 100. When contemplating whether or not to undergo treatment, most people would probably be interested in knowing about this underlying heterogeneity and would not consider the OLS average effect particularly enlightening.
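
A small simulation sketch (my own numbers, merely echoing the ±100 example above) shows how an unbiased estimate of the average effect can be the ‘right answer’ while saying nothing about the underlying heterogeneity:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# individual causal effects: +100 for half the population, -100 for the other half
effect = np.where(rng.random(n) < 0.5, 100.0, -100.0)
x = rng.integers(0, 2, size=n)        # randomized 'treatment' dummy
y = rng.normal(size=n) + effect * x   # observed outcome

# with a single dummy regressor, the OLS slope is just the difference in group means
ate_hat = y[x == 1].mean() - y[x == 0].mean()
print(f"estimated average causal effect: {ate_hat:.2f}")                    # close to 0
print(f"share with a negative individual effect: {(effect < 0).mean():.2f}")  # about one half
```

The average effect is estimated without bias and is approximately zero, yet roughly half the population would be seriously harmed by the treatment — exactly the kind of information the ‘on average’ answer hides.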

Limiting model assumptions in science always have to be closely examined. If we are to show that the mechanisms or causes we isolate and handle in our models are stable — in the sense that they do not change when we ‘export’ them to our ‘target systems’ — we have to demonstrate that they do not hold only under ceteris paribus conditions, since then they are a fortiori of limited value for our understanding, explanation or prediction of real-world systems.

Most ‘randomistas’ underestimate the heterogeneity problem. It does not just turn up as an external validity problem when trying to ‘export’ regression results to different times or different target populations. It is also often an internal problem to the millions of regression estimates that are produced every year.

Just like econometrics, randomization promises more than it can deliver, basically because it requires assumptions that in practice are not possible to maintain. And just like econometrics, randomization is basically a deductive method. Given the assumptions, these methods deliver deductive inferences. The problem, of course, is that we will never completely know when the assumptions are right. And although randomization may contribute to controlling for confounding, it does not guarantee it, since genuine randomness presupposes infinite experimentation and we know all real experimentation is finite. And even if randomization may help to establish average causal effects, it says nothing of individual effects unless homogeneity is added to the list of assumptions. Causal evidence generated by randomization procedures may be valid in ‘closed’ models, but what we usually are interested in is causal evidence in the real-world target system we happen to live in.

Some statisticians and data scientists think that algorithmic formalisms somehow give them access to causality. That is, however, simply not true. Assuming ‘convenient’ things like ‘faithfulness,’ ‘exchangeability,’ or stability, is not to give proof. It’s to assume what has to be proven. Deductive-axiomatic methods used in statistics do not produce evidence for causal inferences. The real causality we are searching for is the one existing in the real world around us. If there is no warranted connection between axiomatically derived theorems and the real world, well, then we haven’t really obtained the causation we are looking for.

As social scientists, we have to confront the all-important question of how to handle uncertainty and randomness. Should we define randomness with probability? If we do, we have to accept that to speak of randomness we also have to presuppose the existence of nomological probability machines, since probabilities cannot be spoken of – and actually, to be strict, do not at all exist – without specifying such system-contexts. Accepting a domain of probability theory and sample space of infinite populations also implies that judgments are made on the basis of observations that are actually never made!

Infinitely repeated trials or samplings never take place in the real world. So that cannot be a sound inductive basis for science with aspirations of explaining real-world socio-economic processes, structures or events. It’s not tenable.

And as if this wasn’t enough, one could also seriously wonder what kind of ‘populations’ many statistical models ultimately are based on. Why should we as social scientists — and not as pure mathematicians working with formal-axiomatic systems without the urge to confront our models with real target systems — unquestioningly accept models based on concepts like the ‘infinite super populations’ used in e.g. the ‘potential outcome’ framework that has become so popular lately in social sciences?

Modelling assumptions made in statistics are more often than not made for reasons of mathematical tractability rather than verisimilitude. That is unfortunately also a reason why the methodological ‘rigour’ encountered when engaging with statistical research is often deceptive. The models constructed may seem technically advanced and very ‘sophisticated,’ but that’s usually only because the problems here discussed have been swept under the carpet. Assuming that our data are generated by ‘coin flips’ in an imaginary ‘superpopulation’ only means that we get answers to questions that we are not asking. The inferences made based on imaginary ‘superpopulations,’ well, they too are nothing but imaginary.

‘Ideally controlled experiments’ tell us with certainty what causes what effects — but only given the right ‘closures.’ Making appropriate extrapolations from (ideal, accidental, natural or quasi) experiments to different settings, populations or target systems, is not easy. ‘It works there’ is no evidence for ‘it will work here’. Causes deduced in an experimental setting still have to show that they come with an export warrant to the target population/system. The causal background assumptions made have to be justified, and without licenses to export, the value of ‘rigorous’ and ‘precise’ methods — and ‘on-average-knowledge’ — is often despairingly small.

In our days, serious arguments have been made from data. Beautiful, delicate theorems have been proved, although the connection with data analysis often remains to be established. And an enormous amount of fiction has been produced, masquerading as rigorous science …

Indeed, far-reaching claims have been made for the superiority of a quantitative template that depends on modeling — by those who manage to ignore the far-reaching assumptions behind the models. However, the assumptions often turn out to be unsupported by data. If so, the rigor of advanced quantitative methods is a matter of appearance rather than substance …

David A. Freedman, Statistical Models and Causal Inference

On fighting inflation

18 Mar, 2023 at 16:51 | Posted in Economics | 3 Comments

[embedded video]

Absolutely lovely! Comedian and television host Jon Stewart turns out to know much more about real-world economics than mainstream Harvard economist Larry Summers. Don’t know why, but watching this interview/debate makes yours truly come to think about a famous H. C. Andersen tale …


Greatest intro in pop history

18 Mar, 2023 at 16:12 | Posted in Varia | Leave a comment

[embedded video]

Norman Whitfield’s instrumental arrangement for this 1972 ‘psychedelic soul’ hit is still unparalleled.

Physics envy — a sure way to make economics useless

18 Mar, 2023 at 12:05 | Posted in Economics | 2 Comments

In the 1930s, Lionel Robbins laid down the basic commandments of the discipline when he said that the premises on which economics was founded followed from ‘deduction from simple assumptions reflecting very elementary facts of general experience’, and as such were ‘as universal as the laws of mathematics or mechanics, and as little capable of “suspension”.’

Ah yes, general experience. What did Albert Einstein allegedly say about common sense? A funny thing happened on its way to becoming a science: economics seldom tested its premises empirically. Only in recent years has there been serious investigation of its core assumptions and, all too often, they’ve been found wanting.

Unlike in physics, there are no universal and immutable laws of economics. You can’t will gravity out of existence. But as the recurrence of speculative bubbles shows, you can unleash ‘animal spirits’ so that human behaviour and prices themselves defy economic gravity …

Given this willful blindness, the current reaction against economists is understandable. In response, a ‘data revolution’ has prompted many economists to do more grunt work with their data, while engaging in public debates about the practicality of their work. Less science, more social. That is a recipe for an economics that might yet redeem the experts.

John Rapley

Physics envy still lingers on among many mainstream economists. It is also one of the prime reasons behind the pseudo-scientific character of modern mainstream economics. The use of mathematics in mainstream economics — in contradistinction to physics — has not given us much in terms of accurate explanations and successful predictions of real-world phenomena.

Turning economics into a ‘pseudo-natural science’ is — as Keynes made very clear in a letter to Roy Harrod in 1938 — something that must be firmly ‘repelled.’

Alone together

17 Mar, 2023 at 09:48 | Posted in Varia | Leave a comment

[embedded video]

Does randomization control for ‘lack of balance’?

16 Mar, 2023 at 16:40 | Posted in Statistics & Econometrics | 2 Comments

Mike Clarke, the Director of the Cochrane Centre in the UK, for example, states on the Centre’s Web site: ‘In a randomized trial, the only difference between the two groups being compared is that of most interest: the intervention under investigation’.

This seems clearly to constitute a categorical assertion that by randomizing, all other factors — both known and unknown — are equalized between the experimental and control groups; hence the only remaining difference is exactly that one group has been given the treatment under test, while the other has been given either a placebo or conventional therapy; and hence any observed difference in outcome between the two groups in a randomized trial (but only in a randomized trial) must be the effect of the treatment under test.

Clarke’s claim is repeated many times elsewhere and is widely believed. It is admirably clear and sharp, but it is clearly unsustainable … Clearly the claim taken literally is quite trivially false: the experimental group contains Mrs Brown and not Mr Smith, whereas the control group contains Mr Smith and not Mrs Brown, etc. Some restriction on the range of differences being considered is obviously implicit here; and presumably the real claim is something like that the two groups have the same means and distributions of all the [causally?] relevant factors. Although this sounds like a meaningful claim, I am not sure whether it would remain so under analysis … And certainly, even with respect to a given (finite) list of potentially relevant factors, no one can really believe that it automatically holds in the case of any particular randomized division of the subjects involved in the study. Although many commentators often seem to make the claim … no one seriously thinking about the issues can hold that randomization is a sufficient condition for there to be no difference between the two groups that may turn out to be relevant …

In sum, despite what is often said and written, no one can seriously believe that having randomized is a sufficient condition for a trial result to be reasonably supposed to reflect the true effect of some treatment. Is randomizing a necessary condition for this? That is, is it true that we cannot have real evidence that a treatment is genuinely effective unless it has been validated in a properly randomized trial? Again, some people in medicine sometimes talk as if this were the case, but again no one can seriously believe it. Indeed, as pointed out earlier, modern medicine would be in a terrible state if it were true. As already noted, the overwhelming majority of all treatments regarded as unambiguously effective by modern medicine today — from aspirin for mild headache through diuretics in heart failure and on to many surgical procedures — were never (and now, let us hope, never will be) ‘validated’ in an RCT.

John Worrall

For more on the question of ‘balance’ in randomized experiments, this collection of papers in Social Science & Medicine gives many valuable insights.
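
To see how fragile ‘balance’ is in any single trial, here is a minimal simulation sketch (illustrative numbers of my own choosing, not drawn from the papers cited above): with 50 subjects and a fair coin, a sizeable fraction of random allocations leave a binary prognostic covariate imbalanced between the two arms by more than ten percentage points.

```python
import numpy as np

rng = np.random.default_rng(7)
n_subjects, n_trials = 50, 10_000

covariate = rng.integers(0, 2, size=(n_trials, n_subjects))   # e.g. a risk factor, prevalence ~50%
assignment = rng.integers(0, 2, size=(n_trials, n_subjects))  # 0 = control, 1 = treated

treated_n = np.maximum(assignment.sum(axis=1), 1)
control_n = np.maximum((1 - assignment).sum(axis=1), 1)
treated_prev = (covariate * assignment).sum(axis=1) / treated_n
control_prev = (covariate * (1 - assignment)).sum(axis=1) / control_n
imbalance = np.abs(treated_prev - control_prev)

print(f"allocations with >10 pp covariate imbalance: {(imbalance > 0.10).mean():.0%}")
# Randomization balances covariates only 'on average' over hypothetical repetitions,
# not in the single allocation a trial actually gets.
```

The point of the sketch is simply that ‘equalized on average over repeated randomizations’ is not the same thing as ‘equalized in the one trial we actually run.’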

Wisdom

15 Mar, 2023 at 16:23 | Posted in Varia | 1 Comment

[image]

On the method of ‘successive approximations’

15 Mar, 2023 at 11:12 | Posted in Economics | 1 Comment

In The World in the Model, Mary Morgan characterizes the modelling tradition of economics as one concerned with “thin men acting in small worlds” and writes:

Strangely perhaps, the most obvious element in the inference gap for models … lies in the validity of any inference between two such different media – forward from the real world to the artificial world of the mathematical model and back again from the model experiment to the real material of the economic world. The model is at most a parallel world. The parallel quality does not seem to bother economists. But materials do matter: it matters that economic models are only representations of things in the economy, not the things themselves.

Now, a salient feature of modern mainstream economics is the idea of science advancing through the use of ‘successive approximations.’ Is this really a feasible methodology?

Allan Gibbard & Hal Varian — as most other mainstream economists — obviously think so, arguing that “in many cases, an economic phenomenon will initially be represented by a caricature, and the representation will then gradually evolve into an econometrically estimable model.”

Yours truly thinks not.

Most models in science are representations of something else. Models ‘stand for’ or ‘depict’ specific parts of a ‘target system’ (usually the real world). All theories and models have to use sign vehicles to convey some kind of content that may be used for saying something about the target system. But purpose-built assumptions made solely to secure a way of reaching deductively validated results in mathematical models – like ‘rational expectations’ or ‘representative actors’ — are of little value if they cannot be validated outside of the model.

All empirical sciences use simplifying or unrealistic assumptions in their modelling activities. That is not the issue — as long as the assumptions made are not unrealistic in the wrong way or for the wrong reasons.

The obvious ontological shortcoming of a basically epistemic — rather than ontological — approach such as ‘successive approximations’ is that ‘similarity’ or ‘resemblance’ tout court does not guarantee that the correspondence between model and target is interesting, relevant, revealing or somehow adequate in terms of mechanisms, causal powers, capacities or tendencies. A good model is a model that works and helps us explain and understand the problems we want to research. No matter how many convoluted refinements of concepts are made in the model, if the ‘successive approximations’ do not result in models similar to reality in the appropriate respects (such as structure, isomorphism, etc.), the surrogate system becomes a substitute system that does not bridge to the world but rather misses its target and turns our understanding of why things work the way they do in the real world into an inexplicable mystery. Dramatically simplified and distorted models built on assumptions known to be significantly wrong or false do not deliver much in terms of genuine explanation.

To this may also be added, as noted  by Kevin Hoover, that when mainstream economists talk about their models as approximations,

too often the term is used even in cases for which we do not have a clear characterization of the target of approximation. Frequently, it is asserted that a model is a good one if it approximates the ultimate truth about the world. That ultimate truth, however, is not available to us as a standard against which to judge the approximation. The use of the term, then, is not so much wrong as empty and useless.

Yours truly has to conclude that constructing ‘minimal’ economic models — or using microfounded macroeconomic models as ‘stylized facts’ or ‘stylized pictures’ somehow ‘successively approximating’ macroeconomic reality — is a rather unimpressive attempt at legitimizing the use of fictitious idealizations for reasons that have more to do with model tractability than with a genuine interest in understanding and explaining features of real economies.

Many of the model assumptions standardly made in mainstream economics are harmfully restrictive and misrepresentative — rather than incomplete and harmless — and cannot in any meaningful sense be considered approximations at all. As May Brodbeck has it:

Model ships appear frequently in bottles; model boys in heaven only.

The insufficiency of validity

14 Mar, 2023 at 11:14 | Posted in Theory of Science & Methodology | 2 Comments

Mainstream economics is at its core in the story-telling business whereby economic theorists create make-believe analogue models of the target system – usually conceived as the real economic system. This modelling activity is considered useful and essential. Since fully-fledged experiments on a societal scale as a rule are prohibitively expensive, ethically indefensible or unmanageable, economic theorists have to substitute experimenting with something else. To understand and explain relations between different entities in the real economy the predominant strategy is to build models and make things happen in these “analogue-economy models” rather than engineering things happening in real economies.

Formalistic deductive “Glasperlenspiel” can be very impressive and seductive. But in the realm of science, it ought to be considered of little or no value to simply make claims about the model and lose sight of reality. As Julian Reiss writes:

There is a difference between having evidence for some hypothesis and having evidence for the hypothesis relevant for a given purpose. The difference is important because scientific methods tend to be good at addressing hypotheses of a certain kind and not others: scientific methods come with particular applications built into them … The advantage of mathematical modelling is that its method of deriving a result is that of mathematical proof: the conclusion is guaranteed to hold given the assumptions. However, the evidence generated in this way is valid only in abstract model worlds while we would like to evaluate hypotheses about what happens in economies in the real world … The upshot is that valid evidence does not seem to be enough. What we also need is to evaluate the relevance of the evidence in the context of a given purpose.

Proving things about thought-up worlds is not enough. To have valid evidence is not enough. A deductive argument is valid if it makes it impossible for the premises to be true and the conclusion to be false. Fine, but what we need in science is sound arguments — arguments that are both valid and whose premises are all actually true.

Theories and models being ‘coherent’ or ‘consistent’ with data does not make them success stories. We want models to somehow represent their real-world targets. This representation cannot be complete in most cases because of the complexity of the target systems. That kind of incompleteness is unavoidable. But it is a totally different thing when models misrepresent real-world targets. Aiming only for validity, without soundness, is setting the aspiration level of economics too low for developing a realist and relevant science.

Mainstream economics — a vending machine view

13 Mar, 2023 at 19:46 | Posted in Economics | 1 Comment

The theory is a vending machine: you feed it input in certain prescribed forms for the desired output; it gurgitates for a while; then it drops out the sought-for representation, plonk, on the tray, fully formed, as Athena from the brain of Zeus. This image of the relation of theory to the models we use to represent the world is hard to fit with what we know of how science works.

When applying deductivist thinking to economics, economists usually set up ‘as if’ models based on a set of tight axiomatic assumptions from which consistent and precise inferences are made. The beauty of this procedure is of course that if the axiomatic premises are true, the conclusions necessarily follow. The snag is that if the models are to be relevant, we also have to argue that their precision and rigour still hold when they are applied to real-world situations. They often don’t. When addressing real economies, the idealizations necessary for the deductivist machinery to work simply don’t hold.

So how should we evaluate the search for ever greater precision and the concomitant arsenal of mathematical and formalist models? To a large extent, the answer hinges on what we want our models to perform and how we basically understand the world.

For Keynes, the world in which we live is inherently uncertain and quantifiable probabilities are the exception rather than the rule. To every statement about it is attached a “weight of argument” that makes it impossible to reduce our beliefs and expectations to a one-dimensional stochastic probability distribution. If “God does not play dice” as Einstein maintained, Keynes would add “nor do people”. The world as we know it has limited scope for certainty and perfect knowledge. Its intrinsic and almost unlimited complexity and the interrelatedness of its organic parts prevent the possibility of treating it as constituted by “legal atoms” with discretely distinct, separable and stable causal relations. Our knowledge accordingly has to be of a rather fallible kind.

To search for precision and rigour in such a world is self-defeating, at least if precision and rigour are supposed to assure external validity. The only way to defend such an endeavour is to turn a blind eye to ontology and restrict oneself to proving things in closed model worlds. Why we should care about these and not ask questions of relevance is hard to see. We have to at least justify our disregard for the gap between the nature of the real world and our theories and models of it.

Rigour, coherence and consistency have to be defined relative to the entities to which they are supposed to apply. Too often they have been restricted to questions internal to the theory or model. But clearly, the nodal point has to concern external questions, such as how our theories and models relate to real-world structures and relations. Applicability rather than internal validity ought to be the arbiter of taste.

In social sciences — including economics — we, as a rule, have neither deductive structures nor terribly believable or warranted assumptions. The vending machine account of science is particularly ill-suited for dealing with these situations.
