The leap of generalization

24 Feb, 2021 at 16:45 | Posted in Statistics & Econometrics | Leave a comment

Statistician Andrew Gelman has an interesting blogpost up on what inference in science really means:

I like Don Rubin’s take on this, which is that if you want to go from association to causation, state very clearly what the assumptions are for this step to work. The clear statement of these assumptions can be helpful in moving forward …

Another way to say this is that all inference is about generalizing from sample to population, to predicting the outcomes of hypothetical interventions on new cases. You can’t escape the leap of generalization. Even a perfectly clean randomized experiment is typically of interest only to the extent that it generalizes to new people not included in the original study.

I agree — but that is also why we so often fail (even with the best of intentions) when it comes to making generalizations in the social sciences.

What strikes me again and again when reading the results of randomized experiments is that they really are very similar to theoretical models. They share the same basic problem — they are built on rather artificial conditions and struggle with the trade-off between internal and external validity. The more artificial the conditions, the greater the internal validity — but also the lower the external validity. The more we rig experiments/models to avoid ‘confounding factors,’ the less the conditions resemble the real ‘target system.’ The nodal issue is basically how scientists, using different isolation strategies in different ‘nomological machines,’ attempt to learn about causal relationships. I doubt the generalizability of the experiment strategy (randomized or not) because the probability is high that causal mechanisms differ across contexts, and that lack of homogeneity/stability/invariance doesn’t give us warranted export licenses to the ‘real’ societies we live in.

Evidence-based theories and policies are highly valued nowadays. Randomization is supposed to control best for bias from unknown confounders. The received opinion — shared by Rubin and Gelman — is therefore that evidence based on randomized experiments is the best evidence there is.

More and more economists have lately also come to advocate randomization as the principal method for ensuring valid causal inferences. Especially when it comes to questions of causality, randomization is nowadays considered some kind of ‘gold standard.’ Everything has to be evidence-based, and the evidence has to come from randomized experiments.

But like econometrics, randomization is basically a deductive method. Given the assumptions (manipulability, transitivity, Reichenbach probability principles, separability, additivity, linearity, etc.), these methods deliver deductive inferences. The problem, of course, is that we will never completely know when the assumptions are right. [And although randomization may contribute to controlling for confounding, it does not guarantee it, since genuine randomness presupposes infinite experimentation and we know all real experimentation is finite. And even if randomization may help to establish average causal effects, it says nothing of individual effects unless homogeneity is added to the list of assumptions.] Real target systems are seldom epistemically isomorphic to our axiomatic-deductive models/systems, and even if they were, we would still have to argue for the external validity of the conclusions reached from within these epistemically convenient models/systems. Causal evidence generated by randomization procedures may be valid in ‘closed’ models, but what we are usually interested in is causal evidence in the real target system we happen to live in.
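The point about average versus individual effects is easy to see in a minimal simulation sketch (all numbers here are illustrative, chosen by me, not taken from any study): even when randomization recovers the average treatment effect almost exactly, a sizeable minority of individuals can be harmed by the very treatment that is beneficial ‘on average.’

```python
import random
import statistics

random.seed(0)
n = 100_000

# Heterogeneous individual-level effects: mean +1, but with a large spread,
# so many units are actually harmed by treatment (homogeneity fails).
tau = [random.gauss(1.0, 2.0) for _ in range(n)]
y0 = [random.gauss(0.0, 1.0) for _ in range(n)]   # outcome without treatment
y1 = [a + b for a, b in zip(y0, tau)]              # outcome with treatment

# Randomized assignment: the difference in means recovers the *average* effect.
treat = [random.random() < 0.5 for _ in range(n)]
mean_t = statistics.fmean(y for y, t in zip(y1, treat) if t)
mean_c = statistics.fmean(y for y, t in zip(y0, treat) if not t)
ate_hat = mean_t - mean_c

# ... but the average says nothing about individuals:
share_harmed = sum(t < 0 for t in tau) / n

print(f"estimated average effect: {ate_hat:.2f}")
print(f"share with a negative individual effect: {share_harmed:.0%}")
```

Here the estimated average effect comes out close to +1, while roughly three in ten individuals have a negative effect — the experiment is ‘valid,’ yet tells us nothing about whom the treatment helps or harms.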

Many advocates of randomization want to have deductively automated answers to fundamental causal questions. But to apply ‘thin’ methods we have to have ‘thick’ background knowledge of what is going on in the real world, and not just in (ideally controlled) experiments. Conclusions can only be as certain as their premises — and that also goes for methods based on randomized experiments.

So yours truly agrees with Gelman that “all inference is about generalizing from sample to population.” But I don’t think randomized experiments — ideal or not — take us very far on that road. Randomized experiments in the social sciences are far from the ‘gold standard’ they are so often depicted as.

Agni Parthene

24 Feb, 2021 at 15:39 | Posted in Varia | Leave a comment

Frank Hahn on the value of equilibrium economics

22 Feb, 2021 at 10:16 | Posted in Economics | Leave a comment

It cannot be denied that there is something scandalous in the spectacle of so many people refining the analyses of economic states which they give no reason to suppose will ever, or have ever, come about. It probably is also dangerous. Equilibrium economics … is easily convertible into an apologia for existing economic arrangements and it is frequently so converted.

Frank Hahn

Keynes on the methodology of econometrics

21 Feb, 2021 at 20:39 | Posted in Statistics & Econometrics | Leave a comment

There is first of all the central question of methodology — the logic of applying the method of multiple correlation to unanalysed economic material, which we know to be non-homogeneous through time. If we are dealing with the action of numerically measurable, independent forces, adequately analysed so that we were dealing with independent atomic factors and between them completely comprehensive, acting with fluctuating relative strength on material constant and homogeneous through time, we might be able to use the method of multiple correlation with some confidence for disentangling the laws of their action … In fact we know that every one of these conditions is far from being satisfied by the economic material under investigation.

Letter from John Maynard Keynes to Royall Tyler (1938)

On the irrelevance of mainstream economics

20 Feb, 2021 at 12:06 | Posted in Economics | Leave a comment

There is something about the way mainstream economists construct their models nowadays that obviously doesn’t sit right.

One might have hoped that, humbled by the manifest failure of its theoretical pretences during the latest economic-financial crises, the one-sided, almost religious, insistence on axiomatic-deductivist modelling as the only scientific activity worth pursuing in economics would give way to methodological pluralism based on ontological considerations rather than formalistic tractability. But empirical evidence still only plays a minor role in mainstream economic theory, where models largely function as a substitute for empirical evidence.

The dominant ideology in the economics academy, I am maintaining, is precisely the extraordinarily wide-spread and long-lasting belief that mathematical modelling is somehow neutral at the level of content or form, but an essential method for science, underpinning any proper or serious economics …

The scandal of modern economics is not that it gets so many things wrong, but that it is so largely irrelevant. However in being irrelevant … the mainstream modelling orientation cannot but serve to deflect criticism from the nature of status quo at the level of the economy and thereby work to sustain it … In truth, few people take any mainstream analysis seriously, except in economics faculties’ promotion exercises.

If macroeconomic models — no matter of what ilk — build on microfoundational assumptions of representative actors, rational expectations, market clearing and equilibrium, and we know that real people and markets cannot be expected to obey these assumptions, the warrant for supposing that conclusions or hypotheses about causally relevant mechanisms or regularities can be bridged to the real world is obviously lacking. Incompatibility between actual behaviour and the behaviour of macroeconomic models built on representative actors and rational-expectations microfoundations is not a symptom of ‘irrationality.’ It rather shows the futility of trying to represent real-world target systems with models flagrantly at odds with reality.

A gadget is just a gadget — and no matter how many brilliantly silly mathematical models you come up with, they do not help us deal with the fundamental issues of modern economies. The mainstream economics project is — mostly because of its irrelevance — seriously harmful to most people, but harmless for those who benefit from the present status quo of our societies.

Economics — science or guessing game?

19 Feb, 2021 at 11:15 | Posted in Economics | Leave a comment

Is economics scientifically based knowledge? Can we predict how individuals or whole societies will act economically? For example, whether million-size bonuses make top executives more loyal to the companies that employ them? Can economic analysts really see into the future and predict coming crises? Or are economists’ forecasts more of a qualified guessing game?

Yours truly, Tore Ellingsen, and Daniel Waldenström give their answers in this episode of P1’s Vetandets värld.

Economics — where does it lead us?

19 Feb, 2021 at 09:51 | Posted in Economics | 2 Comments

The issue of interpreting economic theory is, in my opinion, the most serious problem now facing economic theorists. The feeling among many of us can be summarized as follows. Economic theory should deal with the real world. It is not a branch of abstract mathematics even though it utilizes mathematical tools. Since it is about the real world, people expect the theory to prove useful in achieving practical goals. But economic theory has not delivered the goods. Predictions from economic theory are not nearly as accurate as those offered by the natural sciences, and the link between economic theory and practical problems … is tenuous at best. Economic theory lacks a consensus as to its purpose and interpretation. Again and again, we find ourselves asking the question “where does it lead?”

Ariel Rubinstein

Adorno on pop culture

19 Feb, 2021 at 09:39 | Posted in Politics & Society | 1 Comment

When Adorno issued his own analyses of pop culture, though, he went off the beam. He was too irritated by the new Olympus of celebrities—and, even more, by the enthusiasm they inspired in younger intellectuals—to give a measured view. In the wake of “The Work of Art,” Adorno published two essays, “On Jazz,” and “On the Fetish Character of Music and the Regression of Listening,” that ignored the particulars of pop sounds and instead resorted to crude generalizations. Notoriously, Adorno compares jitterbugging to “St. Vitus’ dance or the reflexes of mutilated animals.” He shows no sympathy for the African-American experience, which was finding a new platform through jazz and popular song. The writing is polemical, and not remotely dialectical.

Alex Ross / The New Yorker

Manufacturing strategic ignorance

19 Feb, 2021 at 00:34 | Posted in Theory of Science & Methodology | Leave a comment


Say ‘consistent’ one more time and I …

18 Feb, 2021 at 17:19 | Posted in Economics | 10 Comments

Being able to model a credible world, a world that somehow could be considered ‘similar’ to the real world, is not the same as investigating the real world. The minimalist demand on models in terms of ‘credibility’ and ‘consistency’ has to give way to stronger epistemic demands. Claims made in a ‘consistent’ model do not per se warrant exporting those claims to real-world target systems.

Questions of external validity are especially important when it comes to microfounded macro models. It can never be enough that these models somehow are regarded as internally consistent. One always also has to pose questions of consistency with the data. Internal consistency without external validity is worth nothing.

Yours truly has for many years been urging economists to pay attention to the ontological foundations of their assumptions and models. Sad to say, economists have not paid much attention — and so modern economics has become increasingly irrelevant to the understanding of the real world.

As long as mainstream economists do not come up with any export-licenses for their theories and models to the real world in which we live, they really should not be surprised if people say that this is not science.

To have ‘consistent’ models and ‘valid’ evidence is not enough. What economics needs are real-world relevant models and sound evidence. Aiming only for ‘consistency’ and ‘validity’ is setting the aspiration level of economics too low for developing a realist and relevant science.

Say you are a die-hard ‘New Keynesian’ macroeconomist who wants to show that the preferred ‘rigidity’ view of the economy is the right one. Then, of course, it is a pretty trivial modelling matter to come up with whatever assumptions make it possible to construct yet another revised and amended DSGE model that is ‘consistent’ with that preferred view of the economy. But what’s the point of doing that when we all know that the assumptions used are ridiculously unrealistic? Using known-to-be-false modelling assumptions you can ‘prove’ anything!

Economics is not mathematics or logic. It’s about society. The real world. And if you want to analyse and explain things in that world you have to build on assumptions that are not known-to-be ridiculously false.

Axioms of ‘internal consistency’ of choice, such as the weak and the strong axioms of revealed preference … are often used in decision theory, micro-economics, game theory, social choice theory, and in related disciplines …

Can a set of choices really be seen as consistent or inconsistent on purely internal grounds, without bringing in something external to choice, such as the underlying objectives or values that are pursued or acknowledged by choice? …

The presumption of inconsistency may be easily disputed, depending on the context, if we know a bit more about what the person is trying to do. Suppose the person faces a choice at a dinner table between having the last remaining apple in the fruit basket (y) and having nothing instead (x), forgoing the nice-looking apple. She decides to behave decently and picks nothing (x), rather than the one apple (y). If, instead, the basket had contained two apples, and she had encountered the choice between having nothing (x), having one nice apple (y) and having another nice one (z), she could reasonably enough choose one (y), without violating any rule of good behavior. The presence of another apple (z) makes one of the two apples decently choosable, but this combination of choices would violate the standard consistency conditions, including Property α, even though there is nothing particularly “inconsistent” in this pair of choices (given her values and scruples) … We cannot determine whether the person is failing in any way without knowing what he is trying to do, that is, without knowing something external to the choice itself.

Amartya Sen
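Sen’s apple example can be checked mechanically. The sketch below (the representation and function name are my own, not Sen’s) encodes a choice function as a map from menus to chosen options and tests Property α — contraction consistency: an option chosen from a menu must also be chosen from any sub-menu that still contains it.

```python
# x = take nothing, y and z = the two apples, following Sen's labels.

def violates_alpha(choices):
    """choices maps frozenset menus to the single option chosen from each menu.
    Returns True if some option chosen from a menu is available but not
    chosen in a smaller menu -- a violation of Sen's Property α."""
    for big, picked in choices.items():
        for small, picked_small in choices.items():
            if small < big and picked in small and picked_small != picked:
                return True
    return False

# The polite diner: declines the last apple, but takes one when two remain.
polite = {
    frozenset({"x", "y"}): "x",        # only one apple left: take nothing
    frozenset({"x", "y", "z"}): "y",   # two apples left: take one
}
print(violates_alpha(polite))  # True: 'inconsistent' by the axiom alone
```

The axiom flags the polite diner as inconsistent, even though her behaviour is perfectly reasonable given her scruples — which is exactly Sen’s point that consistency cannot be judged on purely internal grounds.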

Hovern’ engan

18 Feb, 2021 at 14:14 | Posted in Varia | Leave a comment



People who have bad luck when they try to think …

17 Feb, 2021 at 21:51 | Posted in Politics & Society | Leave a comment

That different opinions meet is a good thing, not a dangerous one. The important thing is that we can talk about them at the school — and we do. The name change is still a relevant question. But not because everyone is expected to agree that ‘Vita havet’ has racist connotations. Perhaps it is simply time to create a different sense of community than the colours black and white, after which our two halls are now named. No decisions have been made. But the world is changing, and we with it.

Konstfack’s vice-chancellor Maria Lantz


Beyond mathematical modelling

16 Feb, 2021 at 19:53 | Posted in Economics | Leave a comment

Mathematical modelling has now dominated the economics academy for so long that younger people that emerge from economic studies who are dissatisfied with what they are taught, cannot think beyond the modelling. They have been immersed in it so long that it is a kind of common sense to them. The idea that modelling is bound to be almost always irrelevant just does not compute for many. Yet they recognize that modern academic economics mostly does not provide any insights. So, they assume that the fault lies in the sorts of topics covered, or conclusions drawn etc. with the solution to be found by way of doing the modelling differently. It is all quite dire …

The only diversity the mainstream advocate is that which remains consistent with the mathematical modelling emphasis. So, it is more or less all irrelevant, because it all carries an unrealistic ontology. Different accounts or ways of modelling of isolated atoms … The result is that academic economics has been and remains a big failure in terms of providing anything of relevance … The modelling project in economics, as it turns out, has in fact not produced a single insight into the real world – as opposed, of course, to occasionally tagging on insights determined independently of modelling. If that assessment were wrong, it would be so easy to provide a counterexample. Yet so far none has ever been seriously suggested in defence of the methods …

I love mathematics. But everything has a context of relevance. Mathematical modelling methods are just irrelevant to the analysis of most social situations; I suspect you have as much chance of cutting the grass with your armchair as generating insight by way of addressing human behaviour using the methods in question … The problem is not mathematical methods in themselves but their employment in conditions where doing so is simply not appropriate.

Tony Lawson

Using known-to-be-false assumptions, mainstream modellers can derive whatever conclusions they want. Wanting to show that ‘all economists consider austerity to be the right policy,’ you just assume, e.g., that ‘all economists are from Chicago’ and that ‘all economists from Chicago consider austerity to be the right policy.’ The conclusion follows by deduction — but is of course factually wrong. Models and theories built on that kind of reasoning are nothing but, as argued by Lawson, a pointless waste of time and resources.

Mainstream economics today is mainly an approach in which the goal is to write down a set of empirically untested assumptions and then deductively infer conclusions from them. When applying this deductivist thinking to economics, economists usually set up ‘as if’ models based on a set of tight axiomatic assumptions from which consistent and precise inferences are made. The beauty of this procedure is of course that if the axiomatic premises are true, the conclusions necessarily follow. The snag is that if the models are to be relevant, we also have to argue that their precision and rigour still hold when they are applied to real-world situations. They often don’t, for the simple reason that empty theoretical exercises of this kind do not tell us anything about the world. When addressing real economies, the idealizations necessary for the deductivist machinery to work simply don’t hold.

From a methodological point of view one can, of course, also wonder, how we are supposed to evaluate tests of theories and models building on known to be false assumptions. What is the point of such tests? What can those tests possibly teach us?

From falsehoods anything logically follows. Modern expected utility theory is a good example of this. Leaving the specification of preferences almost without any restrictions whatsoever, any imaginable evidence is safely made compatible with the all-embracing ‘theory’ — and a theory without informational content never risks being empirically tested and found falsified. Used in mainstream economics’ ‘thought experimental’ activities, it may of course be very ‘handy,’ but it is totally void of any empirical value.

So how should we evaluate the search for ever-greater precision and the concomitant arsenal of mathematical and formalist models? To a large extent, the answer hinges on what we want our models to perform and how we basically understand the world.

The world as we know it has limited scope for certainty and perfect knowledge. Its intrinsic and almost unlimited complexity and the interrelatedness of its parts prevent the possibility of treating it as constituted by atoms with discretely distinct, separable and stable causal relations. Our knowledge accordingly has to be of a rather fallible kind. To search for deductive precision and rigour in such a world is self-defeating. The only way to defend such an endeavour is to restrict oneself to proving things in closed model-worlds. Why we should care about these, and not ask questions of relevance, is hard to see.

‘New Keynesian’ macroeconomics

16 Feb, 2021 at 08:49 | Posted in Economics | 2 Comments

The standard NK [New Keynesian] model, like most of its predecessors in the RBC literature, represents an economy inhabited by an infinitely-lived representative household. That assumption, while obviously unrealistic, may be justified by the belief that, like so many other aspects of reality, the finiteness of life and the observed heterogeneity of individuals along many dimensions … can be safely ignored for the purposes of explaining aggregate fluctuations and their interaction with monetary policy, with the consequent advantages in terms of tractability …

There is a sense in which none of the extensions of the NK model described above can capture an important aspect of most financial crises, namely, a gradual build-up of financial imbalances leading to an eventual “crash” characterized by defaults, sudden-stops of credit flows, asset price declines, and a large contraction in aggregate demand, output and employment. By contrast, most of the models considered above share with their predecessors a focus on equilibria that take the form of stationary fluctuations driven by exogenous shocks. This is also the case in variants of those models that allow for financial frictions of different kinds and which have become quite popular as a result of the financial crisis … The introduction of financial frictions in those models often leads to an amplification of the effects of non-financial shocks. It also makes room for additional sources of fluctuations related to the presence of financial frictions … or exogenous changes in the tightness of borrowing constraints​ … Most attempts to use a version of the NK models to explain the “financial crisis,” however, end up relying on a large exogenous shock that impinges on the economy unexpectedly, triggering a large recession, possibly amplified by a financial accelerator mechanism embedded in the model. It is not obvious what the empirical counterpart to such an exogenous shock is.

Jordi Gali

Gali’s presentation sure raises important questions that serious economists ought to ask themselves. Using ‘simplifying’ tractability assumptions such as an ‘infinitely-lived representative household,’ rational expectations, common knowledge, additivity, ergodicity, etc. — because otherwise they cannot ‘manipulate’ their models or come up with ‘rigorous’ and ‘precise’ predictions and explanations — does not exempt economists, ‘New Keynesian’ or not, from having to justify their modelling choices. Being able to ‘manipulate’ things in models cannot per se be enough to warrant a methodological choice. If economists do not come up with any other arguments for their chosen modelling strategy than Gali’s “advantages in terms of tractability,” it is certainly fair to ask for clarification of the ultimate goal of the whole modelling endeavour.

Gali’s article underlines that the essence of mainstream economic theory is its almost exclusive use of a deductivist methodology — a methodology that is more or less used without a shred of argument to justify its relevance.

The theories and models that mainstream economists construct describe imaginary worlds, using a combination of formal sign systems such as mathematics and ordinary language. The descriptions made are extremely thin and to a large degree disconnected from the specific contexts of the target system one (usually) wants to (partially) represent. This is not by chance. These closed formalistic-mathematical theories and models are constructed in order to deliver purportedly rigorous deductions that may somehow be exportable to the target system. By analyzing a few causal factors in their “laboratories,” they hope to perform “thought experiments” and observe how these factors operate on their own, without impediments or confounders.

Unfortunately, this is not so. The reason is that economic causes never act in a socio-economic vacuum. Causes have to be set in a contextual structure to be able to operate, and this structure has to take some form or other. But instead of incorporating structures that are true to the target system, the settings made in economic models are based on formalistic mathematical tractability. In the models they appear as unrealistic assumptions, usually playing a decisive role in getting the deductive machinery to deliver “precise” and “rigorous” results. This, of course, makes exporting to real-world target systems problematic, since these models — as part of a deductivist covering-law tradition in economics — are thought to deliver general and far-reaching conclusions that are externally valid. But how can we be sure the lessons learned in these theories and models have external validity when they are based on highly specific unrealistic assumptions? As a rule, the more specific and concrete the structures, the less generalizable the results. Admitting that we in principle can move from (partial) falsehoods in theories and models to truth in real-world target systems does not take us very far unless a thorough explication of the relation between theory, model and real-world target system is made. If models assume representative actors, rational expectations, market clearing and equilibrium, and we know that real people and markets cannot be expected to obey these assumptions, the warrant for supposing that conclusions or hypotheses about causally relevant mechanisms or regularities can be bridged to the real world is obviously lacking. To have a deductive warrant for things happening in a closed model is no guarantee that they are preserved when applied to an open real-world target system.

Henry Louis Mencken once wrote that “there is always an easy solution to every human problem – neat, plausible and wrong.” And mainstream economics has indeed been wrong. Very wrong. Its main result, so far, has been to demonstrate the futility of trying to build a satisfactory bridge between formalistic-axiomatic deductivist models and real-world target systems. Assuming, for example, perfect knowledge, instant market clearing, and approximating aggregate behaviour with unrealistically heroic assumptions about “infinitely-lived” representative actors just will not do. The assumptions made surreptitiously eliminate the very phenomena we want to study: uncertainty, disequilibrium, structural instability, and problems of aggregation and coordination between different individuals and groups.

The punch line is that most of the problems that mainstream economics is wrestling with issue from its attempts at formalistic modelling of social phenomena per se. If scientific progress in economics — as Robert Lucas and other latter-day mainstream economists seem to think — lies in our ability to tell “better and better stories” without considering the realm of imagination and ideas a retreat from real-world target systems, one would of course expect our economics journals to be filled with articles supporting the stories with empirical evidence. However, the journals still show a striking and embarrassing paucity of empirical studies that (try to) substantiate these theoretical claims. Equally amazing is how little is said about the relationship between the models and real-world target systems. It is as though explicit discussion, argumentation and justification on the subject were not thought to be required. Mainstream economic theory is obviously navigating in dire straits.

If the ultimate criterion of success for a deductivist system is the extent to which it predicts and coheres with (parts of) reality, modern mainstream economics seems to be a hopeless misallocation of scientific resources. To focus scientific endeavours on proving things in models is a gross misapprehension of what an economic theory ought to be about. Deductivist models and methods disconnected from reality are not relevant for predicting, explaining or understanding real-world economic target systems. These systems do not conform to the restricted closed-system structure that the mainstream modelling strategy presupposes.

Mainstream economic theory still today consists mainly of investigating economic models. It long ago gave up on the real world and contents itself with proving things about thought-up worlds. Empirical evidence still only plays a minor role in mainstream economic theory, where models largely function as substitutes for empirical evidence.

What is wrong with mainstream economics is not that it employs models per se, but that it employs poor models. They are poor because they do not bridge to the real-world target system in which we live. Hopefully, humbled by the manifest failure of its theoretical pretences, the one-sided, almost religious, insistence on mathematical deductivist modelling as the only scientific activity worth pursuing in economics will give way to methodological pluralism based on ontological considerations rather than “consequent advantages in terms of tractability.”

Noam Chomsky on postmodern grotesquerie

15 Feb, 2021 at 15:15 | Posted in Theory of Science & Methodology | Leave a comment


“If you can’t say it clearly, you have not thought it through.”

