Manufacturing strategic ignorance

19 Feb, 2021 at 00:34 | Posted in Theory of Science & Methodology | Leave a comment


Noam Chomsky on postmodern grotesquerie

15 Feb, 2021 at 15:15 | Posted in Theory of Science & Methodology | Leave a comment



Hans Albert turns 100

8 Feb, 2021 at 08:41 | Posted in Economics, Theory of Science & Methodology | 1 Comment


Clearly, it is possible to interpret the ‘presuppositions’ of a theoretical system … not as hypotheses, but simply as limitations to the area of application of the system in question. Since a relationship to reality is usually ensured by the language used in economic statements, in this case the impression is generated that a content-laden statement about reality is being made, although the system is fully immunized and thus without content. In my view that is often a source of self-deception in pure economic thought …

A further possibility for immunizing theories consists in simply leaving open the area of application of the constructed model so that it is impossible to refute it with counterexamples. This of course is usually done without a complete knowledge of the fatal consequences of such methodological strategies for the usefulness of the theoretical conception in question, but with the view that this is a characteristic of especially highly developed economic procedures: the thinking in models, which, however, among those theoreticians who cultivate neoclassical thought, in essence amounts to a new form of Platonism.

Hans Albert

Seen from a deductive-nomological perspective, typical economic models (M) usually consist of a theory (T) — a set of more or less general (typically universal) law-like hypotheses (H) — and a set of (typically spatio-temporal) auxiliary assumptions (A). The auxiliary assumptions give ‘boundary’ descriptions such that it is possible to deduce logically (meeting the standard of validity) a conclusion (explanandum) from the premises T & A. Using this kind of model, game theorists are (portrayed as) trying to explain (predict) facts by subsuming them under T, given A.

An obvious problem with the formal-logical requirements of what counts as H is the often severely restricted reach of the ‘law.’ In the worst case, it may not be applicable to any real, empirical, relevant situation at all. And if A is not true, then M does not really explain (although it may predict) at all. Deductive arguments should be sound — valid and with true premises — so that we are assured of having true conclusions. Constructing game-theoretical models assuming ‘common knowledge’ and ‘rational expectations’ says nothing of situations where knowledge is ‘non-common’ and expectations are ‘non-rational.’
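The validity/soundness distinction can be illustrated with a toy simulation (everything here — the ‘law’, the numbers, the interfering factor — is hypothetical):

```python
import random

# Law-like hypothesis H: y = 2x, which holds only given the
# auxiliary assumption A ("no interfering factors").

def model_predict(x):
    # Inside the model the deduction is VALID: given T & A,
    # the conclusion y = 2x follows with certainty.
    return 2 * x

def open_world_outcome(x, rng):
    # An 'open' world where A is false: an unmodelled factor z
    # interferes with the outcome, so the argument is not SOUND.
    z = rng.uniform(1.0, 5.0)
    return 2 * x + z

rng = random.Random(42)
x = 10
print(model_predict(x))                  # always 20 inside the model
print(open_world_outcome(x, rng))        # never 20 once A fails
```

Inside the model the deduction never fails; it is the truth of the auxiliary assumption, not the logic, that breaks down when the system is open.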

Building theories and models that are ‘true’ in their own very limited ‘idealized’ domain is of limited value if we cannot supply bridges to the real world. ‘Laws’ that only apply in specific ‘idealized’ circumstances —  in ‘nomological machines’ — are not the stuff that real science is built of.

When confronted with the massive empirical refutations of almost all models they have set up, many game theorists react by saying that these refutations only hit A (the Lakatosian ‘protective belt’), and that by ‘successive approximations’ it is possible to make the models more readily testable and predictively accurate. Even if T & A1 do not have much empirical content, we are to believe that by successive approximation we can reach, say, T & A25, and finally arrive at robust and true predictions and explanations.

Hans Albert’s ‘Model Platonism’ critique shows that there is a strong tendency for modellers to use the method of successive approximations as a kind of ‘immunization,’ taking for granted that there can never be any faults with the theory. Explanatory and predictive failures hinge solely on the auxiliary assumptions. That the kind of theories and models used by game theorists should all be held non-defeasibly corroborated, seems, however — to say the least — rather unwarranted.

Retreating into looking upon models and theories as some kind of ‘conceptual exploration,’ and giving up any hope whatsoever of relating them to the real world, is pure defeatism. Instead of trying to bridge the gap between models and the world, these modellers simply decide to look the other way.

To me, this kind of scientific defeatism is equivalent to surrendering our search for understanding and explaining the world we live in. It cannot be enough to prove or deduce things in a model world. If theories and models do not directly or indirectly tell us anything about the world we live in – then why should we waste any of our precious time on them?

When should we trust science?

3 Feb, 2021 at 19:00 | Posted in Theory of Science & Methodology | 2 Comments


Using formal mathematical modelling, mainstream economists can certainly guarantee that the conclusions hold given the assumptions. However, the validity we get in abstract model worlds does not warrant transfer to real-world economies. Validity may be good, but it isn’t — as Nancy Cartwright so eloquently argues — enough. From a realist perspective, both relevance and soundness are sine qua non.

In their search for validity, rigour and precision, mainstream macro modellers of various ilks construct microfounded DSGE models that standardly assume rational expectations, Walrasian market clearing, unique equilibria, time invariance, linear separability and homogeneity of both inputs/outputs and technology, infinitely lived intertemporally optimizing representative household/consumer/producer agents with homothetic and identical preferences, etc., etc. At the same time, the models standardly ignore complexity, diversity, uncertainty, coordination problems, non-market clearing prices, real aggregation problems, emergence, expectations formation, etc., etc.

The predominant strategy in mainstream macroeconomics today is to build ‘rigorous’ models and make things happen in these ‘analogue-economy models.’ But although macro-econometrics may have supplied economists with rigorous replicas of real economies, if the goal of theory is to be able to make accurate forecasts or explain what happens in real economies, this ability to — ad nauseam — construct toy models does not give much leverage.

‘Rigorous’ and ‘precise’ New Classical models — and that goes for the ‘New Keynesian’ variety too — cannot be considered anything other than unsubstantiated conjectures as long as they aren’t supported by evidence from outside the theory or model. To my knowledge, no decisive empirical evidence of that kind has ever been presented.


Mainstream economists are proud of having an ever-growing smorgasbord of models to cherry-pick from (as long as, of course, the models do not question the standard modelling strategy) when performing their analyses. The ‘rigorous’ and ‘precise’ deductions made in these closed models, however, are not in any way matched by a similar stringency or precision when it comes to what ought to be the most important stage of any research — making statements about and explaining things in real economies. Although almost every mainstream economist holds the view that thought-experimental modelling has to be followed by confronting the models with reality — which is what they indirectly want to predict/explain/understand using their models — at this stage they all of a sudden become exceedingly vague and imprecise. It is as if all the intellectual force has been invested in the modelling stage and nothing is left for what really matters — what exactly these models teach us about real economies.

No matter how precise and rigorous the analysis, and no matter how hard one tries to cast the argument in modern mathematical form, it does not push economic science forward a single millimetre if it does not stand the acid test of relevance to the target. No matter how clear, precise, rigorous or certain the inferences delivered inside these models are, they do not per se say anything about real-world economies.

Proving things ‘rigorously’ in mathematical models is at most a starting point for doing an interesting and relevant economic analysis. Forgetting to supply export warrants to the real world makes the analysis an empty exercise in formalism without real scientific value.

On logic and science

29 Dec, 2020 at 11:35 | Posted in Theory of Science & Methodology | 6 Comments

Suppose you conducted an observational study to identify the effect of heart transplant A on death Y and that you assumed no unmeasured confounding given disease severity L. A critic of your study says “the inferences from this observational study may be incorrect because of potential confounding.” The critic is not making a scientific statement, but a logical one. Since the findings from any observational study may be confounded, it is obviously true that those of your study can be confounded. If the critic’s intent was to provide evidence about the shortcomings of your particular study, he failed. His criticism is noninformative because he simply restated a characteristic of observational research that you and the critic already knew before the study was conducted.

To appropriately criticize your study, the critic needs to engage in a truly scientific conversation. For example, the critic may cite experimental or observational findings that contradict your findings, or he can say something along the lines of “the inferences from this observational study may be incorrect because of potential confounding due to cigarette smoking, a common cause through which a backdoor path may remain open”. This latter option provides you with a testable challenge to your assumption of no unmeasured confounding. The burden of the proof is again yours.
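The critic’s second, genuinely scientific objection can be made concrete with a small simulation (the data-generating process below is entirely hypothetical): smoking acts as an unmeasured common cause of treatment and death, and adjusting for it closes the backdoor path.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical process: smoking raises both the chance of getting
# treatment A and the chance of death Y, opening the backdoor path
# A <- smoking -> Y. By construction A has NO effect on Y.
smoking = rng.binomial(1, 0.5, n)
a = rng.binomial(1, 0.2 + 0.5 * smoking)   # smokers treated more often
y = rng.binomial(1, 0.1 + 0.4 * smoking)   # smokers die more; A irrelevant

# The naive (unadjusted) risk difference is confounded away from zero...
naive = y[a == 1].mean() - y[a == 0].mean()

# ...while standardising over the smoking strata (closing the
# backdoor path) recovers the true null effect.
adjusted = sum(
    (y[(a == 1) & (smoking == s)].mean()
     - y[(a == 0) & (smoking == s)].mean()) * (smoking == s).mean()
    for s in (0, 1)
)
print(naive, adjusted)
```

By construction the treatment has no effect, yet the unadjusted contrast suggests a sizeable one — exactly the kind of specific, testable challenge the quotation asks critics to provide.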

To be ‘analytical’ and ‘logical’ is something most people find recommendable. These words have a positive connotation. Scientists, it is supposed, think deeper than most other people because they use ‘logical’ and ‘analytical’ methods. In dictionaries, logic is often defined as “reasoning conducted or assessed according to strict principles of validity” and ‘analysis’ as having to do with “breaking something down.”

But that’s not the whole picture. As used in science, analysis usually means something more specific. It means separating a problem into its constituent elements so as to reduce complex — and often complicated — wholes into smaller (simpler) and more manageable parts. You take the whole and break it down (decompose it) into its separate parts. Looking at the parts separately, one at a time, you are supposed to gain a better understanding of how these parts operate and work. Built on that more or less ‘atomistic’ knowledge, you are then supposed to be able to predict and explain the behaviour of the complex and complicated whole.

In economics, that means you take the economic system and divide it into its separate parts, analyse these parts one at a time, and then after analysing the parts separately, you put the pieces together.

The ‘analytical’ approach is typically used in economic modelling, where you start with a simple model with few isolated and idealized variables. By ‘successive approximations,’ you then add more and more variables and finally get a ‘true’ model of the whole.

This may sound like a convincing and good scientific approach.

But there is a snag!

The procedure only really works when you have a machine-like whole/system/economy where the parts appear in fixed and stable configurations. And if there is anything we know about reality, it is that it is not a machine! The world we live in is not a ‘closed’ system. On the contrary. It is an essentially ‘open’ system. Things are uncertain, relational, interdependent, complex, and ever-changing.

Without assuming that the underlying structure of the economy you try to analyze remains stable/invariant/constant, there is no chance the equations of the model remain constant. That is the very rationale why economists use (often only implicitly) the assumption of ceteris paribus. But — nota bene — this can only be a hypothesis. You have to argue the case. If you cannot supply any sustainable justifications or warrants for the adequacy of making that assumption, the whole analytical economic project becomes pointless, non-informative nonsense. Not only do we have to assume that we can shield off variables from each other analytically (external closure). We also have to assume that each and every variable is itself amenable to being understood as a stable, regularity-producing machine (internal closure). Which, of course, we know is as a rule not possible.

Some things, relations, and structures are not analytically graspable. Trying to analyse parenthood, marriage, employment, etc., piece by piece doesn’t make sense. To be a chieftain, a capital-owner, or a slave is not an individual property of an individual. It can come about only when individuals are integral parts of certain social structures and positions. Social relations and contexts cannot be reduced to individual phenomena. A cheque presupposes a banking system and being a tribe-member presupposes a tribe. By not taking account of this, the ‘analytical’ approach renders economic ‘analysis’ uninformative nonsense.

Using ‘logical’ and ‘analytical’ methods in the social sciences means that economists succumb to the fallacy of composition — the belief that the whole is nothing but the sum of its parts. In society and in the economy this is arguably not the case. An adequate analysis of society and the economy a fortiori cannot proceed by just adding up the acts and decisions of individuals. The whole is more than a sum of parts.

Mainstream economics is built on using the ‘analytical’ method. The models built with this method presuppose that social reality is ‘closed.’ Since social reality is known to be fundamentally ‘open,’ it is difficult to see how models of that kind can explain anything about what happens in such a universe. Postulating closed conditions to make models operational and then impute these closed conditions to society’s real structure is an unwarranted procedure that does not take necessary ontological considerations seriously.

In face of the kind of methodological individualism and rational choice theory that dominate mainstream economics we have to admit that even if knowing the aspirations and intentions of individuals are necessary prerequisites for giving explanations of social events, they are far from sufficient. Even the most elementary ‘rational’ actions in society presuppose the existence of social forms that it is not possible to reduce to the intentions of individuals. Here, the ‘analytical’ method fails again.

The overarching flaw with the ‘analytical’ economic approach of methodological individualism and rational choice theory is that it reduces social explanations to purportedly individual characteristics. But many of the characteristics and actions of the individual originate in and are made possible only through society and its relations. Society is not a Wittgensteinian ‘Tractatus-world’ characterized by atomistic states of affairs. Society is not reducible to individuals, since the social characteristics, forces, and actions of the individual are determined by pre-existing social structures and positions. Even though society is not a volitional individual, and the individual is not an entity given outside of society, the individual (actor) and the society (structure) have to be kept analytically distinct. They are tied together through the individual’s reproduction and transformation of already given social structures.

Since at least the marginal revolution in economics in the 1870s it has been an essential feature of economics to ‘analytically’ treat individuals as essentially independent and separate entities of action and decision. But, really, in such a complex, organic and evolutionary system as an economy, that kind of independence is a deeply unrealistic assumption to make. Simply assuming that there is strict independence between the variables we try to analyze doesn’t help us in the least if that hypothesis turns out to be unwarranted.

To be able to apply the ‘analytical’ approach, economists basically have to assume that the universe consists of ‘atoms’ that exercise their own separate and invariable effects in such a way that the whole consists of nothing but an addition of these separate atoms and their changes. These simplistic assumptions of isolation, atomicity, and additivity are, however, at odds with reality. In real-world settings, we know that the ever-changing contexts make it futile to search for knowledge by making such reductionist assumptions. Real-world individuals are not reducible to contentless atoms and so are not susceptible to atomistic analysis. The world is not reducible to a set of atomistic ‘individuals’ and ‘states.’ How variable X works and influences real-world economies in situation A cannot simply be assumed to be understood or explained by looking at how X works in situation B. Knowledge of X probably does not tell us much if we do not take into consideration how it depends on Y and Z. It can never be legitimate just to assume that the world is ‘atomistic.’ Assuming real-world additivity cannot be the right thing to do if the things we have around us, rather than being ‘atoms’, are ‘organic’ entities.

If we want to develop a new and better economics, we have to give up the single-minded insistence on a deductivist straitjacket methodology and the ‘analytical’ method. To focus scientific endeavours on proving things in models is a gross misapprehension of the purpose of economic theory. Deductivist models and ‘analytical’ methods disconnected from reality are of no help in predicting, explaining or understanding real-world economies.

To have ‘consistent’ models and ‘valid’ evidence is not enough. What economics needs are real-world relevant models and sound evidence. Aiming only for ‘consistency’ and ‘validity’ sets the aspiration level of economics too low for developing a realist and relevant science.

Economics is not mathematics or logic. It’s about society. The real world.

Models may help us think through problems. But we should never forget that the formalism we use in our models is not self-evidently transportable to a largely unknown and uncertain reality. The tragedy with mainstream economic theory is that it thinks that the logic and mathematics used are sufficient for dealing with our real-world problems. They are not! Model deductions based on questionable assumptions can never be anything but pure exercises in hypothetical reasoning.

The world in which we live is inherently uncertain and quantifiable probabilities are the exception rather than the rule. To every statement about it is attached a ‘weight of argument’ that makes it impossible to reduce our beliefs and expectations to a one-dimensional stochastic probability distribution. If “God does not play dice” as Einstein maintained, I would add “nor do people.” The world as we know it has limited scope for certainty and perfect knowledge. Its intrinsic and almost unlimited complexity and the interrelatedness of its organic parts prevent the possibility of treating it as constituted by ‘legal atoms’ with discretely distinct, separable and stable causal relations. Our knowledge accordingly has to be of a rather fallible kind.

If the real world is fuzzy, vague and indeterminate, then why should our models build upon a desire to describe it as precise and predictable? Even if there always has to be a trade-off between theory-internal validity and external validity, we have to ask ourselves if our models are relevant.

‘Human logic’ has to supplant the classical — formal — logic of deductivism if we want to have anything of interest to say of the real world we inhabit. Logic is a marvellous tool in mathematics and axiomatic-deductivist systems, but a poor guide for action in real-world systems, in which concepts and entities are without clear boundaries and continually interact and overlap. In this world, I would say we are better served with a methodology that takes into account that the more we know, the more we know we do not know.

Mathematics and logic cannot establish the truth value of facts. Never have. Never will.

Postmodern thinking

28 Dec, 2020 at 18:27 | Posted in Theory of Science & Methodology | Comments Off on Postmodern thinking

The compulsive types there correspond to the paranoids here. The wistful opposition to factual research, the legitimate consciousness that scientism forgets what is best, exacerbates through its naïveté the split from which it suffers. Instead of comprehending the facts, behind which others are barricaded, it hurriedly throws together whatever it can grab from them, rushing off to play so uncritically with apocryphal cognitions, with a couple of isolated and hypostatized categories, and with itself, that it is easily disposed of by referring to the unyielding facts. It is precisely the critical element which is lost in the apparently independent thought. The insistence on the secret of the world hidden beneath the shell, which dares not explain how it relates to the shell, only reconfirms through such abstemiousness the thought that there must be good reasons for that shell, which one ought to accept without question. Between the pleasure of emptiness and the lie of plenitude, the ruling condition of the spirit [Geistes: mind] permits no third option.

Long before ‘postmodernism’ became fashionable among a certain kind of ‘intellectuals’, Adorno wrote searching critiques of this kind of thinking.

When listening to — or reading — the postmodern mumbo jumbo that surrounds us today in the social sciences and humanities, I often find myself wishing for that special Annie Hall moment of truth.

Why economic models do not explain

7 Dec, 2020 at 16:49 | Posted in Economics, Theory of Science & Methodology | 6 Comments

In physics, we have theories and centuries of experience and experiments that show how gravity makes bodies move. In economics, we know there is nothing equivalent. Mainstream economists necessarily have to load their theories and models with sets of auxiliary structural assumptions to get any results at all.

So why then do mainstream economists keep on pursuing this modelling project?

Mainstream ‘as if’ models are based on the logic of idealization and a set of tight axiomatic and ‘structural’ assumptions from which consistent and precise inferences are made. The beauty of this procedure is, of course, that if the assumptions are true, the conclusions necessarily follow. But it is a poor guide for real-world systems. As Hans Albert has it on this ‘style of thought’:

A theory is scientifically relevant first of all because of its possible explanatory power, its performance, which is coupled with its informational content … Clearly, it is possible to interpret the ‘presuppositions’ of a theoretical system … not as hypotheses, but simply as limitations to the area of application of the system in question. Since a relationship to reality is usually ensured by the language used in economic statements, in this case the impression is generated that a content-laden statement about reality is being made, although the system is fully immunized and thus without content. In my view that is often a source of self-deception in pure economic thought …

The way axioms and theorems are formulated in mainstream economics often leaves their specification almost without any restrictions whatsoever, safely making every imaginable piece of evidence compatible with the all-embracing ‘theory’ — and a theory without informational content never risks being empirically tested and falsified. Used in mainstream ‘thought experimental’ activities it may, of course, be very ‘handy’, but it is totally void of any empirical value.

Some economic methodologists have lately been arguing that economic models may well be considered ‘minimal models’ that portray ‘credible worlds’ without having to care about things like similarity, isomorphism, simplified ‘representationality’ or resemblance to the real world. These models are said to resemble ‘realistic novels’ that portray ‘possible worlds’. And sure: economists constructing and working with that kind of models learn things about what might happen in those ‘possible worlds’. But is that really the stuff real science is made of? I think not. As long as one doesn’t come up with credible export warrants to real-world target systems and show how those models — often built on idealizations with assumptions known to be false — enhance our understanding or explanations of the real world, they are nothing more than novels. Showing that something is possible in a ‘possible world’ doesn’t give us a justified license to infer that it is therefore also possible in the real world. ‘The Great Gatsby’ is a wonderful novel, but if you truly want to learn about what is going on in the world of finance, I would rather recommend reading Minsky or Keynes and directly confronting real-world finance.

Different models have different cognitive goals. Constructing models that aim for explanatory insights may not optimize them for making (quantitative) predictions or for delivering some kind of ‘understanding’ of what’s going on in the intended target system. All modelling in science involves tradeoffs. There simply is no ‘best’ model. For one purpose in one context model A is ‘best’; for other purposes and contexts model B may be deemed ‘best’. Depending on the level of generality, abstraction, and depth, we come up with different models. But even so, I would argue that if we are looking for what yours truly has called ‘adequate explanations’ (Ekonomisk teori och metod, Studentlitteratur, 2005), it is not enough to just come up with ‘minimal’ or ‘credible world’ models.

The assumptions and descriptions we use in our modelling have to be true — or at least ‘harmlessly’ false — and give a sufficiently detailed characterization of the mechanisms and forces at work. Models in mainstream economics do nothing of the kind.

Coming up with models that show how things may possibly be explained is not what we are looking for. It is not enough. We want models that build on assumptions that are not in conflict with known facts and that show how things actually are to be explained. Our aspirations have to be more far-reaching than just constructing coherent and ‘credible’ models about ‘possible worlds’. We want to understand and explain ‘difference-making’ in the real world and not just in some made-up fantasy world. No matter how many mechanisms or coherent relations you represent in your model, you still have to show that these mechanisms and relations are at work and exist in society if we are to do real science. Science has to be something more than just more or less realistic ‘story-telling’ or ‘explanatory fictionalism’. You have to provide decisive empirical evidence that what you can infer in your model also helps us to uncover what actually goes on in the real world. It is not enough to present your students with epistemically informative insights about logically possible but non-existent general equilibrium models. You also, and more importantly, have to supply a world-linking argumentation and show how those models explain or teach us something about real-world economies. If you fail to support your models in that way, why should we care about them? And if you do not inform us about what the real-world intended target systems of your modelling are, how are we going to be able to evaluate or test them? Without that kind of information it is impossible for us to check whether the ‘possible world’ models you come up with actually hold for the one world in which we live — the real world.

Debunking the anti-vaccination movement

3 Dec, 2020 at 08:28 | Posted in Theory of Science & Methodology | Comments Off on Debunking the anti-vaccination movement



Vaccin och vetenskap

2 Dec, 2020 at 21:12 | Posted in Theory of Science & Methodology | Comments Off on Vaccin och vetenskap


What is causality?

22 Oct, 2020 at 13:27 | Posted in Theory of Science & Methodology | 8 Comments


Inference to the best explanation

7 Oct, 2020 at 19:28 | Posted in Theory of Science & Methodology | Comments Off on Inference to the best explanation


One of the few statisticians I have on my blogroll is Andrew Gelman. Although not sharing his Bayesian leanings, yours truly finds his open-minded, thought-provoking and non-dogmatic statistical thinking highly recommendable. The plaidoyer infra for “reverse causal questioning” is typically Gelmanian:

When statistical and econometric methodologists write about causal inference, they generally focus on forward causal questions. We are taught to answer questions of the type “What if?”, rather than “Why?” Following the work by Rubin (1977) causal questions are typically framed in terms of manipulations: if x were changed by one unit, how much would y be expected to change? But reverse causal questions are important too … In many ways, it is the reverse causal questions that motivate the research, including experiments and observational studies, that we use to answer the forward questions …

Reverse causal reasoning is different; it involves asking questions and searching for new variables that might not yet even be in our model. We can frame reverse causal questions as model checking. It goes like this: what we see is some pattern in the world that needs an explanation. What does it mean to “need an explanation”? It means that existing explanations — the existing model of the phenomenon — does not do the job …

By formalizing reverse causal reasoning within the process of data analysis, we hope to make a step toward connecting our statistical reasoning to the ways that we naturally think and talk about causality. This is consistent with views such as Cartwright (2007) that causal inference in reality is more complex than is captured in any theory of inference … What we are really suggesting is a way of talking about reverse causal questions in a way that is complementary to, rather than outside of, the mainstream formalisms of statistics and econometrics.
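The forward (“what if?”) framing Gelman describes can be sketched in a few lines of simulation (all numbers hypothetical): each unit carries two potential outcomes in the Rubin sense, and randomised manipulation of x identifies the average effect on y.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Potential outcomes for every unit: y0 if untreated, y1 if treated.
# The forward causal estimand is the average effect of setting x = 1.
y0 = rng.normal(10.0, 2.0, n)
y1 = y0 + 3.0                    # true effect of the manipulation: +3

# Randomised assignment makes the simple difference in observed
# means an unbiased answer to the "what if?" question.
x = rng.binomial(1, 0.5, n)
y = np.where(x == 1, y1, y0)
ate_hat = y[x == 1].mean() - y[x == 0].mean()
print(ate_hat)
```

A reverse causal question would instead start from a surprising pattern in y and ask what variable, perhaps not yet in the model, could explain it — which is why Gelman frames it as model checking.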

In a time when scientific relativism is expanding, it is more important than ever not to reduce science to a pure discursive level and to maintain the Enlightenment tradition. There exists a reality beyond our theories and concepts of it. It is this reality that our theories in some way deal with. Contrary to positivism, yours truly would as a critical realist argue that the main task of science is not to detect event-regularities between observed facts. Rather, the task must be conceived as identifying the underlying structure and forces that produce the observed events.

In Gelman’s essay there is no explicit argument for abduction — inference to the best explanation — but I would still argue that it is de facto nothing but a very strong argument for why scientific realism and inference to the best explanation are the best alternatives for explaining what’s going on in the world we live in. The focus on causality, model checking, anomalies and context-dependence — although here expressed in statistical terms — is as close to abductive reasoning as we get in statistics and econometrics today.

The value of uncertainty

1 Oct, 2020 at 16:05 | Posted in Theory of Science & Methodology | Comments Off on The value of uncertainty

What is the evolutionary utility of the predictive strategy if it allows our models to remain so persistently disconnected from our external situation?

To fully understand the self-reinforcing power of such habits, we need to look once more beyond the brain. We need to attend to how the process of acting to minimise surprise ensnares our environment into the overarching error-minimising process. At the simplest level, such actions might just involve ignoring immediate sources of error – as when alcoholics preserve the belief that they are functioning well by not looking at how much they’re regularly spending on drink. But our actions can also have a lasting effect on the structure of our environment itself, by moulding it into the shape of our cognitive model. Through this process, addicted predictors can create a personal niche in which elements incompatible with their model are excluded altogether – for instance, by associating only with others who similarly engage in, and thus do not challenge, their addictive behaviours.

This mutually reinforcing circularity of habit and habitat is not a unique feature of substance addiction. In 2010, the internet activist Eli Pariser introduced the term ‘filter bubble’ to describe the growing fragmentation of the internet as individuals increasingly interact only with a limited subset of sources that fit their pre-existing biases …

As the journalist Bill Bishop argued in The Big Sort (2008), by charting the movement of US citizens into increasingly like-minded neighbourhoods over the past century, this homophilic drive has long directed our movements through physical space. In the online world, it now occurs through more than a million subreddits and innumerable Tumblr communities serving everyone from queer skateboarders to incels, flat-Earthers and furries …

Perversely, the more flexible the environment, the more it allows for the creation of self-protective bubbles and micro-niches, and hence affords the entrenchment of rigid models.

Mark Miller et al.

Why economic models do not explain

16 Sep, 2020 at 12:33 | Posted in Theory of Science & Methodology | 6 Comments

Analogue-economy models may picture Galilean thought experiments or they may describe credible worlds. In either case we have a problem in taking lessons from the model to the world. The problem is the venerable one of unrealistic assumptions, exacerbated in economics by the fact that the paucity of economic principles with serious empirical content makes it difficult to do without detailed structural assumptions. But the worry is not just that the assumptions are unrealistic; rather, they are unrealistic in just the wrong way.

Nancy Cartwright

One of the limitations of economics is the restricted possibility of performing experiments, which forces it to rely mainly on observational studies for knowledge of real-world economies.

But still — the idea of performing laboratory experiments holds a firm grip on our wish to discover (causal) relationships between economic ‘variables.’ If only we could isolate and manipulate variables in controlled environments, we would probably find ourselves in a situation where we could, with greater ‘rigour’ and ‘precision,’ describe, predict, or explain economic happenings in terms of ‘structural’ causes, ‘parameter’ values of relevant variables, and economic ‘laws.’

Galileo Galilei’s experiments are often held up as exemplary of how to perform experiments to learn something about the real world. Galileo’s heavy balls dropped from the tower of Pisa confirmed that the distance an object falls is proportional to the square of the time elapsed, and that this law (empirical regularity) of falling bodies is applicable outside a vacuum tube when, e.g., air resistance is negligible.

The big problem is to decide or find out exactly for which objects air resistance (and other potentially ‘confounding’ factors) is ‘negligible.’ In the case of heavy balls, air resistance is obviously negligible, but how about feathers or plastic bags?
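The negligibility question can be given a numerical feel with a toy simulation. The sketch below uses rough, invented parameter values (the masses, cross-section areas and drag coefficient are illustrative guesses, not measurements) to integrate a fall with quadratic air drag and compare it with Galileo’s vacuum law d = gt²/2. For the heavy ball the deviation from the law is tiny; for the feather-like object, drag dominates and the ‘law’ badly misdescribes the fall.

```python
# Euler integration of a fall from rest with quadratic air drag:
#   m * dv/dt = m*g - (1/2) * rho * Cd * A * v^2
# All parameter values are rough, illustrative guesses, not data.

def fall_distance(mass, area, t_end, cd=0.5, rho=1.2, g=9.81, dt=1e-4):
    """Distance fallen from rest after t_end seconds, with air drag."""
    v = d = t = 0.0
    while t < t_end:
        a = g - 0.5 * rho * cd * area * v * v / mass
        v += a * dt
        d += v * dt
        t += dt
    return d

t_fall = 2.0                              # seconds of free fall
vacuum = 0.5 * 9.81 * t_fall**2           # Galileo's law: d = g*t^2/2

ball = fall_distance(mass=5.0, area=0.01, t_end=t_fall)       # heavy ball
feather = fall_distance(mass=0.001, area=0.01, t_end=t_fall)  # feather-like

for name, d in [("vacuum law", vacuum), ("heavy ball", ball),
                ("feather", feather)]:
    print(f"{name:>10}: {d:6.2f} m  "
          f"(deviation {100.0 * (vacuum - d) / vacuum:5.1f}%)")
```

The simulation is of course only an illustration of the logic: whether a confounding factor is ‘negligible’ is not decided by the law itself but by features of the object and its environment — which is precisely what has to be established empirically.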

One possibility is to take the all-encompassing-theory road and find out everything about possible disturbing/confounding factors — not only air resistance — influencing the fall, and build that into one great model delivering accurate predictions on what happens when the object that falls is not only a heavy ball but a feather or a plastic bag. This usually amounts to ultimately stating some kind of ceteris paribus interpretation of the ‘law.’

Another road to take would be to concentrate on the negligibility assumption and to specify the domain of applicability to be only heavy compact bodies. The price you have to pay for this is that (1) ‘negligibility’ may be hard to establish in open real-world systems, (2) the generalisation you can make from ‘sample’ to ‘population’ is heavily restricted, and (3) you actually have to use some ‘shoe leather’ and empirically try to find out how large the ‘reach’ of the ‘law’ is.

In mainstream economics, one has usually settled for the ‘theoretical’ road (and in case you think the present ‘natural experiments’ hype has changed anything, remember that to mimic real experiments, exceedingly stringent special conditions have to obtain).

In the end, it all boils down to one question — are there any Galilean ‘heavy balls’ to be found in economics, so that we can indisputably establish the existence of economic laws operating in real-world economies?

As far as I can see there are some heavy balls out there, but not even one single real economic law.

Economic factors/variables are more like feathers than heavy balls — non-negligible factors (like air resistance and chaotic turbulence) are hard to rule out as having no influence on the object studied.

Galilean experiments are hard to carry out in economics, and the theoretical ‘analogue’ models economists construct and in which they perform their ‘thought experiments’ build on assumptions that are far away from the kind of idealized conditions under which Galileo performed his experiments. The ‘nomological machines’ that Galileo and other scientists have been able to construct have no real analogues in economics. The stability, autonomy, modularity, and interventional invariance that we may find between entities in nature simply are not there in real-world economies. That’s a real-world fact, and contrary to the beliefs of most mainstream economists, it won’t go away simply by applying deductive-axiomatic economic theory with tons of more or less unsubstantiated assumptions.

By this, I do not mean to say that we have to discard all (causal) theories/laws building on modularity, stability, invariance, etc. But we have to acknowledge the fact that outside the systems that possibly fulfil these requirements/assumptions, they are of little substantial value. Running paper and pen experiments on artificial ‘analogue’ model economies is a sure way of ‘establishing’ (causal) economic laws or solving intricate econometric problems of autonomy, identification, invariance and structural stability — in the model world. But they are poor substitutes for the real thing and they don’t have much bearing on what goes on in real-world open social systems. Setting up convenient circumstances for conducting Galilean experiments may tell us a lot about what happens under those kinds of circumstances. But — few, if any, real-world social systems are ‘convenient.’ So most of those systems, theories and models, are irrelevant for letting us know what we really want to know.

To solve, understand, or explain real-world problems you actually have to know something about them — logic, pure mathematics, data simulations or deductive axiomatics don’t take you very far. Most econometrics and economic theories/models are splendid logic machines. But — applying them to the real world is a totally hopeless undertaking! The assumptions one has to make in order to successfully apply these deductive-axiomatic theories/models/machines are devastatingly restrictive and mostly empirically untestable — and hence make their real-world scope ridiculously narrow. To fruitfully analyse real-world phenomena with models and theories you cannot build on assumptions that are patently and knowably absurd. No matter how much you would like the world to consist entirely of heavy balls, the world is not like that. The world also has its fair share of feathers and plastic bags.

The problem articulated by Cartwright is that most of the ‘idealizations’ we find in mainstream economic models are not ‘core’ assumptions, but rather structural ‘auxiliary’ assumptions. Without those supplementary assumptions, the core assumptions deliver next to nothing of interest. So to come up with interesting conclusions you have to rely heavily on those other — ‘structural’ — assumptions.

Whenever model-based causal claims are made, experimentalists quickly find that these claims do not hold under disturbances that were not written into the model. Our own stock example is from auction design – models say that open auctions are supposed to foster better information exchange leading to more efficient allocation. Do they do that in general? Or at least under any real world conditions that we actually know about? Maybe. But we know that introducing the smallest unmodelled detail into the setup, for instance complementarities between different items for sale, unleashes a cascade of interactive effects. Careful mechanism designers do not trust models in the way they would trust genuine Galilean thought experiments. Nor should they …

Economic models frequently invoke entities that do not exist, such as perfectly rational agents, perfectly inelastic demand functions, and so on. As economists often defensively point out, other sciences too invoke non-existent entities, such as the frictionless planes of high-school physics. But there is a crucial difference: the false-ontology models of physics and other sciences are empirically constrained. If a physics model leads to successful predictions and interventions, its false ontology can be forgiven, at least for instrumental purposes – but such successful prediction and intervention is necessary for that forgiveness. The idealizations of economic models, by contrast, have not earned their keep in this way. So the problem is not the idealizations in themselves so much as the lack of empirical success they buy us in exchange. As long as this problem remains, claims of explanatory credit will be unwarranted.

A. Alexandrova & R. Northcott

In physics, we have theories and centuries of experience and experiments that show how gravity makes bodies move. In economics, we know there is nothing equivalent. So instead mainstream economists necessarily have to load their theories and models with sets of auxiliary structural assumptions to get any results at all in their models.

So why then do mainstream economists keep on pursuing this modelling project?


Hegel — wenn der Geist aufs Ganze geht

6 Aug, 2020 at 12:36 | Posted in Theory of Science & Methodology | Comments Off on Hegel — wenn der Geist aufs Ganze geht

Georg Wilhelm Friedrich Hegel is indisputably one of the most important philosophical thinkers of the modern era. But — 250 years after the German philosopher’s birth one may ask: What remains of Hegel? Who was he? What did he want? And how would he grasp our present and our future?

Der Glaube an die Wissenschaft

2 Aug, 2020 at 18:17 | Posted in Theory of Science & Methodology | Comments Off on Der Glaube an die Wissenschaft

Looking at the conspiracy madness romping around the internet these days, one can only come to the conclusion that irrationalism is on the march. But this frightening advance cannot be stopped by science itself driving into an ideological tunnel. No matter how convinced you are of your cause: as an activist in a democracy you must be prepared to fight for your convictions on the field of ideological dissent – instead of carrying an unclean concept of science before you like a magic spear, meant to brand all opponents as “science-hostile dolts” and shame them into silence. All that achieves is that the boundaries between science and ideology, between reason and unreason, blur ever further.

To all those who today project their hopes of salvation onto science, who hang on researchers’ every word in the hope of redeeming sentences such as “We have brought the pandemic under control once and for all” or “Climate change has been averted,” it must be said: no serious scientist can offer true peace of mind, confidence, or the faith that “everything” will turn out well. Modern science comes from physics, not from metaphysics. It therefore cannot answer how human beings should deal with their fear of the uncertain and their fear of death, or how they can make peace with the fact that they are not only masters of their fate but, through their mortality, ultimately radically subject to it.

Thea Dorn / Die Zeit
