What is causality?

22 Oct, 2020 at 13:27 | Posted in Theory of Science & Methodology | 8 Comments


Inference to the best explanation

7 Oct, 2020 at 19:28 | Posted in Theory of Science & Methodology | Comments Off on Inference to the best explanation


One of the few statisticians that I have on my blogroll is Andrew Gelman. Although not sharing his Bayesian leanings, yours truly finds his open-minded, thought-provoking and non-dogmatic statistical thinking highly recommendable. The plaidoyer infra for “reverse causal questioning” is typical Gelmanian:

When statistical and econometric methodologists write about causal inference, they generally focus on forward causal questions. We are taught to answer questions of the type “What if?”, rather than “Why?” Following the work by Rubin (1977) causal questions are typically framed in terms of manipulations: if x were changed by one unit, how much would y be expected to change? But reverse causal questions are important too … In many ways, it is the reverse causal questions that motivate the research, including experiments and observational studies, that we use to answer the forward questions …

Reverse causal reasoning is different; it involves asking questions and searching for new variables that might not yet even be in our model. We can frame reverse causal questions as model checking. It goes like this: what we see is some pattern in the world that needs an explanation. What does it mean to “need an explanation”? It means that existing explanations — the existing model of the phenomenon — does not do the job …

By formalizing reverse causal reasoning within the process of data analysis, we hope to make a step toward connecting our statistical reasoning to the ways that we naturally think and talk about causality. This is consistent with views such as Cartwright (2007) that causal inference in reality is more complex than is captured in any theory of inference … What we are really suggesting is a way of talking about reverse causal questions in a way that is complementary to, rather than outside of, the mainstream formalisms of statistics and econometrics.

In a time when scientific relativism is expanding, it is more important than ever not to reduce science to a pure discursive level and to maintain the Enlightenment tradition. There exists a reality beyond our theories and concepts of it. It is this reality that our theories in some way deal with. Contrary to positivism, yours truly would as a critical realist argue that the main task of science is not to detect event-regularities between observed facts. Rather, the task must be conceived as identifying the underlying structure and forces that produce the observed events.

In Gelman’s essay there is no explicit argument for abduction — inference to the best explanation — but I would still argue that it is de facto nothing but a very strong argument for why scientific realism and inference to the best explanation are the best alternatives for explaining what’s going on in the world we live in. The focus on causality, model checking, anomalies and context-dependence — although here expressed in statistical terms — is as close to abductive reasoning as we get in statistics and econometrics today.
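Gelman’s framing of reverse causal questions as model checking can be illustrated with a toy simulation — a minimal sketch, with entirely hypothetical data and variable names (nothing here is drawn from Gelman’s paper). The existing model explains y by x alone; a residual check reveals a pattern that “needs an explanation,” pointing to a variable not yet in the model:

```python
import numpy as np

# Hypothetical data: y is in fact driven by both x and an omitted variable z.
rng = np.random.default_rng(42)
n = 500
x = rng.normal(size=n)
z = rng.normal(size=n)                          # variable our model omits
y = 2.0 * x + 3.0 * z + rng.normal(scale=0.5, size=n)

# The existing model: y explained by x alone (ordinary least squares).
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ beta

# Model check: do the residuals show structure that 'needs an explanation'?
# A strong residual correlation with z signals that z belongs in the model —
# the reverse causal question ('why this pattern?') motivates the search.
corr = np.corrcoef(residuals, z)[0, 1]
print(f"residual-z correlation: {corr:.2f}")
```

The forward question (“what happens to y if x changes?”) is answered inside the model; the reverse question arises only when the check above fails, which is Gelman’s point about how anomalies drive variable search.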

The value of uncertainty

1 Oct, 2020 at 16:05 | Posted in Theory of Science & Methodology | Comments Off on The value of uncertainty

What is the evolutionary utility of the predictive strategy if it allows our models to remain so persistently disconnected from our external situation?

To fully understand the self-reinforcing power of such habits, we need to look once more beyond the brain. We need to attend to how the process of acting to minimise surprise ensnares our environment into the overarching error-minimising process. At the simplest level, such actions might just involve ignoring immediate sources of error – as when alcoholics preserve the belief that they are functioning well by not looking at how much they’re regularly spending on drink. But our actions can also have a lasting effect on the structure of our environment itself, by moulding it into the shape of our cognitive model. Through this process, addicted predictors can create a personal niche in which elements incompatible with their model are excluded altogether – for instance, by associating only with others who similarly engage in, and thus do not challenge, their addictive behaviours.

This mutually reinforcing circularity of habit and habitat is not a unique feature of substance addiction. In 2010, the internet activist Eli Pariser introduced the term ‘filter bubble’ to describe the growing fragmentation of the internet as individuals increasingly interact only with a limited subset of sources that fit their pre-existing biases …

As the journalist Bill Bishop argued in The Big Sort (2008), by charting the movement of US citizens into increasingly like-minded neighbourhoods over the past century, this homophilic drive has long directed our movements through physical space. In the online world, it now occurs through more than a million subreddits and innumerable Tumblr communities serving everyone from queer skateboarders to incels, flat-Earthers and furries …

Perversely, the more flexible the environment, the more it allows for the creation of self-protective bubbles and micro-niches, and hence affords the entrenchment of rigid models.

Mark Miller et al.

Why economic models do not explain

16 Sep, 2020 at 12:33 | Posted in Theory of Science & Methodology | 6 Comments

Analogue-economy models may picture Galilean thought experiments or they may describe credible worlds. In either case we have a problem in taking lessons from the model to the world. The problem is the venerable one of unrealistic assumptions, exacerbated in economics by the fact that the paucity of economic principles with serious empirical content makes it difficult to do without detailed structural assumptions. But the worry is not just that the assumptions are unrealistic; rather, they are unrealistic in just the wrong way.

Nancy Cartwright

One of the limitations of economics is the restricted possibility of performing experiments, forcing it to rely mainly on observational studies for knowledge of real-world economies.

But still — the idea of performing laboratory experiments holds a firm grip on our wish to discover (causal) relationships between economic ‘variables.’ If we only could isolate and manipulate variables in controlled environments, we would probably find ourselves in a situation where we with greater ‘rigour’ and ‘precision’ could describe, predict, or explain economic happenings in terms of ‘structural’ causes, ‘parameter’ values of relevant variables, and economic ‘laws.’

Galileo Galilei’s experiments are often held up as exemplary of how to perform experiments to learn something about the real world. Galileo’s heavy balls dropping from the tower of Pisa confirmed that the distance an object falls is proportional to the square of time, and that this law (empirical regularity) of falling bodies is applicable outside a vacuum tube when, e.g., air resistance is negligible.

The big problem is to decide or find out exactly for which objects air resistance (and other potentially ‘confounding’ factors) is ‘negligible.’ In the case of heavy balls, air resistance is obviously negligible, but how about feathers or plastic bags?
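The negligibility question can be made precise. A sketch, using a simple linear-drag model (the drag coefficient k and mass m here are illustrative assumptions, not part of Galileo’s own treatment):

```latex
% Galileo's idealization (vacuum): distance proportional to the square of time
s(t) = \tfrac{1}{2}\, g\, t^{2}

% With linear air resistance (drag coefficient k, mass m), starting from rest:
m \frac{dv}{dt} = m g - k v
\quad \Longrightarrow \quad
v(t) = \frac{m g}{k}\Bigl(1 - e^{-k t / m}\Bigr)
```

The idealized law is recovered only while the drag term is small, i.e. while \(k v \ll m g\): easily satisfied by a heavy ball over a short fall, violated almost immediately by a feather or a plastic bag.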

One possibility is to take the all-encompassing-theory road and find out all about possible disturbing/confounding factors — not only air resistance — influencing the fall, and build that into one great model delivering accurate predictions on what happens when the object that falls is not only a heavy ball but feathers and plastic bags. This usually amounts to ultimately stating some kind of ceteris paribus interpretation of the ‘law.’

Another road to take would be to concentrate on the negligibility assumption and to specify the domain of applicability to be only heavy compact bodies. The price you have to pay for this is that (1) ‘negligibility’ may be hard to establish in open real-world systems, (2) the generalisation you can make from ‘sample’ to ‘population’ is heavily restricted, and (3) you actually have to use some ‘shoe leather’ and empirically try to find out how large the ‘reach’ of the ‘law’ is.

In mainstream economics, one has usually settled for the ‘theoretical’ road (and in case you think the present ‘natural experiments’ hype has changed anything, remember that to mimic real experiments, exceedingly stringent special conditions have to obtain).

In the end, it all boils down to one question — are there any Galilean ‘heavy balls’ to be found in economics, so that we can indisputably establish the existence of economic laws operating in real-world economies?

As far as I can see there are some heavy balls out there, but not even one single real economic law.

Economic factors/variables are more like feathers than heavy balls — non-negligible factors (like air resistance and chaotic turbulence) are hard to rule out as having no influence on the object studied.

Galilean experiments are hard to carry out in economics, and the theoretical ‘analogue’ models economists construct and in which they perform their ‘thought experiments’ build on assumptions that are far away from the kind of idealized conditions under which Galileo performed his experiments. The ‘nomological machines’ that Galileo and other scientists have been able to construct have no real analogues in economics. The stability, autonomy, modularity, and interventional invariance that we may find between entities in nature simply are not there in real-world economies. That is a real-world fact, and contrary to the beliefs of most mainstream economists, it won’t go away simply by applying deductive-axiomatic economic theory with tons of more or less unsubstantiated assumptions.

By this, I do not mean to say that we have to discard all (causal) theories/laws building on modularity, stability, invariance, etc. But we have to acknowledge the fact that outside the systems that possibly fulfil these requirements/assumptions, they are of little substantial value. Running paper-and-pen experiments on artificial ‘analogue’ model economies is a sure way of ‘establishing’ (causal) economic laws or solving intricate econometric problems of autonomy, identification, invariance and structural stability — in the model world. But they are poor substitutes for the real thing and they don’t have much bearing on what goes on in real-world open social systems. Setting up convenient circumstances for conducting Galilean experiments may tell us a lot about what happens under those kinds of circumstances. But — few, if any, real-world social systems are ‘convenient.’ So most of those systems, theories and models are irrelevant for letting us know what we really want to know.

To solve, understand, or explain real-world problems you actually have to know something about them — logic, pure mathematics, data simulations or deductive axiomatics don’t take you very far. Most econometrics and economic theories/models are splendid logic machines. But — applying them to the real world is a totally hopeless undertaking! The assumptions one has to make in order to successfully apply these deductive-axiomatic theories/models/machines are devastatingly restrictive and mostly empirically untestable — and hence make their real-world scope ridiculously narrow. To fruitfully analyse real-world phenomena with models and theories you cannot build on patently and knowingly absurd assumptions. No matter how much you would like the world to entirely consist of heavy balls, the world is not like that. The world also has its fair share of feathers and plastic bags.

The problem articulated by Cartwright is that most of the ‘idealizations’ we find in mainstream economic models are not ‘core’ assumptions, but rather structural ‘auxiliary’ assumptions. Without those supplementary assumptions, the core assumptions deliver next to nothing of interest. So to come up with interesting conclusions you have to rely heavily on those other — ‘structural’ — assumptions.

Whenever model-based causal claims are made, experimentalists quickly find that these claims do not hold under disturbances that were not written into the model. Our own stock example is from auction design – models say that open auctions are supposed to foster better information exchange leading to more efficient allocation. Do they do that in general? Or at least under any real world conditions that we actually know about? Maybe. But we know that introducing the smallest unmodelled detail into the setup, for instance complementarities between different items for sale, unleashes a cascade of interactive effects. Careful mechanism designers do not trust models in the way they would trust genuine Galilean thought experiments. Nor should they …

Economic models frequently invoke entities that do not exist, such as perfectly rational agents, perfectly inelastic demand functions, and so on. As economists often defensively point out, other sciences too invoke non-existent entities, such as the frictionless planes of high-school physics. But there is a crucial difference: the false-ontology models of physics and other sciences are empirically constrained. If a physics model leads to successful predictions and interventions, its false ontology can be forgiven, at least for instrumental purposes – but such successful prediction and intervention is necessary for that forgiveness. The idealizations of economic models, by contrast, have not earned their keep in this way. So the problem is not the idealizations in themselves so much as the lack of empirical success they buy us in exchange. As long as this problem remains, claims of explanatory credit will be unwarranted.

A. Alexandrova & R. Northcott

In physics, we have theories and centuries of experience and experiments that show how gravity makes bodies move. In economics, we know there is nothing equivalent. So instead mainstream economists necessarily have to load their theories and models with sets of auxiliary structural assumptions to get any results at all in their models.

So why then do mainstream economists keep on pursuing this modelling project?


Hegel — when the spirit aims at the whole

6 Aug, 2020 at 12:36 | Posted in Theory of Science & Methodology | Comments Off on Hegel — when the spirit aims at the whole


Georg Wilhelm Friedrich Hegel is indisputably one of the most important philosophical thinkers of the modern era. But — 250 years after the German philosopher’s birth one may ask: What remains of Hegel? Who was he? What did he want? And how would he grasp our present and future?

Faith in science

2 Aug, 2020 at 18:17 | Posted in Theory of Science & Methodology | Comments Off on Faith in science

Looking at the conspiracy madness romping around the internet these days, one can only come to the conclusion that irrationalism is on the march. But this frightening advance cannot be stopped by science itself driving into an ideological tunnel. No matter how convinced you are of your cause: as an activist in a democracy you must be prepared to fight for your convictions on the field of ideological dissent — instead of carrying an unclean concept of science before you like a magic spear meant to brand all opponents as “science-hostile fools” and shame them into silence. All that achieves is that the boundaries between science and ideology, between reason and unreason, become ever more blurred.

To all those who today project their hopes of salvation onto science, who hang on researchers’ every word in the hope of redemptive sentences like “We have brought the pandemic under control once and for all” or “Climate change has been averted”, let it be said: no serious scientist can offer true peace of mind, confidence, or the faith that “everything” will turn out well. Modern science comes from physics, not from metaphysics. It therefore cannot give answers as to how human beings should deal with their fear of the uncertain and their fear of death, or how they can make peace with the fact that they are not only masters of their fate but, through their mortality, ultimately radically subjected.

Thea Dorn / Die Zeit

Postmodern thinking

1 Jul, 2020 at 15:08 | Posted in Theory of Science & Methodology | 1 Comment

The compulsive types there correspond to the paranoids here. The wistful opposition to factual research, the legitimate consciousness that scientism forgets what is best, exacerbates through its naïveté the split from which it suffers. Instead of comprehending the facts, behind which others are barricaded, it hurriedly throws together whatever it can grab from them, rushing off to play so uncritically with apocryphal cognitions, with a couple of isolated and hypostatized categories, and with itself, that it is easily disposed of by referring to the unyielding facts. It is precisely the critical element which is lost in the apparently independent thought. The insistence on the secret of the world hidden beneath the shell, which dares not explain how it relates to the shell, only reconfirms through such abstemiousness the thought that there must be good reasons for that shell, which one ought to accept without question. Between the pleasure of emptiness and the lie of plenitude, the ruling condition of the spirit [Geistes: mind] permits no third option.

Long before ‘postmodernism’ became fashionable among a certain kind of ‘intellectuals’, Adorno wrote searching critiques of this kind of thinking.

When listening to — or reading — the postmodern mumbo-jumbo that surrounds us today in social sciences and humanities, I often find myself wishing for that special Annie Hall moment of truth:

Cultures of expertise

21 Jun, 2020 at 23:32 | Posted in Theory of Science & Methodology | Comments Off on Cultures of expertise

 

Social science — a plaidoyer

16 Jun, 2020 at 08:37 | Posted in Theory of Science & Methodology | 4 Comments

One of the most important tasks of social sciences is to explain the events, processes, and structures that take place and act in society. But the researcher cannot stop at this. As a consequence of the relations and connections that the researcher finds, a will and demand arise for critical reflection on the findings. To show that unemployment depends on rigid social institutions or adaptations to European economic aspirations to integration, for instance, constitutes at the same time a critique of these conditions. It also entails an implicit critique of other explanations that one can show to be built on false beliefs. The researcher can never be satisfied with establishing that false beliefs exist but must go on to seek an explanation for why they exist. What is it that maintains and reproduces them? To show that something causes false beliefs – and to explain why – constitutes at the same time a critique.

This I think is something particular to the humanities and social sciences. There is no full equivalent in the natural sciences since the objects of their study are not fundamentally created by human beings in the same sense as the objects of study in social sciences. We do not criticize apples for falling to earth in accordance with the law of gravity.

The explanatory critique that constitutes all good social science thus has repercussions on the reflective person in society. To digest the explanations and understandings that social sciences can provide means a simultaneous questioning and critique of one’s self-understanding and the actions and attitudes it gives rise to. Science can play an important emancipating role in this way. Human beings can fulfill and develop themselves only if they do not base their thoughts and actions on false beliefs about reality. Fulfillment may also require changing fundamental structures of society. Understanding of the need for this change may issue from various sources like everyday praxis and reflection as well as from science.

Explanations of social phenomena must be subject to criticism, and this criticism must be an essential part of the task of social science. Social science has to be an explanatory critique. The researcher’s explanations have to constitute a critical attitude toward the very object of research, society. Hopefully, the critique may result in proposals for how the institutions and structures of society can be constructed. The social scientist has a responsibility to try to elucidate possible alternatives to existing institutions and structures.

In a time when scientific relativism is on the march, it is important to keep up the claim for not reducing science to a pure discursive level. Against all kinds of social constructivism we have to maintain the Enlightenment tradition of thinking of reality as something that is not created by our views of it and of the main task of science as studying the structure of this reality. Ontology is important. It is the foundation for all sustainable epistemologies.

The problem with positivist social science is not that it gives the wrong answers, but rather that in a strict sense it does not give answers at all. Its explanatory models presuppose that the social reality is closed, and since social reality is fundamentally open, models of that kind do not explain anything of what happens in such a universe.

Crazy econometricians

12 May, 2020 at 17:47 | Posted in Theory of Science & Methodology | Comments Off on Crazy econometricians

With a few notable exceptions, such as the planetary systems, our most beautiful and exact applications of the laws of physics are all within the entirely artificial and precisely constrained environment of the modern laboratory … Haavelmo remarks that physicists are very clever. They confine their predictions to the outcomes of their experiments. They do not try to predict the course of a rock in the mountains and trace the development of the avalanche. It is only the crazy econometrician who tries to do that, he says.

John von Neumann on mathematics

12 May, 2020 at 15:53 | Posted in Theory of Science & Methodology | 2 Comments


My ten favourite science books

2 May, 2020 at 18:43 | Posted in Theory of Science & Methodology | 1 Comment


• Bhaskar, Roy (1978). A realist theory of science

• Cartwright, Nancy (2007). Hunting causes and using them

• Freedman, David (2010). Statistical models and causal inference

• Georgescu-Roegen, Nicholas (1971). The Entropy Law and the Economic Process

• Harré, Rom (1960). An introduction to the logic of the sciences

• Keynes, John Maynard (1936). The General Theory

• Lawson, Tony (1997). Economics and reality

• Lipton, Peter (2004). Inference to the best explanation 

• Marx, Karl (1867). Das Kapital

• Polanyi, Karl (1944). The Great Transformation

The relationship between logic and truth

26 Apr, 2020 at 14:20 | Posted in Theory of Science & Methodology | 13 Comments

 

To be ‘analytical’ and ‘logical’ is something most people find recommendable. These words have a positive connotation. Scientists are supposed to think deeper than most other people because they use ‘logical’ and ‘analytical’ methods. In dictionaries, logic is often defined as “reasoning conducted or assessed according to strict principles of validity” and ‘analysis’ as having to do with “breaking something down.”

But that’s not the whole picture. As used in science, analysis usually means something more specific. It means to separate a problem into its constituent elements so as to reduce complex — and often complicated — wholes into smaller (simpler) and more manageable parts. You take the whole and break it down (decompose it) into its separate parts. Looking at the parts separately, one at a time, you are supposed to gain a better understanding of how these parts operate and work. Built on that more or less ‘atomistic’ knowledge, you are then supposed to be able to predict and explain the behaviour of the complex and complicated whole.

In economics, that means you take the economic system and divide it into its separate parts, analyse these parts one at a time, and then after analysing the parts separately, you put the pieces together.

The ‘analytical’ approach is typically used in economic modelling, where you start with a simple model with few isolated and idealized variables. By ‘successive approximations,’ you then add more and more variables and finally get a ‘true’ model of the whole.

This may sound like a convincing and good scientific approach.

But there is a snag!

The procedure only really works when you have a machine-like whole/system/economy where the parts appear in fixed and stable configurations. And if there is anything we know about reality, it is that it is not a machine! The world we live in is not a ‘closed’ system. On the contrary. It is an essentially ‘open’ system. Things are uncertain, relational, interdependent, complex, and ever-changing.

Without assuming that the underlying structure of the economy that you try to analyse remains stable/invariant/constant, there is no chance the equations of the model remain constant. That is the very rationale for why economists use (often only implicitly) the assumption of ceteris paribus. But — nota bene — this can only be a hypothesis. You have to argue the case. If you cannot supply any sustainable justifications or warrants for the adequacy of making that assumption, the whole analytical economic project becomes pointless, non-informative nonsense.

Not only do we have to assume that we can shield off variables from each other analytically (external closure). We also have to assume that each and every variable is itself amenable to being understood as a stable, regularity-producing machine (internal closure). Which, of course, we know is as a rule not possible.

Some things, relations, and structures are not analytically graspable. Trying to analyse parenthood, marriage, employment, etc., piece by piece doesn’t make sense. To be a chieftain, a capital-owner, or a slave is not an individual property of an individual. It can come about only when individuals are integral parts of certain social structures and positions. Social relations and contexts cannot be reduced to individual phenomena. A cheque presupposes a banking system, and being a tribe-member presupposes a tribe. When the ‘analytical’ approach fails to take account of this, economic ‘analysis’ becomes uninformative nonsense.

Using ‘logical’ and ‘analytical’ methods in social sciences means that economists succumb to the fallacy of composition — the belief that the whole is nothing but the sum of its parts. In society and in the economy this is arguably not the case. An adequate analysis of society and economy a fortiori cannot proceed by just adding up the acts and decisions of individuals. The whole is more than a sum of its parts.

Mainstream economics is built on using the ‘analytical’ method. The models built with this method presuppose that social reality is ‘closed.’ Since social reality is known to be fundamentally ‘open,’ it is difficult to see how models of that kind can explain anything about what happens in such a universe. Postulating closed conditions to make models operational and then impute these closed conditions to society’s real structure is an unwarranted procedure that does not take necessary ontological considerations seriously.

In face of the kind of methodological individualism and rational choice theory that dominate mainstream economics we have to admit that even if knowing the aspirations and intentions of individuals are necessary prerequisites for giving explanations of social events, they are far from sufficient. Even the most elementary ‘rational’ actions in society presuppose the existence of social forms that it is not possible to reduce to the intentions of individuals. Here, the ‘analytical’ method fails again.

The overarching flaw with the ‘analytical’ economic approach using methodological individualism and rational choice theory is basically that they reduce social explanations to purportedly individual characteristics. But many of the characteristics and actions of the individual originate in and are made possible only through society and its relations. Society is not a Wittgensteinian ‘Tractatus-world’ characterized by atomistic states of affairs. Society is not reducible to individuals, since the social characteristics, forces, and actions of the individual are determined by pre-existing social structures and positions. Even though society is not a volitional individual, and the individual is not an entity given outside of society, the individual (actor) and the society (structure) have to be kept analytically distinct. They are tied together through the individual’s reproduction and transformation of already given social structures.

Since at least the marginal revolution in economics in the 1870s it has been an essential feature of economics to ‘analytically’ treat individuals as essentially independent and separate entities of action and decision. But, really, in such a complex, organic and evolutionary system as an economy, that kind of independence is a deeply unrealistic assumption to make. Simply assuming that there is strict independence between the variables we try to analyse doesn’t help us in the least if that hypothesis turns out to be unwarranted.

To be able to apply the ‘analytical’ approach, economists basically have to assume that the universe consists of ‘atoms’ that exercise their own separate and invariable effects in such a way that the whole consists of nothing but an addition of these separate atoms and their changes. These simplistic assumptions of isolation, atomicity, and additivity are, however, at odds with reality. In real-world settings, we know that the ever-changing contexts make it futile to search for knowledge by making such reductionist assumptions. Real-world individuals are not reducible to contentless atoms and so are not susceptible to atomistic analysis. The world is not reducible to a set of atomistic ‘individuals’ and ‘states.’ How variable X works and influences real-world economies in situation A cannot simply be assumed to be understood or explained by looking at how X works in situation B. Knowledge of X probably does not tell us much if we do not take into consideration how it depends on Y and Z. It can never be legitimate just to assume that the world is ‘atomistic.’ Assuming real-world additivity cannot be the right thing to do if the things we have around us, rather than being ‘atoms,’ are ‘organic’ entities.

If we want to develop a new and better economics we have to give up the single-minded insistence on using a deductivist straitjacket methodology and the ‘analytical’ method. To focus scientific endeavours on proving things in models is a gross misapprehension of the purpose of economic theory. Deductivist models and ‘analytical’ methods disconnected from reality are not relevant for predicting, explaining or understanding real-world economies.

To have ‘consistent’ models and ‘valid’ evidence is not enough. What economics needs are real-world relevant models and sound evidence. Aiming only for ‘consistency’ and ‘validity’ is setting economics’ aspiration level too low for developing a realist and relevant science.

Economics is not mathematics or logic. It’s about society. The real world.

Models may help us think through problems. But we should never forget that the formalism we use in our models is not self-evidently transportable to a largely unknown and uncertain reality. The tragedy with mainstream economic theory is that it thinks that the logic and mathematics used are sufficient for dealing with our real-world problems. They are not! Model deductions based on questionable assumptions can never be anything but pure exercises in hypothetical reasoning.

The world in which we live is inherently uncertain and quantifiable probabilities are the exception rather than the rule. To every statement about it is attached a ‘weight of argument’ that makes it impossible to reduce our beliefs and expectations to a one-dimensional stochastic probability distribution. If “God does not play dice” as Einstein maintained, I would add “nor do people.” The world as we know it has limited scope for certainty and perfect knowledge. Its intrinsic and almost unlimited complexity and the interrelatedness of its organic parts prevent the possibility of treating it as constituted by ‘legal atoms’ with discretely distinct, separable and stable causal relations. Our knowledge accordingly has to be of a rather fallible kind.

If the real world is fuzzy, vague and indeterminate, then why should our models be built on a desire to describe it as precise and predictable? Even if there always has to be a trade-off between theory-internal validity and external validity, we have to ask ourselves whether our models are relevant.

‘Human logic’ has to supplant the classical — formal — logic of deductivism if we want to have anything of interest to say of the real world we inhabit. Logic is a marvellous tool in mathematics and axiomatic-deductivist systems, but a poor guide for action in real-world systems, in which concepts and entities are without clear boundaries and continually interact and overlap. In this world, I would say we are better served with a methodology that takes into account that the more we know, the more we know we do not know.

Mathematics and logic cannot establish the truth value of facts. Never have. Never will.

Why we need Big Theories

4 Apr, 2020 at 18:13 | Posted in Theory of Science & Methodology | Comments Off on Why we need Big Theories


Judea Pearl and interventionist causal models (wonkish)

11 Mar, 2020 at 10:53 | Posted in Theory of Science & Methodology | Comments Off on Judea Pearl and interventionist causal models (wonkish)

As X’s effect on some other variable in the system S depends on there being a possible intervention on X, and the possibility of an intervention in turn depends on the modularity of S, it is a necessary condition for something to be a cause that the system in which it is a cause is modular with respect to that factor. The requirement that all systems are modular with respect to their causes can, in a way, be regarded as an interventionist addition to the unmanipulable causes problem … This implication has also been criticized in particular by Nancy Cartwright. She has proposed that many causal systems are not modular … Pearl has responded to this in 2009 (sect. 11.4.7), where he proposes, on the one hand, that it is in general sufficient that a symbolic intervention can be performed on the causal model, for the determination of causal effects, and on the other hand that we nevertheless could isolate the individual causal contributions …

It is tempting — to philosophers at least — to equate claims in this literature, about the meaning of causal claims being given by claims about what would happen under a hypothetical intervention — or an explicit definition of causation to the same effect — with that same claim as it would be interpreted in a philosophical context. That is to say, such a claim would normally be understood there as giving the truth conditions of said causal claims. It is generally hard to know whether any such beliefs are involved in the scientific context. However, Pearl in particular has denied, in increasingly explicit terms, that this is what is intended … He has recently liked to describe a factor Y, that is causally dependent on another factor X, as “listening” to X and determining “its value in response to what it hears” … This formulation suggests to me that it is the fact that Y is “listening” to X that explains why and how Y changes under an intervention on X. That is, what a possible intervention does, is to isolate the influence that X has on Y, in virtue of Y’s “listening” to X. Thus, Pearl’s theory does not imply an interventionist theory of causation, as we understand that concept in this monograph. This, moreover, suggests that the intervention that is always available, for any cause that is represented by a variable in a causal model, is a formal operation. I take this to be supported by the way he responds to Nancy Cartwright’s objection that modularity does not hold of all causal systems: it is sufficient that a symbolic intervention can be performed. Thus, the operation alluded to in Pearl’s operationalization of causation is a formal operation, always available, regardless of whether it corresponds to any possible intervention event or not.

An interesting dissertation, well worth reading for anyone interested in the ongoing debate on the reach of interventionist causal theories.
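For readers who prefer to see Pearl’s “listening” metaphor and the role of modularity in concrete terms, here is a minimal sketch of a symbolic intervention on a toy structural causal model. The variable names and linear equations are entirely hypothetical, chosen only for illustration — this is not Pearl’s own formalism, just the standard textbook idea that a do-intervention replaces one variable’s equation while leaving the others untouched:

```python
# A tiny structural causal model: Z -> X -> Y, with Z also affecting Y.
# Each endogenous variable "listens" to its parents via its own equation.
# A symbolic ("do") intervention on X replaces X's equation with a constant;
# modularity means Y's equation is left exactly as it was.

def simulate(do_x=None):
    """Evaluate the model; optionally force X by intervention."""
    z = 1.0                                    # exogenous background factor
    x = 2.0 * z if do_x is None else do_x      # X "listens" to Z, unless we intervene
    y = 3.0 * x + z                            # Y "listens" to X and Z
    return x, y

# Observationally, X is determined by Z:
x_obs, y_obs = simulate()          # x = 2.0, y = 3*2 + 1 = 7.0

# Under the symbolic intervention do(X = 5), only X's equation changes:
x_do, y_do = simulate(do_x=5.0)    # y = 3*5 + 1 = 16.0
```

The point of the example is Cartwright’s worry in miniature: the formal operation of swapping out X’s equation is always available in the model, whether or not any physical intervention corresponding to it is possible in the system being modelled.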

