Immanuel Kant at 300 

22 Apr, 2024 at 18:04 | Posted in Theory of Science & Methodology | Leave a comment

April 22, 2024, marks the 300th anniversary of the birth of Immanuel Kant, one of the greatest philosophers in the history of philosophy. 

Kant’s ideas of the Enlightenment are still relevant, despite the numerous criticisms that have been levelled against them. The Enlightenment was characterized by a spirit of exploration that led to new discoveries in both science and culture. Rather than promoting a narrow worldview, it encouraged people to question assumptions and religious beliefs. It still provides a framework for addressing some of the most pressing problems facing society, such as climate change and social inequality.

While the Enlightenment has been criticized for its flaws and limitations, its ideas and values still have much to offer us today. Discussing philosophy and philosophers — just as economics and economists — has to be done in a contextualized place and time. Judging people who lived more than 200 years ago by the standard of present-day (scientific) knowledge is nothing but anachronistic. That Kant, like most people of his time, held views that were misogynistic, discriminatory, or even racist, is not the question. We all know that. What is much more interesting is to situate these views and try to analyze and understand why and in which historical, social, and cultural contexts they were anchored. That said, I think that the philosopher Kant — and the philosophers and scientists who have followed in his footsteps — had he lived today, would strongly condemn all kinds of racism and other attacks on universal human rights and enlightenment.

Enlightenment is man’s emergence from his self-imposed nonage. Nonage is the inability to use one’s own understanding without another’s guidance. This nonage is self-imposed if its cause lies not in lack of understanding but in indecision and lack of courage to use one’s own mind without another’s guidance. Sapere aude! “Have the courage to use your own understanding,” is therefore the motto of the enlightenment.

Laziness and cowardice are the reasons why such a large part of mankind gladly remain minors all their lives, long after nature has freed them from external guidance … Those guardians who have kindly taken supervision upon themselves see to it that the overwhelming majority of mankind — among them the entire fair sex — should consider the step to maturity, not only as hard, but as extremely dangerous. First, these guardians make their domestic cattle stupid and carefully prevent the docile creatures from taking a single step without the leading-strings to which they have fastened them. Then they show them the danger that would threaten them if they should try to walk by themselves. Now this danger is really not very great; after stumbling a few times they would, at last, learn to walk. However, examples of such failures intimidate and generally discourage all further attempts …

Dogmas and formulas, these mechanical tools designed for reasonable use — or rather abuse — of his natural gifts, are the fetters of an everlasting nonage …

Enlightenment requires nothing but freedom — and the most innocent of all that may be called “freedom”: freedom to make public use of one’s reason in all matters. Now I hear the cry from all sides: “Do not argue!” The officer says: “Do not argue — drill!” The tax collector: “Do not argue — pay!” The pastor: “Do not argue — believe!” Only one ruler in the world says: “Argue as much as you please, but obey!” We find restrictions on freedom everywhere. But which restriction is harmful to enlightenment? Which restriction is innocent, and which advances enlightenment? I reply: the public use of one’s reason must be free at all times, and this alone can bring enlightenment to mankind.

Immanuel Kant

How Einstein taught me a great lesson

27 Feb, 2024 at 19:06 | Posted in Theory of Science & Methodology | 2 Comments


The Keynes-Ramsey-Savage debate on probability

31 Jan, 2024 at 13:20 | Posted in Theory of Science & Methodology | 2 Comments

Mainstream economics nowadays usually assumes that agents that have to make choices under conditions of uncertainty behave according to Bayesian rules, axiomatized by Ramsey (1931) and Savage (1954) — that is, they maximize expected utility for some subjective probability measure that is continually updated according to Bayes’ theorem. If not, they are supposed to be irrational, and ultimately — via some “Dutch book” or “money pump” argument — susceptible to being ruined by some clever “bookie”.
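A minimal sketch of the “Dutch book” argument referred to above, with purely illustrative numbers of my own: an agent whose degrees of belief in two mutually exclusive and exhaustive outcomes sum to more than 1 will accept a pair of bets that loses money no matter which outcome occurs.

```python
# Illustrative (hypothetical) incoherent beliefs: they sum to 1.2 rather than 1.
beliefs = {"recession": 0.7, "no_recession": 0.5}
stake = 1.0   # each bet pays out `stake` if its outcome occurs

# The agent regards a bet on outcome o as fair at the price beliefs[o] * stake,
# so the 'clever bookie' sells both bets at exactly those prices.
total_price = sum(p * stake for p in beliefs.values())   # 1.2

for outcome in beliefs:
    payoff = stake   # exactly one of the two bets wins
    print(f"if {outcome!r} occurs: net gain = {payoff - total_price:+.2f}")   # -0.20 either way
```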

Bayesianism reduces questions of rationality to questions of internal consistency (coherence) of beliefs, but – even granted this questionable reductionism – do rational agents really have to be Bayesian? As I have been arguing elsewhere (e.g. here and here) there is no strong warrant for believing so.

In many of the situations that are relevant to economics, one could argue that there is simply not enough adequate and relevant information to ground beliefs of a probabilistic kind and that in those situations it is not really possible, in any relevant way, to represent an individual’s beliefs in a single probability measure.

Say you have come to learn (based on experience and tons of data) that the probability of you becoming unemployed in Sweden is 10%. Having moved to another country (where you have no experience of your own and no data) you have no information on unemployment and a fortiori nothing to help you construct any probability estimate. A Bayesian would, however, argue that you would have to assign probabilities to the mutually exclusive alternative outcomes and that these have to add up to 1, if you are rational. That is, in this case – and based on symmetry – a rational individual would have to assign a probability of 50% to becoming unemployed and 50% to becoming employed.

That feels intuitively wrong, though, and I guess most people would agree. Bayesianism cannot distinguish symmetry-based probabilities grounded in information from symmetry-based probabilities grounded in an absence of information. In these kinds of situations, most of us would rather say that it is simply irrational to be a Bayesian and better instead to admit that we “simply do not know” or that we feel ambiguous and undecided. Arbitrary and ungrounded probability claims are more irrational than being undecided in the face of genuine uncertainty, so if there is not sufficient information to ground a probability distribution it is better to acknowledge that simpliciter, rather than pretending to possess a certitude that we simply do not possess.

I think this critique of Bayesianism is in accordance with the views of John Maynard Keynes in A Treatise on Probability (1921) and The General Theory (1937). According to Keynes, we live in a world permeated by unmeasurable uncertainty – not quantifiable stochastic risk – which often forces us to make decisions based on anything but rational expectations. Sometimes we “simply do not know.” Keynes would not have accepted the view of Bayesian economists, according to whom expectations “tend to be distributed, for the same information set, about the prediction of the theory.” Keynes, rather, thinks that we base our expectations on the confidence or “weight” we put on different events and alternatives. To Keynes, expectations are a question of weighing probabilities by “degrees of belief”, beliefs that have precious little to do with the kind of stochastic probabilistic calculations made by the rational agents modelled by Bayesian economists.
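Keynes’s notion of “weight” can be given a simple numerical illustration (my own, not taken from Keynes or from the Bayesian literature discussed here): two degrees of belief can share the same point probability while resting on utterly different amounts of evidence, which is exactly what a single probability number fails to register.

```python
def beta_mean_sd(a, b):
    """Mean and standard deviation of a Beta(a, b) distribution."""
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, var ** 0.5

print(beta_mean_sd(1, 1))       # no evidence at all:          mean 0.50, sd ~ 0.29
print(beta_mean_sd(500, 500))   # roughly 1,000 observations:  mean 0.50, sd ~ 0.02
```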

Stressing the importance of Keynes’ view on uncertainty, John Kay writes in the Financial Times:

For Keynes, probability was about believability, not frequency. He denied that our thinking could be described by a probability distribution over all possible future events, a statistical distribution that could be teased out by shrewd questioning – or discovered by presenting a menu of trading opportunities. In the 1920s he became engaged in an intellectual battle on this issue, in which the leading protagonists on one side were Keynes and the Chicago economist Frank Knight, opposed by a Cambridge philosopher, Frank Ramsey, and later by Jimmie Savage, another Chicagoan.

Keynes and Knight lost that debate, and Ramsey and Savage won, and the probabilistic approach has maintained academic primacy ever since …

I used to tell students who queried the premise of “rational” behaviour in financial markets – where rational means based on Bayesian subjective probabilities – that people had to behave in this way because if they did not, people would devise schemes that made money at their expense. I now believe that observation is correct but does not have the implication I sought. People do not behave in line with this theory, with the result that others in financial markets do devise schemes that make money at their expense.

Although this on the whole gives a succinct and correct picture of Keynes’s view on probability, I think it’s necessary to somewhat qualify in what way and to what extent Keynes “lost” the debate with the Bayesians Frank Ramsey and Jim Savage.

In economics, it’s an indubitable fact that few mainstream neoclassical economists work within the Keynesian paradigm. All more or less subscribe to some variant of Bayesianism. And some even say that Keynes acknowledged he was wrong when presented with Ramsey’s theory. This is a view that has unfortunately also been promulgated by Robert Skidelsky in his otherwise masterly biography of Keynes. But I think it’s fundamentally wrong. Let me elaborate on this point (the argumentation is more fully presented in my book John Maynard Keynes (SNS, 2007)).

It’s a debated issue in newer research on Keynes whether he, as some researchers maintain, fundamentally changed his view on probability after the critique levelled against his A Treatise on Probability by Frank Ramsey. It has been exceedingly difficult to present evidence for this being the case.

Ramsey’s critique was mainly that the kind of probability relations that Keynes was speaking of in the Treatise actually didn’t exist and that Ramsey’s own procedure (betting) made it much easier to find out the ‘degrees of belief’ people actually have. I question this both from a descriptive and a normative point of view.

In his response to Ramsey, Keynes says only that Ramsey “is right” in that people’s “degrees of belief” basically emanate from human nature rather than from formal logic.

Patrick Maher, former professor of philosophy at the University of Illinois, even suggests that Ramsey’s critique of Keynes’s probability theory in some regards is invalid:

Keynes’s book was sharply criticized by Ramsey. In a passage that continues to be quoted approvingly, Ramsey wrote:

“But let us now return to a more fundamental criticism of Mr. Keynes’ views, which is the obvious one that there really do not seem to be any such things as the probability relations he describes. He supposes that, at any rate in certain cases, they can be perceived; but speaking for myself I feel confident that this is not true. I do not perceive them, and if I am to be persuaded that they exist it must be by argument; moreover, I shrewdly suspect that others do not perceive them either, because they are able to come to so very little agreement as to which of them relates any two given propositions.” (Ramsey 1926, 161)

I agree with Keynes that inductive probabilities exist and we sometimes know their values. The passage I have just quoted from Ramsey suggests the following argument against the existence of inductive probabilities. (Here P is a premise and C is the conclusion.)

P: People are able to come to very little agreement about inductive probabilities.
C: Inductive probabilities do not exist.

P is vague (what counts as “very little agreement”?) but its truth is still questionable. Ramsey himself acknowledged that “about some particular cases there is agreement” (28) … In any case, whether complicated or not, there is more agreement about inductive probabilities than P suggests …

I have been evaluating Ramsey’s apparent argument from P to C. So far I have been arguing that P is false and responding to Ramsey’s objections to unmeasurable probabilities. Now I want to note that the argument is also invalid. Even if P were true, it could be that inductive probabilities exist in the (few) cases that people generally agree about. It could also be that the disagreement is due to some people misapplying the concept of inductive probability in cases where inductive probabilities do exist. Hence it is possible for P to be true and C false …

I conclude that Ramsey gave no good reason to doubt that inductive probabilities exist.

Ramsey’s critique made Keynes emphasize more strongly the individuals’ own views as the basis for probability calculations, and place less stress on their beliefs being rational. But Keynes’s theory doesn’t stand or fall with his view of the basis for our “degrees of belief” as logical. The core of his theory — when and how we can measure and compare different probabilities — he doesn’t change. Unlike Ramsey, he wasn’t at all sure that probabilities always were one-dimensional, measurable, quantifiable or even comparable entities.

A philosopher’s look at science

11 Jan, 2024 at 17:08 | Posted in Theory of Science & Methodology | 1 Comment

You will already be familiar with the fact that broad swathes of social science research are given over to establishing, analysing, generalising, theorising about and using statistical associations that are manipulated with the assumptions of probability theory.

This makes sense if probabilities can be attached to broad swathes of the phenomena that social science is meant to deal with. But can they? Here we face the same issue that you will meet when I discuss the assumption of universal determinism: is the social world really that orderly? Perhaps it is my failure to see the forest for the trees, but when I look at various studies across the social sciences, from psychology, sociology and political science to economics and public health, I often cannot see grounds for this assumption, I sometimes see good evidence against it and I also see places where it seems to be leading us astray, with respect both to the accumulation and the use of knowledge.

To understand ‘non-routine’ decisions and unforeseeable changes in behaviour, ergodic probability distributions are of no avail. In a world full of genuine uncertainty — where real historical time rules the roost — the probabilities that ruled the past are not those that will rule the future.

Time is what prevents everything from happening at once. To simply assume that economic processes are ergodic and concentrate on ensemble averages — and a fortiori in any relevant sense timeless — is not a sensible way of dealing with the kind of genuine uncertainty that permeates open systems such as economies.

When you assume economic processes to be ergodic, ensemble and time averages are identical. Let me give an example: Assume we have a market with an asset priced at 100 €. Then imagine the price first goes up by 50% and then later falls by 50%. The ensemble average for this asset would be 100 € — because we here envision two parallel universes (markets), where the asset price falls by 50% to 50 € in one universe (market) and rises by 50% to 150 € in the other, giving an average of 100 € ((150+50)/2). The time average for this asset would be 75 € — because we here envision one universe (market) where the asset price first rises by 50% to 150 € and then falls by 50% to 75 € (0.5*150).
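The example can be put in a few lines of code, a minimal simulation sketch of my own (illustrative only): each period the price either rises by 50% or falls by 50% with equal probability, and the ensemble and time perspectives give very different answers.

```python
import random

def one_path(price=100.0, steps=100, seed=1):
    """Follow a single market through time (the 'time' perspective)."""
    rng = random.Random(seed)
    for _ in range(steps):
        price *= 1.5 if rng.random() < 0.5 else 0.5
    return price

# Ensemble perspective: average over many parallel one-period markets.
rng = random.Random(42)
outcomes = [100.0 * (1.5 if rng.random() < 0.5 else 0.5) for _ in range(100_000)]
print(sum(outcomes) / len(outcomes))   # ~ 100: 'on average nothing happens'

# Time perspective: one market followed over many periods collapses towards zero,
# since every up-down pair multiplies the price by 1.5 * 0.5 = 0.75.
print(one_path())
```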

From the ensemble perspective nothing really, on average, happens. From the time perspective lots of things really, on average, happen.

Assuming ergodicity, there would have been no difference at all. What is important about the fact that real social and economic processes are nonergodic is that uncertainty — not calculable risk — rules the roost. That was something both Keynes and Knight basically said in their 1921 books. Thinking about uncertainty in terms of ‘rational expectations’ and ‘ensemble averages’ has had seriously bad repercussions on the financial system.

Knight’s uncertainty concept has an epistemological foundation and Keynes’ definitely an ontological one. Of course, this also has repercussions on the issue of ergodicity in a strict methodological and mathematical-statistical sense. I think Keynes’ view is the more warranted of the two.

The most interesting and far-reaching difference between the epistemological and the ontological view is that if one subscribes to the former, Knightian view, one opens up to the mistaken belief that with better information and greater computer power, we somehow should always be able to calculate probabilities and describe the world as an ergodic universe. As Keynes convincingly argued, that is ontologically just not possible.

To Keynes, the source of uncertainty was in the nature of the real — nonergodic — world. It had to do, not only — or primarily — with the epistemological fact of us not knowing the things that today are unknown, but rather with the much deeper and far-reaching ontological fact that there often is no firm basis on which we can form quantifiable probabilities and expectations at all.

Sometimes we do not know because we cannot know. Using models based on unsubstantiated beliefs in the existence of probabilities when in fact there are none is one of the main reasons for the severe shortcomings of mainstream economics.

Why yours truly is a critical realist

4 Jan, 2024 at 17:31 | Posted in Theory of Science & Methodology | 7 Comments

What properties do societies possess that might make them possible objects of knowledge for us? My strategy in developing an answer to this question will be effectively based on a pincer movement. But in deploying the pincer I shall concentrate first on the ontological question of the properties that societies possess, before shifting to the epistemological question of how these properties make them possible objects of knowledge for us. This is not an arbitrary order of development. It reflects the condition that, for transcendental realism, it is the nature of objects that determines their cognitive possibilities for us; that, in nature, it is humanity that is contingent and knowledge, so to speak, accidental. Thus it is because sticks and stones are solid that they can be picked up and thrown, not because they can be picked up and thrown that they are solid (though that they can be handled in this sort of way may be a contingently necessary condition for our knowledge of their solidity).

No philosopher of science has influenced yours truly’s thinking more than Roy Bhaskar did. In a time when scientific relativism is still on the march, it is important to uphold his insistence on not reducing science to a purely discursive level.

Science is made possible by the fact that there exists a reality beyond our theories and concepts of it. It is this reality that our theories in some way deal with. Contrary to positivism, I cannot see that the main task of science is to detect event-regularities between observed facts. Rather, the task must be conceived as identifying the underlying structure and forces that produce the observed events.

The problem with positivist social science is not that it gives the wrong answers, but rather that in a strict sense it does not give answers at all. Its explanatory models presuppose that social reality is ‘closed,’ and since social reality is fundamentally ‘open,’ models of that kind cannot explain anything about what happens in such a universe. Positivist social science has to postulate closed conditions to make its models operational and then – totally unrealistically – impute these closed conditions to society’s real structure.

What makes knowledge in social sciences possible is the fact that society consists of social structures and positions that influence the individuals of society, partly through their being the necessary prerequisite for the actions of individuals but also because they dispose individuals to act (within a given structure) in a certain way. These structures constitute the ‘deep structure’ of society.

Our observations and theories are concept-dependent without therefore necessarily being concept-determined. There is a reality existing independently of our knowledge and theories of it. Although we cannot apprehend it without using our concepts and theories, these are not the same as reality itself. Reality and our concepts of it are not identical. Social science is made possible by existing structures and relations in society that are continually reproduced and transformed by different actors.

Explanations and predictions of social phenomena require theory constructions. Just looking for correlations between events is not enough. One has to get under the surface and see the deeper underlying structures and mechanisms that essentially constitute the social system.

The basic question one has to pose when studying social relations and events is what the fundamental relations are without which they would cease to exist. The answer will point to causal mechanisms and tendencies that act in the concrete contexts we study. Whether these mechanisms are activated, and what effects they will have in that case, is not possible to predict, since that depends on accidental and variable relations. Every social phenomenon is determined by a host of both necessary and contingent relations, and it is impossible in practice to have complete knowledge of these constantly changing relations. That is also why we can never confidently predict them. What we can do, through learning about the mechanisms of the structures of society, is to identify the driving forces behind them, thereby making it possible to indicate the direction in which things tend to develop.

The world itself should never be conflated with the knowledge we have of it. Science can only produce meaningful, relevant and realistic knowledge if it acknowledges its dependence on the world out there. Ultimately that also means that the critique yours truly levels against mainstream economics is that it doesn’t take that ontological requirement seriously.

Bad faith economics

15 Nov, 2023 at 17:48 | Posted in Theory of Science & Methodology | 1 Comment

Itzhak Gilboa, Andrew Postlewaite, Larry Samuelson and David Schmeidler (2022) describe a practice that … is not uncommon among theorists:

[O]ne may suggest a model with a descriptive interpretation in mind, but, when facing an aggressive audience, one might take a step back and rather than promoting the model as an explanation of a real-life phenomenon, present it as a ‘proof of concept’ or ‘merely an exercise’ in testing the scope of the standard paradigm. (p. 7)

Think what is going on here. A theorist has created a model which he customarily presents as a proposed explanation of some empirical phenomenon. But now he is facing an audience that has sufficient knowledge of the evidence about this phenomenon, or of other related phenomena within the explanatory scope of the model, to raise pertinent questions about the plausibility of that explanation … The theorist’s response is to make a temporary change in the way he ‘promotes’ his model … before reverting to the original one when facing less knowledgeable or less critical audiences. This surely amounts to acting in bad faith …

Suppose an experiment has been run and the opponent’s hypothesis has been rejected. Suppose the opponent then says: ‘I’m not at all surprised about your findings. When I proposed that mechanism as an explanation of your original observations, I knew it was implausible. My model was merely a theoretical exercise to show that your original observations were logically consistent with the standard theory. Now I’ll try to find an implausible explanation of your new results, and you can test that.’ This is not good science. If all that can be claimed for a model is that it is a theoretical exercise, empirical scientists should not be expected to treat its results as hypotheses that deserve to be tested.

Robert Sugden

Being able to model a ‘disciplined’ credible world, a world that somehow could be considered real or similar to the real world, is not the same as investigating the real world. Even though all theories are false, since they simplify, they may still possibly serve our pursuit of truth. But then they cannot be unrealistic or false in just any way. The falsehood or lack of realism has to be qualified (in terms of resemblance, relevance, etc.). At the very least, the minimalist demand on models in terms of credibility has to give way to a stronger epistemic demand of appropriate similarity and plausibility. One could of course also ask for a sensitivity or robustness analysis, but the credible world, even after having been tested for sensitivity and robustness, can still be a long way from reality – and unfortunately often in ways we know are important. The robustness of claims in a model does not per se give a warrant for exporting the claims to real-world target systems.

Even if epistemology is important and interesting in itself, it ought never to be anything but secondary in science. The primary questions asked have to be ontological. Only after having asked questions about ontology can we start thinking about what we can know about the world and how. If we do that, I think it is more or less necessary also to be more critical of the reasoning by modelling that has come to be considered the one and only right way to reason in mainstream economics for more than half a century now.

If we can’t warrant that the premises (assumptions) on which our model conclusions are built are true, then what’s the value of the logically correct deductions we are supposed to make with our models? From false assumptions, anything logically follows!
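To make the logical point concrete, here is a small truth-table sketch of my own, purely illustrative: a material conditional “assumptions imply conclusion” comes out true whenever the assumptions are false, so a logically valid deduction from false assumptions places no constraint at all on whether the conclusion holds.

```python
# Truth table for the material conditional: whenever the assumptions are false,
# 'assumptions -> conclusion' is true regardless of the conclusion.
for assumptions in (True, False):
    for conclusion in (True, False):
        implication = (not assumptions) or conclusion
        print(f"assumptions={assumptions!s:<5} conclusion={conclusion!s:<5} "
              f"assumptions -> conclusion: {implication}")
```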

Most mainstream economists subscribe (although not often consciously or explicitly) to a deductive-nomological view of scientific explanation and prediction (an explanation of an event being nothing but a prediction of its occurrence), in which prediction and explanation are things to deduce from law-like hypotheses and a set of antecedent/initial conditions. But — and on this, both Hempel and Popper were very explicit — to count as adequate/sound, the explanans must be true. Explanation and prediction in the models we construct and use are not only a question of logical form. Models that intend to say something about the real world can’t escape dealing with ‘Truth’!

Model reasoning, taken as an ‘object to enquire into’, is not, from a scientific point of view, on a par with the much more important question of whether these models really have export certificates to the real world or not. Just presenting logical possibilities in analytical modelling exercises has little or no scientific value at all.

Yours truly has for many years been urging economists to pay attention to the ontological foundations of their assumptions and models. Sad to say, economists have not paid much attention — and so modern economics has become increasingly irrelevant to the understanding of the real world.

I have spent a considerable part of my life building economic models, and examining the models that other economists have built. I believe that I am making reasonably good use of my talents in an attempt to understand the social world.

I have no fellow-feeling with those economic theorists who, off the record at seminars and conferences, admit that they are only playing a game with other theorists. If their models are not intended seriously, I want to say (and do say when I feel sufficiently combative), why do they expect me to spend my time listening to their expositions? Count me out of the game.

Robert Sugden

Economic methodology — a critical realist perspective

15 Sep, 2023 at 10:26 | Posted in Theory of Science & Methodology | 2 Comments

The field of economics has long been hailed as a bastion of rationality and objectivity, offering insights into the workings of present-day complex economic systems. However, questions about the foundations of economics and its prevailing methodological approach have to be raised. My own critique challenges traditional assumptions and argues for a more pluralistic and realistic understanding of economic phenomena.

• The problem of formalism
One of my key criticisms centres around the excessive reliance on formal models and mathematical abstractions within mainstream economics. The discipline’s preoccupation with mathematical rigour often leads to a detachment from real-world complexities, rendering economic models divorced from actual human behaviour. Economists should embrace a more open-ended approach that acknowledges the limitations of formalism and places a greater emphasis on empirical observations and qualitative analysis.

• Assumptions and simplifying idealizations
Many of the assumptions and simplifying idealizations that underpin mainstream economic models — perfect rationality, market efficiency, and equilibrium — do not accurately reflect the messy realities of human decision-making and market dynamics. Economists have to have a more nuanced understanding of human behaviour that acknowledges the presence of bounded rationality, social norms, and institutional constraints. By incorporating these factors, economic models can better capture the complexities of economic systems.

• Pluralism and interdisciplinarity
Economics should embrace a more pluralistic approach, drawing insights from various disciplines such as sociology, psychology, history, and ecology. By integrating diverse perspectives, economists can develop a richer understanding of economic phenomena and avoid the pitfalls of narrow and reductionist thinking. Complex problems require holistic and multidimensional approaches.

• Policy implications
Overlooking real-world complexities and relying on overly simplified models — typical of mainstream economics — can lead to seriously misguided policy prescriptions. A more context-sensitive and pragmatic approach to policy-making —  taking into account the specific institutional, cultural, and historical factors at play — is needed. This shift in policy orientation can lead to more effective interventions and address the concerns of more or less marginalized communities that are often excluded from mainstream economic discourse.

• Advancing the discipline
While mainly posing challenges to the foundations of mainstream economics and the severe real-world limitations of its methodology, my critique also provides an opportunity for the discipline to evolve and grow. By embracing a more pluralistic and realistic framework, economics can address the criticisms and broaden its analytical toolkit. This would involve reevaluating many standard approaches, fostering interdisciplinary collaborations, and promoting critical engagement with alternative heterodox economic theories. Challenging the dominance of deductive-axiomatic formalism and advocating for a pluralistic approach also create space for a more realistic and nuanced understanding of economic phenomena. This is necessary if we want to be able to reimagine economics as a discipline that is more in tune with the complexities of the real world and better equipped to tackle the pressing social and economic challenges we face today.

Unsimple truths

12 Sep, 2023 at 10:47 | Posted in Theory of Science & Methodology | 1 Comment

When people hear the word “complexity,” they respond in different ways. Some think “complicated” or “messy,” not being able to see the forest for the trees. Others think of a clutter of matter going this way and that with no chance to get a purchase on its behavior, to take hold of the “blooming, buzzing confusion” (James 1890, 462). Others think “chaos,” in the traditional sense, something unrestrained and uncontrollable, a realm of unpredictability and uncertainty that doesn’t yield to human understanding. None of these interpretations does justice to the tractable, understandable, evolved, and dynamic complexity that contemporary sciences say aptly characterizes our world. Neither its complications nor its chaotic dynamics should scare away the curious, nor drive them to replace a clear-eyed investigation of the nuanced beauty of complexity with the austere, clean lines of the simple and timeless.

The world is indeed complex; so, too, should be our representations and analyses of it. Yet science has traditionally sought to reduce the “blooming, buzzing confusion” to simple, universal, and timeless underlying laws to explain what there is and how it behaves.

From a philosophy of science perspective, it is interesting to note that many economists and other social scientists appeal to a requirement that explanations, in order to be considered scientific, must be capable of “reducing an individual case to a general law.” The general law is often invoked as a fundamental principle of the form “if A, then B,” and if one can demonstrate in an individual case that A is present and B occurs, one is taken to have ‘explained’ B.

However, this positivist-inductive view of science is fundamentally untenable.

According to a positivist-inductive view of science, the knowledge possessed by science constitutes proven knowledge. Starting with entirely unbiased observations, an ‘impartial scientific observer’ can formulate observational statements from which scientific theories and laws can be derived. Using the principle of induction, it becomes possible to formulate universal statements in the form of laws and theories that refer to occurrences of properties that hold always and everywhere. Based on these laws and theories, science can derive various consequences with which one can explain and predict what happens. Through logical deduction, statements can be derived from other statements. The logic of research follows the schema of observation — induction — deduction.

In more uncomplicated cases, scientists conduct experiments to justify the inductions with which they establish their scientific theories and laws. As Francis Bacon vividly put it, experimentation involves “putting nature on the rack” and forcing it to answer our questions. With the help of a set of statements that accurately describe the circumstances surrounding the experiment — initial conditions — and the scientific laws, scientists can deduce statements that can explain or predict the phenomenon under investigation.

As a result of the well-known problems with induction, more moderate empiricists have reasoned that since there is usually no logical procedure for discovering a law or theory, one simply starts with laws and theories from which a series of statements that serve as explanations or predictions are deduced. Instead of investigating how scientific laws and theories are arrived at, the focus is on explaining what a scientific explanation and prediction are, the role theories and models play in them, and how they can be evaluated.

In the positivist (hypothetico-deductive, deductive-nomological) model of explanation, explanation refers to the subordination or derivation of specific phenomena from universal regularities. To explain a phenomenon (explanandum) is the same as deducing a description of it from a set of premises and universal laws of the form “If A, then B” (explanans). Explanation simply involves being able to place something under a specific regularity, and this approach is sometimes referred to as the “covering law model”. On this view, theories are not used to explain specific individual phenomena but to explain the universal regularities that are part of a hypothetico-deductive explanation. [But there are problems with this view even in natural science. Many of the laws of natural science do not really say what things do but rather what they tend to do. This is largely due to the fact that the laws describe the behaviour of different parts rather than the entire phenomenon itself (except possibly in experimental situations). And many of the laws of natural science actually apply to fictional entities rather than real ones. Often, this is a consequence of the use of mathematics within the respective science and leads to the fact that its laws can only be exemplified in models (and not in reality).]

The positivist model of explanation also exists in a weaker variant known as probabilistic explanation, according to which explaining essentially means showing that the probability of an event B is very high if event A occurs. This variant dominates in the social sciences. From a methodological standpoint, this probabilistic relativization of the positivist explanatory approach does not make a significant difference.

One consequence of accepting the hypothetico-deductive model of explanation is often the acceptance of the so-called symmetry thesis. According to this thesis, the only difference between prediction and explanation is that in the former, the explanans is assumed to be known, and a prediction is attempted, while in the latter, the explanandum is assumed to be known, and the goal is to find initial conditions and laws from which the observed phenomenon can be derived.

However, a problem with the symmetry thesis is that it does not consider that causes can be confused with correlations. The fact that storks appear at the same time as human babies does not explain the origin of children.

The symmetry thesis also fails to acknowledge that causes can be sufficient but not necessary. The fact that a cancer patient gets run over and dies does not make the cancer the cause of death, even though the cancer could otherwise have been the actual explanation of the individual’s death. Even if we could construct a medical law – in accordance with the deductive model – stating that individuals with a specific type of cancer will die from that cancer, the law does not explain this individual’s death. Therefore, the thesis is simply incorrect.

Finding a pattern is not the same as explaining something. To receive the answer that the bus is usually late when asking why it is delayed does not constitute an acceptable explanation. Ontology and natural necessity must be part of a relevant answer, at least if one seeks something more than “constant conjunctions of events” in an explanation.

The original idea behind the positivist model of explanation was to provide a complete clarification of what an explanation is and to show that an explanation that did not meet its requirements was actually a pseudo-explanation. It aimed to provide a method for testing explanations and demonstrate that explanations in accordance with the model were the goal of science. Clearly, all these claims can be legitimately questioned.

An important reason why this model has gained traction in science is that it seemed to offer a way to explain things without needing to use ‘metaphysical’ notions of causality. Many scientists see causality as a problematic concept that should be avoided if possible. Simple observable variables should suffice. The problem is that specifying these variables and their possible correlations does not explain anything at all. The fact that union representatives often wear grey suits and employer representatives wear pinstriped suits does not explain why youth unemployment in Sweden is so high today. What is missing in these “explanations” is the necessary adequacy, relevance, and causal depth without which science risks becoming empty science fiction and a mere playground for models.

Many social scientists seem to be convinced that in order for research to be considered science, it must apply some variant of the hypothetico-deductive method. From the complex array of facts and events in reality, one should extract a few common lawful correlations that can serve as explanations. Within certain fields of social science, this endeavour to reduce explanations of social phenomena to a few general principles or laws has been a significant driving force. By employing a few general assumptions, the aim is to explain the nature of an entire macrophenomenon we call society. Unfortunately, no truly tenable arguments are provided as to why the fact that a theory can explain different phenomena in a unified manner would be a decisive reason to accept or prefer it. Uniformity and adequacy are not synonymous.

Hegel

3 Sep, 2023 at 11:25 | Posted in Theory of Science & Methodology | Comments Off on Hegel


Against modularity

29 Aug, 2023 at 16:11 | Posted in Theory of Science & Methodology | 1 Comment

Isn’t it the mark of a successful theory of a range of phenomena that it unites and embraces the causally relevant parameters and state variables within a single theoretical perspective? This question suggests that if our theories are successful, then they should produce descriptions of systems according to which the systems are interactionally simple. I think that this would be to put the conceptual cart before the phenomenal horse. As the criterion (one of many) for the adequacy of a theory of a system, this statement seems correct but it is hardly sufficient. Also, one should not automatically assume that our existing theories are adequate theories of complex systems. The belief that they are is based largely on a still unfilled reductionist promise.

William Wimsatt

Experiments are hard to carry out in economics, and the theoretical ‘analogue’ models economists construct and in which they perform their ‘thought experiments’ build on assumptions that are far away from the kind of idealized conditions under which natural scientists perform their experiments. The ‘nomological machines’ that natural scientists have been able to construct have no real analogues in economics. The stability, autonomy, modularity, and interventional invariance that we may find between entities in nature simply are not there in real-world economies. That’s a fact, and contrary to the beliefs of most mainstream economists, it won’t go away simply by applying deductive-axiomatic economic theory with tons of more or less unsubstantiated assumptions.

By this, I do not mean to say that we have to discard all (causal) theories building on modularity, stability, invariance, etc. But we have to acknowledge the fact that outside the systems that possibly fulfil these assumptions, they are of little substantial value.

Take the modularity assumption, for example. Modularity refers to the possibility of independent manipulability of causal relationships in a system. When trying to identify causal relations, most economists today — especially when performing experiments — assume some kind of invariance or modularity, meaning basically that you can make an intervention on a part of a model without changing other dependencies in that model.
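A minimal sketch of what the modularity assumption amounts to, using made-up structural equations (the variables and coefficients are hypothetical, chosen only for illustration): a ‘surgical’ intervention replaces one equation and, by assumption, leaves every other equation untouched.

```python
def simulate(do_wage=None):
    """A toy structural model; under modularity an intervention on 'wage'
    replaces that equation only, leaving the employment equation invariant."""
    productivity = 1.0                                    # exogenous
    wage = do_wage if do_wage is not None else 0.8 * productivity
    employment = 10.0 - 5.0 * wage                        # assumed unchanged under the intervention
    return {"productivity": productivity, "wage": wage, "employment": employment}

print(simulate())              # observational regime
print(simulate(do_wage=1.2))   # intervention: only the wage equation is swapped out
```

The critical point in the text is precisely that, for real-world economies, we usually have no warrant for assuming that the remaining equations stay put when we ‘wiggle’ one of them.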

Modularity makes causal inferences made on the basis of ‘interventions’ stable. But although making causal inferences is not possible without making some kind of assumptions, you always have to argue why it is reasonable to make those assumptions. In the case of modularity that means you have to show that for the target system you are analyzing —  the economy — it is possible to make ‘surgical interventions,’ ‘wiggle,’ or manipulate parts of the system without changing other parts of the system. Since economies basically are interactionally complex open systems, it is de facto hard to find causes that are separately manipulable and show such invariance under intervention. Most social mechanisms and relations are not modular. Extraordinary claims require extraordinary evidence. So if economists want to continue to use models that presuppose modularity they have to start arguing for the reasonableness of it. As scientists, we should not merely accept what is standardly assumed. When is modularity a reasonable assumption and when is it not? That modularity makes it possible to identify causality in ‘epistemically convenient systems’ is no argument for assuming it to apply to real-world economies.

Running paper and pen experiments on artificial ‘analogue’ model economies is a sure way of ‘establishing’ (causal) economic laws or solving intricate econometric problems of autonomy, identification, invariance and structural stability — in the model world. But they are mere substitutes for the real thing and don’t have much bearing on what goes on in real-world open social systems. Setting up convenient circumstances for conducting experiments may tell us a lot about what happens under those kinds of circumstances. But few, if any, real-world social systems are ‘convenient.’ So most of those systems, theories and models are irrelevant for letting us know what we really want to know.

Coming up with models that show how things may possibly be explained is not what we are looking for. It is not enough. We want to have models that build on assumptions that are not in conflict with known facts and that show how things actually are to be explained. Our aspirations have to be more far-reaching than just constructing coherent and ‘credible’ models about ‘possible worlds’. We want to understand and explain ‘difference-making’ in the real world and not just in some made-up fantasy world. No matter how many mechanisms or coherent relations you represent in your model, you still have to show that these mechanisms and relations are at work and exist in society if we are to do real science. Science has to be something more than just more or less realistic storytelling or ‘explanatory fictionalism.’ You have to provide decisive empirical evidence that what you can infer in your model also helps us to uncover what actually goes on in the real world. It is not enough to present epistemically informative insights about logically possible models. You also, and more importantly, have to have a world-linking argumentation and show how those models explain or teach us something about real-world economies. If you fail to support your models in that way, why should we care about them? And if you do not inform us about what the real-world intended target systems of your modelling are, how are we going to be able to value or test them? Without giving that kind of information it is impossible for us to check if the ‘possible world’ models you come up with also hold for the one world in which we live — the real world.

A ‘tractable’ model is of course great since it usually means you can solve it. But using ‘simplifying’ tractability assumptions like modularity — because otherwise the models cannot be ‘manipulated’ or made to yield ‘rigorous’ and ‘precise’ predictions and explanations — does not exempt scientists from having to justify their modelling choices. Being able to ‘manipulate’ things in models cannot per se be enough to warrant a methodological choice. Suppose economists do not really think their tractability assumptions — such as modularity — make for good and realist models. In that case, it is certainly a just question to ask for clarification of the ultimate goal of the whole modelling endeavour.

What is meant by ‘rigour’ in evidence-based educational policy?

27 Aug, 2023 at 18:05 | Posted in Theory of Science & Methodology | 1 Comment

The bad news is, first, that there is no reason in general to suppose that an ATE [Average Treatment Effect] observed in one population will hold in others. That is what the slogan widespread now in education and elsewhere registers: “Context matters”. The issue in this paper is not though about when we can expect a study result to hold elsewhere but rather when we can have EBPP-style [Evidence-Based Policy and Practice] “rigorous” evidence about any of the kinds of claims needed in practice. Here too, the news is bad: There are no good explicit methods with detailed content for inferring either general causal claims or causal predictions about what will happen in a specific case that look anything like rigorous in the sense for which RCTs are extolled and which we might hope for given the talk of rigour throughout the EBPP literature …

Although RCTs and other study designs can provide rigorous evidence about the effects of educational programmes in study populations, you need a lot more for figuring out whether and how to use those same programmes in your school …

The take-home lesson for EBPP institutions is that there is work to be done, new work of a new kind. It is time to rethink EBPP philosophy and in train to refocus where our efforts are put. The primary focus currently is on piling up gold nuggets (or good stiff twigs) – study results one can be sure of. But no heap of gold nuggets will add up to a general truth nor tell us what will happen next. That requires a panoply of information different in kind from what we are now vetting and disseminating. How shall we categorise this additional information, how organise it, how relate it to different plans of action? The big job now is to take on this far more amorphous, far more ambitious – and far more helpful – project.

Nancy Cartwright

Science and policy proposals should never be based on making heroically unreal tractability assumptions in the pursuit of ‘rigour’. Models and research designs building on such assumptions should make us naturally suspicious about their relevance and definitely weaken our degree of confidence in the proposals they produce.
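A made-up numerical illustration of Cartwright’s “context matters” point (the subgroups and numbers are hypothetical): the very same programme, with identical subgroup effects, yields a positive average treatment effect in one population and a negative one in another, simply because the populations mix the subgroups differently.

```python
# Hypothetical subgroup effects of one educational programme (in test-score points).
effects = {"well_resourced_schools": +5.0, "under_resourced_schools": -3.0}

def average_treatment_effect(shares):
    """Population ATE = share-weighted average of the subgroup effects."""
    return sum(shares[group] * effects[group] for group in effects)

print(average_treatment_effect({"well_resourced_schools": 0.8, "under_resourced_schools": 0.2}))  # +3.4
print(average_treatment_effect({"well_resourced_schools": 0.2, "under_resourced_schools": 0.8}))  # -1.4
```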


Sander Greenland on cognitive biases in science

17 Aug, 2023 at 09:15 | Posted in Theory of Science & Methodology | 1 Comment


Nancy Cartwright’s Pufendorf lectures

15 Aug, 2023 at 17:05 | Posted in Theory of Science & Methodology | Comments Off on Nancy Cartwright’s Pufendorf lectures


Yours truly is fond of science philosophers like Nancy Cartwright. With razor-sharp intellects, they immediately go for the essentials. They have no time for bullshit. And neither should we. These Pufendorf lectures are a must-watch for everyone with an interest in the methodology of science.

The difference between logic and science

23 Jul, 2023 at 12:07 | Posted in Theory of Science & Methodology | Comments Off on The difference between logic and science

That logic should have been thus successful is an advantage which it owes entirely to its limitations, whereby it is justified in abstracting — indeed, it is under obligation to do so — from all objects of knowledge and their differences, leaving the understanding nothing to deal with save itself and its form. But for reason to enter on the sure path of science is, of course, much more difficult, since it has to deal not with itself alone but also with objects. Logic, therefore, as a propaedeutic, forms, as it were, only the vestibule of the sciences; and when we are concerned with specific modes of knowledge, while logic is indeed presupposed in any critical estimate of them, yet for the actual acquiring of them we have to look to the sciences properly so called, that is, to the objective sciences.

In mainstream economics, both logic and mathematics are used extensively. And most mainstream economists sure look upon themselves as “twice blessed.”

Is there any scientific ground for that blessedness? None whatsoever!

If scientific progress in economics lies in our ability to tell ‘better and better stories’, one would, of course, expect economics journals to be filled with articles supporting the stories with empirical evidence confirming the predictions. However, the journals still show a striking and embarrassing paucity of empirical studies that (try to) substantiate these predictive claims. Equally amazing is how little one has to say about the relationship between the model and real-world target systems. It is as though explicit discussion, argumentation and justification on the subject aren’t considered to be required.

In mathematics, the deductive-axiomatic method has worked just fine. But science is not mathematics. Conflating those two domains of knowledge has been one of the most fundamental mistakes made in modern economics. Applied to real-world open systems, the method immediately proves to be excessively narrow and hopelessly irrelevant. Both the confirmatory and the explanatory varieties of hypothetico-deductive reasoning fail, since there is no way you can relevantly analyse confirmation or explanation as a purely logical relation between hypothesis and evidence or between law-like rules and explananda. In science, we argue and try to substantiate our beliefs and hypotheses with reliable evidence. Propositional and predicate deductive logic, on the other hand, is not about reliability, but about the validity of the conclusions given that the premises are true.

Science and philosophy

19 Jun, 2023 at 10:51 | Posted in Theory of Science & Methodology | 1 Comment

Philosophy is not a science, nor can it be one. To claim the contrary is to doom it inevitably to failure, as indeed it is doomed, but also to illusion or bad faith … There is no such thing as philosophical proof, and if there were it would be the end of philosophy – since philosophy feeds only on disagreements and uncertainties. What is it to philosophize? It is to think without proofs, to think further than one knows, while nevertheless submitting – as much as one can, as well as one can – to the constraints of reason, experience and knowledge. It is like an impossible science, one that feeds only on its own impossibility. In order to overcome it? No doubt, since philosophy exists only on that condition. But without ever leaving that impossibility behind, since it would then cease to be philosophical … To philosophize is to think without proofs, but not just anyhow. It is to think further than one knows, but not against the available knowledge. It is to confront the impossible, but not to sink into the ridiculous or the inane. It is to face the unknown, but not to shut oneself up in ignorance. Who can fail to see that the sciences today teach us more, about the world and about life, than the philosophers do?

André Comte-Sponville & Luc Ferry, La Sagesse des modernes
