The anti-intellectual abyss is near when postmodern truth relativism infects public discourse at all levels, including the academic world.
In Sweden, the discipline of pedagogy seems to be the worst infected. A few years ago, an associate professor of pedagogy was commissioned by the Swedish National Agency for Education (Skolverket) to write a report on physics teaching in Swedish schools, and to propose how it might attract more girls.
”The notion that scientific thinking enjoys a self-evident supremacy sits ill with the ideals of equality and democracy. […] Certain ways of thinking and reasoning are rewarded more than others in scientific contexts. […] If this is not acknowledged, one risks making misleading assessments. For example, by unreflectively assuming that scientific thinking is more rational and therefore ought to replace everyday thinking” …
The pedagogue continues in the report: ”A gender-aware and gender-sensitive physics requires a relational approach to physics, as well as the removal of a good deal of the traditional scientific knowledge content of physics.”
So the scientific knowledge content of physics is to be ”removed” in order to make things ”easier” for girls. Not only is this an appalling view of knowledge; it is also insulting to girls to regard them as incapable of, or worse at, acquiring knowledge of physics.
The author of the report is Moira von Wright, now professor of pedagogy and vice-chancellor of Södertörn University. When such an epistemological outlook has taken root in our institutions of higher learning, we have a problem …
Having read one of the latest issues of Pedagogisk Forskning i Sverige (2-3 2014) — in which the author of the article “En pedagogisk relation mellan människa och häst. På väg mot en pedagogisk filosofisk utforskning av mellanrummet” [“A pedagogical relation between human and horse. Towards a pedagogical-philosophical exploration of the in-between”] offers the following interesting “programme declaration” — one is, however, hardly surprised at the state of affairs within Swedish pedagogical “science”:
With a posthumanist approach, I illuminate and reflect on how both human and horse transcend their own beings and how this opens up an in-between space with dimensions of subjectivity, embodiment and mutuality.
Chameleons arise and are often nurtured by the following dynamic. First a bookshelf model is constructed that involves terms and elements that seem to have some relation to the real world and assumptions that are not so unrealistic that they would be dismissed out of hand. The intention of the author, let’s call him or her “Q,” in developing the model may be to say something about the real world or the goal may simply be to explore the implications of making a certain set of assumptions. Once Q’s model and results become known, references are made to it, with statements such as “Q shows that X.” This should be taken as a short-hand way of saying “Q shows that under a certain set of assumptions it follows (deductively) that X,” but some people start taking X as a plausible statement about the real world. If someone skeptical about X challenges the assumptions made by Q, some will say that a model shouldn’t be judged by the realism of its assumptions, since all models have assumptions that are unrealistic …
Chameleons are models that are offered up as saying something significant about the real world even though they do not pass through the filter. When the assumptions of a chameleon are challenged, various defenses are made (e.g., one shouldn’t judge a model by its assumptions, any model has equal standing with all other models until the proper empirical tests have been run, etc.). In many cases the chameleon will change colors as necessary, taking on the colors of a bookshelf model when challenged, but reverting back to the colors of a model that claims to apply to the real world when not challenged.
Reading Pfleiderer’s article reminds me of what H. L. Mencken once famously said:
There is always an easy solution to every problem – neat, plausible and wrong.
Pfleiderer’s perspective may be applied to many of the issues involved when modeling complex and dynamic economic phenomena. Let me take just one example — simplicity.
When it comes to modelling, I do see the point of simplicity, emphatically made time after time by e.g. Paul Krugman — as long as it doesn’t impinge on our truth-seeking. “Simple” macroeconomic models may of course be an informative heuristic tool for research. But if practitioners of modern macroeconomics do not investigate and make an effort to provide a justification for the credibility of the simplicity assumptions on which they erect their buildings, those models will not fulfil their tasks. Maintaining that economics is a science in the “true knowledge” business, I remain a sceptic of the pretences and aspirations of “simple” macroeconomic models and theories. So far, I can’t really see that e.g. “simple” microfounded models have yielded very much in terms of realistic and relevant economic knowledge.
All empirical sciences use simplifying or unrealistic assumptions in their modeling activities. That is not the issue – as long as the assumptions made are not unrealistic in the wrong way or for the wrong reasons.
But models do not only face theory. They also have to look to the world. Being able to model a “credible world,” a world that somehow could be considered real or similar to the real world, is not the same as investigating the real world. Even though — as Pfleiderer acknowledges — all theories are false, since they simplify, they may still possibly serve our pursuit of truth. But then they cannot be unrealistic or false in any way. The falsehood or unrealisticness has to be qualified.
Explanation, understanding and prediction of real-world phenomena, relations and mechanisms therefore cannot be grounded on simpliciter assuming simplicity. If we cannot show that the mechanisms or causes we isolate and handle in our models are stable — in the sense that when we export them from our models to our target systems they do not change from one situation to another — then they, considered “simple” or not, only hold under ceteris paribus conditions and are a fortiori of limited value for our understanding, explanation and prediction of our real-world target systems.
The obvious ontological shortcoming of a basically epistemic – rather than ontological – approach is that “similarity” or “resemblance” tout court do not guarantee that the correspondence between model and target is interesting, relevant, revealing or somehow adequate in terms of mechanisms, causal powers, capacities or tendencies. No matter how many convoluted refinements of concepts are made in the model, if the simplifications made do not result in models similar to reality in the appropriate respects (such as structure, isomorphism, etc.), the surrogate system becomes a substitute system that does not bridge to the world but rather misses its target.
Constructing simple macroeconomic models somehow seen as “successively approximating” macroeconomic reality is a rather unimpressive attempt at legitimizing the use of fictitious idealizations, for reasons that have more to do with model tractability than with a genuine interest in understanding and explaining features of real economies. Many of the model assumptions standardly made in neoclassical macroeconomics – simplicity being one of them – are restrictive rather than harmless, and can a fortiori not in any sensible sense be considered approximations at all.
If economists aren’t able to show that the mechanisms or causes that they isolate and handle in their “simple” models are stable, in the sense that they do not change when exported to their “target systems”, those mechanisms only hold under ceteris paribus conditions and are a fortiori of limited value to our understanding, explanation or prediction of real economic systems.
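The stability point can be put in computational miniature. In the sketch below (all numbers are invented; nothing is estimated from real data), a “mechanism” relating y to x is estimated cleanly inside one regime, but the exported coefficient says nothing about the next regime:

```python
# A stylized illustration of the export problem: a slope estimated in the
# "model world" regime does not survive a structural shift in the target system.
import numpy as np

rng = np.random.default_rng(1)

def estimate_slope(beta, n=5000):
    """OLS slope of y on x in a toy world where y = beta*x + noise."""
    x = rng.normal(0, 1, n)
    y = beta * x + rng.normal(0, 1, n)
    return (x @ y) / (x @ x)

b_old = estimate_slope(beta=2.0)   # the regime in which the model was built
b_new = estimate_slope(beta=0.5)   # the "target system" after a structural shift

print(round(b_old, 2), round(b_new, 2))
```

The estimate in the first regime is as precise as one could wish, yet exporting it to the second regime would be badly misleading: precision inside the model guarantees nothing about stability across target systems.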
That Newton’s theory in most regards is simpler than Einstein’s is of no avail. Today Einstein has replaced Newton. The ultimate arbiter of the scientific value of models cannot be simplicity.
As scientists we have to get our priorities right. Ontological under-labouring has to precede epistemology.
The primary aim of this study is the development of a systematic realist account of science. In this way I hope to provide a comprehensive alternative to the positivism that has usurped the title of science. I think that only the position developed here can do full justice to the rationality of scientific practice or sustain the intelligibility of such scientific activities as theory-construction and experimentation. And that while recent developments in the philosophy of science mark a great advance on positivism they must eventually prove vulnerable to positivist counter-attack, unless carried to the limit worked out here.
My subsidiary aim is thus to show once-and-for-all why no return to positivism is possible. This of course depends upon my primary aim. For any adequate answer to the critical metaquestion ‘what are the conditions of the plausibility of an account of science?’ presupposes an account which is capable of thinking of those conditions as special cases. That is to say, to adapt an image of Wittgenstein’s, one can only see the fly in the fly-bottle if one’s perspective is different from that of the fly. And the sting is only removed from a system of thought when the particular conditions under which it makes sense are described. In practice this task is simplified for us by the fact that the conditions under which positivism is plausible as an account of science are largely co-extensive with the conditions under which experience is significant in science. This is of course an important and substantive question which we could say, echoing Kant, no account of science can decline, but positivism cannot ask, because (it will be seen) the idea of insignificant experiences transcends the very bounds of its thought.
This book is written in the context of vigorous critical activity in the philosophy of science. In the course of this the twin templates of the positivist view of science, viz. the ideas that science has a certain base and a deductive structure, have been subjected to damaging attack. With a degree of arbitrariness one can separate this critical activity into two strands. The first, represented by writers such as Kuhn, Popper, Lakatos, Feyerabend, Toulmin, Polanyi and Ravetz, emphasises the social character of science and focusses particularly on the phenomena of scientific change and development. It is generally critical of any monistic interpretation of scientific development, of the kind characteristic of empiricist historiography and implicit in any doctrine of the foundations of knowledge. The second strand, represented by the work of Scriven, Hanson, Hesse and Harré among others, calls attention to the stratification of science. It stresses the difference between explanation and prediction and emphasises the role played by models in scientific thought. It is highly critical of the deductivist view of the structure of scientific theories, and more generally of any exclusively formal account of science. This study attempts to synthesise these two critical strands; and to show in particular why and how the realism presupposed by the first strand must be extended to cover the objects of scientific thought postulated by the second strand. In this way I will be describing the nature and the development of what has been hailed as the ‘Copernican Revolution’ in the philosophy of science.
To see science as a social activity, and as structured and discriminating in its thought, constitutes a significant step in our understanding of science. But, I shall argue, without the support of a revised ontology, and in particular a conception of the world as stratified and differentiated too, it is impossible to steer clear of the Scylla of holding the structure dispensable in the long run (back to empiricism) without being pulled into the Charybdis of justifying it exclusively in terms of the fixed or changing needs of the scientific community (a form of neo-Kantian pragmatism exemplified by e.g. Toulmin and Kuhn). In this study I attempt to show how such a revised ontology is in fact presupposed by the social activity of science. The basic principle of realist philosophy of science, viz. that perception gives us access to things and experimental activity access to structures that exist independently of us, is very simple. Yet the full working out of this principle implies a radical account of the nature of causal laws, viz. as expressing tendencies of things, not conjunctions of events. And it implies that a constant conjunction of events is no more a necessary than a sufficient condition for a causal law.
A passage from Stanley Lieberson’s classic book on the methodology of social research, Making It Count (1985), has always stuck with me. In it, he considers what a social scientist might conclude from a regression model predicting black and white earnings from various background characteristics, including education. Invariably the coefficient for schooling is strong, positive, and significant—the more education one has, the greater one’s earnings. Moreover, the apparent gap between black and white earnings is much smaller when schooling is included as a predictor in the equation than when it is left out. In this sense, the racial gap is “explained” by lower average levels of education among blacks compared with whites. Obviously, therefore, all one has to do to reduce the racial gap in earnings is to increase levels of black education. The social scientist thus recommends that policymakers design and implement programs to reduce black dropout rates and increase the odds of college attendance.
“Suppose we start with a radically different perspective on this question and see where it leads us. Let us hypothesize that racial or other interest groups will tend to take as much as they can for themselves and will give as little as necessary to maintain the system and avoid having it overturned. In this case, whites will give blacks as little as they can. Under such circumstances, one would assume that observed interrelations between income gaps and features such as education . . . describe . . . the current pathways leading from a specific causal force to the outcome of that force. If so, a complicated causal analysis of factors contributing to the racial gaps in income has not the causal value one might have assumed. It describes the given set of events at a given time; it describes what a black person might well follow as a get-ahead strategy if he or she can assume that not many other blacks will follow the same strategy and hence the basic [social] matrix will remain unaltered. But there is no assurance that this matrix will continue to operate—indeed, there is virtual certainty that the matrix will not continue to operate if some superficial factor that appears to cause the income gap is no longer relevant (for example, if the groups end up with the same educational distribution). In which case, new rules and regulations will operate; the other regression coefficients will change in value in order to maintain the existing system.” (pp. 191–92)
Simply put, Lieberson argues that if whites are selfishly motivated to discriminate against blacks to enhance their own material well-being, then when the government forces them to end a particular discriminatory practice, they will simply look for other means to maintain white privilege. If an older discriminatory mechanism based explicitly on race becomes impossible to sustain, whites will substitute new ones that are more subtly associated with race. The specific mechanisms by which racial stratification is achieved may thus be expected to change over time as practices shift in response to civil rights enforcement.
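The regression exercise Lieberson describes is easy to reproduce in a toy simulation (every coefficient and number below is invented for illustration): once schooling enters the equation, the raw group gap shrinks toward the direct discrimination term, which is exactly the pattern the naive analyst reads as the gap being “explained”:

```python
# A minimal simulation of the earnings regression: the raw black-white gap
# combines a direct term (-5) and an indirect one via schooling (3 * -2 = -6);
# "controlling" for education removes only the indirect part.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
black = rng.integers(0, 2, n)                         # group indicator
educ = 12 + 2 * (1 - black) + rng.normal(0, 2, n)     # lower mean schooling for blacks
earn = 10 + 3 * educ - 5 * black + rng.normal(0, 5, n)

X1 = np.column_stack([np.ones(n), black])
gap_raw = np.linalg.lstsq(X1, earn, rcond=None)[0][1]

X2 = np.column_stack([np.ones(n), black, educ])
gap_ctrl = np.linalg.lstsq(X2, earn, rcond=None)[0][2 - 1]

print(round(gap_raw, 1), round(gap_ctrl, 1))   # roughly -11 and -5
```

Lieberson’s warning is that the regression describes the current pathway, not an invariant law: if discrimination simply rerouted itself after educational equalization, the coefficients themselves would shift, and the policy inference drawn from them would fail.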
In my eyes, social orders are normally fragile and precarious; unpleasant surprises may turn up at any moment. I also think it wrong to demand that someone who identifies a problem should immediately offer a solution as well. I do not bow to such prescriptions … Problems may be such that there is no solution to them — or anyway, none achievable here and now. If someone were to ask me reproachfully where was ‘the positive,’ this would then indeed be a case where I could appeal to Adorno. For his reply, much better formulated, would doubtless have been: what if there is nothing positive?
[h/t Lord Keynes]
For more on my own objections to Bayesianism:
Bayesianism — a patently absurd approach to science
Bayesianism — preposterous mumbo jumbo
One of the reasons I’m a Keynesian and not a Bayesian
Keynes and Bayes in paradise
One of my favourite “problem situating lecture arguments” against Bayesianism goes something like this: Assume you’re a Bayesian turkey and hold a nonzero probability belief in the hypothesis H that “people are nice vegetarians who do not eat turkeys,” so that every day you see the sun rise confirms your belief. For every day you survive, you update your belief according to Bayes’ Rule
P(H|e) = [P(e|H)P(H)]/P(e),
where the evidence e stands for “not being eaten” and P(e|H) = 1. Given that there exist hypotheses other than H, P(e) is less than 1 and a fortiori P(H|e) is greater than P(H). Every day you survive increases your probability belief that you will not be eaten. This is totally rational according to the Bayesian definition of rationality. Unfortunately — as Bertrand Russell famously noticed — for every day that goes by, the traditional Christmas dinner also gets closer and closer …
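A toy numerical version of the turkey’s predicament (the probabilities below are invented for illustration) shows the updating at work: each uneventful day nudges the belief in H toward certainty, right up until the moment it matters:

```python
# Bayesian turkey: H = "people never eat turkeys"; e_t = "not eaten on day t".
# Under H, P(e|H) = 1; under the alternative, assume each day still carries
# only a small chance of ending up as dinner (the farmer waits for Christmas).
p_H = 0.5               # prior belief in the benign hypothesis
p_e_given_H = 1.0
p_e_given_notH = 0.98   # invented for illustration

for day in range(250):
    p_e = p_e_given_H * p_H + p_e_given_notH * (1 - p_H)  # law of total probability
    p_H = p_e_given_H * p_H / p_e                         # Bayes' Rule

print(round(p_H, 3))    # belief in H has crept very close to 1
```

The update is impeccable by Bayesian lights, which is precisely the point: the rule rewards the hypothesis most compatible with the uneventful evidence, and nothing in the formalism flags the looming structural break.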
When applying deductivist thinking to economics, neoclassical economists usually set up “as if” models based on a set of tight axiomatic assumptions from which consistent and precise inferences are made. The beauty of this procedure is of course that if the axiomatic premises are true, the conclusions necessarily follow. The snag is that if the models are to be relevant, we also have to argue that their precision and rigour still holds when they are applied to real-world situations. They often don’t. When addressing real economies, the idealizations and abstractions necessary for the deductivist machinery to work simply don’t hold.
If the real world is fuzzy, vague and indeterminate, then why should our models be built upon a desire to describe it as precise and predictable? The logic of idealization is a marvellous tool in mathematics and axiomatic-deductivist systems, but a poor guide for action in real-world systems, in which concepts and entities are without clear boundaries and continually interact and overlap.
Or as Hans Albert has it on the neoclassical style of thought:
In everyday situations, if, in answer to an inquiry about the weather forecast, one is told that the weather will remain the same as long as it does not change, then one does not normally go away with the impression of having been particularly well informed, although it cannot be denied that the answer refers to an interesting aspect of reality, and, beyond that, it is undoubtedly true …
We are not normally interested merely in the truth of a statement, nor merely in its relation to reality; we are fundamentally interested in what it says, that is, in the information that it contains …
Information can only be obtained by limiting logical possibilities; and this in principle entails the risk that the respective statement may be exposed as false. It is even possible to say that the risk of failure increases with the informational content, so that precisely those statements that are in some respects most interesting, the nomological statements of the theoretical hard sciences, are most subject to this risk. The certainty of statements is best obtained at the cost of informational content, for only an absolutely empty and thus uninformative statement can achieve the maximal logical probability …
The neoclassical style of thought – with its emphasis on thought experiments, reflection on the basis of illustrative examples and logically possible extreme cases, its use of model construction as the basis of plausible assumptions, as well as its tendency to decrease the level of abstraction, and similar procedures – appears to have had such a strong influence on economic methodology that even theoreticians who strongly value experience can only free themselves from this methodology with difficulty …
Science progresses through the gradual elimination of errors from a large offering of rivalling ideas, the truth of which no one can know from the outset. The question of which of the many theoretical schemes will finally prove to be especially productive and will be maintained after empirical investigation cannot be decided a priori. Yet to be useful at all, it is necessary that they are initially formulated so as to be subject to the risk of being revealed as errors. Thus one cannot attempt to preserve them from failure at every price. A theory is scientifically relevant first of all because of its possible explanatory power, its performance, which is coupled with its informational content …
The connections sketched out above are part of the general logic of the sciences and can thus be applied to the social sciences. Above all, with their help, it appears to be possible to illuminate a methodological peculiarity of neoclassical thought in economics, which probably stands in a certain relation to the isolation from sociological and social-psychological knowledge that has been cultivated in this discipline for some time: the model Platonism of pure economics, which comes to expression in attempts to immunize economic statements and sets of statements (models) from experience through the application of conventionalist strategies …
Clearly, it is possible to interpret the ‘presuppositions’ of a theoretical system … not as hypotheses, but simply as limitations to the area of application of the system in question. Since a relationship to reality is usually ensured by the language used in economic statements, in this case the impression is generated that a content-laden statement about reality is being made, although the system is fully immunized and thus without content. In my view that is often a source of self-deception in pure economic thought …
A further possibility for immunizing theories consists in simply leaving open the area of application of the constructed model so that it is impossible to refute it with counter examples. This of course is usually done without a complete knowledge of the fatal consequences of such methodological strategies for the usefulness of the theoretical conception in question, but with the view that this is a characteristic of especially highly developed economic procedures: the thinking in models, which, however, among those theoreticians who cultivate neoclassical thought, in essence amounts to a new form of Platonism.
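Albert’s point that certainty is bought at the price of informational content can be given a simple numerical form using, say, Popper’s content measure C(s) = 1 − p(s) (the toy probabilities below are invented): the tautological forecast has probability 1 and content 0, while the risky, refutable forecast is the one that actually informs.

```python
# Popper-style content measure C(s) = 1 - p(s) on a toy weather example.
# The more logically possible worlds a statement excludes, the lower its
# probability and the higher its content -- and its risk of being refuted.
forecasts = {
    "it will rain or it will not rain": 1.0,               # tautology: certain but empty
    "it will rain tomorrow": 0.3,
    "it will rain tomorrow between 14:00 and 15:00": 0.05, # risky, informative
}

for statement, prob in forecasts.items():
    content = 1 - prob
    print(f"{statement!r}: probability {prob}, content {content:.2f}")
```

On this measure the immunized, unfalsifiable model is the exact analogue of the tautological weather report: maximally safe, and for the same reason maximally uninformative.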
One of the few statisticians that I have on my blogroll is Andrew Gelman. Although not sharing his Bayesian leanings, yours truly finds his open-minded, thought-provoking and non-dogmatic statistical thinking highly recommendable. The plaidoyer infra for “reverse causal questioning” is typical Gelmanian:
When statistical and econometric methodologists write about causal inference, they generally focus on forward causal questions. We are taught to answer questions of the type “What if?”, rather than “Why?” Following the work by Rubin (1977), causal questions are typically framed in terms of manipulations: if x were changed by one unit, how much would y be expected to change? But reverse causal questions are important too … In many ways, it is the reverse causal questions that motivate the research, including experiments and observational studies, that we use to answer the forward questions …
Reverse causal reasoning is different; it involves asking questions and searching for new variables that might not yet even be in our model. We can frame reverse causal questions as model checking. It goes like this: what we see is some pattern in the world that needs an explanation. What does it mean to “need an explanation”? It means that existing explanations — the existing model of the phenomenon — does not do the job …
By formalizing reverse causal reasoning within the process of data analysis, we hope to make a step toward connecting our statistical reasoning to the ways that we naturally think and talk about causality. This is consistent with views such as Cartwright (2007) that causal inference in reality is more complex than is captured in any theory of inference … What we are really suggesting is a way of talking about reverse causal questions in a way that is complementary to, rather than outside of, the mainstream formalisms of statistics and econometrics.
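Gelman’s idea of reverse causal questions as model checking can be sketched in code (the data-generating process below is invented): fit the working model, notice that the residuals show structure, and let that anomaly point toward a variable “not yet in our model”:

```python
# Reverse causal questioning as model checking, in miniature: the working
# model says y is linear in x; a pattern in the residuals is the "thing in
# the world that needs an explanation".
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(-3, 3, 2000)
z = x ** 2                      # the variable not yet in our model
y = 1.5 * x + z + rng.normal(0, 0.5, 2000)

# forward step: fit the existing model y = a + b*x
X = np.column_stack([np.ones_like(x), x])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ coef

# reverse step: do the residuals line up with a candidate new variable?
corr = np.corrcoef(resid, z)[0, 1]
print(round(corr, 2))           # strongly nonzero: the model "does not do the job"
```

The forward question asks what the fitted slope implies; the reverse question starts from the unexplained pattern and searches for what could produce it, which is exactly the model-checking move Gelman describes.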
In a time when scientific relativism is expanding, it is important to keep up the claim for not reducing science to a purely discursive level. We have to maintain the Enlightenment tradition of thinking of reality as principally independent of our views of it, and of the main task of science as studying the structure of this reality. Perhaps the most important contribution a researcher can make is to reveal what this reality that is the object of science actually looks like.
Science is made possible by the fact that there are structures that are durable and are independent of our knowledge or beliefs about them. There exists a reality beyond our theories and concepts of it. It is this independent reality that our theories in some way deal with. Contrary to positivism, I would as a critical realist argue that the main task of science is not to detect event-regularities between observed facts. Rather, that task must be conceived as identifying the underlying structure and forces that produce the observed events.
In Gelman’s essay there is no explicit argument for abduction — inference to the best explanation — but I would still argue that it is de facto nothing but a very strong argument for why scientific realism and inference to the best explanation are the best alternatives for explaining what’s going on in the world we live in. The focus on causality, model checking, anomalies and context-dependence — although here expressed in statistical terms — is as close to abductive reasoning as we get in statistics and econometrics today.
Yours truly and people like Tony Lawson have for many years been urging economists to pay attention to the ontological foundations of their assumptions and models. Sad to say, economists have not paid much attention — and so modern economics has become increasingly irrelevant to the understanding of the real world.
Within mainstream economics, internal validity is still everything and external validity nothing. Why anyone should be interested in those kinds of theories and models is beyond imagination. As long as mainstream economists do not come up with any export-licenses for their theories and models to the real world in which we live, they really should not be surprised if people say that this is not science, but autism!
Studying mathematics and logics is interesting and fun. It sharpens the mind. In pure mathematics and logics we do not have to worry about external validity. But economics is not pure mathematics or logics. It’s about society. The real world. Forgetting that, economics is really in dire straits.
Mathematical axiomatic systems lead to analytic truths, which do not require empirical verification, since they are true by virtue of definitions and logic. It is a startling discovery of the twentieth century that sufficiently complex axiomatic systems are undecidable and incomplete. That is, the system of theorem and proof can never lead to ALL the true sentences about the system, and ALWAYS contain statements which are undecidable – their truth values cannot be determined by proof techniques. More relevant to our current purpose is that applying an axiomatic hypothetico-deductive system to the real world can only be done by means of a mapping, which creates a model for the axiomatic system. These mappings then lead to assertions about the real world which require empirical verification. These assertions (which are proposed scientific laws) can NEVER be proven in the sense that mathematical theorems can be proven …
Many more arguments can be given to explain the difference between analytic and synthetic truths, which corresponds to the difference between mathematical and scientific truths. As I have explained in greater detail in my paper, the scientific method arose as a rejection of the axiomatic method used by the Greeks for scientific methodology. It was this rejection of axiomatics and logical certainty in favour of empirical and observational approach which led to dramatic progress in science. However, this did involve giving up the certainties of mathematical argumentation and learning to live with the uncertainties of induction. Economists need to do the same – abandon current methodology borrowed from science and develop a new methodology suited for the study of human beings and societies.