Why hasn’t the crisis discredited neoclassical economics more?

26 July, 2013 at 22:32 | Posted in Economics | 3 Comments

There are probably many answers to the question. I suggested before that the best way to look at it is from a sociological standpoint. The same people hold the same positions at the key ‘respectable’ universities, go to the same ‘relevant’ meetings, and award the same ‘important’ prizes. And research does build on previous research. Not to mention that the economics profession, like others, is there to protect and reproduce the status quo.

At any rate, in his new book Philip Mirowski, of Notre Dame and a member of the Institute for New Economic Thinking (INET; which, I should say, has funded several heterodox authors), dedicates part of his first chapter to the topic. This is what he says about the INET meetings, which were supposed to display some of the changes in the profession after the crisis:

“[…] the first INET meeting at Cambridge University in 2010 bore some small promise—for instance, when protestors disrupted the IMF platitudes of Dominique Strauss-Kahn in King’s great hall, or when Lord Adair Turner bravely suggested we needed a much smaller financial sector. But the sequel turned out to be a profoundly more unnerving and chilly affair, and not just due to the caliginous climate. The nightmare scenario began with a parade of figures whom one could not in good conscience admit to anyone’s definition of “New Economic Thinking”: Ken Rogoff, Larry Summers, Barry Eichengreen, Niall Ferguson and Gordon Brown … The range of economic positions proved much less varied than at the first meeting, and [one] couldn’t help noticing that the agenda seemed more pitched toward capturing the attention of journalists and bloggers [oh my, I’m included in this one], and those more interested in getting to see more star power up close than sampling complex thinking outside the box. It bespoke an unhealthy obsession with Guaranteed Legitimacy and Righteous Sound Thinking.”

I always thought it naïve to think that the crisis would lead to the demise of neoclassical economics. In fact, in the US it was the Great Depression and the development of a certain type of Keynesianism (the Neoclassical Synthesis one) that led to the domination of neoclassical economics (before that the profession was more eclectic and, if anything, dominated, in the US, by institutionalists). But I had some hopes that INET would open a dialogue with the less crazy (sold out?) elements within the mainstream. The fact that Mirowski calls the second meeting a nightmare scenario does not bode well for the future of the profession.

Naked Keynesianism


What has Keynes got to do with New Keynesian Macroeconomics? Nothing!

24 July, 2013 at 17:39 | Posted in Economics | 1 Comment

Paul Krugman has a post on his blog discussing “New Keynesian” macroeconomics and the definition of neoclassical economics:

So, what is neoclassical economics? … I think we mean in practice economics based on maximization-with-equilibrium. We imagine an economy consisting of rational, self-interested players, and suppose that economic outcomes reflect a situation in which each player is doing the best he, she, or it can given the actions of all the other players …

Some economists really really believe that life is like this — and they have a significant impact on our discourse. But the rest of us are well aware that this is nothing but a metaphor; nonetheless, most of what I and many others do is sorta-kinda neoclassical because it takes the maximization-and-equilibrium world as a starting point or baseline, which is then modified — but not too much — in the direction of realism.

This is, not to put too fine a point on it, very much true of Keynesian economics as practiced … New Keynesian models are intertemporal maximization modified with sticky prices and a few other deviations …

Why do things this way? Simplicity and clarity. In the real world, people are fairly rational and more or less self-interested; the qualifiers are complicated to model, so it makes sense to see what you can learn by dropping them. And dynamics are hard, whereas looking at the presumed end state of a dynamic process — an equilibrium — may tell you much of what you want to know.

Being myself a sorta-kinda Keynesian, I find this analysis utterly unconvincing. Why? Let me try to explain.

Macroeconomic models may be an informative tool for research. But if the practitioners of “New Keynesian” macroeconomics do not investigate and justify the credibility of the assumptions on which they erect their theoretical edifice, that edifice will not fulfill its tasks. There is a gap between its aspirations and its accomplishments, and without more supportive evidence to substantiate its claims, critics will continue to consider its ultimate argument a mixture of rather unhelpful metaphors and metaphysics. Maintaining that economics is a science in the “true knowledge” business, I remain a skeptic of the pretences and aspirations of “New Keynesian” macroeconomics. So far, I cannot really see that it has yielded very much in terms of realistic and relevant economic knowledge.

The marginal return on its ever-higher technical sophistication in no way makes up for the lack of serious under-labouring of its deeper philosophical and methodological foundations. The rather one-sided emphasis on usefulness and its concomitant instrumentalist justification cannot hide the fact that “New Keynesians” cannot give supportive evidence for considering it fruitful to analyze macroeconomic structures and events as the aggregated result of optimizing representative actors. After having analyzed some of its ontological and epistemological foundations, I cannot but conclude that “New Keynesian” macroeconomics on the whole has not delivered anything other than “as if” unreal and irrelevant models.

If we are not able to show that the mechanisms or causes that we isolate and handle in our microfounded macromodels are stable, in the sense that they do not change when we “export” them to our “target systems”, they only hold under ceteris paribus conditions and are a fortiori of limited value for our understanding, explanation or prediction of real economic systems. Or as the always eminently quotable Keynes wrote in Treatise on Probability (1921):

The kind of fundamental assumption about the character of material laws, on which scientists appear commonly to act, seems to me to be [that] the system of the material universe must consist of bodies … such that each of them exercises its own separate, independent, and invariable effect, a change of the total state being compounded of a number of separate changes each of which is solely due to a separate portion of the preceding state … Yet there might well be quite different laws for wholes of different degrees of complexity, and laws of connection between complexes which could not be stated in terms of laws connecting individual parts … If different wholes were subject to different laws qua wholes and not simply on account of and in proportion to the differences of their parts, knowledge of a part could not lead, it would seem, even to presumptive or probable knowledge as to its association with other parts … These considerations do not show us a way by which we can justify induction … [p. 427] No one supposes that a good induction can be arrived at merely by counting cases. The business of strengthening the argument chiefly consists in determining whether the alleged association is stable, when accompanying conditions are varied … [p. 468] In my judgment, the practical usefulness of those modes of inference … on which the boasted knowledge of modern science depends, can only exist … if the universe of phenomena does in fact present those peculiar characteristics of atomism and limited variety which appears more and more clearly as the ultimate result to which material science is tending.

Science should help us penetrate to “the true process of causation lying behind current events” and disclose “the causal forces behind the apparent facts” [Keynes 1971-89 vol XVII:427]. We should look out for causal relations. But models can never be more than a starting point in that endeavour. There is always the possibility that there are other variables – of vital importance and, although perhaps unobservable and non-additive, not necessarily epistemologically inaccessible – that were not considered for the model.

This is a more fundamental and radical problem than the celebrated “Lucas critique” has suggested. It is not a question of whether deep parameters, absent at the macro level, exist in “tastes” and “technology” at the micro level. It goes deeper. Real-world social systems are not governed by stable causal mechanisms or capacities. This is the criticism that Keynes launched against the “atomistic fallacy” as early as the 1920s:

The atomic hypothesis which has worked so splendidly in Physics breaks down in Psychics. We are faced at every turn with the problems of Organic Unity, of Discreteness, of Discontinuity – the whole is not equal to the sum of the parts, comparisons of quantity fail us, small changes produce large effects, the assumptions of a uniform and homogeneous continuum are not satisfied. Thus the results of Mathematical Psychics turn out to be derivative, not fundamental, indexes, not measurements, first approximations at the best; and fallible indexes, dubious approximations at that, with much doubt added as to what, if anything, they are indexes or approximations of.

The kinds of laws and relations that economics has established are laws and relations about entities in models that presuppose causal mechanisms to be atomistic and additive. When causal mechanisms operate in real-world social target systems, they do so only in ever-changing and unstable combinations where the whole is more than a mechanical sum of parts. If economic regularities obtain, they do so (as a rule) only because we engineered them for that purpose. Outside man-made “nomological machines” they are rare, or even non-existent. Unfortunately, that also makes most of the achievements of econometrics – like most contemporary endeavours in economic theoretical modeling – rather useless.

Keynes basically argued that it was inadmissible to project history onto the future. Consequently, an economic policy cannot presuppose that what has worked before will continue to do so in the future. That macroeconomic models could get hold of correlations between different “variables” was not enough. If they could not get at the causal structure that generated the data, they were not really “identified”. Dynamic stochastic general equilibrium (DSGE) macroeconomists – including “New Keynesians” – have drawn the conclusion that the way to handle the problem of unstable relations is to construct models with clear microfoundations, in which forward-looking optimizing individuals and robust, deep, behavioural parameters are seen as stable even to changes in economic policies. As yours truly has argued in a couple of posts (e.g. here and here), this, however, is a dead end.

So here we are getting close to the heart of darkness in “New Keynesian” macroeconomics. When “New Keynesian” economists think that they can rigorously deduce the aggregate effects of (representative) actors with their reductionist microfoundational methodology, they have to turn a blind eye to the emergent properties that characterize all open social systems – including the economic system. The interaction between animal spirits, trust, confidence, institutions etc. cannot be deduced or reduced to a question answerable at the individual level. Macroeconomic structures and phenomena have to be analyzed on their own terms as well. And although one may easily agree with Krugman’s emphasis on simple models, the simplifications used may have to be simplifications adequate for macroeconomics and not those adequate for microeconomics.

“New Keynesian” macromodels describe imaginary worlds using a combination of formal sign systems, such as mathematics, and ordinary language. The descriptions made are extremely thin and to a large degree disconnected from the specific contexts of the target system that one (usually) wants to (partially) represent. This is not by chance. These closed formalistic-mathematical theories and models are constructed to deliver purportedly rigorous deductions that may somehow be exportable to the target system. By analyzing a few causal factors in their “macroeconomic laboratories”, they hope they can perform “thought experiments” and observe how these factors operate on their own, without impediments or confounders.

Unfortunately, this is not so. The reason is that economic causes never act in a socio-economic vacuum. Causes have to be set in a contextual structure to be able to operate. This structure has to take some form or other, but instead of incorporating structures that are true to the target system, the settings made in these macroeconomic models are based rather on formalistic mathematical tractability. In the models they appear as unrealistic assumptions, usually playing a decisive role in getting the deductive machinery to deliver “precise” and “rigorous” results. This, of course, makes exporting to real-world target systems problematic, since these models – as part of a deductivist covering-law tradition in economics – are thought to deliver general and far-reaching conclusions that are externally valid. But how can we be sure the lessons learned in these theories and models have external validity, when they are based on highly specific unrealistic assumptions? As a rule, the more specific and concrete the structures, the less generalizable the results. Admitting that we can in principle move from (partial) falsehoods in theories and models to truth in real-world target systems does not take us very far, unless a thorough explication of the relation between theory, model and real-world target system is made. If models assume representative actors, rational expectations, market clearing and equilibrium, and we know that real people and markets cannot be expected to obey these assumptions, the warrant for supposing that conclusions or hypotheses about causally relevant mechanisms or regularities can be bridged is obviously non-justifiable. Having a deductive warrant for things happening in a closed model is no guarantee that they are preserved when applied to an open real-world target system.

In microeconomics we know that aggregation really presupposes homothetic and identical preferences, something that almost never exists in real economies. The results obtained under these assumptions are therefore not robust and do not capture the underlying mechanisms at work in any real economy. And models that are critically based on particular and odd assumptions – and are neither robust nor congruent with real-world economies – are of questionable value.
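To see concretely what goes wrong, here is a minimal sketch (the two Cobb-Douglas consumers and all numbers are my own illustrative assumptions): because the consumers have different expenditure shares, aggregate demand depends on how a given total income is distributed across them, so no “representative consumer” can rationalize the aggregate.

```python
# Minimal sketch (toy numbers of my own) of why aggregation presupposes
# identical homothetic preferences. Two Cobb-Douglas consumers spend
# different shares of income on good 1; redistributing the SAME total
# income at the SAME prices changes aggregate demand, so no
# representative consumer stands behind the aggregate.

def demand_good1(alpha, income, p1):
    """Cobb-Douglas demand: a share alpha of income is spent on good 1."""
    return alpha * income / p1

p1 = 1.0
alpha_a, alpha_b = 0.2, 0.8        # different preferences
total_income = 100.0

for income_a in (20.0, 80.0):      # two splits of the same total income
    income_b = total_income - income_a
    aggregate = (demand_good1(alpha_a, income_a, p1)
                 + demand_good1(alpha_b, income_b, p1))
    print(f"split ({income_a:.0f}, {income_b:.0f}): aggregate demand = {aggregate:.0f}")

# Output: 68 for the (20, 80) split, 32 for the (80, 20) split -- same
# prices, same total income, different aggregate demand.
```

Only when the shares coincide (identical homothetic preferences) does the income distribution drop out of the aggregate – in effect the Gorman condition, and precisely the kind of knife-edge requirement real economies never satisfy.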

Even if economies naturally presuppose individuals, it does not follow that we can infer or explain macroeconomic phenomena solely from knowledge of these individuals. Macroeconomics is to a large extent emergent and cannot be reduced to a simple summation of micro-phenomena. Moreover, as we have already argued, even these microfoundations aren’t immutable. The “deep parameters” of “New Keynesian” DSGE models – “tastes” and “technology” – are not really the bedrock of constancy that they believe (pretend) them to be.

So I cannot concur with Krugman – and other sorta-kinda “New Keynesians” – when they try to reduce Keynesian economics to “intertemporal maximization modified with sticky prices and a few other deviations”. “New Keynesian” macroeconomics is a gross misnomer, since it has nothing to do with the fundamentals of Keynes’s economic thought. As John Quiggin so aptly writes:

If there is one thing that distinguished Keynes’ economic analysis from that of his predecessors, it was his rejection of the idea of a unique full employment equilibrium to which a market economy will automatically return when it experiences a shock. Keynes argued that an economy could shift from a full-employment equilibrium to a persistent slump as the result of the interaction between objective macroeconomic variables and the subjective ‘animal spirits’ of investors and other decision-makers. It is this perspective that has been lost in the absorption of New Keynesian macro into the DSGE framework.

A critical realist re-thinking of Das Adam Smith Problem

23 July, 2013 at 15:58 | Posted in Economics | Comments Off on A critical realist re-thinking of Das Adam Smith Problem

Talk of a ‘natural harmony’ in human affairs, of a ‘concord’ produced by the now-celebrated ‘invisible hand’, runs like a leitmotif through Adam Smith’s work. A key question in Smith-scholarship is then: how does Smith suppose this harmony to be constituted? According to the Problem-theorists, Smith claims in Wealth of Nations (WN) that individuals motivated by self-interest, and in virtue of that motivation alone, are able to co-ordinate their activities, whereas in Theory of Moral Sentiments (TMS) he claims that benevolence alone is supposed to do the job. Of course, if Smith had claimed these things, he would stand guilty (of inconsistency) as charged. But these assertions play no role in Smith’s social theory; the Problem, for whatever reason(s), is a post-Smith fabrication.

Smith did claim that self-interest is endemic to human behaviour. But this kind of self-interest — and this kind of interest pervades TMS just as much as WN — is more a matter of perspective than some crude (economic) impulse to self-gratification: of course, as human actor, I have to see the act as mine and so, in some sense, as in my interest, even when I act ‘benevolently’.

As for the other kind of self-interest, or ‘self-love’: yes, this kind of act — behaviour motivated by self-interest — dominates the discourse of WN, but not because Smith (sometime between TMS and WN) changed his opinion on how people are motivated. It is rather that WN (unlike TMS) is not concerned with situations in which a ‘benevolent’ disposition is to be expected: that is why benevolence is not much discussed. There is no inconsistency … it is all a matter of the ‘spheres of intimacy’.

But, in any case, Smith does not claim in WN (or anywhere else for that matter) that people are able to co-ordinate their activities because they are motivated by self-interest; for Smith, motivation of any kind does not enable or capacitate anything at all. And Smith has not changed his opinion sometime between TMS and WN as to how people are capacitated to act, as to the competencies that they draw on, whatever the motivation. In TMS Smith offers ‘sympathy’ or ‘fellow-feeling’ as that core capacity or competence, and there is no reason in WN to suggest that he has changed his mind. Whether we act out of concern for self or for other, we are only able to act as we do because we are sympathisers.

Apropos Das Adam Smith Problem: For Smith to say that the human actor sympathises does not mean that the Smith of TMS postulates a naturally altruistic, rather than a naturally egoistic, actor — a view that he is then supposed to have reversed in the Wealth of Nations. Of course it is true (to paraphrase Smith himself) that we should not expect our dinner from the benevolence of the (commercially oriented) butcher and baker. On the other hand, it would be surprising (and worrying, for all sorts of reasons) if the dependent child did not expect his dinner from the benevolence of his kith and kin (who, for some people at least, are also commercial butchers and bakers). Smith recognises that, depending on circumstance, we are capable of both behavioural dispositions. But Smith also recognises that to say that we are capable of acting and that this acting takes different forms — of course we are and of course it does — is not to say how we are capable. Smith’s position on these matters hardly came as a bolt from the blue. Rather it is all part of a wider current of eighteenth century thought that rejects the crude Hobbesian view of self-interested behaviour. Like others in the so-called British Moralist tradition Smith wants to re-think the question as to what a viable (and prosperous) social order presupposes. The spontaneous emergence of a (relatively) liberal political economy in Britain by the early eighteenth century had called into question many of the fundamental assumptions Hobbes makes in regard to human nature. In Hobbes, individual self-interest needs to be held in check by an all-seeing, all-powerful Sovereign. Evidently, though, in the light of events, self-interest needed to be re-thought as a constructive, rather than destructive, force. The human being as sympathiser became a key element in that reconceptualisation. For Hume, for example, ‘no quality of human nature is more remarkable, both in itself and its consequences, than that propensity we have to sympathize with others, and to receive by communication their inclinations and sentiments, however different from, or even contrary to our own’. Hume here seems to come very close to anticipating Smithian sympathy. Ultimately, however, Hume cannot get there, because for Hume to hold to a Smithian view of sympathy would render what he has to say about other things incoherent …

Fortunately Smith is not bound by Hume’s self-imposed methodological strictures: entities for Smith do not need to be conspicuous to be real. Smithian sympathy, presupposing a third-person perspective within the self, cannot be conspicuous because, by definition, it can only ever be the first person that is on view. But it can be retroductively inferred from that which is conspicuous: sympathy is real enough, according to Smith’s lights, or how else would any form of (harmonised) behaviour be possible? In the terminology of the critical realist, Smith’s talk of sympathy is not concerned with the actual, not concerned with our acts as such — whether self-interested or benevolent — nor with the significance that the moralist reads into those acts: a significance that is also actual. Rather his concern is with the real: the condition of possibility of our actings and, related to this, how we are able, on reflection, to pass ‘moral’ judgement on the actions of others. Again, we cannot see the third-person perspective, the sense of right, that we carry around inside ourselves and that enables those actualities, but we can infer the existence of this capacity from the otherwise inexplicable “concords” that it produces. What we do in fact sense as right is context-sensitive. But the key to human action (and a fortiori human interaction) for Smith is that, always and everywhere, we do expect.

David Wilson & William Dixon

When randomized controlled trials fail

23 July, 2013 at 09:57 | Posted in Statistics & Econometrics, Theory of Science & Methodology | 1 Comment

Using randomized controlled trials (RCTs) is not at all the “gold standard” it has lately been portrayed as. As yours truly has repeatedly argued on this blog (e.g. here and here), RCTs usually do not provide evidence that their results are exportable to other target systems. The almost religious belief with which its propagators portray it cannot hide the fact that RCTs cannot be taken for granted to give generalizable results. That something works somewhere is no guarantee that it will work for us, or even that it works generally:

Disappointing though its outcome was, the study represented a victory for science over guesswork, of hard data over hunches. As far as clinical trials went, Dr. Gilbert’s study was the gold standard …

The centerpiece of the country’s drug-testing system — the randomized, controlled trial — had worked.

Except in one respect: doctors had no more clarity after the trial about how to treat patients than they had before. Some patients did do better on the drug … but the trial was unable to discover these “responders” along the way, much less examine what might have accounted for the difference …

Rigorous statistical tests are done to make sure that the drug’s demonstrated benefit is genuine, not the result of chance. But chance turns out to be a hard thing to rule out. When the measured effects are small — as they are in the vast majority of clinical trials — mere chance is often the difference between whether a drug is deemed to work or not, says John P. A. Ioannidis, a professor of medicine at Stanford.

In a famous 2005 paper published in The Journal of the American Medical Association, Dr. Ioannidis, an authority on statistical analysis, examined nearly four dozen high-profile trials that found a specific medical intervention to be effective. Of the 26 randomized, controlled studies that were followed up by larger trials (examining the same therapy in a bigger pool of patients), the initial finding was wholly contradicted in three cases (12 percent). And in another 6 cases (23 percent), the later trials found the benefit to be less than half of what was first reported.

It wasn’t the therapy that changed in each case, but rather the sample size. And Dr. Ioannidis believes that if more rigorous, follow-up studies were actually done, the refutation rate would be far higher …

The fact that the pharmaceutical companies sponsor and run the bulk of investigative drug trials brings what Dr. Ioannidis calls a “constellation of biases” to the process. Too often, he says, trials are against “a straw-man comparator” like a placebo rather than a competing drug. So the studies don’t really help us understand which treatments for a disease work best.

But a more fundamental challenge has to do with the nature of clinical trials themselves. “When you do any kind of trial, you’re really trying to answer a question about truth in the universe,” says Hal Barron, the chief medical officer and head of global development at Roche and Genentech. “And, of course, we can’t know that. So we try to design an experiment on a subpopulation of the world that we think is generalizable to the overall universe” — that is, to the patients who will use the drug.

That’s a very hard thing to pull off. The rules that govern study enrollment end up creating trial populations that invariably are much younger, have fewer health complications and have been exposed to far less medical treatment than those who are likely to use the drug … Even if clinical researchers could match the demographics of study populations to those of the likely users of these medicines, no group of trial volunteers could ever match the extraordinary biological diversity of the drugs’ eventual consumers.

Clifton Leaf/New York Times
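Ioannidis’s point about chance and small effects is easy to reproduce in a toy simulation. A minimal sketch (all parameters – a true standardized effect of 0.15, 50 patients per arm, 2,000 trials, the usual 5 per cent significance level – are my own illustrative assumptions, not figures from the studies Leaf cites):

```python
# Minimal simulation sketch (illustrative parameters of my own, not
# Leaf's data): a drug with a small true benefit is tested in many small
# trials. Chance decides which trials come out "significant", and the
# significant ones systematically overstate the effect -- so larger
# follow-up trials tend to find a smaller benefit, as Ioannidis reports.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_effect = 0.15            # small standardized benefit (assumed)
n_per_arm, n_trials = 50, 2000

significant_effects = []
for _ in range(n_trials):
    treated = rng.normal(true_effect, 1.0, n_per_arm)
    control = rng.normal(0.0, 1.0, n_per_arm)
    t, p = stats.ttest_ind(treated, control)
    if p < 0.05 and t > 0:    # the trial "worked"
        significant_effects.append(treated.mean() - control.mean())

print(f"small trials deemed effective: {len(significant_effects)/n_trials:.0%}")
print(f"true effect: {true_effect:.2f}; mean estimate among 'significant' "
      f"trials: {np.mean(significant_effects):.2f}")

# Typically only around one trial in ten reaches significance here, and
# those that do report an effect roughly three times the true one. A
# larger follow-up trial, free of this selection, will on average find
# far less -- often "less than half" the initially reported benefit.
```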

Ergodic and stationary random processes (wonkish)

22 July, 2013 at 22:07 | Posted in Statistics & Econometrics | Comments Off on Ergodic and stationary random processes (wonkish)
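A process is stationary when its probability distribution is the same at every point in time; it is ergodic when time averages taken along a single realization converge to the corresponding ensemble averages. Stationarity does not imply ergodicity, and the gap between the two is exactly what matters when all we ever observe is one historical realization of an economy. A minimal sketch (a standard textbook construction, with my own toy numbers):

```python
# Minimal sketch (standard textbook construction, toy numbers of my own)
# of a process that is strictly stationary but NOT ergodic.
# X_t = A + e_t, where A is drawn once per realization: every X_t has
# the same distribution (stationary), but the time average of a single
# path converges to that path's A, not to the ensemble mean of zero.

import numpy as np

rng = np.random.default_rng(0)
T, n_paths = 10_000, 5

for i in range(n_paths):
    A = rng.normal(0.0, 1.0)            # fixed for this whole realization
    path = A + rng.normal(0.0, 0.1, T)  # stationary fluctuations around A
    print(f"path {i}: time average = {path.mean():+.3f}  (its own A = {A:+.3f})")

# The ensemble average at any fixed t is 0, yet each path's time average
# stays stuck at its own A forever: history does not wash out. Set A = 0
# for every path and the process becomes ergodic -- all time averages
# then converge to the common ensemble mean.
```

For an economy we only ever see one path, so inferring ensemble properties from time averages presupposes ergodicity rather than establishes it.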


The price of human liberty

22 July, 2013 at 10:18 | Posted in Politics & Society | Comments Off on The price of human liberty

[Image: Burying the Dead on Omaha Beach, by William …]

So you want to run your millionth regression?

21 July, 2013 at 22:00 | Posted in Economics, Statistics & Econometrics | Comments Off on So you want to run your millionth regression?


The cost of computing has dropped exponentially, but the cost of thinking is what it always was. That is why we see so many articles with so many regressions and so little thought.

Zvi Griliches
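In that spirit, a minimal sketch of the classic Granger–Newbold warning (toy parameters of my own): regress two completely independent random walks on each other and OLS will hand you “significant” coefficients most of the time.

```python
# Minimal sketch of the classic Granger-Newbold (1974) "spurious
# regression" (toy parameters of my own): two independent random walks
# regressed on one another. The nominal size of the t-test is 5%, but
# the actual rejection rate is vastly higher -- regressions are cheap,
# and without thought they manufacture findings out of pure noise.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
T, n_sims, spurious = 100, 1000, 0

for _ in range(n_sims):
    x = np.cumsum(rng.normal(size=T))   # random walk, independent of y
    y = np.cumsum(rng.normal(size=T))   # random walk, independent of x
    if stats.linregress(x, y).pvalue < 0.05:
        spurious += 1

print(f"'significant' slopes between unrelated series: {spurious/n_sims:.0%}")
# Expect something like three-quarters of the regressions to come out
# "significant" at the 5% level, in line with Granger and Newbold's
# original experiments.
```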

Re-read!

21 July, 2013 at 14:16 | Posted in Varia | Comments Off on Re-read!

How should one read? That is probably a fairly common question we have all pondered at some point about our relationship to literature and reading.

I myself have reason to ponder this whenever my beloved – who tirelessly devours new books at a furious pace – once again tries to persuade me to put Röda rummet, Hemsöborna, Martin Bircks ungdom, Den allvarsamma leken, Jerusalem or Gösta Berlings saga back on the library shelf and “try a new book” for once. To her it is incomprehensible that anyone would even think of re-reading a novel when there is so much new and undiscovered to throw oneself at in the wonderful world of literature. To her – and surely many other book devourers – only fools devote themselves to constant re-reading.

But let me nonetheless try to defend us fools! To my aid I take a book that I – precisely – have re-read time and again for almost three decades. Olof Lagercrantz writes in his wonderful little book Om konsten att läsa och skriva:

When we read for the second time, it is like reading the biography of someone dead, or seeing our own life just before we are about to leave it. Now it becomes clear why that experience in the first chapter made such a strong impression on the heroine. It in fact decided her life. A pattern emerges. What was unsurveyable becomes simple and comprehensible.

Now we can also, just as we do when we remember our own life, pause at especially beautiful and meaningful passages. We need not hurry, for we know what follows. No worry about the future keeps us from enjoying the present.

How to understand science

21 July, 2013 at 10:21 | Posted in Theory of Science & Methodology | Comments Off on How to understand science

The primary aim of this study is the development of a systematic realist account of science. In this way I hope to provide a comprehensive alternative to the positivism that has usurped the title of science. I think that only the position developed here can do full justice to the rationality of scientific practice or sustain the intelligibility of such scientific activities as theory construction and experimentation. And that while recent developments in the philosophy of science mark a great advance on positivism they must eventually prove vulnerable to positivist counter-attack, unless carried to the limit worked out here.

My subsidiary aim is thus to show once-and-for-all why no return to positivism is possible. This of course depends upon my primary aim. For any adequate answer to the critical metaquestion ‘what are the conditions of the plausibility of an account of science?’ presupposes an account which is capable of thinking of those conditions as special cases. That is to say, to adapt an image of Wittgenstein’s, one can only see the fly in the fly-bottle if one’s perspective is different from that of the fly. And the sting is only removed from a system of thought when the particular conditions under which it makes sense are described. In practice this task is simplified for us by the fact that the conditions under which positivism is plausible as an account of science are largely co-extensive with the conditions under which experience is significant in science. This is of course an important and substantive question which we could say, echoing Kant, no account of science can decline, but positivism cannot ask, because (it will be seen) the idea of insignificant experiences transcends the very bounds of its thought.

This book is written in the context of vigorous critical activity in the philosophy of science. In the course of this the twin templates of the positivist view of science, viz. the ideas that science has a certain base and a deductive structure, have been subjected to damaging attack. With a degree of arbitrariness one can separate this critical activity into two strands. The first, represented by writers such as Kuhn, Popper, Lakatos, Feyerabend, Toulmin, Polanyi and Ravetz, emphasises the social character of science and focusses particularly on the phenomena of scientific change and development. It is generally critical of any monistic interpretation of scientific development, of the kind characteristic of empiricist historiography and implicit in any doctrine of the foundations of knowledge. The second strand, represented by the work of Scriven, Hanson, Hesse and Harré among others, calls attention to the stratification of science. It stresses the difference between explanation and prediction and emphasises the role played by models in scientific thought. It is highly critical of the deductivist view of the structure of scientific theories, and more generally of any exclusively formal account of science. This study attempts to synthesise these two critical strands; and to show in particular why and how the realism presupposed by the first strand must be extended to cover the objects of scientific thought postulated by the second strand. In this way I will be describing the nature and the development of what has been hailed as the ‘Copernican Revolution’ in the philosophy of science.

To see science as a social activity, and as structured and discriminating in its thought, constitutes a significant step in our understanding of science. But, I shall argue, without the support of a revised ontology, and in particular a conception of the world as stratified and differentiated too, it is impossible to steer clear of the Scylla of holding the structure dispensable in the long run (back to empiricism) without being pulled into the Charybdis of justifying it exclusively in terms of the fixed or changing needs of the scientific community (a form of neo-Kantian pragmatism exemplified by e.g. Toulmin and Kuhn). In this study I attempt to show how such a revised ontology is in fact presupposed by the social activity of science. The basic principle of realist philosophy of science, viz. that perception gives us access to things and experimental activity access to structures that exist independently of us, is very simple. Yet the full working out of this principle implies a radical account of the nature of causal laws, viz. as expressing tendencies of things, not conjunctions of events. And it implies that a constant conjunction of events is no more a necessary than a sufficient condition for a causal law.

Greetings from Greg Mankiw

20 July, 2013 at 23:28 | Posted in Varia | Comments Off on Greetings from Greg Mankiw

