Top 20 Heterodox Economics Books

28 February, 2014 at 12:04 | Posted in Economics | 29 Comments

 

  • Karl Marx, Das Kapital (1867)
  • Thorstein Veblen, The Theory of the Leisure Class (1899)
  • Joseph Schumpeter, The Theory of Economic Development (1911)
  • Nikolai Kondratiev, The Major Economic Cycles (1925)
  • Gunnar Myrdal, The Political Element in the Development of Economic Theory (1930)
  • John Maynard Keynes, The General Theory (1936)
  • Paul Sweezy, Theory of Capitalist Development (1956)
  • Joan Robinson, Accumulation of Capital (1956)
  • John Kenneth Galbraith, The Affluent Society (1958)
  • Piero Sraffa, Production of Commodities by Means of Commodities (1960)
  • Johan Åkerman, Theory of Industrialism (1961)
  • Axel Leijonhufvud, Keynes and the Classics (1969)
  • Nicholas Georgescu-Roegen, The Entropy Law and the Economic Process (1971)
  • Michal Kalecki, Selected Essays on the Dynamics of the Capitalist Economy (1971)
  • Paul Davidson, Money and the Real World (1972)
  • Hyman Minsky, John Maynard Keynes (1975)
  • Philip Mirowski, More Heat than Light (1989)
  • Tony Lawson, Economics and Reality (1997)
  • Steve Keen, Debunking Economics (2001)
  • John Quiggin, Zombie Economics (2010)

Teaching of economics — captured by a small and dangerous sect

28 February, 2014 at 10:56 | Posted in Economics | 4 Comments

 
The fallacy of composition consists in the false belief that the whole is nothing but the sum of its parts. In society and in the economy this is arguably not the case, and an adequate analysis of society and the economy therefore can't proceed by simply adding up the acts and decisions of individuals. The whole is more than the sum of its parts.

This fact shows up when orthodox/mainstream/neoclassical economics tries to establish the existence of The Law of Demand – when the price of a commodity falls, the demand for it increases – at the aggregate level. Although the Law may be established for single individuals, it turned out – in the Sonnenschein-Mantel-Debreu theorem, firmly established by 1976 – that it is not possible to extend The Law of Demand to the market level, unless one makes ridiculously unrealistic assumptions, such as all individuals having homothetic preferences – which actually implies that all individuals have identical preferences.

This could only be conceivable if all agents were identical (i.e. if there were in essence only one actor) – the (in)famous representative actor. So, yes, it was possible to generalize The Law of Demand – as long as we assumed that at the aggregate level there was only one commodity and one actor. What a generalization! Does this sound reasonable? Of course not. This is pure nonsense!
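The aggregation failure described above is easy to illustrate numerically. The following sketch is my own toy construction (not an example from the SMD papers): a two-consumer exchange economy in which income effects overturn the Law of Demand at the aggregate level. Consumer 1 is endowed with the good and has Leontief (strong complement) preferences, so a higher price raises her income and her demand; consumer 2 has a fixed money income and Cobb-Douglas preferences, so his demand falls with price. Their sum is not monotone in the price.

```python
import numpy as np

def demand_endowed(p, endowment=10.0):
    """Consumer 1: endowed with 10 units of good x, Leontief preferences
    u = min(x, y). Her income is 10*p, so a higher price of x raises it.
    Maximize u on the budget line p*x + y = 10*p by grid search."""
    x = np.linspace(0.0, endowment, 100001)
    y = endowment * p - p * x            # whatever is not kept is spent on y (price 1)
    u = np.minimum(x, y)
    return x[np.argmax(u)]

def demand_money_income(p, income=10.0):
    """Consumer 2: fixed money income 10, Cobb-Douglas u = x*y.
    Maximize u on the budget line p*x + y = 10 by grid search."""
    x = np.linspace(1e-9, income / p, 100001)
    y = income - p * x
    u = x * y
    return x[np.argmax(u)]

def aggregate_demand(p):
    return demand_endowed(p) + demand_money_income(p)

# Aggregate demand first falls and then RISES with the price
# (roughly 10.0 at p=1, ~9.17 at p=2, ~9.25 at p=4):
for p in (1.0, 2.0, 4.0):
    print(p, round(aggregate_demand(p), 2))
```

Each individual behaves perfectly rationally here; it is only the aggregate that misbehaves, which is precisely the point of the theorem.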

How has neoclassical economics reacted to this devastating finding? Basically by looking the other way, ignoring it and hoping that no one sees that the emperor is naked.

Having gone through a handful of the most frequently used textbooks of economics at the undergraduate level today, I can only conclude that the models that are presented in these modern neoclassical textbooks try to describe and analyze complex and heterogeneous real economies with a single rational-expectations-robot-imitation-representative-agent.

That is, with something that has absolutely nothing to do with reality. And – worse still – something that is not even amenable to the kind of general equilibrium analysis it is thought to give a foundation for, since Hugo Sonnenschein (1972), Rolf Mantel (1976) and Gerard Debreu (1974) unequivocally showed that there are no conditions by which assumptions on individuals would guarantee either stability or uniqueness of the equilibrium solution.

So what modern economics textbooks present to students are really models built on the assumption that an entire economy can be modeled as a representative actor, and that this is a valid procedure. But it isn't – as the Sonnenschein-Mantel-Debreu theorem has irrevocably shown.

Of course one could say that it is too difficult at the undergraduate level to show why the procedure is valid, and that the demonstration should be deferred to masters and doctoral courses. One could justifiably reason that way – if what you teach your students were true, if The Law of Demand were generalizable to the market level and the representative actor were a valid modeling abstraction. But in this case it is demonstrably known to be false, and therefore this is nothing but a case of scandalous intellectual dishonesty. It's like telling your students that 2 + 2 = 5 and hoping that they will never run into Peano's axioms of arithmetic.

Once the dust has settled, there is a strong case for an inquiry into whether the teaching of economics has been captured by a small but dangerous sect.

Larry Elliott/The Guardian

28 February 1986 — a date which will live in infamy

28 February, 2014 at 09:06 | Posted in Politics & Society | Leave a comment

Olof Palme. Born 30 January 1927. Murdered 28 February 1986.
 

Robert “The Keynesian” Lucas

27 February, 2014 at 16:28 | Posted in Economics | 1 Comment

In his Keynote Address to the 2003 History of Political Economy Conference, Nobel laureate Robert Lucas said:

Well, I’m not here to tell people in this group about the history of
monetary thought. I guess I’m here as a kind of witness from a vanished
culture, the heyday of Keynesian economics. It’s like historians rushing
to interview the last former slaves before they died, or the last of the
people who remembered growing up in a Polish shtetl. I am going to tell
you what it was like growing up in a day when Keynesian economics
was taught as a solid basis on which macroeconomics could proceed.

My credentials? Was I a Keynesian myself? Absolutely. And does my
Chicago training disqualify me for that? No, not at all. David Laidler
[who was present at the conference] will agree with me on this, and I will
explain in some detail when I talk about my education. Our Keynesian
credentials, if we wanted to claim them, were as good as could be obtained
in any graduate school in the country in 1963.

I thought when I was trying to prepare some notes for this talk
that people attending the conference might be arguing about Axel
Leijonhufvud’s thesis that IS-LM was a distortion of Keynes, but I didn’t
really hear any of this in the discussions this afternoon. So I’m going to
think about IS-LM and Keynesian economics as being synonyms. I remember
when Leijonhufvud’s book came out and I asked my colleague
Gary Becker if he thought Hicks had got the General Theory right with
his IS-LM diagram. Gary said, “Well, I don’t know, but I hope he did,
because if it wasn’t for Hicks I never would have made any sense out of
that damn book.” That’s kind of the way I feel, too, so I’m hoping Hicks
got it right.

Mirabile dictu! I’m a Keynesian – although I haven’t understood anything of what Keynes wrote. But I’ve read another guy who said he had read the book, so I hope for the best and assume he got it right (which Hicks actually didn’t – and he was intellectually honest enough to admit it in at least three scientific publications, published about twenty years before Lucas’s statement). Truly a very scientific attitude. No wonder the guy – after having deluded himself into believing (?) he was a Keynesian, although actually only elaborating on a model developed and then disowned by John Hicks – got the “Nobel prize” in economics …

On chance, probability, randomness, uncertainty and all that

26 February, 2014 at 23:04 | Posted in Statistics & Econometrics | 2 Comments

 

How to escape Mankiwian brainwash

25 February, 2014 at 16:56 | Posted in Economics | 1 Comment

 

[h/t Phil Pilkington]

Taking rational expectations seriously? You’ve got to be joking!

25 February, 2014 at 10:53 | Posted in Economics | 1 Comment

If at some time my skeleton should come to be used by a teacher of osteology to illustrate his lectures, will his students seek to infer my capacities for thinking, feeling, and deciding from a study of my bones? If they do, and any report of their proceedings should reach the Elysian Fields, I shall be much distressed, for they will be using a model which entirely ignores the greater number of relevant variables, and all of the important ones. Yet this is what ‘rational expectations’ does to economics.

G. L. S. Shackle

Since I have already put forward a rather detailed theoretical-methodological critique of the rational expectations hypothesis in Rational expectations – a fallacious foundation for macroeconomics in a non-ergodic world (real-world economics review, issue 62, 2012), I will limit myself here to elaborate on a couple of the rather unwarranted allegations that defenders of rational expectations put forward in their attempts at rescuing the rational expectations hypothesis from the critique.

In a laboratory experiment run by James Andreoni and Tymofiy Mylovanov, the researchers induced common probability priors, and then told all participants of the actions taken by the others. Their findings are very interesting, and say something rather profound about the value of the rational expectations hypothesis in standard neoclassical economic models:

We look at choices in round 1, when individuals should still maintain common priors, being indifferent about the true state. Nonetheless, we see that about 20% of the sample erroneously disagrees and favors one point of view. Moreover, while other errors tend to diminish as the experiment progresses, the fraction making this type of error is nearly constant. One may interpret disagreement in this case as evidence of erroneous or nonrational choices.

Next, we look at the final round where information about disagreement is made public and, under common knowledge of rationality, should be sufficient to eliminate disagreement. Here we find that individuals weigh their own information more than twice that of the five others in their group. When we look separately at those who err by disagreeing in round 1, we find that these people weigh their own information more than 10 times that of others, putting virtually no stock in public information. This indicates a different type of error, that is, a failure of some individuals to learn from each other. This error is quite large and for a nontrivial minority of the population.

Setting aside the subjects who make systematic errors, we find that individuals still put 50% more weight on their own information than they do on the information revealed through the actions of others, although this difference is not statistically significant.

So in this experiment there seem to be some irrational idiots who don’t understand that they are exactly that – idiots. When told that the earth is flat they still adhere to their own belief in a round earth. It is as if people thought that the probability that all others are idiots – with irrational beliefs – is higher than the probability that the earth is round.

Now compare these experimental results with rational expectations models, where the world evolves in accordance with fully predetermined models where uncertainty has been reduced to stochastic risk describable by some probabilistic distribution.

The tiny little problem that there is no hard empirical evidence that verifies these models doesn’t usually bother its protagonists too much. Rational expectations überpriest Thomas Sargent has the following to say on the epistemological status of the rational expectations hypothesis (emphasis added):

Partly because it focuses on outcomes and does not pretend to have behavioral content, the hypothesis of rational expectations has proved to be a powerful tool for making precise statements about complicated dynamic economic systems.

Precise, yes, but relevant and realistic? I’ll be dipped!

And a few years later, when asked in an interview in Macroeconomic Dynamics — in 2005 — if he thought “that differences among people’s models are important aspects of macroeconomic policy debates”, Sargent replied (emphasis added):

The fact is you simply cannot talk about their differences within the typical rational expectations model. There is a communism of models. All agents within the model, the econometricians, and God share the same model.

One might perhaps find it odd to juxtapose God and people, but I think Leonard Rapping – himself a former rational expectationist – was on the right track when in 1984 interviewed by Arjo Klamer — The New Classical Macroeconomics — he said:

Frankly, I do not think that the rational expectations theorists are in the real world. Their approach is much too abstract.

Building models on rational expectations means we are either gods or idiots. Most of us know we are neither. So God may share Sargent’s and Wren-Lewis’s models, but they certainly aren’t my models.

In their attempted rescue operations, rational expectationists try to give the picture that only heterodox economists like yours truly are critical of the rational expectations hypothesis. But, on this, they are, simply, eh, wrong. Let’s listen to Nobel laureate Edmund Phelps — hardly a heterodox economist — and what he has to say (emphasis added):

Question: In a new volume with Roman Frydman, “Rethinking Expectations: The Way Forward for Macroeconomics,” you say the vast majority of macroeconomic models over the last four decades derailed your “microfoundations” approach. Can you explain what that is and how it differs from the approach that became widely accepted by the profession?

Answer: In the expectations-based framework that I put forward around 1968, we didn’t pretend we had a correct and complete understanding of how firms or employees formed expectations about prices or wages elsewhere. We turned to what we thought was a plausible and convenient hypothesis. For example, if the prices of a company’s competitors were last reported to be higher than in the past, it might be supposed that the company will expect their prices to be higher this time, too, but not that much. This is called “adaptive expectations:” You adapt your expectations to new observations but don’t throw out the past. If inflation went up last month, it might be supposed that inflation will again be high but not that high.

Q: So how did adaptive expectations morph into rational expectations?

A: The “scientists” from Chicago and MIT came along to say, we have a well-established theory of how prices and wages work. Before, we used a rule of thumb to explain or predict expectations: Such a rule is picked out of the air. They said, let’s be scientific. In their mind, the scientific way is to suppose price and wage setters form their expectations with every bit as much understanding of markets as the expert economist seeking to model, or predict, their behavior. The rational expectations approach is to suppose that the people in the market form their expectations in the very same way that the economist studying their behavior forms her expectations: on the basis of her theoretical model.

Q: And what’s the consequence of this putsch?

A: Craziness for one thing. You’re not supposed to ask what to do if one economist has one model of the market and another economist a different model. The people in the market cannot follow both economists at the same time. One, if not both, of the economists must be wrong. Another thing: It’s an important feature of capitalist economies that they permit speculation by people who have idiosyncratic views and an important feature of a modern capitalist economy that innovators conceive their new products and methods with little knowledge of whether the new things will be adopted — thus innovations. Speculators and innovators have to roll their own expectations. They can’t ring up the local professor to learn how. The professors should be ringing up the speculators and aspiring innovators. In short, expectations are causal variables in the sense that they are the drivers. They are not effects to be explained in terms of some trumped-up causes.

Q: So rather than live with variability, write a formula in stone!

A: What led to rational expectations was a fear of the uncertainty and, worse, the lack of understanding of how modern economies work. The rational expectationists wanted to bottle all that up and replace it with deterministic models of prices, wages, even share prices, so that the math looked like the math in rocket science. The rocket’s course can be modeled while a living modern economy’s course cannot be modeled to such an extreme. It yields up a formula for expectations that looks scientific because it has all our incomplete and not altogether correct understanding of how economies work inside of it, but it cannot have the incorrect and incomplete understanding of economies that the speculators and would-be innovators have.

Q: One of the issues I have with rational expectations is the assumption that we have perfect information, that there is no cost in acquiring that information. Yet the economics profession, including Federal Reserve policy makers, appears to have been hijacked by Robert Lucas.

A: You’re right that people are grossly uninformed, which is a far cry from what the rational expectations models suppose. Why are they misinformed? I think they don’t pay much attention to the vast information out there because they wouldn’t know what to do with it if they had it. The fundamental fallacy on which rational expectations models are based is that everyone knows how to process the information they receive according to the one and only right theory of the world. The problem is that we don’t have a “right” model that could be certified as such by the National Academy of Sciences. And as long as we operate in a modern economy, there can never be such a model.

Bloomberg

And this is what another non-heterodox economist, Willem Buiter, has to say about the state of the standard macroeconomic theory that builds on the twin assumptions of rational expectations and efficient markets:

In both the New Classical and New Keynesian approaches to monetary theory (and to aggregative macroeconomics in general), the strongest version of the efficient markets hypothesis (EMH) was maintained. This is the hypothesis that asset prices aggregate and fully reflect all relevant fundamental information, and thus provide the proper signals for resource allocation. Even during the seventies, eighties, nineties and noughties before 2007, the manifest failure of the EMH in many key asset markets was obvious to virtually all those whose cognitive abilities had not been warped by a modern Anglo-American Ph.D. education. But most of the profession continued to swallow the EMH hook, line and sinker, although there were influential advocates of reason throughout, including James Tobin, Robert Shiller, George Akerlof, Hyman Minsky, Joseph Stiglitz and behaviourist approaches to finance. The influence of the heterodox approaches from within macroeconomics and from other fields of economics on mainstream macroeconomics – the New Classical and New Keynesian approaches – was, however, strictly limited.

But let’s see how rational expectations fares as an empirical assumption. Empirical efforts at testing the correctness of the hypothesis have resulted in a series of studies that have more or less concluded that it is not consistent with the facts. In one of the more well-known and highly respected evaluation reviews, Michael Lovell (1986) concluded:

it seems to me that the weight of empirical evidence is sufficiently strong to compel us to suspend belief in the hypothesis of rational expectations, pending the accumulation of additional empirical evidence.

And this is how Nikolay Gertchev summarizes studies on the empirical correctness of the hypothesis:

More recently, it even has been argued that the very conclusions of dynamic models assuming rational expectations are contrary to reality: “the dynamic implications of many of the specifications that assume rational expectations and optimizing behavior are often seriously at odds with the data” (Estrella and Fuhrer 2002, p. 1013). It is hence clear that if taken as an empirical behavioral assumption, the RE hypothesis is plainly false; if considered only as a theoretical tool, it is unfounded and self-contradictory.

But how about the large mainstream literature on learning? Let me briefly address the issue.

The rational expectations hypothesis presupposes – basically for reasons of consistency – that agents have complete knowledge of all the relevant probability distribution functions. And when one tries to incorporate learning in these models – to take the heat off some of the criticism levelled against the hypothesis – it is always a very restricted kind of learning that is considered: learning in which truly unanticipated, surprising, new things never take place, only rather mechanical updatings of existing probability functions that increase the precision of already existing information sets.

Nothing really new happens in these ergodic models, where the statistical representation of learning and information is nothing more than a caricature of what takes place in the real-world target system. This follows from taking for granted that people’s decisions can be portrayed as based on an existing probability distribution, which by definition implies knowledge of every possible event that can be thought of as taking place (otherwise it is, in a strict mathematical-statistical sense, not really a probability distribution).

But in the real world it is – as shown again and again by behavioural and experimental economics – common to mistake a conditional distribution for a probability distribution. Such mistakes are impossible to make in the kinds of economic analysis – built on the rational expectations hypothesis – of which economists like Levine are such adamant propagators. On average, rational expectations agents are always correct. But truly new information will not only reduce the estimation error; it will actually change the entire estimation, and hence possibly the decisions made. To be truly new, information has to be unexpected. If not, it could simply be inferred from the already existing information set.

In rational expectations models new information is typically presented as something that only reduces the variance of the estimated parameter. But if new information means truly new information, it could actually increase our uncertainty and variance (information set (A, B) => (A, B, C)).

Truly new information gives birth to new probabilities, revised plans and decisions – something the rational expectations hypothesis cannot account for with its finite-sampling representation of incomplete information.
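The point that a genuinely surprising observation can increase, rather than reduce, measured uncertainty is easy to illustrate. The following sketch (my own, purely illustrative) extends an information set (A, B) with an unexpected third observation C and recomputes the estimates:

```python
import statistics

# Information set (A, B): two unsurprising observations
ab = [10.0, 12.0]
mean_ab = statistics.mean(ab)        # 11.0
var_ab = statistics.pvariance(ab)    # 1.0

# Information set (A, B, C): C is genuinely unexpected
abc = ab + [40.0]
mean_abc = statistics.mean(abc)      # ~20.67
var_abc = statistics.pvariance(abc)  # ~187.6

# Truly new information changed the whole estimate, and the
# measured uncertainty (variance) went up, not down.
print(var_ab, var_abc)
```

Nothing deep is going on mathematically, of course; the point is that the "learning as variance reduction" picture only holds when new observations are drawn from an already-known distribution.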

In the world of rational expectations, learning is like getting better and better at reciting the complete works of Shakespeare by heart – or at hitting the bull’s eye when playing darts. It presupposes that we have a complete list of the possible states of the world and that, by definition, mistakes are non-systematic (which, strictly speaking, follows from the assumption that “subjective” probability distributions equal the “objective” probability distribution). This is a rather uninteresting and trivial kind of learning. It is closed-world learning, synonymous with improving one’s adaptation to a world which is fundamentally unchanging. But in real, open-world situations, learning is more often about adapting to and trying to cope with genuinely new phenomena.

The rational expectations hypothesis presumes consistent behaviour, where expectations do not display any persistent errors. In the world of rational expectations we are always, on average, hitting the bull’s eye. In the more realistic, open systems view, there is always the possibility (danger) of making mistakes that may turn out to be systematic. It is because of this, presumably, that we put so much emphasis on learning in our modern knowledge societies.
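The possibility of persistent, systematic mistakes can be made concrete with a small simulation (my own sketch, not drawn from the post's sources): an agent forecasts next period's value with the running sample mean, which works fine in a stationary world but produces persistently one-sided errors after an unannounced structural break.

```python
import random

random.seed(1)

T, BREAK, SHIFT = 300, 150, 5.0

errors = []
total, n = 0.0, 0
for t in range(T):
    forecast = total / n if n else 0.0      # running-mean forecast
    level = SHIFT if t >= BREAK else 0.0    # unannounced regime change
    y = level + random.gauss(0.0, 1.0)
    errors.append(y - forecast)
    total += y
    n += 1

pre_bias = sum(errors[50:BREAK]) / (BREAK - 50)
post_bias = sum(errors[BREAK:BREAK + 100]) / 100

# Before the break, forecast errors average out near zero; after it,
# the agent keeps undershooting for a long stretch of time.
print(round(pre_bias, 2), round(post_bias, 2))
```

In the stationary regime the errors look exactly like the rational expectations story says they should. The moment the world genuinely changes, the same perfectly sensible learning rule delivers the systematic errors the hypothesis rules out.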

So, where does all this leave us? I think John Kay sums it up pretty well:

Prof Sargent and colleagues appropriated the term “rational expectations” for their answer. Suppose the economic world evolves according to some predetermined model, in which uncertainties are “known unknowns” that can be described by probability distributions. Then economists could gradually deduce the properties of this model, and businesses and individuals would naturally form expectations in that light. If they did not, they would be missing obvious opportunities for advantage.

This approach, which postulates a universal explanation into which economists have privileged insight, was as influential as it was superficially attractive. But a scientific idea is not seminal because it influences the research agenda of PhD students. An important scientific advance yields conclusions that differ from those derived from other theories, and establishes that these divergent conclusions are supported by observation. Yet as Prof Sargent disarmingly observed, “such empirical tests were rejecting too many good models” in the programme he had established with fellow Nobel laureates Bob Lucas and Ed Prescott. In their world, the validity of a theory is demonstrated if, after the event, and often with torturing of data and ad hoc adjustments that are usually called “imperfections”, it can be reconciled with already known facts – “calibrated”. Since almost everything can be “explained” in this way, the theory is indeed universal; no other approach is necessary, or even admissible …

Rational expectations consequently fail for the same reason communism failed – the arrogance and ignorance of the monopolist.

David Ricardo

25 February, 2014 at 08:56 | Posted in Economics | 1 Comment

Tomorrow Sveriges Radio P1 broadcasts – at 21:03-21:37 – the second programme in a series on the history of economics.

Yours truly takes part.
 

It is said that the stockbroker David Ricardo became an economist because he was bored on holiday. That may be a tall tale, but it is true that his way of reasoning logically and building models of reality set the style for those who followed.

Hicks on the inapplicability of probability calculus

24 February, 2014 at 18:48 | Posted in Economics, Statistics & Econometrics | 11 Comments

To understand real world “non-routine” decisions and unforeseeable changes in behaviour, ergodic probability distributions are of no avail. In a world full of genuine uncertainty — where real historical time rules the roost — the probabilities that ruled the past are not necessarily those that will rule the future.

When we cannot accept that the observations, along the time-series available to us, are independent … we have, in strict logic, no more than one observation, all of the separate items having to be taken together. For the analysis of that the probability calculus is useless; it does not apply … I am bold enough to conclude, from these considerations that the usefulness of ‘statistical’ or ‘stochastic’ methods in economics is a good deal less than is now conventionally supposed … We should always ask ourselves, before we apply them, whether they are appropriate to the problem in hand. Very often they are not … The probability calculus is no excuse for forgetfulness.

John Hicks, Causality in Economics, 1979:121

To simply assume that economic processes are ergodic – and a fortiori, in any relevant sense, timeless – is not a sensible way of dealing with the kind of genuine uncertainty that permeates open systems such as economies.
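A standard way to make the ergodicity point concrete is the multiplicative coin toss popularized by Ole Peters' work on ergodicity economics; the sketch below is my own rendering of it. A bet that gains 50% on heads and loses 40% on tails has a positive ensemble-average growth rate (0.5 * 1.5 + 0.5 * 0.6 = 1.05), yet almost every individual trajectory shrinks, because the time-average growth factor is sqrt(1.5 * 0.6) ≈ 0.95 < 1. The expectation over the ensemble says nothing about what happens to a single agent living through time.

```python
import random, statistics

random.seed(7)

UP, DOWN, T, N = 1.5, 0.6, 100, 10000

# Ensemble perspective: expected one-step growth factor
ensemble_growth = 0.5 * UP + 0.5 * DOWN   # 1.05 > 1

# Time perspective: per-step growth factor of a single long trajectory
time_growth = (UP * DOWN) ** 0.5          # ~0.949 < 1

# Simulate N gamblers, each starting with wealth 1
finals = []
for _ in range(N):
    w = 1.0
    for _ in range(T):
        w *= UP if random.random() < 0.5 else DOWN
    finals.append(w)

# The typical (median) gambler is nearly wiped out, even though
# the mathematical expectation of wealth grows every single period.
print(ensemble_growth, time_growth, statistics.median(finals))
```

In an ergodic world the two perspectives would coincide; here they point in opposite directions, which is exactly why probabilities that described the ensemble of past outcomes can be useless as a guide for an individual's future.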

Added 25 February: Commenting on this article, Paul Davidson writes:

After reading my article on the fallacy of rational expectations, Hicks wrote to me in a letter dated 12 February 1983 in which he said “I have just been reading your RE [rational expectations] paper … I do like it very much … You have now rationalized my suspicions and shown me that I missed a chance of labeling my own point of view as nonergodic. One needs a name like that to ram a point home.”

On abstraction and idealization in neoclassical economics

23 February, 2014 at 20:55 | Posted in Economics | 2 Comments

Of course macroeconomists research many things, and only a minority are using New Keynesian models, and probably even some of those do not really need the New Keynesian bit. That is the great thing about abstraction. Working with what can be called ‘flex price’ models does not imply that you think price rigidity is unimportant, but instead that it can often be ignored if you want to focus on other processes.

Simon Wren-Lewis

It would, of course, be interesting to know on what reasoning Wren-Lewis based this rather unsubstantiated view.

When applying deductivist thinking to economics, neoclassical economists like Wren-Lewis usually set up “as if” models based on a set of tight axiomatic assumptions from which consistent and precise inferences are made. The beauty of this procedure is of course that if the axiomatic premises are true, the conclusions necessarily follow. The snag is that if the models are to be relevant, we also have to argue that their precision and rigour still hold when they are applied to real-world situations. They often don’t. When addressing real economies, the idealizations (what Wren-Lewis, wrongly, calls abstractions) necessary for the deductivist machinery to work – e.g. “flex price” models – simply don’t hold.

If the real world is fuzzy, vague and indeterminate, then why should our models build upon a desire to describe it as precise and predictable? The logic of idealization is a marvellous tool in mathematics and axiomatic-deductivist systems, but a poor guide for action in real-world systems, in which concepts and entities are without clear boundaries and continually interact and overlap.

Or as Hans Albert has it on the neoclassical style of thought:

In everyday situations, if, in answer to an inquiry about the weather forecast, one is told that the weather will remain the same as long as it does not change, then one does not normally go away with the impression of having been particularly well informed, although it cannot be denied that the answer refers to an interesting aspect of reality, and, beyond that, it is undoubtedly true …

We are not normally interested merely in the truth of a statement, nor merely in its relation to reality; we are fundamentally interested in what it says, that is, in the information that it contains …

Information can only be obtained by limiting logical possibilities; and this in principle entails the risk that the respective statement may be exposed as false. It is even possible to say that the risk of failure increases with the informational content, so that precisely those statements that are in some respects most interesting, the nomological statements of the theoretical hard sciences, are most subject to this risk. The certainty of statements is best obtained at the cost of informational content, for only an absolutely empty and thus uninformative statement can achieve the maximal logical probability …

The neoclassical style of thought – with its emphasis on thought experiments, reflection on the basis of illustrative examples and logically possible extreme cases, its use of model construction as the basis of plausible assumptions, as well as its tendency to decrease the level of abstraction, and similar procedures – appears to have had such a strong influence on economic methodology that even theoreticians who strongly value experience can only free themselves from this methodology with difficulty …

Science progresses through the gradual elimination of errors from a large offering of rivalling ideas, the truth of which no one can know from the outset. The question of which of the many theoretical schemes will finally prove to be especially productive and will be maintained after empirical investigation cannot be decided a priori. Yet to be useful at all, it is necessary that they are initially formulated so as to be subject to the risk of being revealed as errors. Thus one cannot attempt to preserve them from failure at every price. A theory is scientifically relevant first of all because of its possible explanatory power, its performance, which is coupled with its informational content …

The connections sketched out above are part of the general logic of the sciences and can thus be applied to the social sciences. Above all, with their help, it appears to be possible to illuminate a methodological peculiarity of neoclassical thought in economics, which probably stands in a certain relation to the isolation from sociological and social-psychological knowledge that has been cultivated in this discipline for some time: the model Platonism of pure economics, which comes to expression in attempts to immunize economic statements and sets of statements (models) from experience through the application of conventionalist strategies …

Clearly, it is possible to interpret the ‘presuppositions’ of a theoretical system … not as hypotheses, but simply as limitations to the area of application of the system in question. Since a relationship to reality is usually ensured by the language used in economic statements, in this case the impression is generated that a content-laden statement about reality is being made, although the system is fully immunized and thus without content. In my view that is often a source of self-deception in pure economic thought …

A further possibility for immunizing theories consists in simply leaving open the area of application of the constructed model so that it is impossible to refute it with counter examples. This of course is usually done without a complete knowledge of the fatal consequences of such methodological strategies for the usefulness of the theoretical conception in question, but with the view that this is a characteristic of especially highly developed economic procedures: the thinking in models, which, however, among those theoreticians who cultivate neoclassical thought, in essence amounts to a new form of Platonism.
