Ergodicity and stationarity in random processes (super wonkish)
30 Apr, 2013 at 18:25 | Posted in Statistics & Econometrics | 5 Comments
Deductive models leading us astray
30 Apr, 2013 at 09:39 | Posted in Economics, Theory of Science & Methodology | 2 Comments

Oxford professor John Kay has a very interesting article on why economists have tended to go astray in their – as my old mentor Erik Dahmén used to say – tool sheds:
Consistency and rigour are features of a deductive approach, which draws conclusions from a group of axioms – and whose empirical relevance depends entirely on the universal validity of the axioms. The only descriptions that fully meet the requirements of consistency and rigour are completely artificial worlds, such as the “plug-and-play” environments of DSGE – or the Grand Theft Auto computer game.
For many people, deductive reasoning is the mark of science: induction – in which the argument is derived from the subject matter – is the characteristic method of history or literary criticism. But this is an artificial, exaggerated distinction. Scientific progress – not just in applied subjects such as engineering and medicine but also in more theoretical subjects including physics – is frequently the result of observation that something does work, which runs far ahead of any understanding of why it works.
Not within the economics profession. There, deductive reasoning based on logical inference from a specific set of a priori deductions is “exactly the right way to do things”. What is absurd is not the use of the deductive method but the claim to exclusivity made for it. This debate is not simply about mathematics versus poetry. Deductive reasoning necessarily draws on mathematics and formal logic: inductive reasoning, based on experience and above all careful observation, will often make use of statistics and mathematics.
Economics is not a technique in search of problems but a set of problems in need of solution. Such problems are varied and the solutions will inevitably be eclectic. Such pragmatic thinking requires not just deductive logic but an understanding of the processes of belief formation, of anthropology, psychology and organisational behaviour, and meticulous observation of what people, businesses and governments do.
The belief that models are not just useful tools but are capable of yielding comprehensive and universal descriptions of the world blinded proponents to realities that had been staring them in the face. That blindness made a big contribution to our present crisis, and conditions our confused responses to it. Economists – in government agencies as well as universities – were obsessively playing Grand Theft Auto while the world around them was falling apart.
The article is essential reading for all those who want to understand why mainstream – neoclassical – economists have actively contributed to causing today's economic crisis rather than to solving it.
Perhaps this becomes easier to grasp when one considers what one of its main proponents – Robert Lucas – maintained as early as 2003:
My thesis in this lecture is that macroeconomics in this original sense has succeeded: its central problem of depression-prevention has been solved, for all practical purposes, and has in fact been solved for many decades.
And this comes from an economist who has built his whole career on the assumption that people are hyper-rational “robot imitations” with rational expectations and a next to perfect ability to process information. Mirabile dictu!
The kings and queens around my room with their quiet dirty looks
28 Apr, 2013 at 11:45 | Posted in Varia | Comments Off on The kings and queens around my room with their quiet dirty looks
Money is … uhmm … ehhh …
27 Apr, 2013 at 15:35 | Posted in Economics | Comments Off on Money is … uhmm … ehhh …

Macro 101
27 Apr, 2013 at 15:01 | Posted in Economics | Comments Off on Macro 101
1. The economy isn’t like an individual family that earns a certain amount and spends some other amount, with no relationship between the two. My spending is your income and your spending is my income. If we both slash spending, both of our incomes fall.
2. We are now in a situation in which many people have cut spending, either because they chose to or because their creditors forced them to, while relatively few people are willing to spend more. The result is depressed incomes and a depressed economy, with millions of willing workers unable to find jobs.
3. Things aren’t always this way, but when they are, the government is not in competition with the private sector. Government purchases don’t use resources that would otherwise be producing private goods, they put unemployed resources to work. Government borrowing doesn’t crowd out private borrowing, it puts idle funds to work. As a result, now is a time when the government should be spending more, not less. If we ignore this insight and cut government spending instead, the economy will shrink and unemployment will rise. In fact, even private spending will shrink, because of falling incomes.
4. This view of our problems has made correct predictions over the past four years, while alternative views have gotten it all wrong. Budget deficits haven’t led to soaring interest rates (and the Fed’s “money-printing” hasn’t led to inflation); austerity policies have greatly deepened economic slumps almost everywhere they have been tried.
5. Yes, the government must pay its bills in the long run. But spending cuts and/or tax increases should wait until the economy is no longer depressed, and the private sector is willing to spend enough to produce full employment.
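The accounting logic in points 1–3 can be made concrete with a minimal sketch (my own toy numbers, not part of the quoted text): two agents whose spending is each other's income, so that cutting spending lowers both incomes by more than the initial cut.

```python
# Minimal sketch (illustrative numbers only): my spending is your income.
# Each agent's income equals the other agent's autonomous spending plus the
# fraction (mpc) of the other agent's income that gets respent.

def incomes(autonomous_a, autonomous_b, mpc=0.6, rounds=200):
    """Iterate the spending -> income loop until it settles down."""
    income_a = income_b = 0.0
    for _ in range(rounds):
        income_a = autonomous_b + mpc * income_b  # A earns what B spends
        income_b = autonomous_a + mpc * income_a  # B earns what A spends
    return round(income_a), round(income_b)

print(incomes(100, 100))  # baseline: both incomes settle at 250
print(incomes(80, 80))    # both cut spending by 20: incomes fall to 200
```

A 20-unit cut in each agent's autonomous spending ends up lowering each income by 50 – the multiplier logic behind point 3's warning that cutting government spending in a slump shrinks the economy.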
What is money?
27 Apr, 2013 at 14:26 | Posted in Economics | Comments Off on What is money?
(h/t Jan Milch)
The Rogoff-Reinhart scandal and the limits of number-crunching
27 Apr, 2013 at 12:42 | Posted in Varia | 1 Comment

All economics is, in the end, the sum of human actions, whose logic rarely fits into an Excel spreadsheet …
Not even the most mathematically perfect equation can be used as an excuse for skipping the most important thing: a manual sanity check. If the car's GPS tells you to turn right off a cliff, that does not necessarily make it a good idea. The explanation for why the orthodox faith in numbers keeps gaining ground may rather be that it is a convenient way of getting one's way. For it is with economic statistics as with so much else: we choose to see what we want to see.
Andreas Cervenka, SvD Näringsliv
Axel Leijonhufvud interviewing Hayek (!)
26 Apr, 2013 at 17:31 | Posted in Economics, Politics & Society | Comments Off on Axel Leijonhufvud interviewing Hayek (!)
(h/t Lord Keynes)
Growth and the environment
26 Apr, 2013 at 16:01 | Posted in Varia | Comments Off on Growth and the environment

Yours truly was invited yesterday to a debate at Handelshögskolan (the School of Business, Economics and Law) in Gothenburg. The theme was growth and the environment. I don't know whether I managed to convince everyone in the audience that it actually is possible to live good and full lives without overconsuming.
Those who were perhaps not convinced by the theoretical sources of inspiration I invoked – Soddy, Georgescu-Roegen, Boulding and others – may at least find some food for thought in Nina Hedenius' wonderful documentary Gubben i stugan ('The old man in the cottage') – nowadays downloadable via SVT's Öppet arkiv – about the life of the retired forest worker Ragnar in the Finn forests of Dalarna.
The state of modern macroeconomics – indescribable misery
23 Apr, 2013 at 12:59 | Posted in Economics | 3 Comments

Neoclassical economic theory today is in the story-telling business whereby economic theorists create make-believe analogue models of the target system – usually conceived as the real economic system. This modeling activity is considered useful and essential. Since fully-fledged experiments on a societal scale as a rule are prohibitively expensive, ethically indefensible or unmanageable, economic theorists have to substitute experimenting with something else. To understand and explain relations between different entities in the real economy the predominant strategy is to build models and make things happen in these “analogue-economy models” rather than engineering things happening in real economies.
Formalistic deductive “Glasperlenspiel” can be very impressive and seductive. But in the realm of science it ought to be considered of little or no value to simply make claims about the model and lose sight of reality.
Neoclassical economics has long since given up on the real world and contents itself with proving things about thought-up worlds. Empirical evidence only plays a minor role in economic theory, where models largely function as a substitute for empirical evidence. But “facts kick”, as Gunnar Myrdal used to say. Hopefully humbled by the manifest failure of its theoretical pretences, the one-sided, almost religious, insistence on axiomatic-deductivist modeling as the only scientific activity worthy of pursuing in economics will give way to methodological pluralism based on ontological considerations rather than formalistic tractability.
The financial crisis of 2007-08 took most laymen and economists by surprise. What was it that went wrong with our macroeconomic models, since they obviously did not foresee the collapse or even make it conceivable?
In modern neoclassical macroeconomics – Dynamic Stochastic General Equilibrium (DSGE), New Synthesis, New Classical and “New Keynesian” – variables are treated as if drawn from a known “data-generating process” that unfolds over time and for which we therefore allegedly have access to heaps of historical time-series data.
Modern macroeconomics obviously did not anticipate the enormity of the problems that unregulated “efficient” financial markets created. Why? Because it builds on the myth that we know the “data-generating process” and that we can describe the variables of our evolving economies as draws from an urn containing stochastic probability functions with known means and variances.
In the end this is what it all boils down to. We all know that many activities, relations, processes and events are genuinely uncertain. The data do not unequivocally single out one decision as the only “rational” one. Neither the economist, nor the deciding individual, can fully pre-specify how people will decide when facing uncertainties and ambiguities that are ontological facts of the way the world works.
Some macroeconomists, however, still want to be able to use their hammer. So they decide to pretend that the world looks like a nail, and pretend that uncertainty can be reduced to risk. So they construct their mathematical models on that assumption.
I think that macroeconomists ought to be more critical of the present state of macroeconomics than they are. If macroeconomic models – no matter of what ilk – build on microfoundational assumptions of representative actors, rational expectations, market clearing and equilibrium, and we know that real people and markets cannot be expected to obey these assumptions, the warrant for bridging conclusions or hypotheses about causally relevant mechanisms or regularities from model to real world is obviously lacking. Incompatibility between actual behaviour and the behaviour in macroeconomic models built on representative actors and rational-expectations microfoundations is not a symptom of “irrationality”. It rather shows the futility of trying to represent real-world target systems with models flagrantly at odds with reality.
A gadget is just a gadget – and brilliantly silly models do not help us work on the fundamental issues of modern economies. So let’s take a critical look at modern macroeconomics and try to pinpoint what makes its state today such an indescribable misery.
Real business cycles theory
Real business cycles theory (RBC) basically says that economic cycles are caused by technology-induced changes in productivity. It says that employment goes up or down because people choose to work more when productivity is high and less when it’s low. This is of course nothing but pure nonsense. In yours truly’s History of Economic Theories (4th ed, 2007, p. 405) it was concluded that
the problem is that it has turned out to be very difficult to empirically verify the theory’s view of economic fluctuations as being effects of rational actors’ optimal intertemporal choices … Empirical studies have not been able to corroborate the assumption of the sensitivity of labour supply to changes in intertemporal relative prices. Most studies rather point to expected changes in real wages having only rather little influence on the supply of labour.
And this is what Lawrence Summers – in Some Skeptical Observations on Real Business Cycle Theory – had to say about RBC:
The increasing ascendancy of real business cycle theories of various stripes, with their common view that the economy is best modeled as a floating Walrasian equilibrium, buffeted by productivity shocks, is indicative of the depths of the divisions separating academic macroeconomists …
If these theories are correct, they imply that the macroeconomics developed in the wake of the Keynesian Revolution is well confined to the ashbin of history. And they suggest that most of the work of contemporary macroeconomists is worth little more than that of those pursuing astrological science …
The appearance of Ed Prescott’s stimulating paper, “Theory Ahead of Business Cycle Measurement,” affords an opportunity to assess the current state of real business cycle theory and to consider its prospects as a foundation for macroeconomic analysis …
My view is that business cycle models of the type urged on us by Prescott have nothing to do with the business cycle phenomena observed in The United States or other capitalist economies …
Prescott’s growth model is not an inconceivable representation of reality. But to claim that its parameters are securely tied down by growth and micro observations seems to me a gross overstatement. The image of a big loose tent flapping in the wind comes to mind …
In Prescott’s model, the central driving force behind cyclical fluctuations is technological shocks. The propagation mechanism is intertemporal substitution in employment. As I have argued so far, there is no independent evidence from any source for either of these phenomena …
Imagine an analyst confronting the market for ketchup. Suppose she or he decided to ignore data on the price of ketchup. This would considerably increase the analyst’s freedom in accounting for fluctuations in the quantity of ketchup purchased … It is difficult to believe that any explanation of fluctuations in ketchup sales that did not confront price data would be taken seriously, at least by hard-headed economists.
Yet Prescott offers an exercise in price-free economics … Others have confronted models like Prescott’s to data on prices with what I think can fairly be labeled dismal results. There is simply no evidence to support any of the price effects predicted by the model …
Improvement in the track record of macroeconomics will require the development of theories that can explain why exchange sometimes works and other times breaks down. Nothing could be more counterproductive in this regard than a lengthy professional detour into the analysis of stochastic Robinson Crusoes.
Thomas Sargent was awarded The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel for 2011 for his “empirical research on cause and effect in the macroeconomy”. In an interview with Sargent in The Region, one could read the following defense of “modern macro”:
Sargent: I know that I’m the one who is supposed to be answering questions, but perhaps you can tell me what popular criticisms of modern macro you have in mind.
Rolnick: OK, here goes. Examples of such criticisms are that modern macroeconomics makes too much use of sophisticated mathematics to model people and markets; that it incorrectly relies on the assumption that asset markets are efficient in the sense that asset prices aggregate information of all individuals; that the faith in good outcomes always emerging from competitive markets is misplaced; that the assumption of “rational expectations” is wrongheaded because it attributes too much knowledge and forecasting ability to people; that the modern macro mainstay “real business cycle model” is deficient because it ignores so many frictions and imperfections and is useless as a guide to policy for dealing with financial crises; that modern macroeconomics has either assumed away or shortchanged the analysis of unemployment; that the recent financial crisis took modern macro by surprise; and that macroeconomics should be based less on formal decision theory and more on the findings of “behavioral economics.” Shouldn’t these be taken seriously?
Sargent: Sorry, Art, but aside from the foolish and intellectually lazy remark about mathematics, all of the criticisms that you have listed reflect either woeful ignorance or intentional disregard for what much of modern macroeconomics is about and what it has accomplished. That said, it is true that modern macroeconomics uses mathematics and statistics to understand behavior in situations where there is uncertainty about how the future will unfold from the past. But a rule of thumb is that the more dynamic, uncertain and ambiguous is the economic environment that you seek to model, the more you are going to have to roll up your sleeves, and learn and use some math. That’s life.
Are these the words of an empirical macroeconomist? To me it sounds like the same old axiomatic-deductivist mumbo jumbo that parades as economic science today.
Calibration
There are many kinds of useless economics held in high regard within the mainstream economics establishment today. Few – if any – deserve it less than the macroeconomic theory – mostly connected with Nobel laureates Finn Kydland, Robert Lucas, Edward Prescott and Thomas Sargent – called calibration.
In an interview conducted by George W. Evans and Seppo Honkapohja (Macroeconomic Dynamics 2005, vol. 9), Thomas Sargent says:
Evans and Honkapohja: What were the profession’s most important responses to the Lucas Critique?
Sargent: There were two. The first and most optimistic response was complete rational expectations econometrics. A rational expectations equilibrium is a likelihood function. Maximize it.
Evans and Honkapohja: Why optimistic?
Sargent: You have to believe in your model to use the likelihood function. It provides a coherent way to estimate objects of interest (preferences, technologies, information sets, measurement processes) within the context of a trusted model.
Evans and Honkapohja: What was the second response?
Sargent: Various types of calibration. Calibration is less optimistic about what your theory can accomplish because you would only use it if you didn’t fully trust your entire model, meaning that you think your model is partly misspecified or incompletely specified, or if you trusted someone else’s model and data set more than your own. My recollection is that Bob Lucas and Ed Prescott were initially very enthusiastic about rational expectations econometrics. After all, it simply involved imposing on ourselves the same high standards we had criticized the Keynesians for failing to live up to. But after about five years of doing likelihood ratio tests on rational expectations models, I recall Bob Lucas and Ed Prescott both telling me that those tests were rejecting too many good models. The idea of calibration is to ignore some of the probabilistic implications of your model but to retain others. Somehow, calibration was intended as a balanced response to professing that your model, although not correct, is still worthy as a vehicle for quantitative policy analysis….
Evans and Honkapohja: Do you think calibration in macroeconomics was an advance?
Sargent: In many ways, yes. I view it as a constructive response to Bob’s remark that “your likelihood ratio tests are rejecting too many good models”. In those days… there was a danger that skeptics and opponents would misread those likelihood ratio tests as rejections of an entire class of models, which of course they were not…. The unstated case for calibration was that it was a way to continue the process of acquiring experience in matching rational expectations models to data by lowering our standards relative to maximum likelihood, and emphasizing those features of the data that our models could capture. Instead of trumpeting their failures in terms of dismal likelihood ratio statistics, celebrate the features that they could capture and focus attention on the next unexplained feature that ought to be explained. One can argue that this was a sensible response… a sequential plan of attack: let’s first devote resources to learning how to create a range of compelling equilibrium models to incorporate interesting mechanisms. We’ll be careful about the estimation in later years when we have mastered the modelling technology…
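Sargent's contrast between the two responses can be made concrete with a toy sketch (my own illustration with invented data – not Sargent's actual procedure): fitting an AR(1) model two ways, once by maximizing the full likelihood, once by “calibrating” the parameter to reproduce a single chosen moment while leaving the model's other probabilistic implications untested.

```python
# Toy sketch: two ways of pinning down rho in y_t = rho*y_{t-1} + e_t.
import numpy as np

rng = np.random.default_rng(0)
rho_true, n = 0.7, 2000
y = np.zeros(n)
for t in range(1, n):
    y[t] = rho_true * y[t - 1] + rng.normal()

def neg_log_likelihood(rho):
    # Conditional Gaussian likelihood (up to constants), unit-variance shocks.
    resid = y[1:] - rho * y[:-1]
    return 0.5 * np.sum(resid ** 2)

# Response 1: "a rational expectations equilibrium is a likelihood function -- maximize it."
grid = np.linspace(-0.99, 0.99, 991)
rho_mle = grid[np.argmin([neg_log_likelihood(r) for r in grid])]

# Response 2: calibration -- the model implies Var(y) = 1/(1 - rho^2), so pick
# the rho that matches the sample variance and ignore every other implication.
rho_cal = np.sqrt(max(0.0, 1.0 - 1.0 / y.var()))

print(f"true rho = {rho_true}, MLE rho = {rho_mle:.3f}, calibrated rho = {rho_cal:.3f}")
```

In this trivial case the two numbers come out close, but only the likelihood-based route confronts the model with all of the data's probabilistic structure; the calibrated version never asks whether the rest of the model fits.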
But is the Lucas-Kydland-Prescott-Sargent calibration really an advance?
Let’s see what two eminent econometricians have to say. In the Journal of Economic Perspectives (1996, vol. 10) Lars Peter Hansen and James J. Heckman write:
It is only under very special circumstances that a micro parameter such as the intertemporal elasticity of substitution or even a marginal propensity to consume out of income can be ‘plugged into’ a representative consumer model to produce an empirically concordant aggregate model … What credibility should we attach to numbers produced from their ‘computational experiments’, and why should we use their ‘calibrated models’ as a basis for serious quantitative policy evaluation? … There is no filing cabinet full of robust micro estimates ready to use in calibrating dynamic stochastic equilibrium models … The justification for what is called ‘calibration’ is vague and confusing.
This is the view of econometric methodologist Kevin Hoover:
The calibration methodology, to date, lacks any discipline as stern as that imposed by econometric methods.
Error-probabilistic statistician Aris Spanos – in Error and Inference (Mayo & Spanos, 2010, p. 240) – is no less critical:
Given that “calibration” purposefully forsakes error probabilities and provides no way to assess the reliability of inference, how does one assess the adequacy of the calibrated model? …
The idea that it should suffice that a theory “is not obscenely at variance with the data” (Sargent, 1976, p. 233) is to disregard the work that statistical inference can perform in favor of some discretional subjective appraisal … it hardly recommends itself as an empirical methodology that lives up to the standards of scientific objectivity
And this is the verdict of Nobel laureate Paul Krugman:
The point is that if you have a conceptual model of some aspect of the world, which you know is at best an approximation, it’s OK to see what that model would say if you tried to make it numerically realistic in some dimensions.
But doing this gives you very little help in deciding whether you are more or less on the right analytical track. I was going to say no help, but it is true that a calibration exercise is informative when it fails: if there’s no way to squeeze the relevant data into your model, or the calibrated model makes predictions that you know on other grounds are ludicrous, something was gained. But no way is calibration a substitute for actual econometrics that tests your view about how the world works.
In physics it may possibly not be straining credulity too much to model processes as ergodic – where time and history do not really matter – but in social and historical sciences it is obviously ridiculous. If societies and economies were ergodic worlds, why do econometricians fervently discuss things such as structural breaks and regime shifts? That they do is an indication of how unrealistic it is to treat open systems as analyzable with ergodic concepts.
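The point about structural breaks can be illustrated with a minimal numerical sketch (invented numbers, only meant to show why a single time average is a poor guide once the underlying process shifts regime):

```python
# Minimal sketch: a series with a structural break. Treating it as draws
# from one urn with a fixed, knowable mean is exactly the ergodicity/
# stationarity assumption questioned in the text.
import numpy as np

rng = np.random.default_rng(1)
pre  = rng.normal(loc=2.0, scale=1.0, size=500)   # "old regime"
post = rng.normal(loc=-1.0, scale=2.0, size=500)  # after a regime shift
y = np.concatenate([pre, post])

print(f"mean over the full sample:     {y.mean():+.2f}")
print(f"mean of the pre-break regime:  {pre.mean():+.2f}")
print(f"mean of the post-break regime: {post.mean():+.2f}")
# The single full-sample average describes neither regime: a time average
# computed over a non-stationary history is not the moment governing the future.
```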
The future is not reducible to a known set of prospects. It is not like sitting at the roulette table and calculating what the future outcomes of spinning the wheel will be. Reading Sargent and other calibrationists one comes to think of Robert Clower’s apt remark that
much economics is so far removed from anything that remotely resembles the real world that it’s often difficult for economists to take their own subject seriously.
Instead of assuming calibration and rational expectations to be right, one ought to confront the hypothesis with the available evidence. It is not enough to construct models. Anyone can construct models. To be seriously interesting, models have to come with an aim. They have to have an intended use. If the intention of calibration and rational expectations is to help us explain real economies, it has to be evaluated from that perspective. A model or hypothesis without a specific applicability is not really deserving of our interest.
To say, as Edward Prescott does, that
one can only test if some theory, whether it incorporates rational expectations or, for that matter, irrational expectations, is or is not consistent with observations
is not enough. Without strong evidence all kinds of absurd claims and nonsense may pretend to be science. We have to demand more of a justification than this rather watered-down version of “anything goes” when it comes to rationality postulates. If one proposes rational expectations one also has to support its underlying assumptions. No such support is given, which makes it rather puzzling how rational expectations has become the standard modeling assumption made in much of modern macroeconomics. Perhaps the reason is, as Paul Krugman has it, that economists often mistake
beauty, clad in impressive looking mathematics, for truth.
But I think Prescott’s view is also the reason why calibration economists are not particularly interested in empirical examinations of how real choices and decisions are made in real economies. In the hands of Lucas, Prescott and Sargent, rational expectations has been transformed from an – in principle – testable hypothesis into an irrefutable proposition. Irrefutable propositions may be comfortable – like religious convictions or ideological dogmas – but they are not science.
“New Keynesianism”
Not that long ago, Paul Krugman had a post up on his blog discussing “New Keynesian” macroeconomics and the definition of neoclassical economics:
So, what is neoclassical economics? … I think we mean in practice economics based on maximization-with-equilibrium. We imagine an economy consisting of rational, self-interested players, and suppose that economic outcomes reflect a situation in which each player is doing the best he, she, or it can given the actions of all the other players …
Some economists really really believe that life is like this — and they have a significant impact on our discourse. But the rest of us are well aware that this is nothing but a metaphor; nonetheless, most of what I and many others do is sorta-kinda neoclassical because it takes the maximization-and-equilibrium world as a starting point or baseline, which is then modified — but not too much — in the direction of realism.
This is, not to put too fine a point on it, very much true of Keynesian economics as practiced … New Keynesian models are intertemporal maximization modified with sticky prices and a few other deviations …
Why do things this way? Simplicity and clarity. In the real world, people are fairly rational and more or less self-interested; the qualifiers are complicated to model, so it makes sense to see what you can learn by dropping them. And dynamics are hard, whereas looking at the presumed end state of a dynamic process — an equilibrium — may tell you much of what you want to know.
Being myself sorta-kinda Keynesian I find this analysis utterly unconvincing. Let me try to elaborate on why.
Macroeconomic models may be an informative tool for research. But if practitioners of “New Keynesian” macroeconomics do not investigate and make an effort to provide a justification for the credibility of the assumptions on which they erect their building, it will not fulfill its tasks. There is a gap between its aspirations and its accomplishments, and without more supportive evidence to substantiate its claims, critics will continue to consider its ultimate argument a mixture of rather unhelpful metaphors and metaphysics. Maintaining that economics is a science in the “true knowledge” business, I remain a skeptic of the pretences and aspirations of “New Keynesian” macroeconomics. So far, I cannot really see that it has yielded very much in terms of realistic and relevant economic knowledge.
The marginal return on its ever higher technical sophistication in no way makes up for the lack of serious underlabouring of its deeper philosophical and methodological foundations. The rather one-sided emphasis on usefulness and its concomitant instrumentalist justification cannot hide the fact that “New Keynesians” cannot give supportive evidence for their considering it fruitful to analyze macroeconomic structures and events as the aggregated result of optimizing representative actors. After having analyzed some of its ontological and epistemological foundations, I cannot but conclude that “New Keynesian” macroeconomics on the whole has not delivered anything other than unreal and irrelevant “as if” models.
Unless we can show that the mechanisms or causes that we isolate and handle in our microfounded macromodels are stable in the sense that they do not change when we “export” them to our “target systems”, they only hold under ceteris paribus conditions and are a fortiori of limited value to our understanding, explanations or predictions of real economic systems. Or as the always eminently quotable Keynes wrote in Treatise on Probability (1921):
The kind of fundamental assumption about the character of material laws, on which scientists appear commonly to act, seems to me to be [that] the system of the material universe must consist of bodies … such that each of them exercises its own separate, independent, and invariable effect, a change of the total state being compounded of a number of separate changes each of which is solely due to a separate portion of the preceding state … Yet there might well be quite different laws for wholes of different degrees of complexity, and laws of connection between complexes which could not be stated in terms of laws connecting individual parts … If different wholes were subject to different laws qua wholes and not simply on account of and in proportion to the differences of their parts, knowledge of a part could not lead, it would seem, even to presumptive or probable knowledge as to its association with other parts … These considerations do not show us a way by which we can justify induction … /427 No one supposes that a good induction can be arrived at merely by counting cases. The business of strengthening the argument chiefly consists in determining whether the alleged association is stable, when accompanying conditions are varied … /468 In my judgment, the practical usefulness of those modes of inference … on which the boasted knowledge of modern science depends, can only exist … if the universe of phenomena does in fact present those peculiar characteristics of atomism and limited variety which appears more and more clearly as the ultimate result to which material science is tending.
Science should help us penetrate to “the true process of causation lying behind current events” and disclose “the causal forces behind the apparent facts” [Keynes 1971-89 vol XVII:427]. We should look out for causal relations. But models can never be more than a starting point in that endeavour. There is always the possibility that there are other variables – of vital importance and, although perhaps unobservable and non-additive, not necessarily epistemologically inaccessible – that were not considered for the model.
This is a more fundamental and radical problem than the celebrated “Lucas critique” has suggested. It is not a question of whether deep parameters, absent on the macro level, exist in “tastes” and “technology” on the micro level. It goes deeper. Real-world social systems are not governed by stable causal mechanisms or capacities. It is the criticism that Keynes first launched against the “atomistic fallacy” already in the 1920s:
The atomic hypothesis which has worked so splendidly in Physics breaks down in Psychics. We are faced at every turn with the problems of Organic Unity, of Discreteness, of Discontinuity – the whole is not equal to the sum of the parts, comparisons of quantity fails us, small changes produce large effects, the assumptions of a uniform and homogeneous continuum are not satisfied. Thus the results of Mathematical Psychics turn out to be derivative, not fundamental, indexes, not measurements, first approximations at the best; and fallible indexes, dubious approximations at that, with much doubt added as to what, if anything, they are indexes or approximations of.
The kinds of laws and relations that economics has established are laws and relations about entities in models that presuppose causal mechanisms being atomistic and additive. When causal mechanisms operate in real-world social target systems they only do it in ever-changing and unstable combinations where the whole is more than a mechanical sum of parts. If economic regularities obtain, they do so (as a rule) only because we engineered them for that purpose. Outside man-made “nomological machines” they are rare, or even non-existent. Unfortunately, that also makes most of the achievements of econometrics – like most contemporary endeavours of economic theoretical modeling – rather useless.
Keynes basically argued that it was inadmissible to project history onto the future. Consequently an economic policy cannot presuppose that what has worked before will continue to do so in the future. That macroeconomic models could get hold of correlations between different “variables” was not enough. If they could not get at the causal structure that generated the data, they were not really “identified”. Dynamic stochastic general equilibrium (DSGE) macroeconomists – including “New Keynesians” – have drawn the conclusion that the solution to the problem of unstable relations is to construct models with clear microfoundations, where forward-looking optimizing individuals and robust, deep, behavioural parameters are seen to be stable even to changes in economic policies. This, however, is a dead end.
So here we are getting close to the heart of darkness in “New Keynesian” macroeconomics. When “New Keynesian” economists think that they can rigorously deduce the aggregate effects of (representative) actors with their reductionist microfoundational methodology, they have to turn a blind eye to the emergent properties that characterize all open social systems – including the economic system. The interaction between animal spirits, trust, confidence, institutions etc., cannot be deduced or reduced to a question answerable on the individual level. Macroeconomic structures and phenomena have to be analyzed also on their own terms. And although one may easily agree with Krugman’s emphasis on simple models, the simplifications used may have to be simplifications adequate for macroeconomics and not those adequate for microeconomics.
“New Keynesian” macromodels describe imaginary worlds using a combination of formal sign systems such as mathematics and ordinary language. The descriptions made are extremely thin and to a large degree disconnected from the specific contexts of the target system that one (usually) wants to (partially) represent. This is not by chance. These closed formalistic-mathematical theories and models are constructed for the purpose of being able to deliver purportedly rigorous deductions that may somehow be exportable to the target system. By analyzing a few causal factors in their “macroeconomic laboratories” they hope they can perform “thought experiments” and observe how these factors operate on their own and without impediments or confounders.
Unfortunately, this is not so. The reason is that economic causes never act in a socio-economic vacuum. Causes have to be set in a contextual structure to be able to operate. This structure has to take some form or other, but instead of incorporating structures that are true to the target system, the settings made in these macroeconomic models are rather based on formalistic mathematical tractability. In the models they appear as unrealistic assumptions, usually playing a decisive role in getting the deductive machinery to deliver “precise” and “rigorous” results. This, of course, makes exporting to real-world target systems problematic, since these models – as part of a deductivist covering-law tradition in economics – are thought to deliver general and far-reaching conclusions that are externally valid. But how can we be sure the lessons learned in these theories and models have external validity, when they are based on highly specific unrealistic assumptions? As a rule, the more specific and concrete the structures, the less generalizable the results. Admitting that we in principle can move from (partial) falsehoods in theories and models to truth in real-world target systems does not take us very far, unless a thorough explication of the relation between theory, model and the real-world target system is made. If models assume representative actors, rational expectations, market clearing and equilibrium, and we know that real people and markets cannot be expected to obey these assumptions, the warrant for bridging conclusions or hypotheses about causally relevant mechanisms or regularities from model to real world is obviously lacking. To have a deductive warrant for things happening in a closed model is no guarantee that they are preserved when applied to an open real-world target system.
In microeconomics we know that aggregation really presupposes homothetic and identical preferences, something that almost never exists in real economies. The results given by these assumptions are therefore not robust and do not capture the underlying mechanisms at work in any real economy. And models that are critically based on particular and odd assumptions – and are neither robust nor congruent to real-world economies – are of questionable value.
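A toy illustration (my own numbers, not a result from the literature) of why aggregation fails without identical homothetic preferences: hold total income fixed, merely redistribute it between two consumers with different preferences, and aggregate demand changes – so no representative consumer indexed by total income alone can reproduce it.

```python
# Two consumers, same good, same price; only their preferences differ.

def demand_a(income, price=1.0):
    # Cobb-Douglas consumer: always spends half of income on the good.
    return 0.5 * income / price

def demand_b(income, price=1.0):
    # Quasi-linear consumer: buys one unit whenever affordable (non-homothetic).
    return min(1.0 / price, income / price)

# Total income is 10 in both cases; only its distribution changes.
for income_a, income_b in [(5.0, 5.0), (9.0, 1.0)]:
    total = demand_a(income_a) + demand_b(income_b)
    print(f"incomes ({income_a}, {income_b}) -> aggregate demand {total:.1f}")
# Output: 3.5 versus 5.5 -- aggregate demand depends on the distribution of
# income, not just on its total, exactly the dependence that identical
# homothetic preferences would remove.
```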
Even if economies naturally presuppose individuals, it does not follow that we can infer or explain macroeconomic phenomena solely from knowledge of these individuals. Macroeconomics is to a large extent emergent and cannot be reduced to a simple summation of micro-phenomena. Moreover, as we have already argued, even these microfoundations aren’t immutable. The “deep parameters” of “New Keynesian” DSGE models – “tastes” and “technology” – are not really the bedrock of constancy that they believe (pretend) them to be.
So I cannot concur with Krugman – and other sorta-kinda “New Keynesians” – when they try to reduce Keynesian economics to “intertemporal maximization modified with sticky prices and a few other deviations”. As John Quiggin so aptly writes:
If there is one thing that distinguished Keynes’ economic analysis from that of his predecessors, it was his rejection of the idea of a unique full employment equilibrium to which a market economy will automatically return when it experiences a shock. Keynes argued that an economy could shift from a full-employment equilibrium to a persistent slump as the result of the interaction between objective macroeconomic variables and the subjective ‘animal spirits’ of investors and other decision-makers. It is this perspective that has been lost in the absorption of New Keynesian macro into the DSGE framework.
The purported strength of new-classical and new-Keynesian macroeconomics is that they have firm anchorage in preference-based microeconomics, and especially in the decisions taken by inter-temporal utility maximizing “forward-looking” individuals.
To some of us, however, this has come at too high a price. The almost quasi-religious insistence that macroeconomics has to have microfoundations – without ever presenting either ontological or epistemological justifications for this claim – has turned a blind eye to the weakness of the whole enterprise of trying to depict a complex economy based on an all-embracing representative actor equipped with superhuman knowledge, forecasting abilities and forward-looking rational expectations. It is as if – after having swallowed the sour grapes of the Sonnenschein-Mantel-Debreu theorem – these economists want to resurrect the omniscient Walrasian auctioneer in the form of all-knowing representative actors equipped with rational expectations and assumed to somehow know the true structure of our model of the world (how that could even be conceivable is beyond my imagination, given that the ongoing debate on microfoundations, if anything, shows that not even we, the economists, can come to agreement on a common model).
How could it go so wrong?
Following the greatest economic depression since the 1930s, the grand old man of modern economic growth theory, Nobel laureate Robert Solow, on July 20, 2010, gave a prepared statement on “Building a Science of Economics for the Real World” for a hearing in the U.S. Congress. According to Solow, modern macroeconomics has not only failed at solving present economic and financial problems, but is “bound” to fail. Building dynamic stochastic general equilibrium (DSGE) models on “assuming the economy populated by a representative agent” – consisting of “one single combination worker-owner-consumer-everything-else who plans ahead carefully and lives forever” – does not pass “the smell test: does this really make sense?” One cannot but concur in Solow’s surmise that a thoughtful person “faced with the thought that economic policy was being pursued on this basis, might reasonably wonder what planet he or she is on.”
Already in 2008 Solow had – in “The State of Macroeconomics” (Journal of Economic Perspectives 2008:243-249) – told us what he thought of microfounded modern macroeconomics:
[When modern macroeconomists] speak of macroeconomics as being firmly grounded in economic theory, we know what they mean … They mean a macroeconomics that is deduced from a model in which a single immortal consumer-worker-owner maximizes a perfectly conventional time-additive utility function over an infinite horizon, under perfect foresight or rational expectations, and in an institutional and technological environment that favors universal price-taking behavior …
No one would be driven to accept this story because of its obvious “rightness”. After all, a modern economy is populated by consumers, workers, pensioners, owners, managers, investors, entrepreneurs, bankers, and others, with different and sometimes conflicting desires, information, expectations, capacities, beliefs, and rules of behavior … To ignore all this in principle does not seem to qualify as mere abstraction – that is setting aside inessential details. It seems more like the arbitrary suppression of clues merely because they are inconvenient for cherished preconceptions …
Friends have reminded me that much effort of ‘modern macro’ goes into the incorporation of important deviations from the Panglossian assumptions … [But] a story loses legitimacy and credibility when it is spliced to a simple, extreme, and on the face of it, irrelevant special case. This is the core of my objection: adding some realistic frictions does not make it any more plausible that an observed economy is acting out the desires of a single, consistent, forward-looking intelligence …
It seems to me, therefore, that the claim that ‘modern macro’ somehow has the special virtue of following the principles of economic theory is tendentious and misleading … The other possible defense of modern macro is that, however special it may seem, it is justified empirically. This strikes me as a delusion …
So I am left with a puzzle, or even a challenge. What accounts for the ability of ‘modern macro’ to win hearts and minds among bright and enterprising academic economists? … There has always been a purist streak in economics that wants everything to follow neatly from greed, rationality, and equilibrium, with no ifs, ands, or buts … The theory is neat, learnable, not terribly difficult, but just technical enough to feel like ‘science’. Moreover it is practically guaranteed to give laissez-faire-type advice, which happens to fit nicely with the general turn to the political right that began in the 1970s and may or may not be coming to an end.
In case you’re still not convinced – here’s another masterpiece that essentially says it all:
So how did macroeconomics arrive at its current state?
The original impulse to look for better or more explicit micro foundations was probably reasonable. What emerged was not a good idea. The preferred model has a single representative consumer optimizing over infinite time with perfect foresight or rational expectations, in an environment that realizes the resulting plans more or less flawlessly through perfectly competitive forward-looking markets for goods and labor, and perfectly flexible prices and wages.
How could anyone expect a sensible short-to-medium-run macroeconomics to come out of that set-up? My impression is that this approach (which seems now to be the mainstream, and certainly dominates the journals, if not the workaday world of macroeconomics) has had no empirical success; but that is not the point here. I start from the presumption that we want macroeconomics to account for the occasional aggregative pathologies that beset modern capitalist economies, like recessions, intervals of stagnation, inflation, “stagflation,” not to mention negative pathologies like unusually good times. A model that rules out pathologies by definition is unlikely to help. It is always possible to claim that those “pathologies” are delusions, and the economy is merely adjusting optimally to some exogenous shock. But why should reasonable people accept this? …
What is needed for a better macroeconomics? [S]ome of the gross implausibilities … need to be eliminated. The clearest candidate is the representative agent. Heterogeneity is the essence of a modern economy. In real life we worry about the relations between managers and shareowners, between banks and their borrowers, between workers and employers, between venture capitalists and entrepreneurs, you name it. We worry about those interfaces because they can and do go wrong, with likely macroeconomic consequences. We know for a fact that heterogeneous agents have different and sometimes conflicting goals, different information, different capacities to process it, different expectations, different beliefs about how the economy works. Representative-agent models exclude all this landscape, though it needs to be abstracted and included in macro-models.
I also doubt that universal rational expectations provide a useful framework for macroeconomics …
Now here is a peculiar thing. When I was in advanced middle age, I suddenly woke up to the fact that my colleagues in macroeconomics, the ones I most admired, thought that the fundamental problem of macro theory was to understand how nominal events could have real consequences. This is just a way of stating some puzzle or puzzles about the sources for sticky wages and prices. This struck me as peculiar in two ways.
First of all, when I was even younger, nobody thought this was a puzzle. You only had to look around you to stumble on a hundred different reasons why various prices and factor prices should be much less than perfectly flexible. I once wrote, archly I admit, that the world has its reasons for not being Walrasian. Of course I soon realized that what macroeconomists wanted was a formal account of price stickiness that would fit comfortably into rational, optimizing models. OK, that is a harmless enough activity, especially if it is not taken too seriously. But price and wage stickiness themselves are not a major intellectual puzzle unless you insist on making them one.
Robert Solow, “Dumb and dumber in macroeconomics”
Of course there are alternatives. For those of us who have not forgotten the history of our discipline, and have not bought the freshwater nursery tale of Lucas et consortes that Keynes was not “serious thinking,” it is easy to see that there exists a macroeconomic tradition inspired by Keynes – one that has absolutely nothing to do with any New Synthesis or “New Keynesianism”.
Its ultimate building-block is the perception of genuine uncertainty and the fact that people often “simply do not know.” Real actors cannot know everything, and their acts and decisions cannot simply be summed or aggregated without the economist risking succumbing to “the fallacy of composition”.
Instead of basing macroeconomics on unreal and unwarranted generalizations of microeconomic behaviour and relations, it is far better to accept the ontological fact that the future to a large extent is uncertain, and to build macroeconomics on this fact of reality.
The real macroeconomic challenge is to accept uncertainty and still try to explain why economic transactions take place – instead of simply conjuring the problem away by assuming uncertainty to be reducible to stochastic risk. That is scientific cheating. And it has been going on for too long now.
The sooner we are intellectually honest and ready to admit that the modern macroeconomic microfoundationalist programme has reached the end of the road, the sooner we can redirect our macroeconomic aspirations and knowledge towards more fruitful endeavours.
Modeling a changing world (wonkish)
22 Apr, 2013 at 14:03 | Posted in Economics, Statistics & Econometrics | Comments Off on Modeling a changing world (wonkish)
“Covering law explanation” in economics and other social sciences
21 Apr, 2013 at 18:12 | Posted in Economics, Theory of Science & Methodology | 4 Comments

As a philosopher of science I find it interesting to note that many economists and other social scientists appeal to the requirement that, for explanations to count as scientific, the individual case must be possible to “subsume under a general law”. As the basic principle, a general law of the form “if A then B” is often invoked, together with the claim that if one can show in the individual case that A and B are present, then one has “explained” B.
This positivist-inductive view of science is, however, fundamentally untenable. Let me explain why.
According to a positivist-inductivist view of science, the knowledge that science possesses is proven knowledge. By starting from completely presupposition-free observations, an “unprejudiced scientific observer” can formulate observation statements from which scientific theories and laws can be derived. With the help of the principle of induction it then becomes possible to move from these singular observation statements to universal statements in the form of laws and theories referring to properties that hold always and everywhere. From these laws and theories science can derive various consequences with whose help one can explain and predict what happens. Through logical deduction, statements can be derived from other statements. The logic of research follows the schema observation – induction – deduction.
In less straightforward cases the scientist must carry out experiments in order to justify the inductions with whose help he establishes his scientific theories and laws. Experiment means – as Francis Bacon so vividly put it – putting nature on the rack and forcing it to answer our questions. With the help of a set of statements carefully describing the circumstances of the experiment – initial conditions – and the scientific laws, the scientist can deduce statements that explain or predict the phenomenon under investigation.
The hypothetico-deductive method for scientific explanation and prediction can be described in general terms as follows:
1 Laws and theories
2 Initial conditions
——————
3 Explanations and predictions
According to one of the foremost proponents of the hypothetico-deductive method – Carl Hempel – all scientific explanations have this form, which can also be expressed according to the schema below:
All A are B Premise 1
a is A Premise 2
——————————
a is B Conclusion
As an example, take the following everyday phenomenon:
Water heated to 100 degrees Celsius boils
This pot of water is heated to 100 degrees Celsius
———————————————————————–
This pot of water boils
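For those who want the deductive step spelled out formally, here is a minimal sketch in Lean (names purely illustrative): a universal law plus an initial condition yields the explanandum by universal instantiation and modus ponens – which also makes plain that all the real work lies in establishing premise 1.

```lean
-- Covering-law schema, minimal sketch (illustrative names only).
variable (Thing : Type) (HeatedTo100C Boils : Thing → Prop)

example (law : ∀ x, HeatedTo100C x → Boils x)  -- Premise 1: all A are B
        (pot : Thing)                          -- this pot of water
        (h : HeatedTo100C pot)                 -- Premise 2: a is A
        : Boils pot :=                         -- Conclusion: a is B
  law pot h
```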
The problem with the hypothetico-deductive method lies not so much in premise 2 or in the conclusion, but in the hypothesis itself, premise 1. It is this premise that has to be proven correct, and it is here that the inductive procedure comes in.
The most obvious weakness of the hypothetico-deductive method is the principle of induction itself. The most common justification of it runs as follows:
The principle of induction worked on occasion 1
The principle of induction worked on occasion 2
…
The principle of induction worked on occasion n
—————————————————–
The principle of induction always works
This is dubious, however, since the “proof” uses induction to justify induction. One cannot use singular statements about the validity of the principle of induction to derive a universal statement about the validity of the principle of induction.
Induction is supposed to play two roles. It is meant to make generalization possible, and it is also assumed to provide proof of the correctness of the conclusions. As the problem of induction shows, induction cannot perform both of these tasks. It can strengthen the probability of the conclusions (provided that the principle of induction is correct, which, however, cannot be proven without ending up in circular reasoning), but it does not tell us that they are necessarily true.
Another frequently noted weakness of the hypothetico-deductive method is that theories always precede observation statements and experiments, and that it is therefore wrong to claim that science begins with observations and experiments. In addition, observation statements and experiments cannot be assumed to be unproblematically reliable, and testing their validity requires reference to theory. That the theories in turn may be unreliable is, moreover, not primarily remedied by more observations and experiments, but by other and better theories. One may also object that induction in no way enables us to gain knowledge of the deeper-lying structures and mechanisms of reality, but only of empirical generalizations and regularities. In science it is usually the case that the explanation of events at one level is to be found in causes at another, deeper, level. The inductivist view of science ends up describing the main task of science as stating how something takes place, whereas other theories of science hold that the cardinal task of science must be to explain why it takes place.
As a consequence of the problems outlined above, more moderate empiricists have come to reason that since there is, as a rule, no logical procedure for discovering a law or a theory, one simply starts from laws and theories, from which a series of statements serving as explanations or predictions is deduced. Instead of investigating how the laws and theories of science have been arrived at, one tries to explain what a scientific explanation and prediction is, what role theories and models play in them, and how they are to be evaluated.
In the positivist (hypothetico-deductive, deductive-nomological) model of explanation, to explain means to subsume or derive specific phenomena from universal regularities. To explain a phenomenon (explanandum) is the same as deducing a description of it from a set of premises and universal laws of the type “If A, then B” (explanans). To explain simply means being able to bring something under a definite law-like regularity, which is why the approach is also sometimes called the “covering-law model”. But the theories are not to be used to explain specific individual phenomena; rather, they are to explain the universal regularities that enter into a hypothetico-deductive explanation. [There are problems with this conception even within the natural sciences, however. Many of the laws of natural science do not really say anything about what things do, but about what they tend to do. This is largely because the laws describe the behaviour of different parts rather than the phenomenon as a whole (except possibly in experimental situations). And many of the laws of natural science do not really apply to real entities, but only to fictional ones. Often this is a consequence of the use of mathematics within the particular science, and it means that its laws can only be exemplified in models (and not in reality).] The positivist model of explanation also comes in a weaker variant: the probabilistic variant, according to which to explain in principle means to show that the probability of an event B is very high if event A occurs. In the social sciences this variant dominates. From a methodological point of view, this probabilistic relativization of the positivist approach to explanation makes no great difference.
A consequence of accepting the hypothetico-deductive model of explanation is usually that one also accepts the so-called symmetry thesis. According to this thesis, the only difference between prediction and explanation is that in the former the explanans is assumed to be known and one tries to make a prediction, whereas in the latter the explanandum is assumed to be known and one tries to find initial conditions and laws from which the phenomenon under investigation can be derived.
One problem with the symmetry thesis, however, is that it does not take into account that causes can be confused with correlations. That the stork shows up at the same time as the human babies does not constitute an explanation of how children come into being.
Nor does the symmetry thesis take into account that causes can be sufficient but not necessary. That an individual suffering from cancer is run over does not make the cancer the cause of death. The cancer could have been the correct explanation of the individual’s death. But even if we could construct a medical law – in accordance with the deductivist model – saying that individuals with this particular type of cancer will die from it, the law still would not explain this individual’s death. The thesis is therefore simply not correct.
Finding a pattern is not the same as explaining something. Being told, in answer to the question of why the bus is late, that it usually is, does not constitute an acceptable explanation. Ontology and natural necessity must be part of a relevant answer, at least if what one seeks in an explanation is something more than “constant conjunctions of events”.
The original idea behind the positivist model of explanation was that it would provide a complete clarification of what an explanation is and show that an explanation not meeting its requirements was in fact a pseudo-explanation, that it would provide a method for testing explanations, and that it would show that explanations in accordance with the model are the goal of science. There are obviously good grounds for questioning all of these claims.
One important reason why this model has had such an impact in science is that it gave the impression of being able to explain things without having to use “metaphysical” causal concepts. Many scientists regard causality as a problematic concept that is best avoided. Simple, observable quantities are supposed to suffice. The problem is just that specifying these quantities and their possible correlations does not explain anything at all. That union representatives often show up in grey jackets and employer representatives in pinstriped suits does not explain why youth unemployment in Sweden is so high today. What is missing in these “explanations” is the adequacy, relevance and causal depth without which science risks becoming empty science fiction and model-play for the sake of playing.
Many social scientists seem convinced that for research to count as science it has to apply some variant of the hypothetico-deductive method. Out of reality’s complicated swarm of facts and events, one is supposed to distil a few common law-like correlations that can serve as explanations. Within parts of social science, this striving to reduce explanations of social phenomena to a few general principles or laws has been an important driving force. With the help of a few general assumptions, the aim is to explain what the whole macro-phenomenon we call a society amounts to. Unfortunately, no really tenable arguments are given for why the fact that a theory can explain different phenomena in a unified way should be a decisive reason for accepting or preferring it. Unification and adequacy are not the same thing.