Saint Ralph (personal)

30 July, 2016 at 22:58 | Posted in Economics, Varia | Leave a comment

 

One of my absolute favourites. A gem of a movie showing the importance of chasing miracles. Hallelujah!

Friedman’s ‘as if’ methodology — a total disaster

30 July, 2016 at 20:12 | Posted in Economics, Theory of Science & Methodology | 1 Comment

The explicit and implicit acceptance of Friedman’s as if methodology by mainstream economists has proved to be disastrous. The fundamental paradigm of economics that emerged from this methodology not only failed to anticipate the Crash of 2008 and its devastating effects; it has also proved incapable of producing a consensus within the discipline of economics as to the nature and cause of the economic stagnation we find ourselves in the midst of today. In attempting to understand why this is so, it is instructive to examine the nature of Friedman’s arguments within the context in which he formulated them, especially his argument that the truth of a theory’s assumptions is irrelevant so long as the inaccuracy of a theory’s predictions is cataloged and we argue as if those assumptions are true …

A scientific theory is, in fact, the embodiment of its assumptions. There can be no theory without assumptions since it is the assumptions embodied in a theory that provide, by way of reason and logic, the implications by which the subject matter of a scientific discipline can be understood and explained. These same assumptions provide, again, by way of reason and logic, the predictions that can be compared with empirical evidence to test the validity of a theory. It is a theory’s assumptions that are the premises in the logical arguments that give a theory’s explanations meaning, and to the extent those assumptions are false, the explanations the theory provides are meaningless no matter how logically powerful or mathematically sophisticated those explanations based on false assumptions may seem to be.

George Blackford

If scientific progress in economics – as Robert Lucas and other latter-day followers of Milton Friedman seem to think – lies in our ability to tell ‘better and better stories’, one would of course expect economics journals to be filled with articles supporting those stories with empirical evidence confirming their predictions. However, I would argue that the journals still show a striking and embarrassing paucity of empirical studies that (try to) substantiate these predictive claims. Equally amazing is how little is said about the relationship between the models and their real-world target systems. It is as though explicit discussion, argumentation and justification on this subject were not considered necessary.

If the ultimate criterion of success of a deductivist system is the extent to which it predicts and coheres with (parts of) reality, modern mainstream economics seems to be a hopeless misallocation of scientific resources. To focus scientific endeavours on proving things in models is a gross misapprehension of what an economic theory ought to be about. Deductivist models and methods disconnected from reality are not relevant for predicting, explaining or understanding real-world economies.

How evidence is treated in modern macroeconomics

29 July, 2016 at 17:16 | Posted in Economics | 1 Comment

‘New Keynesian’ macroeconomist Simon Wren-Lewis has a post on his blog discussing how evidence is treated in modern macroeconomics (emphasis added):

It is hard to get academic macroeconomists trained since the 1980s to address this question, because they have been taught that these models and techniques are fatally flawed because of the Lucas critique and identification problems. But DSGE models as a guide for policy are also fatally flawed because they are too simple. The unique property that DSGE models have is internal consistency. Take a DSGE model, and alter a few equations so that they fit the data much better, and you have what could be called a structural econometric model. It is internally inconsistent, but because it fits the data better it may be a better guide for policy.

Being able to model a credible world, a world that somehow could be considered real or similar to the real world, is not the same as investigating the real world. Even though all theories are false, since they simplify, they may still possibly serve our pursuit of truth. But then they cannot be unrealistic or false in just any way: the falsehood or unrealisticness has to be qualified (in terms of resemblance, relevance, etc.). At the very least, the minimalist demand on models in terms of credibility has to give way to a stronger epistemic demand of appropriate similarity and plausibility. One could of course also ask for a sensitivity or robustness analysis, but the credible world, even after having been tested for sensitivity and robustness, can still be far from reality – and unfortunately often in ways we know are important. Robustness of claims in a model does not per se give a warrant for exporting the claims to real-world target systems.

Questions of external validity are important more specifically also when it comes to microfounded DSGE macromodels. It can never be enough that these models somehow are regarded as internally consistent. One always also has to pose questions of consistency with the data. Internal consistency without external validity is worth nothing.

Yours truly and people like Tony Lawson have for many years been urging economists to pay attention to the ontological foundations of their assumptions and models. Sad to say, economists have not paid much attention — and so modern economics has become increasingly irrelevant to the understanding of the real world.

Within mainstream economics internal validity is still everything and external validity nothing. Why anyone should be interested in that kind of theory or model is beyond imagination. As long as mainstream economists do not come up with any export licenses for their theories and models to the real world in which we live, they really should not be surprised if people say that this is not science, but autism!

Since fully fledged experiments on a societal scale are, as a rule, prohibitively expensive, ethically indefensible or unmanageable, economic theorists have to substitute something else for experiment. To understand and explain relations between different entities in the real economy, the predominant strategy is to build models and make things happen in these “analogue-economy models” rather than to engineer things happening in real economies.

Formalistic deductive “Glasperlenspiel” can be very impressive and seductive. But in the realm of science it ought to be considered of little or no value to simply make claims about the model and lose sight of reality.

Neoclassical economics has long since given up on the real world and contents itself with proving things about thought-up worlds. Empirical evidence plays only a minor role in economic theory, where models largely function as a substitute for it. Hopefully humbled by the manifest failure of its theoretical pretences, the one-sided, almost religious, insistence on axiomatic-deductivist modeling as the only scientific activity worthy of pursuit in economics will give way to a methodological pluralism based on ontological considerations rather than on formalistic tractability.

To have valid evidence is not enough. What economics needs is sound evidence. Why? Simply because the premises of a valid argument do not have to be true, whereas a sound argument is not only valid but also builds on premises that are true. ‘All Greeks are immortal; Socrates is a Greek; therefore Socrates is immortal’ is perfectly valid, yet establishes nothing about the world, since its first premise is false. Aiming only for validity, without soundness, is setting the aspiration level of economics too low for developing a realist and relevant science.

Giving is loving (personal)

29 July, 2016 at 17:07 | Posted in Varia | Leave a comment

 

C H Hermansson (1917-2016)

28 July, 2016 at 21:59 | Posted in Politics & Society | Leave a comment

 

Yet another of twentieth-century Sweden’s political giants has passed away.

How true is Friedman’s permanent income hypothesis?

28 July, 2016 at 10:39 | Posted in Economics | 2 Comments

Noah Smith has an article up on Bloomberg View on Milton Friedman’s permanent income hypothesis (PIH). Noah argues that almost all modern macroeconomic theories are based on the PIH, which is used especially in formulating the consumption Euler equations that make up a vital part of ‘modern’ New Classical and New Keynesian macro models.

So, what’s the problem? Well, only that PIH according to Smith is ‘most certainly wrong.’

Chris Dillow has commented on Noah’s PIH critique, arguing that although Noah is right

he overstates the newness of the evidence against the PIH. In fact, we’ve known it was flawed ever since the early 80s. He also overstates the PIH’s intellectual hegemony. The standard UK undergraduate textbook says:

“One strong prediction of the simple PIH model … is that changes in income that are predictable from past information should have no effect on current consumption. But there is by now a considerable body of work on aggregate consumption data that suggests this is wrong … This is an important result for economic policy because it suggests that changes in income as a result, say, of tax changes can have a marked effect on consumption and hence on economic activity.” (Carlin and Soskice, Macroeconomics: Imperfections, Institutions and Policies, pp. 221-22)
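To see concretely what such a test amounts to, here is a minimal sketch of my own construction (with made-up numbers and a deliberately non-PIH, hand-to-mouth consumer), in the spirit of the classic ‘excess sensitivity’ tests: regress consumption growth on the part of income growth that was predictable from past information. Under the PIH the slope should be zero.

import numpy as np

rng = np.random.default_rng(42)
T, rho = 500, 0.8

# Toy data: income follows an AR(1), so part of each income change is
# predictable from last period's income
y = np.zeros(T)
for t in range(1, T):
    y[t] = rho * y[t - 1] + rng.normal()

# A hand-to-mouth consumer whose consumption tracks current income;
# a PIH consumer would smooth consumption instead
c = 0.9 * y + 0.1 * rng.normal(size=T)

dc = c[1:] - c[:-1]                 # realized consumption growth
dy_expected = (rho - 1.0) * y[:-1]  # E_{t-1}[y_t - y_{t-1}] under the AR(1)

# Under the PIH the slope on predictable income growth should be zero;
# here it comes out far from zero: 'excess sensitivity'
slope, intercept = np.polyfit(dy_expected, dc, 1)
print(f"slope on predictable income growth: {slope:.2f}")

The point of the exercise is only to show how directly the PIH’s central prediction can be confronted with data – which makes the paucity of such confrontations in the theoretical literature all the more striking.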

Dillow then goes on to argue that the PIH is not always wrong, that it is useful, and that it is basically all about ‘context’ – which, he says, reinforces what Dani Rodrik has written in Economics Rules (pp. 5-6):

Different social settings require different models. Economists are unlikely ever to uncover universal, general-purpose models. But in part because economists take the natural sciences as their example, they have a tendency to misuse models. They are prone to mistake a model for the model, relevant and applicable under all conditions. Economists must overcome this temptation.

Well, yours truly thinks that on this issue neither Noah nor Chris is critical enough.

Let me elaborate, starting with Carlin & Soskice and then moving on to Rodrik and his smorgasbord view of economic models.

Wendy Carlin’s and David Soskice’s macroeconomics textbook Macroeconomics: Institutions, Instability, and the Financial System (Oxford University Press 2015) relies more than most other intermediate macroeconomics textbooks on supplying the student with a ‘systematic way of thinking through problems’ with the help of formal-mathematical models.

They explicitly adopt a ‘New Keynesian’ framework, including price rigidities and adding a financial system to the usual neoclassical macroeconomic set-up. But although I find amendments like the latter an improvement, their methodological stance is definitely more difficult to swallow, especially their unproblematized acceptance of the need for macroeconomic microfoundations.

From the first page of the book they start to elaborate their preferred three-equation ‘New Keynesian’ macromodel, and within twenty-two pages they are already specifying the demand side with the help of the Permanent Income Hypothesis and its Euler equations.

But if people – not the representative agent – at least sometimes can’t help being off their labour supply curve, as in the real world, then what good are the hordes of Euler equations that you find ad nauseam in these ‘New Keynesian’ macromodels going to do us?

My doubts regarding macroeconomic modelers’ obsession with Euler equations boil down to this: as with so many other assumptions in ‘modern’ macroeconomics, Euler equations – and the PIH that they build on – simply don’t fit reality.

In the standard neoclassical consumption model – underpinning Carlin’s and Soskice’s microfounded macroeconomic modeling – people are basically portrayed as treating time as a dichotomous phenomenon, today versus the future, when contemplating decisions and acting. How much should one consume today and how much in the future? The Euler equation implies that the representative agent (consumer) is indifferent between consuming one more unit today and consuming it tomorrow. Further, in the Euler equation there is only one interest rate, equated to the money market rate as set by the central bank. The crux, however, is that – given almost any specification of the utility function – the interest rate implied by the Euler equation and the actual money market rate are often found to be strongly negatively correlated in the empirical literature!
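For readers who have not met it, the consumption Euler equation in its simplest textbook form – a generic sketch with standard notation, not necessarily the exact specification Carlin and Soskice use – reads

u'(c_t) = β(1 + r)·E_t[u'(c_{t+1})]

where β is the subjective discount factor, r the single real interest rate and u' marginal utility. Along the optimal path the representative consumer cannot gain by shifting a marginal unit of consumption between today and tomorrow, which is exactly the indifference claim referred to above.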

From a methodological perspective yours truly has to conclude that Carlin’s and Soskice’s microfounded macroeconomic model is a rather unimpressive attempt at legitimizing the use of fictitious idealizations – such as the PIH and Euler equations – for reasons that have more to do with model tractability than with a genuine interest in understanding and explaining features of real economies.

Re Dani Rodrik’s Economics Rules, there sure is much in the book I like and appreciate. But there is also a very disturbing apologetic tendency in the book to blame all the shortcomings on economists while depicting economics itself as a problem-free smorgasbord of models: if you just choose the appropriate model from the immense and varied smorgasbord, there is no problem. It is as if all problems in economics would be conjured away if only we could make the proper model selection.

Underlying Rodrik’s and other mainstream economists’ views on models is a picture of models as some kind of experiments. I’ve run into that view many times over the years when discussing mainstream economists’ ‘thought experimental’ obsession – and I still think it’s too vague and elusive to be helpful. Just repeating the view doesn’t provide the slightest reason to believe it.

Limiting model assumptions in economic science always have to be closely examined. If we are going to be able to show that the mechanisms or causes we isolate and handle in our models are stable, in the sense that they do not change when we ‘export’ them to our ‘target systems,’ we have to be able to show that they hold not only under ceteris paribus conditions. Otherwise they are of only limited value to our understanding, explanation or prediction of real economic systems.

Mainstream economists usually do not want to get hung up on the assumptions that their models build on. But it remains an undeniable fact that theoretical models built on piles of known-to-be-false assumptions – such as the PIH and the Euler equations that build on it – do not come anywhere close to being scientific explanations. On the contrary, they are untestable and hence totally worthless from the point of view of scientific relevance.

Conclusion: mainstream macroeconomics, building on the standard neoclassical consumption model with its Permanent Income Hypothesis and Euler equations, has to be replaced with something else. Preferably with something that is both real and relevant, and not only chosen for reasons of mathematical tractability.

Gary Becker’s big mistake

27 July, 2016 at 18:21 | Posted in Economics | 3 Comments

The econometrician Henri Theil once said “models are to be used but not to be believed.” I use the rational actor model for thinking about marginal changes but Gary Becker really believed the model. Once, at a dinner with Becker, I remarked that extreme punishment could lead to so much poverty and hatred that it could create blowback. Becker was having none of it. For every example that I raised of blowback, he responded with a demand for yet more punishment …

You can see the idea in his great paper, Crime and Punishment: An Economic Approach. In a famous section he argues that an optimal punishment system would combine a low probability of being punished with a high level of punishment if caught …

We have now tried that experiment and it didn’t work … Most spectacularly, the experiment with greater punishment led to more spending on crime control and many more people in prison …

In the economic theory, crime is in a criminal’s interest … But is crime always done out of interest? The rational actor model fits burglary, pick-pocketing and insider trading but lots of crime – including vandalism, arson, bar fights and many assaults – aren’t motivated by economic gain and perhaps not by any rational interest …

The rational choice theory was pushed beyond its limits, and in so doing not only was punishment pushed too far, we also lost sight of alternative policies that could reduce crime without the social disruption and injustice caused by mass incarceration.

Alex Tabarrok
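The logic behind the quoted ‘optimal punishment’ passage is easy to state. In a rough sketch of Becker’s argument – my simplification, assuming risk-neutral offenders – deterrence depends only on the expected punishment,

E[punishment] = p · F,

where p is the probability of being caught and F the penalty if caught. Any combination of p and F with the same product then deters equally well, and since raising F (harsher sentences) is assumed cheaper than raising p (more policing and courts), the cost-minimizing system pairs a low p with a draconian F. It is this expected-value calculus that the failed ‘experiment’ Tabarrok describes was built on.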

Interesting article, but although Becker’s belief in his model being a real picture of reality is rather gobsmacking, I’m far from convinced by Tabarrok’s instrumentalist use of the same kind of models.

The alarm that goes off in my brain when reading Becker is that ‘rational actor models,’ rather than being helpful for understanding real-world economic issues, sound more like an ill-advised plaidoyer for voluntarily taking on a methodological straitjacket of unsubstantiated and known-to-be-false assumptions.

The assumptions that Becker’s theory builds on are, as Tabarrok and almost all empirical tests of it have shown, totally unrealistic. That is, they are empirically false.

That being so, one could indeed wonder why on earth anyone should be interested in applying that kind of theory to real-world situations. Like so many other mainstream mathematical models taught to economics students today, it has next to nothing to do with the real world.

From a methodological point of view one can, of course, also wonder how we are supposed to evaluate tests of theories and models built on known-to-be-false assumptions. What is the point of such tests? What can they possibly teach us? From falsehoods anything logically follows.

Modern expected utility theory is a good example of this. Leaving the specification of preferences almost entirely unrestricted, every imaginable piece of evidence is safely made compatible with the all-embracing ‘theory’ – and a theory without informational content never risks being empirically tested and found falsified. Used in mainstream economics’ ‘thought experimental’ activities it may of course be very ‘handy’, but it is totally void of any empirical value.

Utility theory has, like so many other economic theories, morphed into an empty theory of everything. And a theory of everything explains nothing — just like Gary Becker’s ‘economics of everything,’ it only makes nonsense of economic science.

Using false assumptions, mainstream modelers can derive whatever conclusions they want. Want to show that ‘all economists consider austerity to be the right policy’? Just assume that ‘all economists are from Chicago’ and that ‘all economists from Chicago consider austerity to be the right policy.’ The conclusion follows by deduction – but is of course factually totally wrong. Models and theories built on that kind of reasoning are nothing but a pointless waste of time – of which Gary Becker’s ‘rational actor model’ is a superb example.

Greetings from Hamburg (personal)

26 July, 2016 at 19:50 | Posted in Varia | Leave a comment


On our way down to Heidelberg we spent a couple of days in Hamburg, visiting the Speicherstadt, the largest warehouse district in the world built on timber-pile foundations and, since 2015, a UNESCO World Heritage site.

Awesome.

Econometric forecasting — an assessment

26 July, 2016 at 16:44 | Posted in Statistics & Econometrics | Leave a comment

There have been over four decades of econometric research on business cycles … The formalization has undeniably improved the scientific strength of business cycle measures …

But the significance of the formalization becomes more difficult to identify when it is assessed from the applied perspective, especially when the success rate in ex-ante forecasts of recessions is used as a key criterion. The fact that the onset of the 2008 financial-crisis-triggered recession was predicted by only a few ‘Wise Owls’ … while missed by regular forecasters armed with various models serves us as the latest warning that the efficiency of the formalization might be far from optimal. Remarkably, not only has the performance of time-series data-driven econometric models been off the track this time, so has that of the whole bunch of theory-rich macro dynamic models developed in the wake of the rational expectations movement, which derived its fame mainly from exploiting the forecast failures of the macro-econometric models of the mid-1970s recession.

The limits of econometric forecasting have, as noted by Qin, been critically pointed out many times before.

Trygve Haavelmo — with the completion (in 1958) of the twenty-fifth volume of Econometrica — assessed the role of econometrics in the advancement of economics, and although mainly positive about the “repair work” and “clearing-up work” done, Haavelmo also found some grounds for despair:

We have found certain general principles which would seem to make good sense. Essentially, these principles are based on the reasonable idea that, if an economic model is in fact “correct” or “true,” we can say something a priori about the way in which the data emerging from it must behave. We can say something, a priori, about whether it is theoretically possible to estimate the parameters involved. And we can decide, a priori, what the proper estimation procedure should be … But the concrete results of these efforts have often been a seemingly lower degree of accuracy of the would-be economic laws (i.e., larger residuals), or coefficients that seem a priori less reasonable than those obtained by using cruder or clearly inconsistent methods.

There is the possibility that the more stringent methods we have been striving to develop have actually opened our eyes to recognize a plain fact: viz., that the “laws” of economics are not very accurate in the sense of a close fit, and that we have been living in a dream-world of large but somewhat superficial or spurious correlations.

And as the quote below shows, even Ragnar Frisch shared some of Haavelmo’s — and Keynes’s — doubts on the applicability of econometrics:

I have personally always been skeptical of the possibility of making macroeconomic predictions about the development that will follow on the basis of given initial conditions … I have believed that the analytical work will give higher yields – now and in the near future – if they become applied in macroeconomic decision models where the line of thought is the following: “If this or that policy is made, and these conditions are met in the period under consideration, probably a tendency to go in this or that direction is created”.

Ragnar Frisch

Econometrics may be an informative tool for research. But if its practitioners do not investigate and provide a justification for the credibility of the assumptions on which they erect their building, it will not fulfill its tasks. There is a gap between its aspirations and its accomplishments, and without more supportive evidence to substantiate its claims, critics will continue to consider its ultimate argument a mixture of rather unhelpful metaphors and metaphysics. Maintaining that economics is a science in the “true knowledge” business, I remain a skeptic of the pretences and aspirations of econometrics. So far, I cannot really see that it has yielded very much in terms of relevant, interesting economic knowledge. And, more specifically, when it comes to forecasting activities, the results have been bleak indeed.

The marginal return on its ever higher technical sophistication in no way makes up for the lack of serious under-labouring of its deeper philosophical and methodological foundations that Keynes complained about long ago. The rather one-sided emphasis on usefulness, and its concomitant instrumentalist justification, cannot hide the fact that the legions of probabilistic econometricians who consider it “fruitful to believe” in the possibility of treating unique economic data as the observable results of random drawings from an imaginary sampling of an imaginary population are skating on thin ice. After having analyzed some of its ontological and epistemological foundations, I cannot but conclude that econometrics on the whole has delivered neither “truth” nor robust forecasts. And I doubt whether that has ever been the intention of its main protagonists.

Our admiration for technical virtuosity should not blind us to the fact that we have to adopt a more cautious attitude towards probabilistic inference of causality in economic contexts. Science should help us penetrate to — as Keynes put it — “the true process of causation lying behind current events” and disclose “the causal forces behind the apparent facts.” We should look out for causal relations, but econometrics can never be more than a starting point in that endeavour, since econometric (statistical) explanations are not explanations in terms of mechanisms, powers, capacities or causes. Firmly stuck in an empiricist tradition, econometrics is only concerned with the measurable aspects of reality. But there is always the possibility that there are other variables – of vital importance, and although perhaps unobservable and non-additive not necessarily epistemologically inaccessible – that were not considered for the model. The variables that were included can hence never be guaranteed to be more than potential, rather than real, causes.

A rigorous application of econometric methods in economics really presupposes that the phenomena of our real world economies are ruled by stable causal relations between variables. A perusal of the leading econom(etr)ic journals shows that most econometricians still concentrate on fixed parameter models and that parameter-values estimated in specific spatio-temporal contexts are presupposed to be exportable to totally different contexts. To warrant this assumption one, however, has to convincingly establish that the targeted acting causes are stable and invariant so that they maintain their parametric status after the bridging. The endemic lack of predictive success of the econometric project indicates that this hope of finding fixed parameters is a hope for which there really is no other ground than hope itself.
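A minimal illustration – entirely my own construction, with made-up numbers – of this exportability problem: estimate a parameter in one spatio-temporal ‘regime’ and then apply it, as fixed-parameter econometrics implicitly does, to a regime where the underlying causal relation has shifted.

import numpy as np

rng = np.random.default_rng(1)
n = 200

# Toy data: the 'law' linking y to x changes mid-sample (slope 2 becomes -1)
x = rng.normal(size=2 * n)
beta = np.r_[np.full(n, 2.0), np.full(n, -1.0)]
y = beta * x + rng.normal(size=2 * n)

# Estimate the slope on the first regime only ...
b_hat, _ = np.polyfit(x[:n], y[:n], 1)

# ... then 'export' the estimate to the second regime
mse_in = np.mean((y[:n] - b_hat * x[:n]) ** 2)
mse_out = np.mean((y[n:] - b_hat * x[n:]) ** 2)
print(f"slope: {b_hat:.2f}  in-regime MSE: {mse_in:.2f}  exported MSE: {mse_out:.2f}")

No within-regime diagnostic warns the econometrician that the exported parameter is worthless here; the constancy of the estimated relation across contexts is precisely what has to be argued for, not presupposed.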

This is a more fundamental and radical problem than the celebrated ‘Lucas critique’ has suggested. It is not a question of whether deep parameters, absent on the macro level, exist in ‘tastes’ and ‘technology’ on the micro level. It goes deeper. Real-world social systems are not governed by stable causal mechanisms or capacities. This is the criticism that Keynes — in Essays in Biography — first launched against econometrics and inferential statistics as early as the 1920s:

The atomic hypothesis which has worked so splendidly in Physics breaks down in Psychics. We are faced at every turn with the problems of Organic Unity, of Discreteness, of Discontinuity – the whole is not equal to the sum of the parts, comparisons of quantity fail us, small changes produce large effects, the assumptions of a uniform and homogeneous continuum are not satisfied. Thus the results of Mathematical Psychics turn out to be derivative, not fundamental, indexes, not measurements, first approximations at the best; and fallible indexes, dubious approximations at that, with much doubt added as to what, if anything, they are indexes or approximations of.

The kinds of laws and relations that econom(etr)ics has established are laws and relations about entities in models that presuppose causal mechanisms to be atomistic and additive. When causal mechanisms operate in real-world social target systems, they do so only in ever-changing and unstable combinations where the whole is more than a mechanical sum of parts. If economic regularities obtain, they do so (as a rule) only because we engineered them for that purpose. Outside man-made “nomological machines” they are rare, or even non-existent. Unfortunately, that also makes most of the achievements of econometrics – like most of contemporary economic theoretical modeling – rather useless.

Austerity policies — nothing but kindergarten economics

25 July, 2016 at 18:51 | Posted in Economics | 2 Comments


I definitely recommend that everyone watch this well-argued interview with Steve Keen.

To many conservative and neoliberal politicians and economists there seems to be a spectre haunting the United States and Europe today — Keynesian ideas on governments pursuing policies raising effective demand and supporting employment. And some of the favourite arguments used among these Keynesophobics to fight it are the confidence argument and the doctrine of ‘sound finance.’

Is this witless crusade against economic reason new? Not at all!

It should be first stated that, although most economists are now agreed that full employment may be achieved by government spending, this was by no means the case even in the recent past. Among the opposers of this doctrine there were (and still are) prominent so-called ‘economic experts’ closely connected with banking and industry. This suggests that there is a political background in the opposition to the full employment doctrine, even though the arguments advanced are economic. That is not to say that people who advance them do not believe in their economics, poor though this is. But obstinate ignorance is usually a manifestation of underlying political motives …

Clearly, higher output and employment benefit not only workers but entrepreneurs as well, because the latter’s profits rise. And the policy of full employment outlined above does not encroach upon profits because it does not involve any additional taxation. The entrepreneurs in the slump are longing for a boom; why do they not gladly accept the synthetic boom which the government is able to offer them? It is this difficult and fascinating question with which we intend to deal in this article …

We shall deal first with the reluctance of the ‘captains of industry’ to accept government intervention in the matter of employment. Every widening of state activity is looked upon by business with suspicion, but the creation of employment by government spending has a special aspect which makes the opposition particularly intense. Under a laissez-faire system the level of employment depends to a great extent on the so-called state of confidence. If this deteriorates, private investment declines, which results in a fall of output and employment (both directly and through the secondary effect of the fall in incomes upon consumption and investment). This gives the capitalists a powerful indirect control over government policy: everything which may shake the state of confidence must be carefully avoided because it would cause an economic crisis. But once the government learns the trick of increasing employment by its own purchases, this powerful controlling device loses its effectiveness. Hence budget deficits necessary to carry out government intervention must be regarded as perilous. The social function of the doctrine of ‘sound finance’ is to make the level of employment dependent on the state of confidence.

Michal Kalecki, ‘Political Aspects of Full Employment’ (1943)

