Saint Ralph (personal)

30 July, 2016 at 22:58 | Posted in Economics, Varia | Comments Off on Saint Ralph (personal)

 

One of my absolute favourites. A gem of a movie showing the importance of chasing miracles. Hallelujah!


Friedman’s ‘as if’ methodology — a total disaster

30 July, 2016 at 20:12 | Posted in Economics, Theory of Science & Methodology | 1 Comment

The explicit and implicit acceptance of Friedman’s as if methodology by mainstream economists has proved to be disastrous. The fundamental paradigm of economics that emerged from this methodology not only failed to anticipate the Crash of 2008 and its devastating effects, this paradigm has proved incapable of producing a consensus within the discipline of economics as to the nature and cause of the economic stagnation we find ourselves in the midst of today. In attempting to understand why this is so it is instructive to examine the nature of Friedman’s arguments within the context in which he formulated them, especially his argument that the truth of a theory’s assumptions is irrelevant so long as the inaccuracy of a theory’s predictions are cataloged and we argue as if those assumptions are true …

A scientific theory is, in fact, the embodiment of its assumptions. There can be no theory without assumptions since it is the assumptions embodied in a theory that provide, by way of reason and logic, the implications by which the subject matter of a scientific discipline can be understood and explained. These same assumptions provide, again, by way of reason and logic, the predictions that can be compared with empirical evidence to test the validity of a theory. It is a theory’s assumptions that are the premises in the logical arguments that give a theory’s explanations meaning, and to the extent those assumptions are false, the explanations the theory provides are meaningless no matter how logically powerful or mathematically sophisticated those explanations based on false assumptions may seem to be.

George Blackford

If scientific progress in economics – as Robert Lucas and other latter-day followers of Milton Friedman seem to think – lies in our ability to tell ‘better and better stories,’ one would of course expect economics journals to be filled with articles supporting the stories with empirical evidence confirming the predictions. However, I would argue that the journals still show a striking and embarrassing paucity of empirical studies that (try to) substantiate these predictive claims. Equally amazing is how little one has to say about the relationship between the model and real-world target systems. It is as though explicit discussion, argumentation and justification on the subject isn’t considered necessary.

If the ultimate criterion of success of a deductivist system is the extent to which it predicts and coheres with (parts of) reality, modern mainstream economics seems to be a hopeless misallocation of scientific resources. To focus scientific endeavours on proving things in models is a gross misapprehension of what an economic theory ought to be about. Deductivist models and methods disconnected from reality are not relevant for predicting, explaining or understanding real-world economies.

How evidence is treated in modern macroeconomics

29 July, 2016 at 17:16 | Posted in Economics | 1 Comment

‘New Keynesian’ macroeconomist Simon Wren-Lewis has a post on his blog discussing how evidence is treated in modern macroeconomics (emphasis added):

It is hard to get academic macroeconomists trained since the 1980s to address this question, because they have been taught that these models and techniques are fatally flawed because of the Lucas critique and identification problems. But DSGE models as a guide for policy are also fatally flawed because they are too simple. The unique property that DSGE models have is internal consistency. Take a DSGE model, and alter a few equations so that they fit the data much better, and you have what could be called a structural econometric model. It is internally inconsistent, but because it fits the data better it may be a better guide for policy.

Being able to model a credible world, a world that somehow could be considered real or similar to the real world, is not the same as investigating the real world. Even though all theories are false, since they simplify, they may still possibly serve our pursuit of truth. But then they cannot be unrealistic or false in any way. The falsehood or unrealisticness has to be qualified (in terms of resemblance, relevance, etc.). At the very least, the minimalist demand on models in terms of credibility has to give way to a stronger epistemic demand of appropriate similarity and plausibility. One could of course also ask for a sensitivity or robustness analysis, but the credible world, even after having tested it for sensitivity and robustness, can still be a long way from reality – and unfortunately often in ways we know are important. Robustness of claims in a model does not per se give a warrant for exporting the claims to real-world target systems.

Questions of external validity are important more specifically also when it comes to microfounded DSGE macromodels. It can never be enough that these models somehow are regarded as internally consistent. One always also has to pose questions of consistency with the data. Internal consistency without external validity is worth nothing.

Yours truly and people like Tony Lawson have for many years been urging economists to pay attention to the ontological foundations of their assumptions and models. Sad to say, economists have not paid much attention — and so modern economics has become increasingly irrelevant to the understanding of the real world.

Within mainstream economics internal validity is still everything and external validity nothing. Why anyone should be interested in those kinds of theories and models is beyond imagination. As long as mainstream economists do not come up with any export licenses for their theories and models to the real world in which we live, they really should not be surprised if people say that this is not science, but autism!

Since fully-fledged experiments on a societal scale as a rule are prohibitively expensive, ethically indefensible or unmanageable, economic theorists have to substitute experimenting with something else. To understand and explain relations between different entities in the real economy the predominant strategy is to build models and make things happen in these “analogue-economy models” rather than engineering things happening in real economies.

Formalistic deductive “Glasperlenspiel” can be very impressive and seductive. But in the realm of science it ought to be considered of little or no value to simply make claims about the model and lose sight of reality.

Neoclassical economics has long since given up on the real world and contents itself with proving things about thought-up worlds. Empirical evidence only plays a minor role in economic theory, where models largely function as a substitute for empirical evidence. Hopefully humbled by the manifest failure of its theoretical pretences, the one-sided, almost religious, insistence on axiomatic-deductivist modeling as the only scientific activity worthy of pursuing in economics will give way to methodological pluralism based on ontological considerations rather than formalistic tractability.

To have valid evidence is not enough. What economics needs is sound evidence. Why? Simply because the premises of a valid argument do not have to be true, whereas a sound argument is not only valid but also builds on premises that are true. Aiming only for validity, without soundness, is setting economics’ aspiration level too low for developing a realist and relevant science.

Giving is loving (personal)

29 July, 2016 at 17:07 | Posted in Varia | Comments Off on Giving is loving (personal)

 

C H Hermansson (1917-2016)

28 July, 2016 at 21:59 | Posted in Politics & Society | Comments Off on C H Hermansson (1917-2016)

 

Yet another of the political giants of twentieth-century Sweden has passed away.

How true is Friedman’s permanent income hypothesis?

28 July, 2016 at 10:39 | Posted in Economics | 2 Comments

Noah Smith has an article up on Bloomberg View on Milton Friedman’s permanent income hypothesis (PIH). Noah argues that almost all modern macroeconomic theories are based on the PIH, which is especially used in formulating the consumption Euler equations that make up a vital part of ‘modern’ New Classical and New Keynesian macro models.

So, what’s the problem? Well, only that PIH according to Smith is ‘most certainly wrong.’

Chris Dillow has commented on Noah’s PIH critique, arguing that although Noah is right

he overstates the newness of the evidence against the PIH. In fact, we’ve known it was flawed ever since the early 80s. He also overstates the PIH’s intellectual hegemony. The standard UK undergraduate textbook says:

“One strong prediction of the simple PIH model … is that changes in income that are predictable from past information should have no effect on current consumption. But there is by now a considerable body of work on aggregate consumption data that suggests this is wrong…This is an important result for economic policy because it suggests that changes in income as a result, say, of tax changes can have a marked effect on consumption and hence on economic activity.” (Carlin and Soskice, Macroeconomics: Imperfections, Institutions and Policies, p 221-22)”

Dillow then goes on to argue that the PIH is not always wrong, that it is useful, and that it is basically all about ‘context,’ and that this reinforces what Dani Rodrik has written in Economics Rules (pp. 5-6):

Different social settings require different models. Economists are unlikely ever to uncover universal, general-purpose models. But in part because economists take the natural sciences as their example, they have a tendency to misuse models. They are prone to mistake a model for the model, relevant and applicable under all conditions. Economists must overcome this temptation.

Well, yours truly thinks that on this issue neither Noah nor Chris is critical enough.

Let me elaborate, and start with Carlin & Soskice and then move over to Rodrik and his smorgasbord view on economic models.

Wendy Carlin’s and David Soskice‘s macroeconomics textbook Macroeconomics: Institutions, Instability, and the Financial System (Oxford University Press 2015) builds more than most other intermediate macroeconomics textbooks on supplying the student with a ‘systematic way of thinking through problems’ with the help of formal-mathematical models.

They explicitly adopt a ‘New Keynesian’ framework, including price rigidities and adding a financial system to the usual neoclassical macroeconomic set-up. But although I find things like the latter amendment an improvement, their methodological stance is definitely more difficult to swallow, especially their unproblematized acceptance of the need for macroeconomic microfoundations.

From the first page of the book they start to elaborate their preferred three-equation ‘New Keynesian’ macromodel. And after twenty-two pages they have already come to specifying the demand side with the help of the Permanent Income Hypothesis and its Euler equations.

But if people — not the representative agent — at least sometimes can’t help being off their labour supply curve — as in the real world — then how are the hordes of Euler equations that you find ad nauseam in these ‘New Keynesian’ macromodels going to help us?

My doubts regarding macroeconomic modelers’ obsession with Euler equations basically come down to the fact that, as with so many other assumptions in ‘modern’ macroeconomics, Euler equations, and the PIH that they build on, don’t fit reality.

In the standard neoclassical consumption model — underpinning Carlin’s and Soskice’s microfounded macroeconomic modeling — people are basically portrayed as treating time as a dichotomous phenomenon, today versus the future, when contemplating making decisions and acting. How much should one consume today and how much in the future? The Euler equation used implies that the representative agent (consumer) is indifferent between consuming one more unit today or instead consuming it tomorrow. Further, in the Euler equation we only have one interest rate, equated to the money market rate as set by the central bank. The crux is, however, that — given almost any specification of the utility function – the money market rate and the interest rate implied by the Euler equation are actually often found to be strongly negatively correlated in the empirical literature!
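For reference, the Euler equation at issue, written in its generic textbook form (a sketch in standard notation rather than Carlin’s and Soskice’s exact specification; $\beta$ is the subjective discount factor and $r_t$ the real interest rate), is

$u'(c_t) = \beta(1+r_t)\,\mathrm{E}_t[u'(c_{t+1})]$,

which with logarithmic utility, $u(c) = \log c$, and no uncertainty reduces to

$c_{t+1}/c_t = \beta(1+r_t)$.

The model thus ties consumption growth directly to the single policy-determined interest rate, and it is precisely this tight link that the empirical studies alluded to above fail to find.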

From a methodological perspective yours truly has to conclude that Carlin’s and Soskice’s microfounded macroeconomic model is a rather unimpressive attempt at legitimizing the use of fictitious idealizations — such as the PIH and Euler equations — for reasons that have more to do with model tractability than with a genuine interest in understanding and explaining features of real economies.

Re Dani Rodrik‘s Economics Rules, there sure is much in the book I like and appreciate. But there is also a very disturbing apologetic tendency in the book to blame all of the shortcomings on the economists and to depict economics itself as a problem-free smorgasbord collection of models. If you just choose the appropriate model from the immense and varied smorgasbord there’s no problem. It is as if all problems in economics were conjured away if only we could make the proper model selection.

Underlying Rodrik’s and other mainstream economists’ views on models is a picture of models as some kind of experiments. I’ve run into that view many times over the years when having discussions with mainstream economists on their ‘thought experimental’ obsession — and I still think it’s too vague and elusive to be helpful. Just repeating the view doesn’t provide the slightest reason to believe it.

The limiting assumptions of economic models always have to be closely examined. If we are going to be able to show that the mechanisms or causes that we isolate and handle in our models are stable — in the sense that they do not change when we ‘export’ them to our ‘target systems’ — we have to be able to show that they do not hold only under ceteris paribus conditions, since in that case they are of only limited value for our understanding, explanation or prediction of real economic systems.

Mainstream economists usually do not want to get hung up on the assumptions that their models build on. But it is still an undeniable fact that theoretical models built on piles of assumptions known to be false — such as the PIH and the Euler equations that build on it — do not even get close to being scientific explanations. On the contrary. They are untestable and hence totally worthless from the point of view of scientific relevance.

Conclusion: mainstream macroeconomics, building on the standard neoclassical consumption model with its Permanent Income Hypothesis and Euler equations, has to be replaced with something else. Preferably with something that is both real and relevant, and not only chosen for reasons of mathematical tractability.

Gary Becker’s big mistake

27 July, 2016 at 18:21 | Posted in Economics | 3 Comments

The econometrician Henri Theil once said “models are to be used but not to be believed.” I use the rational actor model for thinking about marginal changes but Gary Becker really believed the model. Once, at a dinner with Becker, I remarked that extreme punishment could lead to so much poverty and hatred that it could create blowback. Becker was having none of it. For every example that I raised of blowback, he responded with a demand for yet more punishment …

You can see the idea in his great paper, Crime and Punishment: An Economic Approach. In a famous section he argues that an optimal punishment system would combine a low probability of being punished with a high level of punishment if caught …

We have now tried that experiment and it didn’t work … Most spectacularly, the experiment with greater punishment led to more spending on crime control and many more people in prison …

In the economic theory, crime is in a criminal’s interest … But is crime always done out of interest? The rational actor model fits burglary, pick-pocketing and insider trading but lots of crime–including vandalism, arson, bar fights and many assaults–aren’t motivated by economic gain and perhaps not by any rational interest …

The rational choice theory was pushed beyond its limits and in so doing not only was punishment pushed too far we also lost sight of alternative policies that could reduce crime without the social disruption and injustice caused by mass incarceration.

Alex Tabarrok

Interesting article, but although Becker’s belief in his model being a real picture of reality is rather gobsmacking, I’m far from convinced by Tabarrok’s instrumentalist use of the same kind of models.

The alarm that goes off in my brain when reading Becker is that ‘rational actor models,’ rather than being helpful for understanding real-world economic issues, sound more like an ill-advised plaidoyer for voluntarily taking on a methodological straitjacket of unsubstantiated assumptions known to be false.

The assumptions that Becker’s theory builds on are, as Tabarrok and almost all empirical testing of it has shown, totally unrealistic. That is, they are empirically false.

That said, one could indeed wonder why on earth anyone should be interested in applying that kind of theory to real-world situations. Like so many other mainstream mathematical models taught to economics students today, it has next to nothing to do with the real world.

From a methodological point of view one can, of course, also wonder how we are supposed to evaluate tests of theories and models built on assumptions known to be false. What is the point of such tests? What can those tests possibly teach us? From falsehoods anything logically follows.

Modern expected utility theory is a good example of this. Leaving the specification of preferences almost without any restrictions whatsoever, every piece of imaginable evidence is safely made compatible with the all-embracing ‘theory’ — and a theory without informational content never risks being empirically tested and found falsified. Used in mainstream economics’ ‘thought experimental’ activities, it may of course be very ‘handy,’ but it is totally void of any empirical value.

Utility theory has, like so many other economic theories, morphed into an empty theory of everything. And a theory of everything explains nothing — just like Gary Becker’s ‘economics of everything,’ it only makes nonsense of economic science.

Using false assumptions, mainstream modelers can derive whatever conclusions they want. Wanting to show that ‘all economists consider austerity to be the right policy,’ just assume, e.g., ‘all economists are from Chicago’ and ‘all economists from Chicago consider austerity to be the right policy.’ The conclusion follows by deduction — but is of course factually wrong. Models and theories built on that kind of reasoning are nothing but a pointless waste of time — of which Gary Becker’s ‘rational actor model’ is a superb example.

Greetings from Hamburg (personal)

26 July, 2016 at 19:50 | Posted in Varia | Comments Off on Greetings from Hamburg (personal)


On our way down to Heidelberg we spent a couple of days in Hamburg, visiting Speicherstadt, the largest warehouse district in the world built on timber-pile foundations, and since 2015 part of the UNESCO World Heritage list.

Awesome.

Econometric forecasting — an assessment

26 July, 2016 at 16:44 | Posted in Statistics & Econometrics | Comments Off on Econometric forecasting — an assessment

There have been over four decades of econometric research on business cycles … The formalization has undeniably improved the scientific strength of business cycle measures …

But the significance of the formalization becomes more difficult to identify when it is assessed from the applied perspective, especially when the success rate in ex-ante forecasts of recessions is used as a key criterion. The fact that the onset of the 2008 financial-crisis-triggered recession was predicted by only a few ‘Wise Owls’ … while missed by regular forecasters armed with various models serves us as the latest warning that the efficiency of the formalization might be far from optimal. Remarkably, not only has the performance of time-series data-driven econometric models been off the track this time, so has that of the whole bunch of theory-rich macro dynamic models developed in the wake of the rational expectations movement, which derived its fame mainly from exploiting the forecast failures of the macro-econometric models of the mid-1970s recession.

The limits of econometric forecasting have, as noted by Qin, been critically pointed out many times before.

Trygve Haavelmo — with the completion (in 1958) of the twenty-fifth volume of Econometrica — assessed the role of econometrics in the advancement of economics, and although mainly positive about the “repair work” and “clearing-up work” done, Haavelmo also found some grounds for despair:

We have found certain general principles which would seem to make good sense. Essentially, these principles are based on the reasonable idea that, if an economic model is in fact “correct” or “true,” we can say something a priori about the way in which the data emerging from it must behave. We can say something, a priori, about whether it is theoretically possible to estimate the parameters involved. And we can decide, a priori, what the proper estimation procedure should be … But the concrete results of these efforts have often been a seemingly lower degree of accuracy of the would-be economic laws (i.e., larger residuals), or coefficients that seem a priori less reasonable than those obtained by using cruder or clearly inconsistent methods.

There is the possibility that the more stringent methods we have been striving to develop have actually opened our eyes to recognize a plain fact: viz., that the “laws” of economics are not very accurate in the sense of a close fit, and that we have been living in a dream-world of large but somewhat superficial or spurious correlations.

And as the quote below shows, even Ragnar Frisch shared some of Haavelmo’s — and Keynes’s — doubts on the applicability of econometrics:

I have personally always been skeptical of the possibility of making macroeconomic predictions about the development that will follow on the basis of given initial conditions … I have believed that the analytical work will give higher yields – now and in the near future – if they become applied in macroeconomic decision models where the line of thought is the following: “If this or that policy is made, and these conditions are met in the period under consideration, probably a tendency to go in this or that direction is created”.

Ragnar Frisch

Econometrics may be an informative tool for research. But if its practitioners do not investigate and make an effort to provide a justification for the credibility of the assumptions on which they erect their building, it will not fulfil its tasks. There is a gap between its aspirations and its accomplishments, and without more supportive evidence to substantiate its claims, critics will continue to consider its ultimate argument a mixture of rather unhelpful metaphors and metaphysics. Maintaining that economics is a science in the “true knowledge” business, I remain a skeptic of the pretences and aspirations of econometrics. So far, I cannot really see that it has yielded very much in terms of relevant, interesting economic knowledge. And, more specifically, when it comes to forecasting activities, the results have been bleak indeed.

The marginal return on its ever higher technical sophistication in no way makes up for the lack of serious under-labouring of its deeper philosophical and methodological foundations that Keynes already complained about. The rather one-sided emphasis on usefulness and its concomitant instrumentalist justification cannot hide the fact that the legions of probabilistic econometricians who give supportive evidence for considering it “fruitful to believe” in the possibility of treating unique economic data as the observable results of random drawings from an imaginary sampling of an imaginary population are skating on thin ice. After having analyzed some of its ontological and epistemological foundations, I cannot but conclude that econometrics on the whole has not delivered “truth,” nor robust forecasts. And I doubt that has ever been the intention of its main protagonists.

Our admiration for technical virtuosity should not blind us to the fact that we have to have a more cautious attitude towards probabilistic inference of causality in economic contexts. Science should help us penetrate to — as Keynes put it — “the true process of causation lying behind current events” and disclose “the causal forces behind the apparent facts.” We should look out for causal relations, but econometrics can never be more than a starting point in that endeavour, since econometric (statistical) explanations are not explanations in terms of mechanisms, powers, capacities or causes. Firmly stuck in an empiricist tradition, econometrics is only concerned with the measurable aspects of reality. But there is always the possibility that there are other variables – of vital importance, and although perhaps unobservable and non-additive not necessarily epistemologically inaccessible – that were not considered for the model. The variables that were included can hence never be guaranteed to be more than potential causes, not real causes.

A rigorous application of econometric methods in economics really presupposes that the phenomena of our real-world economies are ruled by stable causal relations between variables. A perusal of the leading econom(etr)ic journals shows that most econometricians still concentrate on fixed-parameter models and that parameter values estimated in specific spatio-temporal contexts are presupposed to be exportable to totally different contexts. To warrant this assumption one, however, has to convincingly establish that the targeted acting causes are stable and invariant so that they maintain their parametric status after the bridging. The endemic lack of predictive success of the econometric project indicates that this hope of finding fixed parameters is a hope for which there really is no other ground than hope itself.
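To see the point in the simplest possible way, here is a small simulated illustration of my own (a toy sketch, not a result from the econometric literature discussed above): a ‘law-like’ regression coefficient estimated in one sub-period exports badly to another once the underlying relation has shifted.

```python
import numpy as np

rng = np.random.default_rng(42)

def ols_slope(x, y):
    """Ordinary least squares slope of y on x (with an intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

# Regime 1: consumption responds to income with a marginal propensity of 0.8.
income_1 = rng.normal(100, 10, 200)
consumption_1 = 10 + 0.8 * income_1 + rng.normal(0, 2, 200)

# Regime 2: the relation has shifted (say, after a crisis) to 0.5.
income_2 = rng.normal(100, 10, 200)
consumption_2 = 40 + 0.5 * income_2 + rng.normal(0, 2, 200)

print(f"estimated MPC, regime 1: {ols_slope(income_1, consumption_1):.2f}")  # ~0.8
print(f"estimated MPC, regime 2: {ols_slope(income_2, consumption_2):.2f}")  # ~0.5

# A parameter estimated in one spatio-temporal context is simply not
# exportable to the other: forecasts built on the regime-1 estimate go
# systematically wrong in regime 2.
```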

This is a more fundamental and radical problem than the celebrated “Lucas critique” has suggested. It is not a question of whether deep parameters, absent at the macro level, exist in “tastes” and “technology” at the micro level. It goes deeper. Real-world social systems are not governed by stable causal mechanisms or capacities. This is the criticism that Keynes — in Essays in Biography — first launched against econometrics and inferential statistics as early as the 1920s:

The atomic hypothesis which has worked so splendidly in Physics breaks down in Psychics. We are faced at every turn with the problems of Organic Unity, of Discreteness, of Discontinuity – the whole is not equal to the sum of the parts, comparisons of quantity fails us, small changes produce large effects, the assumptions of a uniform and homogeneous continuum are not satisfied. Thus the results of Mathematical Psychics turn out to be derivative, not fundamental, indexes, not measurements, first approximations at the best; and fallible indexes, dubious approximations at that, with much doubt added as to what, if anything, they are indexes or approximations of.

The kinds of laws and relations that econom(etr)ics has established are laws and relations about entities in models that presuppose causal mechanisms to be atomistic and additive. When causal mechanisms operate in real-world social target systems they do so only in ever-changing and unstable combinations where the whole is more than a mechanical sum of parts. If economic regularities obtain, they do so (as a rule) only because we engineered them for that purpose. Outside man-made “nomological machines” they are rare, or even non-existent. Unfortunately, that also makes most of the achievements of econometrics – as with most of contemporary endeavours of economic theoretical modeling – rather useless.

Austerity policies — nothing but kindergarten economics

25 July, 2016 at 18:51 | Posted in Economics | 2 Comments


I definitely recommend that everyone watch this well-argued interview with Steve Keen.

To many conservative and neoliberal politicians and economists there seems to be a spectre haunting the United States and Europe today — Keynesian ideas on governments pursuing policies raising effective demand and supporting employment. And some of the favourite arguments used among these Keynesophobics to fight it are the confidence argument and the doctrine of ‘sound finance.’

Is this witless crusade against economic reason new? Not at all!

It should be first stated that, although most economists are now agreed that full employment may be achieved by government spending, this was by no means the case even in the recent past. Among the opposers of this doctrine there were (and still are) prominent so-called ‘economic experts’ closely connected with banking and industry. This suggests that there is a political background in the opposition to the full employment doctrine, even though the arguments advanced are economic. That is not to say that people who advance them do not believe in their economics, poor though this is. But obstinate ignorance is usually a manifestation of underlying political motives …

Clearly, higher output and employment benefit not only workers but entrepreneurs as well, because the latter’s profits rise. And the policy of full employment outlined above does not encroach upon profits because it does not involve any additional taxation. The entrepreneurs in the slump are longing for a boom; why do they not gladly accept the synthetic boom which the government is able to offer them? It is this difficult and fascinating question with which we intend to deal in this article …

We shall deal first with the reluctance of the ‘captains of industry’ to accept government intervention in the matter of employment. Every widening of state activity is looked upon by business with suspicion, but the creation of employment by government spending has a special aspect which makes the opposition particularly intense. Under a laissez-faire system the level of employment depends to a great extent on the so-called state of confidence. If this deteriorates, private investment declines, which results in a fall of output and employment (both directly and through the secondary effect of the fall in incomes upon consumption and investment). This gives the capitalists a powerful indirect control over government policy: everything which may shake the state of confidence must be carefully avoided because it would cause an economic crisis. But once the government learns the trick of increasing employment by its own purchases, this powerful controlling device loses its effectiveness. Hence budget deficits necessary to carry out government intervention must be regarded as perilous. The social function of the doctrine of ‘sound finance’ is to make the level of employment dependent on the state of confidence.

Michal Kalecki, ‘Political Aspects of Full Employment’ (1943)

Good reasons to worry about inequalities

25 July, 2016 at 13:22 | Posted in Economics, Politics & Society | 1 Comment

Focussing upon inequality statistics … misses an important point. What matters is not just the level of income inequality, but how that inequality arose. A free market society in which high incomes arise from the free choices of consenting adults – as in Robert Nozick’s Wilt Chamberlain parable – might have the same Gini coefficient as a crony capitalist society. But they are two different things. A good reason to be worried about current inequality – even if it hasn’t changed – is that it is a symptom of market failures such as corporate welfare, regulatory capture or the implicit subsidy to banks.

In this context, what matters is not just inequalities of income but inequalities of power. Top footballers and top bankers might be earning similar sums, but one’s salary is the product of market forces and the other of a tax-payer subsidy. The freelancer on £30,000 who’s worrying where his next contract is coming from has similar income to the bullying middle managers who created intolerable working conditions at (for example) Sports Direct. But they have very different degrees of economic power. And the low income that results from having to take a lousy job where your wages are topped up by tax credits gives you much less power than the same income that would come from a basic income and the freer choice to take or leave a low wage job.

My point here is a simple one. There are very good reasons why we should worry about inequality – not just leftists but also rightists who want freer markets and “bourgeois” virtues. Focusing only upon the stability of the Gini coefficient is a form of statistical fetishism which overlooks important questions.

Chris Dillow

Lucas-Rapping and ‘New Keynesian’ models of unemployment

25 July, 2016 at 12:06 | Posted in Economics | 2 Comments

Lucas and Rapping (1969) claim that cyclical increases in unemployment occur when workers quit their jobs because wages or salaries fall below expectations …

According to this explanation, when wages are unusually low, people become unemployed in order to enjoy free time, substituting leisure for income at a time when they lose the least income …

According to the theory, quits into unemployment increase during recessions, whereas historically quits decrease sharply and roughly half of unemployed workers become jobless because they are laid off … During the recession I studied, people were even afraid to change jobs because new ones might prove unstable and lead to unemployment …

If wages and salaries hardly ever fall, the intertemporal substitution theory is widely applicable only if the unemployed prefer jobless leisure to continued employment at their old pay. However, the attitude and circumstances of the unemployed are not consistent with their having made this choice …

In real business cycle theory, unemployment is interpreted as leisure optimally selected by workers, as in the Lucas-Rapping model. It has proved difficult to construct business cycle models consistent with this assumption and with real wage fluctuations as small as they are in reality, relative to fluctuations in employment.

Truman F. Bewley

This is, of course, only what you would expect of New Classical Chicago economists.

But sadly enough this extraterrestrial view of unemployment is actually shared by so-called New Keynesians, whose microfounded dynamic stochastic general equilibrium models cannot even incorporate such a basic fact of reality as involuntary unemployment!

Of course, working with microfounded representative-agent models, this should come as no surprise. If one representative agent is employed, all representative agents are. The kind of unemployment that occurs is voluntary, since it is only adjustments of the hours of work that these optimizing agents make to maximize their utility. In this model world, unemployment is always an optimal response to changes in labour market conditions. Hence, unemployment is totally voluntary. To be unemployed is something one optimally chooses to be.

If substantive questions about the real world are being posed, it is the formalistic-mathematical representations utilized to analyze them that have to match reality, not the other way around.

To Keynes this was self-evident. But obviously not so to New Classical and ‘New Keynesian’ economists.

Thorbjörn Fälldin — a giant has passed away

25 July, 2016 at 10:48 | Posted in Politics & Society | Comments Off on Thorbjörn Fälldin — a giant has passed away


Thorbjörn Fälldin, Sweden’s former prime minister and leader of the Centre Party (Centerpartiet), died in his home in Ramvik on Saturday evening, 90 years old.

During the years of Thorbjörn Fälldin and his predecessor Gunnar Hedlund, the Centre Party still represented a genuine political force in our society.

After Fälldin the Centre Party turned the page, and eventually we got Maud Olofsson.

With Olofsson the party’s soul was sold out, and its decline into neoliberalism began.

Then came Annie Lööf. And a once respected party was run completely into the ground.

That a neoliberal, self-opinionated, platitude-spouting, Ayn Rand and Margaret Thatcher worshipping career politician today can sit and rule over a party once led by giants like Hedlund and Fälldin is utterly monstrous. Incomprehensible. And sad.

New Keynesian unemployment — a paid vacation essentially!

24 July, 2016 at 10:45 | Posted in Economics | Comments Off on New Keynesian unemployment — a paid vacation essentially!

Franco Modigliani famously quipped that he did not think that unemployment during the Great Depression should be described, in an economic model, as a “sudden bout of contagious laziness”. Quite. For the past thirty years we have been debating whether to use classical real business cycle models (RBC), or their close cousins, modern New Keynesian (NK) models, to describe recessions. In both of these models, the social cost of persistent unemployment is less than a half a percentage point of steady state consumption.

What does that mean? Median US consumption is roughly $30,000 a year. One half of one percent of this is roughly 50 cents a day. A person inhabiting one of our artificial RBC or NK model worlds would not be willing to pay more than 50 cents a day to avoid another Great Depression. That is true of real business cycle models. It is also true of New Keynesian models …

That’s why I eschew NK and RBC models. They are both wrong. The high unemployment that follows a financial crisis is not the socially efficient response to technology shocks. And the slow recovery from a financial melt-down has nothing to do with the costs of reprinting menus that underpins the models of NK economists. It is a potentially permanent failure of private agents to coordinate on an outcome that is socially desirable.

Roger Farmer


In the basic DSGE models used by both New Classical and ‘New Keynesian’ macroeconomists, the labour market is always cleared – responding to a changing interest rate, expected lifetime incomes, or real wages, the representative agent maximizes the utility function by varying her labour supply, money holding and consumption over time. Most importantly – if the real wage somehow deviates from its ‘equilibrium value,’ the representative agent adjusts her labour supply, so that when the real wage is higher than its ‘equilibrium value,’ labour supply is increased, and when the real wage is below its ‘equilibrium value,’ labour supply is decreased.

In this model world, unemployment is always an optimal choice to changes in the labour market conditions. Hence, unemployment is totally voluntary. To be unemployed is something one optimally chooses to be — a kind of prolonged vacation.

Although this picture of unemployment as a kind of self-chosen optimality strikes most people as utterly ridiculous, there are also, unfortunately, a lot of neoclassical economists out there who still think that price and wage rigidities are the prime movers behind unemployment. DSGE models basically explain variations in employment (and a fortiori output) by assuming that nominal wages are more flexible than prices – disregarding the lack of empirical evidence for this rather counterintuitive assumption.

Lowering nominal wages would not clear the labour market. Lowering wages – and possibly prices – could, perhaps, lower interest rates and increase investment. It would be much easier to achieve that effect by increasing the money supply. In any case, wage reductions were not seen as a general substitute for an expansionary monetary or fiscal policy. And even if potentially positive impacts of lowering wages exist, there are also more heavily weighing negative impacts – management-union relations deteriorating, expectations of ongoing wage cuts causing delay of investments, debt deflation, et cetera.

The classical proposition that lowering wages would lower unemployment and ultimately take economies out of depressions was ill-founded and basically wrong. Flexible wages would probably only make things worse by leading to erratic price fluctuations. The basic explanation for unemployment is insufficient aggregate demand, and that is mostly determined outside the labour market.

Obviously it’s rather embarrassing that the kind of DSGE models ‘modern’ macroeconomists use cannot incorporate such a basic fact of reality as involuntary unemployment. Of course, working with representative agent models, this should come as no surprise. The kind of unemployment that occurs is voluntary, since it is only adjustments of the hours of work that these optimizing agents make to maximize their utility.

And as if this nonsense economics were not enough, in New Classical and ‘New Keynesian’ macroeconomists’ DSGE models, increases in government spending lead to a drop in private consumption!

How on earth does one arrive at such a bizarre view?

In the most basic mainstream proto-DSGE models one often assumes that governments finance current expenditures with current tax revenues. This will have a negative income effect on the households, leading — rather counterintuitively — to a drop in private consumption although both employment and production expand. This mechanism also holds when the (in)famous Ricardian equivalence is added to the models.

Ricardian equivalence basically means that financing government expenditures through taxes or debts is equivalent, since debt financing must be repaid with interest, and agents — equipped with rational expectations — would only increase savings in order to be able to pay the higher taxes in the future, thus leaving total expenditures unchanged.

Why?

In the standard neoclassical consumption model — used in DSGE macroeconomic modeling — people are basically portrayed as treating time as a dichotomous phenomenon, today versus the future, when contemplating making decisions and acting. How much should one consume today and how much in the future? Facing an intertemporal budget constraint of the form

$c_t + c_f/(1+r) = f_t + y_t + y_f/(1+r)$,

where $c_t$ is consumption today, $c_f$ is consumption in the future, $f_t$ is holdings of financial assets today, $y_t$ is labour income today, $y_f$ is labour income in the future, and $r$ is the real interest rate, and having a lifetime utility function of the form

$U = u(c_t) + a\,u(c_f)$,

where $a$ is the time-discounting parameter, the representative agent (consumer) maximizes his utility when

$u'(c_t) = a(1+r)u'(c_f)$.

This expression – the Euler equation – implies that the representative agent (consumer) is indifferent between consuming one more unit today or instead consuming it tomorrow. Typically using a logarithmic functional form – $u(c) = \log c$ – which gives $u'(c) = 1/c$, the Euler equation can be rewritten as

$1/c_t = a(1+r)(1/c_f)$,

or

$c_f/c_t = a(1+r)$.

This importantly implies that, according to the neoclassical consumption model, changes in the (real) interest rate and consumption move in the same direction. It also follows that consumption is invariant to the timing of taxes, since wealth — $f_t + y_t + y_f/(1+r)$ — has to be interpreted as the present discounted value net of taxes. And so, according to the assumption of Ricardian equivalence, the timing of taxes does not affect consumption, simply because the maximization problem as specified in the model is unchanged. As a result, households cut down on their consumption when governments increase their spending. Mirabile dictu!
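To make the mechanism concrete, here is a minimal numerical sketch of the two-period model above (my own illustration, with made-up parameter values): shifting a tax from today to tomorrow while holding its present value fixed leaves the optimal consumption plan completely unchanged.

```python
# Two-period consumption choice with log utility:
#   max  log(c_t) + a*log(c_f)
#   s.t. c_t + c_f/(1+r) = f_t + (y_t - tax_t) + (y_f - tax_f)/(1+r)
# Closed form: c_t = W/(1+a) and c_f = a*(1+r)*W/(1+a),
# where W is after-tax lifetime wealth.

def optimal_consumption(f_t, y_t, y_f, tax_t, tax_f, r, a):
    wealth = f_t + (y_t - tax_t) + (y_f - tax_f) / (1 + r)
    return wealth / (1 + a), a * (1 + r) * wealth / (1 + a)

r, a = 0.05, 0.95              # made-up interest rate and discount parameter
f_t, y_t, y_f = 10, 100, 100   # made-up assets and labour incomes

# Case 1: the government taxes 20 today.
print(optimal_consumption(f_t, y_t, y_f, tax_t=20, tax_f=0, r=r, a=a))

# Case 2: it borrows today and taxes 20*(1+r) tomorrow instead.
print(optimal_consumption(f_t, y_t, y_f, tax_t=0, tax_f=20 * (1 + r), r=r, a=a))

# Both cases print the identical consumption plan, because only the present
# value of taxes enters wealth -- which is all the model lets consumption
# depend on.
```

Which is, of course, exactly the point: in the model the timing of taxes cannot matter, whatever real-world households actually do.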

Macroeconomic models have to abandon Ricardian equivalence nonsense. But replacing it with “overlapping generations” and “infinite-horizon” models is — in terms of realism and relevance — just getting out of the frying pan into the fire. All unemployment is still voluntary. Intertemporal substitution between labour and leisure is still ubiquitous. And the specification of the utility function is still hopelessly off the mark from an empirical point of view.

As one Nobel laureate had it:

Ricardian equivalence is taught in every graduate school in the country. It is also sheer nonsense.

Joseph E. Stiglitz, twitter 

And as one economics blogger has it:

New Classical and ‘New Keynesian’ DSGE modeling is taught in every graduate school in the country. It is also sheer nonsense.

Lars P Syll, twitter 

Economics vs. reality

24 July, 2016 at 09:41 | Posted in Economics | 1 Comment

Denicolò and Zanchettin, in an article published by the prestigious Economic Journal, claim to have shown among other things that “stronger patent protection may reduce innovation and growth.” As a prelude to forty pages of mathematics, they state of their model, “The economy is populated by L identical, infinitely lived, individuals … There is a unique final good in the economy that can be consumed, used to produce intermediate goods, or used in research …” Not only are all the people in this model world identical and immortal, they only produce a single product. The product has properties that are entirely unreal—not so much science fiction as pure magic. The conclusion may be justified, or not; but the idea that a model so remote from reality can be used to make public policy recommendations is, to anyone but a fully certified neoclassical economist, staggering.

The mathematization of economics since WW II has made mainstream — neoclassical — economists more or less obsessed with formal, deductive-axiomatic models. Confronted with the critique that they do not solve real problems, they often react like Saint-Exupéry‘s Great Geographer, who, in response to the questions posed by The Little Prince, says that he is too occupied with his scientific work to be able to say anything about reality. Confronted with economic theory’s lack of relevance and inability to tackle real problems, one retreats into the wonderful world of economic models. One goes into the “shack of tools” — as my old mentor Erik Dahmén used to say — and stays there. While the economic problems in the world around us steadily increase, one is rather happily playing along with the latest toys in the mathematical toolbox.

Modern mainstream economics sure is very rigorous — but if it’s rigorously wrong, who cares?

Instead of making formal logical argumentation based on deductive-axiomatic models the message, I think we are better served by economists who, more than anything else, try to contribute to solving real problems. And then the motto of John Maynard Keynes is more valid than ever:

It is better to be vaguely right than precisely wrong

Racial bias in police shooting

23 July, 2016 at 18:23 | Posted in Politics & Society, Statistics & Econometrics | 3 Comments

Roland Fryer, an economics professor at Harvard University, recently published a working paper at NBER on the topic of racial bias in police use of force and police shootings. The paper gained substantial media attention – a write-up of it became the top viewed article on the New York Times website. The most notable part of the study was its finding that there was no evidence of racial bias in police shootings, which Fryer called “the most surprising result of [his] career”. In his analysis of shootings in Houston, Texas, black and Hispanic people were no more likely (and perhaps even less likely) to be shot relative to whites.

Fryer’s analysis is highly flawed, however … Fryer was not comparing rates of police shootings by race. Instead, his research asked whether these racial differences were the result of “racial bias” rather than merely “statistical discrimination”. Both terms have specific meanings in economics. Statistical discrimination occurs when an individual or institution treats people differently based on racial stereotypes that ‘truly’ reflect the average behavior of a racial group. For instance, if a city’s black drivers are 50% more likely to possess drugs than white drivers, and police officers are 50% more likely to pull over black drivers, economic theory would hold that this discriminatory policing is rational …

Once explained, it is possible to find the idea of “statistical discrimination” just as abhorrent as “racial bias”. One could point out that the drug laws police enforce were passed with racially discriminatory intent, that collectively punishing black people based on “average behavior” is wrong, or that – as a self-fulfilling prophecy – bias can turn into statistical discrimination (if black people’s cars are searched more thoroughly, for instance, it will appear that their rates of drug possession are higher) …

Even if one accepts the logic of statistical discrimination versus racial bias, it is an inappropriate choice for a study of police shootings. The method that Fryer employs has, for the most part, been used to study traffic stops and stop-and-frisk practices. In those cases, economic theory holds that police want to maximize the number of arrests for the possession of contraband (such as drugs or weapons) while expending the fewest resources. If they are acting in the most cost-efficient, rational manner, the officers may use racial stereotypes to increase the arrest rate per stop. This theory completely falls apart for police shootings, however, because officers are not trying to rationally maximize the number of shootings …

Economic theory aside, there is an even more fundamental problem with the Houston police shooting analysis. In a typical study, a researcher will start with a previously defined population where each individual is at risk of a particular outcome. For instance, a population of drivers stopped by police can have one of two outcomes: they can be arrested, or they can be sent on their way. Instead of following this standard approach, Fryer constructs a fictitious population of people who are shot by police and people who are arrested. The problem here is that these two groups (those shot and those arrested) are, in all likelihood, systematically different from one another in ways that cannot be controlled for statistically … Properly interpreted, the actual result from Fryer’s analysis is that the racial disparity in arrest rates is larger than the racial disparity in police shootings. This is an unsurprising finding, and proves neither a lack of bias nor a lack of systematic discrimination.

Justin Feldman
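Feldman’s last point is essentially about the choice of denominator. A purely hypothetical numerical example (my own numbers, chosen only to make the arithmetic transparent) shows how conditioning on arrests can make a per-capita disparity disappear, or even reverse.

```python
# Hypothetical populations, chosen only to illustrate the denominator problem.
population = {"group_A": 1_000_000, "group_B": 1_000_000}
arrests    = {"group_A": 40_000,    "group_B": 10_000}  # A arrested 4x as often
shootings  = {"group_A": 40,        "group_B": 20}      # A shot 2x as often per capita

for g in population:
    per_capita = shootings[g] / population[g] * 100_000
    per_arrest = shootings[g] / arrests[g] * 1_000
    print(f"{g}: {per_capita:.1f} shootings per 100k residents, "
          f"{per_arrest:.1f} per 1k arrests")

# group_A: 4.0 per 100k residents, 1.0 per 1k arrests
# group_B: 2.0 per 100k residents, 2.0 per 1k arrests
# Conditioning on arrests reverses the comparison: per capita, group A faces
# twice the risk; per arrest, it appears to face half -- precisely because the
# disparity in arrests is larger than the disparity in shootings.
```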

What makes most econometric models invalid

23 July, 2016 at 10:42 | Posted in Statistics & Econometrics | Comments Off on What makes most econometric models invalid

The assumption of additivity and linearity means that the outcome variable is, in reality, linearly related to any predictors … and that if you have several predictors then their combined effect is best described by adding their effects together …

This assumption is the most important because if it is not true then even if all other assumptions are met, your model is invalid because you have described it incorrectly. It’s a bit like calling your pet cat a dog: you can try to get it to go in a kennel, or to fetch sticks, or to sit when you tell it to, but don’t be surprised when its behaviour isn’t what you expect because even though you’ve called it a dog, it is in fact a cat. Similarly, if you have described your statistical model inaccurately it won’t behave itself and there’s no point in interpreting its parameter estimates or worrying about significance tests of confidence intervals: the model is wrong.

Andy Field
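A small simulated example (mine, not Field’s) makes the ‘cat called a dog’ point concrete: fit a straight line to a relation that is actually quadratic and the estimated slope, together with any test built on it, describes the misspecified model rather than the data.

```python
import numpy as np

rng = np.random.default_rng(1)

x = np.linspace(-3, 3, 200)
y = x**2 + rng.normal(0, 0.5, x.size)   # the true relation is quadratic

# Misspecified linear model: y = b0 + b1*x
X = np.column_stack([np.ones_like(x), x])
b0, b1 = np.linalg.lstsq(X, y, rcond=None)[0]
residuals = y - (b0 + b1 * x)

print(f"estimated slope: {b1:.3f}")               # close to 0: 'no effect of x'
print(f"residual std:    {residuals.std():.2f}")  # large, U-shaped residuals

# The linear model reports essentially no relationship between x and y, even
# though y is a (noisy) deterministic function of x. Interpreting the slope,
# its standard error or its significance test tells us about the misdescribed
# model, not about the world.
```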

Economics — a kind of brain damage …

23 July, 2016 at 10:12 | Posted in Economics | 1 Comment


(h/t Nanikore)

Is Paul Romer nothing but a neo-colonial Washington Consensus libertarian?

22 July, 2016 at 17:42 | Posted in Economics | 2 Comments

On Monday the World Bank made it official that Paul Romer will be the new chief economist. This nomination can be seen as a big step back toward the infamous Washington Consensus, which World Bank and IMF seemed to have left behind. This is true, even though Paul Romer has learned quite well to hide the market fundamentalist and anti-democratic nature of his pet idea – charter cities – behind a veil of compassionate wording …

Since about 2009 he has been promoting so-called charter cities as a model for development … His proposal amounts to declaring enlightened colonialism to be the best (or even only) way toward development of poor countries, and a good substitute to development aid …

Romer has in mind a version of the Hong Kong case, without the coercion. His cities are supposed to be extreme forms of free enterprise zones which some developing countries, including China, have been experimenting with for quite a while. The idea of the latter is to attract foreign investors by exempting them from certain regulations, duties etc. His charter cities go further. They build on the wholesale abrogation of all laws of the respective country. For countries with dysfunctional public institutions he suggested that they lease out the regions where these charter cities are to be built, long-term, to a consortium of enlightened industrial countries, which would do the management. What the British extracted at gunpoint from China, developing countries are expected to give voluntarily today. A World Bank manager commented on the idea in 2010 on the blog of the World Bank by quoting a magazine article, which called it “not only neo-medieval, but also neo-colonial”.

The libertarian spirit of the idea of the man who will be the World Bank’s chief economist from September reminds of the Washington Consensus that ruled into the 1990s. This is a name for the ideological position, enforced by World Bank and IMF, that the best and only way to development is the scrapping of government regulation and giving companies a maximum of freedom to go about their business.

Norbert Häring

Economics laws — the ultimate reduction to triviality

22 July, 2016 at 16:27 | Posted in Economics | Comments Off on Economics laws — the ultimate reduction to triviality

What we discover is that the cash value of these laws lies beneath the surface — in the extent to which they approximate the behaviour of real gases or substances, since such substances do not exist in the world …

Notice that we are here regarding it as grounds for complaint that such claims are ‘reduced to the status of definitions’ … Their truth is obtained at a price, namely that they cease to tell us about this particular world and start telling us about the meaning of words instead …

The ultimate reduction to triviality makes the claim definitionally true, and obviously so, in which case it’s worth nothing to those who already know the language …

Michael Scriven

One of the main cruxes of economic laws — and regularities — is that they only hold ceteris paribus. That fundamentally means that these laws/regularities only hold when the right conditions are at hand to give rise to them. Unfortunately, from an empirical point of view, those conditions are only at hand in artificially closed nomological models purposely designed to give rise to the kind of regular associations that economists want to explain. But, really, since these laws/regularities do not exist outside these ‘socio-economic machines,’ what’s the point in constructing them? When the almost endless list of narrow and specific assumptions necessary to allow the ‘rigorous’ deductions is known to be at odds with reality, what good do these models do?

Take ‘The Law of Demand.’

Although it may (perhaps) be said that neoclassical economics had succeeded in establishing The Law – when the price of a commodity falls, the demand for it will increase – for single individuals, it soon turned out, with the Sonnenschein-Mantel-Debreu theorem, that it is not possible to extend The Law to the market level unless one makes ridiculously unrealistic assumptions, such as all individuals having identical homothetic preferences.

This could only be conceivable if there was in essence only one actor – the (in)famous representative actor. So, yes, it was possible to generalize The Law of Demand – as long as we assumed that on the aggregate level there was only one commodity and one actor. What generalization! Does this sound reasonable? Of course not. This is pure nonsense!

How has neoclassical economics reacted to this devastating finding? Basically by looking the other way, ignoring it and hoping that no one sees that the emperor is naked.

Modern mainstream neoclassical textbooks try to describe and analyze complex and heterogeneous real economies with a single rational-expectations-robot-imitation-representative-agent. That is, with something that has absolutely nothing to do with reality. And – worse still – something that is not even amenable to the kind of general equilibrium analysis that it is supposed to provide a foundation for, since Hugo Sonnenschein (1972), Rolf Mantel (1976) and Gerard Debreu (1974) unequivocally showed that there exist no conditions under which assumptions on individuals would guarantee either stability or uniqueness of the equilibrium solution.

Of course one could say that it is too difficult to show at the undergraduate level why the procedure is right, and defer that to master’s and doctoral courses. It could justifiably be reasoned that way – if what you teach your students were true, if The Law of Demand were generalizable to the market level and the representative actor were a valid modeling abstraction! But in this case it is demonstrably known to be false, and therefore this is nothing but a case of scandalous intellectual dishonesty. It’s like telling your students that 2 + 2 = 5 and hoping that they will never run into Peano’s axioms of arithmetic.

As Hans Albert has it:

The neoclassical style of thought – with its emphasis on thought experiments, reflection on the basis of illustrative examples and logically possible extreme cases, its use of model construction as the basis of plausible assumptions, as well as its tendency to decrease the level of abstraction, and similar procedures – appears to have had such a strong influence on economic methodology that even theoreticians who strongly value experience can only free themselves from this methodology with difficulty …

Clearly, it is possible to interpret the ‘presuppositions’ of a theoretical system … not as hypotheses, but simply as limitations to the area of application of the system in question. Since a relationship to reality is usually ensured by the language used in economic statements, in this case the impression is generated that a content-laden statement about reality is being made, although the system is fully immunized and thus without content. In my view that is often a source of self-deception in pure economic thought …

Expected utility — a serious case of theory-induced blindness

22 July, 2016 at 13:01 | Posted in Economics | 1 Comment

Although the expected utility theory is obviously both theoretically and descriptively inadequate, colleagues and microeconomics textbook writers gladly continue to use it, as though its deficiencies were unknown or unheard of.

Daniel Kahneman writes — in Thinking, Fast and Slow — that expected utility theory is seriously flawed since it doesn’t take into consideration the basic fact that people’s choices are influenced by changes in their wealth. Where standard microeconomic theory assumes that preferences are stable over time, Kahneman and other behavioural economists have forcefully again and again shown that preferences aren’t fixed, but vary with different reference points. How can a theory that doesn’t allow for people having different reference points from which they consider their options have an almost axiomatic status within economic theory?

The mystery is how a conception of the utility of outcomes that is vulnerable to such obvious counterexamples survived for so long. I can explain it only by a weakness of the scholarly mind … I call it theory-induced blindness: once you have accepted a theory and used it as a tool in your thinking it is extraordinarily difficult to notice its flaws … You give the theory the benefit of the doubt, trusting the community of experts who have accepted it … But they did not pursue the idea to the point of saying, “This theory is seriously wrong because it ignores the fact that utility depends on the history of one’s wealth, not only present wealth.”

On a more economic-theoretical level, information theory — and especially the so called Kelly criterion — also highlights the problems concerning the neoclassical theory of expected utility.
Suppose I want to play a game. Let’s say we are tossing a coin. If heads comes up, I win a dollar, and if tails comes up, I lose a dollar. Suppose further that I believe I know that the coin is asymmetrical and that the probability of getting heads (p) is greater than 50% – say 60% (0.6) – while the bookmaker assumes that the coin is totally symmetric. How much of my bankroll (T) should I optimally invest in this game?

A strict neoclassical utility-maximizing economist would suggest that my goal should be to maximize the expected value of my bankroll (wealth), and according to this view, I ought to bet my entire bankroll.

Does that sound rational? Most people would answer no to that question. The risk of losing is so high that after only a few games — the expected time until my first loss is 1/(1 – p), which in this case equals 2.5 — I would with high likelihood have lost and thereby gone bankrupt. The expected-value-maximizing economist does not seem to have a particularly attractive approach.
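To make the ruin risk concrete, here is a minimal simulation sketch of my own (not part of the original argument); the function name and parameters are assumptions of the illustration, with p = 0.6 as in the example:

```python
import random

# Minimal sketch (illustration only): with win probability p = 0.6, how many
# games pass on average before the first loss? For a bettor who stakes the
# whole bankroll every time, that first loss is also bankruptcy.
def games_until_first_loss(p=0.6, trials=100_000, seed=1):
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        games = 1
        while rng.random() < p:   # keep winning with probability p
            games += 1
        total += games            # this game is the first loss
    return total / trials

print(games_until_first_loss())   # ≈ 1/(1 - 0.6) = 2.5 games
```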

So what’s the alternative? One possibility is to apply the so-called Kelly criterion — after the American physicist and information theorist John L. Kelly, who in the article A New Interpretation of Information Rate (1956) suggested this criterion for how to optimize the size of the bet — under which the optimum is to invest a specific fraction (x) of wealth (T) in each game. How do we arrive at this fraction?

When I win, I have (1 + x) times as much as before, and when I lose (1 – x) times as much. After n rounds, when I have won v times and lost n – v times, my new bankroll (W) is

(1) W = (1 + x)^v (1 – x)^(n – v) T

[A technical note: The bets used in these calculations are of the “quotient form” (Q), where you typically keep your bet money until the game is over, and a fortiori, in the win/lose expression it’s not included that you get back what you bet when you win. If you prefer to think of odds calculations in the “decimal form” (D), where the bet money typically is considered lost when the game starts, you have to transform the calculations according to Q = D – 1.]

The bankroll increases multiplicatively — “compound interest” — and the long-term average growth rate for my wealth can then be easily calculated by taking the logarithms of (1), which gives

(2) log (W/T) = v log (1 + x) + (n – v) log (1 – x).

If we divide both sides by n we get

(3) [log (W / T)] / n = [v log (1 + x) + (n – v) log (1 – x)] / n

The left hand side now represents the average growth rate (g) in each game. On the right hand side the ratio v/n is equal to the percentage of bets that I won, and when n is large, this fraction will be close to p. Similarly, (n – v)/n is close to (1 – p). When the number of bets is large, the average growth rate is

(4) g = p log (1 + x) + (1 – p) log (1 – x).

Now we can easily determine the value of x that maximizes g:

(5) dg/dx = d[p log (1 + x) + (1 – p) log (1 – x)]/dx = p/(1 + x) – (1 – p)/(1 – x)
Setting this derivative equal to zero, p/(1 + x) – (1 – p)/(1 – x) = 0, gives

(6) x = p – (1 – p)

Since p is the probability that I will win, and (1 – p) is the probability that I will lose, the Kelly strategy says that to optimize the growth rate of your bankroll (wealth) you should invest a fraction of the bankroll equal to the difference between the likelihood that you will win and the likelihood that you will lose. In our example, this means that in each game I should bet the fraction x = 0.6 – (1 – 0.6) = 0.2 — that is, 20% of my bankroll. Alternatively, we see that the Kelly criterion implies that we have to choose x so that the expected logarithmic growth rate — p log (1 + x) + (1 – p) log (1 – x) — is maximized. Plotting this expression as a function of x, we see that the maximizing value is x = 0.2:

[Figure: the expected logarithmic growth rate g(x) plotted against x for p = 0.6, with its maximum at x = 0.2]
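For readers who want to check the maximization numerically, here is a small sketch of my own (the grid and variable names are assumptions of the illustration, not anything from the original post):

```python
import numpy as np

# Sketch (illustration only): evaluate the average growth rate
# g(x) = p*log(1 + x) + (1 - p)*log(1 - x) on a grid and locate its maximum,
# which should sit at x = p - (1 - p) = 0.2 when p = 0.6.
p = 0.6
x = np.linspace(0.0, 0.99, 1_000)
g = p * np.log(1 + x) + (1 - p) * np.log(1 - x)

x_star = x[np.argmax(g)]
print(f"optimal fraction ≈ {x_star:.3f}")      # ≈ 0.200
print(f"optimal growth rate ≈ {g.max():.4f}")  # ≈ 0.0201
```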

The optimal average growth rate becomes

(7) 0.6 log (1.2) + 0.4 log (0.8) ≈ 0.02.

If I bet 20% of my wealth on each coin toss, I will after 10 games on average have 1.02^10 times as much as when I started (≈ 1.22).

This game strategy will give us an outcome in the long run that is better than if we use a strategy based on the neoclassical economic theory of choice under uncertainty (risk) – expected-value maximization. If we bet all our wealth in each game we will most likely lose our fortune, but because there is a small probability of ending up with a very large fortune, the expected value is still high. For a real-life player – for whom there is very little to gain from this type of ensemble average – it is more relevant to look at the time average of what he may expect to win (in our game the averages are the same only if we assume that the player has a logarithmic utility function). What good does it do me if my coin tossing maximizes an expected value when I might have gone bankrupt after four games played? If I try to maximize the expected value, the probability of bankruptcy soon gets close to one. Better then to invest 20% of my wealth in each game and maximize my long-term average wealth growth!
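The gap between the ensemble average and what a typical individual player experiences can be illustrated with a small Monte Carlo sketch of my own (the parameter names and the 10-game, 10,000-player set-up are assumptions of the illustration):

```python
import numpy as np

# Sketch (illustration only): compare Kelly betting (x = 0.2) with the
# 'bet everything' strategy that maximizes the ensemble-average expected value.
rng = np.random.default_rng(0)
p, n_games, n_players = 0.6, 10, 10_000

wins = rng.random((n_players, n_games)) < p        # True = win, probability 0.6

# Kelly: wealth is multiplied by 1.2 on a win and by 0.8 on a loss.
kelly = np.prod(np.where(wins, 1.2, 0.8), axis=1)

# All-in: wealth doubles on a win, but a single loss means bankruptcy.
all_in = np.where(wins.all(axis=1), 2.0 ** n_games, 0.0)

print("Kelly  - mean:", kelly.mean(), " median:", np.median(kelly))
print("All-in - mean:", all_in.mean(), " median:", np.median(all_in))
# Typical outcome: the all-in mean is propped up by a handful of lucky players
# (the theoretical mean is 1.2^10 ≈ 6.2), but its median is 0 — almost everyone
# goes bankrupt. The Kelly player's median is about 1.22, matching the
# 1.02^10 growth factor in the text.
```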

When applied to the neoclassical theory of expected utility, one thinks in terms of “parallel universes” and asks what the expected return of an investment is, calculated as an average over these “parallel universes.” In our coin-toss example, it is as if one supposes that various “I”s are tossing a coin and that the losses of many of them will be offset by the huge profits that one of these “I”s makes. But this ensemble average does not work for an individual, for whom a time average better reflects the experience made in the “non-parallel universe” in which we live.

The Kelly criterion gives a more realistic answer, where one thinks in terms of the only universe we actually live in, and asks what the expected return of an investment is, calculated as an average over time.

Since we cannot go back in time — entropy and the “arrow of time” make this impossible — and the bankruptcy option is always at hand (extreme events and “black swans” are always possible), we have nothing to gain from thinking in terms of ensembles and “parallel universes.”

Actual events follow a fixed pattern of time, where events are often linked in a multiplicative process (as e. g. investment returns with “compound interest”) which is basically non-ergodic.

Instead of arbitrarily assuming that people have a certain type of utility function – as in the neoclassical theory – the Kelly criterion shows that we can obtain a less arbitrary and more accurate picture of real people’s decisions and actions by basically assuming that time is irreversible. When the bankroll is gone, it’s gone. The fact that in a parallel universe it could conceivably have been refilled is of little comfort to those who live in the one and only possible world that we call the real world.

Our coin-toss example can be applied to more traditional economic issues. If we think of an investor, we can basically describe his situation in terms of our coin toss. What fraction (x) of his assets (T) should an investor – who is about to make a large number of repeated investments – bet on his feeling that he can evaluate an investment better (p = 0.6) than the market (p = 0.5)? The greater the x, the greater the leverage – but also the greater the risk. Since p is the probability that his investment valuation is correct and (1 – p) is the probability that the market’s valuation is correct, the Kelly criterion says that he optimizes the growth rate of his investments by investing a fraction of his assets equal to the difference between the probability that he will “win” and the probability that he will “lose.” In our example this means that at each investment opportunity he should invest the fraction x = 0.6 – (1 – 0.6) = 0.2, i.e. 20% of his assets. The optimal average growth rate of his investments is then about 2% (0.6 log (1.2) + 0.4 log (0.8) ≈ 0.02).

Kelly’s criterion shows that because we cannot go back in time, we should not take excessive risks. High leverage increases the risk of bankruptcy. This should also be a warning for the financial world, where the constant quest for greater and greater leverage – and risk – creates extensive and recurrent systemic crises. A more appropriate level of risk-taking is a necessary ingredient in any policy aiming to curb excessive risk-taking.

The works of people like Kelly and Kahneman show that expected utility theory is indeed a serious case of theory-induced blindness that transmogrifies truth.

Cherry picking economic models

21 July, 2016 at 11:25 | Posted in Economics | Comments Off on Cherry picking economic models

Chameleons arise and are often nurtured by the following dynamic. First a bookshelf model is constructed that involves terms and elements that seem to have some relation to the real world and assumptions that are not so unrealistic that they would be dismissed out of hand. The intention of the author, let’s call him or her “Q,” in developing the model may be to say something about the real world or the goal may simply be to explore the implications of making a certain set of assumptions. Once Q’s model and results become known, references are made to it, with statements such as “Q shows that X.” This should be taken as a short-hand way of saying “Q shows that under a certain set of assumptions it follows (deductively) that X,” but some people start taking X as a plausible statement about the real world. If someone skeptical about X challenges the assumptions made by Q, some will say that a model shouldn’t be judged by the realism of its assumptions, since all models have assumptions that are unrealistic. Another rejoinder made by those supporting X as something plausibly applying to the real world might be that the truth or falsity of X is an empirical matter and until the appropriate empirical tests or analyses have been conducted and have rejected X, X must be taken seriously. In other words, X is innocent until proven guilty. Now these statements may not be made in quite the stark manner that I have made them here, but the underlying notion still prevails that because there is a model for X, because questioning the assumptions behind X is not appropriate, and because the testable implications of the model supporting X have not been empirically rejected, we must take X seriously. Q’s model (with X as a result) becomes a chameleon that avoids the real world filters.

The best way to illustrate what chameleons are is to give some actual examples …

In April 2012 Harry DeAngelo and René Stulz circulated a paper entitled “Why High Leverage is Optimal for Banks.” The title of the paper is important here: it strongly suggests that the authors are claiming something about actual banks in the real world. In the introduction to this paper the authors explain what their model is designed to do:

“To establish that high bank leverage is the natural (distortion-free) result of intermediation focused on liquid-claim production, the model rules out agency problems, deposit insurance, taxes, and all other distortionary factors. By positing these idealized conditions, the model obviously ignores some important determinants of bank capital structure in the real world. However, in contrast to the MM framework – and generalizations that include only leverage-related distortions – it allows a meaningful role for banks as producers of liquidity and shows clearly that, if one extends the MM model to take that role into account, it is optimal for banks to have high leverage.” [emphasis added]

Their model, in other words, is designed to show that if we rule out many important things and just focus on one factor alone, we obtain the particular result that banks should be highly leveraged. This argument is for all intents and purposes analogous to the argument made in another paper entitled “Why High Alcohol Consumption is Optimal for Humans” by Bacardi and Mondavi. In the introduction to their paper Bacardi and Mondavi explain what their model does:

“To establish that high intake of alcohol is the natural (distortion free) result of human liquid-drink consumption, the model rules out liver disease, DUIs, health benefits, spousal abuse, job loss and all other distortionary factors. By positing these idealized conditions, the model obviously ignores some important determinants of human alcohol consumption in the real world. However, in contrast to the alcohol neutral framework – and generalizations that include only overconsumption-related distortions – it allows a meaningful role for humans as producers of that pleasant “buzz” one gets by consuming alcohol, and shows clearly that if one extends the alcohol neutral model to take that role into account, it is optimal for humans to be drinking all of their waking hours.” [emphasis added]

DeAngelo and Stulz’s model is clearly a bookshelf theoretical model that would not pass through any reasonable filter if we want to take its results and apply them directly to the real world. In addition to ignoring much of what is important (agency problems, taxes, systemic risk, government guarantees, and other distortionary factors), the results of their main model are predicated on the assets of the bank being riskless and are based on a posited objective function that is linear in the percentage of assets funded with deposits. Given this, the authors naturally obtain a corner solution with assets 100% funded by deposits. (They have no explicit model addressing what happens when bank assets are risky, but they contend that bank leverage should still be “high” when risk is present) …

DeAngelo and Stulz’s paper is a good illustration of my claim that one can generally develop a theoretical model to produce any result within a wide range. Do you want a model that produces the result that banks should be 100% funded by deposits? Here is a set of assumptions and an argument that will give you that result. That such a model exists tells us very little. By claiming relevance without running it through the filter it becomes a chameleon …

Whereas some theoretical models can be immensely useful in developing intuitions, in essence a theoretical model is nothing more than an argument that a set of conclusions follows from a given set of assumptions. Being logically correct may earn a place for a theoretical model on the bookshelf, but when a theoretical model is taken off the shelf and applied to the real world, it is important to question whether the model’s assumptions are in accord with what we know about the world. Is the story behind the model one that captures what is important or is it a fiction that has little connection to what we see in practice? Have important factors been omitted? Are economic agents assumed to be doing things that we have serious doubts they are able to do? These questions and others like them allow us to filter out models that are ill suited to give us genuine insights. To be taken seriously models should pass through the real world filter.

Chameleons are models that are offered up as saying something significant about the real world even though they do not pass through the filter. When the assumptions of a chameleon are challenged, various defenses are made (e.g., one shouldn’t judge a model by its assumptions, any model has equal standing with all other models until the proper empirical tests have been run, etc.). In many cases the chameleon will change colors as necessary, taking on the colors of a bookshelf model when challenged, but reverting back to the colors of a model that claims to apply to the real world when not challenged.

Paul Pfleiderer

Reading Pfleiderer’s absolutely fabulous gem of an article reminded me of what H. L. Mencken once famously said:

There is always an easy solution to every problem – neat, plausible and wrong.

Pfleiderer’s perspective may be applied to many of the issues involved when modeling complex and dynamic economic phenomena. Let me take just one example — simplicity.

When it comes to modeling, I do see the point in simplicity — emphatically made time after time by e.g. Paul Krugman — as long as it doesn’t impinge on our truth-seeking. “Simple” macroeconomic models may of course be an informative heuristic tool for research. But if practitioners of modern macroeconomics do not investigate and make an effort to provide a justification for the credibility of the simplicity assumptions on which they erect their building, it will not fulfill its tasks. Maintaining that economics is a science in the “true knowledge” business, I remain a skeptic of the pretences and aspirations of “simple” macroeconomic models and theories. So far, I can’t really see that e.g. “simple” microfounded models have yielded very much in terms of realistic and relevant economic knowledge.

All empirical sciences use simplifying or unrealistic assumptions in their modeling activities. That is not the issue – as long as the assumptions made are not unrealistic in the wrong way or for the wrong reasons.

But models do not only face theory. They also have to look to the world. Being able to model a “credible world,” a world that somehow could be considered real or similar to the real world, is not the same as investigating the real world. Even though — as Pfleiderer acknowledges — all theories are false, since they simplify, they may still possibly serve our pursuit of truth. But then they cannot be unrealistic or false in just any way. The falsehood or unrealism has to be qualified.

Explanation, understanding and prediction of real-world phenomena, relations and mechanisms therefore cannot be grounded on simply assuming simplicity. If we cannot show that the mechanisms or causes we isolate and handle in our models are stable, in the sense that when we export them from our models to our target systems they do not change from one situation to another, then they – considered “simple” or not – only hold under ceteris paribus conditions and a fortiori are of limited value for our understanding, explanation and prediction of our real-world target systems.

The obvious ontological shortcoming of a basically epistemic – rather than ontological – approach is that “similarity” or “resemblance” tout court does not guarantee that the correspondence between model and target is interesting, relevant, revealing or somehow adequate in terms of mechanisms, causal powers, capacities or tendencies. No matter how many convoluted refinements of concepts are made in the model, if the simplifications made do not result in models similar to reality in the appropriate respects (such as structure, isomorphism, etc.), the surrogate system becomes a substitute system that does not bridge to the world but rather misses its target.

Constructing simple macroeconomic models somehow seen as “successively approximating” macroeconomic reality is a rather unimpressive attempt at legitimizing the use of fictitious idealizations for reasons that have more to do with model tractability than with a genuine interest in understanding and explaining features of real economies. Many of the model assumptions standardly made by neoclassical macroeconomics – simplicity being one of them – are restrictive rather than harmless, and can therefore not in any sensible sense be considered approximations at all.

If economists aren’t able to show that the mechanisms or causes they isolate and handle in their “simple” models are stable, in the sense that they do not change when exported to their “target systems,” they only hold under ceteris paribus conditions and are a fortiori of limited value to our understanding, explanations or predictions of real economic systems.

That Newton’s theory in most regards is simpler than Einstein’s is of no avail. Today Einstein has replaced Newton. The ultimate arbiter of the scientific value of models cannot be simplicity.

As scientists we have to get our priorities right. Ontological under-labouring has to precede epistemology.

 

Footnote: And of course you understood that the Bacardi/Mondavi paper is fictional. Or did you?

The ugly face of racism

21 July, 2016 at 11:14 | Posted in Politics & Society | Comments Off on The ugly face of racism

 

Well, how should one react to these expressions of undisguised, swinish racism?

Perhaps by listening to Olof Palme

Why economists can’t reason

19 July, 2016 at 17:01 | Posted in Economics | 3 Comments

Reasoning is the process whereby we get from old truths to new truths, from the known to the unknown, from the accepted to the debatable … If the reasoning starts on firm ground, and if it is itself sound, then it will lead to a conclusion which we must accept, though previously, perhaps, we had not thought we should. And those are the conditions that a good argument must meet: true premises and a good inference. If either of those conditions is not met, you can’t say whether you’ve got a true conclusion or not.

Neoclassical economic theory today is in the story-telling business whereby economic theorists create make-believe analogue models of the target system – usually conceived as the real economic system. This modeling activity is considered useful and essential. Since fully-fledged experiments on a societal scale as a rule are prohibitively expensive, ethically indefensible or unmanageable, economic theorists have to substitute experimenting with something else. To understand and explain relations between different entities in the real economy the predominant strategy is to build models and make things happen in these “analogue-economy models” rather than engineering things happening in real economies.

Neoclassical economics has long since given up on the real world and contents itself with proving things about thought-up worlds. Empirical evidence only plays a minor role in economic theory, where models largely function as a substitute for empirical evidence. The one-sided, almost religious, insistence on axiomatic-deductivist modeling as the only scientific activity worthy of pursuing in economics is a scientific cul-de-sac. To have valid evidence is not enough. What economics needs is sound evidence.

Avoiding logical inconsistencies is crucial in all science. But it is not enough. Just as important is avoiding factual inconsistencies. And without showing — or at least warrantedly arguing — that the assumptions and premises of their models are in fact true, mainstream economists aren’t really reasoning, but only playing games. Formalistic deductive “Glasperlenspiel” can be very impressive and seductive. But in the realm of science it ought to be considered of little or no value to simply make claims about the model and lose sight of reality.

Goodbye Lenin!

19 July, 2016 at 11:42 | Posted in Varia | Comments Off on Goodbye Lenin!

 

How do we attach probabilities to the real world?

19 July, 2016 at 11:11 | Posted in Statistics & Econometrics | 1 Comment

Econometricians usually think that the data generating process (DGP) can always be modelled properly using a probability measure. The argument is standardly based on the assumption that the right sampling procedure ensures there will always be an appropriate probability measure. But – as always – one really has to argue the case, and present warranted evidence that real-world features are correctly described by some probability measure.

There are no such things as free-standing probabilities – simply because probabilities, strictly speaking, are only defined relative to chance set-ups – probabilistic nomological machines like flipping coins or roulette wheels. And even these machines can be tricky to handle. Although prob(fair coin lands heads | I toss it) = prob(fair coin lands heads & I toss it)/prob(I toss it) may be well-defined, it’s not certain we can use it, since we cannot define the probability that I will toss the coin, given the fact that I am not a nomological machine producing coin tosses.

No nomological machine – no probability.

A chance set-up is a nomological machine for probabilistic laws, and our description of it is a model that works in the same way as a model for deterministic laws … A situation must be like the model both positively and negatively – it must have all the characteristics featured in the model and it must have no significant interventions to prevent it operating as envisaged – before we can expect repeated trials to give rise to events appropriately described by the corresponding probability …

Probabilities attach to the world via models, models that serve as blueprints for a chance set-up – i.e., for a probability-generating machine … Once we review how probabilities are associated with very special kinds of models before they are linked to the world, both in probability theory itself and in empirical theories like physics and economics, we will no longer be tempted to suppose that just any situation can be described by some probability distribution or other. It takes a very special kind of situation with the arrangements set just right – and not interfered with – before a probabilistic law can arise …

Probabilities are generated by chance set-ups, and their characterisation necessarily refers back to the chance set-up that gives rise to them. We can make sense of probability of drawing two red balls in a row from an urn of a certain composition with replacement; but we cannot make sense of the probability of six per cent inflation in the United Kingdom next year without an implicit reference to a specific social and institutional structure that will serve as the chance set-up that generates this probability.
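As a toy illustration of the kind of chance set-up the passage refers to, here is a short sketch of my own (the urn composition of 3 red balls out of 10 is an assumption of the illustration):

```python
import random

# Sketch (illustration only): an urn of fixed composition, sampled with
# replacement, is a chance set-up for which a probability is well defined.
red, total, draws = 3, 10, 2                 # assumed: 3 red balls out of 10

analytic = (red / total) ** draws            # probability of two reds in a row
rng = random.Random(42)
trials = 100_000
hits = sum(
    all(rng.random() < red / total for _ in range(draws))
    for _ in range(trials)
)
print(analytic, hits / trials)               # both ≈ 0.09
```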

Küssen kann man nicht alleine

18 July, 2016 at 19:33 | Posted in Varia | Comments Off on Küssen kann man nicht alleine

 

Ich hab’ mein Herz in Heidelberg verloren (personal)

18 July, 2016 at 19:02 | Posted in Varia | 2 Comments


On probability distributions and uncertainty

18 July, 2016 at 17:31 | Posted in Economics | Comments Off on On probability distributions and uncertainty

Almost a hundred years after John Maynard Keynes wrote his seminal A Treatise on Probability (1921), it is still very difficult to find economics textbooks that seriously try to incorporate his far-reaching and incisive analysis of induction and evidential weight.

The standard view in mainstream economics – and the axiomatic probability theory underlying it – is to a large extent based on the rather simplistic idea that ‘more is better.’ But as Keynes argues – ‘more of the same’ is not what is important when making inductive inferences. It’s rather a question of ‘more but different.’

Variation, not replication, is at the core of induction. Finding that p(x|y) = p(x|y & w) doesn’t make w ‘irrelevant.’ Knowing that the probability is unchanged when w is present gives p(x|y & w) another evidential weight (‘weight of argument’). Running 10 replicative experiments does not make you as ‘sure’ of your inductions as running 10 000 varied experiments – even if the probability values happen to be the same.

According to Keynes we live in a world permeated by unmeasurable uncertainty – not quantifiable stochastic risk – which often forces us to make decisions based on anything but ‘rational expectations.’ Keynes rather thinks that we base our expectations on the confidence or ‘weight’ we put on different events and alternatives. To Keynes expectations are a question of weighing probabilities by ‘degrees of belief,’ beliefs that often have preciously little to do with the kind of stochastic probabilistic calculations made by the rational agents as modeled by mainstream social economics.

How strange that writers of economics textbooks as a rule do not even touch upon these aspects of scientific methodology, which seem so fundamental and important for anyone trying to understand how we learn and orient ourselves in an uncertain world. An educated guess as to why this is so would be that Keynes’s concepts cannot be squeezed into a single calculable numerical ‘probability.’ In the quest for quantities one turns a blind eye to qualities and looks the other way – but Keynes’s ideas keep creeping out from under the mainstream economics carpet.

It’s high time that economics textbooks give Keynes his due.

There are at least two ways to formally distinguish Keynes’s idea that the future is unknowable in principle from the neoclassical idea that the future is stochastic-stable and that agents know, or act as if they know, this distribution with absolute certainty. First, as Keynes, Shackle, Vickers, and others have stressed, it is logically impossible for agents to assign numerical probabilities to the potentially infinite number of imaginable future states. Even Savage acknowledged that, taken literally, the assumption that agents are able to consider all possible future economic states “is utterly ridiculous” (1954, p. 16). Worse yet, many possible future events are not even imaginable in the present moment: such events obviously cannot be assigned a probability …

Alternatively, we could — for the sake of argument — think of firms and portfolio selectors as somehow forcing themselves to assign expected future returns to all the assets under evaluation even though they are conscious of the fact that their knowledge of the future is inherently incomplete and unreliable. The key point is that such subjective probability distributions would not be knowledge, and — most important — any rational agent would know they were not knowledge … Hicks insisted that in the nonergodic real world, people “do not know what is going to happen and know that they do not know what is going to happen. As in history!” …

Therefore, even given the unrealistic assumption of the existence of these distributions, there is a crucial piece of information about agent decision making that would be missing from any subjectivist theory — the extent to which the agents believe in the meaningfulness of their forecasts or, in Keynes’s words, the “weight of belief” or “the degree of rational belief” the agents assign to these probabilities. When knowledge of the future is subjective and imperfect, as it always is, the expectations of rational agents can never be fully and adequately represented solely by probability distributions because such distributions fail to incorporate the agents’ own understanding of the degree of incompleteness of their knowledge. These functions neglect the agents’ “confidence” in the meaningfulness of the forecasts — “how highly we rate the likelihood of our best forecast turning out to be quite wrong” (Keynes 1936, p. 148).

Keynes stressed the centrality of agents’ consciousness of their ignorance: the state of confidence plays a crucial role in his theory of the investment decision. “The state of confidence [in the ability to make meaningful forecasts] is relevant because it is one of the major factors determining [investment]” (1936, p. 149). The central role of confidence in the investment decision-making process has disappeared from mainstream Keynesian models and cannot exist by assumption in New Classical and neoclassical models.

James Crotty

Economics for everyone

18 July, 2016 at 13:26 | Posted in Economics | 2 Comments

 
