History of Modern Monetary Theory

20 September 2016, 19:20 | Published in Economics | Comments closed for History of Modern Monetary Theory

 

A skyrocketing economics blog

20 September 2016, 19:04 | Published in Varia | 4 comments


Tired of the idea of an infallible mainstream neoclassical economics and its perpetuation of spoon-fed orthodoxy, yours truly launched this blog five years ago. The number of visitors has increased steadily, and with my posts now having been viewed more than 3 million times, I have to admit that I am still — given the rather wonkish character of the blog, with posts mostly on economic theory, statistics, econometrics, theory of science and methodology — rather gobsmacked that so many are interested and take the time to read the often rather geeky stuff posted here.

In the 21st century the blogosphere has without any doubt become one of the greatest channels for dispersing new knowledge and information. As a blogger I can specialize in those particular topics that an economist and critical realist professor of social science happens to have both deep knowledge of and interest in. That, of course, also means — in the modern long tail world — being able to target a segment of readers with much narrower and more specialized interests than newspapers and magazines as a rule can aim for — and still attract quite a lot of readers.

Chicago drivel — a sure way to get a ‘Nobel prize’ in economics

19 September 2016, 17:38 | Published in Economics | 6 comments

In 2007 Thomas Sargent gave a graduation speech at the University of California, Berkeley, giving the grads ”a short list of valuable lessons that our beautiful subject teaches”:

1. Many things that are desirable are not feasible.
2. Individuals and communities face trade-offs.
3. Other people have more information about their abilities, their efforts, and their preferences than you do.
4. Everyone responds to incentives, including people you want to help. That is why social safety nets don’t always end up working as intended.
5. There are trade-offs between equality and efficiency.
6. In an equilibrium of a game or an economy, people are satisfied with their choices. That is why it is difficult for well-meaning outsiders to change things for better or worse.
7. In the future, you too will respond to incentives. That is why there are some promises that you’d like to make but can’t. No one will believe those promises because they know that later it will not be in your interest to deliver. The lesson here is this: before you make a promise, think about whether you will want to keep it if and when your circumstances change. This is how you earn a reputation.
8. Governments and voters respond to incentives too. That is why governments sometimes default on loans and other promises that they have made.
9. It is feasible for one generation to shift costs to subsequent ones. That is what national government debts and the U.S. social security system do (but not the social security system of Singapore).
10. When a government spends, its citizens eventually pay, either today or tomorrow, either through explicit taxes or implicit ones like inflation.
11. Most people want other people to pay for public goods and government transfers (especially transfers to themselves).
12. Because market prices aggregate traders’ information, it is difficult to forecast stock prices and interest rates and exchange rates.

Reading through this list of ”valuable lessons”, things suddenly fall into place.

This kind of self-righteous neoliberal drivel has again and again been praised and prized. And not only by econ bloggers and right-wing think-tanks.

Out of the seventy-six laureates who have been awarded ‘The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel,’ twenty-eight have been affiliated with the University of Chicago. The world is really a small place when it comes to economics …

Why critique in economics is so important

17 September 2016, 16:46 | Published in Economics | 9 comments

Some of the economists who agree about the state of macro in private conversations will not say so in public. This is consistent with the explanation based on different prices. Yet some of them also discourage me from disagreeing openly, which calls for some other explanation.

They may feel that they will pay a price too if they have to witness the unpleasant reaction that criticism of a revered leader provokes. There is no question that the emotions are intense. After I criticized a paper by Lucas, I had a chance encounter with someone who was so angry that at first he could not speak. Eventually, he told me, ”You are killing Bob.”

But my sense is that the problem goes even deeper than avoidance. Several economists I know seem to have assimilated a norm that the post-real macroeconomists actively promote – that it is an extremely serious violation of some honor code for anyone to criticize openly a revered authority figure – and that neither facts that are false, nor predictions that are wrong, nor models that make no sense matter enough to worry about …

Science, and all the other research fields spawned by the enlightenment, survive by ”turning the dial to zero” on these innate moral senses. Members cultivate the conviction that nothing is sacred and that authority should always be challenged … By rejecting any reliance on central authority, the members of a research field can coordinate their independent efforts only by maintaining an unwavering commitment to the pursuit of truth, established imperfectly, via the rough consensus that emerges from many independent assessments of publicly disclosed facts and logic; assessments that are made by people who honor clearly stated disagreement, who accept their own fallibility, and relish the chance to subvert any claim of authority, not to mention any claim of infallibility.

Paul Romer

This is part of why yours truly appreciates Romer’s article, and even finds it ‘brave.’ Everyone knows that what he says is true, but few have the courage to openly speak and write about it. The ‘honour code’ in academia certainly needs revision.

The excessive formalization and mathematization of economics since WW II has made mainstream — neoclassical — economists more or less obsessed with formal, deductive-axiomatic models. Confronted with the critique that they do not solve real problems, they often react like Saint-Exupéry’s Great Geographer, who, in response to the questions posed by The Little Prince, says that he is too occupied with his scientific work to be able to say anything about reality. Confronted with economic theory’s lack of relevance and inability to tackle real problems, one retreats into the wonderful world of economic models. While the economic problems in the world around us steadily increase, one happily plays along with the latest toys in the mathematical toolbox.

Modern mainstream economics is, to be sure, very rigorous — but if it’s rigorously wrong, who cares?

Instead of making formal logical argumentation based on deductive-axiomatic models the message, I think we are better served by economists who more than anything else try to contribute to solving real problems. And then the motto of John Maynard Keynes is more valid than ever:

It is better to be vaguely right than precisely wrong

Jensen’s inequality (wonkish)

17 September 2016, 09:26 | Published in Statistics & Econometrics | Comments closed for Jensen’s inequality (wonkish)
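Jensen’s inequality says that for a convex function f, E[f(X)] ≥ f(E[X]), with the gap growing with the spread of X. A minimal numerical check (my own illustration, not from the post), using the convex function f(x) = x² and a skewed lognormal variable:

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.lognormal(mean=0.0, sigma=1.0, size=1_000_000)  # a skewed random variable

f = np.square          # a convex function
lhs = f(x).mean()      # E[f(X)], here roughly e^2 ~ 7.4
rhs = f(x.mean())      # f(E[X]), here roughly e ~ 2.7
# Jensen: for convex f, lhs >= rhs; the difference here is exactly Var(X)
```

For f(x) = x² the gap E[X²] − (E[X])² is just the variance of X, which is why averaging before transforming systematically understates the mean of a convex transform.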


Economists’ infatuation with immense assumptions

16 September 2016, 23:31 | Published in Economics | Comments closed for Economists’ infatuation with immense assumptions

Peter Dorman is one of those rare economists whom it is always a pleasure to read. Here his critical eye is focused on economists’ infatuation with homogeneity and averages:

You may feel a gnawing discomfort with the way economists use statistical techniques. Ostensibly they focus on the difference between people, countries or whatever the units of observation happen to be, but they nevertheless seem to treat the population of cases as interchangeable—as homogenous on some fundamental level. As if people were replicants.

You are right, and this brief talk is about why and how you’re right, and what this implies for the questions people bring to statistical analysis and the methods they use.

Our point of departure will be a simple multiple regression model of the form

y = β0 + β1 x1 + β2 x2 + … + ε

where y is an outcome variable, x1 is an explanatory variable of interest, the other x’s are control variables, the β’s are coefficients on these variables (or a constant term, in the case of β0), and ε is a vector of residuals. We could apply the same analysis to more complex functional forms, and we would see the same things, so let’s stay simple.

What question does this model answer? It tells us the average effect that variations in x1 have on the outcome y, controlling for the effects of the other explanatory variables. Repeat: it’s the average effect of x1 on y.

This model is applied to a sample of observations. What is assumed to be the same for these observations? (1) The outcome variable y is meaningful for all of them. (2) The list of potential explanatory factors, the x’s, is the same for all. (3) The effects these factors have on the outcome, the β’s, are the same for all. (4) The proper functional form that best explains the outcome is the same for all. In these four respects all units of observation are regarded as essentially the same.

Now what is permitted to differ across these observations? Simply the values of the x’s and therefore the values of y and ε. That’s it.

Thus measures of the difference between individual people or other objects of study are purchased at the cost of immense assumptions of sameness. It is these assumptions that both reflect and justify the search for average effects …

In the end, statistical analysis is about imposing a common structure on observations in order to understand differentiation. Any structure requires assuming some kinds of sameness, but some approaches make much more sweeping assumptions than others. An unfortunate symbiosis has arisen in economics between statistical methods that excessively rule out diversity and statistical questions that center on average (non-diverse) effects. This is damaging in many contexts, including hypothesis testing, program evaluation, forecasting—you name it …

The first step toward recovery is admitting you have a problem. Every statistical analyst should come clean about what assumptions of homogeneity are being made, in light of their plausibility and the opportunities that exist for relaxing them.
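Dorman’s point about coming clean on homogeneity can be made concrete. In the sketch below (my own hypothetical illustration, not Dorman’s), two equally sized sub-populations respond to x1 with opposite signs; the pooled regression dutifully reports the average effect and hides both:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4000
group = rng.integers(0, 2, n)            # two latent sub-populations
x1 = rng.normal(size=n)
beta = np.where(group == 1, 1.5, -0.5)   # the effect of x1 differs by group
y = beta * x1 + rng.normal(size=n)

pooled = np.polyfit(x1, y, 1)[0]         # the 'average effect': about 0.5
by_group = {g: np.polyfit(x1[group == g], y[group == g], 1)[0]
            for g in (0, 1)}             # about -0.5 and 1.5 once homogeneity is relaxed
```

Relaxing the homogeneity assumption here is as simple as interacting x1 with group membership — but only because the grouping variable happens to be observed, which in practice it often is not.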

Limiting model assumptions in economic science always have to be closely examined. If we are to show that the mechanisms or causes we isolate and handle in our models are stable — in the sense that they do not change when we ”export” them to our “target systems” — we have to be able to show that they do not hold only under ceteris paribus conditions, for if they do, they are a fortiori of only limited value for our understanding, explanation or prediction of real economic systems.

Our admiration for technical virtuosity should not blind us to the fact that we have to maintain a cautious attitude towards probabilistic inferences in economic contexts. Science should help us penetrate to the causal process lying behind events and disclose the causal forces behind what appear to be simple facts. We should look out for causal relations, but econometrics can never be more than a starting point in that endeavour, since econometric (statistical) explanations are not explanations in terms of mechanisms, powers, capacities or causes. Firmly stuck in an empiricist tradition, econometrics is only concerned with the measurable aspects of reality. But there is always the possibility that there are other variables – of vital importance, and although perhaps unobservable and non-additive, not necessarily epistemologically inaccessible – that were not considered for the model. The variables that were considered can hence never be guaranteed to be more than potential causes, not real causes.

A rigorous application of econometric methods in economics really presupposes that the phenomena of our real-world economies are ruled by stable causal relations between variables. A perusal of the leading econom(etr)ic journals shows that most econometricians still concentrate on fixed-parameter models, and that parameter values estimated in specific spatio-temporal contexts are presupposed to be exportable to totally different contexts. To warrant this assumption one has, however, to convincingly establish that the targeted acting causes are stable and invariant, so that they maintain their parametric status after the bridging. The endemic lack of predictive success of the econometric project indicates that this hope of finding fixed parameters is a hope for which there really is no other ground than hope itself.

Real-world social systems are not governed by stable causal mechanisms or capacities. The kinds of ”laws” and relations that econometrics has established are laws and relations about entities in models that presuppose causal mechanisms to be atomistic and additive. When causal mechanisms operate in real-world social target systems, they do so only in ever-changing and unstable combinations where the whole is more than a mechanical sum of parts. If economic regularities obtain, they do so (as a rule) only because we engineered them for that purpose. Outside man-made “nomological machines” they are rare, or even non-existent. Unfortunately, that also makes most of the achievements of econometrics – like most of the contemporary endeavours of mainstream economic theoretical modeling – rather useless.

Remember that a model is not the truth. It is a lie to help you get your point across. And in the case of modeling economic risk, your model is a lie about others, who are probably lying themselves. And what’s worse than a simple lie? A complicated lie.

Sam L. Savage The Flaw of Averages

Lazy theorizing and useless macroeconomics

15 September 2016, 15:03 | Published in Economics | 6 comments

In a new, extremely well-written, brave, and interesting article, Paul Romer mounts a frontal attack on the theories that have put macroeconomics on a path of ‘intellectual regress’ for three decades now:

Macroeconomists got comfortable with the idea that fluctuations in macroeconomic aggregates are caused by imaginary shocks, instead of actions that people take, after Kydland and Prescott (1982) launched the real business cycle (RBC) model …

In response to the observation that the shocks are imaginary, a standard defence invokes Milton Friedman’s (1953) methodological assertion from unnamed authority that ”the more significant the theory, the more unrealistic the assumptions.” More recently, ”all models are false” seems to have become the universal hand-wave for dismissing any fact that does not conform to the model that is the current favourite.

The noncommittal relationship with the truth revealed by these methodological evasions and the ”less than totally convinced …” dismissal of fact goes so far beyond post-modern irony that it deserves its own label. I suggest ”post-real.”

Paul Romer

There are many kinds of useless ‘post-real’ economics held in high regard within the mainstream economics establishment today. Few — if any — are less deserving of that regard than the macroeconomic theory/method — mostly connected with Nobel laureates Finn Kydland, Robert Lucas, Edward Prescott and Thomas Sargent — called calibration.


Paul Romer and yours truly are certainly not the only ones having doubts about the scientific value of calibration. In the Journal of Economic Perspectives (1996, vol. 10), Nobel laureates Lars Peter Hansen and James J. Heckman write:

It is only under very special circumstances that a micro parameter such as the inter-temporal elasticity of substitution or even a marginal propensity to consume out of income can be ‘plugged into’ a representative consumer model to produce an empirically concordant aggregate model … What credibility should we attach to numbers produced from their ‘computational experiments’, and why should we use their ‘calibrated models’ as a basis for serious quantitative policy evaluation? … There is no filing cabinet full of robust micro estimates ready to use in calibrating dynamic stochastic equilibrium models … The justification for what is called ‘calibration’ is vague and confusing.

Mathematical statistician Aris Spanos — in Error and Inference (Mayo & Spanos, 2010, p. 240) — is no less critical:

Given that ”calibration” purposefully forsakes error probabilities and provides no way to assess the reliability of inference, how does one assess the adequacy of the calibrated model? …

The idea that it should suffice that a theory ”is not obscenely at variance with the data” (Sargent, 1976, p. 233) is to disregard the work that statistical inference can perform in favor of some discretional subjective appraisal … it hardly recommends itself as an empirical methodology that lives up to the standards of scientific objectivity

In physics it may possibly not strain credulity too much to model processes as ergodic – where time and history do not really matter – but in the social and historical sciences it is obviously ridiculous. If societies and economies were ergodic worlds, why do econometricians fervently discuss things such as structural breaks and regime shifts? That they do is an indication of how unrealistic it is to treat open systems as analyzable with ergodic concepts.
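To see what is at stake, here is a deliberately stylized illustration (my own, with made-up numbers) of a non-ergodic process: in a multiplicative growth process, the ensemble average and the time average part company, so a parameter estimated ‘across’ the ensemble says little about any single history:

```python
import numpy as np

rng = np.random.default_rng(7)
# Each period, wealth is multiplied by 1.5 or 0.6 with equal probability.
steps, paths = 100, 10_000
factors = rng.choice([1.5, 0.6], size=(paths, steps))
wealth = factors.prod(axis=1)

ensemble_growth = 0.5 * (1.5 + 0.6)               # expectation grows 5% per step
time_avg_growth = np.exp(np.log(factors).mean())  # a typical path shrinks ~5% per step
median_final = np.median(wealth)                  # most histories end below where they started
```

The ensemble expectation grows without bound while the median path decays towards zero — time averages and ensemble averages coincide only for ergodic processes, which open, historical systems have no reason to be.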

The future is not reducible to a known set of prospects. It is not like sitting at the roulette table and calculating what the future outcomes of spinning the wheel will be. Reading Lucas, Sargent, Prescott, Kydland and other calibrationists one comes to think of Robert Clower’s apt remark that

much economics is so far removed from anything that remotely resembles the real world that it’s often difficult for economists to take their own subject seriously.

As Romer says:

Math cannot establish the truth value of a fact. Never has. Never will.

So instead of assuming calibration and rational expectations to be right, one ought to confront the hypothesis with the available evidence. It is not enough to construct models. Anyone can construct models. To be seriously interesting, models have to come with an aim. They have to have an intended use. If the intention of calibration and rational expectations is to help us explain real economies, they have to be evaluated from that perspective. A model or hypothesis without a specific applicability does not really deserve our interest.

To say, as Edward Prescott does, that

one can only test if some theory, whether it incorporates rational expectations or, for that matter, irrational expectations, is or is not consistent with observations

is not enough. Without strong evidence, all kinds of absurd claims and nonsense may pretend to be science. We have to demand more of a justification than this rather watered-down version of “anything goes” when it comes to rationality postulates. If one proposes rational expectations, one also has to support its underlying assumptions. None is given, which makes it rather puzzling how rational expectations has become the standard modeling assumption in much of modern macroeconomics. Perhaps the reason is that economists often mistake mathematical beauty for truth.

But I think Prescott’s view is also the reason why calibration economists are not particularly interested in empirical examinations of how real choices and decisions are made in real economies. In the hands of Lucas, Prescott and Sargent, rational expectations has been transformed from an – in principle – testable hypothesis into an irrefutable proposition. Believing in a set of irrefutable propositions may be comfortable – like religious convictions or ideological dogmas – but it is not science.

So where does this all lead us? What is the trouble ahead for economics? Putting a sticky-price DSGE lipstick on the RBC pig sure won’t do. Neither will — as Paul Romer notices — just looking the other way and pretending it’s raining:

The trouble is not so much that macroeconomists say things that are inconsistent with the facts. The real trouble is that other economists do not care that the macroeconomists do not care about the facts. An indifferent tolerance of obvious error is even more corrosive to science than committed advocacy of error.

Proper use of math

15 September 2016, 08:33 | Published in Economics | 18 comments

Balliol Croft, Cambridge
27. ii. 06
My dear Bowley,

I have not been able to lay my hands on any notes as to Mathematico-economics that would be of any use to you: and I have very indistinct memories of what I used to think on the subject. I never read mathematics now: in fact I have forgotten even how to integrate a good many things.

But I know I had a growing feeling in the later years of my work at the subject that a good mathematical theorem dealing with economic hypotheses was very unlikely to be good economics: and I went more and more on the rules — (1) Use mathematics as a short-hand language, rather than as an engine of inquiry. (2) Keep to them till you have done. (3) Translate into English. (4) Then illustrate by examples that are important in real life. (5) Burn the mathematics. (6) If you can’t succeed in 4, burn 3. This last I did often.

I believe in Newton’s Principia Methods, because they carry so much of the ordinary mind with them. Mathematics used in a Fellowship thesis by a man who is not a mathematician by nature — and I have come across a good deal of that — seems to me an unmixed evil. And I think you should do all you can to prevent people from using Mathematics in cases in which the English language is as short as the Mathematical …

Your emptyhandedly,

Alfred Marshall

Has economics — really — become more empirical?

14 September 2016, 19:22 | Published in Economics | 4 comments

In Economics Rules (OUP 2015), Dani Rodrik maintains that ‘imaginative empirical methods’ — such as game theoretical applications, natural experiments, field experiments, lab experiments, RCTs — can help us answer questions concerning the external validity of economic models. In Rodrik’s view they are more or less tests of ‘an underlying economic model’ and enable economists to make the right selection from the ever-expanding ‘collection of potentially applicable models.’ Writes Rodrik:

Another way we can observe the transformation of the discipline is by looking at the new areas of research that have flourished in recent decades. Three of these are particularly noteworthy: behavioral economics, randomized controlled trials (RCTs), and institutions … They suggest that the view of economics as an insular, inbred discipline closed to the outside influences is more caricature than reality.

I beg to differ. When looked at carefully, there are in fact few real reasons to share Rodrik’s optimism on this ‘empirical turn’ in economics.

Field studies and experiments face the same basic problem as theoretical models — they are built on rather artificial conditions and have difficulties with the ‘trade-off’ between internal and external validity. The more artificial the conditions, the higher the internal validity — but also the lower the external validity. The more we rig experiments/field studies/models to avoid ‘confounding factors,’ the less the conditions are reminiscent of the real ‘target system.’ You could of course discuss field studies vs. experiments vs. theoretical models in terms of realism — but the nodal issue is not that; it is basically about how economists, using different isolation strategies in different ‘nomological machines,’ attempt to learn about causal relationships. I have strong doubts about the generalizability of all three research strategies, because the probability is high that causal mechanisms are different in different contexts, and a lack of homogeneity/stability/invariance doesn’t give us warranted export licenses to the ‘real’ societies or economies.

If we see experiments or field studies as theory tests or models that ultimately aspire to say something about the real ‘target system,’ then the problem of external validity is central (and was for a long time also a key reason why behavioural economists had trouble getting their research results published).

Assume that you have examined how the work performance of Chinese workers A is affected by B (‘treatment’). How can we extrapolate/generalize to new samples outside the original population (e.g. to the US)? How do we know that any replication attempt ‘succeeds’? How do we know when these replicated experimental results can be said to justify inferences made about samples from the original population? If, for example, P(A|B) is the conditional density function for the original sample, and we are interested in making an extrapolative prediction of E[P(A|B)], how can we know that the new sample’s density function is identical with the original? Unless we can give some really good argument for this being the case, inferences built on P(A|B) do not really say anything about the target system’s P′(A|B).

As I see it, this is the heart of the matter. External validity/extrapolation/generalization is founded on the assumption that we can make inferences based on P(A|B) that are exportable to other populations for which P′(A|B) applies. Sure, if one can convincingly show that P and P′ are similar enough, the problems are perhaps surmountable. But arbitrarily introducing functional specification restrictions of the type invariance/stability/homogeneity is, at least for an epistemological realist, far from satisfactory. And often it is – unfortunately – exactly this that I see when I look at mainstream neoclassical economists’ models/experiments/field studies.
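A toy numerical sketch (entirely hypothetical numbers, just to fix ideas) of why an export licence is needed: a relationship estimated in one population can fit beautifully at home and fail badly in a target population whose causal structure differs:

```python
import numpy as np

rng = np.random.default_rng(11)
n = 5_000
# 'Original' population: outcome A responds to treatment intensity B with slope 2
b_orig = rng.normal(size=n)
a_orig = 2.0 * b_orig + rng.normal(size=n)
# 'Target' population: same observables, but a different causal structure (slope -1)
b_targ = rng.normal(size=n)
a_targ = -1.0 * b_targ + rng.normal(size=n)

beta_hat = np.polyfit(b_orig, a_orig, 1)[0]           # estimated on the original sample
mse_in = np.mean((a_orig - beta_hat * b_orig) ** 2)   # fits well at home
mse_out = np.mean((a_targ - beta_hat * b_targ) ** 2)  # 'exported' without warrant
```

Nothing in the original-sample fit signals the breakdown; only substantive knowledge that P and P′ are similar enough could license the export.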

By this I do not mean to say that empirical methods per se are so problematic that they can never be used. On the contrary, I am basically — though not without reservations — in favour of the increased use of experiments and field studies within economics. Not least as an alternative to completely barren ‘bridge-less’ axiomatic-deductive theory models. My criticism is more about aspiration levels and what we believe that we can achieve with our mediational epistemological tools and methods in the social sciences.

Many ‘experimentalists’ claim that it is easy to replicate experiments under different conditions and therefore a fortiori easy to test the robustness of experimental results. But is it really that easy? If, in the example given above, we run a test and find that our predictions were not correct – what can we conclude? That B ‘works’ in China but not in the US? Or that B ‘works’ in a backward agrarian society, but not in a post-modern service society? That B ‘worked’ in the field study conducted in 2008 but not in 2014? Population selection is almost never simple. Had the problem of external validity only been about inference from sample to population, this would be no critical problem. But the really interesting inferences are those we try to make from specific labs/experiments/fields to the specific real-world situations/institutions/structures that we are interested in understanding or (causally) explaining. And then the population problem is more difficult to tackle.

The increasing use of natural and quasi-natural experiments in economics during the last couple of decades has led, not only Rodrik, but several other prominent economists to triumphantly declare it as a major step on a recent path toward empirics, where instead of being a deductive philosophy, economics is now increasingly becoming an inductive science.

In randomized trials the researchers try to find out the causal effects that different variables of interest may have by changing circumstances randomly — a procedure somewhat (‘on average’) equivalent to the usual ceteris paribus assumption.

Besides the fact that ‘on average’ is not always ‘good enough,’ it amounts to nothing but hand waving to simpliciter assume, without argumentation, that it is tenable to treat social agents and relations as homogeneous and interchangeable entities.

Randomization is used basically to allow the econometrician to treat the population as consisting of interchangeable and homogeneous groups (‘treatment’ and ‘control’). The regression models one arrives at by using randomized trials tell us the average effect that variations in variable X have on the outcome variable Y, without having to explicitly control for the effects of other explanatory variables R, S, T, etc. Everything is assumed to be essentially equal except the values taken by variable X.

In a usual regression context one would apply an ordinary least squares estimator (OLS) in trying to get an unbiased and consistent estimate:

Y = α + βX + ε,

where α is a constant intercept, β a constant ‘structural’ causal effect and ε an error term.

The problem here is that although we may get an estimate of the ‘true’ average causal effect, this may ‘mask’ important heterogeneous effects of a causal nature. Although we get the right answer that the average causal effect is 0, those who are ‘treated’ (X=1) may have causal effects equal to −100 and those ‘not treated’ (X=0) may have causal effects equal to 100. Contemplating being treated or not, most people would probably be interested in knowing about this underlying heterogeneity and would not consider the OLS average effect particularly enlightening.
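The point is easy to reproduce numerically. In the hypothetical sketch below (the ±100 figures are simply the ones from the example), half the population has an individual treatment effect of −100 and half +100; with randomized assignment, OLS dutifully recovers the average effect of about zero while saying nothing about either subgroup:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
# Two equally common latent types with opposite individual treatment effects
tau = np.where(rng.random(n) < 0.5, -100.0, 100.0)
x = rng.integers(0, 2, n).astype(float)   # randomized treatment assignment
y = 50.0 + tau * x + rng.normal(size=n)

beta_ols = np.polyfit(x, y, 1)[0]         # average causal effect: about 0
effects = {t: y[(tau == t) & (x == 1)].mean() - y[(tau == t) & (x == 0)].mean()
           for t in (-100.0, 100.0)}      # about -100 and +100 by latent type
```

Of course, in a real trial the latent type tau is unobserved — which is exactly why the average is all the regression can report.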

Limiting model assumptions in economic science always have to be closely examined. If we are to show that the mechanisms or causes we isolate and handle in our models are stable — in the sense that they do not change when we ‘export’ them to our ‘target systems’ — we have to be able to show that they do not hold only under ceteris paribus conditions, for if they do, they are a fortiori of only limited value for our understanding, explanation or prediction of real economic systems.

Real-world social systems are not governed by stable causal mechanisms or capacities. The kinds of ‘laws’ and relations that econometrics has established are laws and relations about entities in models that presuppose causal mechanisms to be atomistic and additive. When causal mechanisms operate in real-world social target systems, they do so only in ever-changing and unstable combinations where the whole is more than a mechanical sum of parts. If economic regularities obtain, they do so (as a rule) only because we engineered them for that purpose. Outside man-made ‘nomological machines’ they are rare, or even non-existent.

I also think that most ‘randomistas’ really underestimate the heterogeneity problem. It does not just turn up as an external validity problem when trying to ‘export’ regression results to different times or different target populations. It is also often an internal problem to the millions of regression estimates that economists produce every year.

Just like econometrics, randomization promises more than it can deliver, basically because it requires assumptions that in practice are not possible to maintain.

Like econometrics, randomization is basically a deductive method. Given the assumptions (such as manipulability, transitivity, separability, additivity, linearity, etc.), these methods deliver deductive inferences. The problem, of course, is that we will never completely know when the assumptions are right. And although randomization may contribute to controlling for confounding, it does not guarantee it, since genuine randomness presupposes infinite experimentation and we know all real experimentation is finite. And even if randomization may help to establish average causal effects, it says nothing of individual effects unless homogeneity is added to the list of assumptions. Real target systems are seldom epistemically isomorphic to our axiomatic-deductive models/systems, and even if they were, we would still have to argue for the external validity of the conclusions reached from within these epistemically convenient models/systems. Causal evidence generated by randomization procedures may be valid in ”closed” models, but what we usually are interested in is causal evidence in the real target system we happen to live in.

When does a conclusion established in population X hold for target population Y? Only under very restrictive conditions!
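To see how heterogeneity undermines the ‘export’ of experimental results, consider a minimal simulation. All functional forms and numbers below are hypothetical, chosen purely for illustration. Individual treatment effects vary with a covariate x; a randomized trial in population X recovers the average effect there, but that average tells us nothing about population Y, where the covariate distribution differs and the average effect even has the opposite sign:

```python
import random

random.seed(1)

def effect(x):
    # Hypothetical individual-level treatment effect:
    # positive for small x, negative for large x.
    return 2.0 - 3.0 * x

def trial(population):
    # Randomize treatment and estimate the average treatment
    # effect as the difference in mean outcomes.
    treated, control = [], []
    for x in population:
        outcome = x + random.gauss(0, 0.1)
        if random.random() < 0.5:
            treated.append(outcome + effect(x))
        else:
            control.append(outcome)
    return sum(treated) / len(treated) - sum(control) / len(control)

# Population X: covariate concentrated near 0.2; population Y: near 1.0.
pop_x = [random.uniform(0.0, 0.4) for _ in range(100_000)]
pop_y = [random.uniform(0.8, 1.2) for _ in range(100_000)]

ate_x = trial(pop_x)  # average effect in X: around +1.4
ate_y = trial(pop_y)  # same treatment, different population: around -1.0
```

Randomization does its job in both trials, yet the ‘rigorous’ estimate from X carries no export-warrant to Y: the average conceals individual effects of opposite signs, and which average you get depends entirely on which population you happen to sample.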

‘Ideally controlled experiments’ tell us with certainty what causes what effects — but only given the right ‘closures.’ Making appropriate extrapolations from (ideal, accidental, natural or quasi) experiments to different settings, populations or target systems, is not easy. ”It works there” is no evidence for ”it will work here”. Causes deduced in an experimental setting still have to show that they come with an export-warrant to the target population/system. The causal background assumptions made have to be justified, and without licenses to export, the value of ‘rigorous’ and ‘precise’ methods — and ‘on-average-knowledge’ — is despairingly small.

So, no, I find it hard to share Rodrik’s and others’ enthusiasm and optimism about the value of (quasi)natural experiments and all the statistical-econometric machinery that comes with them. Guess I’m still waiting for the export-warrant …

 

Taking assumptions like utility maximization or market equilibrium as a matter of course leads to the ‘standing presumption in economics that, if an empirical statement is deduced from standard assumptions then that statement is reliable’ …

The ongoing importance of these assumptions is especially evident in those areas of economic research, where empirical results are challenging standard views on economic behaviour like experimental economics or behavioural finance … From the perspective of Model-Platonism, these research-areas are still framed by the ‘superior insights’ associated with early 20th century concepts, essentially because almost all of their results are framed in terms of rational individuals, who engage in optimizing behaviour and, thereby, attain equilibrium. For instance, the attitude to explain cooperation or fair behaviour in experiments by assuming an ‘inequality aversion’ integrated in (a fraction of) the subjects’ preferences is strictly in accordance with the assumption of rational individuals, a feature which the authors are keen to report …

So, while the mere emergence of research areas like experimental economics is sometimes deemed a clear sign for the advent of a new era … a closer look at these fields allows us to illustrate the enduring relevance of the Model-Platonism-topos and, thereby, shows the pervasion of these fields with a traditional neoclassical style of thought.

Jakob Kapeller

Regarding game theory, yours truly remembers how, back in 1991, when earning my first Ph.D. with a dissertation on decision making and rationality in social choice theory and game theory, I concluded that

repeatedly it seems as though mathematical tractability and elegance — rather than realism and relevance — have been the most applied guidelines for the behavioural assumptions being made. On a political and social level it is doubtful if the methodological individualism, ahistoricity and formalism they are advocating are especially valid.

This, of course, was like swearing in church. My mainstream neoclassical colleagues were — to say the least — not exactly overjoyed. Listening to what one of the world’s most renowned game theorists — Ariel Rubinstein — has to say on the rather limited applicability of game theory in this interview (emphasis added), I basically think he confirms my doubts about how well-founded Rodrik’s ‘optimism’ is:

Is game theory useful in a concrete sense or not? … I believe that game theory is very interesting. I’ve spent a lot of my life thinking about it, but I don’t respect the claims that it has direct applications.

The analogy I sometimes give is from logic. Logic is a very interesting field in philosophy, or in mathematics. But I don’t think anybody has the illusion that logic helps people to be better performers in life. A good judge does not need to know logic. It may turn out to be useful – logic was useful in the development of the computer sciences, for example – but it’s not directly practical in the sense of helping you figure out how best to behave tomorrow, say in a debate with friends, or when analysing data that you get as a judge or a citizen or as a scientist …

Game theory is about a collection of fables. Are fables useful or not? In some sense, you can say that they are useful, because good fables can give you some new insight into the world and allow you to think about a situation differently. But fables are not useful in the sense of giving you advice about what to do tomorrow, or how to reach an agreement between the West and Iran. The same is true about game theory …

In general, I would say there were too many claims made by game theoreticians about its relevance. Every book of game theory starts with “Game theory is very relevant to everything that you can imagine, and probably many things that you can’t imagine.” In my opinion that’s just a marketing device …

So — contrary to Rodrik’s optimism — I would argue that although different ‘empirical’ approaches have been — more or less — integrated into mainstream economics, there is still a long way to go before economics has become a true empirical science.

Dark age of macroeconomics

13 September 2016, 15:41 | Published in Economics | 3 comments

In his 1936 ”The General Theory of Employment, Interest and Money”, John Maynard Keynes already recognized that the idea that savings finance investment is wrong. Savings do indeed equal investment, which is written as S = I. However, this identity (roughly: a definition in the form of an equation) holds in exactly the opposite way …

Income is created by the value in excess of user cost which the producer obtains for the output he has sold; but the whole of this output must obviously have been sold either to a consumer or to another entrepreneur; and each entrepreneur’s current investment is equal to the excess of the equipment which he has purchased from other entrepreneurs over his own user cost. Hence, in the aggregate the excess of income over consumption, which we call saving, cannot differ from the addition to capital equipment which we call investment. And similarly with net saving and net investment. Saving, in fact, is a mere residual. The decisions to consume and the decisions to invest between them determine incomes. Assuming that the decisions to invest become effective, they must in doing so either curtail consumption or expand income. Thus the act of investment in itself cannot help causing the residual or margin, which we call saving, to increase by a corresponding amount …

Clearness of mind on this matter is best reached, perhaps, by thinking in terms of decisions to consume (or to refrain from consuming) rather than of decisions to save. A decision to consume or not to consume truly lies within the power of the individual; so does a decision to invest or not to invest. The amounts of aggregate income and of aggregate saving are the results of the free choices of individuals whether or not to consume and whether or not to invest; but they are neither of them capable of assuming an independent value resulting from a separate set of decisions taken irrespective of the decisions concerning consumption and investment. In accordance with this principle, the conception of the propensity to consume will, in what follows, take the place of the propensity or disposition to save.

This means that investment is financed by credit. When banks create new loans, new deposits are credited to the borrower’s account. These deposits are additional deposits that did not exist before. When the borrower spends, the investment takes place (and rises by some amount), and the seller of the goods or services that constitute the investment receives bank deposits. This is income not spent, which means that savings go up (by the same amount). Hence savings equal investment, but not because savings finance investment! Before Keynes, Wicksell and Schumpeter wrote about this as well, so it was common knowledge that loans, not savings, finance investment. Today, we live in a dark age of macroeconomics and monetary theory, since this insight has been forgotten by most of the discipline.

Dirk Ehnts
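Ehnts’ causal sequence — loans create deposits, investment spending turns the new deposit into someone’s unspent income, and saving emerges as a residual — can be sketched as a toy double-entry exercise. The account names and the figure of 100 are purely illustrative:

```python
# Toy double-entry sketch of "loans create deposits" and S = I.
# Account names and the amount 100 are illustrative only.

bank = {"loans": 0, "deposits": 0}     # bank balance sheet
firm = {"deposit": 0, "equipment": 0}  # investing entrepreneur
seller = {"deposit": 0}                # seller of capital goods

# 1. The bank grants a loan of 100: brand-new deposits appear,
#    with no prior saving needed anywhere.
bank["loans"] += 100
bank["deposits"] += 100
firm["deposit"] += 100

# 2. The firm spends the deposit on equipment: investment happens,
#    and the deposit moves to the seller.
firm["deposit"] -= 100
firm["equipment"] += 100
seller["deposit"] += 100

# 3. The seller's income of 100 is left unspent, so aggregate
#    saving has risen by exactly the amount invested -- a residual.
investment = firm["equipment"]
saving = seller["deposit"]

assert saving == investment == 100
assert bank["loans"] == bank["deposits"]  # balance sheet still balances
```

At no step did anyone have to save first: the identity S = I falls out of the bookkeeping after the investment decision, which is precisely the causal order Keynes and Ehnts insist on.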

Light in the darkness

13 September 2016, 12:26 | Published in Varia | Comments closed for Ljus i mörkret

In these times — when the airwaves are drowned in commercial radio's opinionated verbal sewage and utterly vacuous, puerile drivel — one has almost given up.

But there is light in the darkness. Every Saturday morning, P2 broadcasts an hour of refreshment and serious music: Lördagsmorgon i P2.

And now Sundays are saved as well.

In Text och musik med Eric Schüldt — broadcast on Sunday mornings on P2 between 11 a.m. and noon — you can listen to serious music and a presenter who actually has something to say, rather than just letting his jaw flap. A balm for the soul.

This past Sunday the programme was devoted to one of the giants of modern music: Arvo Pärt.

Thank you, Eric, for a fantastic programme. You are a light in the darkness!

But not as wrong as Paul Krugman …

11 September 2016, 16:42 | Published in Economics | 7 comments

In his review of Mervyn King’s The End of Alchemy: Money, Banking, and the Future of the Global Economy Krugman writes:

Is this argument right, analytically? I’d like to see King lay out a specific model for his claims, because I suspect that this is exactly the kind of situation in which words alone can create an illusion of logical coherence that dissipates when you try to do the math. Also, it’s unclear what this has to do with radical uncertainty. But this is a topic that really should be hashed out in technical working papers.

This passage really says it all.

Despite all his radical rhetoric, Krugman is — where it really counts — nothing but a die-hard mainstream neoclassical economist. Just like Milton Friedman, Robert Lucas or Greg Mankiw.

The only economic analysis that Krugman and other mainstream economists accept is analysis that takes place within the analytic-formalistic modeling strategy that makes up the core of mainstream economics. All models and theories that do not live up to the precepts of the mainstream methodological canon are weeded out. You’re free to take your models and apply them to whatever you want — as long as you do it within the mainstream approach and its modeling strategy; not using (mathematical) models at all is, as Krugman’s comment on King makes clear, totally unthinkable. If you do not follow this particular mathematical-deductive analytical formalism, you’re not even considered to be doing economics. ‘If it isn’t modeled, it isn’t economics.’

That isn’t pluralism.

That’s a methodological reductionist straitjacket.

So, even though we have seen a proliferation of models, it has almost exclusively taken place as a kind of axiomatic variation within the standard ‘urmodel,’ which is always used as a self-evident benchmark.

Just like Dani Rodrik in his Economics Rules, Krugman wants to convey the view that the proliferation of economic models during the last twenty to thirty years is a sign of great diversity and an abundance of new ideas.

But, again, it’s not, really, that simple.

Although mainstream economists like Rodrik and Krugman want to portray mainstream economics as an open and pluralistic ‘let a hundred flowers blossom,’ in reality it is rather “plus ça change, plus c’est la même chose.”

Applying closed analytical-formalist-mathematical-deductivist-axiomatic models, built on atomistic-reductionist assumptions, to a world assumed to consist of atomistic, isolated entities is a sure recipe for failure when the real world is known to be an open system where complex and relational structures and agents interact. Validly deducing things in models of that kind doesn’t much help us understand or explain what is taking place in the real world we happen to live in. Validly deducing things from patently unreal assumptions — assumptions we all know are purely fictional — makes most of the modeling exercises pursued by mainstream economists rather pointless. It’s simply not the stuff that real understanding and explanation in science is made of. Just telling us that the plethora of mathematical models that make up modern economics ‘expand the range of the discipline’s insights’ is nothing short of hand-waving.

No matter how many thousands of ‘technical working papers’ or models mainstream economists come up with, as long as they are just ‘wildly inconsistent’ axiomatic variations of the same old mathematical-deductive ilk, they will not take us one single inch closer to relevant and usable means for understanding and explaining real economies.

King knows that. Krugman obviously does not.

Per T Ohlsson talks through his hat about profits in the welfare sector

11 September 2016, 12:05 | Published in Politics & Society | 2 comments

One of the country's best-paid journalists — Per T Ohlsson — delivers in his recurring Sunday column in today's Sydsvenskan an even more poorly substantiated piece than usual.

This time it concerns Ilmar Reepalu's inquiry into profits in the tax-funded welfare sector.

The following little passage is illuminating:

The only rational solution to this complex of issues – safeguarding and developing a common and equitable welfare system while preserving freedom of choice – is a sharp tightening of the systems for supervision, control and sanctions. This could concern staffing levels, full transparency in accounting and stricter control of new providers. A programme of that kind, detailed and far-reaching, would provoke loud protests from the Gekko faction, but would probably also win support well into the bourgeois camp, unlike a proposal for a profit cap, which today lacks a parliamentary majority. Most people, from right to left, would presumably agree that caring for the sick, educating the young and looking after the old make different demands than selling hamburgers or video games.

And this inane twaddle is what we are expected to read in the year 2016. Good grief!

Of course schools and health care are something other than hamburgers. But that is precisely why we should not allow profits in tax-funded activities such as schools, health care and social care!

Sweden is admittedly a small country, yet our market-fundamentalist auxiliary troops still manage to dredge up dunce-capped national buffoons of Per T Ohlsson's calibre. Impressive.

As for the substance of the matter, it turns out, when it comes down to it, that Per T is — as usual — completely lost.

Profit-making companies in the health-care and school sectors have been much discussed in recent years. Many people are rightly upset.

Those working in schools or the care sector have found it hard to understand the attitude of the Alliance and the Social Democrats to privatisations and profit extraction in the soft welfare sector. For some inscrutable reason they have for many years argued that profits should be allowed in school and care companies. The argument has often been that the form of operation makes no difference. That is not the case. The form of operation, and allowing profit in the welfare sector, certainly does make a difference. And a negative one.

From the Confederation of Swedish Enterprise and the country's editorial writers comes a steady stream of demands for increased control, tougher scrutiny and inspections — proposals that Per T now peddles as if they were new.

But wait a minute! When the systemic shift in the welfare sector was launched in the 1990s, was not one of the main arguments for the privatisations precisely that we would escape the costs of bureaucratic logic in the form of regulations, controls and follow-ups? Competition – that panacea of market fundamentalism – was supposed to make operations more efficient and raise their quality. Market logic would force out the "bureaucratic" and unwieldy public services, leaving only the good companies that "freedom of choice" had made possible.

And now that the Panglossian privatisation pipe dream has turned out to be a nightmare, the very things we wanted to get rid of – regulations and "bureaucratic" supervision and control – are supposed to be the solution?

One can only shake one's head — and for many reasons!

For if the proposed packages of measures are to be implemented, one wonders what becomes of that efficiency gain. Controls, contract specifications, inspections and the like cost money, so how much of a surplus do the privatisations yield once these costs are also entered into the cost-benefit analysis? And how much is that "freedom of choice" worth when we see, time and again, that it merely results in operations where profit is generated through cost-cutting and lowered quality?

All economic activity is based on, or involves, some form of delegation. One party (the principal, the commissioner) wants another party (the agent, the contractor) to perform a certain task. The basic problem is how the commissioner is to get the contractor to carry out the task in the way the commissioner wishes …

There is an obvious danger in basing remuneration systems on simple objective measures when what we want to remunerate in fact has several complex dimensions, for instance payment per discharged patient, teachers' pay linked to grades, and the like. Municipal services often have this "multi-task" character, and then incentive contracts and commissions often do not work. In such cases "bureaucracies" can be more fit for purpose than markets …

Efficient use of resources can never be a goal in itself. It can, however, be a necessary means for attaining stated goals. Whether the welfare state is to be or not to be is therefore at bottom not only a question of economic efficiency, but also of our conceptions of a dignified life, justice and equal treatment.

Lars Pålsson Syll et al, Vad bör kommunerna göra? (Jönköping University Press, 2002)

So the basic question is not whether tax-funded private companies should be allowed to extract profits, or whether tougher measures in the form of control and inspection are required. The basic question is whether it is the logic of the market and privatisation that should govern our welfare institutions, or whether that should happen through the logic of democracy and politics. The basic question is whether the common welfare sector is to be governed by democracy and politics or by the market.

Let us quietly pray that the next time Mr Ohlsson sits down to write an article, he first checks what the research community has to say. That yields far more than opinionated nonsense-prattle!

The Bourbaki-Debreu delusion of axiomatic economics

10 September 2016, 18:30 | Published in Economics | 3 comments

By the time that we have arrived at the peak first climbed by Arrow and Debreu, the central question boils down to something rather simple. We can phrase the question in the context of an exchange economy, but producers can be, and are, incorporated in the model. There is a rather arid economic environment referred to as a purely competitive market in which individuals receive signals as to the prices of all goods. All the individuals have preferences over all bundles of goods. They also have endowments or incomes defined by the prices of the goods, and this determines what is feasible for them, and the set of feasible bundles constitutes their budget set. Choosing the best commodity bundle within their budget set determines their demand at each price vector. Under what assumptions on the preferences will there be at least one price vector that clears all markets, that is, an equilibrium? Put alternatively, can we find a price vector for which the excess demand for each good is zero? The question as to whether a mechanism exists to drive prices to the equilibrium has become secondary, and Herb Scarf’s famous example (1960) had already dealt that discussion a blow.

The warning bell was sounded by such authors as Donald Saari and Carl Simon (1978), whose work gave an indication, but one that has been somewhat overlooked, as to why the stability problem was basically unsolvable in the context of the general equilibrium model. The most destructive results were, of course, already there, those of Hugo Sonnenschein (1974), Rolf Mantel (1974), and Debreu (1974) himself. But those results show the model’s weakness, not where that weakness comes from. Nevertheless, the damage was done. What is particularly interesting about that episode is that it was scholars of the highest reputation in mathematical economics who brought the edifice down. This was not a revolt of the lower classes of economists complaining about the irrelevance of formalism in economics; this was a palace revolution.

Alan Kirman
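The equilibrium question Kirman describes — can we find a price vector at which the excess demand for every good is zero? — can be made concrete in a toy two-agent, two-good exchange economy. The Cobb-Douglas preferences and endowments below are purely illustrative assumptions:

```python
# Toy two-good, two-agent exchange economy with Cobb-Douglas
# preferences; all parameter values are illustrative assumptions.
# With prices (p, 1), an agent with expenditure share a and
# endowment (e1, e2) has wealth w = p*e1 + e2 and demands a*w/p
# units of good 1.

agents = [
    {"a": 0.6, "endow": (2.0, 0.0)},
    {"a": 0.2, "endow": (0.0, 3.0)},
]

def excess_demand_good1(p):
    # Aggregate demand for good 1 minus its total endowment,
    # taking good 2 as the numeraire.
    demand = 0.0
    for ag in agents:
        e1, e2 = ag["endow"]
        wealth = p * e1 + e2
        demand += ag["a"] * wealth / p
    return demand - sum(ag["endow"][0] for ag in agents)

# Bisection search for the market-clearing price vector (p*, 1).
lo, hi = 0.01, 10.0
for _ in range(100):
    mid = (lo + hi) / 2
    if excess_demand_good1(mid) > 0:
        lo = mid  # excess demand: the price must rise
    else:
        hi = mid
p_star = (lo + hi) / 2

assert abs(excess_demand_good1(p_star)) < 1e-9
# By Walras' law, the market for good 2 then clears as well.
```

In this well-behaved toy case an equilibrium price exists and is easily found. Kirman's point is precisely that this is about all one gets: the Sonnenschein-Mantel-Debreu results show that, in general, almost nothing beyond existence — in particular neither uniqueness nor stability — survives.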

Some of us have for years been urging economists to pay attention to the ontological foundations of their assumptions and models. Sad to say, economists have not paid much attention — and so modern economics has become increasingly irrelevant to the understanding of the real world.

Within mainstream economics, internal validity is still everything and external validity nothing. Why anyone should be interested in those kinds of theories and models is beyond imagination. As long as mainstream economists do not come up with any export-licenses for their theories and models to the real world in which we live, they really should not be surprised if people say that this is not science, but autism!

Studying mathematics and logics is interesting and fun. It sharpens the mind. In pure mathematics and logics we do not have to worry about external validity. But economics is not pure mathematics or logics. It’s about society. The real world. Forgetting that, economics is really in dire straits.

Mathematical axiomatic systems lead to analytic truths, which do not require empirical verification, since they are true by virtue of definitions and logic. It is a startling discovery of the twentieth century that sufficiently complex axiomatic systems are undecidable and incomplete. That is, the system of theorem and proof can never lead to ALL the true sentences about the system, and ALWAYS contain statements which are undecidable – their truth values cannot be determined by proof techniques. More relevant to our current purpose is that applying an axiomatic hypothetico-deductive system to the real world can only be done by means of a mapping, which creates a model for the axiomatic system. These mappings then lead to assertions about the real world which require empirical verification. These assertions (which are proposed scientific laws) can NEVER be proven in the sense that mathematical theorems can be proven …

Many more arguments can be given to explain the difference between analytic and synthetic truths, which corresponds to the difference between mathematical and scientific truths … The scientific method arose as a rejection of the axiomatic method used by the Greeks for scientific methodology. It was this rejection of axiomatics and logical certainty in favour of empirical and observational approach which led to dramatic progress in science. However, this did involve giving up the certainties of mathematical argumentation and learning to live with the uncertainties of induction. Economists need to do the same – abandon current methodology borrowed from science and develop a new methodology suited for the study of human beings and societies.

Asad Zaman

Reepalu's inquiry into profits in the welfare sector

10 September 2016, 09:00 | Published in Economics, Politics & Society | Comments closed for Reepalus utredning om vinster i välfärden

In November, the inquiry into profits in the welfare sector chaired by Ilmar Reepalu (S) presents its report. But news media have already seen proposals from the inquiry on what a profit cap might look like.

SVT reported last week that Joachim Landström, PhD in business administration at Uppsala University, has drawn up a proposal for the inquiry. Somewhat simplified, it amounts to capping profits at around eight per cent.

Jonas Vlachos points out that there are ways around such a cap.

– There are so incredibly many other ways to take profits out of a company. You can borrow from and rent from yourself. You can buy services from other parts of the group. I don't think it will matter much in the somewhat longer run.

– If you really want to get at this, which they seem to want to, then you have to prohibit dividend-paying companies …

Lars Pålsson Syll, economist and professor at Malmö University, is also critical of the talk of percentages in the profit inquiry.

– If profits absolutely must be made, they should be required to stay in the companies. Then we should do as in the United States, where foundation legislation imposes the restriction that the profit must remain in the foundation.

– Pragmatically, I don't think Swedish politicians can be made to let go of profits. But let us then demand that the profit stays in the operation, he continues …

Lars Pålsson Syll thinks there was much that was good about how the system looked up to the 1980s, before the entry of the welfare companies.

– We had good public schools and health care. Alongside them there were a few other activities, such as Montessori, which one could apply to run privately. These were often taken over by enthusiasts — educators with a passion for good schooling, not for making money.

Anne-Li Lehnberg/Flamman

Profit-making companies in the health-care and school sectors have been much discussed over the past year. Many people are rightly upset.

Many of those working in schools or the care sector have found it hard to understand the attitude of the Alliance and the Social Democrats to privatisations and profit extraction in the soft welfare sector. For some inscrutable reason they have for many years argued that profits should be allowed in school and care companies. The argument has often been that the form of operation makes no difference. That is not the case. The form of operation, and allowing profit in the welfare sector, certainly does make a difference. And a negative one.

The Alliance and the Social Democrats are certainly far from alone in their dithering. From the Confederation of Swedish Enterprise and the country's editorial writers comes a steady stream of demands for increased control, tougher scrutiny and inspections.

But wait a minute! When the systemic shift in the welfare sector was launched in the 1990s, was not one of the main arguments for the privatisations precisely that we would escape the costs of bureaucratic logic in the form of regulations, controls and follow-ups? Competition – that panacea of market fundamentalism – was supposed to make operations more efficient and raise their quality. Market logic would force out the "bureaucratic" and unwieldy public services, leaving only the good companies that "freedom of choice" had made possible.

And now that the Panglossian privatisation pipe dream has turned out to be a nightmare, the very things we wanted to get rid of – regulations and "bureaucratic" supervision and control – are supposed to be the solution?

One can only shake one's head — and for many reasons!

For if the proposed packages of measures are to be implemented, one wonders what becomes of that efficiency gain. Controls, contract specifications, inspections and the like cost money, so how much of a surplus do the privatisations yield once these costs are also entered into the cost-benefit analysis? And how much is that "freedom of choice" worth when we see, time and again, that it merely results in operations where profit is generated through cost-cutting and lowered quality?

All economic activity is based on, or involves, some form of delegation. One party (the principal, the commissioner) wants another party (the agent, the contractor) to perform a certain task. The basic problem is how the commissioner is to get the contractor to carry out the task in the way the commissioner wishes …

There is an obvious danger in basing remuneration systems on simple objective measures when what we want to remunerate in fact has several complex dimensions, for instance payment per discharged patient, teachers' pay linked to grades, and the like. Municipal services often have this "multi-task" character, and then incentive contracts and commissions often do not work. In such cases "bureaucracies" can be more fit for purpose than markets …

Efficient use of resources can never be a goal in itself. It can, however, be a necessary means for attaining stated goals. Whether the welfare state is to be or not to be is therefore at bottom not only a question of economic efficiency, but also of our conceptions of a dignified life, justice and equal treatment.

Lars Pålsson Syll et al, Vad bör kommunerna göra? (Jönköping University Press, 2002)

So the basic question is not whether tax-funded private companies should be allowed to extract profits, or whether tougher measures in the form of control and inspection are required. The basic question is whether it is the logic of the market and privatisation that should govern our welfare institutions, or whether that should happen through the logic of democracy and politics. The basic question is whether the common welfare sector is to be governed by democracy and politics or by the market.

No one should waver on this question. The Nobel laureate in economics Kenneth Arrow wrote in a classic 1963 paper on the economics of the health-care sector:

Under ideal insurance the patient would actually have no concern with the informational inequality between himself and the physician, since he would only be paying by results anyway, and his utility position would in fact be thoroughly guaranteed. In its absence he wants to have some guarantee that at least the physician is using his knowledge to the best advantage. This leads to the setting up of a relationship of trust and confidence, one which the physician has a social obligation to live up to … The social obligation for best practice is part of the commodity the physician sells, even though it is a part that is not subject to thorough inspection by the buyer.

One consequence of such trust relations is that the physician cannot act, or at least appear to act, as if he is maximizing his income at every moment of time. As a signal to the buyer of his intentions to act as thoroughly in the buyer’s behalf as possible, the physician avoids the obvious stigmata of profit-maximizing … The very word, ‘profit’ is a signal that denies the trust relation.

Kenneth Arrow, ”Uncertainty and the Welfare Economics of Medical Care”, American Economic Review, 53 (5).

The welfare sector is a specific, special, multidimensional sector in which it is consistently difficult to establish market mechanisms, to measure and control quality, and so on. For these reasons there is strong cause to question the entire privatisation strategy as such. Profits do not belong in a tax-funded welfare sector.

Hicks’ misrepresentation of Keynes — the Wicksellian connection

6 September 2016, 12:56 | Published in Economics | 1 comment

Having read my post on Krugman and Hicks’ IS-LM misrepresentation of Keynes’ theory, professor Jan Kregel kindly sent an unpublished article he wrote back in 1984 — The Importance of Choosing a Model: Hicks vs. Keynes on Money, Interest and Prices — in which it is argued that Hicks’ particular presentation of Keynes’ theory and choice of model was ”crucial to its destruction”:

Hicks’ choice of model led to a presentation of Keynes’s theory which implicitly required either 1) the operation of the liquidity trap, or 2) the assumption of unchanging prices, for a horizontal LM curve represented stable prices in conditions of perfectly elastic supply of consumption goods. As already noted, the IS-LM framework makes no reference to any supply conditions, except the supply of savings. Although neither of these positions represented Keynes’s model, nor were they ever accepted by Keynes, despite Hicks’s suggestions to the contrary, they have become the standard representation … Indeed, both Friedman and Tobin accept a positive slope for the LM curve, rejecting a vertical or horizontal curve. But, as Hicks’s article pointed out, in this case there is no difference between Keynes’s method and the ”ordinary method of economic theory” …

Indeed, having shown that Keynes does not differ significantly from the Classics, Hicks proceeds:

”Mathematical elegance would suggest that we ought to have Y and i in all three equations, if the theory is to be really General. Why not have them there like this:

M = L(Y, i),   Ix = f(Y, i),   Ix = S(Y, i)?”

While Hicks was eager to add the i for the money rate of interest to all three relations, his eagerness led him to overlook the fact that his ”generalised” General Theory used two rates of interest, the money rate, i, and the investment rate, or the ”natural rate”, call it r, which determines the slope of the IS curve!

It is interesting to note that this was precisely the distinction that Keynes called attention to when he suggested his own view of the difference between his theory and the Classics. Keynes identifies the classical theory as considering the rate of interest as a non monetary phenomenon: the r given by the horizontal IS curve. But in Wicksell’s theory this would lead to either cumulative inflation or deflation, depending on whether the IS curve lay above or below the horizontal LM curve. Hicks’s innovation may then be seen as making Wicksell’s explosive system into a determinate equilibrium by means of his assumption of a given supply of money which assures a slope to the LM curve.

But, in such a scheme it is the money rate, i, which must adjust in equilibrium to the natural, real or investment rate of interest. This is just the opposite of Keynes’s expression of his own position! (as well as contradicting Keynes’s belief in the endogeneity of the money supply).

Jan Kregel

Added to the arguments put forward in my own blog post, this certainly reaffirms the view that Hicks’ IS-LM model(s) offered very little insight into Keynes’ theory.
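The three relations Hicks wrote down can be made concrete with a small numerical exercise. The linear functional forms and every coefficient below are purely illustrative assumptions — nothing here comes from Hicks or Keynes. The point is only to show the mechanics Kregel describes: once a fixed money supply gives the LM curve a slope, the IS and LM relations jointly pin down income Y and the money rate of interest i.

```python
import numpy as np

# A minimal sketch of Hicks' system M = L(Y, i), Ix = f(Y, i), Ix = S(Y, i),
# with hypothetical linear forms (all numbers invented for illustration):
#   LM (money market):  M = k*Y - h*i
#   IS (goods market):  f(Y, i) = S(Y, i)  reduces to  a - b*i = s*Y
M, k, h = 100.0, 0.5, 20.0   # money supply; income/interest sensitivity of L
a, b, s = 60.0, 10.0, 0.25   # autonomous investment; interest sensitivity; saving rate

# Solve the two linear equations for the equilibrium (Y, i):
#   k*Y - h*i = M      (LM)
#   s*Y + b*i = a      (IS)
A = np.array([[k, -h],
              [s,  b]])
rhs = np.array([M, a])
Y, i = np.linalg.solve(A, rhs)
print(f"Equilibrium income Y = {Y:.2f}, interest rate i = {i:.3f}")
```

With these made-up coefficients the system solves to Y = 220 and i = 0.5; change the assumed money supply M and the LM curve shifts, moving both Y and i — which is precisely the determinacy that, as Kregel notes, Hicks buys at the price of contradicting Keynes's view of money-supply endogeneity.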

Teflon-coated economics

4 September 2016, 09:50 | Published in Statistics & Econometrics | 1 comment

At least since the time of Keynes’s famous critique of Tinbergen’s econometric methods, those of us in the social science community who have been impolite enough to dare question the preferred methods and models applied in quantitative research in general, and in economics more specifically, are as a rule met with disapproval. Although people seem to get very agitated and upset by the critique — just read the commentaries on this blog if you don’t believe me — defenders of ‘received theory’ always say that the critique is ‘nothing new,’ that they have always been ‘well aware’ of the problems, and so on, and so on.

So, for the benefit of all mindless practitioners of econometrics and statistics — who don’t want to be disturbed in their doings — eminent mathematical statistician David Freedman has put together a very practical list of vacuous responses to criticism that can be freely used to save your peace of mind:

We know all that. Nothing is perfect … The assumptions are reasonable. The assumptions don’t matter. The assumptions are conservative. You can’t prove the assumptions are wrong. The biases will cancel. We can model the biases. We’re only doing what everybody else does. Now we use more sophisticated techniques. If we don’t do it, someone else will. What would you do? The decision-maker has to be better off with us than without us … The models aren’t totally useless. You have to do the best you can with the data. You have to make assumptions in order to make progress. You have to give the models the benefit of the doubt. Where’s the harm?

The euro disaster — Wynne Godley was spot on as early as 1992!

3 September 2016, 17:28 | Published in Economics | 1 comment

If there were an economic and monetary union, in which the power to act independently had actually been abolished, ‘co-ordinated’ reflation of the kind which is so urgently needed now could only be undertaken by a federal European government. Without such an institution, EMU would prevent effective action by individual countries and put nothing in its place.

Another important role which any central government must perform is to put a safety net under the livelihood of component regions which are in distress for structural reasons – because of the decline of some industry, say, or because of some economically-adverse demographic change. At present this happens in the natural course of events, without anyone really noticing, because common standards of public provision (for instance, health, education, pensions and rates of unemployment benefit) and a common (it is to be hoped, progressive) burden of taxation are both generally instituted throughout individual realms. As a consequence, if one region suffers an unusual degree of structural decline, the fiscal system automatically generates net transfers in favour of it. In extremis, a region which could produce nothing at all would not starve because it would be in receipt of pensions, unemployment benefit and the incomes of public servants.

What happens if a whole country – a potential ‘region’ in a fully integrated community – suffers a structural setback? So long as it is a sovereign state, it can devalue its currency. It can then trade successfully at full employment provided its people accept the necessary cut in their real incomes. With an economic and monetary union, this recourse is obviously barred, and its prospect is grave indeed unless federal budgeting arrangements are made which fulfil a redistributive role … If a country or region has no power to devalue, and if it is not the beneficiary of a system of fiscal equalisation, then there is nothing to stop it suffering a process of cumulative and terminal decline leading, in the end, to emigration as the only alternative to poverty or starvation … What I find totally baffling is the position of those who are aiming for economic and monetary union without the creation of new political institutions (apart from a new central bank), and who raise their hands in horror at the words ‘federal’ or ‘federalism’. This is the position currently adopted by the Government and by most of those who take part in the public discussion.

Wynne Godley

No man is an island!

2 September 2016, 19:10 | Published in Economics | 1 comment

Given how sweeping the changes wrought by SMD (Sonnenschein-Mantel-Debreu) theory seem to be, it is understandable that some very broad statements about the character of general equilibrium theory were made. Fifteen years after General Competitive Analysis, Arrow (1986) stated that the hypothesis of rationality had few implications at the aggregate level. Kirman (1989) held that general equilibrium theory could not generate falsifiable propositions, given that almost any set of data seemed consistent with the theory. These views are widely shared … General equilibrium theory “poses some arduous challenges” as a “paradigm for organizing and synthesizing economic data” so that “a widely accepted empirical counterpart to general equilibrium theory remains to be developed” (Hansen and Heckman 1996). This seems to be the now-accepted view thirty years after the advent of SMD theory …

S. Abu Turab Rizvi

And so what? Why should we care about Sonnenschein-Mantel-Debreu?

Because Sonnenschein-Mantel-Debreu ultimately explains why New Classical, Real Business Cycles, Dynamic Stochastic General Equilibrium (DSGE) and ”New Keynesian” microfounded macromodels are such bad substitutes for real macroeconomic analysis!

These models try to describe and analyze complex and heterogeneous real economies with a single rational-expectations-robot-imitation-representative-agent. That is, with something that has absolutely nothing to do with reality. And — worse still — something that is not even amenable to the kind of general equilibrium analysis it is thought to give a foundation for, since Hugo Sonnenschein (1972), Rolf Mantel (1976) and Gerard Debreu (1974) unequivocally showed that there are no conditions on individuals that would guarantee either stability or uniqueness of the equilibrium solution.

In case you think this verdict is only a heterodox idiosyncrasy, here’s what one of the world’s leading microeconomists — Alan Kirman — writes in his thought-provoking paper The intrinsic limits of modern economic theory:

If one maintains the fundamentally individualistic approach to constructing economic models no amount of attention to the walls will prevent the citadel from becoming empty. Empty in the sense that one cannot expect it to house the elements of a scientific theory, one capable of producing empirically falsifiable propositions …

Starting from ‘badly behaved’ individuals, we arrive at a situation in which not only is aggregate demand a nice function but, by a result of Debreu, equilibrium will be ‘locally unique’. Whilst this means that at least there is some hope for local stability, the real question is, can we hope to proceed and obtain global uniqueness and stability?

The unfortunate answer is a categorical no! [The results of Sonnenschein (1972), Debreu (1974), Mantel (1976) and Mas-Colell (1985)] show clearly why any hope for uniqueness or stability must be unfounded … There is no hope that making the distribution of preferences or income ‘not too dispersed’ or ‘single peaked’ will help us to avoid the fundamental problem …

The problem seems to be embodied in what is an essential feature of a centuries-long tradition in economics, that of treating individuals as acting independently of each other …

To argue in this way suggests … that once the appropriate signals are given, individuals behave in isolation and the result of their behaviour may simply be added together …

The idea that we should start at the level of the isolated individual is one which we may well have to abandon … we should be honest from the outset and assert simply that by assumption we postulate that each sector of the economy behaves as one individual and not claim any spurious microjustification …

Economists therefore should not continue to make strong assertions about this behaviour based on so-called general equilibrium models which are, in reality, no more than special examples with no basis in economic theory as it stands.

Getting around Sonnenschein-Mantel-Debreu using representative agents may be — from a purely formalistic point of view — very expedient. But — from a scientific point of view — relevant and realistic? No way! Although dressed up as a representative agent, the emperor is still naked.

Treating people as if they live and act independently of each other takes us nowhere in understanding and explaining society.

Sacrificing content for the sake of mathematical tractability

2 September 2016, 17:50 | Published in Economics | Comments closed for Sacrificing content for the sake of mathematical tractability

A commodity is a primitive concept (Debreu 1991), and the model comprises a finite set of classes of commodities. The number of distinguishable commodities (Debreu 1959) or commodity labels (Koopmans and Bausch 1959) is a natural number. The Arrow-Debreu model continues by defining these commodities as goods that are physically determined …

But after asserting that commodities are physically determined, in a number of striking passages Debreu (1959: 30) states that their quantities can be expressed as real numbers. This of course poses a problem, because it means that irrational numbers can be used to express quantities of physical objects, even indivisible things …

Even the use of rational numbers to denote quantities of physical objects implies assuming that these physical objects are perfectly divisible. This, in turn, means that no matter how far we divide the physical objects, we still obtain objects with the same physical properties. Although this may appeal to our intuition when it comes to milk or flour, it is completely devoid of sense when it comes to physical objects that come in discrete units. Who among us owns exactly 1.379 cars, or gets 2.408 haircuts? The use of real numbers, including irrational numbers, implies that consumers can somehow specify quantities of goods that are not even fractions. This diverges even farther from experience and common sense. After all, nobody goes to the local Wal-Mart to purchase √2 vacuum cleaners, or π PCs.

Alejandro Nadal
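Nadal's divisibility point is easy to make concrete. The sketch below uses a hypothetical Cobb-Douglas consumer — every number and functional form is invented for illustration, not taken from the text. The standard real-valued demand formula prescribes 1.75 units of a good that, like cars or vacuum cleaners, only comes in whole units; once indivisibility is respected, the optimum has to be found by enumerating integer bundles instead of by calculus:

```python
import math

# Hypothetical consumer with utility u(x, y) = sqrt(x * y), budget m
# and prices px, py — all numbers are illustrative assumptions.
m, px, py = 7.0, 2.0, 1.0

# Textbook Cobb-Douglas demand spends half the budget on each good,
# so x* = 0.5 * m / px — a real number: fine for flour, meaningless
# for goods that come in discrete units.
x_continuous = 0.5 * m / px            # 1.75 "cars"

# If x must be a whole number, enumerate the feasible integer
# quantities and keep the best bundle.
u_best, x_best = max(
    (math.sqrt(x * (m - px * x) / py), x)
    for x in range(int(m // px) + 1)
)
print(x_continuous, x_best)
```

Here the integer-constrained optimum is x = 2, and in general it cannot simply be read off from the continuous solution — which is exactly why treating all commodities as perfectly divisible is a substantive assumption, not a harmless convenience.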
