Stockholmsmelodi (personal)

30 Jul, 2017 at 12:44 | Posted in Varia | Comments Off on Stockholmsmelodi (personal)

 

With all due respect to Evert and Sven-Bertil, for me it is Totte's version that counts!

The Venice of the North (personal)

30 Jul, 2017 at 09:49 | Posted in Varia | 1 Comment

For the third time in a year yours truly will make a guest appearance in Hamburg — The Venice of the North. Regular blogging will be resumed next weekend. Tschüss!

Ways in which economists overbid their cards

29 Jul, 2017 at 17:58 | Posted in Theory of Science & Methodology | 1 Comment

 

Why should we care about Sonnenschein-Mantel-Debreu?

26 Jul, 2017 at 10:47 | Posted in Economics | 7 Comments

Along with the Arrow-Debreu existence theorem and some results on regular economies, SMD (Sonnenschein-Mantel-Debreu) theory fills in many of the gaps we might have in our understanding of general equilibrium theory …

It is also a deeply negative result. SMD theory means that assumptions guaranteeing good behavior at the microeconomic level do not carry over to the aggregate level or to qualitative features of the equilibrium. It has been difficult to make progress on the elaborations of general equilibrium theory that were put forth in Arrow and Hahn 1971 …

Given how sweeping the changes wrought by SMD theory seem to be, it is understandable that some very broad statements about the character of general equilibrium theory were made. Fifteen years after General Competitive Analysis, Arrow (1986) stated that the hypothesis of rationality had few implications at the aggregate level. Kirman (1989) held that general equilibrium theory could not generate falsifiable propositions, given that almost any set of data seemed consistent with the theory. These views are widely shared. Bliss (1993, 227) wrote that the “near emptiness of general equilibrium theory is a theorem of the theory.” Mas-Colell, Michael Whinston, and Jerry Green (1995) titled a section of their graduate microeconomics textbook “Anything Goes: The Sonnenschein-Mantel-Debreu Theorem.” There was a realization of a similar gap in the foundations of empirical economics. General equilibrium theory “poses some arduous challenges” as a “paradigm for organizing and synthesizing economic data” so that “a widely accepted empirical counterpart to general equilibrium theory remains to be developed” (Hansen and Heckman 1996). This seems to be the now-accepted view thirty years after the advent of SMD theory …

S. Abu Turab Rizvi

And so what? Why should we care about Sonnenschein-Mantel-Debreu?

Because Sonnenschein-Mantel-Debreu ultimately explains why New Classical, Real Business Cycle, Dynamic Stochastic General Equilibrium (DSGE) and “New Keynesian” microfounded macromodels are such bad substitutes for real macroeconomic analysis!

These models try to describe and analyze complex and heterogeneous real economies with a single rational-expectations-robot-imitation-representative-agent. That is, with something that has absolutely nothing to do with reality. And, worse still, with something that is not even amenable to the kind of general equilibrium analysis it is thought to give a foundation for, since Hugo Sonnenschein (1972), Rolf Mantel (1976) and Gerard Debreu (1974) unequivocally showed that there exist no conditions under which assumptions on individuals would guarantee either stability or uniqueness of the equilibrium solution.

Opting for cloned representative agents that are all identical is of course not a real solution to the fallacy of composition that the Sonnenschein-Mantel-Debreu theorem points to. Representative agent models are — as I have argued at length here — rather an evasion whereby issues of distribution, coordination, heterogeneity — everything that really defines macroeconomics — are swept under the rug.

Of course, most macroeconomists know that using a representative agent is a flagrantly illegitimate way of ignoring real aggregation issues. They keep on with their business nevertheless, just because it significantly simplifies what they are doing. It is more than a little reminiscent of the drunkard who has lost his keys in some dark place and deliberately chooses to look for them under a neighbouring street light, just because it is easier to see there …

On quantity and quality in higher education

25 Jul, 2017 at 10:33 | Posted in Education & School | Comments Off on On quantity and quality in higher education

Whoever is admitted to higher education should also graduate with a degree, the government believes. According to Helene Hellmark Knutsson (S), minister for higher education, universities and university colleges must “make sure that once you have been admitted to your programme, and have your qualifications, you also get the support you need to complete your studies”.

That sounds rather too simple.

The Higher Education Act's current wording that institutions shall “actively promote and broaden recruitment” is therefore to be changed to “actively promote broad participation in education”, according to the proposal circulated for consultation this week.

On the one hand, Hellmark Knutsson presents the proposal as an important step towards reducing social bias in recruitment. On the other hand, the consultation document describes the change almost as a formality, an adjustment of the letter of the law to how universities and university colleges already work. Nor does the government give the institutions any additional financial resources. So which is it?

Social bias in recruitment to higher education has proved hard to remedy. And of course it is important that everyone who wants to study, and has what it takes to cope with the studies, also gets the chance. Regardless of background.

Universities and university colleges must provide support adapted to students' differing preconditions, within reasonable limits. But it is not self-evident that whoever is admitted also has what is actually required. It is a long time since top grades were needed to get into university. In many cases it is enough to have scraped through upper secondary school.

To pretend that there is no trade-off between quantity and quality is not serious. University teachers have long expressed concern that students are poorly prepared, have troubling gaps in their knowledge, and have difficulty expressing themselves in writing.

Sydsvenskan

Swedish universities and university colleges struggle with many problems today. Two of the more acute are how to handle a situation of shrinking finances and the fact that ever more students are poorly prepared for university studies.

Why has it come to this? Yours truly has repeatedly been approached by the media on these questions, and has then, beyond ‘the usual suspects,’ also tried to raise an issue that is rarely aired in the debate, for fear of not being ‘politically correct.’

Over the past fifty years we have seen an absolute explosion of new student groups going on to university studies. In one way this is clearly gratifying. Today we have as many doctoral students in our educational system as we had upper secondary school pupils in the 1950s. But this educational expansion has unfortunately largely come at the price of diminished possibilities for students to live up to the competence demands of higher education. Many programmes have given in and lowered their requirements.

Unfortunately, the students we receive at universities and university colleges are, on the whole, ever less well equipped for their studies. The restructuring of schools in the form of decentralisation, deregulation, and management by objectives has, contrary to political promises, not delivered. In step with the post-secondary educational expansion, a corresponding contraction of knowledge has taken place among large groups of students. The school policy that has led to this situation hits hardest those it claims to protect: those with little or no ‘cultural capital’ in their baggage from home.

Against this background it is actually remarkable that what the educational explosion itself may lead to has not been problematised to a greater extent.

Since our universities fifty years ago educated only a fraction of the population, it is no bold guess, assuming that ‘ability’ in a population is at least approximately normally distributed, that the lion's share of those students lay, ability-wise, to the right of the midpoint of the normal distribution curve. If today we admit five times as many students to our universities and university colleges, we can hardly, under the same assumption, count on an equally large share of them lying to the right of the midpoint of the curve. Reasonably, this should, ceteris paribus, mean that as the proportion of the population going on to higher education increases, so do the difficulties for many of these students in meeting traditionally high academic standards.
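The back-of-the-envelope argument can be sketched numerically with the standard library's `NormalDist`. This is only an illustrative sketch, under the idealised assumptions that ‘ability’ is standard-normally distributed and that admission simply selects the top fraction p of the distribution:

```python
from statistics import NormalDist

nd = NormalDist()  # standardised 'ability': mean 0, standard deviation 1

def mean_ability_of_admitted(p):
    """Average ability of the admitted cohort, assuming admission
    simply selects the top fraction p of the distribution."""
    z = nd.inv_cdf(1 - p)        # admission cut-off
    return nd.pdf(z) / p         # E[X | X > z] = pdf(z) / P(X > z)

for p in (0.05, 0.10, 0.25, 0.50):
    print(f"intake {p:.0%} of the population -> "
          f"average ability {mean_ability_of_admitted(p):+.2f} sd")
```

Under these assumptions the average ability of the admitted cohort falls from roughly +2.1 standard deviations at a 5 per cent intake to roughly +0.8 at a 50 per cent intake, which is the point of the paragraph above.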

If so, the state should here have yet another strong reason to increase the resources going to higher education, instead of, as today, running programmes on starvation rations with few teacher-led lectures in record-large student groups. With new categories of students, recruited to an ever greater extent from homes unaccustomed to study, it is hard to see how we are to solve the dilemma of higher demands on ever weaker-performing students within even tighter resource limits.

The conundrum of unknown unknowns

24 Jul, 2017 at 17:18 | Posted in Economics | 2 Comments

Short-term weather forecasting is possible because most of the factors that determine tomorrow’s weather are, in a sense, already there … But when you look further ahead you encounter the intractable problem that, in non-linear systems, small changes in initial conditions can lead to cumulatively larger and larger changes in outcomes over time. In these circumstances imperfect knowledge may be no more useful than no knowledge at all.

Much the same is true in economics and business. What gross domestic product will be tomorrow is, like tomorrow’s rain or the 1987 hurricane, more or less already there: tomorrow’s output is already in production, tomorrow’s sales are already on the shelves, tomorrow’s business appointments already made. Big data will help us analyse this. We will know more accurately and more quickly what GDP is, we will be more successful in predicting output in the next quarter, and our estimates will be subject to fewer revisions …

Big data can help us understand the past and the present but it can help us understand the future only to the extent that the future is, in some relevant way, contained in the present. That requires a constancy of underlying structure that is true of some physical processes but can never be true of a world that contains Hitler and Napoleon, Henry Ford and Steve Jobs; a world in which important decisions or discoveries are made by processes that are inherently unpredictable and not susceptible to quantitative description.

John Kay

The central problem with the present ‘machine learning’ and ‘big data’ hype is that so many — falsely — think that they can get away with analysing real world phenomena without any (commitment to) theory. But — data never speaks for itself. Without a prior statistical set-up there actually are no data at all to process. And — using a machine learning algorithm will only produce what you are looking for.

Theory matters.

When ignorance is bliss

22 Jul, 2017 at 22:49 | Posted in Economics | 3 Comments

The production function has been a powerful instrument of miseducation.
The student of economic theory is taught to write Q = f(L, K) where L is a quantity of labor, K a quantity of capital and Q a rate of output of commodities. He is instructed to assume all workers alike, and to measure L in man-hours of labor; he is told something about the index-number problem in choosing a unit of output; and then he is hurried on to the next question, in the hope that he will forget to ask in what units K is measured. Before he ever does ask, he has become a professor, and so sloppy habits of thought are handed on from one generation to the next.

Joan Robinson The Production Function and the Theory of Capital (1953)

The ultimate insider

21 Jul, 2017 at 20:27 | Posted in Varia | Comments Off on The ultimate insider

 

One of my absolute favourite movies. Great, true, story. Marvelous actors — Russell Crowe, Al Pacino, Christopher Plummer. Fabulous music by e.g. Lisa Gerrard and Jan Garbarek.

What so many critiques of economics get right

21 Jul, 2017 at 12:39 | Posted in Economics | 2 Comments

Yours truly had a post up the other day on John Rapley’s Twilight of the Money Gods.

In the main I think Rapley is right in his attack on contemporary economics and its ‘priesthood,’ although he often seems to forget that there is (yes, it's true) more than one approach in economics, and that his critique mainly pertains to mainstream neoclassical economics.

Noah Smith, however, is not too happy about the book:

There are certainly some grains of truth in this standard appraisal. I’ve certainly lobbed my fair share of criticism at the econ profession over the years. But the problem with critiques like Rapley’s is that they offer no real way forward for the discipline … Simply calling for humility and methodological diversity accomplishes little.

Instead, pundits should focus on what is going right in the economics discipline — because there are some very good things happening.

First, economists have developed some theories that really work. A good scientific theory makes testable predictions that apply to situations other than those that motivated the creation of the theory. Slowly, econ is building up a repertoire of these gems. One of them is auction theory … Another example is matching theory, which has made it a lot easier to get an organ transplant …

Second, economics is becoming a lot more empirical, focusing more on examining the data than on constructing yet more theories.

Noah Smith maintains that new imaginative empirical methods — such as natural experiments, field experiments, lab experiments, RCTs — help us to answer questions concerning the validity of economic theories and models.

Yours truly begs to differ. Although one of course has to agree with Noah's view that discounting empirical evidence is not the right way to settle economic issues, when looked at carefully there are in fact few real reasons to share his optimism about this so-called ‘empirical revolution’ in economics.

Field studies and experiments face the same basic problem as theoretical models: they are built on rather artificial conditions and face a trade-off between internal and external validity. The more artificial the conditions, the greater the internal validity, but also the less the external validity. The more we rig experiments/field studies/models to avoid ‘confounding factors,’ the less the conditions are reminiscent of the real ‘target system.’ One could of course discuss field studies vs. experiments vs. theoretical models in terms of realism, but the nodal issue is not that; it is basically about how economists, using different isolation strategies in different ‘nomological machines,’ attempt to learn about causal relationships. I have strong doubts about the generalizability of all three research strategies, because the probability is high that causal mechanisms differ between contexts, and lack of homogeneity and invariance does not give us warranted export licenses to ‘real’ societies or economies.

If we see experiments or field studies as theory tests or models that ultimately aspire to say something about the real ‘target system,’ then the problem of external validity is central (and was for a long time also a key reason why behavioural economists had trouble getting their research results published).

Assume that you have examined how the work performance of Chinese workers A is affected by B (‘treatment’). How can we extrapolate/generalize to new samples outside the original population (e.g. to the US)? How do we know that any replication attempt ‘succeeds’? How do we know when these replicated experimental results can be said to justify inferences made in samples from the original population? If, for example, P(A|B) is the conditional density function for the original sample, and we are interested in making an extrapolative prediction of E[P(A|B)], how can we know that the new sample's density function is identical with the original? Unless we can give some really good argument for this being the case, inferences built on P(A|B) do not really say anything about the target system's P'(A|B).
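The worry can be made concrete with a small simulation. Everything in this sketch is invented for illustration: the ‘treatment’ B is assumed to benefit only skilled workers, so an effect cleanly estimated by randomization in a mostly-skilled sample badly overstates the effect in a target population with a different skill mix:

```python
import random

random.seed(1)

def outcome(treated, skilled):
    # hypothetical data-generating process: the 'treatment' B only
    # helps skilled workers, so its effect is context-dependent
    return (2.0 if treated and skilled else 0.0) + random.gauss(0, 0.5)

def average_treatment_effect(p_skilled, n=20_000):
    """Difference in mean outcomes between randomly assigned treatment
    and control groups, in a population where a share p_skilled of
    the workers is skilled."""
    treated, control = [], []
    for _ in range(n):
        skilled = random.random() < p_skilled
        group = treated if random.random() < 0.5 else control
        group.append(outcome(group is treated, skilled))
    return sum(treated) / len(treated) - sum(control) / len(control)

ate_original = average_treatment_effect(p_skilled=0.8)  # say, the original sample
ate_target = average_treatment_effect(p_skilled=0.2)    # say, the target population
print(f"estimated effect in the original sample: {ate_original:.2f}")
print(f"estimated effect in the target population: {ate_target:.2f}")
```

Both estimates are internally valid; they simply answer questions about two different populations, which is exactly the P(A|B) versus P'(A|B) problem.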

As I see it, this is the heart of the matter. External validity and generalization are founded on the assumption that we can make inferences based on P(A|B) that are exportable to other populations for which P'(A|B) applies. Sure, if one can convincingly show that P and P' are similar enough, the problems are perhaps surmountable. But arbitrarily just introducing functional specification restrictions of the invariance and homogeneity type is, at least for an epistemological realist, far from satisfactory. And unfortunately this is often exactly what I see when I look at mainstream neoclassical economists' models/experiments/field studies.

By this I do not mean to say that empirical methods per se are so problematic that they can never be used. On the contrary, I am basically — though not without reservations — in favour of the increased use of experiments and field studies within economics. Not least as an alternative to completely barren ‘bridge-less’ axiomatic-deductive theory models. My criticism is more about aspiration levels and what we believe that we can achieve with our mediational epistemological tools and methods in the social sciences.

Many ‘experimentalists’ claim that it is easy to replicate experiments under different conditions, and therefore a fortiori easy to test the robustness of experimental results. But is it really that easy? If, in the example given above, we run a test and find that our predictions were not correct, what can we conclude? That B ‘works’ in China but not in the US? That B ‘works’ in a backward agrarian society, but not in a post-modern service society? That B ‘worked’ in the field study conducted in 2008 but not in 2016? Population selection is almost never simple. Had the problem of external validity only been about inference from sample to population, this would be no critical problem. But the really interesting inferences are those we try to make from specific labs/experiments/fields to the specific real-world situations/institutions/structures that we are interested in understanding or (causally) explaining. And then the population problem is more difficult to tackle.

The increasing use of natural and quasi-natural experiments in economics during the last couple of decades has led not only Noah Smith but several prominent economists to triumphantly declare it a major step on a path toward empirics, where instead of being a deductive philosophy, economics is now increasingly becoming an inductive science.

In randomized trials the researchers try to find out the causal effects that different variables of interest may have by changing circumstances randomly, a procedure somewhat (‘on average’) equivalent to the usual ceteris paribus assumption.

Besides the fact that ‘on average’ is not always ‘good enough,’ it amounts to nothing but hand waving to simpliciter assume, without argumentation, that it is tenable to treat social agents and relations as homogeneous and interchangeable entities.

Randomization is used basically to allow the econometrician to treat the population as consisting of interchangeable and homogeneous groups (‘treatment’ and ‘control’). The regression models one arrives at by using randomized trials tell us the average effect that variations in variable X have on the outcome variable Y, without having to explicitly control for the effects of other explanatory variables R, S, T, etc. Everything is assumed to be essentially equal except the values taken by variable X.
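A toy simulation can illustrate why ‘on average’ may not be good enough. In this sketch (all numbers are assumed for illustration) the true treatment effect is zero and the outcome is driven entirely by a confounder; randomization balances the confounder only in expectation, so any single small trial can still produce a large spurious estimate:

```python
import random

random.seed(7)

def one_small_trial(n=20):
    """One randomized trial in which the true treatment effect is zero
    and the outcome is driven entirely by a confounder.  Randomization
    balances the confounder only in expectation, not in each draw."""
    treated, control = [], []
    for _ in range(n):
        y = 3.0 * random.gauss(0, 1)   # outcome = confounder only
        (treated if random.random() < 0.5 else control).append(y)
    if not treated or not control:     # guard against a degenerate draw
        return 0.0
    return sum(treated) / len(treated) - sum(control) / len(control)

estimates = [one_small_trial() for _ in range(2_000)]
mean_estimate = sum(estimates) / len(estimates)
worst_estimate = max(abs(e) for e in estimates)
print(f"average estimate over 2000 trials: {mean_estimate:+.2f}")
print(f"largest single-trial estimate:     {worst_estimate:.2f}")
```

Averaged over many hypothetical replications the bias washes out, but any one researcher only ever runs one of these trials.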

Limiting model assumptions in economic science always have to be closely examined. If the mechanisms or causes that we isolate and handle in our models are to be stable, in the sense that they do not change when we ‘export’ them to our ‘target systems,’ we have to show that they hold not only under ceteris paribus conditions, for otherwise they are a fortiori only of limited value for understanding, explaining or predicting real economic systems.

Real world social systems are not governed by stable causal mechanisms or capacities. The kinds of ‘laws’ and relations that econometrics has established are laws and relations about entities in models that presuppose causal mechanisms being atomistic and additive. When causal mechanisms operate in real world social target systems they only do it in ever-changing and unstable combinations where the whole is more than a mechanical sum of parts. If economic regularities obtain they do it (as a rule) only because we engineered them for that purpose. Outside man-made ‘nomological machines’ they are rare, or even non-existent.

I also think that most ‘randomistas’ really underestimate the heterogeneity problem. It does not just turn up as an external validity problem when trying to ‘export’ regression results to different times or different target populations. It is also often an internal problem to the millions of regression estimates that economists produce every year.

Like econometrics, randomization promises more than it can deliver, basically because it requires assumptions that cannot in practice be maintained.

Like econometrics, randomization is basically a deductive method. Given the assumptions (such as manipulability, transitivity, separability, additivity, linearity, etc.) these methods deliver deductive inferences. The problem, of course, is that we will never completely know when the assumptions are right. And although randomization may contribute to controlling for confounding, it does not guarantee it, since genuine randomness presupposes infinite experimentation and we know all real experimentation is finite. And even if randomization may help to establish average causal effects, it says nothing of individual effects unless homogeneity is added to the list of assumptions. Real target systems are seldom epistemically isomorphic to our axiomatic-deductive models/systems, and even if they were, we still have to argue for the external validity of the conclusions reached from within these epistemically convenient models/systems. Causal evidence generated by randomization procedures may be valid in ‘closed’ models, but what we usually are interested in is causal evidence in the real target system we happen to live in.
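The point about average versus individual effects takes only a couple of lines to show. In this hypothetical population (the effect sizes are invented for illustration), an ideal RCT would recover a clearly positive average treatment effect even though a large minority of individuals is harmed:

```python
import random

random.seed(3)

# hypothetical heterogeneous population: the treatment raises the
# outcome by 2.0 for 60% of individuals and lowers it by 1.0 for
# the remaining 40% (all numbers invented for illustration)
individual_effects = [2.0 if random.random() < 0.6 else -1.0
                      for _ in range(10_000)]

ate = sum(individual_effects) / len(individual_effects)
share_harmed = sum(e < 0 for e in individual_effects) / len(individual_effects)

print(f"average treatment effect: {ate:+.2f}")  # what an ideal RCT recovers
print(f"share of individuals harmed: {share_harmed:.0%}")
```

Without an added homogeneity assumption, the average effect simply does not tell us what the treatment does to any given individual.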

When does a conclusion established in population X hold for target population Y? Only under very restrictive conditions!

‘Ideally controlled experiments’ tell us with certainty what causes what effects, but only given the right ‘closures.’ Making appropriate extrapolations from (ideal, accidental, natural or quasi) experiments to different settings, populations or target systems is not easy. ‘It works there’ is no evidence for ‘it will work here.’ Causes deduced in an experimental setting still have to show that they come with an export warrant to the target population/system. The causal background assumptions made have to be justified, and without licenses to export, the value of ‘rigorous’ and ‘precise’ methods, and of ‘on-average knowledge,’ is despairingly small.

So, no, I find it hard to share Noah Smith's and others' enthusiasm and optimism about the value of (quasi)natural experiments and all the statistical-econometric machinery that comes with them. Guess I'm still waiting for the export warrant …

I would, contrary to Noah Smith’s optimism, argue that although different ’empirical’ approaches have been — more or less — integrated into mainstream economics, there is still a long way to go before economics has become a truly empirical science.

Taking assumptions like utility maximization or market equilibrium as a matter of course leads to the ‘standing presumption in economics that, if an empirical statement is deduced from standard assumptions then that statement is reliable’ …

The ongoing importance of these assumptions is especially evident in those areas of economic research, where empirical results are challenging standard views on economic behaviour like experimental economics or behavioural finance … From the perspective of Model-Platonism, these research-areas are still framed by the ‘superior insights’ associated with early 20th century concepts, essentially because almost all of their results are framed in terms of rational individuals, who engage in optimizing behaviour and, thereby, attain equilibrium. For instance, the attitude to explain cooperation or fair behaviour in experiments by assuming an ‘inequality aversion’ integrated in (a fraction of) the subjects’ preferences is strictly in accordance with the assumption of rational individuals, a feature which the authors are keen to report …

So, while the mere emergence of research areas like experimental economics is sometimes deemed a clear sign for the advent of a new era … a closer look at these fields allows us to illustrate the enduring relevance of the Model-Platonism-topos and, thereby, shows the pervasion of these fields with a traditional neoclassical style of thought.

Jakob Kapeller

Ricardo’s trade paradigm — a formerly true theory

21 Jul, 2017 at 10:59 | Posted in Economics | 6 Comments

Two hundred years ago, on 19 April 1817, David Ricardo's Principles was published. In it he presented a theory meant to explain why countries trade and, based on the concept of opportunity cost, why the pattern of exports and imports is governed by countries exporting goods in which they have a comparative advantage and importing goods in which they have a comparative disadvantage.
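Ricardo's argument can be reproduced in a few lines using the labour-cost numbers from his own wine-and-cloth example in the chapter on foreign trade; the code itself is, of course, only an illustrative sketch:

```python
# Ricardo's own numbers (labour hours needed per unit of output,
# Principles, ch. 7 'On Foreign Trade')
hours = {
    "Portugal": {"wine": 80, "cloth": 90},
    "England": {"wine": 120, "cloth": 100},
}

def opportunity_cost(country, good, other):
    """Units of `other` forgone per unit of `good` produced."""
    return hours[country][good] / hours[country][other]

for country in hours:
    oc = opportunity_cost(country, "wine", "cloth")
    print(f"{country}: producing 1 wine costs {oc:.2f} cloth")

# each country has a comparative advantage in the good it can produce
# at the lower opportunity cost, even though Portugal here has an
# absolute advantage in both goods
wine_exporter = min(hours, key=lambda c: opportunity_cost(c, "wine", "cloth"))
print(f"comparative advantage in wine: {wine_exporter}")
```

Note that Portugal needs fewer hours than England for both goods, yet the theory still predicts that both countries gain from Portugal exporting wine and England cloth, which is precisely the distinction between absolute and comparative advantage discussed below.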

Although a great accomplishment per se, Ricardo's theory of comparative advantage didn't explain why the comparative advantage was the way it was. At the beginning of the 20th century, two Swedish economists, Eli Heckscher and Bertil Ohlin, presented a theory/model/theorem according to which comparative advantages arise from differences in factor endowments between countries. Countries have a comparative advantage in producing goods that intensively use the production factors that are most abundant in them. Countries would mostly export goods that use their abundant factors of production and import goods that mostly use factors of production that are scarce.

The Heckscher-Ohlin theorem, like the elaborations of it by e.g. Vanek, Stolper and Samuelson, builds on a series of restrictive and unrealistic assumptions. The most critically important, besides the standard market-clearing equilibrium assumptions, are:

(1) Countries use identical production technologies.

(2) Production takes place with a constant returns to scale technology.

(3) Within countries the factor substitutability is more or less infinite.

(4) Factor-prices are equalised (the Stolper-Samuelson extension of the theorem).

These assumptions are, as almost all empirical testing of the theorem has shown, totally unrealistic. That is, they are empirically false. 

That said, one could indeed wonder why on earth anyone should be interested in applying this theorem to real-world situations. Like so many other mainstream mathematical models taught to economics students today, this theorem has very little to do with the real world.

Using false assumptions, mainstream modelers can derive whatever conclusions they want. Wanting to show that ‘free trade is great,’ just assume e.g. that ‘all economists from Chicago are right’ and that ‘all economists from Chicago consider free trade to be great.’ The conclusion follows by deduction, but it is of course factually wrong. Models and theories built on that kind of reasoning are nothing but a pointless waste of time.

What mainstream economics took over from Ricardo was not only the theory of comparative advantage. The whole deductive-axiomatic approach to economics that is still at the core of mainstream methodology was taken over from Ricardo. Nothing has been more detrimental to the development of economics than going down that barren path.

Ricardo shunted the car of economic science on to the wrong track. Mainstream economics is still on that track. It’s high time to get on the right track and make economics a realist and relevant science.

This having been said, I think the most powerful argument against the Ricardian paradigm is that what counts today is not comparative advantage, but absolute advantage.

What has changed since Ricardo's days is that the assumption of internationally immobile factors of production has become totally untenable in our globalised world. When our modern corporations maximize their profits they do so by moving capital and technologies to where it is cheapest to produce. So we are actually in a situation today where absolute, not comparative, advantage rules the roost when it comes to free trade.

And in that world, what is good for corporations is not necessarily good for nations.

The balanced budget paradox

18 Jul, 2017 at 18:44 | Posted in Economics | 10 Comments

The balanced budget paradox is probably one of the most devastating phenomena haunting our economies. The harder politicians, usually on the advice of establishment economists, try to achieve balanced budgets for the public sector, the less likely they are to succeed in their endeavour. And the more the citizens have to pay for the concomitant austerity policies these wrong-headed politicians and economists recommend as “the sole solution.”

One of the most effective ways of clearing up this most serious of all semantic confusions is to point out that private debt differs from national debt in being external. It is owed by one person to others. That is what makes it burdensome. Because it is interpersonal the proper analogy is not to national debt but to international debt…. But this does not hold for national debt which is owed by the nation to citizens of the same nation. There is no external creditor. We owe it to ourselves.

A variant of the false analogy is the declaration that national debt puts an unfair burden on our children, who are thereby made to pay for our extravagances. Very few economists need to be reminded that if our children or grandchildren repay some of the national debt these payments will be made to our children or grandchildren and to nobody else. Taking them altogether they will no more be impoverished by making the repayments than they will be enriched by receiving them.

Abba Lerner The Burden of the National Debt (1948)

Few issues in politics and economics are nowadays more discussed – and less understood – than public debt. Many raise their voices to urge reducing the debt, but few explain why or in what way reducing the debt would be conducive to a better economy or a fairer society. And there are no limits to all the – especially macroeconomic – calamities and evils a large public debt is supposed to result in: unemployment, inflation, higher interest rates, lower productivity growth, increased burdens for subsequent generations, etc.

People usually care a lot about public-sector budget deficits and debts, and are as a rule worried and negative. Drawing analogies from their own household's economy, they see debt as a sign of an imminent risk of default and hence a source of reprobation. But although no one can doubt the political and economic significance of public debt, there is no unanimity whatsoever among economists as to whether debt matters, and if so, why and in what way. Still less does anyone know what the “optimal” size of public debt is.

Through history public debts have gone up and down, often expanding in periods of war or large changes in basic infrastructure and technologies, and then going down in periods when things have settled down.

The pros and cons of public debt have been put forward for as long as the phenomenon itself has existed, but it has, notwithstanding that, not been possible to reach anything close to consensus on the issue — at least not in a long time-horizon perspective. As a rule, one has not even been able to agree on whether public debt is a problem, and if so, when it is or how best to tackle it. Some of the more prominent reasons for this non-consensus are the complexity of the issue, the mingling of vested interests, ideology, psychological fears, the uncertainty in calculating and estimating inter-generational effects, etc., etc.

 

In classical economics — following in the footsteps of David Hume – especially Adam Smith, David Ricardo, and Jean-Baptiste Say put forward views on public debt that were decidedly negative. The good budget was a balanced budget. If government borrowed money to finance its activities, it would only ‘crowd out’ private enterprise and investment. The state was generally considered incapable of paying its debts, and the real burden would therefore essentially fall on the taxpayers, who ultimately had to pay for the irresponsibility of government. The moral character of the argumentation was a salient feature: “either the nation must destroy public credit, or the public credit will destroy the nation” (Hume 1752).

Later on, in the 20th century, economists like John Maynard Keynes, Abba Lerner and Alvin Hansen would hold a more positive view of public debt. Public debt was normally nothing to fear, especially if it was financed within the country itself (but even foreign loans could be beneficial for the economy if invested in the right way). Some members of society would hold bonds and earn interest on them, while others would have to pay the taxes that ultimately paid the interest on the debt. But the debt was not considered a net burden for society as a whole, since the debt cancelled itself out between the two groups. If the state could issue bonds at a low interest rate, unemployment could be reduced without necessarily resulting in strong inflationary pressure. And the inter-generational burden was no real burden, according to this group of economists, since — if used in a suitable way — the debt would, through its effects on investment and employment, actually make future generations net winners. There could, of course, be unwanted negative distributional side effects for future generations, but that was mostly considered a minor problem, since (Lerner 1948) “if our children or grandchildren repay some of the national debt these payments will be made to our children and grandchildren and to nobody else.”

Central to the Keynesian influenced view is the fundamental difference between private and public debt. Conflating the one with the other is an example of the atomistic fallacy, which is basically a variation on Keynes’ savings paradox. If an individual tries to save and cut down on debts, that may be fine and rational, but if everyone tries to do it, the result would be lower aggregate demand and increasing unemployment for the economy as a whole.

An individual always has to pay his debts. But a government can always pay back old debts with new ones, through the issue of new bonds. The state is not like an individual. Public debt is not like private debt. Government debt is essentially a debt to itself — to its own citizens. Interest paid on the debt is paid by the taxpayers on the one hand, but on the other hand, interest on the bonds that finance the debt goes to those who lend out the money.

Abba Lerner’s essay Functional Finance and the Federal Debt set out guiding principles for governments to adopt in their efforts to use economic – especially fiscal – policies to maintain full employment and prosperity in economies struggling with chronic problems of insufficient aggregate demand.

According to Lerner’s Functional Finance principles, the private sector has a tendency not to generate enough demand on its own. Because of this inherent deficiency, modern states tend to have structural, long-lasting problems maintaining full employment, and so the government has to take on the responsibility of making sure that full employment is attained. The main instrument for doing this is open market operations – especially the selling and buying of interest-bearing government bonds.

Although Lerner seems to have had the view that the ideas embedded in Functional Finance were in principle applicable to all kinds of economies, he also recognized the importance of institutional arrangements in shaping its feasibility and practical implementation.

Functional Finance critically depends on nation states being able to tax their citizens and having a currency — and bonds — of their own. As has become transparently clear during the Great Recession, the EMU has not been able to impose those structures, since, as Hayek noted back in 1939, “government by agreement is only possible provided that we do not require the government to act in fields other than those in which we can obtain true agreement.” The monetary institutional structure of the EMU makes it highly unlikely – not to say impossible — that it will ever become a “system” in which Functional Finance is adopted.

To Functional Finance, the choices governments make to finance public deficits — and the concomitant debts — are important, since bond-based financing was considered more expansionary than tax-based financing. According to Lerner, the purpose of public debt is to achieve a rate of interest that results in investments making full employment feasible. In the short run this could result in deficits, but he firmly maintained that there was no reason to assume that applying Functional Finance to maintain full employment implied that the government always had to borrow money and increase the public debt. An application of Functional Finance would have a tendency to balance the budget in the long run, since the guarantee of permanent full employment would make private investment much more attractive, and the greater private investment would, a fortiori, diminish the need for deficit spending.

To both Keynes and Lerner it was evident that the state had the ability to promote full employment and a stable price level – and that it should use its powers to do so. If that meant that it had to take on debt and (more or less temporarily) underbalance its budget – so be it! Public debt is neither good nor bad. It is a means to achieving two over-arching macroeconomic goals – full employment and price stability. A balanced budget or a reduction of public debt is not sacred per se, irrespective of the effects on those macroeconomic goals. If “sound finance”, austerity and balanced budgets mean increased unemployment and destabilized prices, they have to be abandoned.

Against this reasoning, exponents of the thesis of Ricardian equivalence have maintained that whether the public sector finances its expenditures through taxes or by issuing bonds is inconsequential, since bonds must sooner or later be repaid by raising taxes in the future.

Robert Barro (1974) attempted to give the proposition a firm theoretical foundation, arguing that the substitution of a budget deficit for current taxes has no impact on aggregate demand and so budget deficits and taxation have equivalent effects on the economy.

If the public sector runs extra spending through deficits, taxpayers will, according to the hypothesis, anticipate that they will have to pay higher taxes in the future — and will therefore increase their savings and reduce their current consumption to be able to do so — the consequence being that aggregate demand would be no different from what it would be if taxes were raised today.

Ricardian equivalence basically means that financing government expenditures through taxes or debts is equivalent, since debt financing must be repaid with interest, and agents — equipped with rational expectations — would only increase savings in order to be able to pay the higher taxes in the future, thus leaving total expenditures unchanged.
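The bookkeeping behind this claim can be illustrated with a stylized two-period sketch (a deliberately bare-bones illustration with hypothetical numbers, not Barro's actual model): a consumer who discounts at the market interest rate ends up with the same present-value wealth whether the government taxes today, or borrows today and taxes tomorrow to repay principal plus interest.

```python
# Stylized two-period illustration of Ricardian equivalence.
# A consumer earns income y in each of two periods and discounts
# period-2 money at the market interest rate r. The government must
# finance spending g in period 1, either by taxing now or by selling
# bonds now and taxing g*(1+r) in period 2 to repay them with interest.

r = 0.05     # market interest rate (hypothetical)
y = 100.0    # income per period (hypothetical)
g = 10.0     # government spending in period 1 (hypothetical)

# Case 1: tax financing. Disposable income is y - g now, y later.
wealth_tax = (y - g) + y / (1 + r)

# Case 2: bond financing. Disposable income is y now,
# y - g*(1+r) later, when the debt is repaid out of taxes.
wealth_bond = y + (y - g * (1 + r)) / (1 + r)

# Present-value wealth, and hence the feasible consumption path,
# is identical: the timing of the tax does not matter.
print(wealth_tax)
print(wealth_bond)   # equal to wealth_tax (up to floating-point noise)
```

Note that the equivalence hinges on exactly the assumptions critics attack: perfect capital markets (the consumer borrows and lends at the same r as the government) and a repayment horizon the current taxpayer actually cares about.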

The Ricardo-Barro hypothesis, with its view of public debt incurring a burden for future generations, is the dominant view among mainstream economists and politicians today. The rational actors in the model are assumed to know that today’s debts are tomorrow’s taxes. But one of the main problems with this standard neoclassical theory is that it doesn’t fit the facts.

From a more theoretical point of view, one may also strongly criticize the Ricardo-Barro model and its concomitant crowding-out assumption: perfect capital markets do not exist, repayments of public debt can take place far into the future, and it is dubious whether we really care about generations 300 years from now.

At times when economic theories have been in favour of public debt, one gets the feeling that the more or less explicit assumption is that public expenditures are useful and good for the economy, since they work as an important — and often necessary — injection to the economy, creating wealth and employment. At times when economic theories have been against public debt, the basic assumption seems to be that public expenditures are useless, only crowd out private initiatives and have no positive net effect on the economy.

Wolfgang Streeck argues in Buying Time: The Delayed Crisis of Democratic Capitalism (2014) for an interpretation of the more or less steady increase in public debt since the 1970s as a sign of a transformation of the tax state (Schumpeter) into a debt state. In his perspective public debt is both an indicator and a causal factor in the relationship between political and economic systems. The ultimate cause behind the increased public debt is the long-run decline in economic growth, resulting in a doubling of the average public debt in OECD countries over the last 40 years. This has put strong pressures on modern capitalist states, and parallel to this, income inequality has increased in most countries. This is, according to Streeck, one manifestation of a neoliberal revolution – with its emphasis on supply-side politics, austerity policies and financial deregulation — that has taken place and that has rendered democratic-redistributive intervention ineffectual.

Today there seems to be a rather widespread consensus that public debt is acceptable as long as it doesn’t increase too much and too fast. If the public debt-to-GDP ratio becomes higher than X %, the likelihood of a debt crisis and/or lower growth increases.

But in discussing the margins within which public debt is feasible, the focus is solely on the upper limit of indebtedness; very few ask whether there might also be a problem if public debt becomes too low.

The government’s ability to conduct an “optimal” public debt policy may be negatively affected if public debt becomes too small. To guarantee a well-functioning secondary market in bonds, it is essential that the government has access to a functioning market. If turnover and liquidity in the secondary market become too small, increased volatility and uncertainty will in the long run lead to an increase in borrowing costs. Ultimately there is even a risk that market makers would disappear, leaving bond market trading to be operated solely through brokered deals. As a kind of precautionary measure against this eventuality, it may be argued – especially in times of financial turmoil and crisis – that it is necessary to increase government borrowing and debt to ensure – in the longer run – good borrowing preparedness and a sustained (government) bond market.

The failure of successive administrations in most developed countries to embark on any vigorous policy aimed at bringing down unconscionably high levels of unemployment has been due in no small measure to a ‘viewing with alarm’ of the size of the national debts, often alleged to be already excessive, or at least threatening to become so, and by ideologically urged striving toward ‘balanced’ government budgets without any consideration of whether such debts and deficits are or threaten to become excessive in terms of some determinable impact on the real general welfare. If they are examined in the light of their impact on welfare, however, they can usually be shown to be well below their optimum levels, let alone at levels that could have dire consequences.

To view government debts in terms of the ‘functional finance’ concept introduced by Abba Lerner, is to consider their role in the macroeconomic balance of the economy. In simple, bare bones terms, the function of government debts that is significant for the macroeconomic health of an economy is that they provide the assets into which individuals can put whatever accumulated savings they attempt to set aside in excess of what can be wisely invested in privately owned real assets. A debt that is smaller than this will cause the attempted excess savings, by being reflected in a reduced level of consumption outlays, to be lost in reduced real income and increased unemployment.

William Vickrey

Why testing axioms is necessary in economics

15 Jul, 2017 at 13:23 | Posted in Economics | Comments Off on Why testing axioms is necessary in economics

Of course the more immediate target of Davidson in his formulation of the argument in the early 1980s was not Samuelson, but Lucas and Sargent and their rational expectations hypothesis … This was indeed the period when new classical economics was riding at its highest point of prestige, with Lucas and Sargent and their rational expectations assumption apparently sweeping the boards of any sort of Keynesian theories. Curiously, they did not seem to care whether the assumption was actually true, because it was “an axiom,” something that is assumed and cannot be tested …

This matter of “testing axioms” is controversial. Davidson is right that Keynes was partly inspired by Einstein’s Theory of General Relativity that was based on a relaxation of the parallel axiom of Euclid. So, Davidson argued not unreasonably that he would also be inclined to wish to relax any ergodic axiom. However, of course, the rejection of the parallel postulate (or axiom) did come from empirical tests showing that it does not hold in space-time in general due to gravity curving it. So, the empirical testing of axioms is relevant, and the failure of the rational expectations axiom to hold empirically is certainly reasonable grounds for rejecting it.

J. Barkley Rosser Jr

On this Einstein and Keynes are of course absolutely right. Economics — in contradistinction to logic and mathematics — is an empirical science, and empirical testing of ‘axioms’ ought to be self-evidently relevant for such a discipline. For although the economist himself (implicitly) claims that his axiom is universally accepted as true and in no need of proof, that is in no way a justified reason for the rest of us to simpliciter accept the claim.

When applying deductivist thinking to economics, neoclassical economists usually set up “as if” models based on a set of tight axiomatic assumptions from which consistent and precise inferences are made. The beauty of this procedure is of course that if the axiomatic premises are true, the conclusions necessarily follow. The snag is that if the models are to be relevant, we also have to argue that their precision and rigour still holds when they are applied to real-world situations. They often don’t. When addressing real economies, the idealizations and abstractions necessary for the deductivist machinery to work simply don’t hold.

The logic of idealization is a marvellous tool in mathematics and axiomatic-deductivist systems, but a poor guide for real-world systems. As Hans Albert has it on the neoclassical style of thought:

Science progresses through the gradual elimination of errors from a large offering of rivalling ideas, the truth of which no one can know from the outset. The question of which of the many theoretical schemes will finally prove to be especially productive and will be maintained after empirical investigation cannot be decided a priori. Yet to be useful at all, it is necessary that they are initially formulated so as to be subject to the risk of being revealed as errors. Thus one cannot attempt to preserve them from failure at every price. A theory is scientifically relevant first of all because of its possible explanatory power, its performance, which is coupled with its informational content …

Clearly, it is possible to interpret the ‘presuppositions’ of a theoretical system … not as hypotheses, but simply as limitations to the area of application of the system in question. Since a relationship to reality is usually ensured by the language used in economic statements, in this case the impression is generated that a content-laden statement about reality is being made, although the system is fully immunized and thus without content. In my view that is often a source of self-deception in pure economic thought …

Most mainstream economic models are abstract, unrealistic and present mostly non-testable hypotheses. How then are they supposed to tell us anything about the world we live in?

Confronted with the massive empirical failures of their models and theories, mainstream economists often retreat into looking upon their models and theories as some kind of “conceptual exploration,” and give up any hopes/pretenses whatsoever of relating their theories and models to the real world. Instead of trying to bridge the gap between models and the world, one decides to look the other way.

To me this kind of scientific defeatism is equivalent to giving up on our search for understanding the world we live in. It can’t be enough to prove or deduce things in a model world. If theories and models do not directly or indirectly tell us anything of the world we live in – then why should we waste any of our precious time on them?

Are methodological discussions risky?

14 Jul, 2017 at 13:53 | Posted in Economics | 2 Comments

Most mainstream economists are reluctant to have a methodological discussion. They usually think it’s too ‘risky.’

Well, maybe it is. But on the other hand, if we’re not prepared to take that risk, economics can’t progress, as Tony Lawson forcefully argues in his Essays on the Nature and State of Modern Economics:

Twenty common myths and/or fallacies of modern economics

1. The widely observed crisis of the modern economics discipline turns on problems that originate at the level of economic theory and/or policy.

It does not. The basic problems mostly originate at the level of methodology, and in particular with the current emphasis on methods of mathematical modelling. The latter emphasis is an error given the lack of match of the methods in question to the conditions in which they are applied. So long as the critical focus remains only, or even mainly, at the level of substantive economic theory and/or policy matters, then no amount of alternative text books, popular monographs, introductory pocketbooks, journal or magazine articles … or whatever, are going to get at the nub of the problems and so have the wherewithal to help make economics a sufficiently relevant discipline. It is the methods and manner of their use that are the basic problem.

If scientific progress in economics – as Robert Lucas and other latter-day followers of Milton Friedman seem to think – lies in our ability to tell ‘better and better stories’, one would of course expect economics journals to be filled with articles supporting the stories with empirical evidence confirming the predictions. However, I would argue that the journals still show a striking and embarrassing paucity of empirical studies that (try to) substantiate these predictive claims. Equally amazing is how little one has to say about the relationship between models and real-world target systems. It is as though explicit discussion, argumentation and justification on the subject isn’t considered required.

If the ultimate criterion of success of a deductivist system is the extent to which it predicts and coheres with (parts of) reality, modern mainstream economics seems to be a hopeless misallocation of scientific resources. To focus scientific endeavours on proving things in models is a gross misapprehension of what an economic theory ought to be about. Deductivist models and methods disconnected from reality are not relevant to predicting, explaining or understanding real-world economies.

No matter how precise and rigorous the analysis, and no matter how hard one tries to cast the argument in modern mathematical form, it does not push economic science forward one millimeter if it does not stand the acid test of relevance to the target. No matter how clear, precise, rigorous or certain the inferences delivered inside these models are, they do not per se say anything about real-world economies.


Mer naket än så här blir det inte

14 Jul, 2017 at 11:20 | Posted in Varia | Comments Off on Mer naket än så här blir det inte

 

This video completely floored me when I first saw it in 1987.

Naked and self-revealing, straight out of the heart’s darkness.

Peter LeMarc impressed thirty years ago with his naked honesty.

He still does.

Is there an EU housing bubble?

13 Jul, 2017 at 16:14 | Posted in Economics | 5 Comments


Low interest rates are criticized by some economists since they supposedly encourage asset price bubbles. Houses are our most important asset. Are house prices in the EU increasing too fast at this moment?

Not yet in Southern Europe … However … Swedish prices are off the charts. Dutch prices are rapidly increasing (and continued to do so during the first six months of 2017). German prices have, by now, increased by a third (house ownership in Germany is less common than in many other countries, but the increase means that there will, quite soon, be a push to sell houses to renters – who of course have to borrow from the big banks). Yes, there is a northern European bubble. And it is rapidly inflating: during the last six months (not in the graph) prices have continued to increase …

House prices show continued increases which are way higher than the increases in nominal household income … Actions have to be taken: a gradual increase (with clear forward guidance) of land value taxes (the money raised has to be used to lower VAT on labor), a gradual decrease of loan-to-value ratios (with clear forward guidance) and a gradual banishment of tax deductions for interest paid (with clear forward guidance).

It won’t happen.

Merijn Knibbe

Yes indeed, house prices are increasing fast in the EU — and more so in Sweden than in any other member state. Sweden’s house price boom started in the mid-1990s, and looking at the development of real house prices during the last three decades, there are reasons to be deeply worried. The indebtedness of the Swedish household sector has also risen to alarmingly high levels:

[Chart: Swedish household debt as a share of GDP]

In its latest report on Sweden, the European Commission warns Sweden about rising house prices and spiralling household debts:

The government’s 22-point housing market plan addresses some underlying factors for housing shortage, including measures to increase the amount of available land for construction, reduce construction costs and shorten planning process lead times. However, some other structural inefficiencies, including weak competition in the construction sector, do not receive appropriate attention. The housing shortage is exacerbated by barriers hindering the efficient use of the existing housing stock. Sweden’s tightly regulated rental market creates lock-in and ‘insider/outsider’ effects, but no significant policy action has been taken to introduce more flexibility in setting rents. In the owner-occupancy market, relatively high capital gains taxes reduce homeowner mobility. A temporary reform of the deferral rules for capital gains taxes on property transactions was introduced, but this will probably have limited effect. Lack of available and affordable housing can also limit labour market mobility and the effective integration of migrants into the labour market, and contribute to intergenerational inequality.

Yours truly has been trying to argue with ‘very serious people’ that it’s really high time to ‘take away the punch bowl.’ Mostly I have felt like a voice crying in the wilderness.

Where do housing bubbles come from? There are of course many different explanations, but one of the more fundamental mechanisms at work is easy to explain with the following arbitrage argument:

Assume you have a sum of money (A) that you want to invest. You can put the money in a bank and receive a yearly interest (r) on it. Disregarding — for the sake of simplicity — risks, asset depreciation and transaction costs that may be present, rA should equal the income you would alternatively receive if you instead bought a house with a down payment (A) and let it out for a year at rent (h), plus the change in the house price (dhp) — i.e.

rA = h + dhp

Dividing both sides of the equation by the house price (hp), we get

hp = h/[r(A/hp) – (dhp/hp)]

If you expect house prices (hp) to increase, house prices will increase. It is this kind of self-generating cumulative process à la Wicksell-Myrdal that is the core of the housing bubble. Unlike the usual commodity markets, where demand curves normally slope downwards, on asset markets they often slope upwards, and they therefore give rise to this kind of instability. And the greater the leverage (the lower A/hp), the greater the increase in prices.
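A toy simulation can make this self-generating mechanism concrete (a sketch under assumed values: full financing so that A = hp, naive expectations, and purely hypothetical parameters). With A = hp, the arbitrage condition rA = h + dhp rearranges to hp = h/(r − e), where e is the expected rate of appreciation dhp/hp; feeding realized price growth back into e blows prices up after only a few rounds.

```python
# Toy illustration of the self-generating price process described above.
# With full financing (A = hp), the arbitrage condition rA = h + dhp gives
# hp = h / (r - e), where e is the expected appreciation rate dhp/hp.
# Under naive expectations (expect last period's realized growth), a tiny
# initial dose of optimism feeds back into prices explosively.

r, h = 0.05, 5.0          # interest rate and yearly rent (hypothetical values)

path = [h / r]            # start at the fundamental price h/r = 100.0 (e = 0)
e = 0.001                 # a small initial burst of optimism

while e < r and len(path) < 12:
    new_hp = h / (r - e)            # price consistent with expected appreciation e
    e = new_hp / path[-1] - 1       # naive expectations: expect realized growth
    path.append(new_hp)

print([round(p, 1) for p in path])  # prices climb: 100.0 -> 102.0 -> 169.0
```

The loop terminates as soon as e reaches r, at which point no finite price satisfies the condition at all. With leverage (A < hp) the same feedback runs even hotter, which is the text's point that a lower A/hp amplifies the rise.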

The Swedish housing market is living on borrowed time. It’s really high time to take away the punch bowl. What is especially worrying is that although the aggregate net asset position of the Swedish households is still on the solid side, an increasing proportion of those assets is illiquid. When the inevitable drop in house prices hits the banking sector and the rest of the economy, the consequences will be enormous.

It hurts when bubbles burst …
