The housing bubble

13 Feb, 2013 at 12:39 | Posted in Economics | 2 Comments

[Chart: house prices]

Perhaps this ought to prompt some reflection. The bubble may well deflate slowly, but the risk that it bursts with a bang is still considerable …

The inequality that is tearing our society apart

12 Feb, 2013 at 14:59 | Posted in Politics & Society | Comments Off on The inequality that is tearing our society apart

Today LO presented the twelfth report in its series Maktelitens inkomster (The incomes of the power elite). It shows that the income gap between those in power and ordinary wage earners continues to widen. Those in top executive positions, such as the CEOs of our large corporations, have incomes 46 times larger than those of ordinary workers – as much as an industrial worker earns over an entire working life.

How could we let things go this far? How much longer are we supposed to put up with these damned injustices?

Oh, you matadors of the market! The chariot of fate admittedly does not run on rails, but your hour too will come. As August Strindberg so rightly observed in Röda rummet nearly a hundred and fifty years ago – this is unbearable. But –

there will come a day when things are even worse, but then, then we will come down from Vita Bergen, from Skinnarviksbergen, from Tyskbagarbergen, and we will come with a great roar like a waterfall, and we will ask for our beds back. Ask? No, take! And you will lie on carpenter's benches, as I have had to, and you will eat potatoes until your bellies swell like drumskins, just as if you had gone through the water ordeal, as we have …

Macroeconomic quackery

12 Feb, 2013 at 11:26 | Posted in Economics, Theory of Science & Methodology | Comments Off on Macroeconomic quackery

In a recent interview Robert Lucas says he now believes that “the evidence on postwar recessions … overwhelmingly supports the dominant importance of real shocks.”

So, according to Lucas, changes in tastes and technologies should be able to explain the main fluctuations in e.g. unemployment that we have seen during the last six or seven decades.

Let’s look at the facts and see if there is any strong evidence for this claim. To take just a couple of examples, consider Sweden and Portugal:

[Chart: unemployment rates in Sweden and Portugal]

and at the situation in the eurozone:

[Chart: unemployment rate in the eurozone]

What shocks to tastes and technologies drove the unemployment rate up and down like this in these countries? Not even a Nobel laureate could in his wildest imagination come up with any warranted and justified explanation solely based on changes in tastes and technologies. Lucas is just making himself ridiculous.

How do we protect ourselves from this kind of scientific quackery? I think Larry Summers has a suggestion well worth considering:

Modern scientific macroeconomics sees a (the?) crucial role of theory as the development of pseudo worlds or, in Lucas’s (1980b) phrase, the “provision of fully articulated, artificial economic systems that can serve as laboratories in which policies that would be prohibitively expensive to experiment with in actual economies can be tested out at much lower cost” and explicitly rejects the view that “theory is a collection of assertions
about the actual economy” …

[A] great deal of the theoretical macroeconomics done by those professing to strive for rigor and generality, neither starts from empirical observation nor concludes with empirically verifiable prediction …

The typical approach is to write down a set of assumptions that seem in some sense reasonable, but are not subject to empirical test … and then derive their implications and report them as a conclusion. Since it is usually admitted that many considerations are omitted, the conclusion is rarely treated as a prediction …

However, an infinity of models can be created to justify any particular set of empirical predictions … What then do these exercises teach us about the world? … If empirical testing is ruled out, and persuasion is not attempted, in the end I am not sure these theoretical exercises teach us anything at all about the world we live in …

Reliance on deductive reasoning rather than theory based on empirical evidence is particularly pernicious when economists insist that the only meaningful questions are the ones their most recent models are designed to address. Serious economists who respond to questions about how today’s policies will affect tomorrow’s economy by taking refuge in technobabble about how the question is meaningless in a dynamic games context abdicate the field to those who are less timid. No small part of our current economic difficulties can be traced to ignorant zealots who gained influence by providing answers to questions that others labeled as meaningless or difficult. Sound theory based on evidence is surely our best protection against such quackery.

P values – noisy measures of evidence

12 Feb, 2013 at 10:47 | Posted in Statistics & Econometrics | 2 Comments

In theory, the P value is a continuous measure of evidence, but in practice it is typically trichotomized approximately into strong evidence, weak evidence, and no evidence (these can also be labeled highly significant, marginally significant, and not statistically significant at conventional levels), with cutoffs roughly at P = 0.01 and 0.10.

One big practical problem with P values is that they cannot easily be compared. The difference between a highly significant P value and a clearly nonsignificant P value is itself not necessarily statistically significant. (Here, I am using “significant” to refer to the 5% level that is standard in statistical practice in much of biostatistics, epidemiology, social science, and many other areas of application.) Consider a simple example of two independent experiments with estimates (standard error) of 25 (10) and 10 (10). The first experiment is highly statistically significant (two and a half standard errors away from zero, corresponding to a normal-theory P value of about 0.01) while the second is not significant at all. Most disturbingly here, the difference is 15 (14), which is not close to significant. The naive (and common) approach of summarizing an experiment by a P value and then contrasting results based on significance levels, fails here, in implicitly giving the imprimatur of statistical significance on a comparison that could easily be explained by chance alone … [T]his is not simply the well-known problem of arbitrary thresholds, the idea that a sharp cutoff at a 5% level, for example, misleadingly separates the P = 0.051 cases from P = 0.049. This is a more serious problem: even an apparently huge difference between clearly significant and clearly nonsignificant is not itself statistically significant.

In short, the P value is itself a statistic and can be a noisy measure of evidence.

Andrew Gelman
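Gelman's arithmetic is easy to check. A minimal Python sketch (the estimates and standard errors are those in the quote; using scipy's normal distribution is my own choice of tooling, not part of the original):

```python
from math import sqrt
from scipy.stats import norm

# Two independent experiments: estimate (standard error), as in the quote
est1, se1 = 25, 10   # "highly significant"
est2, se2 = 10, 10   # "not significant at all"

def two_sided_p(estimate, se):
    """Normal-theory two-sided P value against a null of zero."""
    z = estimate / se
    return 2 * norm.sf(abs(z))

p1 = two_sided_p(est1, se1)           # ~0.012
p2 = two_sided_p(est2, se2)           # ~0.32
diff = est1 - est2                    # 15
se_diff = sqrt(se1**2 + se2**2)       # ~14.1
p_diff = two_sided_p(diff, se_diff)   # ~0.29 -- nowhere near significant

print(f"Experiment 1: p = {p1:.3f}")
print(f"Experiment 2: p = {p2:.3f}")
print(f"Difference  : {diff} ({se_diff:.1f}), p = {p_diff:.3f}")
```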

The ergodic axiom and the shortcomings of risk management

11 Feb, 2013 at 15:47 | Posted in Economics, Statistics & Econometrics | 2 Comments

Unfortunately, as we have all learned in the world of experience, little is known with certainty about future payoffs of investment decisions made today. If the return on economic decisions made today is never known with certainty, then how can financial managers make optimal decisions on where to put their firm’s money, and householders decide where to put their savings, today?

If theorists invent a world remote from reality and then live in it consistently, then Keynes [1936, p.16] argued these economic thinkers were “like Euclidean geometers in a non-Euclidean world who, discovering that apparently parallel lines collide, rebuke these lines for not keeping straight. Yet, in truth there is no remedy except to throw over the axiom of parallels and to work out a non-Euclidean geometry. Something similar is required to-day in economics” …

As any statistician will tell you, in order to draw any statistical (probabilistic risk) inferences regarding the values of any population universe, one should draw and statistically analyze a sample from that universe. Drawing a sample from the future economic universe of financial markets, however, is impossible. Simply stated the ergodic axiom presumes that the future is already predetermined by an unchanging probability distribution and therefore a sample from the past is equivalent to drawing a sample from the future … Assuming ergodicity permits one to believe one can calculate an actuarial certainty about future events from past data.

Efficient market theorists must implicitly presume decision makers can reliably calculate the future. The economy, therefore, must be governed by an ergodic stochastic process, so that calculating a probability distribution from past statistical data samples is the same as calculating the risks from a sample drawn from the future. If financial markets are governed by the ergodic axiom, then we might ask why do mutual funds that advertise their wonderful past earnings record always note in the advertisement that past performance does not guarantee future results …

This ergodic axiom is an essential foundation for all the complex risk management computer models developed by the “quants” on Wall Street. It is also the foundation for econometricians who believe that their econometric models will correctly predict the future GDP, employment, inflation rate, etc. If, however, the economy is governed by a non-ergodic stochastic process, then econometric estimates generated from past market data are not reliable estimates of what would be obtained if one could draw a sample from the future …

In sum, the ergodic axiom underlying the typical risk management and efficient market models represents, in Keynes’s view, a model remote from an economic reality that is truly governed by non-ergodic conditions. Keynes, his Post Keynesian followers, and George Soros all reject the assumption that people can know the economic future since it is not predetermined. Instead they assert that people “know” they cannot know the future outcome of crucial economic decisions made today. The future is truly uncertain and not just probabilistically risky.

Paul Davidson
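To make the ergodic/non-ergodic distinction concrete, here is a minimal simulation sketch – entirely my own illustration, not Davidson's – contrasting a stationary AR(1) process, for which a sample from the past is a good guide to the future, with a random walk, for which it is not:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 10_000

def ar1(phi, sigma=1.0):
    """Stationary AR(1): x_t = phi * x_{t-1} + eps_t (ergodic in the mean when |phi| < 1)."""
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = phi * x[t - 1] + sigma * rng.standard_normal()
    return x

def random_walk(sigma=1.0):
    """Random walk: non-stationary, hence non-ergodic in the mean."""
    return np.cumsum(sigma * rng.standard_normal(T))

# Compare the time average over the "past" (first half) with the "future" (second half)
for name, x in [("AR(1), phi = 0.8", ar1(0.8)), ("random walk", random_walk())]:
    past, future = x[: T // 2].mean(), x[T // 2 :].mean()
    print(f"{name:16s}  past mean = {past:8.2f}   future mean = {future:8.2f}")

# For the AR(1) the two means agree (both near zero); for the random walk they
# typically differ a lot -- a sample from the past is not a sample from the future.
```

The random walk is only one simple stand-in for non-ergodicity, but it already breaks the "sample from the past = sample from the future" equivalence that the ergodic axiom presumes.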

Statistical vs. Practical Significance

10 Feb, 2013 at 11:22 | Posted in Statistics & Econometrics | Comments Off on Statistical vs. Practical Significance

Awesome Swedish Winter’s Tale

9 Feb, 2013 at 17:31 | Posted in Varia | 2 Comments


Absolutely fabulous video by Ted Ström.

Why Friedman’s methodology has “jumped the shark”

9 Feb, 2013 at 11:02 | Posted in Economics, Theory of Science & Methodology | 2 Comments

The basic argument of [Milton Friedman’s infamous 1953 essay ‘The Methodology of Positive Economics’] is that the unrealism of a theory’s assumptions should not matter; what matters are the predictions made by the theory. A truly realistic economic theory would have to incorporate so many aspects of humanity that it would be impractical or computationally impossible to do so. Hence, we must make simplifications, and cross check the models against the evidence to see if we are close enough to the truth. The internal details of the models, as long as they are consistent, are of little importance.

The essay, or some variant of it, is a fallback for economists when questioned about the assumptions of their models. Even though most economists would not endorse a strong interpretation of Friedman’s essay, I often come across the defence ’it’s just an abstraction, all models are wrong’ if I question, say, perfect competition, utility, or equilibrium. I summarise the arguments against Friedman’s position below.

The first problem with Friedman’s stance is that it requires a rigorous, empirically driven methodology that is willing to abandon theories as soon as they are shown to be inaccurate enough. Is this really possible in economics? …

The second problem with a ‘pure prediction’ approach to modelling is that, at any time, different theories or systems might exhibit the same behaviour, despite different underlying mechanics. That is: two different models might make the same predictions, and Friedman’s methodology has no way of dealing with this …

The third problem is the one I initially honed in on: the vagueness of Friedman’s definition of ‘assumptions,’ and how this compares to those used in science …

The fourth problem is related to the above: Friedman is misunderstanding the purpose of science. The task of science is not merely to create a ‘black box’ that gives rise to a set of predictions, but to explain phenomena: how they arise; what role each component of a system fills; how these components interact with each other …

The fifth problem is one that is specific to social sciences, one that I touched on recently: different institutional contexts can mean economies behave differently. Without an understanding of this context, and whether it matches up with the mechanics of our models, we cannot know if the model applies or not. Just because a model has proven useful in one situation or location, it doesn’t guarantee that it will be useful elsewhere, as institutional differences might render it obsolete.

The final problem, less general but important, is that certain assumptions can preclude the study of certain areas. If I suggested a model of planetary collision that had one planet, you would rightly reject the model outright. Similarly, in a world with perfect information, the function of many services that rely on knowledge – data entry, lawyers and financial advisors, for example – is nullified …

Friedman’s essay has economists occupying a strange methodological purgatory, where they seem unreceptive to both internal critiques of their theories, and their testable predictions. This follows directly from Friedman’s ambiguous position. My position, on the other hand, is that the use and abuse of assumptions is always something of a judgment call. Part of learning how to develop, inform and reject theories is having an eye for when your model, or another’s, has done the scientific equivalent of jumping the shark.

Unlearning Economics

On the irreversibility of time and economics

8 Feb, 2013 at 19:24 | Posted in Economics, Statistics & Econometrics | 2 Comments

[Video: Ole Peters lecture]

As yours truly has argued – e.g. here, here and here – this is an extremely important issue for everyone wanting to understand the deep fundamental flaws of mainstream neoclassical economics.

Added 9/2: As an almost immediate testimony to how wrong things may go when one does not understand the importance of distinguishing between real, non-ergodic time averages and unreal, hypothetical, ergodic ensemble averages, Noah Smith yesterday posted a piece defending the Efficient Market Hypothesis (falsely intimating that it is a popular target of critique only among “lay critics of the econ profession”), basically referring to the same Paul Samuelson that Ole Peters so rightly criticises in his lecture.

On probabilism and statistics

8 Feb, 2013 at 12:45 | Posted in Statistics & Econometrics | 3 Comments

‘Mr Brown has exactly two children. At least one of them is a boy. What is the probability that the other is a girl?’ What could be simpler than that? After all, the other child either is or is not a girl. I regularly use this example on the statistics courses I give to life scientists working in the pharmaceutical industry. They all agree that the probability is one-half.

So they are all wrong. I haven’t said that the older child is a boy. The child I mentioned, the boy, could be the older or the younger child. This means that Mr Brown can have one of three possible combinations of two children: both boys, elder boy and younger girl, elder girl and younger boy, the fourth combination of two girls being excluded by what I have stated. But of the three combinations, in two cases the other child is a girl so that the requisite probability is 2/3 …

This example is typical of many simple paradoxes in probability: the answer is easy to explain but nobody believes the explanation. However, the solution I have given is correct.

Or is it? That was spoken like a probabilist. A probabilist is a sort of mathematician. He or she deals with artificial examples and logical connections but feels no obligation to say anything about the real world. My demonstration, however, relied on the assumption that the three combinations boy–boy, boy–girl and girl–boy are equally likely and this may not be true. The difference between a statistician and a probabilist is that the latter will define the problem so that this is true, whereas the former will consider whether it is true and obtain data to test its truth.
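The statistician's caveat in the quoted passage – that the 2/3 answer hinges both on the three remaining combinations being equally likely and on how we came to learn that one child is a boy – can be checked with a small simulation. A minimal sketch (my own illustration, not from the quoted text):

```python
import random

random.seed(1)
N = 100_000

# Each family: two children, each independently a boy ("B") or girl ("G")
families = [(random.choice("BG"), random.choice("BG")) for _ in range(N)]

# Condition on "at least one of them is a boy"
with_boy = [f for f in families if "B" in f]
other_is_girl = [f for f in with_boy if "G" in f]
print(len(other_is_girl) / len(with_boy))   # ~0.667, the probabilist's 2/3

# The statistician's caveat: if instead we learn about the boy because one
# child was picked at random and happened to be a boy, the answer changes.
picked_boy = []
for f in families:
    i = random.randrange(2)
    if f[i] == "B":
        picked_boy.append(f)
print(sum("G" in f for f in picked_boy) / len(picked_boy))  # ~0.5
```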

Models in economics

8 Feb, 2013 at 09:45 | Posted in Economics | Comments Off on Models in economics

Remember that a model is not the truth. It is a lie to help you get your point across. And in the case of modeling economic risk, your model is a lie about others, who are probably lying themselves. And what’s worse than a simple lie? A complicated lie.

Sam L. Savage, The Flaw of Averages

Gunilla von Bahr (1941-2013)

7 Feb, 2013 at 17:37 | Posted in Varia | Comments Off on Gunilla von Bahr (1941-2013)

On inflation targeting and rational expectations

6 Feb, 2013 at 17:50 | Posted in Economics | 4 Comments

[Image: inflation targeting]

The Riksbank in 1993 announced an official target for CPI inflation of 2 percent. Over the last 15 years, average CPI inflation has equaled 1.4 percent and has thus fallen short of the target by 0.6 percentage points. Has this undershooting of the inflation target had any costs in terms of higher average unemployment? This depends on whether the long-run Phillips curve in Sweden is vertical or not. During the last 15 years, inflation expectations in Sweden have become anchored to the inflation target in the sense that average inflation expectations have been close to the target. The inflation target has thus become credible. If inflation expectations are anchored to the target also when average inflation deviates from the target, the long-run Phillips curve is no longer vertical but downward-sloping. Then average inflation below the credible target means that average unemployment is higher than the rational-expectations steady-state (RESS) unemployment rate. The data indicate that the average unemployment rate has been 0.8 percentage points higher than the RESS rate over the last 15 years. This is a large unemployment cost of undershooting the inflation target. Some simple robustness tests indicate that the estimate of the unemployment cost is rather robust, but the estimate is preliminary and further scrutiny is needed to assess its robustness.

During 1997-2011, average CPI inflation has fallen short of the inflation target of 2 percent by 0.6 percentage points. But average inflation expectations according to the TNS Sifo Prospera survey have been close to the target. Thus, average inflation expectations have been anchored to the target and the target has become credible. If average inflation expectations are anchored to the target when average inflation differs from the target, the long-run Phillips curve is not vertical. Then lower average inflation means higher average unemployment. The data indicate that average inflation below target has been associated with average unemployment being 0.8 percentage points higher over the last 15 years than would have been the case if average inflation had been equal to the target. This is a large unemployment cost of average inflation below a credible target. Some simple robustness tests indicate that the estimate of the unemployment cost is rather robust, but the estimate is preliminary and further scrutiny is needed to assess its robustness.

The difference between average inflation and average inflation expectations and the apparent existence of a downward-sloping long-run Phillips curve raises several urgent questions that I believe need to be addressed. Why have average inflation expectations exceeded average inflation for 15 years? Why has average inflation fallen below the target for 15 years? Could average inflation have fallen below average inflation expectations and the inflation target without the large unemployment cost estimated here? Could the large unemployment cost have been avoided with a different monetary policy? What are the policy implications for the future? Do these findings make price-level targeting or the targeting of average inflation over a longer period relatively more attractive, since they would better ensure that average inflation over longer periods equals the target?

Lars E.O. Svensson, The Possible Unemployment Cost of Average Inflation below a Credible Target

According to Lars E. O. Svensson – deputy governor of the Riksbank – the Swedish Riksbank has during the years 1997-2011 pursued a policy that has in effect made inflation on average 0.6 percentage points lower than the bank’s own target. The Phillips curve he estimates shows that, as a result of this overly “austere” inflation level, unemployment has been almost one percentage point higher than if the Riksbank had stuck to its stated inflation target of 2 percent.
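A back-of-the-envelope sketch of the arithmetic behind these numbers (my own illustrative reconstruction, assuming a simple linear long-run Phillips curve; this is not Svensson's estimation procedure):

```python
# Numbers taken from the quoted paper
pi_target = 2.0   # per cent: credible target = anchored expectations
pi_average = 1.4  # per cent: average CPI inflation 1997-2011
unemp_gap = 0.8   # percentage points above the RESS unemployment rate

inflation_shortfall = pi_target - pi_average      # 0.6 pp
implied_slope = unemp_gap / inflation_shortfall   # ~1.33

# With a downward-sloping long-run Phillips curve u = u* + b*(pi_e - pi),
# anchored expectations (pi_e = 2.0) and average inflation of 1.4 imply an
# average unemployment cost of b * 0.6 percentage points.
print(f"implied slope b      = {implied_slope:.2f}")
print(f"unemployment cost    = {implied_slope * inflation_shortfall:.1f} pp")
```

Nothing here depends on the exact functional form; the sketch only makes explicit how the 0.6 point inflation shortfall and the 0.8 point unemployment gap are linked once expectations are anchored.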

What Svensson is saying, albeit not in so many words, is that the Swedish Fed has needlessly made people unemployed. As a consequence of a faulty monetary policy, unemployment is considerably higher than it would have been if the Swedish Fed had done its job adequately.

From a more methodological point of view it is of course also interesting to consider the use made of the rational expectations hypothesis in these model-based calculations (and in models of the same ilk that abound in “modern” macroeconomics). When the data tell us that “average inflation expectations exceeded average inflation for 15 years” – wouldn’t it be high time to put the REH where it belongs – in the dustbin of history?

To me Svensson’s paper basically confirms what I wrote a couple of months ago:

Models based on REH impute beliefs to agents that are not based on any real informational considerations, but are simply stipulated to make the models mathematically and statistically tractable.

Of course you can make assumptions based on tractability, but then you also have to take into account the trade-off this implies in terms of the ability to make relevant and valid statements about the intended target system.

Mathematical tractability cannot be the ultimate arbiter in science when it comes to modeling real world target systems. Of course, one could perhaps accept REH if it had produced lots of verified predictions and good explanations. But it has done nothing of the kind. Therefore the burden of proof is on those who still want to use models built on ridiculously unreal assumptions – models devoid of all empirical interest.

In reality, REH is a rather harmful modeling assumption, since it contributes to perpetuating the ongoing transformation of economics into a kind of science-fiction economics. If economics is to guide us, help us make forecasts, and explain or better understand real-world phenomena, REH is in fact next to worthless.

On the non-equivalence of Keynesian and Knightian uncertainty (wonkish)

5 Feb, 2013 at 22:30 | Posted in Economics, Theory of Science & Methodology | 2 Comments

Last year the Bank of England’s Andrew G. Haldane and Benjamin Nelson presented a paper with the title Tails of the unexpected. The main message of the paper was that we should not let ourselves be fooled by randomness:

For almost a century, the world of economics and finance has been dominated by randomness. Much of modern economic theory describes behaviour by a random walk, whether financial behaviour such as asset prices (Cochrane (2001)) or economic behaviour such as consumption (Hall (1978)). Much of modern econometric theory is likewise underpinned by the assumption of randomness in variables and estimated error terms (Hayashi (2000)).

But as Nassim Taleb reminded us, it is possible to be Fooled by Randomness (Taleb (2001)). For Taleb, the origin of this mistake was the ubiquity in economics and finance of a particular way of describing the distribution of possible real world outcomes. For non-nerds, this distribution is often called the bell-curve. For nerds, it is the normal distribution. For nerds who like to show-off, the distribution is Gaussian.

The normal distribution provides a beguilingly simple description of the world. Outcomes lie symmetrically around the mean, with a probability that steadily decays. It is well-known that repeated games of chance deliver random outcomes in line with this distribution: tosses of a fair coin, sampling of coloured balls from a jam-jar, bets on a lottery number, games of paper/scissors/stone. Or have you been fooled by randomness?

In 2005, Takashi Hashiyama faced a dilemma. As CEO of Japanese electronics corporation Maspro Denkoh, he was selling the company’s collection of Impressionist paintings, including pieces by Cézanne and van Gogh. But he was undecided between the two leading houses vying to host the auction, Christie’s and Sotheby’s. He left the decision to chance: the two houses would engage in a winner-takes-all game of paper/scissors/stone.

Recognising it as a game of chance, Sotheby’s randomly played “paper”. Christie’s took a different tack. They employed two strategic game-theorists – the 11-year old twin daughters of their international director Nicholas Maclean. The girls played “scissors”. This was no random choice. Knowing “stone” was the most obvious move, the girls expected their opponents to play “paper”. “Scissors” earned Christie’s millions of dollars in commission.

As the girls recognised, paper/scissors/stone is no game of chance. Played repeatedly, its outcomes are far from normal. That is why many hundreds of complex algorithms have been developed by nerds (who like to show off) over the past twenty years. They aim to capture regularities in strategic decision-making, just like the twins. It is why, since 2002, there has been an annual international world championship organised by the World Rock-Paper-Scissors Society.

The interactions which generate non-normalities in children’s games repeat themselves in real world systems – natural, social, economic, financial. Where there is interaction, there is non-normality. But risks in real-world systems are no game. They can wreak havoc, from earthquakes and power outages, to depressions and financial crises. Failing to recognise those tail events – being fooled by randomness – risks catastrophic policy error.

So is economics and finance being fooled by randomness? And if so, how did that happen?

Normality has been an accepted wisdom in economics and finance for a century or more. Yet in real-world systems, nothing could be less normal than normality. Tails should not be unexpected, for they are the rule. As the world becomes increasingly integrated – financially, economically, socially – interactions among the moving parts may make for potentially fatter tails. Catastrophe risk may be on the rise.

If public policy treats economic and financial systems as though they behave like a lottery – random, normal – then public policy risks itself becoming a lottery. Preventing public policy catastrophe requires that we better understand and plot the contours of systemic risk, fat tails and all. It also means putting in place robust fail-safes to stop chaos emerging, the sand pile collapsing, the forest fire spreading. Until then, normal service is unlikely to resume.
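Haldane and Nelson's point that "tails should not be unexpected, for they are the rule" is easy to make concrete. A minimal sketch comparing tail probabilities under a normal distribution and under a fat-tailed Student-t scaled to the same variance (the choice of three degrees of freedom is my own, purely illustrative):

```python
from scipy.stats import norm, t

# Probability of an outcome more than k "sigmas" below the mean, under a
# normal distribution and under a fat-tailed Student-t with 3 degrees of
# freedom rescaled to unit variance. Illustrative numbers only.
nu = 3
scale = (nu / (nu - 2)) ** -0.5   # rescaling so the t distribution has variance 1

for k in (3, 5, 10):
    p_normal = norm.cdf(-k)
    p_fat = t.cdf(-k / scale, df=nu)
    print(f"{k:2d}-sigma move:  normal {p_normal:.2e}   fat-tailed {p_fat:.2e}")
```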

Since I think this is a great paper, it merits a couple of comments.

To understand real world ”non-routine” decisions and unforeseeable changes in behaviour, ergodic probability distributions are of no avail. In a world full of genuine uncertainty – where real historical time rules the roost – the probabilities that ruled the past are not those that will rule the future.

Time is what prevents everything from happening at once. To simply assume that economic processes are ergodic and concentrate on ensemble averages – and a fortiori in any relevant sense timeless – is not a sensible way of dealing with the kind of genuine uncertainty that permeates open systems such as economies.

When you assume economic processes to be ergodic, ensemble and time averages are identical. Let me give an example: Assume we have a market with an asset priced at 100 €. Then imagine the price first goes up by 50% and then later falls by 50%. The ensemble average for this asset would be 100 € – because we here envision two parallel universes (markets) where the asset price falls in one universe (market) by 50% to 50 €, and in the other universe (market) goes up by 50% to 150 €, giving an average of 100 € ((150+50)/2). The time average for this asset would be 75 € – because we here envision one universe (market) where the asset price first rises by 50% to 150 €, and then falls by 50% to 75 € (0.5*150).
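The example scales up. Repeating the "up 50% or down 50%" gamble period after period, the ensemble average stays at 100 € (the expected factor is one each period), while the time-average growth factor is √(1.5 × 0.5) ≈ 0.87, so the typical single path decays. A minimal simulation sketch (my own illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
n_paths, n_steps, p0 = 100_000, 10, 100.0

# Each period the price goes up 50% or down 50% with equal probability.
factors = rng.choice([1.5, 0.5], size=(n_paths, n_steps))
end_prices = p0 * factors.prod(axis=1)

ensemble_average = end_prices.mean()               # ~100: E[factor] = 1 each step
median_path = np.median(end_prices)                # ~24: the typical path decays
per_step_growth = np.exp(np.log(factors).mean())   # ~0.866 = sqrt(1.5 * 0.5)

print(f"ensemble average after {n_steps} steps: {ensemble_average:7.1f}")
print(f"median end price after {n_steps} steps: {median_path:7.1f}")
print(f"time-average growth factor per step:    {per_step_growth:7.3f}")
```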

From the ensemble perspective nothing really, on average, happens. From the time perspective lots of things really, on average, happen.

Assuming ergodicity, there would have been no difference at all. What is important about real social and economic processes being nonergodic is that uncertainty – not risk – rules the roost. That was something both Keynes and Knight basically said in their 1921 books. Thinking about uncertainty in terms of “rational expectations” and “ensemble averages” has had seriously bad repercussions on the financial system.

Knight’s uncertainty concept has an epistemological foundation and Keynes’s definitely an ontological one. Of course this also has repercussions on the issue of ergodicity in a strict methodological and mathematical-statistical sense. I think Keynes’s view is the more warranted of the two.

The most interesting and far-reaching difference between the epistemological and the ontological view is that if one subscribes to the former, Knightian view – as Taleb, Haldane & Nelson and “Black Swan” theorists basically do – one opens the door to the mistaken belief that with better information and greater computing power we should somehow always be able to calculate probabilities and describe the world as an ergodic universe. As Keynes convincingly argued, that is ontologically just not possible.

To Keynes the source of uncertainty was in the nature of the real – nonergodic – world. It had to do, not only – or primarily – with the epistemological fact of us not knowing the things that today are unknown, but rather with the much deeper and far-reaching ontological fact that there often is no firm basis on which we can form quantifiable probabilities and expectations at all.

The Keynes-Tobin tax – a step in the right direction

5 Feb, 2013 at 15:15 | Posted in Economics, Politics & Society | 3 Comments

In the wake of the financial crisis, a growing number of leading EU politicians have once again begun to call for the introduction of a Keynes-Tobin tax on financial transactions.

Europe’s core countries have now decided to introduce a tax on financial transactions, and the Commission and the European Parliament have approved the proposal. The last obstacle – a veto from one of the EU countries that do not want to take part (such as the United Kingdom and Sweden) – was removed at the meeting of European finance ministers two weeks ago.

Many establishment economists have claimed that introducing such a tax would be an “extremely misguided measure”. There is apparently almost no end to the harm a Keynes-Tobin tax would cause: lower investment, lower output, reduced trading volumes and falling wages.

Opinions of that kind illustrate, almost too clearly, what is perhaps most paradoxical about financial crises – that many economists seem unwilling to learn anything from them. A look back at recent history otherwise offers instructive lessons.

In early 2000 it became increasingly clear that the extensive deregulation of financial markets since the Thatcher-Reagan era had brought about an excessively rapid credit expansion. Lending by banks and finance companies grew explosively, and in the race for ever larger market shares, creditworthiness checks were neglected and bad customer relationships were accepted. Above all, the valuations of IT stocks were disproportionately high. This was bound to lead to a financial crisis. And so it did. The bubble burst and the financial market crisis was a fact.

Since human memory is short, the same mechanisms could once again produce a financial meltdown in 2008. The crisis whose aftermath the world economy is still living through did not this time originate in IT stocks, but in the speculative bubble that developed in the American mortgage market in 1997-2006, and in a costly economic experiment in the form of the euro.

The underlying pattern has been the same in this as in other financial crises. For some reason a displacement occurs (war, innovations, new rules of the game, etc.) in the economic circuit, changing the profit opportunities of banks and firms. Demand and prices rise, pulling more and more parts of the economy into a kind of euphoria. More and more people are drawn in, and soon the speculative mania – whether it concerns tulip bulbs, real estate or mortgages – is a fact. Sooner or later someone sells to realize their gains and a rush for liquidity sets in. It is time to jump off the carousel and convert securities and other assets into ready cash. Financial distress arises and spreads. Prices start to fall, bankruptcies multiply, and the crisis accelerates and turns into panic. To prevent the final crash, credit is tightened and calls go out for a lender of last resort who can guarantee the supply of the cash everyone is demanding and restore confidence. If that does not succeed, the crash is a fact.

Financial crises are an inevitably recurring feature of an economy that gives free rein to essentially unregulated markets. Behavioural finance has shown that the picture of investors as rational is hard to reconcile with facts drawn from real financial markets. Investors seem to trade on noise rather than information. They extrapolate from short runs of data, are sensitive to how problems are framed, are bad at revising their risk assessments, and are often oversensitive to mood swings. Irrationality is not irrelevant in financial markets.

Financial markets function essentially as a kind of information exchange. Since market participants do not have complete information, they tend to gather information about economic fundamentals simply by watching each other’s behaviour. The herd instinct makes market actors think in step and helps give developments the character of a by-product of casino activities. With the new technologies and instruments comes a built-in pressure to act quickly, which feeds the herd behaviour further. Information is not digested but is immediately converted into action based on guesses about how “the others” will react once the information becomes public. This, in turn, leads to increased risk exposure.

Since the deregulation of financial markets took off in the 1980s, banks’ capital coverage – how much capital banks must hold in reserve relative to their lending volume – has fallen from 80-90 per cent to around 50-60 per cent today. This undercapitalization obviously paved the way for the Ponzi-like speculation that led up to the latest financial crisis, in which banks sold assets they did not own to speculators with money they did not have, at prices they ultimately could not obtain.

Unfortunately, the expansion of shadow banking, the shifting of lending into banks’ regulation-exempt investment divisions, securitization and other forms of financial innovation have rendered much of the regulation that remained after deregulation ineffective. Although the stated purpose of the new instruments was to reduce and spread risk, the effect has rather been reckless lending that has increased risk exposure and moral hazard.

So what can be done to minimize the risk of future crises? Financial market actors obviously generate costs that they do not themselves bear. When the foremost economist of our time – John Maynard Keynes – advocated the introduction of a general financial market tax after the stock market crash of 1929, it was because he thought the market should then bear the costs that its instability, imbalances and disruptions give rise to. Those who, in their hunger for profit, are prepared to take unnecessary risks and do lasting damage to the economy must themselves help pay the bill.

In the early 1970s James Tobin revived Keynes’s idea by proposing a tax on currency transactions. The main aim of such a tax is to curb speculation by automatically penalizing short-term trading, while having negligible effects on the incentives for long-term trade and capital investment.

Financial markets have important functions to fulfil in an economy. Market advocates have been good at demonstrating this. They have been less good at pointing to the costs these markets also give rise to.

The beauty of a Keynes-Tobin tax is that it would cool the interest of financial market operators – and share-trading robots – in short-term speculation, while having no more than negligible effects on long-term investment decisions.

These activities have, after all, proved to be of doubtful social value – they consume a large share of our common resources in the form of human thought, talent and computing power – and in the end give back little more than debts and crises. Which others have to pay for. Throwing some sand in that machinery would probably contribute to better husbandry of society’s pool of resources.

Critics of the Keynes-Tobin tax base most of their reasoning on the risk that introducing the tax in the EU would simply make banking activity move elsewhere. But on reflection, the same can be said of every proposal for financial market regulation. So why only in the case of this particular tax? And why do we never hear the argument when we talk about banks’ brokerage fees (which in several countries are higher than the tax rate of 0.1 per cent now proposed by the European Commission)? Or Britain’s stamp duty of 0.5 per cent? And if one is really so afraid that bank executives will try to evade the tax, one could, for instance, introduce an incentive-compatible reward of, say, 5-10 per cent of what the state would collect, payable to bank employees who expose their bosses’ attempts at “dodging”.

We have to realize that we cannot both have our cake and eat it. As long as we have an economy with essentially unregulated financial markets, we will also have to live with recurring crises. These days it is not only central bank governors who recognize that interventions in financial markets are justified when prices “move away from what is fundamentally warranted.”

I am convinced that a Keynes-Tobin tax, together with greater openness about the need for banking, capital and currency regulation, can help us in the long run to reduce systemic financial risks and costly financial instability. If, on the other hand, we reflexively refuse to see the scale of the problems, we will once again stand helpless when the next crisis looms.

A Keynes-Tobin tax is far from solving all the problems of financial markets. But introducing it would send a strong signal about how we view the social value of the casino economy. For as Keynes writes:

Speculators may do no harm as bubbles on a steady stream of enterprise. But the position is serious when enterprise becomes the bubble on a whirlpool of speculation. When the capital development of a country becomes a by-product of the activities of a casino, the job is likely to be ill-done … The introduction of a substantial transfer tax on all transactions might prove the most serviceable reform available, with a view to mitigating the predominance of speculation over enterprise …

