Ha Ha Said The Clown

22 Jun, 2015 at 21:25 | Posted in Varia | Comments Off on Ha Ha Said The Clown

 

Do you also feel offended?

22 Jun, 2015 at 20:36 | Posted in Varia | Comments Off on Do you also feel offended?

Today’s students seem to have sensitive little souls, given how easily they feel offended. Moreover, they seem to live under the strange notion that offence is always wrong. These young people consider themselves entitled never to be exposed to anything that might disturb their peace of mind.

Many therefore want art and literature nowadays to carry so-called trigger warnings, announcing that words are used or ideas occur here that may be offensive and hurtful to anyone who does not embrace the progressive ideas of our time.

Whatever does not conform to these ideas must be purged. Away with the Divine Comedy, in which Dante portrayed the prophet of Islam as a schismatic with a place in hell rather than as “the perfect human being”. Nobel laureates such as William Faulkner and T. S. Eliot treat blacks and Jews in their works in ways that are unacceptable, and therefore they should no longer be read. Nor should we have to read books with titles as discriminatory to minorities as The Idiot and The Dwarf. And is it not time, after more than two thousand years, to purge Plato and Aristotle from the Western cultural heritage? Two dead white men with a view of women that is utterly unacceptable today.

Carl Rudbeck

Well roared, Carl!

Today’s imbecilic, opinion-spouting prattle about feeling offended is an insult to all those who really are violated in our society. To believe that outright falsification of history and the “editing” of our literary treasury would in any way remedy this only shows that, as usual, a large part of the PC crowd has bad luck when it tries to think.

Sherlock Holmes of the year

22 Jun, 2015 at 20:24 | Posted in Economics | Comments Off on Sherlock Holmes of the year

Do economic booms cause economic busts?

To a lot of people, this seems like a silly question to even ask. Of course booms cause busts, they say. Excessive greed or optimism or easy credit leads to overinvestment, soaring asset prices and unsustainable borrowing binges. What goes up must come down, and the surest sign of a bust tomorrow is a boom today …

So many people instinctively believe this that it would astonish most people to learn that for the last half-century, this hasn’t been the way macroeconomists — the type working as university professors, anyway — think about the business cycle …

A small handful of macroeconomists are turning back to the old idea that booms cause busts, and vice versa … Paul Beaudry and Franck Portier are two such researchers. They are famous for a 2006 theory saying that news about future changes in productivity could be what causes recessions and booms. That model never really caught on …

Now, Beaudry and Portier, along with co-author Dana Galizia, are going after bigger fish. They want to resurrect the idea that booms cause recessions.

Noah Smith

Wooh! Booms causing busts. Who would have thought anything like that.

Impressive indeed …

Sweden has become a little duller still (personal)

21 Jun, 2015 at 21:27 | Posted in Varia | Comments Off on Sweden has become a little duller still (personal)

 

 
Eva Remaeus 1950-02-13 — 1993-01-29
‘Brasse’ Brännström 1945-02-27 — 2014-08-29
Magnus Härenstam 1941-06-19 — 2015-06-13

Thanks for everything

All the laughter

The joy

R. I. P.

The way forward — discard 90% of the data!

21 Jun, 2015 at 20:02 | Posted in Statistics & Econometrics | 1 Comment

Could it be better to discard 90% of the reported research? Surprisingly, the answer is yes to this statistical paradox. This paper has shown how publication selection can greatly distort the research record and its conventional summary statistics. Using both Monte Carlo simulations and actual research examples, we show how a simple estimator, which uses only 10 percent of the reported research, reduces publication bias and improves efficiency over conventional summary statistics that use all the reported research.

The average of the most precise 10 percent, ‘Top10,’ of the reported estimates of a given empirical phenomenon is often better than conventional summary estimators because of its heavy reliance on the reported estimate’s precision (i.e., the inverse of the estimate’s standard error). When estimates are chosen, in part, for their statistical significance, studies cursed with imprecise estimates have to engage in more intense selection from among alternative statistical techniques, models, data sets, and measures to produce the larger estimate that statistical significance demands. Thus, imprecise estimates will contain larger biases.

Studies that have access to more data will tend to be more precise, and hence less biased. At the level of the original empirical research, the statistician’s motto, “the more data the better,” holds because more data typically produce more precise estimates. It is only at the meta-level of integrating, summarizing, and interpreting an entire area of empirical research (meta-analysis), where the removal of 90% of the data might actually improve our empirical knowledge. Even when the authors of these larger and more precise studies actively select for statistical significance in the desired direction, smaller significant estimates will tend to be reported. Thus, precise studies will, on average, be less biased and thereby possess greater scientific quality, ceteris paribus.

We hope that the statistical paradox identified in this paper refocuses the empirical sciences upon precision. Precision should be universally adopted as one criterion of research quality, regardless of other statistical outcomes.

T.D. Stanley, Stephen B. Jarrell, and Hristos Doucouliagos
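To make the ‘Top10’ idea concrete, here is a minimal Python sketch (my illustration, not the authors’ code; the directional selection rule, sample sizes and seed are all assumed values) of the estimator together with a crude publication-selection experiment in the spirit of the paper’s simulations:

```python
import numpy as np

def top10_estimate(estimates, standard_errors):
    # Average of the most precise 10% of reported estimates,
    # precision being the inverse of the standard error.
    estimates = np.asarray(estimates, dtype=float)
    standard_errors = np.asarray(standard_errors, dtype=float)
    k = max(1, int(np.ceil(0.10 * len(estimates))))
    most_precise = np.argsort(standard_errors)[:k]  # smallest SE = most precise
    return estimates[most_precise].mean()

# Crude selection experiment: the true effect is zero, but every 'study'
# keeps redrawing until its estimate is positive and significant.
rng = np.random.default_rng(0)
n_studies = 200
se = rng.uniform(0.05, 1.0, n_studies)   # studies differ in precision
est = np.empty(n_studies)
for i in range(n_studies):
    draw = rng.normal(0.0, se[i])
    while draw / se[i] < 1.96:           # select for t >= 1.96
        draw = rng.normal(0.0, se[i])
    est[i] = draw

print("naive mean of all estimates:", est.mean())              # badly biased upwards
print("Top10 estimate             :", top10_estimate(est, se)) # much closer to zero
```

Because a precise study needs only a small estimate to clear the significance hurdle, the most precise 10% end up far less distorted than the full, selected sample, which is exactly the mechanism the quotation describes.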

Is public debt — really — a burden?

19 Jun, 2015 at 15:33 | Posted in Economics | 4 Comments

One of the most effective ways of clearing up this most serious of all semantic confusions is to point out that private debt differs from national debt in being external. It is owed by one person to others. That is what makes it burdensome. Because it is interpersonal the proper analogy is not to national debt but to international debt…. But this does not hold for national debt which is owed by the nation to citizens of the same nation. There is no external creditor. We owe it to ourselves.

A variant of the false analogy is the declaration that national debt puts an unfair burden on our children, who are thereby made to pay for our extravagances. Very few economists need to be reminded that if our children or grandchildren repay some of the national debt these payments will be made to our children or grandchildren and to nobody else. Taking them altogether they will no more be impoverished by making the repayments than they will be enriched by receiving them.

Abba Lerner The Burden of the National Debt (1948)

Few issues in politics and economics are nowadays more discussed – and less understood – than public debt. Many raise their voices to urge a reduction of the debt, but few explain why and in what way reducing the debt would be conducive to a better economy or a fairer society. And there are no limits to all the – especially macroeconomic – calamities and evils a large public debt is supposed to result in: unemployment, inflation, higher interest rates, lower productivity growth, increased burdens for subsequent generations, etc., etc.

People usually care a lot about public sector budget deficits and debts, and are as a rule worried and negative. Drawing analogies with their own household’s finances, they see debt as a sign of an imminent risk of default, and hence a source of reprobation. But although no one doubts the political and economic significance of public debt, there is no unanimity whatsoever among economists as to whether debt matters, and if so, why and in what way. Even less does one know what the “optimal” size of public debt is.

Throughout history, public debts have gone up and down, often expanding in periods of war or of large changes in basic infrastructure and technologies, and then receding in periods when things have settled down.

The pros and cons of public debt have been put forward for as long as the phenomenon itself has existed, but it has, notwithstanding that, not been possible to reach anything close to consensus on the issue — at least not in a long time-horizon perspective. As a rule one has not even been able to agree on whether public debt is a problem, and if so, when it is one and how best to tackle it. Some of the more prominent reasons for this lack of consensus are the complexity of the issue, the mingling of vested interests, ideology, psychological fears, the uncertainty of calculating and estimating inter-generational effects, etc., etc.

In the mercantilist era public debt was as a rule considered positive (cf. Berkeley, Melon, de Pinto), a view that was later repeated in the 19th century by, e.g., the economists Adolf Wagner, Lorenz von Stein and Carl Dietzel. The state’s main aim was to control and distribute the resources of the nation, often through regulations and forceful state interventions. As a result of increased public debt, the circulation of money and credit would increase the amount of capital and contribute to the wealth of nations. Public debt was basically considered something that was moved from “the right hand to the left hand.” The economy simply needed a state that was prepared to borrow substantial amounts of money, issue financial paper, and incur indebtedness in the process.

There was also a clear political dimension to the issue, and some authors were clearly aware that government loan/debt activities could have a politically stabilizing effect. Investors had a vested interest in stable governments (low interest rate and low risk premium) and so instinctively were loyal to the government.

In classical economics — following in the footsteps of David Hume – especially Adam Smith, David Ricardo, and Jean-Baptiste Say put forward views on public debt that were more negative. The good budget was a balanced budget. If government borrowed money to finance its activities, it would only “crowd out” private enterprise and investment. The state was generally considered incapable of paying its debts, and the real burden would therefore essentially fall on the taxpayers, who ultimately had to pay for the irresponsibility of government. The moral character of the argumentation was a salient feature: “either the nation must destroy public credit, or the public credit will destroy the nation” (Hume 1752).

Later on, in the 20th century, economists like John Maynard Keynes, Abba Lerner and Alvin Hansen would again hold a more positive view of public debt. Public debt was normally nothing to fear, especially if it was financed within the country itself (but even foreign loans could be beneficial for the economy if invested in the right way). Some members of society would hold bonds and earn interest on them, while others would have to pay the taxes that ultimately paid the interest on the debt. But the debt was not considered a net burden for society as a whole, since the debt cancelled itself out between the two groups. If the state could issue bonds at a low interest rate, unemployment could be reduced without necessarily resulting in strong inflationary pressure. And the inter-generational burden was no real burden, according to this group of economists, since — if the debt was used in a suitable way — future generations would, through its effects on investments and employment, actually be net winners. There could, of course, be unwanted negative distributional side effects for the future generation, but that was mostly considered a minor problem, since (Lerner 1948) “if our children or grandchildren repay some of the national debt these payments will be made to our children and grandchildren and to nobody else.”

Central to the Keynesian influenced view is the fundamental difference between private and public debt. Conflating the one with the other is an example of the atomistic fallacy, which is basically a variation on Keynes’ savings paradox. If an individual tries to save and cut down on debts, that may be fine and rational, but if everyone tries to do it, the result would be lower aggregate demand and increasing unemployment for the economy as a whole.

An individual always has to pay his debts. But a government can always pay back old debts with new, through the issue of new bonds. The state is not like an individual. Public debt is not like private debt. Government debt is essentially a debt to itself, its citizens. Interest paid on the debt is paid by the taxpayers on the one hand, but on the other hand, interest on the bonds that finance the debts goes to those who lend out the money.

Abba Lerner’s essay Functional Finance and the Federal Debt set out guiding principles for governments to adopt in using economic — especially fiscal — policy to maintain full employment and prosperity in economies struggling with chronically insufficient aggregate demand.

According to Lerner’s Functional Finance principles, the private sector has a tendency not to generate enough demand on its own, and because of this inherent deficiency modern states tend to have structural, long-lasting problems maintaining full employment. The government therefore has to take on the responsibility of making sure that full employment is attained. The main instrument for doing this is open market operations – especially the selling and buying of interest-bearing government bonds.

Although Lerner seems to have held the view that the ideas embedded in Functional Finance were in principle applicable to all kinds of economies, he also recognized the importance of institutional arrangements in shaping its feasibility and practical implementation.

Functional Finance critically depends on nation states being able to tax their citizens and to have a currency — and bonds — of their own. As has become transparently clear during the Great Recession, the EMU has not been able to impose those structures, since, as Hayek noted already back in 1939, “government by agreement is only possible provided that we do not require the government to act in fields other than those in which we can obtain true agreement.” The monetary institutional structure of the EMU makes it highly unlikely – not to say impossible — that it will ever become a “system” in which Functional Finance is adopted.

In Functional Finance, the choices governments make in financing public deficits — and the concomitant debts — are important, since bond-based financing was considered more expansionary than financing through taxes. According to Lerner, the purpose of public debt is to achieve a rate of interest that results in investments making full employment feasible. In the short run this could result in deficits, but he firmly maintained that there was no reason to assume that the application of Functional Finance to maintain full employment implied that the government always had to borrow money and increase the public debt. An application of Functional Finance would have a tendency to balance the budget in the long run, since the guarantee of permanent full employment would make private investment much more attractive, and a fortiori the greater private investment would diminish the need for deficit spending.

To both Keynes and Lerner it was evident that the state had the ability to promote full employment and a stable price level – and that it should use its powers to do so. If that meant that it had to take on debt and (more or less temporarily) underbalance its budget, so be it! Public debt is neither good nor bad. It is a means to achieving two over-arching macroeconomic goals – full employment and price stability. What is sacred is not a balanced budget or the running down of public debt per se, regardless of the effects on the macroeconomic goals. If “sound finance”, austerity and balanced budgets mean increased unemployment and destabilized prices, they have to be abandoned.

Against this reasoning, exponents of the thesis of Ricardian equivalence have maintained that whether the public sector finances its expenditures through taxes or by issuing bonds is inconsequential, since bonds must sooner or later be repaid by raising taxes in the future.

Robert Barro (1974) attempted to give the proposition a firm theoretical foundation, arguing that the substitution of a budget deficit for current taxes has no impact on aggregate demand and so budget deficits and taxation have equivalent effects on the economy.

If the public sector runs extra spending through deficits, taxpayers will, according to the hypothesis, anticipate that they will have to pay higher taxes in the future — and therefore increase their savings and reduce their current consumption to be able to do so. The consequence is that aggregate demand would be no different from what it would be if taxes were raised today.

Ricardian equivalence basically means that financing government expenditures through taxes or debts is equivalent, since debt financing must be repaid with interest, and agents — equipped with rational expectations — would only increase savings in order to be able to pay the higher taxes in the future, thus leaving total expenditures unchanged.

The Ricardo-Barro hypothesis, with its view of public debt as incurring a burden on future generations, is the dominant view among mainstream economists and politicians today. The rational actors in the model are assumed to know that today’s debts are tomorrow’s taxes. One of the main problems with this standard neoclassical theory, however, is that it doesn’t fit the facts.

From a more theoretical point of view, one may also strongly criticize the Ricardo-Barro model and its concomitant crowding-out assumption: perfect capital markets do not exist, repayments of public debt can take place far into the future, and it is dubious whether we really care about generations 300 years from now.

At times when economic theories have been in favour of public debt, one gets the feeling that the more or less explicit assumption is that public expenditures are useful and good for the economy, since they work as an important — and often necessary — injection into the economy, creating wealth and employment. At times when economic theories have been against public debt, the basic assumption seems to be that public expenditures are useless, only crowd out private initiatives, and have no positive net effect on the economy.

Wolfgang Streeck argues in Buying Time: The Delayed Crisis of Democratic Capitalism (2014) for an interpretation of the more or less steady increase in public debt since the 1970s as a sign of a transformation of the tax state (Schumpeter) into a debt state. In his perspective, public debt is both an indicator of and a causal factor in the relationship between political and economic systems. The ultimate cause behind the increased public debt is the long-run decline in economic growth, which has resulted in a doubling of the average public debt in OECD countries over the last 40 years. This has put strong pressures on modern capitalist states, and, parallel to this, income inequality has increased in most countries. This is, according to Streeck, one manifestation of a neoliberal revolution – with its emphasis on supply-side politics, austerity policies and financial deregulation — in which democratic-redistributive intervention has become ineffectual.

Today there seems to be a rather widespread consensus that public debt is acceptable as long as it doesn’t increase too much or too fast. If the public debt-to-GDP ratio rises above X %, the likelihood of debt crisis and/or lower growth is said to increase.

But in discussing within which margins public debt is feasible, the focus is solely on the upper limit of indebtedness; very few ask whether there might also be a problem if public debt becomes too low.

The government’s ability to conduct an “optimal” public debt policy may be hampered if public debt becomes too small. A well-functioning secondary market in bonds requires sufficient turnover and liquidity; if these become too small, increased volatility and uncertainty will in the long run lead to higher borrowing costs. Ultimately there is even a risk that market makers disappear, leaving bond market trading to be conducted solely through brokered deals. As a precautionary measure against this eventuality, it may be argued – especially in times of financial turmoil and crisis — that it is necessary to increase government borrowing and debt to ensure, in the longer run, good borrowing preparedness and a sustained (government) bond market.

The failure of successive administrations in most developed countries to embark on any vigorous policy aimed at bringing down unconscionably high levels of unemployment has been due in no small measure to a ‘viewing with alarm’ of the size of the national debts, often alleged to be already excessive, or at least threatening to become so, and by ideologically urged striving toward ‘balanced’ government budgets without any consideration of whether such debts and deficits are or threaten to become excessive in terms of some determinable impact on the real general welfare. If they are examined in the light of their impact on welfare, however, they can usually be shown to be well below their optimum levels, let alone at levels that could have dire consequences.

To view government debts in terms of the ‘functional finance’ concept introduced by Abba Lerner, is to consider their role in the macroeconomic balance of the economy. In simple, bare bones terms, the function of government debts that is significant for the macroeconomic health of an economy is that they provide the assets into which individuals can put whatever accumulated savings they attempt to set aside in excess of what can be wisely invested in privately owned real assets. A debt that is smaller than this will cause the attempted excess savings, by being reflected in a reduced level of consumption outlays, to be lost in reduced real income and increased unemployment.

William Vickrey

Anti-Keynesianism — in most cases a sign of ignorance

19 Jun, 2015 at 10:32 | Posted in Economics | Comments Off on Anti-Keynesianism — in most cases a sign of ignorance

Yesterday Simon Wren-Lewis wrote on his blog:

When anti-Keynesians tell you that support or otherwise for Keynesian macroeconomics depends on belief about the size of the state, they are telling something about where their own views come from. When they tell you everyone ignores evidence that conflicts with their views, they are telling you how they treat evidence. And the fact that some on the right take this position tells you why anti-Keynesian views continue to survive despite overwhelming evidence in favour of Keynesian theory.

Although I mainly agree with Wren-Lewis on this issue, I think another important rationale behind this kind of rude anti-Keynesianism is pure ignorance.

A while ago, on my way home by train after a conference in Stockholm, I tried to while away the time by listening to an episode of EconTalk in which Garett Jones of George Mason University talked with EconTalk host Russ Roberts about Irving Fisher’s ideas on debt and deflation.

Jones’s thoughts on Fisher were thought-provoking and interesting, but in the middle of the discussion Roberts started to ask questions about the relation between Fisher’s ideas and those of Keynes, saying more or less something like “Keynes generated a lot of interest in his idea that the labour market doesn’t clear … because the price for labour does not adjust, i.e. wages are ‘sticky’ or ‘inflexible’.”

This is of course pure nonsense. For although Keynes in General Theory devoted substantial attention to the subject of wage rigidities, he certainly did not hold the view that wage rigidity was the reason behind high unemployment and other macroeconomic problems. To Keynes, recessions, depressions and faltering labour markets were not basically a problem of “sticky wages.”

Since unions/workers, contrary to classical assumptions, make wage-bargains in nominal terms, they will – according to Keynes – accept lower real wages caused by higher prices, but resist lower real wages caused by lower nominal wages. However, Keynes held it incorrect to attribute “cyclical” unemployment to this diversified agent behaviour. During the depression money wages fell significantly and – as Keynes noted – unemployment still grew. Thus, even when nominal wages are lowered, they do not generally lower unemployment.

In any specific labour market, lower wages could, of course, raise the demand for labour. But a general reduction in money wages would leave real wages more or less unchanged. The reasoning of the classical economists was, according to Keynes, a flagrant example of the “fallacy of composition.” Assuming that since unions/workers in a specific labour market could negotiate real wage reductions via lowering nominal wages, unions/workers in general could do the same, the classics confused micro with macro.

Lowering nominal wages could not – according to Keynes – clear the labour market. Lowering wages – and possibly prices – could, perhaps, lower interest rates and increase investment. But to Keynes it would be much easier to achieve that effect by increasing the money supply. In any case, wage reductions were not seen by Keynes as a general substitute for an expansionary monetary or fiscal policy.

Even if potentially positive impacts of lowering wages exist, there are also more heavily weighing negative impacts: deteriorating management-union relations, expectations of ongoing wage cuts causing investments to be delayed, debt deflation, et cetera.

So what Keynes actually argued in General Theory was that the classical proposition that lowering wages would lower unemployment and ultimately take economies out of depressions was ill-founded and basically wrong.

To Keynes, flexible wages would only make things worse by leading to erratic price-fluctuations. The basic explanation for unemployment is insufficient aggregate demand, and that is mostly determined outside the labor market.

To mainstream neoclassical theory the kind of unemployment that occurs is voluntary, since it is only adjustments of the hours of work that these optimizing agents make to maximize their utility. Keynes on the other hand writes in General Theory:

The classical school [maintains that] while the demand for labour at the existing money-wage may be satisfied before everyone willing to work at this wage is employed, this situation is due to an open or tacit agreement amongst workers not to work for less, and that if labour as a whole would agree to a reduction of money-wages more employment would be forthcoming. If this is the case, such unemployment, though apparently involuntary, is not strictly so, and ought to be included under the above category of ‘voluntary’ unemployment due to the effects of collective bargaining, etc …

The classical theory … is best regarded as a theory of distribution in conditions of full employment. So long as the classical postulates hold good, unemployment, which is in the above sense involuntary, cannot occur. Apparent unemployment must, therefore, be the result either of temporary loss of work of the ‘between jobs’ type or of intermittent demand for highly specialised resources or of the effect of a trade union ‘closed shop’ on the employment of free labour. Thus writers in the classical tradition, overlooking the special assumption underlying their theory, have been driven inevitably to the conclusion, perfectly logical on their assumption, that apparent unemployment (apart from the admitted exceptions) must be due at bottom to a refusal by the unemployed factors to accept a reward which corresponds to their marginal productivity …

Obviously, however, if the classical theory is only applicable to the case of full employment, it is fallacious to apply it to the problems of involuntary unemployment – if there be such a thing (and who will deny it?). The classical theorists resemble Euclidean geometers in a non-Euclidean world who, discovering that in experience straight lines apparently parallel often meet, rebuke the lines for not keeping straight – as the only remedy for the unfortunate collisions which are occurring. Yet, in truth, there is no remedy except to throw over the axiom of parallels and to work out a non-Euclidean geometry. Something similar is required to-day in economics. We need to throw over the second postulate of the classical doctrine and to work out the behaviour of a system in which involuntary unemployment in the strict sense is possible.

Unfortunately, Roberts’s statement is not the only example of this kind of utter nonsense on Keynes. Similar distortions of Keynes’s views can be found in, e.g., the economics textbooks of the “New Keynesian” — a grotesque misnomer — Greg Mankiw. How is this possible? Probably because these economists have but a very superficial acquaintance with Keynes’s own works, and rather depend on second-hand sources like Hansen, Samuelson, Hicks and the like.

Fortunately there is a solution to the problem. Keynes’s books are still in print. Read them!!

Expected utility theory is transmogrifying truth

17 Jun, 2015 at 16:40 | Posted in Economics | 6 Comments

Although expected utility theory is obviously both theoretically and descriptively inadequate, colleagues and microeconomics textbook writers all over the world gladly continue to use it, as though its deficiencies were unknown or unheard of.

Daniel Kahneman writes — in Thinking, Fast and Slow — that expected utility theory is seriously flawed since it doesn’t take into consideration the basic fact that people’s choices are influenced by changes in their wealth. Where standard microeconomic theory assumes that preferences are stable over time, Kahneman and other behavioural economists have forcefully again and again shown that preferences aren’t fixed, but vary with different reference points. How can a theory that doesn’t allow for people having different reference points from which they consider their options have an almost axiomatic status within economic theory?

The mystery is how a conception of the utility of outcomes that is vulnerable to such obvious counterexamples survived for so long. I can explain it only by a weakness of the scholarly mind … I call it theory-induced blindness: once you have accepted a theory and used it as a tool in your thinking it is extraordinarily difficult to notice its flaws … You give the theory the benefit of the doubt, trusting the community of experts who have accepted it … But they did not pursue the idea to the point of saying, “This theory is seriously wrong because it ignores the fact that utility depends on the history of one’s wealth, not only present wealth.”

On a more economic-theoretical level, information theory — and especially the so-called Kelly criterion — also highlights the problems concerning the neoclassical theory of expected utility.

Suppose I want to play a game. Let’s say we are tossing a coin. If heads comes up, I win a dollar, and if tails comes up, I lose a dollar. Suppose further that I believe I know that the coin is asymmetrical and that the probability of getting heads (p) is greater than 50% – say 60% (0.6) – while the bookmaker assumes that the coin is totally symmetric. How much of my bankroll (T) should I optimally invest in this game?

A strict neoclassical utility-maximizing economist would suggest that my goal should be to maximize the expected value of my bankroll (wealth), and according to this view, I ought to bet my entire bankroll.

Does that sound rational? Most people would answer no to that question. The risk of losing is so high that I would with high likelihood be bankrupt already after a few games played — the expected number of games until my first loss is 1/(1 – p), which in this case equals 2.5. The expected-value maximizing economist does not seem to have a particularly attractive approach.

So what’s the alternative? One possibility is to apply the so-called Kelly criterion — after the American physicist and information theorist John L. Kelly, who in the article A New Interpretation of Information Rate (1956) suggested this criterion for how to optimize the size of the bet — under which the optimum is to invest a specific fraction (x) of wealth (T) in each game. How do we arrive at this fraction?

When I win, I have (1 + x) times as much as before, and when I lose (1 – x) times as much. After n rounds, when I have won v times and lost n – v times, my new bankroll (W) is

(1) W = (1 + x)^v (1 – x)^(n – v) T

[A technical note: The bets used in these calculations are of the “quotient form” (Q), where you typically keep your bet money until the game is over, and a fortiori, in the win/lose expression it’s not included that you get back what you bet when you win. If you prefer to think of odds calculations in the “decimal form” (D), where the bet money typically is considered lost when the game starts, you have to transform the calculations according to Q = D – 1.]

The bankroll increases multiplicatively — “compound interest” — and the long-term average growth rate for my wealth can then be easily calculated by taking the logarithms of (1), which gives

(2) log (W/T) = v log (1 + x) + (n – v) log (1 – x).

If we divide both sides by n we get

(3) [log (W / T)] / n = [v log (1 + x) + (n – v) log (1 – x)] / n

The left hand side now represents the average growth rate (g) in each game. On the right hand side the ratio v/n is equal to the percentage of bets that I won, and when n is large, this fraction will be close to p. Similarly, (n – v)/n is close to (1 – p). When the number of bets is large, the average growth rate is

(4) g = p log (1 + x) + (1 – p) log (1 – x).

Now we can easily determine the value of x that maximizes g:

(5) dg/dx = d[p log (1 + x) + (1 – p) log (1 – x)]/dx = p/(1 + x) – (1 – p)/(1 – x) = 0 =>

(6) x = p – (1 – p)

Since p is the probability that I will win, and (1 – p) is the probability that I will lose, the Kelly strategy says that to optimize the growth rate of your bankroll (wealth) you should invest a fraction of the bankroll equal to the difference between the probabilities that you will win and lose. In our example, this means that in each game I should bet the fraction x = 0.6 – (1 – 0.6) = 0.2 — that is, 20% of my bankroll. Alternatively, we see that the Kelly criterion implies that we have to choose x so that E[log(1+x)] — which equals p log (1 + x) + (1 – p) log (1 – x) — is maximized. Plotting E[log(1+x)] as a function of x, we see that the value maximizing the function is 0.2:

[Figure: E[log(1 + x)] as a function of x, with its maximum at x = 0.2]
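For anyone who wants to reproduce the figure numerically, a minimal Python sketch (an illustration added here, not part of Kelly’s paper) evaluates g(x) from equation (4) on a grid and confirms the maximum:

```python
import numpy as np

p = 0.6                                   # probability of winning a toss
x = np.linspace(0.0, 0.99, 991)           # candidate fractions of the bankroll
g = p * np.log(1 + x) + (1 - p) * np.log(1 - x)   # equation (4)
print(x[np.argmax(g)])                    # -> 0.2, the Kelly fraction
print(g.max())                            # -> ~0.0201, cf. equation (7)
```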

The optimal average growth rate becomes

(7) 0.6 log (1.2) + 0.4 log (0.8) ≈ 0.02.

If I bet 20% of my wealth in tossing the coin, I will after 10 games on average have 1.02^10 times more than when I started (≈ 1.22).
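A quick Monte Carlo check (again an illustrative sketch; the sample size and seed are assumed values) confirms the long-run growth rate of the 20% strategy:

```python
import numpy as np

rng = np.random.default_rng(1)
p, x, n = 0.6, 0.2, 100_000
wins = rng.random(n) < p                       # a long sequence of tosses
log_growth = np.where(wins, np.log(1 + x), np.log(1 - x))
print(log_growth.mean())                       # -> ~0.02 per game, cf. (7)
print(np.exp(10 * log_growth.mean()))          # -> ~1.22 over ten games
```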

This game strategy will give us an outcome in the long run that is better than if we use a strategy built on the neoclassical economic theory of choice under uncertainty (risk) – expected value maximization. If we bet all our wealth in each game we will most likely lose our fortune, but because with low probability we will end up with a very large fortune, the expected value is still high. For a real-life player – for whom there is very little to gain from this type of ensemble average – it is more relevant to look at the time average of what he may be expected to win (in our game the two averages coincide only if we assume that the player has a logarithmic utility function). What good does it do me if my tossing the coin maximizes an expected value when I might have gone bankrupt after four games played? If I try to maximize the expected value, the probability of bankruptcy soon gets close to one. Better then to invest 20% of my wealth in each game and maximize my long-term average wealth growth!

When applied to the neoclassical theory of expected utility, one thinks in terms of “parallel universes” and asks what the expected return of an investment is, calculated as an average over these parallel universes. In our coin-toss example, it is as if various “I”s are tossing the coin, and the losses of many of them are offset by the huge profits that one of these “I”s makes. But this ensemble average does not work for an individual, for whom a time average better reflects the experience made in the “non-parallel universe” in which we live.
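The gap between the two averages is easy to exhibit in a simulation. In the sketch below (illustrative; the number of players, games and seed are assumed values) a million “parallel” players each bet their entire bankroll in every game: the ensemble average looks splendid, while the typical player goes bankrupt.

```python
import numpy as np

rng = np.random.default_rng(2)
p, n_games, n_players = 0.6, 20, 1_000_000
wealth = np.ones(n_players)
for _ in range(n_games):                        # every player bets everything
    wins = rng.random(n_players) < p
    wealth = np.where(wins, 2.0 * wealth, 0.0)  # double up or go bust
print(wealth.mean())        # ensemble average ~ 1.2**20 ~ 38: looks great
print(np.median(wealth))    # 0.0: the typical player is bankrupt
print((wealth > 0).mean())  # survival probability ~ 0.6**20 ~ 4e-5
```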

The Kelly criterion gives a more realistic answer, where one thinks in terms of the only universe we actually live in, and asks what the expected return of an investment is, calculated as an average over time.

Since we cannot go back in time — entropy and the “arrow of time” make this impossible — and the bankruptcy option is always at hand (extreme events and “black swans” are always possible), we have nothing to gain from thinking in terms of ensembles and “parallel universes.”

Actual events follow a fixed pattern in time, where events are often linked in a multiplicative process (as, e.g., investment returns with “compound interest”) which is basically non-ergodic.

Instead of arbitrarily assuming that people have a certain type of utility function – as in the neoclassical theory – the Kelly criterion shows that we can obtain a less arbitrary and more accurate picture of real people’s decisions and actions by basically assuming that time is irreversible. When the bankroll is gone, it’s gone. The fact that in a parallel universe it could conceivably have been refilled is of little comfort to those who live in the one and only possible world that we call the real world.

Our coin-toss example can be applied to more traditional economic issues. If we think of an investor, we can basically describe his situation in terms of our coin toss. What fraction (x) of his assets (T) should an investor – who is about to make a large number of repeated investments – bet on his feeling that he can evaluate an investment better (p = 0.6) than the market (p = 0.5)? The greater the x, the greater the leverage. But also – the greater the risk. Since p is the probability that his investment valuation is correct and (1 – p) is the probability that the market’s valuation is correct, the Kelly criterion says that he optimizes the rate of growth of his investments by investing a fraction of his assets equal to the difference between the probabilities that he will “win” and “lose.” In our example this means that at each investment opportunity he should invest the fraction x = 0.6 – (1 – 0.6) = 0.2, i.e. 20% of his assets. The optimal average growth rate of his investments is then about 2% (0.6 log (1.2) + 0.4 log (0.8)).

Kelly’s criterion shows that because we cannot go back in time, we should not take excessive risks. High leverage increases the risk of bankruptcy. This should also be a warning for the financial world, where the constant quest for greater and greater leverage – and risks – creates extensive and recurrent systemic crises. A more appropriate level of risk-taking is a necessary ingredient of any policy aiming to curb excessive risk-taking.

The works of people like Kelly and Kahneman show that expected utility theory is indeed transmogrifying truth.

Hommage à Nicholas Georgescu-Roegen

16 Jun, 2015 at 22:55 | Posted in Economics | 2 Comments

 

 
It is truly incredible that orthodox economics has always neglected The Entropy Law and the Economic Process, a work as fundamental and as important in the history of economic thought as Keynes’s General Theory.

Meta-analysis (student stuff)

16 Jun, 2015 at 10:05 | Posted in Economics | Comments Off on Meta-analysis (student stuff)

 

Econometric alchemy

15 Jun, 2015 at 17:31 | Posted in Statistics & Econometrics | Comments Off on Econometric alchemy

Thus we have “econometric modelling”, that activity of matching an incorrect version of [the parameter matrix] to an inadequate representation of [the data generating process], using insufficient and inaccurate data. The resulting compromise can be awkward, or it can be a useful approximation which encompasses previous results, throws light on economic theory and is sufficiently constant for prediction, forecasting and perhaps even policy. Simply writing down an “economic theory”, manipulating it to a “condensed form” and “calibrating” the resulting parameters using a pseudo-sophisticated estimator based on poor data which the model does not adequately describe constitutes a recipe for disaster, not for simulating gold! Its only link with alchemy is self-deception.

David Hendry

Why economic models constantly crash

15 Jun, 2015 at 16:07 | Posted in Economics | 1 Comment

To understand real world decisions and unforeseeable changes in behaviour, stationary probability distributions are of no avail. In a world full of genuine uncertainty — where real historical time rules the roost — the probabilities that ruled the past are not necessarily those that will rule the future.

In most aspects of their lives humans must plan forwards. They take decisions today that affect their future in complex interactions with the decisions of others. When taking such decisions, the available information is only ever a subset of the universe of past and present information, as no individual or group of individuals can be aware of all the relevant information. Hence, views or expectations about the future, relevant for their decisions, use a partial information set, formally expressed as a conditional expectation given the available information. Moreover, all such views are predicated on there being no unanticipated future changes in the environment pertinent to the decision. This is formally captured in the concept of ‘stationarity’. Without stationarity, good outcomes based on conditional expectations could not be achieved consistently …

Unfortunately, in most economies, the underlying distributions can shift unexpectedly. This vitiates any assumption of stationarity. The consequences for ‘dynamic stochastic general equilibrium models’ [DSGEs] are profound … The mathematical basis of a DSGE model fails when distributions shift. This would be like a fire station automatically burning down at every outbreak of a fire. Economic agents are affected by, and notice such shifts. They consequently change their plans, and perhaps the way they form their expectations. When they do so, they violate the key assumptions on which DSGEs are built …

It seems unlikely that economic agents are any more successful than professional economists in foreseeing when breaks will occur, or divining their properties from one or two observations after they have happened. That link with forecast failure has important implications for economic theories about agents’ expectations formation in a world with extrinsic unpredictability. General equilibrium theories rely heavily on ceteris paribus assumptions – especially the assumption that equilibria do not shift unexpectedly. The standard response to this is called the law of iterated expectations. Unfortunately, as we now show, the law of iterated expectations does not apply inter-temporally when the distributions on which the expectations are based change over time.

To explain the law of iterated expectations, consider a very simple example – flipping a coin. The conditional probability of getting a head tomorrow is 50%. The law of iterated expectations says that one’s current expectation of tomorrow’s probability is just tomorrow’s expectation, i.e. 50%. In short, nothing unusual happens when forming expectations of future expectations. The key step in proving the law is forming the joint distribution from the product of the conditional and marginal distributions, and then integrating to deliver the expectation …

The law of iterated expectations need not hold when the distributions shift. To return to the simple example, the expectation today of tomorrow’s probability of a head will not be 50% if the coin is changed from a fair coin to a trick coin that has, say, a 60% probability of a head.

David Hendry & Grayham Mizon

Time is what prevents everything from happening at once. To simply assume that economic processes are stationary — or even ergodic — is not a sensible way of dealing with the kind of genuine uncertainty that permeates open systems such as economies. It only leads to forecast failures and crashed models.

When we cannot accept that the observations, along the time-series available to us, are independent … we have, in strict logic, no more than one observation, all of the separate items having to be taken together. For the analysis of that the probability calculus is useless; it does not apply … I am bold enough to conclude, from these considerations that the usefulness of ‘statistical’ or ‘stochastic’ methods in economics is a good deal less than is now conventionally supposed … We should always ask ourselves, before we apply them, whether they are appropriate to the problem in hand. Very often they are not … The probability calculus is no excuse for forgetfulness.

John Hicks

Silly economics

14 Jun, 2015 at 21:02 | Posted in Economics | 1 Comment

For all its sober achievements, modern economics and its imitators in other social sciences exhibits a good deal of foolishness, strutting about claiming dignity in the manner of John Cleese.

Deirdre McCloskey


Econometric modellers — people burying their heads in the sand

14 Jun, 2015 at 17:41 | Posted in Economics | Comments Off on Econometric modellers — people burying their heads in the sand

The co-existence of confluent and structural relations implies that empirically observed failures of parametric invariance should not be immediately interpreted as ‘structural’ breaks or shifts in the real world. Rather, they often indicate model specification inadequacy in representing significantly confluent effects. Autonomy is frequently embedded in confluent relations due to the highly interdependent nature of many economic variables of interest. Modellers who stick to theory-based regression models and expect sophisticated estimators to work wonders when the variables of interest are known to correlate significantly with variables disregarded by theory are actually burying their heads in the sand.


Logically, it is obviously premature to choose parameter estimators before the model to be estimated can be regarded as a self-autonomous unit. It simply makes no sense to go for estimators consistent with mis-specified or inadequately designed models. Moreover, the precision gain achievable through choice of better estimators is usually of a far smaller order of magnitude than the gain achievable through improved model designs. This has been repeatedly shown from numerous empirical model experiments for decades.

Duo Qin

Econometrics — rhetorics and reality

14 Jun, 2015 at 15:57 | Posted in Economics | 2 Comments

The desire in the profession to make universalistic claims following certain standard procedures of statistical inference is simply too strong to embrace procedures which explicitly rely on the use of vernacular knowledge for model closure in a contingent manner.


More broadly, such a desire has played a vital role in the decisive victory of mathematical formalization over conventionally verbal based economic discourses as the principal medium of rhetoric, owing to its internal consistency, reducibility, generality, and apparent objectivity. It does not matter that [as Einstein wrote] ‘as far as the laws of mathematics refer to reality, they are not certain.’ What matters is that these laws are ‘certain’ when ‘they do not refer to reality.’ Most of what is evaluated as core research in the academic domain has little direct bearing on concrete social events in the real world anyway.

Duo Qin

