‘New Keynesian’ DSGE models

29 March, 2016 at 17:02 | Posted in Economics | 1 Comment

In the model [Gali, Smets and Wouters, Unemployment in an Estimated New Keynesian Model (2011)] there is perfect consumption insurance among the members of the household. Because of separability in utility, this implies that consumption is equalized across all workers, whether they are employed or not … Workers who find that they do not have to work are unemployed or out of the labor force, and they have cause to rejoice as a result. Unemployed workers enjoy higher utility than the employed because they receive the same level of consumption, but without having to work.

There is much evidence that in practice unemployment is not the happy experience it is for workers in the model.  For example, Chetty and Looney (2006) and Gruber (1997) find that US households suffer roughly a 10 percent drop in consumption when they lose their job. According to Couch and Placzek (2010), workers displaced through mass layoffs suffer substantial and extended reductions in earnings. Moreover, Oreopoulos, Page and Stevens (2008) present evidence that the children of displaced workers also suffer reduced earnings. Additional evidence that unemployed workers suffer a reduction in utility include the results of direct interviews, as well as findings that unemployed workers experience poor health outcomes. Clark and Oswald (1994), Oswald (1997) and Schimmack, Schupp and Wagner (2008) describe evidence that suggests unemployment has a negative impact on a worker’s self-assessment of well being. Sullivan and von Wachter (2009) report that the mortality rates of high-seniority workers jump 50-100% more than would have been expected otherwise in the year after displacement. Cox and Koo (2006) report a significant positive correlation between male suicide and unemployment in Japan and the United States. For additional evidence that unemployment is associated with poor health outcomes, see Fergusson, Horwood and Lynskey (1997) and Karsten and Moser (2009) …

Suppose the CPS [Current Population Survey] employee encountered one of the people designated as “unemployed” … and asked if she were “available for work”. What would her answer be? She knows with certainty that she will not be employed in the current period. Privately, she is delighted about this because the non-employed enjoy higher utility than the employed … Not only is she happy about not having to work, but the labor union also does not want her to work. From the perspective of the union, her non-employment is a fundamental component of the union’s strategy for promoting the welfare of its membership.

Lawrence J. Christiano

To me these kinds of ‘New Keynesian’ DSGE models, where unemployment is portrayed as bliss, are a sign of a momentous failure to model real-world unemployment. It’s not only adding insult to injury — it’s also sad gibberish that shamelessly tries to whitewash neoliberal economic policies that put people out of work.

Being able to model a ‘credible’ DSGE world — how credible a world that depicts unemployment as a ‘happy experience’ and predicts the wage markup to increase with unemployment really is, I leave to the reader to decide — a world that somehow could be considered real or similar to the real world, is not the same as investigating the real world. Even though all theories are false, since they simplify, they may still possibly serve our pursuit of truth. But then they cannot be unrealistic or false in just any way: the falsehood or unrealisticness has to be qualified.

The modeling convention used when constructing DSGE models makes it impossible to fully incorporate things that we know are of paramount importance for understanding modern economies — such as income and wealth inequality, asymmetrical power relations and information, liquidity preference, just to mention a few.

If substantive questions about the real world are being posed, it is the formalistic-mathematical representations utilized that have to match reality, not the other way around.


Nutbush

28 March, 2016 at 21:20 | Posted in Varia | 1 Comment

 

The power of self-belief …

28 March, 2016 at 15:38 | Posted in Varia | 1 Comment

[image: ego]

Axel Leijonhufvud a ‘New Keynesian’? No way!

28 March, 2016 at 14:47 | Posted in Economics | 1 Comment

Trying to delineate the difference between ‘New Keynesianism’ and ‘Post Keynesianism’ — during an interview a couple of weeks ago — yours truly was confronted by the odd and confused view that Axel Leijonhufvud was a ‘New Keynesian.’ I wasn’t totally surprised — I had run into that misapprehension before — but still, it’s strange how wrong people sometimes get things.

The last time I met Axel, we were both invited keynote speakers at the conference “Keynes 125 Years – What Have We Learned?” in Copenhagen. Axel’s speech was later published as Keynes and the crisis and contains the following thought-provoking passages:
 

So far I have argued that recent events should force us to re-examine recent monetary policy doctrine. Do we also need to reconsider modern macroeconomic theory in general? I should think so. Consider briefly a few of the issues.

The real interest rate … The problem is that the real interest rate does not exist in reality but is a constructed variable. What does exist is the money rate of interest from which one may construct a distribution of perceived real interest rates given some distribution of inflation expectations over agents. Intertemporal non-monetary general equilibrium (or finance) models deal in variables that have no real world counterparts. Central banks have considerable influence over money rates of interest as demonstrated, for example, by the Bank of Japan and now more recently by the Federal Reserve …

The representative agent. If all agents are supposed to have rational expectations, it becomes convenient to assume also that they all have the same expectation and thence tempting to jump to the conclusion that the collective of agents behaves as one. The usual objection to representative agent models has been that it fails to take into account well-documented systematic differences in behaviour between age groups, income classes, etc. In the financial crisis context, however, the objection is rather that these models are blind to the consequences of too many people doing the same thing at the same time, for example, trying to liquidate very similar positions at the same time. Representative agent models are peculiarly subject to fallacies of composition. The representative lemming is not a rational expectations intertemporal optimising creature. But he is responsible for the fat tail problem that macroeconomists have the most reason to care about …

For many years now, the main alternative to Real Business Cycle Theory has been a somewhat loose cluster of models given the label of New Keynesian theory. New Keynesians adhere on the whole to the same DSGE modeling technology as RBC macroeconomists but differ in the extent to which they emphasise inflexibilities of prices or other contract terms as sources of short-term adjustment problems in the economy. The “New Keynesian” label refers back to the “rigid wages” brand of Keynesian theory of 40 or 50 years ago. Except for this stress on inflexibilities this brand of contemporary macroeconomic theory has basically nothing Keynesian about it.

The obvious objection to this kind of return to an earlier way of thinking about macroeconomic problems is that the major problems that have had to be confronted in the last twenty or so years have originated in the financial markets – and prices in those markets are anything but “inflexible”. But there is also a general theoretical problem that has been festering for decades with very little in the way of attempts to tackle it. Economists talk freely about “inflexible” or “rigid” prices all the time, despite the fact that we do not have a shred of theory that could provide criteria for judging whether a particular price is more or less flexible than appropriate to the proper functioning of the larger system. More than seventy years ago, Keynes already knew that a high degree of downward price flexibility in a recession could entirely wreck the financial system and make the situation infinitely worse. But the point of his argument has never come fully to inform the way economists think about price inflexibilities …

I began by arguing that there are three things we should learn from Keynes … The third was to ask whether events proved that existing theory needed to be revised. On that issue, I conclude that dynamic stochastic general equilibrium theory has shown itself an intellectually bankrupt enterprise. But this does not mean that we should revert to the old Keynesian theory that preceded it (or adopt the New Keynesian theory that has tried to compete with it). What we need to learn from Keynes, instead, are these three lessons about how to view our responsibilities and how to approach our subject.

Axel Leijonhufvud a ‘New Keynesian’? Forget it!

The shaky mathematical basis of DSGE models

28 March, 2016 at 14:14 | Posted in Economics | 1 Comment

In most aspects of their lives humans must plan forwards. They take decisions today that affect their future in complex interactions with the decisions of others. When taking such decisions, the available information is only ever a subset of the universe of past and present information, as no individual or group of individuals can be aware of all the relevant information. Hence, views or expectations about the future, relevant for their decisions, use a partial information set, formally expressed as a conditional expectation given the available information.

Moreover, all such views are predicated on there being no un-anticipated future changes in the environment pertinent to the decision. This is formally captured in the concept of ‘stationarity’. Without stationarity, good outcomes based on conditional expectations could not be achieved consistently. Fortunately, there are periods of stability when insights into the way that past events unfolded can assist in planning for the future.

The world, however, is far from completely stationary. Unanticipated events occur, and they cannot be dealt with using standard data-transformation techniques such as differencing, or by taking linear combinations, or ratios. In particular, ‘extrinsic unpredictability’ – unpredicted shifts of the distributions of economic variables at unanticipated times – is common. As we shall illustrate, extrinsic unpredictability has dramatic consequences for the standard macroeconomic forecasting models used by governments around the world – models known as ‘dynamic stochastic general equilibrium’ models – or DSGE models …

Many of the theoretical equations in DSGE models take a form in which a variable today, say incomes (denoted as y_t), depends inter alia on its ‘expected future value’… For example, y_t may be the log-difference between a de-trended level and its steady-state value. Implicitly, such a formulation assumes some form of stationarity is achieved by de-trending.

Unfortunately, in most economies, the underlying distributions can shift unexpectedly. This vitiates any assumption of stationarity. The consequences for DSGEs are profound. As we explain below, the mathematical basis of a DSGE model fails when distributions shift … This would be like a fire station automatically burning down at every outbreak of a fire. Economic agents are affected by, and notice such shifts. They consequently change their plans, and perhaps the way they form their expectations. When they do so, they violate the key assumptions on which DSGEs are built.

David Hendry & Grayham Mizon

A great article, not only showing on what shaky mathematical basis DSGE models are built, but also confirming much of Keynes’s critique of econometrics, underlining that to understand real-world ‘non-routine’ decisions and unforeseeable changes in behaviour, stationary probability distributions are of no avail. In a world full of genuine uncertainty — where real historical time rules the roost — the probabilities that ruled the past are not those that will rule the future.
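
To make the ‘extrinsic unpredictability’ point a little more concrete, here is a minimal simulation sketch (all numbers hypothetical): a forecaster who assumes stationarity and projects the pre-break mean will be systematically wrong once the distribution of the series shifts at an unanticipated date.

```python
# Minimal sketch (hypothetical numbers) of 'extrinsic unpredictability':
# the forecaster estimates the mean on pre-break data, assuming stationarity,
# and keeps using it after an unanticipated location shift in the distribution.
import numpy as np

rng = np.random.default_rng(1)
T, break_point = 200, 100

y = np.concatenate([
    rng.normal(loc=2.0, scale=1.0, size=break_point),       # 'stable' regime
    rng.normal(loc=-1.0, scale=1.0, size=T - break_point),  # unanticipated shift
])

forecast = y[:break_point].mean()              # 'optimal' forecast under stationarity
post_break_errors = y[break_point:] - forecast

print(f"forecast based on pre-break data: {forecast:.2f}")
print(f"average post-break forecast error: {post_break_errors.mean():.2f}")  # systematically biased
```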

Advocates of DSGE modeling want to have deductively automated answers to fundamental causal questions. But to apply “thin” methods we have to have “thick” background knowledge of what’s going on in the real world, and not in idealized models. Conclusions can only be as certain as their premises — and that also applies to the quest for causality and forecasting predictability in DSGE models.

Two reasons DSGE models are such spectacular failures

26 March, 2016 at 17:29 | Posted in Economics | 4 Comments

The unsellability of DSGE models — private-sector firms do not pay lots of money to use DSGE models — is one strong argument against DSGE.

But it is not the most damning critique of it.

To me the most damning critiques that can be levelled against DSGE models are the following two:

(1) DSGE models are unable to explain involuntary unemployment

In the basic DSGE models the labour market is always cleared – responding to a changing interest rate, expected lifetime incomes, or real wages, the representative agent maximizes the utility function by varying her labour supply, money holding and consumption over time. Most importantly – if the real wage somehow deviates from its ‘equilibrium value,’ the representative agent adjusts her labour supply, so that when the real wage is higher than its ‘equilibrium value,’ labour supply is increased, and when the real wage is below its ‘equilibrium value,’ labour supply is decreased.

In this model world, unemployment is always an optimal response to changes in labour market conditions. Hence, unemployment is totally voluntary. To be unemployed is something one optimally chooses to be.
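
A minimal sketch of this mechanism (the functional form and parameters are hypothetical, chosen only for illustration): with quasi-linear utility and a simple budget constraint, the first-order condition pins down hours worked as an increasing function of the real wage, so any fall in employment is, by construction, a voluntary reduction of labour supply.

```python
# Minimal sketch (hypothetical functional form and parameters): the representative
# agent chooses hours h to maximize u = c - h**(1+phi)/(1+phi) subject to c = w*h.
# The first-order condition w = h**phi gives the labour supply h = w**(1/phi).

def optimal_hours(w, phi=2.0):
    """Hours the utility-maximizing representative agent chooses at real wage w."""
    return w ** (1.0 / phi)

# Real wage above, at, and below its 'equilibrium value' of 1.0:
for w in (1.2, 1.0, 0.8):
    print(f"real wage {w:.1f} -> chosen hours {optimal_hours(w):.2f}")
# A lower real wage means fewer hours chosen: in this model world all
# 'unemployment' is an optimal, voluntary cut in labour supply.
```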

Although this picture of unemployment as a kind of self-chosen optimality strikes most people as utterly ridiculous, there are also, unfortunately, a lot of neoclassical economists out there who still think that price and wage rigidities are the prime movers behind unemployment. DSGE models basically explain variations in employment (and a fortiori output) by assuming that nominal wages are more flexible than prices – disregarding the lack of empirical evidence for this rather counterintuitive assumption.

Lowering nominal wages would not clear the labour market. Lowering wages – and possibly prices – could, perhaps, lower interest rates and increase investment. It would be much easier to achieve that effect by increasing the money supply. In any case, wage reductions were not seen as a general substitute for an expansionary monetary or fiscal policy. And even if potentially positive impacts of lowering wages exist, there are also negative impacts that weigh more heavily – management-union relations deteriorating, expectations of ongoing wage cuts causing investment to be delayed, debt deflation et cetera.

The classical proposition that lowering wages would lower unemployment and ultimately take economies out of depressions was ill-founded and basically wrong. Flexible wages would probably only make things worse by leading to erratic price fluctuations. The basic explanation for unemployment is insufficient aggregate demand, and that is mostly determined outside the labour market.

Obviously it’s rather embarrassing that the kind of DSGE models ‘modern’ macroeconomists use cannot incorporate such a basic fact of reality as involuntary unemployment. Of course, when working with representative agent models, this should come as no surprise. The only kind of unemployment that can occur is voluntary, since it is only adjustments of the hours of work that these optimizing agents make to maximize their utility.

(2) In DSGE models increases in government spending lead to a drop in private consumption

In the most basic mainstream proto-DSGE models one often assumes that governments finance current expenditures with current tax revenues. This will have a negative income effect on the households, leading — rather counterintuitively — to a drop in private consumption although both employment and production expand. This mechanism also holds when the (in)famous Ricardian equivalence is added to the models.

Ricardian equivalence basically means that financing government expenditures through taxes or debts is equivalent, since debt financing must be repaid with interest, and agents — equipped with rational expectations — would only increase savings in order to be able to pay the higher taxes in the future, thus leaving total expenditures unchanged.

Why?

In the standard neoclassical consumption model — used in DSGE macroeconomic modeling — people are basically portrayed as treating time as a dichotomous phenomenon – today and the future – when contemplating making decisions and acting. How much should one consume today and how much in the future? Facing an intertemporal budget constraint of the form

c_t + c_f/(1+r) = f_t + y_t + y_f/(1+r),

where c_t is consumption today, c_f is consumption in the future, f_t is holdings of financial assets today, y_t is labour incomes today, y_f is labour incomes in the future, and r is the real interest rate, and having a lifetime utility function of the form

U = u(c_t) + au(c_f),

where a is the time discounting parameter, the representative agent (consumer) maximizes his utility when

u'(c_t) = a(1+r)u'(c_f).

This expression – the Euler equation – implies that the representative agent (consumer) is indifferent between consuming one more unit today and consuming it tomorrow. Typically using a logarithmic functional form – u(c) = log c – which gives u'(c) = 1/c, the Euler equation can be rewritten as

1/c_t = a(1+r)(1/c_f),

or

c_f/c_t = a(1+r).

This importantly implies that, according to the neoclassical consumption model, changes in the (real) interest rate and consumption move in the same direction. And it also follows that consumption is invariant to the timing of taxes, since wealth — f_t + y_t + y_f/(1+r) — has to be interpreted as the present discounted value net of taxes. And so, according to the assumption of Ricardian equivalence, the timing of taxes does not affect consumption, simply because the maximization problem as specified in the model is unchanged. As a result — households cut down on their consumption when governments increase their spending. Mirabile dictu!
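
To see the Ricardian-equivalence claim numerically, here is a minimal sketch (all numbers hypothetical) of the two-period problem above: with log utility, the Euler equation and the budget constraint pin consumption down as a function of discounted net wealth only, so shifting a lump-sum tax from today to the future at the same present value leaves consumption unchanged.

```python
# Minimal sketch (hypothetical numbers) of the two-period model above:
# max u(c_t) + a*u(c_f) with u(c) = log(c), subject to
# c_t + c_f/(1+r) = f_t + (y_t - tax_t) + (y_f - tax_f)/(1+r).
# The Euler equation 1/c_t = a*(1+r)*(1/c_f) plus the budget constraint give
# c_t = wealth/(1+a) and c_f = a*(1+r)*c_t -- functions of net wealth only.

def optimal_consumption(f_t, y_t, y_f, tax_t, tax_f, r, a):
    wealth = f_t + (y_t - tax_t) + (y_f - tax_f) / (1 + r)  # present value net of taxes
    c_t = wealth / (1 + a)
    c_f = a * (1 + r) * c_t
    return c_t, c_f

r, a = 0.05, 0.95

# Case 1: the government levies a lump-sum tax of 10 today.
print(optimal_consumption(f_t=20, y_t=100, y_f=100, tax_t=10.0, tax_f=0.0, r=r, a=a))

# Case 2: it borrows today and levies 10*(1+r) tomorrow instead.
# Same present value of taxes -> identical consumption ('Ricardian equivalence').
print(optimal_consumption(f_t=20, y_t=100, y_f=100, tax_t=0.0, tax_f=10.0 * (1 + r), r=r, a=a))
```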

Benchmark DSGE models have paid little attention to the role of fiscal policy, therefore minimising any possible interaction of fiscal policies with monetary policy. This has been partly because of the assumption of Ricardian equivalence. As a result, the distribution of taxes across time become irrelevant and aggregate financial wealth does not matter for the behavior of agents or for the dynamics of the economy because bonds do not represent net real wealth for households.

Incorporating more meaningfully the role of fiscal policies requires abandoning frameworks with the Ricardian equivalence. The question is how to break the Ricardian equivalence? Two possibilities are available. The first is to move to an overlapping generations framework and the second (which has been the most common way of handling the problem) is to rely on an infinite-horizon model with a type of liquidity constrained agents (eg “rule of thumb agents”).

Camillo Tovar

Yours truly totally agrees that macroeconomic models have to abandon the Ricardian equivalence nonsense. But replacing it with “overlapping generations” and “infinite-horizon” models — isn’t that — in terms of realism and relevance — just getting out of the frying pan into the fire? All unemployment is still voluntary. Intertemporal substitution between labour and leisure is still ubiquitous. And the specification of the utility function is still hopelessly off the mark from an empirical point of view.

As one Nobel laureate had it:

Ricardian equivalence is taught in every graduate school in the country. It is also sheer nonsense.

Joseph E. Stiglitz, twitter 

And as one economics blogger has it:

DSGE modeling is taught in every graduate school in the country. It is also sheer nonsense.

Lars Syll, twitter 

Why public debt is a good thing

26 March, 2016 at 15:27 | Posted in Economics | 2 Comments

The U.S. economy has, on the whole, done pretty well these past 180 years, suggesting that having the government owe the private sector money might not be all that bad a thing. The British government, by the way, has been in debt for more than three centuries, an era spanning the Industrial Revolution, victory over Napoleon, and more.

But is the point simply that public debt isn’t as bad as legend has it? Or can government debt actually be a good thing?

Believe it or not, many economists argue that the economy needs a sufficient amount of public debt out there to function well. And how much is sufficient? Maybe more than we currently have. That is, there’s a reasonable argument to be made that part of what ails the world economy right now is that governments aren’t deep enough in debt.

Paul Krugman

Indeed.

Krugman is absolutely right.

Why?

Through history public debts have gone up and down, often expanding in periods of war or large changes in basic infrastructure and technologies, and then going down in periods when things have settled down.

The pros and cons of public debt have been put forward for as long as the phenomenon itself has existed, but it has, notwithstanding that, not been possible to reach anything close to consensus on the issue — at least not in a long time-horizon perspective. One has as a rule not even been able to agree on whether public debt is a problem, and if it is — when it is a problem and how best to tackle it. Some of the more prominent reasons for this non-consensus are the complexity of the issue, the mingling of vested interests, ideology, psychological fears, the uncertainty of calculating and estimating inter-generational effects, etc., etc.

In classical economics — following in the footsteps of David Hume – especially Adam Smith, David Ricardo, and Jean-Baptiste Say put forward views on public debt that were as a rule negative. The good budget was a balanced budget. If government borrowed money to finance its activities, it would only lead to the ‘crowding out’ of private enterprise and investments. The state was generally considered incapable of paying its debts, and the real burden would therefore essentially fall on the taxpayers who ultimately had to pay for the irresponsibility of government. The moral character of the argumentation was a salient feature — according to Hume, “either the nation must destroy public credit, or the public credit will destroy the nation.”

Later on in the 20th century economists like John Maynard Keynes, Abba Lerner and Alvin Hansen would hold a more positive view on public debt. Public debt was normally nothing to fear, especially if it was financed within the country itself (but even foreign loans could be beneficial for the economy if invested in the right way). Some members of society would hold bonds and earn interest on them, while others would have to pay the taxes that ultimately paid the interest on the debt. But the debt was not considered a net burden for society as a whole, since the debt cancelled itself out between the two groups. If the state could issue bonds at a low interest rate, unemployment could be reduced without necessarily resulting in strong inflationary pressure. And the inter-generational burden was no real burden according to this group of economists, since — if used in a suitable way — the debt would, through its effects on investments and employment, actually make future generations net winners. There could, of course, be unwanted negative distributional side effects for future generations, but that was mostly considered a minor problem since, as Lerner put it, “if our children or grandchildren repay some of the national debt these payments will be made to our children and grandchildren and to nobody else.”

Central to the Keynesian influenced view is the fundamental difference between private and public debt. Conflating the one with the other is an example of the atomistic fallacy, which is basically a variation on Keynes’ savings paradox. If an individual tries to save and cut down on debts, that may be fine and rational, but if everyone tries to do it, the result would be lower aggregate demand and increasing unemployment for the economy as a whole.

An individual always has to pay his debts. But a government can always pay back old debts with new, through the issue of new bonds. The state is not like an individual. Public debt is not like private debt. Government debt is essentially a debt to itself, its citizens. Interest paid on the debt is paid by the taxpayers on the one hand, but on the other hand, interest on the bonds that finance the debts goes to those who lend out the money.

To both Keynes and Lerner it was evident that the state had the ability to promote full employment and a stable price level – and that it should use its powers to do so. If that meant that it had to take on debt and (more or less temporarily) underbalance its budget – so let it be! Public debt is neither good nor bad. It is a means to achieving two over-arching macroeconomic goals – full employment and price stability. Having a balanced budget or running down public debt is not sacred per se, regardless of the effects on the macroeconomic goals. If ‘sound finance’, austerity and balanced budgets mean increased unemployment and destabilized prices, they have to be abandoned.

Now, against this reasoning, exponents of the thesis of Ricardian equivalence have maintained that whether the public sector finances its expenditures through taxes or by issuing bonds is inconsequential, since bonds must sooner or later be repaid by raising taxes in the future.

In the 1970s Robert Barro attempted to give the proposition a firm theoretical foundation, arguing that the substitution of a budget deficit for current taxes has no impact on aggregate demand and so budget deficits and taxation have equivalent effects on the economy.

The Ricardo-Barro hypothesis, with its view of public debt incurring a burden for future generations, is the dominant view among mainstream economists and politicians today. The rational people making up the actors in the model are assumed to know that today’s debts are tomorrow’s taxes. One of the main problems with this standard neoclassical theory, however, is that it doesn’t fit the facts.

From a more theoretical point of view, one may also strongly criticize the Ricardo-Barro model and its concomitant crowding-out assumption, since perfect capital markets do not exist, repayments of public debt can take place far into the future, and it’s dubious whether we really care for generations 300 years from now.

Today there seems to be a rather widespread consensus of public debt being acceptable as long as it doesn’t increase too much and too fast. If the public debt-GDP ratio becomes higher than X % the likelihood of debt crisis and/or lower growth increases.

But in discussing within which margins public debt is feasible, the focus is solely on the upper limit of indebtedness, and very few ask whether there might also be a problem if public debt becomes too low.

The government’s ability to conduct an “optimal” public debt policy may be negatively affected if public debt becomes too small. To guarantee a well-functioning secondary market in bonds it is essential that the government has access to a functioning market. If turnover and liquidity in the secondary market become too small, increased volatility and uncertainty will in the long run lead to an increase in borrowing costs. Ultimately there’s even a risk that market makers would disappear, leaving bond market trading to be operated solely through brokered deals. As a kind of precautionary measure against this eventuality it may be argued – especially in times of financial turmoil and crises – that it is necessary to increase government borrowing and debt to ensure – in a longer run – good borrowing preparedness and a sustained (government) bond market.

The question of whether public debt is good, and whether we may actually have too little of it, is one of our time’s biggest questions. Giving the wrong answer to it — as Krugman notices — will be costly:

The great debt panic that warped the U.S. political scene from 2010 to 2012, and still dominates economic discussion in Britain and the eurozone, was even more wrongheaded than those of us in the anti-austerity camp realized.

Not only were governments that listened to the fiscal scolds kicking the economy when it was down, prolonging the slump; not only were they slashing public investment at the very moment bond investors were practically pleading with them to spend more; they may have been setting us up for future crises.

And the ironic thing is that these foolish policies, and all the human suffering they created, were sold with appeals to prudence and fiscal responsibility.

Preachers of austerity — false prophets

26 March, 2016 at 10:24 | Posted in Economics | 1 Comment

We are not going to get out of the economic doldrums as long as we continue to be obsessed with the unreasoned ideological goal of reducing the so-called deficit. The “deficit” is not an economic sin but an economic necessity …

The administration is trying to bring the Titanic into harbor with a canoe paddle, while Congress is arguing over whether to use an oar or a paddle, and the Perot’s and budget balancers seem eager to lash the helm hard-a-starboard towards the iceberg. Some of the argument seems to be over which foot is the better one to shoot ourselves in. We have the resources in terms of idle manpower and idle plants to do so much, while the preachers of austerity, most of whom are in little danger of themselves suffering any serious consequences, keep telling us to tighten our belts and refrain from using the resources that lay idle all around us.

Alexander Hamilton once wrote “A national debt, if it be not excessive, would be for us a national treasure.” William Jennings Bryan used to declaim, “You shall not crucify mankind upon a cross of gold.” Today’s cross is not made of gold, but is concocted of a web of obfuscatory financial rectitude from which human values have been expunged.

William Vickrey

Modern economics — the true picture

25 March, 2016 at 15:21 | Posted in Varia | 3 Comments

Most mainstream economists think ‘modern’ economics looks like this:

[image: noahbuilding]

 
In reality, I would argue, it looks more like this:

[image: neoclassical building]

DSGE models — a report from the ‘scientific battlefield’

23 March, 2016 at 11:04 | Posted in Economics | 1 Comment

teamIn conclusion, one can say that the sympathy that some of the traditional and Post-Keynesian authors show towards DSGE models is rather hard to understand. Even before the recent financial and economic crisis put some weaknesses of the model – such as the impossibility of generating asset price bubbles or the lack of inclusion of financial sector issues – into the spotlight and brought them even to the attention of mainstream media, the models’ inner working were highly questionable from the very beginning. While one can understand that some of the elements in DSGE models seem to appeal to Keynesians at first sight, after closer examination, these models are in fundamental contradiction to Post-Keynesian and even traditional Keynesian thinking. The DSGE model is a model in which output is determined in the labour market as in New Classical models and in which aggregate demand plays only a very secondary role, even in the short run.

In addition, given the fundamental philosophical problems presented for the use of DSGE models for policy simulation, namely the fact that a number of parameters used have completely implausible magnitudes and that the degree of freedom for different parameters is so large that DSGE models with fundamentally different parametrization (and therefore different policy conclusions) equally well produce time series which fit the real-world data, it is also very hard to understand why DSGE models have reached such a prominence in economic science in general.

Sebastian Dullien

Neither New Classical nor ‘New Keynesian’ microfounded DSGE macro models have helped us foresee, understand or craft solutions to the problems of today’s economies. But still most young academic macroeconomists want to work with DSGE models. After reading Dullien’s article, that should certainly be a very worrying confirmation that economics — at least from the point of view of realism and relevance — is becoming more and more a waste of time. Why do these young bright guys waste their time and efforts? Besides aspirations of being published, I think maybe Frank Hahn gave the truest answer back in 2005 when, interviewed on the occasion of his 80th birthday, he confessed that some economic assumptions didn’t really say anything about ‘what happens in the world,’ but still had to be considered very good ‘because it allows us to get on this job.’

Modern macro — ‘genuine plurality’ vs. ‘axiomatic variation’

23 March, 2016 at 10:26 | Posted in Economics | Comments Off on Modern macro — ‘genuine plurality’ vs. ‘axiomatic variation’

The DSGE mainstream – which is made up of new classical macroeconomics and neo-Keynesianism – is unanimously based on the core assumptions that characterize the paradigm of social exchange theory. These are rationality, ergodicity and substitutionality, the exclusive acceptance of a formal mathematical-deductive, positivist reductionism. After the ‘empirical turn’ of the last two or three decades, these have been combined with sophisticated micro- and macroeconometrics, or with experimental arrangements, such as are familiar from the leading natural sciences (physics and chemistry). The postulate of stability and optimality (Walras’s law), which is implemented a priori in the core assumptions, serves as a ‘model solution,’ and thus functions as a marker of a negative heuristic. The apparently very different model prognoses of new classical macroeconomics (hyper-balanced and hyper-stable) on the one hand, and of standard and neo-Keynesianism (unbalanced, open to intervention) on the other hand are based on changes to assumptions in the ‘protective belt’ (e.g. about the speed of adjustment, the rigidity of prices and quantities, the formation of expectations etc.), but do not actually point to a different paradigmatic origin of the two schools of theory.

Arne Heise & Sebastian Thieme

Rivers of belief (personal)

22 March, 2016 at 22:27 | Posted in Varia | Comments Off on Rivers of belief (personal)

 

This one is for you, Kristina

But in dreams,
I can hear your name.
And in dreams,
We will meet again.

When the seas and mountains fall
And we come to end of days,
In the dark I hear a call
Calling me there
I will go there
And back again.

Economics — the ‘ten million cool theories’ problem

22 March, 2016 at 19:15 | Posted in Economics | 1 Comment

By Noah Smith’s own account, the field of economics is experiencing an empirical revolution. Unlike the past, it has become necessary to test theories against reality. That places the field of economics many decades behind the field of evolution and numerous fields in the human social sciences that have been rigorously evidence-based all along. Earth to the economics profession: Welcome to Science 101!

But there is more to Science 101 than the need to test theories. Let’s imagine that there were ten million cool theories out there. How long would it take to test them? Hundreds of millions of years. Does it really need explaining that the choice of the theory to test is important? Does Smith really believe that any old idea that comes into the head of an economist is equally worthy of attention?

The main reason that the so-called orthodox school of economics achieved its dominance is because it seemed to offer a grand unifying theoretical framework. Too bad that its assumptions were absurd and little effort was made to test its empirical predictions. Its failure does not change the fact that some unifying theoretical framework is required to prevent the “ten million cool theories” problem.

David Sloan Wilson

For my own view — rather similar to Wilson’s — on the ‘ten million cool theories’ problem, see here.

Let me lead you through the streets of London (personal)

22 March, 2016 at 18:36 | Posted in Varia | Comments Off on Let me lead you through the streets of London (personal)

 

The non-existence of economic laws

22 March, 2016 at 18:28 | Posted in Economics | Comments Off on The non-existence of economic laws

In mainstream economics there’s — still — a lot of talk about ‘economic laws.’ The crux of these laws — and regularities — that allegedly do exist in economics is that they only hold ceteris paribus. That fundamentally means that these laws/regularities only hold when the right conditions are at hand for giving rise to them. Unfortunately, from an empirical point of view, those conditions are only at hand in artificially closed nomological models purposely designed to give rise to the kind of regular associations that economists want to explain. But, really, since these laws/regularities do not exist outside these ‘socio-economic machines,’ what’s the point in constructing thought experimental models showing these non-existent laws/regularities? When the almost endless list of narrow and specific assumptions necessary to allow the ‘rigorous’ deductions is known to be at odds with reality, what good do these models do?

Deducing laws in theoretical models is of no avail if you cannot show that the models — and the assumptions they build on — are realistic representations of what goes on in real-life.

Conclusion? Instead of restricting our methodological endeavours to building ever more rigorous and precise deductive models, we ought to spend much more time improving our methods for choosing models!

Formalistic deductive “Glasperlenspiel” can be very impressive and seductive. But in the realm of science it ought to be considered of little or no value to simply make claims about the model and lose sight of reality.

There is a difference between having evidence for some hypothesis and having evidence for the hypothesis relevant for a given purpose. The difference is important because scientific methods tend to be good at addressing hypotheses of a certain kind and not others: scientific methods come with particular applications built into them … The advantage of mathematical modelling is that its method of deriving a result is that of mathematical proof: the conclusion is guaranteed to hold given the assumptions. However, the evidence generated in this way is valid only in abstract model worlds while we would like to evaluate hypotheses about what happens in economies in the real world … The upshot is that valid evidence does not seem to be enough. What we also need is to evaluate the relevance of the evidence in the context of a given purpose.

 

Panglossian macroeconomics

22 March, 2016 at 11:07 | Posted in Economics | 2 Comments

Economic science does an excellent job of displacing bad ideas with good ones. It’s happening every day. For every person who places obstacles in the way of good science to protect his or her turf, there are five more who are willing to publish innovative papers in good journals, and to promote revolutionary ideas that might be destructive for the powers-that-be. The state of macro is sound …

Stephen Williamson

Sure, and soon it’s Easter and the Swedish witches fly to Blåkulla (“the Blue Mountain”) to meet the devil …

Debt, credit and financial instability

21 March, 2016 at 16:11 | Posted in Economics | 1 Comment

 

In a blogpost the other day, Simon Wren-Lewis was airing some misgivings about MMT. According to Wren-Lewis there’s basically nothing new about MMT — everything has been ‘well known long before MMT.’ And, argues Wren-Lewis:

Some have commented that my recent discussion of fiscal rules ignores the fact that governments can finance investment, or anything else, by creating money. What would happen if the government started doing exactly that: stopped issuing debt and just created money. Let’s assume that real output is at its ‘full employment’ level. That would force interest rates down, which in turn would raise demand and create inflationary pressure, which is not really desirable. MMTers tend to ignore this, and it is not at all clear why.

‘Stopped issuing debt and just created money’?

Hmmm …

If there’s one single thing MMTers have been very adamant about and stressing all the time, it is that creating money is nothing but a form of debt that has to — as emphasized in the Bezemer video above — show up on the liability side of the central bank accounts. One agent’s assets are always another agent’s liabilities. Although Wren-Lewis admits never having ‘heard of MMT’ before starting his blog, this, however, I thought, was common knowledge …
 
 
Added 18:30 GMT: Obviously yours truly is not alone in wondering about modern Oxford scholarship …

He suggests that he (and his ilk) knew all of the propositions advanced by MMT all along anyway – so ‘there is nothing new about it’.

Which then begs the question as to why they are still teaching things like:

1. The money multiplier.

2. Crowding out.

3. Fiscal deficits cause higher interest rates.

4. Central bank controls the money supply.

5. Inflation is higher if governments ‘print money’ to match their deficits relative to issuing debt …

So they knew all of the myths in the mainstream macroeconomics textbooks all along but still choose to teach them in their courses …

These characters have built an image of MMT in their own minds from some garbled accounts etc and choose to hide behind a few crude and incorrect propositions that they attribute to MMT academics (such as, ‘deficits do not matter’) to make themselves feel comfortable that they know everything anyway and still have a body of economics that is relevant.

The lack of scholarship is astounding – but then what would you expect when you examine the course material they offer their students and the sort of statements they make on the public record.

Bill Mitchell

Slandering Bernie Sanders

21 March, 2016 at 13:23 | Posted in Economics | 1 Comment

Sanders has been dismissed as selling unrealistic pipe dreams. Social Security would be a pipe dream if we did not already have it; so would Medicare and public education too. There is a lesson in that. Pipe dreams are the stuff of change.

4180503Rather than an excess of pipe dreams, our current dismal condition is the product of fear of dreaming. The Democratic Party establishment persistently strives to downsize economic and political expectations. Senator Sanders aims to upsize them, which is why he has been viewed as such a threat.

November will be a time for Democratic voters to come together to stop whoever the Republicans nominate. In the meantime, there is a big lesson to be learned.

Today, the status quo defense mechanism has been used to tarnish Bernie Sanders: tomorrow it will, once again, be used to rule out progressive policy personnel and options.

Progressives must surface the obstruction posed by the Democratic Party establishment. Primaries are prime time to do that, which means there is good reason for Sanders’ campaign to continue.

Thomas Palley

Science and truth

21 March, 2016 at 13:07 | Posted in Theory of Science & Methodology | 1 Comment

In my view, scientific theories are not to be considered ‘true’ or ‘false.’ In constructing such a theory, we are not trying to get at the truth, or even to approximate to it: rather, we are trying to organize our thoughts and observations in a useful manner.

Robert Aumann

 

What a handy view of science.

How reassuring for all of you who have always thought that believing in the tooth fairy makes you understand what happens to kids’ teeth. Now a ‘Nobel prize’ winning economist tells you that whether there are such things as tooth fairies or not doesn’t really matter. Scientific theories are not about what is true or false, but whether ‘they enable us to organize and understand our observations’ …

Mirabile dictu!

What Aumann and other defenders of scientific storytelling ‘forget’ is that potential explanatory power achieved in thought experimental models is not enough for attaining real explanations. Model explanations are at best conjectures, and whether they do or do not explain things in the real world is something we have to test. To just believe that you understand or explain things better with thought experiments is not enough. Without a warranted export certificate to the real world, model explanations are pretty worthless. Proving things in models is not enough. Truth is an important concept in real science.

Card and Krueger on minimum wage

20 March, 2016 at 22:23 | Posted in Economics | 2 Comments

“Myth and Measurement” was recently re-released as a 20th anniversary edition. While it’s common now to hear that raising the minimum wage won’t increase unemployment, the initial empirical work by Card and Krueger wasn’t immediately embraced by other economists …

And it makes sense why some economists would find Card and Krueger’s results so shocking and at times literally unbelievable. Simple supply and demand tells us that if a higher minimum wage pushes a wage rate above its equilibrium level, then the amount of labor demanded by firms will decline and the number of jobs will drop as a result. But that’s the simple theory of supply and demand in a perfectly competitive labor market. Card and Krueger’s analysis—along with further work by other economists—shows that the predictions of the perfectly competitive market don’t come true.

Of course, not everyone in the economics profession has come to the same conclusion as Card and Krueger …

Nick Bunker/Equitablog

No, indeed, not everyone came to the same conclusion …

The inverse relationship between quantity demanded and price is the core proposition in economic science, which embodies the pre-supposition that human choice behavior is sufficiently rational to allow predictions to be made. Just as no physicist would claim that “water runs uphill,” no self-respecting economist would claim that increases in the minimum wage increase employment. Such a claim, if seriously advanced, becomes equivalent to a denial that there is even minimal scientific content in economics, and that, in consequence, economists can do nothing but write as advocates for ideological interests. Fortunately, only a handful of economists are willing to throw over the teaching of two centuries; we have not yet become a bevy of camp-following whores.

James M. Buchanan in Wall Street Journal (April 25, 1996)

Noahpinion and the empirical ‘revolution’ in economics

20 March, 2016 at 13:32 | Posted in Economics | 2 Comments

Experimental-Economics-1-1038x576But I think that more important than any of these theoretical changes … is the empirical revolution in econ. Ten million cool theories are of little use beyond the “gee whiz” factor if you can’t pick between them. Until recently, econ was fairly bad about agreeing on rigorous ways to test theories against reality, so paradigms came and went like fashions and fads. Now that’s changing. To me, that seems like a much bigger deal than any new theory fad, because it offers us a chance to find enduringly reliable theories that won’t simply disappear when people get bored or political ideologies change.

So the shift to empiricism away from philosophy supersedes all other real and potential shifts in economic theory. Would-be econ revolutionaries absolutely need to get on board with the new empiricism, or else risk being left behind.

Noahpinion

Noah Smith maintains that new imaginative empirical methods — such as natural experiments, field experiments, lab experiments, RCTs — help us to answer questions concerning the validity of economic models.

Yours truly begs to differ. When looked at carefully, there are in fact few real reasons to share the optimism on this so-called ‘empirical revolution’ in economics.

Field studies and experiments face the same basic problem as theoretical models — they are built on rather artificial conditions and have difficulties with the ‘trade-off’ between internal and external validity. The more artificial the conditions, the more internal validity, but also the less external validity. The more we rig experiments/field studies/models to avoid ‘confounding factors’, the less the conditions are reminiscent of the real ‘target system.’ You could of course discuss field studies vs. experiments vs. theoretical models in terms of realism — but the nodal issue is not about that, but basically about how economists using different isolation strategies in different ‘nomological machines’ attempt to learn about causal relationships. I have strong doubts about the generalizability of all three research strategies, because the probability is high that causal mechanisms are different in different contexts, and lack of homogeneity and invariance doesn’t give us warranted export licenses to the ‘real’ societies or economies.

If we see experiments or field studies as theory tests or models that ultimately aspire to say something about the real ‘target system,’ then the problem of external validity is central (and was for a long time also a key reason why behavioural economists had trouble getting their research results published).

Assume that you have examined how the work performance of Chinese workers A is affected by B (‘treatment’). How can we extrapolate/generalize to new samples outside the original population (e.g. to the US)? How do we know that any replication attempt ‘succeeds’? How do we know when these replicated experimental results can be said to justify inferences made in samples from the original population? If, for example, P(A|B) is the conditional density function for the original sample, and we are interested in making an extrapolative prediction of E[P(A|B)], how can we know that the new sample’s density function is identical with the original? Unless we can give some really good argument for this being the case, inferences built on P(A|B) do not really say anything about the target system’s P'(A|B).

As I see it, this is the heart of the matter. External validity and generalization are founded on the assumption that we can make inferences based on P(A|B) that are exportable to other populations for which P'(A|B) applies. Sure, if one can convincingly show that P and P' are similar enough, the problems are perhaps surmountable. But arbitrarily just introducing functional specification restrictions of the type invariance and homogeneity is, at least for an epistemological realist, far from satisfactory. And often it is – unfortunately – exactly this that I see when I look at mainstream neoclassical economists’ models/experiments/field studies.
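
A toy simulation of the point (entirely made-up numbers): an effect of B on A that is correctly estimated in the original population is extrapolated to a target population in which the conditional relationship P'(A|B) differs, and the prediction fails because there is no invariance to rely on.

```python
# Toy sketch (made-up numbers): the 'treatment' effect of B on A estimated in the
# original population does not carry over to a target population in which the
# conditional relationship P'(A|B) differs.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def sample(effect, baseline=5.0):
    """Draw outcomes A for randomly treated/untreated units in one population."""
    b = rng.random(n) < 0.5                       # treatment B assigned at random
    a = baseline + effect * b + rng.normal(0.0, 1.0, n)
    return b, a

b0, a0 = sample(effect=1.0)                        # original population
estimated = a0[b0].mean() - a0[~b0].mean()         # internally valid estimate (~ +1.0)

b1, a1 = sample(effect=-0.5)                       # target population, different mechanism
actual = a1[b1].mean() - a1[~b1].mean()            # what actually happens there (~ -0.5)

print(f"effect estimated in the original sample: {estimated:+.2f}")
print(f"actual effect in the target population:  {actual:+.2f}")
```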

By this I do not mean to say that empirical methods per se are so problematic that they can never be used. On the contrary, I am basically — though not without reservations — in favour of the increased use of experiments and field studies within economics. Not least as an alternative to completely barren ‘bridge-less’ axiomatic-deductive theory models. My criticism is more about aspiration levels and what we believe that we can achieve with our mediational epistemological tools and methods in the social sciences.

Many ‘experimentalists’ claim that it is easy to replicate experiments under different conditions and therefore a fortiori easy to test the robustness of experimental results. But is it really that easy? If, in the example given above, we run a test and find that our predictions were not correct – what can we conclude? That B ‘works’ in China but not in the US? Or that B ‘works’ in a backward agrarian society, but not in a post-modern service society? That B ‘worked’ in the field study conducted in year 2008 but not in year 2016? Population selection is almost never simple. Had the problem of external validity only been about inference from sample to population, this would be no critical problem. But the really interesting inferences are those we try to make from specific labs/experiments/fields to specific real-world situations/institutions/structures that we are interested in understanding or (causally) explaining. And then the population problem is more difficult to tackle.

The increasing use of natural and quasi-natural experiments in economics during the last couple of decades has led, not only Noah Smith, but several other prominent economists to triumphantly declare it as a major step on a recent path toward empirics, where instead of being a deductive philosophy, economics is now increasingly becoming an inductive science.

In randomized trials the researchers try to find out the causal effects that different variables of interest may have by changing circumstances randomly — a procedure somewhat (‘on average’) equivalent to the usual ceteris paribus assumption.

Besides the fact that ‘on average’ is not always ‘good enough,’ it amounts to nothing but hand waving to simpliciter assume, without argumentation, that it is tenable to treat social agents and relations as homogeneous and interchangeable entities.

Randomization is used to basically allow the econometrician to treat the population as consisting of interchangeable and homogeneous groups (‘treatment’ and ‘control’). The regression models one arrives at by using randomized trials tell us the average effect that variations in variable X has on the outcome variable Y, without having to explicitly control for effects of other explanatory variables R, S, T, etc., etc. Everything is assumed to be essentially equal except the values taken by variable X.
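
A small sketch (made-up numbers) of what such ‘on average’ knowledge can hide: a randomized trial recovers the average effect of X on Y quite well, yet individual effects are strongly heterogeneous, so the average says little about any particular subgroup unless homogeneity is simply assumed.

```python
# Small sketch (made-up numbers): randomization recovers the *average* effect of X
# on Y, but with heterogeneous individual effects the average hides the fact that
# one subgroup gains while another loses.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

group = rng.random(n) < 0.5                  # two equally large, unobserved subgroups
individual_effect = np.where(group, 2.0, -1.5)

treated = rng.random(n) < 0.5                # randomized 'treatment' X
y = 1.0 + individual_effect * treated + rng.normal(0.0, 1.0, n)

average_effect = y[treated].mean() - y[~treated].mean()
print(f"estimated average effect: {average_effect:+.2f}")   # ~ +0.25

for g, label in ((True, "subgroup 1"), (False, "subgroup 2")):
    mask = group == g
    eff = y[mask & treated].mean() - y[mask & ~treated].mean()
    print(f"effect in {label}: {eff:+.2f}")                  # ~ +2.0 and ~ -1.5
```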

Limiting model assumptions in economic science always have to be closely examined. If we are going to be able to show that the mechanisms or causes that we isolate and handle in our models are stable in the sense that they do not change when we ‘export’ them to our ‘target systems,’ we have to be able to show that they do not only hold under ceteris paribus conditions. If they do, they are a fortiori only of limited value to our understanding, explanations or predictions of real economic systems.

Real world social systems are not governed by stable causal mechanisms or capacities. The kinds of ‘laws’ and relations that econometrics has established are laws and relations about entities in models that presuppose causal mechanisms being atomistic and additive. When causal mechanisms operate in real world social target systems they only do it in ever-changing and unstable combinations where the whole is more than a mechanical sum of parts. If economic regularities obtain they do it (as a rule) only because we engineered them for that purpose. Outside man-made ‘nomological machines’ they are rare, or even non-existent.

I also think that most ‘randomistas’ really underestimate the heterogeneity problem. It does not just turn up as an external validity problem when trying to ‘export’ regression results to different times or different target populations. It is also often an internal problem to the millions of regression estimates that economists produce every year.

Just as econometrics, randomization promises more than it can deliver, basically because it requires assumptions that in practice are not possible to maintain.

Like econometrics, randomization is basically a deductive method. Given the assumptions (such as manipulability, transitivity, separability, additivity, linearity, etc.) these methods deliver deductive inferences. The problem, of course, is that we will never completely know when the assumptions are right. And although randomization may contribute to controlling for confounding, it does not guarantee it, since genuine randomness presupposes infinite experimentation and we know all real experimentation is finite. And even if randomization may help to establish average causal effects, it says nothing of individual effects unless homogeneity is added to the list of assumptions. Real target systems are seldom epistemically isomorphic to our axiomatic-deductive models/systems, and even if they were, we would still have to argue for the external validity of the conclusions reached from within these epistemically convenient models/systems. Causal evidence generated by randomization procedures may be valid in ‘closed’ models, but what we usually are interested in is causal evidence in the real target system we happen to live in.

When does a conclusion established in population X hold for target population Y? Only under very restrictive conditions!

‘Ideally controlled experiments’ tell us with certainty what causes what effects — but only given the right ‘closures.’ Making appropriate extrapolations from (ideal, accidental, natural or quasi) experiments to different settings, populations or target systems is not easy. ‘It works there’ is no evidence for ‘it will work here.’ Causes deduced in an experimental setting still have to show that they come with an export-warrant to the target population/system. The causal background assumptions made have to be justified, and without licenses to export, the value of ‘rigorous’ and ‘precise’ methods — and ‘on-average-knowledge’ — is despairingly small.

So, no, I find it hard to share Noah Smith’s and others’ enthusiasm and optimism about the value of (quasi-)natural experiments and all the statistical-econometric machinery that comes with them. Guess I’m still waiting for the export-warrant …

I would, contrary to Noah Smith’s optimism, argue that although different ’empirical’ approaches have been — more or less — integrated into mainstream economics, there is still a long way to go before economics has become a truly empirical science.

Teaching macroeconomics

19 March, 2016 at 11:42 | Posted in Economics | 2 Comments

To what extent has – or should – the teaching of economics be modified in the light of the current economic crisis? … For macroeconomists in particular, the reaction has been to suggest that modifications of existing models to take account of ‘frictions’ or ‘imperfections’ will be enough …

However, other economists such as myself feel that we have finally reached the turning point in economics where we have to radically change the way we conceive of and model the economy … Rather than making steady progress towards explaining economic phenomena professional economists have been locked into a narrow vision of the economy. We constantly make more and more sophisticated models within that vision until, as Bob Solow put it, ‘the uninitiated peasant is left wondering what planet he or she is on’ …

Every student in economics is faced with the model of the isolated optimising individual who makes his choices within the constraints imposed by the market. Somehow, the axioms of rationality imposed on this individual are not very convincing … But the student is told that the aim of the exercise is to show that there is an equilibrium, there can be prices that will clear all markets simultaneously. And, furthermore, the student is taught that such an equilibrium has desirable welfare properties. Importantly, the student is told that since the 1970s it has been known that whilst such a system of equilibrium prices may exist, we cannot show that the economy would ever reach an equilibrium nor that such an equilibrium is unique.

The student then moves on to macroeconomics and is told that the aggregate economy or market behaves just like the average individual she has just studied. She is not told that these general models in fact poorly reflect reality. For the macroeconomist, this is a boon since he can now analyse the aggregate allocations in an economy as though they were the result of the rational choices made by one individual. The student may find this even more difficult to swallow when she is aware that peoples’ preferences, choices and forecasts are often influenced by those of the other participants in the economy. Students take a long time to accept the idea that the economy’s choices can be assimilated to those of one individual …

We owe it to our students to point out difficulties with the structure and assumptions of our theory. Although we are still far from a paradigm shift, in the longer run the paradigm will inevitably change. We would all do well to remember that current economic thought will one day be taught as history of economic thought.

Alan Kirman 

Where ‘New Keynesian’ and New Classical economists think that they can rigorously deduce the aggregate effects of (representative) actors with their reductionist microfoundational methodology, they — as argued in chapter 4 of yours truly’s On the use and misuse of theories and models in mainstream economics — have to turn a blind eye to the emergent properties that characterize all open social and economic systems. The interaction between animal spirits, trust, confidence, institutions, etc., cannot be deduced or reduced to a question answerable on the individual level. Macroeconomic structures and phenomena also have to be analyzed on their own terms.

These models try to describe and analyze complex and heterogeneous real economies with a single rational-expectations-robot-imitation-representative-agent. That is, with something that has absolutely nothing to do with reality. And — worse still — something that is not even amenable to the kind of general equilibrium analysis that they are thought to give a foundation for, since Hugo Sonnenschein (1972), Rolf Mantel (1976) and Gerard Debreu (1974) unequivocally showed that there are no conditions under which assumptions on individuals would guarantee either stability or uniqueness of the equilibrium solution.

New Classical, Real Business Cycles, Dynamic Stochastic General Equilibrium (DSGE) and ‘New Keynesian’ macroeconomics with their microfounded macromodels are bad substitutes for real macroeconomic analysis. Opting for cloned representative agents that are all identical is of course not a real solution to the fallacy of composition that the Sonnenschein-Mantel-Debreu theorem points to. Representative agent models are rather an evasion whereby issues of distribution, coordination, heterogeneity — everything that really defines macroeconomics — are swept under the rug.

How to get published in ‘top’ journals

18 March, 2016 at 22:23 | Posted in Varia | 3 Comments

If you think that your paper is vacuous,

Use the first-order functional calculus.

It then becomes logic,

And, as if by magic,

The obvious is hailed as miraculous.

Paul Halmos


Bayesianism — confusing degree of confirmation with probability

18 March, 2016 at 09:59 | Posted in Theory of Science & Methodology | 5 Comments

If we identify degree of corroboration or confirmation with probability, we should be forced to adopt a number of highly paradoxical views, among them the following clearly self-contradictory assertion:

“There are cases in which x is strongly supported by z and y is strongly undermined by z while, at the same time, x is confirmed by z to a lesser degree than is y.”

Consider the next throw with a homogeneous die. Let x be the statement ‘six will turn up’; let y be its negation, that is to say, let y = non-x; and let z be the information ‘an even number will turn up’.

We have the following absolute probabilities:

p(x) = 1/6; p(y) = 5/6; p(z) = 1/2.

Moreover, we have the following relative probabilities:

p(x, z) = 1/3; p(y, z) = 2/3.

We see that x is supported by the information z, for z raises the probability of x from 1/6 to 2/6 = 1/3. We also see that y is undermined by z, for z lowers the probability of y by the same amount from 5/6 to 4/6 = 2/3. Nevertheless, we have p(x, z) < p(y, z) …

A report of the result of testing a theory can be summed up by an appraisal. This can take the form of assigning some degree of corroboration to the theory. But it can never take the form of assigning to it a degree of probability; for the probability of a statement (given some test statements) simply does not express an appraisal of the severity of the tests a theory has passed, or of the manner in which it has passed these tests. The main reason for this is that the content of a theory — which is the same as its improbability — determines its testability and its corroborability.
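
Popper’s die example is easy to check with a few lines of code. Here is a minimal sketch in Python that simply computes the probabilities involved and confirms that support and conditional probability pull in opposite directions:

from fractions import Fraction

omega = range(1, 7)                      # sample space of a fair die

def p(event):
    """Probability of an event (a set of outcomes) on a fair die."""
    return Fraction(sum(1 for w in omega if w in event), 6)

def p_given(event, info):
    """Conditional probability p(event | info)."""
    both = sum(1 for w in omega if w in event and w in info)
    return Fraction(both, sum(1 for w in omega if w in info))

x = {6}                                  # 'six will turn up'
y = {1, 2, 3, 4, 5}                      # the negation of x
z = {2, 4, 6}                            # 'an even number will turn up'

print(p(x), p(y), p(z))                  # 1/6 5/6 1/2
print(p_given(x, z), p_given(y, z))      # 1/3 2/3

print(p_given(x, z) > p(x))              # True: x is supported by z
print(p_given(y, z) < p(y))              # True: y is undermined by z
print(p_given(x, z) < p_given(y, z))     # True: yet p(x, z) < p(y, z)

Degree of support (the change in probability that the evidence brings about) and degree of probability simply measure different things, which is Popper’s point.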

Although Bayesians think otherwise, to me there’s nothing magical about Bayes’ theorem. The important thing in science is for you to have strong evidence. If your evidence is strong, then applying Bayesian probability calculus is rather unproblematic. Otherwise — garbage in, garbage out. Applying Bayesian probability calculus to subjective beliefs founded on weak evidence is not a recipe for scientific exactness and progress.

Neoclassical economics nowadays usually assumes that agents who have to make choices under conditions of uncertainty behave according to Bayesian rules — that is, they maximize expected utility with respect to some subjective probability measure that is continually updated according to Bayes’ theorem.
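
The mechanics presupposed here are simple enough to write down; the real problem is where the priors and likelihoods are supposed to come from. A minimal sketch in Python (with numbers invented for illustration) shows both faces of the theorem: with strong evidence the updating washes out the arbitrary prior, while with weak evidence the ‘updated’ beliefs are still essentially nothing but the prior:

def bayes_update(prior, likelihood_h, likelihood_not_h):
    """One application of Bayes' theorem: p(H | e)."""
    numerator = prior * likelihood_h
    return numerator / (numerator + (1 - prior) * likelihood_not_h)

# Strong evidence: each observation is ten times more likely under H.
for prior in (0.1, 0.7):
    posterior = prior
    for _ in range(5):                   # five informative observations
        posterior = bayes_update(posterior, 0.5, 0.05)
    print(f"strong evidence, prior {prior}: posterior {posterior:.3f}")

# Weak evidence: the observations are almost equally likely under both
# hypotheses, so the posterior mainly reflects the subjective prior.
for prior in (0.1, 0.7):
    posterior = prior
    for _ in range(5):
        posterior = bayes_update(posterior, 0.5, 0.45)
    print(f"weak evidence,   prior {prior}: posterior {posterior:.3f}")

With weak evidence the two agents’ ‘rational’ posteriors (roughly 0.16 and 0.80) remain about as far apart as their arbitrary priors. Bayes’ theorem adds rigour to the calculation, not warrant to the beliefs.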

Bayesianism reduces questions of rationality to questions of internal consistency (coherence) of beliefs, but – even granted this questionable reductionism – do rational agents really have to be Bayesian? As I have been arguing repeatedly over the years, there is no strong warrant for believing so.

In many of the situations that are relevant to economics, one could argue that there is simply not enough adequate and relevant information to ground beliefs of a probabilistic kind, and that in those situations it is not really possible, in any relevant way, to represent an individual’s beliefs in a single probability measure.

Bayesianism cannot distinguish between symmetry-based probabilities grounded in information and symmetry-based probabilities grounded in an absence of information. In these kinds of situations most of us would rather say that it is simply irrational to be a Bayesian and better instead to admit that we “simply do not know” or that we feel ambiguous and undecided. Arbitrary and ungrounded probability claims are more irrational than being undecided in the face of genuine uncertainty, so if there is not sufficient information to ground a probability distribution it is better to acknowledge that simpliciter, rather than pretending to possess a certitude that we simply do not possess.

So why, then, are so many scientists nowadays so fond of Bayesianism? I guess one strong reason is that Bayes’ theorem gives them a seemingly fast, simple and rigorous answer to their problems and hypotheses. But, as Popper already showed back in the 1950s, the Bayesian probability (likelihood) version of confirmation theory is “absurd on both formal and intuitive grounds: it leads to self-contradiction.”

Heckscher-Ohlin and the ‘principle of explosion’

17 March, 2016 at 12:14 | Posted in Theory of Science & Methodology | 4 Comments

The other day yours truly had a post up on the Heckscher-Ohlin theorem, arguing that since the assumptions on which the theorem builds are empirically false, one might, from a methodological point of view, wonder

how we are supposed to evaluate tests of a theorem building on known to be false assumptions. What is the point of such tests? What can those tests possibly teach us? From falsehoods anything logically follows.

Some people have had trouble with the last sentence — from falsehoods anything whatsoever follows.

But that’s really nothing very deep or controversial. What I’m referring to — without going into the intricacies of distinguishing between ‘false,’ ‘inconsistent’ and ‘self-contradictory’ statements — is the well-known ‘principle of explosion,’ according to which if both a statement and its negation are considered true, any statement whatsoever can be inferred.
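
For readers who want the details, the standard derivation takes only a few steps. Suppose we have accepted both a statement A and its negation not-A, and let B be any statement whatsoever:

1. A (assumed)
2. not-A (assumed)
3. A or B (from 1, by disjunction introduction: if A is true, then ‘A or B’ is true)
4. B (from 2 and 3, by disjunctive syllogism: ‘A or B’ is true but A is false, so B must be true)

Since B was arbitrary, any statement whatsoever can be validly inferred from the contradictory premises.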

Whilst tautologies, purely existential statements and other nonfalsifiable statements assert, as it were, too little about the class of possible basic statements, self-contradictory statements assert too much. From a self-contradictory statement, any statement whatsoever can be validly deduced. Consequently, the class of its potential falsifiers is identical with that of all possible basic statements: it is falsified by any statement whatsoever.

Elegy of the uprooting (personal)

17 March, 2016 at 09:54 | Posted in Varia | Comments Off on Elegy of the uprooting (personal)

This one is for you — all you brothers and sisters of mine, struggling to survive in civil wars, or forced to flee your homes, risking your lives on your way to my country or other countries in Europe.

May God be with you.

Then the righteous will answer him, ‘Lord, when did we see you hungry and feed you, or thirsty and give you something to drink? When did we see you a stranger and invite you in, or needing clothes and clothe you? When did we see you sick or in prison and go to visit you?’

The King will reply, ‘Truly I tell you, whatever you did for one of the least of these brothers and sisters of mine, you did for me.’

The lady tasting tea

17 March, 2016 at 09:19 | Posted in Statistics & Econometrics | Comments Off on The lady tasting tea

The mathematical formulations of statistics can be used to compute probabilities. Those probabilities enable us to apply statistical methods to scientific problems. In terms of the mathematics used, probability is well defined. How does this abstract concept connect to reality? How is the scientist to interpret the probability statements of statistical analyses when trying to decide what is true and what is not? …

Fisher’s use of a significance test produced a number Fisher called the p-value. This is a calculated probability, a probability associated with the observed data under the assumption that the null hypothesis is true. For instance, suppose we wish to test a new drug for the prevention of a recurrence of breast cancer in patients who have had mastectomies, comparing it to a placebo. The null hypothesis, the straw man, is that the drug is no better than the placebo …

Since [the p-value] is used to show that the hypothesis under which it is calculated is false, what does it really mean? It is a theoretical probability associated with the observations under conditions that are most likely false. It has nothing to do with reality. It is an indirect measurement of plausibility. It is not the probability that we would be wrong to say that the drug works. It is not the probability of any kind of error. It is not the probability that a patient will do as well on the placebo as on the drug.
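
David Salsburg

To see what the p-value in Salsburg’s drug-and-placebo example is, and what it is not, here is a minimal permutation-test sketch in Python. The recurrence counts are invented purely for illustration:

import numpy as np

rng = np.random.default_rng(0)

# Invented data: 30 recurrences among 200 patients on the new drug,
# 45 among 200 on the placebo.
drug    = np.array([1] * 30 + [0] * 170)
placebo = np.array([1] * 45 + [0] * 155)
observed_diff = placebo.mean() - drug.mean()

# Permutation test: under the null hypothesis ('the drug is no better than
# the placebo') the group labels are arbitrary, so reshuffle them many times
# and see how often a difference at least this large turns up by chance.
pooled = np.concatenate([drug, placebo])
count = 0
n_perm = 20_000
for _ in range(n_perm):
    rng.shuffle(pooled)
    diff = pooled[200:].mean() - pooled[:200].mean()
    if diff >= observed_diff:
        count += 1

p_value = count / n_perm
print(f"observed difference in recurrence rates: {observed_diff:.3f}")
print(f"p-value: {p_value:.3f}")

The number that comes out is the probability of data at least this extreme given that the null hypothesis is true. It is not the probability that the null hypothesis is true, not the probability that the drug works, and not the probability of any kind of error, which is precisely Salsburg’s point.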

Robert Lucas the storyteller

16 March, 2016 at 08:15 | Posted in Economics | 1 Comment

economists pretend to know

We are storytellers, operating much of the time in worlds of make believe. We do not find that the realm of imagination and ideas is an alternative to, or retreat from, practical reality. On the contrary, it is the only way we have found to think seriously about reality. In a way, there is nothing more to this method than maintaining the conviction … that imagination and ideas matter … there is no practical alternative.

Robert Lucas (1988) What Economists Do

Sounds great, doesn’t it? And here’s an example of the outcome of that serious thinking about reality …

In summary, it does not appear possible, even in principle, to classify individual unemployed people as either voluntarily or involuntarily unemployed depending on the characteristics of the decision problems they face. One cannot, even conceptually, arrive at a usable definition of full employment as a state in which no involuntary unemployment exists.

The difficulties are not the measurement error problems which necessarily arise in applied economics. They arise because the “thing” to be measured does not exist.

Magoo economics

15 March, 2016 at 08:41 | Posted in Economics | 2 Comments

There is widespread disappointment with economists now because we did not forecast or prevent the financial crisis of 2008. The Economist’s articles of July 18th on the state of economics were an interesting attempt to take stock of two fields, macroeconomics and financial economics, but both pieces were dominated by the views of people who have seized on the crisis as an opportunity to restate criticisms they had voiced long before 2008. Macroeconomists in particular were caricatured as a lost generation educated in the use of valueless, even harmful, mathematical models, an education that made them incapable of conducting sensible economic policy. I think this caricature is nonsense …

The recession is now under control and no responsible forecasters see anything remotely like the 1929-33 contraction in America on the horizon. This outcome did not have to happen, but it did.

Robert Lucas

The gross substitution axiom

14 March, 2016 at 19:00 | Posted in Economics | 1 Comment

Economics is, perhaps more than any other social science, model-oriented. There are many reasons for this — the history of the discipline, having ideals coming from the natural sciences (especially physics), the search for universality (explaining as much as possible with as little as possible), rigour, precision, etc.

Mainstream economists want to explain social phenomena, structures and patterns, based on the assumption that the agents are acting in an optimizing (rational) way to satisfy given, stable and well-defined goals.

The procedure is analytical. The whole is broken down into its constituent parts so as to be able to explain (reduce) the aggregate (macro) as the result of interaction of its parts (micro).

In building their economic models, modern mainstream neoclassical economists ground them on a set of core assumptions (CA) — describing the agents as ‘rational’ actors — and a set of auxiliary assumptions (AA). Together CA and AA make up what I will call the ur-model (M) of all mainstream neoclassical economic models. Based on these two sets of assumptions, they try to explain and predict both individual (micro) and — most importantly — social phenomena (macro).

The core assumptions typically consist of completeness, transitivity, non-satiation, optimisation, consistency, gross substitutability, etc., etc.

Besides the core assumptions (CA), the model also typically has a set of auxiliary assumptions (AA) spatio-temporally specifying the kind of social interaction between ‘rational actors’ that takes place in the model.

So, the ur-model of all economic models basically consists of a general specification of what (axiomatically) constitutes optimizing rational agents and a more specific description of the kind of situations in which these rational actors act (making AA serve as a kind of specification/restriction of the intended domain of application for CA and its deductively derived theorems). The list of assumptions can never be complete, since there will always be unspecified background assumptions and some (often) silent omissions (like closure, transaction costs, etc., regularly based on some negligibility and applicability considerations).

The hope is that the ‘thin’ list of assumptions will be sufficient to explain and predict ‘thick’ phenomena in the real, complex world.

Empirically it has, however, turned out that this hope is almost never fulfilled. The core — and many of the auxiliary — assumptions turn out to have precious little to do with the real (non-model) world we happen to live in. And that goes for the gross substitution axiom as well:


The gross substitution axiom assumes that if the demand for good x goes up, its relative price will rise, inducing demand to spill over to the now relatively cheaper substitute good y. For an economist to deny this ‘universal truth’ of gross substitutability between objects of demand is revolutionary heresy – and as in the days of the Inquisition, the modern-day College of Cardinals of mainstream economics destroys all non-believers, if not by burning them at the stake, then by banishing them from the mainstream professional journals. Yet in Keynes’s (1936, ch. 17) analysis ‘The Essential Properties of Interest and Money’ require that:

1. The elasticity of production of liquid assets including money is approximately zero. This means that private entrepreneurs cannot produce more of these assets by hiring more workers if the demand for liquid assets increases. In other words, liquid assets are not producible by private entrepreneurs’ hiring of additional workers; this means that money (and other liquid assets) do not grow on trees.

2. The elasticity of substitution between all liquid assets, including money (which are not reproducible by labour in the private sector) and producibles (in the private sector), is zero or negligible. Accordingly, when the price of money increases, people will not substitute the purchase of the products of industry for their demand for money for liquidity (savings) purposes.

These two elasticity properties that Keynes believed are essential to the concepts of money and liquidity mean that a basic axiom of Keynes’s logical framework is that non-producible assets that can be used to store savings are not gross substitutes for producible assets in savers’ portfolios. If this elasticity of substitution between liquid assets and the products of industry is significantly different from zero (if the gross substitution axiom is ubiquitously true), then even if savers attempt to use non-reproducible assets for storing their increments of wealth, this increase in demand will increase the price of non-producibles. This relative price rise in non-producibles will, under the gross substitution axiom, induce savers to substitute reproducible durables for non-producibles in their wealth holdings and therefore non-producibles will not be, in Hahn’s terminology, ‘ultimate resting places for savings’. The gross substitution axiom therefore restores Say’s Law and denies the logical possibility of involuntary unemployment.

Paul Davidson
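
The logic of the axiom, and of Keynes’s denial of it for liquid assets, is easy to illustrate with a stripped-down CES demand example. This is of course not Davidson’s or Keynes’s own model, just a hypothetical two-good sketch in Python of what a high versus a near-zero elasticity of substitution means for demand ‘spilling over’:

def spending_share_on_y(px, py, sigma):
    """Budget share spent on good y under symmetric CES preferences
    with elasticity of substitution sigma."""
    ex = px ** (1 - sigma)
    ey = py ** (1 - sigma)
    return ey / (ex + ey)

py = 1.0
for sigma in (2.0, 0.01):    # gross substitutes vs. (almost) no substitution
    before = spending_share_on_y(1.0, py, sigma)     # px = 1
    after = spending_share_on_y(2.0, py, sigma)      # px doubles
    print(f"sigma = {sigma}: share spent on y goes from {before:.2f} to {after:.2f}")

With a high elasticity of substitution a rise in the price of x pushes spending over to the now relatively cheaper y, exactly as the axiom assumes. With an elasticity close to zero it does not. If money and other liquid assets really do have near-zero elasticities of substitution with the products of industry, an increased demand for liquidity need not spill over into demand for producibles, and Say’s Law no longer follows.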

