The loanable funds fallacy

24 April, 2018 at 14:26 | Posted in Economics | Leave a comment

The loanable funds theory is in many regards nothing but an approach where the ruling rate of interest in society is — pure and simple — conceived as nothing other than the price of loans or credit, set by banks and determined by supply and demand — as Bertil Ohlin put it — “in the same way as the price of eggs and strawberries on a village market.”

It is a beautiful fairy tale, but the problem is that banks are not barter institutions that transfer pre-existing loanable funds from depositors to borrowers. Why? Because, in the real world, there simply are no pre-existing loanable funds. Banks create new funds — credit — only if someone has previously got into debt! Banks are monetary institutions, not barter vehicles.

In the traditional loanable funds theory — as presented in mainstream macroeconomics textbooks — the amount of loans and credit available for financing investment is constrained by how much saving is available. Saving is the supply of loanable funds; investment is the demand for loanable funds, assumed to be negatively related to the interest rate. On this view, lowering households’ consumption raises saving, which via a lower interest rate is channelled into higher investment.
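In its barest textbook formalisation (a minimal sketch of the standard presentation, nothing more), saving supplies and investment demands loanable funds, and the interest rate is supposed to clear the market:

\[
S(r) = I(r), \qquad \frac{dS}{dr} > 0, \qquad \frac{dI}{dr} < 0,
\]

with the equilibrium rate of interest determined at the intersection of the two schedules, just like any other price.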

That view has been shown to have very little to do with reality. It’s nothing but an otherworldly neoclassical fantasy. But there are many other problems as well with the standard presentation and formalization of the loanable funds theory:

As James Meade noted decades ago, the causal story told to explicate the accounting identities gives the picture of “a dog called saving wagged its tail labelled investment.” In Keynes’s view — later confirmed over and over again by empirical research — it is not so much the interest rate at which firms can borrow that causally determines the amount of investment undertaken, but rather their internal funds, profit expectations and capacity utilization.

As is typical of most mainstream macroeconomic formalizations and models, there is little mention of real-world phenomena, such as real money, credit rationing and the existence of multiple interest rates, in the loanable funds theory. Loanable funds theory essentially reduces modern monetary economies to something akin to barter systems — something they definitely are not. As emphasized especially by Minsky, to understand and explain how much investment/loaning/crediting is going on in an economy, it is much more important to focus on the workings of financial markets than to stare at accounting identities like S = Y – C – G. The problems we meet on modern markets today have more to do with inadequate financial institutions than with the size of loanable-funds savings.

The loanable funds theory in the ‘New Keynesian’ approach means that the interest rate is endogenized by assuming that Central Banks can (try to) adjust it in response to any output gap. This, of course, is essentially nothing but an assumption of Walras’ law being valid and applicable, and that a fortiori the attainment of equilibrium is secured by the Central Banks’ interest rate adjustments. From a realist Keynes-Minsky point of view, this can’t be considered anything else than a belief resting on nothing but sheer hope. [Not to mention that more and more Central Banks actually choose not to follow Taylor-like policy rules.] The age-old belief that Central Banks control the money supply has more and more come to be questioned and replaced by an ‘endogenous’ money view, and I think the same will happen to the view that Central Banks determine “the” rate of interest.

A further problem in the traditional loanable funds theory is that it assumes that saving and investment can be treated as independent entities. This is seriously wrong:

The classical theory of the rate of interest [the loanable funds theory] seems to suppose that, if the demand curve for capital shifts or if the curve relating the rate of interest to the amounts saved out of a given income shifts or if both these curves shift, the new rate of interest will be given by the point of intersection of the new positions of the two curves. But this is a nonsense theory. For the assumption that income is constant is inconsistent with the assumption that these two curves can shift independently of one another. If either of them shifts, then, in general, income will change; with the result that the whole schematism based on the assumption of a given income breaks down … In truth, the classical theory has not been alive to the relevance of changes in the level of income or to the possibility of the level of income being actually a function of the rate of investment.

There are always (at least) two parties to an economic transaction. Savers and investors have different liquidity preferences and face different choices — and their interactions usually only take place intermediated by financial institutions. This, importantly, also means that there is no ‘direct and immediate’ automatic interest mechanism at work in modern monetary economies. What this ultimately boils down to is — again — that what happens at the microeconomic level, both in and out of equilibrium, is not always compatible with the macroeconomic outcome. The fallacy of composition (the ‘atomistic fallacy’ of Keynes) has many faces — loanable funds is one of them.
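The interdependence Keynes points to can be stated compactly (a standard reconstruction, assuming for simplicity a closed economy without government):

\[
Y = C(Y) + I(r), \qquad S \equiv Y - C(Y),
\]

so any shift in the investment schedule changes income \(Y\), which in turn shifts the saving schedule. The ‘given income’ that the loanable funds diagram presupposes cannot be held fixed.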

Contrary to the loanable funds theory, finance in the world of Keynes and Minsky precedes investment and saving. Highlighting the loanable funds fallacy, Keynes wrote in “The Process of Capital Formation” (1939):

Increased investment will always be accompanied by increased saving, but it can never be preceded by it. Dishoarding and credit expansion provides not an alternative to increased saving, but a necessary preparation for it. It is the parent, not the twin, of increased saving.

What is ‘forgotten’ in the loanable funds theory is the insight that finance — in all its different shapes — has its own dimension, and if taken seriously, its effect on an analysis must modify the whole theoretical system and not just be added as an unsystematic appendage. Finance is fundamental to our understanding of modern economies, and acting like the baker’s apprentice who, having forgotten to add yeast to the dough, throws it into the oven afterwards, simply isn’t enough.

All real economic activities nowadays depend on a functioning financial machinery. But institutional arrangements, states of confidence, fundamental uncertainties, asymmetric expectations, the banking system, financial intermediation, loan granting processes, default risks, liquidity constraints, aggregate debt, cash flow fluctuations, etc., etc. — things that play decisive roles in channelling money/savings/credit — are more or less left in the dark in modern formalizations of the loanable funds theory.

It should be emphasized that the equality between savings and investment … will be valid under all circumstances. In particular, it will be independent of the level of the rate of interest which was customarily considered in economic theory to be the factor equilibrating the demand for and supply of new capital. In the present conception investment, once carried out, automatically provides the savings necessary to finance it. Indeed, in our simplified model, profits in a given period are the direct outcome of capitalists’ consumption and investment in that period. If investment increases by a certain amount, savings out of profits are pro tanto higher …

One important consequence of the above is that the rate of interest cannot be determined by the demand for and supply of new capital because investment ‘finances itself.’
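A standard reconstruction of the simplified model behind this Kalecki passage (a closed economy without government, where workers consume all their wages) makes the mechanism explicit:

\[
P = C_k + I,
\]

that is, profits \(P\) are the direct outcome of capitalists’ consumption \(C_k\) and investment \(I\) in the period. An increase in \(I\) therefore automatically raises profits, and the savings made out of them, pro tanto. No prior fund of savings is required.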

So, yes, the ‘secular stagnation’ will be over, as soon as we free ourselves from the loanable funds theory — and scholastic gibbering about ZLB — and start using good old Keynesian fiscal policies.


Finland ends basic income experiment

24 April, 2018 at 12:04 | Posted in Economics | Leave a comment

Europe’s first national government-backed experiment in giving citizens free cash will end next year after Finland decided not to extend its widely publicised basic income trial and to explore alternative welfare schemes instead.

Since January 2017, a random sample of 2,000 unemployed people aged 25 to 58 have been paid a monthly €560 (£475), with no requirement to seek or accept employment …

The scheme – aimed primarily at seeing whether a guaranteed income might incentivise people to take up paid work by smoothing out gaps in the welfare system – is strictly speaking not a universal basic income (UBI) trial, because the payments are made to a restricted group and are not enough to live on.

But it was hoped it would shed light on policy issues such as whether an unconditional payment might reduce anxiety among recipients and allow the government to simplify a complex social security system that is struggling to cope with a fast-moving and insecure labour market …

The idea of UBI – appealing both to the left, which hopes it can cut poverty and inequality, and to the right, which sees it as a possible route to a leaner, less bureaucratic welfare system – has gained traction recently amid predictions that automation could threaten up to a third of current jobs.

Jon Henley/The Guardian

Tractability hoax redux

23 April, 2018 at 18:32 | Posted in Economics | 1 Comment

A ‘tractable’ model is one that you can solve, which means there are several types of tractability: analytical tractability (finding a solution to a theoretical model), empirical tractability (being able to estimate/calibrate your model) and computational tractability (finding numerical solutions). It is sometimes hard to discriminate between theoretical and empirical, or empirical and computational tractability …

What I’d like to capture is the effect of those choices economists make “for convenience,” to be able to reach solutions, to simplify, to ease their work, in short, to make a model tractable. While those assumptions are conventional and meant to be lifted as mathematical, theoretical and empirical skills and technology (hardware and software) ‘progress,’ their underlying rationale is often lost as they are taken up by other researchers, spread, and become standard (implicit in the last sentence is the idea that what a tractable model is evolves as new techniques and technologies are brought in) …

The tractability lens also helps me make sense of what is happening in economics now, and what might come next. Right now, clusters of macroeconomists are each working on relaxing one or two tractability assumptions: research agendas span heterogeneity, non-rational expectations, financial markets, non-linearities, fat-tailed distributions, etc. But if you put all these add-ons together (assuming you can design a consistent model, and that add-ons are the way forward, which many critics challenge), you’re back to non-tractable. So what is the priority? How do macroeconomists rank these model improvements? And can the profession afford to wait 30 more years, 3 more financial crises and two trade wars before it can finally say it has a model rich enough to anticipate crises?

Beatrice Cherrier

Important questions that serious economists ought to ask themselves. Using ‘simplifying’ tractability assumptions — rational expectations, common knowledge, representative agents, linearity, additivity, ergodicity, etc. — because otherwise they cannot ‘manipulate’ their models or come up with ‘rigorous’ and ‘precise’ predictions and explanations, does not exempt economists from having to justify their modelling choices. Being able to ‘manipulate’ things in models cannot per se be enough to warrant a methodological choice. If economists — as Cherrier conjectures — do not think their tractability assumptions make for good and realist models, it is certainly fair to ask for clarification of the ultimate goal of the whole modelling endeavour.

Take for example the ongoing discussion on rational expectations as a modelling assumption. Those who want to build macroeconomics on microfoundations usually maintain that the only robust policies are those based on rational expectations and representative-actor models (cf. this critique of VAR DSGE modelling). As yours truly has tried to show in On the use and misuse of theories and models in mainstream economics, there is really no support for this conviction at all. If microfounded macroeconomics has nothing to say about the real world and the economic problems out there, why should we care about it? The final court of appeal for macroeconomic models is not whether we — once we have made our tractability assumptions — can ‘manipulate’ them, but the real world. And as long as no convincing justification is put forward for how the inferential bridging de facto is made, macroeconomic model building is little more than hand-waving that gives us little warrant for making inductive inferences from models to real-world target systems. If substantive questions about the real world are being posed, it is the formalistic-mathematical representations utilized to analyze them that have to match reality, not the other way around.

A new paradigm for teaching economics

22 April, 2018 at 12:08 | Posted in Economics | 1 Comment

Don’t let the students know! What we teach them in our intro classes bears little resemblance to how we do economics ourselves …

The new paradigm not only provides a more convincing story about how an economy might reach a competitive equilibrium, it also fundamentally alters the nature of that outcome. When lenders and borrowers, and employers and employees, are modelled as principals and agents with asymmetric information, who interact under an incomplete contract, credit and labour markets do not clear in competitive equilibrium …

The gap between concerns about major economic problems that bring students to our classrooms, and the topics we teach, is a second motivation for the CORE Project. During the past four years we have asked, in classrooms around the world: “what is the most pressing problem that economists should address?” The word cloud below shows what students at the Humboldt University in Berlin told us:

[Figure: word cloud of answers from students at Humboldt University, Berlin]

Word clouds from students in Sydney, London and Bogota are barely distinguishable from Berlin … Even more remarkable, in 2016 we asked the same question to new recruits – mostly economics graduates – at the Bank of England, and professional economists and other staff at the New Zealand Treasury and Reserve Bank. Both responded with a similar concern about inequality. Word clouds from France gave greater prominence to unemployment. All of them highlight climate change and environmental problems, automation, and financial instability.

Samuel Bowles & Wendy Carlin

China — a challenge to economic theories

22 April, 2018 at 10:28 | Posted in Economics | Leave a comment

With the collapse of the Soviet Union, many intellectuals anticipated an end of history: the market and democracy would replace Gosplan and the domination of the Communist Party. Since 1989, democratic regimes have spread across most continents, and the logic of the market seems to dominate political choices that governments used to make themselves in the past. The 2010s, however, mark an inflection: authoritarian regimes that bear only a distant resemblance to the democratic ideal are multiplying. At the same time, in the economic order, a growing number of governments claim to be taking back control of the process of internationalisation.

The Russian trajectory testifies to the failure of democratisation as a preliminary to economic modernisation, and the Chinese trajectory invalidates the prognosis that would make democracy the political regime necessary for economic performance. Even in China one finds a justification for centralised power: the challenges are said to be so numerous, and the urgency so great, that the deliberations proper to democracy would not be able to meet them. The multiplicity of Chinese investments abroad makes real the possibility of an alternative to the Washington Consensus. It has therefore become essential to identify the springs, but also the weaknesses, that underlie China’s dynamism …

The United States is no longer the uncontested reference for the organisation of contemporary societies. After the collapse of the Soviet Union came the time of Japan, and today China is perceived as representing an alternative, to the point of giving rise to the idea of a Beijing consensus. Accelerated development and economic success, doubts about the virtues of democracy, and the rise of a scientific power are all assets in the eyes of various governments tempted by the verticality of political power. In fact, this “model” rests on the power of a continental economy, the role of a party-state and its place within a long tradition of the exercise of power, characteristics that all weigh against its diffusion. The weakness of other nations feeds its international expansion, but also sources of dependence that are hardly favourable to development. Finally, numerous tensions and imbalances run through Chinese society, to the point of prompting the search for a different socio-economic regime. The Chinese lesson, then, is no doubt that each society must inscribe its strategy in the long course of history, and that every model eventually meets its limits.

Robert Boyer/Le Monde

The case for a new economics

20 April, 2018 at 19:04 | Posted in Economics | Leave a comment

When the great crash hit a decade ago, the public realised that the economics profession was clueless …

After 10 years in the shadow of the crisis, the profession’s more open minds have recognised there is serious re-thinking to be done …

But the truth is that most of the “reforms” have been about adding modules to the basic template, leaving the core of the old discipline essentially intact. My view is that this is insufficient, and treats the symptoms rather than the underlying malaise …

If we accept that we need fundamental reform, what should the new economics—“de-conomics” as I’m calling it—look like?

First, we need to accept that there is no such thing as “value-free” analysis of the economy. As I’ve explained, neoclassical economics pretends to be ethically neutral while smuggling in an individualistic, anti-social ethos …

Second, the analysis needs to be based around how human beings actually operate—rather than how neoclassicism asserts that “rational economic person (or firm)” should operate …

Third, we need to put the good life centre stage, rather than prioritising the areas that are most amenable to analysis via late-19th century linear mathematics. Technological progress and power relationships between firms, workers and governments need to be at the heart of economic discourse and research …

Finally, economics needs to be pluralistic. For the last half-century neoclassical economics has been gradually colonising other social science disciplines such as sociology and political science. It is high time this process reversed itself so that there was two-way traffic and a mutually beneficial learning exchange between disciplines. It is possible—and probably desirable—that the “deconomics” of the future looks more like psychology, sociology or anthropology than it does today’s arid economics …

The change I am seeking is no more fundamental than the transition from classical to neoclassical economics, and that was accomplished without the discipline imploding. And this time around we’ve got then-unimaginable data and other resources. So there can be no excuse for delay. Let economists free themselves of a misleading map, and then—with clear eyes—look at the world anew.

Howard Reed/Prospect Magazine

Mainstream economists are of course not overjoyed when confronted with this kind of critique. Diane Coyle’s reply to Reed in Prospect Magazine is typical.

Those of us in the economics community who are impolite enough to dare question the preferred methods and models applied in mainstream economics are as a rule met with disapproval. But although people seem to get very agitated and upset by the critique — just read the commentaries on this blog if you don’t believe me — defenders of “received theory” always say that the critique is “nothing new”, that they have always been “well aware” of the problems, and so on, and so on.

So, for the benefit of Diane Coyle and all other mindless practitioners of mainstream economic modeling who don’t want to be disturbed in their doings, David Freedman has put together a very practical list of vacuous responses to criticism that can be freely used to save their peace of mind:

We know all that. Nothing is perfect … The assumptions are reasonable. The assumptions don’t matter. The assumptions are conservative. You can’t prove the assumptions are wrong. The biases will cancel. We can model the biases. We’re only doing what everybody else does. Now we use more sophisticated techniques. If we don’t do it, someone else will. What would you do? The decision-maker has to be better off with us than without us … The models aren’t totally useless. You have to do the best you can with the data. You have to make assumptions in order to make progress. You have to give the models the benefit of the doubt. Where’s the harm?

The tractability hoax in modern economics

20 April, 2018 at 11:16 | Posted in Economics | Leave a comment

While the paternity of the theoretical apparatus underlying the new neoclassical synthesis in macro is contested, there is wide agreement that the methodological framework was largely architected by Robert Lucas … Bringing a representative agent meant foregoing the possibility to tackle inequality, redistribution and justice concerns. Was it deliberate? How much does this choice owe to tractability? What macroeconomists were chasing, in these years, was a renewed explanation of the business cycle. They were trying to write microfounded and dynamic models …

Rational expectations imposed cross-equation restrictions, yet estimating these new models substantially raised the computing burden. Assuming a representative agent mitigated computational demands, and allowed macroeconomists to get away with general equilibrium aggregate issues: it made new-classical models analytically and computationally tractable …

Was tractability the main reason why Lucas embraced the representative agent (and market clearing)? Or could he have improved tractability through alternative hypotheses, leading to opposed policy conclusions? … Some macroeconomists may have endorsed the new class of Lucas-critique-proof models because they liked its policy conclusions. Others may have retained some hypotheses, then some simplifications, “because it makes the model tractable.” And while the limits of simplifying assumptions are often emphasized by those who propose them, as they spread, caveats are forgotten. Tractability restricts the range of accepted models and prevents economists from discussing some social issues, and with time, from even “seeing” them. Tractability ‘filters’ economists’ reality … The aggregate effect of “looking for tractable models” is unknown, and yet it is crucial to understand the current state of economics.

Beatrice Cherrier

Cherrier’s highly readable article underlines that the essence of mainstream (neoclassical) economic theory is its almost exclusive use of a deductivist methodology — a methodology that is more or less used without a smack of argument to justify its relevance.

The theories and models that mainstream economists construct describe imaginary worlds using a combination of formal sign systems such as mathematics and ordinary language. The descriptions made are extremely thin and to a large degree disconnected from the specific contexts of the target system that one (usually) wants to (partially) represent. This is not by chance. These closed formalistic-mathematical theories and models are constructed for the purpose of being able to deliver purportedly rigorous deductions that may somehow be exportable to the target system. By analyzing a few causal factors in their “laboratories”, they hope they can perform “thought experiments” and observe how these factors operate on their own and without impediments or confounders.

Unfortunately, this is not so. The reason is that economic causes never act in a socio-economic vacuum. Causes have to be set in a contextual structure to be able to operate. This structure has to take some form or other, but instead of incorporating structures that are true to the target system, the settings made in economic models are rather based on formalistic mathematical tractability. In the models they appear as unrealistic assumptions, usually playing a decisive role in getting the deductive machinery to deliver “precise” and “rigorous” results. This, of course, makes exporting to real-world target systems problematic, since these models — as part of a deductivist covering-law tradition in economics — are thought to deliver general and far-reaching conclusions that are externally valid. But how can we be sure the lessons learned in these theories and models have external validity when they are based on highly specific unrealistic assumptions? As a rule, the more specific and concrete the structures, the less generalizable the results. Admitting that we can in principle move from (partial) falsehoods in theories and models to truth in real-world target systems does not take us very far unless a thorough explication of the relation between theory, model and the real-world target system is made. If models assume representative actors, rational expectations, market clearing and equilibrium, and we know that real people and markets cannot be expected to obey these assumptions, the warrants for supposing that conclusions or hypotheses about causally relevant mechanisms or regularities can be bridged are obviously non-justifiable. Having a deductive warrant for things happening in a closed model is no guarantee that they are preserved when applied to an open real-world target system.

Henry Louis Mencken once wrote that “there is always an easy solution to every human problem – neat, plausible and wrong.” And mainstream economics has indeed been wrong. Very wrong. Its main result, so far, has been to demonstrate the futility of trying to build a satisfactory bridge between formalistic-axiomatic deductivist models and real-world target systems. Assuming, for example, perfect knowledge, instant market clearing and approximating aggregate behaviour with unrealistically heroic assumptions of representative actors, just will not do. The assumptions made surreptitiously eliminate the very phenomena we want to study: uncertainty, disequilibrium, structural instability and problems of aggregation and coordination between different individuals and groups.

The punch line is that most of the problems that mainstream economics is wrestling with issue from its attempts at formalistic modelling per se of social phenomena. Reducing microeconomics to refinements of hyper-rational Bayesian deductivist models is not a viable way forward. It will only sentence to irrelevance the most interesting real-world economic problems. And as someone has so wisely remarked, murder is — unfortunately — the only way to reduce biology to chemistry; reducing macroeconomics to Walrasian general equilibrium microeconomics basically means committing the same crime.

If scientific progress in economics – as Robert Lucas and other latter-day mainstream economists seem to think – lies in our ability to tell “better and better stories” without considering the realm of imagination and ideas a retreat from real-world target systems, one would, of course, expect our economics journals to be filled with articles supporting the stories with empirical evidence. However, I would argue that the journals still show a striking and embarrassing paucity of empirical studies that (try to) substantiate these theoretical claims. Equally amazing is how little is said about the relationship between the models and real-world target systems. It is as though explicit discussion, argumentation and justification on the subject were not thought to be required. Mainstream economic theory is obviously navigating in dire straits.

If the ultimate criterion of success for a deductivist system is the extent to which it predicts and coheres with (parts of) reality, modern mainstream economics seems to be a hopeless misallocation of scientific resources. To focus scientific endeavours on proving things in models is a gross misapprehension of what an economic theory ought to be about. Deductivist models and methods disconnected from reality are not relevant for predicting, explaining or understanding real-world economic target systems. These systems do not conform to the restricted closed-system structure that the mainstream modelling strategy presupposes.

Mainstream economic theory still today consists mainly in investigating economic models. It has long since given up on the real world and contents itself with proving things about thought-up worlds. Empirical evidence still only plays a minor role in mainstream economic theory, where models largely function as substitutes for empirical evidence.

What is wrong with mainstream economics is not that it employs models per se, but that it employs poor models. They are poor because they do not bridge to the real-world target system in which we live. Hopefully humbled by the manifest failure of its theoretical pretences, the one-sided, almost religious, insistence on mathematical deductivist modelling as the only scientific activity worthy of pursuing in economics will give way to methodological pluralism based on ontological considerations rather than formalistic tractability.

Marx predicted the present crisis — and points the way out

20 April, 2018 at 08:14 | Posted in Economics | Leave a comment

Marx and Engels based their manifesto on a touchingly simple answer: authentic human happiness and the genuine freedom that must accompany it. For them, these are the only things that truly matter. Their manifesto does not rely on strict Germanic invocations of duty, or appeals to historic responsibilities to inspire us to act. It does not moralise, or point its finger. Marx and Engels attempted to overcome the fixations of German moral philosophy and capitalist profit motives, with a rational, yet rousing appeal to the very basics of our shared human nature.

Key to their analysis is the ever-expanding chasm between those who produce and those who own the instruments of production. The problematic nexus of capital and waged labour stops us from enjoying our work and our artefacts, and turns employers and workers, rich and poor, into mindless, quivering pawns who are being quick-marched towards a pointless existence by forces beyond our control.

But why do we need politics to deal with this? Isn’t politics stultifying, especially socialist politics, which Oscar Wilde once claimed “takes up too many evenings”? Marx and Engels’ answer is: because we cannot end this idiocy individually; because no market can ever emerge that will produce an antidote to this stupidity. Collective, democratic political action is our only chance for freedom and enjoyment. And for this, the long nights seem a small price to pay.

Humanity may succeed in securing social arrangements that allow for “the free development of each” as the “condition for the free development of all”. But, then again, we may end up in the “common ruin” of nuclear war, environmental disaster or agonising discontent. In our present moment, there are no guarantees. We can turn to the manifesto for inspiration, wisdom and energy but, in the end, what prevails is up to us.

Yanis Varoufakis/The Guardian

Sometimes we do not know because we cannot know

18 April, 2018 at 17:11 | Posted in Economics, Statistics & Econometrics | 8 Comments

Some time ago, Bank of England’s Andrew G Haldane and Benjamin Nelson presented a paper with the title Tails of the unexpected. The main message of the paper was that we should not let ourselves be fooled by randomness:

The normal distribution provides a beguilingly simple description of the world. Outcomes lie symmetrically around the mean, with a probability that steadily decays. It is well-known that repeated games of chance deliver random outcomes in line with this distribution: tosses of a fair coin, sampling of coloured balls from a jam-jar, bets on a lottery number, games of paper/scissors/stone. Or have you been fooled by randomness?

Normality has been an accepted wisdom in economics and finance for a century or more. Yet in real-world systems, nothing could be less normal than normality. Tails should not be unexpected, for they are the rule. As the world becomes increasingly integrated – financially, economically, socially – interactions among the moving parts may make for potentially fatter tails. Catastrophe risk may be on the rise.

If public policy treats economic and financial systems as though they behave like a lottery – random, normal – then public policy risks itself becoming a lottery. Preventing public policy catastrophe requires that we better understand and plot the contours of systemic risk, fat tails and all. It also means putting in place robust fail-safes to stop chaos emerging, the sand pile collapsing, the forest fire spreading. Until then, normal service is unlikely to resume.

Since I think this is a great paper, it merits a couple of comments.

To understand real-world “non-routine” decisions and unforeseeable changes in behaviour, ergodic probability distributions are of no avail. In a world full of genuine uncertainty – where real historical time rules the roost – the probabilities that ruled the past are not those that will rule the future.

Time is what prevents everything from happening at once. To simply assume that economic processes are ergodic – and a fortiori, in any relevant sense, timeless – and to concentrate on ensemble averages is not a sensible way of dealing with the kind of genuine uncertainty that permeates open systems such as economies.

When you assume economic processes to be ergodic, ensemble and time averages are identical. Let me give an example: Assume we have a market with an asset priced at 100 €. Then imagine the price first goes up by 50% and then later falls by 50%. The ensemble average for this asset would be 100 € – because we here envision two parallel universes (markets) where the asset price falls in one universe (market) by 50% to 50 €, and in the other universe (market) rises by 50% to 150 €, giving an average of 100 € ((150+50)/2). The time average for this asset would be 75 € – because we here envision one universe (market) where the asset price first rises by 50% to 150 €, and then falls by 50% to 75 € (0.5·150).

From the ensemble perspective nothing really, on average, happens. From the time perspective lots of things really, on average, happen.
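The arithmetic is easy to check numerically. Here is a minimal sketch in ordinary Python (the set-up and variable names are mine, purely for illustration) contrasting the ensemble average over many parallel one-period markets with the outcome along a single up-then-down price path:

```python
import random

P0 = 100.0            # initial asset price in euros
UP, DOWN = 1.5, 0.5   # a +50 % move and a -50 % move

# Ensemble perspective: many parallel one-period 'universes' (markets),
# each equally likely to move up or down by 50 %.
n = 100_000
outcomes = [P0 * (UP if random.random() < 0.5 else DOWN) for _ in range(n)]
print("ensemble average:", sum(outcomes) / n)      # ~100.0

# Time perspective: a single market in which the price first rises
# by 50 % and then falls by 50 %.
print("one path, up then down:", P0 * UP * DOWN)   # 75.0
```

That the two numbers differ (roughly 100 € against exactly 75 €) is what non-ergodicity amounts to in this example: averaging over the ensemble and averaging over time are not interchangeable.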

Assuming ergodicity, there would have been no difference at all. What is important about real social and economic processes being nonergodic is that uncertainty – not risk – rules the roost. That was something both Keynes and Knight basically said in their 1921 books. Thinking about uncertainty in terms of “rational expectations” and “ensemble averages” has had seriously bad repercussions on the financial system.

Knight’s uncertainty concept has an epistemological foundation, while Keynes’ definitely has an ontological one. Of course, this also has repercussions on the issue of ergodicity in a strict methodological and mathematical-statistical sense. I think Keynes’ view is the more warranted of the two.

The most interesting and far-reaching difference between the epistemological and the ontological view is that if one subscribes to the former, Knightian view – as Taleb, Haldane & Nelson and “black swan” theorists basically do – one opens the door to the mistaken belief that with better information and greater computing power we should somehow always be able to calculate probabilities and describe the world as an ergodic universe. As Keynes convincingly argued, that is ontologically just not possible.

If probability distributions do not exist for certain phenomena, those distributions are not only not knowable, but the whole question regarding whether they can or cannot be known is beside the point. Keynes essentially says this when he asserts that sometimes they are simply unknowable.

John Davis

To Keynes, the source of uncertainty was in the nature of the real — nonergodic — world. It had to do, not only — or primarily — with the epistemological fact of us not knowing the things that today are unknown, but rather with the much deeper and far-reaching ontological fact that there often is no firm basis on which we can form quantifiable probabilities and expectations at all.

Sometimes we do not know because we cannot know.

DSGE models — overconfident macroeconomic story-telling

16 April, 2018 at 16:41 | Posted in Economics | 2 Comments

A recent paper by Christiano, Eichenbaum and Trabandt (C.E.T.) on Dynamic Stochastic General Equilibrium Models (DSGEs) has generated quite a reaction in the blogosphere …

Bradford DeLong points out that new Keynesian models were constructed to show that old Keynesian and old Monetarist policy conclusions were relatively robust, and not blown out of the water by rational expectations … The DSGE framework was then constructed so that new Keynesians could talk to RBCites. None of this has, so far, materially advanced the project of understanding the macroeconomic policy-relevant emergent properties of really existing industrial and post-industrial economies …

Lars Syll thinks that ‘rigorous’ and ‘precise’ DSGE models cannot be considered anything other than unsubstantiated conjectures as long as they aren’t supported by evidence from outside the theory or model, and no decisive empirical evidence has been presented. Advocates of DSGE modelling want to have deductively automated answers to fundamental causal questions. But to apply ‘thin’ methods we have to have ‘thick’ background knowledge of what’s going on in the real world, and not in idealised models. Conclusions can only be as certain as their premises. The modelling convention used when constructing DSGE models makes it impossible to fully incorporate things that we know are of paramount importance for understanding modern economies. Given all these fundamental problems for the use of these models and their underlying methodology, it is beyond understanding how the DSGE approach has come to be the standard approach in ‘modern’ macroeconomics. DSGE models are based on assumptions profoundly at odds with what we know about real-world economies. That also makes them little more than overconfident story-telling devoid of real scientific value …

Brian Romanchuk at Bond Economics thinks that the recent attempt at a defence by C.E.T. was such a spectacular intellectual failure that it is not worth taking seriously … One could easily raise doubts about other methodologies, but the paper by C.E.T. went completely off the rails by arguing that no other economic modelling methodology even exists.

Silvia Merler/Bruegel

