An expansionary fiscal policy is now needed

17 Dec, 2015 at 10:24 | Posted in Economics | 1 Comment

Yours truly, Lars Ahnland and Rodney Edvinsson wrote the other day in Svenska Dagbladet about the need for new thinking in monetary policy.

Hans Palmstierna and Martin Moraeus commented on the article a couple of days later.

Yesterday we wrote in our final rejoinder:

Sweden today finds itself in a situation uncomfortably similar to a liquidity trap. In such a trap monetary policy becomes ineffective, since firms stop borrowing for investment and instead pay down their existing loans. Despite negative interest rates and a strong expansion of the money supply, the Riksbank has failed to reach its inflation target. Moreover, resource utilization and purchasing power have developed weakly for many years. The only thing being inflated is housing prices. If these suddenly start to fall while debts rise in value, purchasing power, and with it the willingness to invest, will fall even further. Then the liquidity trap turns into a sinkhole …

Money in itself does not create resources, but it can act as a lubricant that puts existing resources to use. The problem today is that the private banking system has failed to create money for necessary investments. Instead, the loans go to a runaway housing market.

That is why a strongly expansionary fiscal policy, financed with new money, is now needed. The low level of resource utilization shows that there is great scope for it. This applies in particular to housing construction, which has been at a European rock-bottom level for almost twenty years. A new, climate-smart Million Programme could, for instance, solve not only the housing shortage but also alleviate a host of other problems. It would dampen the rise in housing prices, and thereby household indebtedness, and create large numbers of new jobs — also for refugees, if they are given the right training. It would also help Sweden along towards the climate goals set in Paris.

Added: And Thomas Gür is apparently upset that we point out — something well known to anyone with even a smattering of economic history — that the austerity policies of the 1930s brought about a mass unemployment that facilitated the Nazi seizure of power in Germany. Well, what is one to say? You can only shake your head …

Dani Rodrik’s blind spot (II)

16 Dec, 2015 at 18:53 | Posted in Economics, Theory of Science & Methodology | 1 Comment

As I argued in a previous post, Dani Rodrik’s Economics Rules describes economics as a more or less problem-free smorgasbord collection of models. Economics is portrayed as advancing through a judicious selection from a continually expanding library of models, models that are presented as “partial maps” or “simplifications designed to show how specific mechanisms work.”

But one of the things that’s missing in Rodrik’s view of economic models is the all-important distinction between core and auxiliary assumptions. Although Rodrik repeatedly speaks of ‘unrealistic’ or ‘critical’ assumptions, he basically just lumps them all together without differentiating between different types of assumptions, axioms or theorems. In a typical passage, Rodrik writes (p. 25):
 

Consumers are hyperrational, they are selfish, they always prefer more consumption to less, and they have a long time horizon, stretching into infinity. Economic models are typically assembled out of many such unrealistic assumptions. To be sure, many models are more realistic in one or more of these dimensions. But even in these more layered guises, other unrealistic assumptions can creep in somewhere else.

Modern mainstream (neoclassical) economists ground their models on a set of core assumptions (CA) — basically describing the agents as ‘rational’ actors — and a set of auxiliary assumptions (AA). Together CA and AA make up what I will call the ur-model (M) of all mainstream neoclassical economic models. Based on these two sets of assumptions, they try to explain and predict both individual (micro) and — most importantly — social phenomena (macro).

The core assumptions typically consist of:

CA1 Completeness — the rational actor is able to compare different alternatives and decide which one(s) he prefers

CA2 Transitivity — if the actor prefers A to B, and B to C, he must also prefer A to C.

CA3 Non-satiation — more is preferred to less.

CA4 Maximizing expected utility — in choice situations under risk (calculable uncertainty) the actor maximizes expected utility.

CA5 Consistent efficiency equilibria — the actions of different individuals are consistent, and the interaction between them results in an equilibrium.

When describing the actors as rational in these models, the concept of rationality used is instrumental rationality – consistently choosing the preferred alternative, the one judged to have the best consequences for the actor given his wishes/interests/goals, which are exogenously given in the model. How these preferences/wishes/interests/goals are formed is typically not considered to be within the realm of rationality, and a fortiori not part of economics proper.

The picture given by this set of core assumptions (rational choice) is of a rational agent with strong cognitive capacity who knows what alternatives he is facing, evaluates them carefully, calculates the consequences, and chooses the one that, given his preferences, he believes has the best consequences.

Weighing the different alternatives against each other, the actor makes a consistent optimizing (typically described as maximizing some kind of utility function) choice, and acts accordingly.
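
The core assumptions can be given a minimal computational sketch. The alternatives, probabilities and utility function below are hypothetical, chosen only to illustrate the formal structure of expected-utility maximization (CA4) on top of a complete, transitive preference ordering:

```python
# A minimal sketch of the 'rational actor' of the core assumptions:
# a complete, transitive ordering represented by a utility function,
# with choice under risk as expected-utility maximization.
# All numbers here are hypothetical illustrations.

def expected_utility(lottery, utility):
    """Expected utility of a lottery given as a list of (probability, outcome)."""
    return sum(p * utility(x) for p, x in lottery)

def choose(lotteries, utility):
    """CA1-CA4 in one line: rank all alternatives and pick the maximum."""
    return max(lotteries, key=lambda name: expected_utility(lotteries[name], utility))

# Concave utility (diminishing marginal utility, but still consistent
# with CA3: more is always preferred to less).
u = lambda x: x ** 0.5

lotteries = {
    "safe":  [(1.0, 100)],            # 100 for sure
    "risky": [(0.5, 0), (0.5, 250)],  # fair-odds gamble
}

print(choose(lotteries, u))  # prefers 'safe': 10.0 > ~7.9 in expected utility
```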

Besides the core assumptions (CA), the model also typically has a set of auxiliary assumptions (AA) spatio-temporally specifying the kind of social interaction between ‘rational actors’ that takes place in the model. These assumptions can be seen as giving answers to questions such as

AA1 who are the actors and where and when do they act

AA2 which specific goals do they have

AA3 what are their interests

AA4 what kind of expectations do they have

AA5 what are their feasible actions

AA6 what kind of agreements (contracts) can they enter into

AA7 how much and what kind of information do they possess

AA8 how do the actions of the different individuals/agents interact with each other.

So, the ur-model of all economic models basically consists of a general specification of what (axiomatically) constitutes optimizing rational agents and a more specific description of the kind of situations in which these rational actors act (making AA serve as a kind of specification/restriction of the intended domain of application for CA and its deductively derived theorems). The list of assumptions can never be complete, since there will always be unspecified background assumptions and some (often) silent omissions (like closure, transaction costs, etc., regularly based on negligibility and applicability considerations). The hope, however, is that the ‘thin’ list of assumptions shall be sufficient to explain and predict ‘thick’ phenomena in the real, complex world.

But in Rodrik’s model depiction we are essentially given the following structure,

A1, A2, … An
———————-
Theorem,

where a set of undifferentiated assumptions are used to infer a theorem.

This is, however, too vague and imprecise to be helpful, and it does not give a true picture of the usual mainstream modeling strategy, where — as I’ve argued in a previous post — there is a differentiation between a set of law-like hypotheses (CA) and a set of auxiliary assumptions (AA), giving the more adequate structure

CA1, CA2, … CAn & AA1, AA2, … AAn
———————————————–
Theorem

or,

CA1, CA2, … CAn
———————-
(AA1, AA2, … AAn) → Theorem,

more clearly underlining the function of AA as a set of (empirical, spatio-temporal) restrictions on the applicability of the deduced theorems.

This underlines the fact that specification of AA restricts the range of applicability of the deduced theorem. In the extreme cases we get

CA1, CA2, … CAn
———————
Theorem,

where the deduced theorems are analytical entities with universal and totally unrestricted applicability, or

AA1, AA2, … AAn
———————-
Theorem,

where the deduced theorem is transformed into an untestable tautological thought-experiment without any empirical commitment whatsoever beyond telling a coherent fictitious as-if story.

Not clearly differentiating between CA and AA means that Rodrik can’t make this all-important interpretative distinction, and so opens the door to unwarrantedly ‘saving’ or ‘immunizing’ models from almost any kind of critique by simple equivocation — interpreting models now as empirically empty, purely deductive-axiomatic analytical systems, now as models with explicit empirical aspirations. Flexibility is usually something people deem positive, but in this methodological context it is more a sign of trouble than of real strength. Models that are compatible with everything, or come with unspecified domains of application, are worthless from a scientific point of view.

What we do in life echoes in eternity

15 Dec, 2015 at 10:44 | Posted in Varia | 4 Comments

Courage is the capability to confront fear: when facing the powerful and mighty, not to step back, but to stand up for one’s right not to be humiliated or abused in any way by the rich and powerful.

Courage is to do the right thing in spite of danger and fear. To keep on even if opportunities to turn back are given. Like in the great stories. The ones where people have lots of chances of turning back — but don’t.

As when Sir Nicholas Winton organised the rescue of 669 children destined for Nazi concentration camps during World War II.

Or as when Ernest Shackleton, in April 1916, aboard the small boat ‘James Caird’, spent 16 days crossing 1,300 km of ocean to reach South Georgia, then trekked across the island to a whaling station, and finally was able to rescue the remaining men of the ‘Endurance’ crew left on Elephant Island.

Not a single member of the expedition died.

Not to step back — that’s what creates courageous acts that stay in our memories and mean something.

What we do in life echoes in eternity.

Dani Rodrik’s smorgasbord view of economic models (I)

14 Dec, 2015 at 15:56 | Posted in Economics | 7 Comments

Traveling by train to Stockholm during the weekend, yours truly had plenty of time to catch up on some reading.

Dani Rodrik’s Economics Rules (Oxford University Press, 2015) is one of those rare examples where a mainstream economist — instead of just looking the other way — takes his time to ponder on the tough and deep science-theoretic and methodological questions that underpin the economics discipline.

There’s much in the book I like and appreciate, but there is also a very disturbing apologetic tendency to blame all the shortcomings on the economists while depicting economics itself as a problem-free smorgasbord collection of models. If you just choose the appropriate model from the immense and varied smorgasbord, there’s no problem. It is as if all problems in economics would be conjured away if only we could make the proper model selection. To Rodrik the problem is always the economists, never economics itself. I sure wish it were that simple, but having written more than ten books on the history and methodology of economics, and having spent almost forty years among them econs, I have to confess I don’t quite recognize the picture …

Time for a change of perspective on government debt policy

14 Dec, 2015 at 13:26 | Posted in Economics | Comments Off on Time for a change of perspective on government debt policy

Yours truly and the economic historians Lars Ahnland and Rodney Edvinsson had an article in Saturday’s Svenska Dagbladet on the need for a change of perspective on government debt policy:

In the normal case there is a risk that inflation rises if a state finances itself by printing money. But today a higher rate of inflation is not a threat but a blessing. Inflation in the economy can be likened to the blood circulation of a human being. The circulation of money is what keeps economic activity alive. Just as a person’s health is threatened by both too high and too low blood pressure, the health of the economy is threatened by too high and too low inflation …

Sweden is in a particularly favourable position. The Swedish foreign debt is very small. Last year it was only about 7 per cent of GDP. This greatly limits the risk that currency speculators could sink the Swedish krona by dumping government securities and thereby inflating the value of the foreign debt.

As a rule, the size of the government debt has in itself seldom been a decisive cause of economic crises. Rather, it has been a symptom of our being in a crisis — a crisis that in all likelihood would have become even worse had the public-sector deficits not been allowed to grow.

Given the great challenges Sweden faces, the talk of responsibility for the government budget appears irresponsible. It is not an end in itself. Instead of ‘safeguarding public finances’, the government should do what it is meant to do: safeguard the future of society and take on the great challenges ahead of us.

Snart kommer änglarna att landa

13 Dec, 2015 at 20:45 | Posted in Varia | Comments Off on Snart kommer änglarna att landa

 

Ronald Fisher and the p value

11 Dec, 2015 at 09:04 | Posted in Statistics & Econometrics | Comments Off on Ronald Fisher and the p value


And who said learning statistics can’t be fun?

[Actually the Ronald Fisher appearing in the video is a mixture of the real Ronald Fisher and Jerzy Neyman and Egon Pearson, but that’s for another blogpost.]

For my own critical view on the value of p values — see e.g. here.
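
One standard caveat behind that critical view can be seen in a few lines: when the null hypothesis is true, p-values are (roughly) uniformly distributed, so about 5 per cent of null experiments come out ‘significant’ by chance alone. A minimal simulation sketch — sample size, number of trials and the seed are arbitrary illustrative choices:

```python
# When the null is TRUE, roughly 5% of tests are 'significant' anyway.
# Settings (sample size, number of trials) are arbitrary illustrations.
import random
from math import erf, sqrt
from statistics import mean, stdev

random.seed(1)  # deterministic illustration

def false_positive_rate(n=30, trials=2000, alpha=0.05):
    """Run two-sample z-tests on samples drawn from the SAME normal
    distribution and return the share of 'significant' results."""
    hits = 0
    for _ in range(trials):
        a = [random.gauss(0, 1) for _ in range(n)]
        b = [random.gauss(0, 1) for _ in range(n)]
        se = sqrt(stdev(a) ** 2 / n + stdev(b) ** 2 / n)
        z = (mean(a) - mean(b)) / se
        p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided p
        if p < alpha:
            hits += 1
    return hits / trials

print(false_positive_rate())  # close to 0.05: 'findings' from pure noise
```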

The blatant absence of empirical fit of macroeconomic models

8 Dec, 2015 at 21:00 | Posted in Economics | 7 Comments

Some months ago sorta-kinda ‘New Keynesian’ Paul Krugman argued on his blog that the problem with the academic profession is that some macroeconomists aren’t “bothered to actually figure out” how the ‘New Keynesian’ model with its Euler conditions — “based on the assumption that people have perfect access to capital markets, so that they can borrow and lend at the same rate” — really works. According to Krugman, this shouldn’t be hard at all — “at least it shouldn’t be for anyone with a graduate training in economics.”

But if people — not the representative agent — at least sometimes can’t help being off their labour supply curve — as in the real world — then of what help are the hordes of Euler equations that you find ad nauseam in these ‘New Keynesian’ macro models?

Yours truly’s doubts regarding the ‘New Keynesian’ modelers’ obsession with Euler equations is basically that, as with so many other assumptions in ‘modern’ macroeconomics, the Euler equations don’t fit reality.

In a classic paper, Hansen and Singleton (1982) found only very little support for the Euler equations, and in a later paper Canzoneri, Cumby, and Diba (2006) confirmed that there is vanishingly little support for real people acting according to the Euler equations.

In the standard neoclassical consumption model — underpinning ‘New Keynesian’ microfounded macroeconomic modeling — people are basically portrayed as treating time as a dichotomous phenomenon, today versus the future, when contemplating decisions and acting. How much should one consume today and how much in the future?

The Euler equation implies that the representative agent (consumer) is indifferent between consuming one more unit today and consuming it tomorrow instead. Importantly, this implies that, according to the neoclassical consumption model, changes in the (real) interest rate and the ratio between future and present consumption move in the same direction.
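
Written out, the standard consumption Euler equation behind this claim reads as follows (a textbook sketch; the CRRA special case is an illustrative choice, not something specific to any particular model discussed here):

```latex
% The representative agent's first-order condition: the marginal
% utility of consuming today equals the discounted, interest-adjusted
% expected marginal utility of consuming tomorrow.
u'(c_t) = \beta \, (1 + r_t) \, E_t\!\left[ u'(c_{t+1}) \right]

% With CRRA utility u(c) = c^{1-\gamma}/(1-\gamma) this becomes
E_t\!\left[ \left( \frac{c_{t+1}}{c_t} \right)^{-\gamma} \right]
  = \frac{1}{\beta \, (1 + r_t)}

% so a higher real interest rate r_t goes with higher expected
% consumption growth c_{t+1}/c_t, the positive co-movement the
% text refers to.
```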

So far, so good. But what about the real world? Is neoclassical consumption as described in these kinds of models in tune with the empirical facts? Not at all — the data and the models are as a rule inconsistent!

In the Euler equation we only have one interest rate, equated to the money market rate as set by the central bank. The crux is that — given almost any specification of the utility function — the interest rate implied by the Euler equation and the money market rate are often found to be strongly negatively correlated in the empirical literature.
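
The kind of empirical check referred to here can be sketched in a few lines. With CRRA utility the Euler-equation rate moves one-for-one with expected consumption growth, so a crude version of the check simply correlates consumption growth with the money market rate. The series below are hypothetical stand-ins, used only to show the procedure, not actual data:

```python
# Crude sketch of the standard empirical check on the Euler equation:
# correlate (log) consumption growth with the real money market rate.
# The theory predicts a clear POSITIVE association; studies such as
# Canzoneri, Cumby and Diba find the opposite. All numbers below are
# hypothetical placeholders, not actual data.
from math import log, sqrt

real_rate   = [0.030, 0.025, 0.020, 0.028, 0.015, 0.010]  # hypothetical
consumption = [100.0, 100.6, 101.7, 103.8, 104.6, 107.0]  # hypothetical

cons_growth = [log(b / a) for a, b in zip(consumption, consumption[1:])]
rates = real_rate[:-1]  # align the rate at t with growth from t to t+1

def corr(x, y):
    """Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    return cov / sqrt(vx * vy)

print(corr(rates, cons_growth))  # negative, contrary to the theory's prediction
```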

Theories are difficult to directly confront with reality. Economists therefore build models of their theories. Those models are representations that are directly examined and manipulated to indirectly say something about the target systems.

But being able to model a ‘credible world,’ a world that somehow could be considered real or similar to the real world, is not the same as investigating the real world. Even though all theories are false, since they simplify, they may still possibly serve our pursuit of truth. But then they cannot be unrealistic or false in just any way. The falsehood or unrealisticness has to be qualified.

If we cannot show that the mechanisms or causes we isolate and handle in our models are stable, in the sense that they do not change from one situation to another when we export them from our models to our target systems, then they only hold under ceteris paribus conditions and a fortiori are of limited value for our understanding, explanation and prediction of our real-world target system.

But how do mainstream economists react when confronted with the monumental absence of empirical fit of their macroeconomic models? Well, they do as they always have done — they use one of their four pet strategies for immunizing their models against the facts:

(1) Treat the model as an axiomatic system, making all its claims into tautologies — ‘true’ by the meaning of propositional connectives.

(2) Use unspecified auxiliary ceteris paribus assumptions, giving all claims put forward in the model unlimited ‘alibis.’

(3) Limit the application of the model to restricted areas where the assumptions/hypotheses/axioms are met.

(4) Leave the application of the model open, making it impossible to falsify/refute the model by facts.

Sounds great, doesn’t it?

Well, the problem is, of course, that ‘saving’ theories and models by these kinds of immunizing strategies is totally unacceptable from a scientific point of view.

If macroeconomics has nothing to say about the real world and the economic problems out there, why should we care about it? As long as no convincing justification is put forward for how the inferential bridging between model and reality de facto is made, macroeconomic modelbuilding is little more than hand waving.

The real macroeconomic challenge is to face reality and still try to explain why economic transactions take place – instead of simply conjuring the problem away by assuming rational expectations, or treating uncertainty as if it were possible to reduce it to stochastic risk, or immunizing models by treating them as purely deductive-axiomatic systems. That is scientific cheating. And it has been going on for far too long now.

 

Added December 09: In a comment on this post, we are directed to a recent post by Chris Dillow, in which it is argued that “economics is primarily a practical discipline” and since “the real world is a complex place” the solution is to pick models “that are good enough”. It is even maintained that since the world is so complex “there is a positive danger in seeking the truth.”

Well, that is in fact nothing but a (slight) variation of the usual fairy-tale told by mainstream economists in defense of their model-Platonist immunizing strategies. Dillow’s reasoning smacks a lot of Friedman’s instrumentalist immunizing strategy, in which the value of a model is said to have nothing to do with the ‘truth’ of its hypotheses (assumptions), but (only) with how good the model is at predicting things (which, if really believed in, would have put mainstream economics to rest for good more than a century ago …). In typical Chicago-economics fashion, theories and models are to be treated as something that has very little to do with any substantive content. Unfortunately, this only shows the deep ignorance of epistemological and methodological thought prevalent among mainstream economists nowadays.

My son’s absolute favourite — and mine

8 Dec, 2015 at 16:32 | Posted in Varia | Comments Off on My son’s absolute favourite — and mine

 

The law of demand — a useless tautology immunized against empirical facts

7 Dec, 2015 at 15:57 | Posted in Economics | 1 Comment

Mainstream economics is usually considered to be very ‘rigorous’ and ‘precise.’ And yes, indeed, it’s certainly full of ‘rigorous’ and ‘precise’ statements like “the state of the economy will remain the same as long as it doesn’t change.” Although ‘true,’ this is — like most other analytical statements — neither particularly interesting nor informative.

For the sphere of consumption goods, the law of demand is an essential component of the theory of consumer market behavior. With this law, a specific procedural pattern of price-dependent demand is not postulated, that is, a certain demand function, but only the general form that such a function ought to have. The quantity of the good demanded by the consumers is namely characterized as a monotone-decreasing function of its price …

As is well known, the law is usually tagged with a clause that entails numerous interpretation problems: the ceteris paribus clause. In the strict sense this must thus at least be formulated as follows to be acceptable to the majority of theoreticians: ceteris paribus – that is, all things being equal – the demanded quantity of a consumer good is a monotone-decreasing function of its price …

If the factors that are to be left constant remain undetermined, as not so rarely happens, then the law of demand under question is fully immunized to facts, because every case which initially appears contrary must, in the final analysis, be shown to be compatible with this law. The clause here produces something of an absolute alibi, since, for every apparently deviating behavior, some altered factors can be made responsible. This makes the statement untestable, and its informational content decreases to zero.

One might think that it is in any case possible to avert this situation by specifying the factors that are relevant for the clause. However, this is not the case. In an appropriate interpretation of the clause, the law of demand that comes about will become, for example, an analytic proposition, which is in fact true for logical reasons, but which is thus precisely for this reason not informative. This of course applies to any interpretation that makes the then-clause of the law of demand under question a logical consequence of its if-clause so that, in this case, an actual logical implication results … Through an explicit interpretation of the ceteris paribus clause, the law of demand is made into a tautology.

Various widespread formulations of the law of demand contain an interpretation of the clause that does not result in a tautology, but that has another weakness. The list of the factors to be held constant includes, among other things, the structure of the needs of the purchasing group in question. This leads to a difficulty connected with the identification of needs. As long as there is no independent test for the constancy of the structures of needs, any law that is formulated in this way has an absolute ‘alibi’. Any apparent counter case can be traced back to a change in the needs, and thus be discounted. Thus, in this form, the law is also immunized against empirical facts. To counter this situation, it is in fact necessary to dig deeper into the problem of needs and preferences; in many cases, however, this is held to be unacceptable, because it would entail crossing the boundaries into social psychology.

Hans Albert
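
The immunization mechanism Albert describes can be made concrete with a toy sketch. The linear demand function and the numbers are hypothetical; the point is that as long as the ‘shift’ factor covered by the ceteris paribus clause is left free, any observation whatsoever is compatible with a downward-sloping demand curve:

```python
# Toy illustration of Albert's point: a linear demand curve
# q = a - b*p (b > 0, monotone-decreasing, as the law requires),
# where 'a' bundles all the factors the ceteris paribus clause holds
# constant. If 'a' is left unspecified, EVERY observed price/quantity
# pair can be rationalized after the fact. All numbers are hypothetical.

def rationalize(p, q, b=2.0):
    """Return the shift parameter that makes q = a - b*p fit exactly."""
    return q + b * p

# Price rises from 10 to 12 and quantity ALSO rises from 30 to 35 --
# an apparent counterexample to the law of demand ...
obs = [(10.0, 30.0), (12.0, 35.0)]

# ... yet each observation lies on SOME downward-sloping curve:
shifts = [rationalize(p, q) for p, q in obs]
print(shifts)  # a different 'a' for each observation: nothing is ruled out
```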

Keynes on the limits of econometrics

7 Dec, 2015 at 14:52 | Posted in Statistics & Econometrics | 3 Comments

fraud-kit

Many thanks for sending me your article. I enjoyed it very much. I am sure these matters need discussing in that sort of way. There is one point, to which in practice I attach a great importance, you do not allude to. In many of these statistical researches, in order to get enough observations they have to be scattered over a lengthy period of time; and for a lengthy period of time it very seldom remains true that the environment is sufficiently stable. That is the dilemma of many of these enquiries, which they do not seem to me to face. Either they are dependent on too few observations, or they cannot rely on the stability of the environment. It is only rarely that this dilemma can be avoided.

Letter from J. M. Keynes to T. Koopmans, May 29, 1941

 

Bravest of the brave

5 Dec, 2015 at 16:16 | Posted in Economics, Politics & Society | Comments Off on Bravest of the brave

Edward Snowden and Daniel Ellsberg.
Bravest of the brave.
Never give in.
Never give up.

My friends — you bow to no one
 

The model of all economic models (wonkish)

3 Dec, 2015 at 19:38 | Posted in Economics | 9 Comments

Economics is, perhaps more than any other social science, model-oriented. There are many reasons for this — the history of the discipline, ideals imported from the natural sciences (especially physics), the search for universality (explaining as much as possible with as little as possible), rigour, precision, etc.

Mainstream economists want to explain social phenomena, structures and patterns, based on the assumption that the agents are acting in an optimizing (rational) way to satisfy given, stable and well-defined goals.

The procedure is analytical. The whole is broken down into its constituent parts so as to be able to explain (reduce) the aggregate (macro) as the result of interaction of its parts (micro).

Building their economic models, modern mainstream (neoclassical) economists ground their models on a set of core assumptions (CA) — describing the agents as ‘rational’ actors — and a set of auxiliary assumptions (AA). Together CA and AA make up what I will call the ur-model (M) of all mainstream neoclassical economic models. Based on these two sets of assumptions, they try to explain and predict both individual (micro) and — most importantly — social phenomena (macro).

The core assumptions typically consist of:

CA1 Completeness — the rational actor is able to compare different alternatives and decide which one(s) he prefers

CA2 Transitivity — if the actor prefers A to B, and B to C, he must also prefer A to C.

CA3 Non-satiation — more is preferred to less.

CA4 Maximizing expected utility — in choice situations under risk (calculable uncertainty) the actor maximizes expected utility.

CA5 Consistent efficiency equilibria — the actions of different individuals are consistent, and the interaction between them results in an equilibrium.

When describing the actors as rational in these models, the concept of rationality used is instrumental rationality – consistently choosing the preferred alternative, the one judged to have the best consequences for the actor given his wishes/interests/goals, which are exogenously given in the model. How these preferences/wishes/interests/goals are formed is not considered to be within the realm of rationality, and a fortiori not part of economics proper.

The picture given by this set of core assumptions (rational choice) is of a rational agent with strong cognitive capacity who knows what alternatives he is facing, evaluates them carefully, calculates the consequences, and chooses the one that, given his preferences, he believes has the best consequences.

Weighing the different alternatives against each other, the actor makes a consistent optimizing (typically described as maximizing some kind of utility function) choice, and acts accordingly.

Besides the core assumptions (CA), the model also typically has a set of auxiliary assumptions (AA) spatio-temporally specifying the kind of social interaction between ‘rational actors’ that takes place in the model. These assumptions can be seen as giving answers to questions such as

AA1 who are the actors and where and when do they act

AA2 which specific goals do they have

AA3 what are their interests

AA4 what kind of expectations do they have

AA5 what are their feasible actions

AA6 what kind of agreements (contracts) can they enter into

AA7 how much and what kind of information do they possess

AA8 how do the actions of the different individuals/agents interact with each other.

So, the ur-model of all economic models basically consists of a general specification of what (axiomatically) constitutes optimizing rational agents and a more specific description of the kind of situations in which these rational actors act (making AA serve as a kind of specification/restriction of the intended domain of application for CA and its deductively derived theorems). The list of assumptions can never be complete, since there will always be unspecified background assumptions and some (often) silent omissions (like closure, transaction costs, etc., regularly based on negligibility and applicability considerations). The hope, however, is that the ‘thin’ list of assumptions shall be sufficient to explain and predict ‘thick’ phenomena in the real, complex world.

These economic models are not primarily constructed for being able to analyze individuals and their aspirations, motivations, interests, etc., but typically for analyzing social phenomena as a kind of equilibrium that emerges through the interaction between individuals. Employing a reductionist-individualist methodological approach, macroeconomic phenomena are, analytically, given microfoundations.

Now, of course, no one takes the ur-model (and the models that build on it) as a good (or, even less, true) representation of economic reality (which would demand a high degree of appropriate conformity with the essential characteristics of the real phenomena, something that, even weighing in pragmatic aspects such as ‘purpose’ and ‘adequacy’, it is hard to see this ‘thin’ model could deliver). The model is typically seen as a kind of ‘thought-experimental’ benchmark device for enabling a rigorous, mathematically tractable illustration of how an ideal market economy functions, and for comparing that ‘ideal’ with reality. The model is supposed to supply us with analytical and explanatory power, enabling us to detect, describe and understand mechanisms and tendencies in what happens around us in real economies.

Based on the model — and on interpreting it as something more than a deductive-axiomatic system — predictions and explanations can be made and confronted with empirical data and what we think we know. If the discrepancy between model and reality is too large — ‘falsifying’ the hypotheses generated by the model — the thought is that the modeler through ‘successive approximations’ improves on the explanatory and predictive capacity of the model. 

When applying their preferred deductivist thinking in economics, mainstream neoclassical economists usually use this ur-model and its more or less tightly knit axiomatic core assumptions to set up further “as if” models from which consistent and precise inferences are made. The beauty of this procedure is of course that if the axiomatic premises are true, the conclusions necessarily follow. The snag is that if the models are to be relevant, we also have to argue that their precision and rigour still holds when they are applied to real-world situations. They often don’t. When addressing real economies, the idealizations and abstractions necessary for the deductivist machinery to work simply don’t hold.

If the real world is fuzzy, vague and indeterminate, then why should our models build upon a desire to describe it as precise and predictable? The logic of idealization that permeates the ur-model is a marvellous tool in mathematics and axiomatic-deductivist systems, but a poor guide for action in real-world systems, in which concepts and entities are without clear boundaries and continually interact and overlap.

Being told that the model is rigorous and amenable to ‘successive approximations’ to reality is of little avail, especially when the law-like (nomological) core assumptions are highly questionable and extremely difficult to test. Being able to construct ‘thought-experiments’ depicting logical possibilities doesn’t really take us very far. An obvious problem with the mainstream neoclassical ur-model is that it is formulated in such a way that it realiter is extremely difficult to empirically test and decisively evaluate whether it is ‘corroborated’ or ‘falsified.’ Such models are, from a scientific-explanatory point of view, unsatisfying. The ‘thinness’ is bought at too high a price, unless you decide to leave the intended area of application unspecified or immunize your model by interpreting it as nothing more than two sets of core and auxiliary assumptions making up a content-less theoretical system with no connection whatsoever to reality.

Seen from a deductive-nomological perspective, the ur-model (M) consists, as we have seen, of a set of more or less general (typically universal) law-like hypotheses (CA) and a set of (typically spatio-temporal) auxiliary conditions (AA). The auxiliary assumptions give ‘boundary’ descriptions such that it is possible to logically deduce (meeting the standard of validity) a conclusion (the explanandum) from the premises CA and AA. Using this kind of model, economists can be portrayed as trying to explain and predict facts by subsuming them under CA, given AA.
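Schematically, the deductive-nomological structure just described can be written as follows (the indices are mere notation for however many hypotheses and conditions the model happens to contain):

```latex
\underbrace{CA_1, \ldots, CA_m}_{\text{law-like core hypotheses}},\;
\underbrace{AA_1, \ldots, AA_n}_{\text{auxiliary conditions}}
\;\vdash\; E \quad \text{(explanandum)}
```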

This account of theories, models, explanations and predictions does not — of course — give a realistic account of actual scientific practices, but rather aspires to give an idealized account of them.

An obvious problem with the formal-logical requirements of what counts as CA is the often severely restricted reach of the ‘law.’ In the worst case it may not be applicable to any real, empirical, relevant situation at all. And if AA is not ‘true,’ then M doesn’t really explain (although it may predict) anything at all. Deductive arguments should be sound — valid and with true premises — so that we are assured of having true conclusions. Constructing models that assume ‘rational’ expectations says nothing about situations where expectations are ‘non-rational.’

Most mainstream economic models — elaborations on the ur-model — are abstract and unrealistic, and present mostly non-testable hypotheses. How then are they supposed to tell us anything about the world we live in?

And where does the drive to build those kinds of models come from?

I think one important rationale behind this kind of model building is the quest for rigour, and more precisely, logical rigour. The formalization of economics has been going on for more than a century, and with time it has become obvious that the preferred kind of formalization is one that rigorously follows the rules of formal logic. As in mathematics, this has gone hand in hand with a growing emphasis on axiomatics. Instead of trying to establish a connection between empirical data and assumptions, ‘truth’ has come to be reduced to a question of fulfilling internal consistency demands between conclusions and premises, rather than of showing a ‘congruence’ between model assumptions and reality. This has, of course, severely restricted the applicability of economic theory and models.

Not all mainstream economists subscribe to this rather outré deductive-axiomatic view of modeling, and so when confronted with the massive empirical refutations of almost every theory and model they have set up, many mainstream economists react by saying that these refutations only hit AA (the Lakatosian ‘protective belt’), and that by ‘successive approximations’ it is possible to make the theories and models less abstract and more realistic, and — eventually — more readily testable and predictively accurate. Even if CA & AA1 don’t have much empirical content, if by successive approximation we reach, say, CA & AA25, we are to believe that we can finally reach robust and true predictions and explanations.

But there are grave problems with this modeling view, too. The tendency of modelers to use the method of successive approximations as a kind of ‘immunization’ implies that it is taken for granted that there can never be any fault with CA; explanatory and predictive failures hinge solely on AA. That the CA used by mainstream economics should all be held non-defeasibly corroborated seems, however — to say the least — rather unwarranted.

Confronted with the empirical failures of their models and theories, even these mainstream economists often retreat into looking upon their models and theories as some kind of ‘conceptual exploration,’ and give up any hopes or pretensions whatsoever of relating their theories and models to the real world. Instead of trying to bridge the gap between models and the world, one decides to look the other way. But restricting the analytical activity to examining and making inferences within the models is tantamount to treating the models as self-contained substitute systems, rather than as surrogate systems that the modeler uses to indirectly understand or explain the real target system.

Trying to develop a science where we want to be better equipped to explain and understand real societies and economies, it sure can’t be enough to prove or deduce things in model worlds. If theories and models do not — directly or indirectly — tell us anything of the world we live in, then why should we waste time on them?

Three symptoms of the sorry state of economics

2 Dec, 2015 at 16:46 | Posted in Economics | 3 Comments

1. The best-selling economic book explains Sumo, but not economics.

Freakonomics has sold more than 4 million copies, making it one of the best-selling economics books in history. It tells us, for example, that Sumo wrestlers are likely to throw matches when their opponent is in danger of losing status with a loss. Freakonomics is, however, silent on monetary or fiscal policy. This is not a negative statement about the book or the authors, but it is a negative statement about the field. Where is the best-selling book that correctly explains how to grow the economy?

2. Nobel Prize winner Professor Harry Markowitz does not use his own theory.

Professor Harry Markowitz won his Nobel Prize for a theory on how to make investments. When investors decide to buy stocks or bonds, for example, Professor Markowitz’s theory argues the optimal mix requires examination not only of historic risk and return, but also the correlation (or co-variance) between returns.

Does Professor Markowitz use his theory when he buys stocks and bonds? No. He splits his money 50-50. He is quoted as saying, “I should have computed the historical co-variances” but through psychological introspection he instead just split his money equally into stocks and bonds.
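For what it is worth, here is a minimal sketch of what ‘computing the historical co-variances’ would actually involve. The return figures are made up for illustration, and the global minimum-variance portfolio shown is just one textbook instance of mean-variance optimization, not Markowitz’s own calculation:

```python
import numpy as np

# Hypothetical monthly returns for two assets (stocks, bonds).
# These numbers are invented purely for illustration.
returns = np.array([
    [ 0.04, 0.010],
    [-0.02, 0.020],
    [ 0.03, 0.000],
    [ 0.05, 0.010],
    [-0.01, 0.015],
])

cov = np.cov(returns, rowvar=False)   # historical covariance matrix
ones = np.ones(2)

# Global minimum-variance weights: w = Σ⁻¹1 / (1ᵀΣ⁻¹1)
w_mv = np.linalg.solve(cov, ones)
w_mv /= w_mv.sum()

w_naive = np.array([0.5, 0.5])        # the 50-50 split Markowitz actually used

var_mv = w_mv @ cov @ w_mv            # portfolio variance under each rule
var_naive = w_naive @ cov @ w_naive
print(w_mv, var_mv, var_naive)
```

By construction the minimum-variance mix can never have higher historical variance than the naive 50-50 split, which is precisely the gap between the theory and its author’s own practice.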

3. Nobel Prize winners Professor Myron Scholes and Robert C. Merton did use their own theory and almost blew up the world.

Q: What is worse than an economist who doesn’t use his Nobel prize winning theory?
A: Economists who do use their theory (and almost collapse the world economy).

Professor Myron S. Scholes and Robert C. Merton won the Nobel Prize in 1997. Both men were principals in the hedge fund Long-Term Capital Management (LTCM). Soon after their Nobel Prizes, LTCM went bust …

Economics is a lost field. More than 200 years after Adam Smith wrote the Wealth of Nations, economics has no answer to the most important economic questions. Fields go through periods of growth and periods of stasis; I believe we are in a period of prolonged stasis in that we do not know more than we did 10 or 100 years ago.

Terry Burnham

 

One of my favourite classics (1)

2 Dec, 2015 at 16:12 | Posted in Economics | Comments Off on One of my favourite classics (1)

 

An absolute must-read for every social scientist — but if you don’t have the time, here’s the crash course illustration of Schelling’s ‘critical mass’ models:
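For readers who want the tipping logic in miniature: below is a toy threshold model of the ‘critical mass’ idea, where each agent joins once the share of participants reaches his personal threshold. The threshold values are invented for illustration and this is a sketch of the general mechanism, not Schelling’s own specification:

```python
def equilibrium_share(thresholds, start_share=0.0, steps=100):
    """Iterate best responses until the participation share settles."""
    share = start_share
    n = len(thresholds)
    for _ in range(steps):
        # Everyone whose threshold is already met participates next round.
        new_share = sum(t <= share for t in thresholds) / n
        if new_share == share:
            break
        share = new_share
    return share

# A population with a couple of unconditional joiners (threshold 0)
# tips, step by step, into full participation ...
with_seeds = [0.0, 0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
print(equilibrium_share(with_seeds))   # cascades to 1.0

# ... while the same population without them never gets moving.
no_seeds = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
print(equilibrium_share(no_seeds))     # stays at 0.0
```

The point of the exercise is that tiny differences in the distribution of thresholds, not in average preferences, decide whether the process cascades or stalls.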

