Original sin in economics

30 August, 2014 at 13:32 | Posted in Theory of Science & Methodology | Leave a comment

Ever since the Enlightenment various economists had been seeking to mathematise the study of the economy. In this, at least prior to the early years of the twentieth century, economists keen to mathematise their discipline felt constrained in numerous ways, and not least by pressure from (non-social) natural scientists and influential peers to conform to the ‘standards’ and procedures of (non-social) natural science, and thereby abandon any idea of constructing an autonomous tradition of mathematical economics. Especially influential, in due course, was the classical reductionist programme, the idea that all mathematical disciplines should be reduced to or based on the model of physics, in particular on the strictly deterministic approach of mechanics, with its emphasis on methods of infinitesimal calculus …

However, in the early part of the twentieth century changes occurred in the interpretation of the very nature of mathematics, changes that caused the classical reductionist programme itself to fall into disarray. With the development of relativity theory and especially quantum theory, the image of nature as continuous came to be re-examined in particular, and the role of infinitesimal calculus, which had previously been regarded as having almost ubiquitous relevance within physics, came to be re-examined even within that domain.

The outcome, in effect, was a switch away from the long-standing emphasis on mathematics as an attempt to apply the physics model, and specifically the mechanics metaphor, to an emphasis on mathematics for its own sake.

Mathematics, especially through the work of David Hilbert, became increasingly viewed as a discipline properly concerned with providing a pool of frameworks for possible realities. No longer was mathematics seen as the language of (non-social) nature, abstracted from the study of the latter. Rather, it was conceived as a practice concerned with formulating systems comprising sets of axioms and their deductive consequences, with these systems in effect taking on a life of their own. The task of finding applications was henceforth regarded as being of secondary importance at best, and not of immediate concern.

This emergence of the axiomatic method removed at a stroke various hitherto insurmountable constraints facing those who would mathematise the discipline of economics. Researchers involved with mathematical projects in economics could, for the time being at least, postpone the day of interpreting their preferred axioms and assumptions. There was no longer any need to seek the blessing of mathematicians and physicists or of other economists who might insist that the relevance of metaphors and analogies be established at the outset. In particular it was no longer regarded as necessary, or even relevant, to economic model construction to consider the nature of social reality, at least for the time being. Nor, it seemed, was it possible for anyone to insist with any legitimacy that the formulations of economists conform to any specific model already found to be successful elsewhere (such as the mechanics model in physics). Indeed, the very idea of fixed metaphors or even interpretations, came to be rejected by some economic ‘modellers’ (albeit never in any really plausible manner).

The result was that in due course deductivism in economics, through morphing into mathematical deductivism on the back of developments within the discipline of mathematics, came to acquire a new lease of life, with practitioners (once more) potentially oblivious to any inconsistency between the ontological presuppositions of adopting a mathematical modelling emphasis and the nature of social reality. The consequent rise of mathematical deductivism has culminated in the situation we find today.

Tony Lawson

On confusing research and statistics

30 August, 2014 at 13:16 | Posted in Statistics & Econometrics | Leave a comment

Coupled with downright incompetence in statistics, we often find the syndrome that I have come to call statisticism: the notion that computing is synonymous with doing research, the naïve faith that statistics is a complete or sufficient basis for scientific methodology, the superstition that statistical formulas exist for evaluating such things as the relative merits of different substantive theories or the “importance” of the causes of a “dependent variable”; and the delusion that decomposing the covariations of some arbitrary and haphazardly assembled collection of variables can somehow justify not only a “causal model” but also, praise a mark, a “measurement model.” There would be no point in deploring such caricatures of the scientific enterprise if there were a clearly identifiable sector of social science research wherein such fallacies were clearly recognized and emphatically out of bounds.

Otis Dudley Duncan: Notes on Social Measurement

On the difference between stationarity and ergodicity

30 August, 2014 at 10:25 | Posted in Statistics & Econometrics | Leave a comment

Let’s say we have a stationary process. That does not guarantee that it is also ergodic. The long-run time average of a single output function of the stationary process may not converge to the expectation of the corresponding variables — and so the long-run time average may not equal the probabilistic (expectational) average.

Say we have two coins, where coin A has a probability 1/2 of coming up heads, and coin B has a probability 1/4 of coming up heads. We pick either of these coins with probability 1/2 and then toss the chosen coin over and over again. Now let H1, H2, … be one or zero as the coin comes up heads or tails. This “process” is obviously stationary, but the time average — [H1 + ... + Hn]/n — converges to 1/2 if coin A is chosen and to 1/4 if coin B is chosen. Each of these time averages occurs with probability 1/2, so their expectational average is 1/2 x 1/2 + 1/2 x 1/4 = 3/8, which obviously is not equal to 1/2 or 1/4. The time average depends on which coin you happen to choose, while the probabilistic (expectational) average is calculated for the whole “system” consisting of both coin A and coin B.
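The two-coin process is easy to simulate; the sketch below is purely illustrative (the function name and setup are my own, not from any particular source). Each realization picks a coin once and then tosses it repeatedly, so the time average locks onto that coin's head probability, while averaging across many independent realizations recovers the ensemble mean 3/8.

```python
import random

def coin_process_time_average(n_tosses, seed=None):
    """One realization: pick coin A (p = 1/2) or coin B (p = 1/4)
    once, toss it n_tosses times, and return (p, time average)."""
    rng = random.Random(seed)
    p = 0.5 if rng.random() < 0.5 else 0.25   # the coin is chosen once, forever
    heads = sum(rng.random() < p for _ in range(n_tosses))
    return p, heads / n_tosses

# The time average of any single realization converges to 1/2 or 1/4 --
# never to the ensemble (expectational) average 1/2 * 1/2 + 1/2 * 1/4 = 3/8.
```

Running many realizations shows both facts at once: each individual time average hugs its own coin's probability, yet their cross-sectional mean is close to 3/8.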

In Dreams

29 August, 2014 at 20:44 | Posted in Varia | Leave a comment


Rom i regnet (private)

29 August, 2014 at 18:47 | Posted in Varia | Leave a comment

To A.L. — who brightened up life during three years at Linnéskolan

The Arrow-Debreu obsession

29 August, 2014 at 17:14 | Posted in Economics | 3 Comments

I’ve never yet been able to understand why the economics profession was/is so impressed by the Arrow-Debreu results. They establish that in an extremely abstract model of an economy, there exists a unique equilibrium with certain properties. The assumptions required to obtain the result make this economy utterly unlike anything in the real world. In effect, it tells us nothing at all. So why pay any attention to it? The attention, I suspect, must come from some prior fascination with the idea of competitive equilibrium, and a desire to see the world through that lens, a desire that is more powerful than the desire to understand the real world itself. This fascination really does hold a kind of deranging power over economic theorists, so powerful that they lose the ability to think in even minimally logical terms; they fail to distinguish necessary from sufficient conditions, and manage to overlook the issue of the stability of equilibria.

Mark Buchanan

Almost a century and a half after Léon Walras founded neoclassical general equilibrium theory, economists still have not been able to show that markets move economies to equilibria.

We do know that — under very restrictive assumptions — equilibria do exist, are unique and are Pareto-efficient. After reading Buchanan’s article, however, one has to ask oneself — what good does that do?

As long as we cannot show, except under exceedingly special assumptions, that there are convincing reasons to suppose there are forces which lead economies to equilibria, the value of general equilibrium theory is negligible. Without a demonstration that such forces operate — under reasonable, relevant and at least mildly realistic conditions — to move markets towards equilibria, there can be no sustainable reason for anyone to pay this theory any attention.

A stability that can only be proved by assuming “Santa Claus” conditions is of no avail. Most people do not believe in Santa Claus anymore. And for good reasons. Santa Claus is for kids, and general equilibrium economists ought to grow up.

Continuing to model a world full of agents behaving as economists — “often wrong, but never uncertain” — while still not being able to show that the system converges to equilibrium under reasonable assumptions (or simply assuming the problem away) is a gross misallocation of intellectual resources and time.

And then, of course, there is Sonnenschein-Mantel-Debreu!

So what? Why should we care about Sonnenschein-Mantel-Debreu?

Because  Sonnenschein-Mantel-Debreu ultimately explains why New Classical, Real Business Cycles, Dynamic Stochastic General Equilibrium (DSGE) and “New Keynesian” microfounded macromodels are such bad substitutes for real macroeconomic analysis!

These models try to describe and analyze complex and heterogeneous real economies with a single rational-expectations-robot-imitation-representative-agent. That is, with something that has absolutely nothing to do with reality. And — worse still — something that is not even amenable to the kind of general equilibrium analysis it is thought to provide a foundation for, since Hugo Sonnenschein (1972), Rolf Mantel (1976) and Gerard Debreu (1974) unequivocally showed that there exist no conditions on individual behaviour that would guarantee either stability or uniqueness of the equilibrium solution.

Opting for cloned representative agents that are all identical is of course not a real solution to the fallacy of composition that the Sonnenschein-Mantel-Debreu theorem points to. Representative agent models are — as I have argued at length here — rather an evasion whereby issues of distribution, coordination, heterogeneity — everything that really defines macroeconomics — are swept under the rug.

Instead of real maturity, we see that general equilibrium theory possesses only pseudo-maturity. For the description of the economic system, mathematical economics has succeeded in constructing a formalized theoretical structure, thus giving an impression of maturity, but one of the main criteria of maturity, namely, verification, has hardly been satisfied. In comparison to the amount of work devoted to the construction of the abstract theory, the amount of effort which has been applied, up to now, in checking the assumptions and statements seems inconsequential.

The core of the discipline of education finally found

29 August, 2014 at 10:32 | Posted in Theory of Science & Methodology | 1 Comment

In the latest issue of Pedagogisk Forskning i Sverige (2-3 2014), the author of the article En pedagogisk relation mellan människa och häst. På väg mot en pedagogisk filosofisk utforskning av mellanrummet [“A pedagogical relation between human and horse. Towards a pedagogical-philosophical exploration of the in-between”] offers the following interesting “programmatic declaration”:

With a posthumanist approach, I illuminate and reflect on how both human and horse transcend their beings, and how this opens up an in-between space with dimensions of subjectivity, corporeality and mutuality.

And then they say the discipline of education is in crisis. One wonders why …

Ricardo’s theory of comparative advantage in 60 seconds

29 August, 2014 at 08:19 | Posted in Economics | 3 Comments


The government’s work-first policy — let the workers pay for the crisis

28 August, 2014 at 20:22 | Posted in Economics, Politics & Society | Leave a comment

The point, then, is to put pressure on wages. Something Anders Borg — despite repeated denials during his time as finance minister — has in fact admitted more than once. “It will of course be tough for the unemployed. The purpose is to increase the pressure to seek and accept jobs,” Anders Borg told LO-tidningen in the autumn of 2004. And at an SNS seminar on the Social Democratic autumn budget he explained what this increased search pressure, this increased “labour supply”, would lead to in the long run: “Eventually, lowered benefit levels work their way through the system and new jobs appear, because wage formation is affected, which leads to lower wages.”

According to an IFAU study written by three economists led by Lars Calmfors, the effect has also been the intended one: “Wages have ended up at a lower level than they otherwise would have, as a result of the government’s earned income tax credit and the less generous unemployment insurance. It has been a very controversial issue, where the government has not been keen to say that the likely effects operate through wage formation,” he told Ekot in the summer of 2013, adding that the study could be read as showing that “wages have ended up on the order of 3-4 per cent lower today than they otherwise would have been” — not insignificant, in other words. Yet still not enough for Calmfors, who concluded that “wage differentials are too small” in an article in DN in the spring of 2014 …

That the answer to the unemployment problem is spelled lower wages and more low-wage jobs is, rather, taken as self-evident among the economists who have dominated the debate since the 1990s. If a “supply surplus” — that is, unemployment — arises, neoclassical theory holds that it is quite simply because the price, that is the wage, has been set too high.

According to economists of a Keynesian bent — who rarely get the chance to influence jobs policy these days — this theory rests, however, on a limited understanding of unemployment. Lars Pålsson Syll, professor of economic history, doctor of economics and professor of social science at Malmö högskola, argues, for example, that across-the-board wage cuts increase the risk that even more jobs are lost.

“If a single firm or sub-industry manages to get cheaper labour by lowering wages, that is no problem for society,” he tells me. “The problem arises if the wage-cutting strategy becomes widespread. For then total demand in the economy falls, and unemployment ends up even higher. On top of that comes increased inequality in incomes and welfare, which in itself has very negative effects on employment.”

How so? I ask.

Well, just look at the United States, Lars Pålsson Syll suggests.

It was precisely towards the US that Swedish economists turned their gaze in the mid-1990s, in an effort to anchor their theories in reality. During Bill Clinton’s presidency, 1992–2000, American unemployment fell towards four per cent, which was held to be because benefit levels were lower, wage differentials larger and low-wage jobs more numerous. In Sweden and the rest of Europe, by contrast, mass unemployment had become entrenched. With the US as his ideal, the doyen of Swedish economics, Assar Lindbeck, savaged the Swedish model in Ekonomisk Debatt in 1996: “generous benefits” had created “special unemployment cultures” and weakened “the dampening effect of unemployment on the rate of wage increases, which limits the demand for labour”, he wrote.

Lindbeck proposed a package of measures to Americanize Sweden: more jobs in the private service sector, driven by lower payroll taxes for “low-productivity employees” and subsidies for household services; but also more “heavy-handed” methods such as “more flexible relative wages, less generous unemployment benefits” and “watered-down employment protection legislation”. Thus the country would get back on its feet.

The economists’ enthusiasm for the US has, however, faded in recent years. Which is perhaps not so strange. The economic miracle that seemed to confirm that the road to full employment ran through tax cuts, slimmed-down social insurance and more low-wage jobs was in many ways a castle in the air. American growth, it turned out, rested mostly on a swelling credit bubble that burst in the mid-2000s, and the “wage dispersion” that was said to be good for the economy instead paved the way for a financial crisis, as workers took on ever more debt in an attempt to compensate for sagging wages.

Kent Werne

Why the theory of comparative advantage is obsolete

28 August, 2014 at 08:45 | Posted in Economics | 4 Comments

It’s World Trade Day in Stockholm today, so, of course, I had to contribute …

The classical theory of comparative advantage has driven US trade policy for the past fifty years. That policy, in combination with technical innovations that have lowered costs of transportation and communication, has opened the global economy. Yet paradoxically, this opening has rendered classical trade theory obsolete. That in turn has left the US economically vulnerable because its trade policy remains stuck in the past and based on ideas that no longer hold.

The logic behind classical free trade is that all can benefit when countries specialize in producing those things in which they have comparative advantage. The necessary requirement is that the means of production (capital and technology) are internationally immobile and stuck in each country. That is what globalization has undone.

Several years ago Jack Welch, former CEO of General Electric, captured the new reality when he talked of ideally having every plant you own on a barge. The economic logic was that factories should float between countries to take advantage of lowest costs, be they due to under-valued exchange rates, low taxes, subsidies, or a surfeit of cheap labor …

The U.S. and European response to Welch’s barge has been competitiveness policy that advocates measures such as increased education spending to improve skills; lower corporate tax rates; and investment and R&D incentives. The thinking is increased competitiveness can make Europe and the US more attractive to businesses.

Unfortunately, competitiveness policy is not up to the task of anchoring the barge, and it can even be counter-productive. The core problem is corporations are globally mobile. Thus, government can subsidize R&D spending, but the resulting innovations may simply end up in new offshore factories …

A critical consequence of Welch’s barge is the creation of a corporation versus country divide. Previously, when corporations were nationally based, profit maximization by business contributed to national economic success by ensuring efficient resource use. Today, corporations still maximize profits, but they do so from the standpoint of their global operations. Consequently, what is good for corporations may not be good for country …

Thomas Palley


As always with Palley — thought-provoking and interesting.
Regarding comparative advantage, I have argued in a similar vein in The Dialectics of Globalization (in Swedish).

