What the euro is all about

30 June, 2018 at 09:58 | Posted in Economics | 3 Comments

There are still economists and politicians out there who think that the euro is the only future for Europe. However, there seem to be some rather basic facts about optimal currency areas that it would perhaps be wise to consider …

The idea that the euro has “failed” is dangerously naive. The euro is doing exactly what its progenitor – and the wealthy 1%-ers who adopted it – predicted and planned for it to do.

That progenitor is former University of Chicago economist Robert Mundell. The architect of “supply-side economics” is now a professor at Columbia University, but I knew him through his connection to my Chicago professor, Milton Friedman …

The euro would really do its work when crises hit, Mundell explained. Removing a government’s control over currency would prevent nasty little elected officials from using Keynesian monetary and fiscal juice to pull a nation out of recession.

“It puts monetary policy out of the reach of politicians,” he said. “[And] without fiscal policy, the only way nations can keep jobs is by the competitive reduction of rules on business” …

As another Nobelist, Paul Krugman, notes, the creation of the eurozone violated the basic economic rule known as “optimum currency area”. This was a rule devised by Bob Mundell.

That doesn’t bother Mundell. For him, the euro wasn’t about turning Europe into a powerful, unified economic unit. It was about Reagan and Thatcher …

Mundell explained to me that, in fact, the euro is of a piece with Reaganomics:

“Monetary discipline forces fiscal discipline on the politicians as well.”

And when crises arise, economically disarmed nations have little to do but wipe away government regulations wholesale, privatize state industries en masse, slash taxes and send the European welfare state down the drain.

Greg Palast/The Guardian

Paul Krugman — a math-is-the-message-modeler

29 June, 2018 at 09:18 | Posted in Economics | 1 Comment

In a post on his blog, Paul Krugman argues that ‘Keynesian’ macroeconomics more than anything else “made economics the model-oriented field it has become.” In Krugman’s eyes, Keynes was a “pretty klutzy modeler,” and it was only thanks to Samuelson’s famous 45-degree diagram and Hicks’ IS-LM that things got into place. Although admitting that economists have a tendency to use “excessive math” and “equate hard math with quality,” he still vehemently defends — and always has — the mathematization of economics:

I’ve seen quite a lot of what economics without math and models looks like — and it’s not good.

Sure, ‘New Keynesian’ economists like Mankiw and Krugman — and their forerunners, ‘Keynesian’ economists like Paul Samuelson and (young) John Hicks — certainly have contributed to making economics more mathematical and “model-oriented.”

But if these math-is-the-message-modelers aren’t able to show that the mechanisms or causes that they isolate and handle in their mathematically formalized macromodels are also applicable to the real world, these mathematical models are of limited value to our understanding of real-world economies.

When it comes to modeling philosophy, Krugman defends his position in the following words (my italics):

I don’t mean that setting up and working out microfounded models is a waste of time. On the contrary, trying to embed your ideas in a microfounded model can be a very useful exercise — not because the microfounded model is right, or even better than an ad hoc model, but because it forces you to think harder about your assumptions, and sometimes leads to clearer thinking. In fact, I’ve had that experience several times.

The argument is hardly convincing. If people are going to put the enormous amount of time and energy that they do into constructing macroeconomic models, those models really have to contribute substantially to our understanding of, and our ability to explain, real macroeconomic processes.

For years Krugman has, in more than one article, criticized mainstream economics for using too much (bad) mathematics and axiomatics in its model-building endeavours. But when it comes to defending his own position on various issues, he usually ends up falling back on the same kind of models himself. In his End This Depression Now — just to take one example — Krugman maintains that although he doesn’t buy “the assumptions about rationality and markets that are embodied in many modern theoretical models, my own included,” he still finds them useful “as a way of thinking through some issues carefully.” When it comes to methodology and assumptions, Krugman obviously has a lot in common with the kind of model-building he otherwise criticizes.

If macroeconomic models – no matter of what ilk – make assumptions, and we know that real people and markets cannot be expected to obey these assumptions, the warrants for supposing that the models’ conclusions or hypotheses can be bridged to the real world are obviously non-justifiable.

A gadget is just a gadget — and brilliantly silly simple models — IS-LM included — do not help us work with the fundamental issues of modern economies any more than brilliantly silly complicated models — calibrated DSGE and RBC models included.
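Just how little machinery such a ‘gadget’ contains is easy to demonstrate. Below is a minimal IS-LM sketch in Python (a hypothetical textbook parameterization, not anyone’s estimated model): two linear equilibrium conditions solved simultaneously for income and the interest rate.

```python
import numpy as np

# A purely illustrative IS-LM 'gadget' with made-up parameters.
# IS (goods market):  Y = c0 + c1*Y + I0 - b*r + G   =>  (1 - c1)*Y + b*r = c0 + I0 + G
# LM (money market):  M/P = k*Y - h*r                =>  k*Y - h*r = M/P

c0, c1 = 200.0, 0.6   # autonomous consumption, marginal propensity to consume
I0, b = 300.0, 40.0   # autonomous investment, interest sensitivity of investment
G = 250.0             # government spending
k, h = 0.5, 60.0      # income and interest sensitivity of money demand
M_P = 500.0           # real money supply

A = np.array([[1 - c1, b],
              [k, -h]])
d = np.array([c0 + I0 + G, M_P])

Y, r = np.linalg.solve(A, d)  # simultaneous solution of the two curves
print(f"equilibrium income Y = {Y:.1f}, interest rate r = {r:.2f}")
```

Two equations, two unknowns: that is the whole gadget. Whatever pedagogical clarity such an exercise buys, nothing in it tells us whether its mechanisms are applicable to any real-world economy.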

Krugman’s formalization schizophrenia

28 June, 2018 at 10:17 | Posted in Economics | 7 Comments

In an article published last week, Nicholas Gruen criticized the modern vogue of formalization in economics and took Paul Krugman as an example:

He’s saying first that economists can’t see what isn’t in their models – whereas Hicks and pretty much every economist until the late twentieth century would have understood the need for careful and ongoing reconciliation of formal modelling and other sources of knowledge. More shockingly he’s saying that those who smell a rat at the dysfunctionality of all this should just get over themselves. To quote Krugman:

“You may not like this tendency; certainly economists tend to be too quick to dismiss what has not been formalized (although I believe that the focus on models is basically right).”

It’s ironic given how compellingly Krugman has documented the regression of macroeconomics in the same period that saw his own rise via new trade theory. I think both retrogressions were driven by formalisation at all costs, though, in the case of new classical macro, this mindset gave additional licence to the motivated reasoning of the libertarian right. In each case, economics regresses into scholastic abstractions, and obviously important parts of the story slide into pristine invisibility to the elect.

Responding to the article, Paul Krugman yesterday rode out to defend formalism in economics:

What about new trade theory? What us new trade theorists did was say, “It looks as if there’s a lot going on in world trade that can’t be explained in existing formal models. So let’s see if there’s a different approach to modeling that can make sense of what we see” …

Now, we can argue about how much good this formalization did. I still believe that the formal models provided a level of clarity and legitimacy to trade discussion that wasn’t there before.

Now, this is not — as shown in Gruen’s article — the first time Krugman has felt the urge to defend mainstream formalization. In another post up on his blog, Krugman argues that the ‘discipline of modeling’ is a sine qua non for tackling politically and emotionally charged economic issues:

In my experience, modeling is a helpful tool (among others) in avoiding that trap, in being self-aware when you’re starting to let your desired conclusions dictate your analysis. Why? Because when you try to write down a model, it often seems to lead some place you weren’t expecting or wanting to go. And if you catch yourself fiddling with the model to get something else out of it, that should set off a little alarm in your brain.

So when Krugman and other ‘modern’ mainstream economists use their models — standardly assuming rational expectations, Walrasian market clearing, unique equilibria, time invariance, linear separability and homogeneity of both inputs/outputs and technology, infinitely lived intertemporally optimizing representative agents with homothetic and identical preferences, etc. — and standardly ignoring complexity, diversity, uncertainty, coordination problems, non-market clearing prices, real aggregation problems, emergence, expectations formation, etc. — we are supposed to believe that this somehow helps them ‘to avoid motivated reasoning that validates what you want to hear’ and provide ‘legitimacy.’

Yours truly is, to say the least, far from convinced. The alarm that goes off in my brain is that this, rather than being helpful for understanding real-world economic issues, sounds more like an ill-advised plaidoyer for voluntarily taking on a methodological straitjacket of unsubstantiated and known-to-be-false assumptions.

Krugman, as we all know, can at times be a harsh critic of economic formalism. That is, of other people’s formalism. For his own formalizations, and those of other ‘New Keynesians’, he always seems to find some handy justification.

Contrary to the impression Krugman wants to convey, his modelling strategy, and that of other ‘New Keynesians’, has a lot in common with that of people like Robert Lucas and Thomas Sargent. ‘New Keynesian’ macroeconomic models build on Real Business Cycle foundations, regularly assuming representative actors, rational expectations, market clearing and equilibrium. But if we know that real people and markets cannot be expected to obey these assumptions, the warrants for supposing that conclusions or hypotheses about causally relevant mechanisms or regularities can be bridged to the real world are obviously non-justifiable.

IQ tests, genetics, and the Flynn Effect

28 June, 2018 at 09:17 | Posted in Varia | Comments Off on IQ tests, genetics, and the Flynn Effect

 

The main reason why almost all econometric models are wrong

27 June, 2018 at 11:59 | Posted in Statistics & Econometrics | 3 Comments

Since econometrics doesn’t content itself with only making optimal predictions, but also aspires to explain things in terms of causes and effects, econometricians need loads of assumptions — the most important of these being additivity and linearity. Important, simply because if they are not true, your model is invalid and descriptively incorrect. And when the model is wrong — well, then it’s wrong.

The assumption of additivity and linearity means that the outcome variable is, in reality, linearly related to any predictors … and that if you have several predictors then their combined effect is best described by adding their effects together …

This assumption is the most important because if it is not true then even if all other assumptions are met, your model is invalid because you have described it incorrectly. It’s a bit like calling your pet cat a dog: you can try to get it to go in a kennel, or to fetch sticks, or to sit when you tell it to, but don’t be surprised when its behaviour isn’t what you expect because even though you’ve called it a dog, it is in fact a cat. Similarly, if you have described your statistical model inaccurately it won’t behave itself and there’s no point in interpreting its parameter estimates or worrying about significance tests or confidence intervals: the model is wrong.

Andy Field
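To see concretely what Field is warning about, consider a minimal simulation sketch (Python, made-up data, hypothetical variable names): the true data-generating process below is multiplicative, yet we dutifully fit an additive linear model and obtain neat parameter estimates from it.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 1_000
x1 = rng.uniform(0, 2, n)
x2 = rng.uniform(0, 2, n)

# True process: the predictors interact multiplicatively -- neither additive nor linear.
y = 3.0 * x1 * x2 + rng.normal(0, 0.5, n)

# Misspecified model: y regressed on x1 and x2 entered additively.
X = sm.add_constant(np.column_stack([x1, x2]))
fit = sm.OLS(y, X).fit()
print(fit.params)    # tidy, 'significant'-looking coefficients ...
print(fit.rsquared)  # ... and a respectable fit, for a model that misdescribes the process
```

The point is not that the regression fits badly (here it fits rather well) but that interpreting its additive coefficients as descriptions of the mechanism is precisely the category mistake of calling the cat a dog: in truth the ‘effect’ of x1 depends entirely on the level of x2.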

Let me take the opportunity to elaborate a little on why I find these assumptions to be of such paramount importance, and why they ought to be much more argued for — on both epistemological and ontological grounds — if they are to be used at all.

Limiting model assumptions in economic science always have to be closely examined. If we are going to be able to show that the mechanisms or causes that we isolate and handle in our models are stable, in the sense that they do not change when we “export” them to our “target systems”, we have to be able to show that they do not hold only under ceteris paribus conditions, for then they would, a fortiori, be of only limited value to our understanding, explanation or prediction of real economic systems.

Our admiration for technical virtuosity should not blind us to the fact that we have to have a cautious attitude towards probabilistic inferences in economic contexts. We should look out for causal relations, but econometrics can never be more than a starting point in that endeavour, since econometric (statistical) explanations are not explanations in terms of mechanisms, powers, capacities or causes. Firmly stuck in an empiricist tradition, econometrics is only concerned with the measurable aspects of reality. But there is always the possibility that there are other variables – of vital importance and, although perhaps unobservable and non-additive, not necessarily epistemologically inaccessible – that were not considered for the model. The variables that were included can hence never be guaranteed to be more than potential causes, not real causes.

A rigorous application of econometric methods in economics really presupposes that the phenomena of our real-world economies are ruled by stable causal relations between variables. A perusal of the leading econom(etr)ic journals shows that most econometricians still concentrate on fixed-parameter models and that parameter values estimated in specific spatio-temporal contexts are presupposed to be exportable to totally different contexts. To warrant this assumption one, however, has to convincingly establish that the targeted acting causes are stable and invariant, so that they maintain their parametric status after the bridging. The endemic lack of predictive success of the econometric project indicates that this hope of finding fixed parameters is a hope for which there really is no other ground than hope itself.

Real-world social systems are not governed by stable causal mechanisms or capacities. The kinds of “laws” and relations that econometrics has established are laws and relations about entities in models that presuppose causal mechanisms to be atomistic and additive. When causal mechanisms operate in real-world systems, they do so only in ever-changing and unstable combinations where the whole is more than a mechanical sum of parts. If economic regularities obtain, they do so (as a rule) only because we engineered them for that purpose. Outside man-made ‘nomological machines’ they are rare, or even non-existent. Unfortunately, that also makes most of the achievements of econometrics – as most of the contemporary endeavours of mainstream economic theoretical modelling – rather useless. No matter how often you call your pet cat a dog, it is still nothing but a cat …

Uncertainty heuristics

27 June, 2018 at 09:56 | Posted in Theory of Science & Methodology | Comments Off on Uncertainty heuristics

 

Animal cruelty and human dignity

26 June, 2018 at 17:09 | Posted in Politics & Society | 1 Comment

 

Throughout European history the idea of the human being has been expressed in contradistinction to the animal. The latter’s lack of reason is the proof of human dignity. So insistently and unanimously has this antithesis been recited … that few other ideas are so fundamental to Western anthropology. The antithesis is acknowledged even today. The behaviorists only appear to have forgotten it. That they apply to human beings the same formulae and results which they wring without restraint from defenseless animals in their abominable physiological laboratories, proclaims the difference in an especially subtle way. The conclusion they draw from the mutilated animal bodies applies, not to animals in freedom, but to human beings today. By mistreating animals they announce that they, and only they in the whole of creation, function voluntarily in the same mechanical, blind, automatic way as the twitching movements of the bound victims made use of by the expert …

In this world liberated from appearance — in which human beings, having forfeited reflection, have become once more the cleverest animals, which subjugate the rest of the universe when they happen not to be tearing themselves apart — to show concern for animals is considered no longer merely sentimental but a betrayal of progress.

So much for ‘statistical objectivity’

26 June, 2018 at 15:39 | Posted in Statistics & Econometrics | 4 Comments

Last year, we recruited 29 teams of researchers and asked them to answer the same research question with the same data set. Teams approached the data with a wide array of analytical techniques, and obtained highly varied results …

All teams were given the same large data set collected by a sports-statistics firm across four major football leagues. It included referee calls, counts of how often referees encountered each player, and player demographics including team position, height and weight. It also included a rating of players’ skin colour …

Of the 29 teams, 20 found a statistically significant correlation between skin colour and red cards … Findings varied enormously, from a slight (and non-significant) tendency for referees to give more red cards to light-skinned players to a strong trend of giving more red cards to dark-skinned players …

Had any one of these 29 analyses come out as a single peer-reviewed publication, the conclusion could have ranged from no race bias in referee decisions to a huge bias.

Raphael Silberzahn & Eric Uhlmann

Research that strongly underlines that even in statistics the researcher has many degrees of freedom. In statistics — as in economics and econometrics — the results we get depend on the assumptions we make in our models. Changing those assumptions — and they often play a more important role than the data we feed into our models — leads to far-reaching changes in our conclusions. Using statistics is no guarantee of getting at any ‘objective truth.’
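The point generalizes well beyond that particular data set. Here is a minimal sketch (Python, simulated data, all variable names hypothetical) of how just two defensible-looking analytical choices, whether to adjust for a covariate and whether to log-transform the outcome, already produce a spread of estimates of ‘the’ effect from one and the same data set.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 2_000
z = rng.normal(size=n)                      # a covariate teams may or may not adjust for
x = 0.5 * z + rng.normal(size=n)            # 'treatment' variable of interest
y = 0.3 * x + 0.8 * z + rng.normal(size=n)  # outcome

def estimate(adjust_for_z: bool, log_outcome: bool) -> float:
    """Return the coefficient on x under one combination of analytic choices."""
    out = np.log(y - y.min() + 1.0) if log_outcome else y
    cols = [x, z] if adjust_for_z else [x]
    X = sm.add_constant(np.column_stack(cols))
    return sm.OLS(out, X).fit().params[1]

for adjust in (False, True):
    for log_y in (False, True):
        b = estimate(adjust, log_y)
        print(f"adjust_for_z={adjust!s:5}  log_outcome={log_y!s:5}  effect of x = {b: .3f}")
```

Four cells, four answers, and any one of them could have appeared as a single peer-reviewed ‘finding.’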

Take full responsibility for your life

26 June, 2018 at 10:14 | Posted in Varia | Comments Off on Take full responsibility for your life


Never give in. Never give up.

Arons dröm (personal)

26 June, 2018 at 10:03 | Posted in Varia | Comments Off on Arons dröm (personal)


Films can move us in many different ways. Most are exciting or amusing pastimes. But then there are also the truly great films. The ones that grip us, get deep under our skin and shake us. Forever.

Kjell-Åke Andersson’s film adaptation of Göran Tunström’s Juloratoriet — with divinely gifted music by Stefan Nilsson — is such a film. It is one of the saddest films one can see. But at the same time perhaps also the most beautiful of all. The one about the infinite strength and power of love.

Lying with statistics

25 June, 2018 at 19:25 | Posted in Statistics & Econometrics | Comments Off on Lying with statistics

 

Nonsense is nonsense

25 June, 2018 at 16:32 | Posted in Economics | 3 Comments

The Conservative belief that there is some law of nature which prevents men from being employed, that it is “rash” to employ men, and that it is financially ‘sound’ to maintain a tenth of the population in idleness for an indefinite period, is crazily improbable – the sort of thing which no man could believe who had not had his head fuddled with nonsense for years and years … Our main task, therefore, will be to confirm the reader’s instinct that what seems sensible is sensible, and what seems nonsense is nonsense. We shall try to show him that the conclusion, that if new forms of employment are offered more men will be employed, is as obvious as it sounds and contains no hidden snags; that to set unemployed men to work on useful tasks does what it appears to do, namely, increases the national wealth; and that the notion, that we shall, for intricate reasons, ruin ourselves financially if we use this means to increase our well-being, is what it looks like – a bogy.

John Maynard Keynes (1929)

Brave women

25 June, 2018 at 15:50 | Posted in Politics & Society | Comments Off on Brave women

Courage is the capability to confront fear: when standing before the powerful and mighty, not to step back, but to stand up for one’s right not to be humiliated or abused.

Courage is to do the right thing in spite of danger and fear. To keep going even when given opportunities to turn back.

Dignity, a better life, justice and the rule of law are things worth fighting for. Not stepping back, even when confronting the mighty and powerful, creates courageous acts that stay in our memories and mean something — as when this Iranian woman refuses to be harassed by self-appointed moral guardians.

The American descent

24 June, 2018 at 12:57 | Posted in Politics & Society | Comments Off on The American descent

The speed of America’s moral descent under Donald Trump is breathtaking. In a matter of months we’ve gone from a nation that stood for life, liberty and the pursuit of happiness to a nation that tears children from their parents and puts them in cages.

What’s almost equally remarkable about this plunge into barbarism is that it’s not a response to any actual problem. The mass influx of murderers and rapists that Trump talks about, the wave of crime committed by immigrants here (and, in his mind, refugees in Germany), are things that simply aren’t happening. They’re just sick fantasies being used to justify real atrocities.

And you know what this reminds me of? The history of anti-Semitism, a tale of prejudice fueled by myths and hoaxes that ended in genocide.

Paul Krugman

On the limits of ‘mediation analysis’ and ‘statistical causality’

23 June, 2018 at 23:18 | Posted in Statistics & Econometrics | 5 Comments

“Mediation analysis” is this thing where you have a treatment and an outcome and you’re trying to model how the treatment works: how much does it directly affect the outcome, and how much is the effect “mediated” through intermediate variables …

In the real world, it’s my impression that almost all the mediation analyses that people actually fit in the social and medical sciences are misguided: lots of examples where the assumptions aren’t clear and where, in any case, coefficient estimates are hopelessly noisy and where confused people will over-interpret statistical significance …

More and more I’ve been coming to the conclusion that the standard causal inference paradigm is broken … So how to do it? I don’t think traditional path analysis or other multivariate methods of the throw-all-the-data-in-the-blender-and-let-God-sort-em-out variety will do the job. Instead we need some structure and some prior information.

Andrew Gelman
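A minimal sketch (Python, simulated data) of one of the failure modes Gelman has in mind: an unobserved variable confounds the mediator-outcome relation, and the standard mediation regressions then deliver a precise but wrong decomposition, even though the treatment itself is randomized.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 5_000
u = rng.normal(size=n)                      # unobserved confounder of mediator and outcome
t = rng.binomial(1, 0.5, n).astype(float)   # randomized treatment

m = 0.5 * t + 1.0 * u + rng.normal(size=n)             # mediator
y = 0.0 * m + 1.0 * t + 1.0 * u + rng.normal(size=n)   # true direct effect 1.0, NO effect of m

# Standard mediation regression, ignoring u (it is unobserved):
X = sm.add_constant(np.column_stack([t, m]))
fit = sm.OLS(y, X).fit()
direct, via_m = fit.params[1], fit.params[2]
print(f"estimated 'direct' effect of t:    {direct:.2f}  (truth: 1.00)")
print(f"estimated coefficient on mediator: {via_m:.2f}  (truth: 0.00)")
```

The standard errors here are tiny, and the decomposition is still wrong, precisely because the mediator, unlike the treatment, was never randomized.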

Causality in social sciences — and economics — can never be solely a question of statistical inference. Causality entails more than predictability, and really explaining social phenomena in depth requires theory. Analysis of variation — the foundation of all econometrics — can never in itself reveal how these variations are brought about. Only when we are able to tie actions, processes or structures to the statistical relations detected can we say that we are getting at relevant explanations of causation.

Most facts have many different, possible, alternative explanations, but we want to find the best of all contrastive explanations (since all real explanation takes place relative to a set of alternatives). So which is the best explanation? Many scientists, influenced by statistical reasoning, think that the likeliest explanation is the best explanation. But the likelihood of x is not in itself a strong argument for thinking it explains y. I would rather argue that what makes one explanation better than another are things like aiming for and finding powerful, deep, causal features and mechanisms that we have warranted and justified reasons to believe in. Statistical reasoning — especially the variety based on a Bayesian epistemology — generally has no room for these kinds of explanatory considerations. The only thing that matters is the probabilistic relation between evidence and hypothesis. That is also one of the main reasons I find abduction — inference to the best explanation — a better description and account of what constitutes actual scientific reasoning and inference.

In the social sciences … regression is used to discover relationships or to disentangle cause and effect. However, investigators have only vague ideas as to the relevant variables and their causal order; functional forms are chosen on the basis of convenience or familiarity; serious problems of measurement are often encountered.

Regression may offer useful ways of summarizing the data and making predictions. Investigators may be able to use summaries and predictions to draw substantive conclusions. However, I see no cases in which regression equations, let alone the more complex methods, have succeeded as engines for discovering causal relationships.

David Freedman

Some statisticians and data scientists think that algorithmic formalisms somehow give them access to causality. That is, however, simply not true. Assuming ‘convenient’ things like faithfulness or stability is not to give proofs. It’s to assume what has to be proven. Deductive-axiomatic methods used in statistics do not produce evidence for causal inferences. The real causality we are searching for is the one existing in the real world around us. If there is no warranted connection between axiomatically derived theorems and the real world, well, then we haven’t really obtained the causation we are looking for.

If contributions made by statisticians to the understanding of causation are to be taken over with advantage in any specific field of inquiry, then what is crucial is that the right relationship should exist between statistical and subject-matter concerns …
The idea of causation as consequential manipulation is apt to research that can be undertaken primarily through experimental methods and, especially, to ‘practical science’ where the central concern is indeed with ‘the consequences of performing particular acts’. The development of this idea in the context of medical and agricultural research is as understandable as the development of that of causation as robust dependence within applied econometrics. However, the extension of the manipulative approach into sociology would not appear promising, other than in rather special circumstances … The more fundamental difficulty is that under the — highly anthropocentric — principle of ‘no causation without manipulation’, the recognition that can be given to the action of individuals as having causal force is in fact peculiarly limited.

John H. Goldthorpe

To save Europe we have to abandon the euro

23 June, 2018 at 12:30 | Posted in Economics | 9 Comments

The euro crisis is far from over. The tough austerity measures imposed in the eurozone have made economy after economy contract. And they have made things worse not only in the periphery countries, but also in countries like France and Germany. These are alarming facts that should be taken seriously.

The problems — created to a large extent by the euro — may not only endanger our economies, but also our democracy itself. How much whipping can democracy take? How many more are going to get seriously hurt and ruined before we end this madness and scrap the euro?

The ‘European idea’—or better: ideology—notwithstanding, the euro has split Europe in two. As the engine of an ever-closer union the currency’s balance sheet has been disastrous. Norway and Switzerland will not be joining the EU any time soon; Britain is actively considering leaving it altogether. Sweden and Denmark were supposed to adopt the euro at some point; that is now off the table. The Eurozone itself is split between surplus and deficit countries, North and South, Germany and the rest. At no point since the end of World War Two have its nation-states confronted each other with so much hostility; the historic achievements of European unification have never been so threatened …

Anyone wishing to understand how an institution such as the single currency can wreak such havoc needs a concept of money that goes beyond that of the liberal economic tradition and the sociological theory informed by it. The conflicts in the Eurozone can only be decoded with the aid of an economic theory that can conceive of money not merely as a system of signs that symbolize claims and contractual obligations, but also, in tune with Weber’s view, as the product of a ruling organization, and hence as a contentious and contested institution with distributive consequences full of potential for conflict …

Now more than ever there is a grotesque gap between capitalism’s intensifying reproduction problems and the collective energy needed to resolve them … This may mean that there is no guarantee that the people who have been so kind as to present us with the euro will be able to protect us from its consequences, or will even make a serious attempt to do so. The sorcerer’s apprentices will be unable to let go of the broom with which they aimed to cleanse Europe of its pre-modern social and anti-capitalist foibles, for the sake of a neoliberal transformation of its capitalism. The most plausible scenario for the Europe of the near and not-so-near future is one of growing economic disparities—and of increasing political and cultural hostility between its peoples, as they find themselves flanked by technocratic attempts to undermine democracy on the one side, and the rise of new nationalist parties on the other. These will seize the opportunity to declare themselves the authentic champions of the growing number of so-called losers of modernization, who feel they have been abandoned by a social democracy that has embraced the market and globalization.

Wolfgang Streeck

Great article — and it actually confirms what Wynne Godley wrote more than twenty years ago:

If a government stops having its own currency, it doesn’t just give up “control over monetary policy” as normally understood; its spending powers also become constrained in an entirely new way. If a government does not have its own central bank on which it can draw cheques freely, its expenditures can be financed only by borrowing in the open market in competition with businesses, and this may prove excessively expensive or even impossible, particularly under “conditions of extreme emergency.” If Europe is not to have a full-scale budget of its own under the new arrangements it will still have, by default, a fiscal stance of its own made up of the individual budgets of component states. The danger, then, is that the budgetary restraint to which governments are individually committed will impart a disinflationary bias that locks Europe as a whole into a depression it is powerless to lift.

Wynne Godley

Marginal productivity theory

23 June, 2018 at 12:15 | Posted in Economics | Comments Off on Marginal productivity theory

The correlation between high executive pay and good performance is “negligible”, a new academic study has found, providing reformers with fresh evidence that a shake-up of Britain’s corporate remuneration systems is overdue.

Although big company bosses enjoyed pay rises of more than 80 per cent in a decade, performance as measured by economic returns on invested capital was less than 1 per cent over the period, the paper by Lancaster University Management School says.

“Our findings suggest a material disconnect between pay and fundamental value generation for, and returns to, capital providers,” the authors of the report said.

In a study of more than a decade of data on the pay and performance of Britain’s 350 biggest listed companies, Weijia Li and Steven Young found that remuneration had increased 82 per cent in real terms over the 11 years to 2014 … The research found that the median economic return on invested capital, a preferable measure, was less than 1 per cent over the same period.

Patrick Jenkins/Financial Times

Mainstream economics textbooks usually refer to the interrelationship between technological development and education as the main causal force behind increased inequality. If the educational system (supply) develops at the same pace as technology (demand), there should be no increase, ceteris paribus, in the ratio between high-income (highly educated) groups and low-income (low education) groups. In the race between technology and education, the proliferation of skilled-biased technological change has, however, allegedly increased the premium for the highly educated group.

Another prominent explanation is that globalization – in accordance with Ricardo’s theory of comparative advantage and the Wicksell-Heckscher-Ohlin-Stolper-Samuelson factor price theory – has benefited capital in the advanced countries and labour in the developing countries. The problem with these theories is that they explicitly assume full employment and international immobility of the factors of production. Globalization means more than anything else that capital and labour have to a large extent become mobile over country borders. These mainstream trade theories are really not applicable in the world of today, and they are certainly not able to explain the international trade pattern that has developed during the last decades. Although it seems as though capital in the developed countries has benefited from globalization, it is difficult to detect a similar positive effect on workers in the developing countries.

There are, however, also some other quite obvious problems with these kinds of inequality explanations. The increase in incomes has been concentrated especially in the top 1%. If education were the main reason behind the increasing income gap, one would expect a much broader group of people in the upper echelons of the distribution to be taking part in this increase. It is dubious, to say the least, to try to explain, for example, the high wages in the finance sector with a marginal productivity argument. High-end wages seem to be more a result of pure luck, or of membership of the same ‘club’ as those who decide on the wages and bonuses, than of ‘marginal productivity.’

Mainstream economics, with its technologically determined marginal productivity theory, seems difficult to reconcile with reality. Although card-carrying neoclassical apologists like Greg Mankiw want to recall John Bates Clark’s (1899) argument that marginal productivity results in an ethically just distribution, that is not something — even if it were true — we could confirm empirically, since it is impossible to separate out the marginal contribution of any factor of production. The hypothetical ceteris paribus addition of only one factor in a production process is often heard of in textbooks, but never seen in reality.

When reading mainstream economists like Mankiw, who argue for the ‘just deserts’ of the 0.1%, one gets a strong feeling that they are ultimately trying to argue that a market economy is some kind of moral free zone where, if left undisturbed, people get what they ‘deserve.’ To most social scientists that probably smacks more of an evasive manoeuvre to explain away a very disturbing structural ‘regime shift’ that has taken place in our societies. A shift that has very little to do with ‘stochastic returns to education.’ Those were in place also 30 or 40 years ago. At that time they meant that perhaps a top corporate manager earned 10–20 times more than ‘ordinary’ people earned. Today it means that they earn 100–200 times more than ‘ordinary’ people earn. A question of education? Hardly. It is probably more a question of greed and a lost sense of a common project of building a sustainable society.

Since the race between technology and education does not seem to explain the new growing income gap – and even if technological change has become more and more capital-augmenting, it is also quite clear that not only the wages of low-skilled workers have fallen, but also the overall wage share – mainstream economists increasingly refer to ‘meritocratic extremism,’ ‘winners-take-all markets’ and ‘super-star theories’ for explanation. But this is also highly questionable.

Fans may want to pay extra to watch top-ranked athletes or movie stars performing on television and film, but corporate managers are hardly the stuff that people’s dreams are made of – and they seldom appear on television and in the movie theatres.

Everyone may prefer to employ the best corporate manager there is, but a corporate manager, unlike a movie star, can only provide his services to a limited number of customers. From the perspective of ‘super-star theories,’ a good corporate manager should earn only marginally more than an average corporate manager. Yet the average earnings of corporate managers of the biggest Swedish companies today are equivalent to the wages of forty-six blue-collar workers.

It is difficult to see the takeoff of the top executives as anything other than a reward for being a member of the same illustrious club. That their earnings should be equivalent to indispensable and fair productive contributions – marginal products – strains credulity too far. That so many corporate managers and top executives make fantastic earnings today is strong evidence that the theory is patently wrong and basically functions as a legitimizing device for indefensible and growing inequalities.

No one ought to doubt that the idea that capitalism is an expression of impartial market forces of supply and demand bears but little resemblance to actual reality. Wealth and income distribution, both individual and functional, in a market society is to an overwhelmingly high degree influenced by institutionalized political and economic norms and power relations – things that have relatively little to do with marginal productivity in complete and profit-maximizing competitive market models. Not to mention how extremely difficult, if not outright impossible, it is to empirically disentangle and measure different individuals’ contributions in the typical teamwork production that characterizes modern societies – or, especially when it comes to ‘capital,’ what it is supposed to mean and how to measure it. Remunerations do not necessarily correspond to any marginal product of different factors of production – or to ‘compensating differentials’ due to non-monetary characteristics of different jobs, natural ability, effort or chance.

Put simply – highly paid workers and corporate managers are not always highly productive workers and corporate managers, and less highly paid workers and corporate managers are not always less productive. History has over and over again disconfirmed the close connection between productivity and remuneration postulated in mainstream income distribution theory.

Neoclassical marginal productivity theory is obviously a collapsed theory from both a historical and a theoretical point of view, as shown already by Sraffa in the 1920s, and in the Cambridge capital controversy in the 1960s and 1970s.

When a theory is impossible to reconcile with facts there is only one thing to do — scrap it!

Truth and Politics

21 June, 2018 at 19:25 | Posted in Politics & Society | 1 Comment

Facts and opinions, though they must be kept apart, are not antagonistic to each other; they belong to the same realm.

But do facts, independent of opinion and interpretation, exist at all? Have not generations of historians and philosophers of history demonstrated the impossibility of ascertaining facts without interpretation, since they must first be picked out of a chaos of sheer happenings (and the principles of choice are surely not factual data) and then be fitted into a story that can be told only in a certain perspective, which has nothing to do with the original occurrence? No doubt these and a great many more perplexities inherent in the historical sciences are real, but they are no argument against the existence of factual matter, nor can they serve as a justification for blurring the dividing lines between fact, opinion, and interpretation, or as an excuse for the historian to manipulate facts as he pleases. Even if we admit that every generation has the right to write its own history, we admit no more than that it has the right to rearrange the facts in accordance with its own perspective; we don’t admit the right to touch the factual matter itself.

Hannah Arendt

Luc Ferry et les maths

21 June, 2018 at 16:44 | Posted in Varia | Comments Off on Luc Ferry et les maths

 

Haavelmo and Frisch on the limited value of econometrics

21 June, 2018 at 12:12 | Posted in Statistics & Econometrics | 1 Comment

For the sake of balancing the overly rosy picture of econometric achievements given in the usual econometrics textbooks today, it may be interesting to see how Trygve Haavelmo — with the completion (in 1958) of the twenty-fifth volume of Econometrica — assessed the role of econometrics in the advancement of economics. Although mainly positive about the “repair work” and “clearing-up work” done, Haavelmo also found some grounds for despair:

We have found certain general principles which would seem to make good sense. Essentially, these principles are based on the reasonable idea that, if an economic model is in fact “correct” or “true,” we can say something a priori about the way in which the data emerging from it must behave. We can say something, a priori, about whether it is theoretically possible to estimate the parameters involved. And we can decide, a priori, what the proper estimation procedure should be … But the concrete results of these efforts have often been a seemingly lower degree of accuracy of the would-be economic laws (i.e., larger residuals), or coefficients that seem a priori less reasonable than those obtained by using cruder or clearly inconsistent methods.

There is the possibility that the more stringent methods we have been striving to develop have actually opened our eyes to recognize a plain fact: viz., that the “laws” of economics are not very accurate in the sense of a close fit, and that we have been living in a dream-world of large but somewhat superficial or spurious correlations.

And as the quote below shows, Frisch also shared some of Haavelmo’s — and Keynes’s — doubts on the applicability of econometrics:

I have personally always been skeptical of the possibility of making macroeconomic predictions about the development that will follow on the basis of given initial conditions … I have believed that the analytical work will give higher yields – now and in the near future – if they become applied in macroeconomic decision models where the line of thought is the following: “If this or that policy is made, and these conditions are met in the period under consideration, probably a tendency to go in this or that direction is created”.

Ragnar Frisch

Real-world social systems are usually not governed by stable causal mechanisms or capacities. The kinds of ‘laws’ and relations that econometrics has established are laws and relations about entities in models that presuppose causal mechanisms and variables — and the relationships between them — to be linear, additive, homogenous, stable, invariant and atomistic. But when causal mechanisms operate in the real world, they do so only in ever-changing and unstable combinations where the whole is more than a mechanical sum of parts.

Since statisticians and econometricians have not been able to convincingly warrant their assumptions of homogeneity, stability, invariance, independence and additivity as being ontologically isomorphic to real-world economic systems, there are still strong reasons to be critical of the econometric project. There are deep epistemological and ontological problems in applying statistical methods to a basically unpredictable, uncertain, complex, unstable, interdependent and ever-changing social reality. Methods designed to analyse repeated sampling in controlled experiments under fixed conditions are not easily extended to an organic and non-atomistic world where time and history play decisive roles.
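To illustrate the exportability problem, here is a minimal sketch (Python, simulated data, made-up parameters): a slope carefully estimated in one ‘spatio-temporal context’ is exported, unchanged, to a regime where the underlying relation has shifted.

```python
import numpy as np

rng = np.random.default_rng(11)

def simulate(slope: float, n: int = 200):
    """One regime: y depends linearly on x, but the slope is regime-specific."""
    x = rng.uniform(0, 10, n)
    y = slope * x + rng.normal(0, 1, n)
    return x, y

# 'Estimation period': the relation really is stable here, slope = 2.
x0, y0 = simulate(slope=2.0)
beta = np.polyfit(x0, y0, deg=1)[0]   # carefully estimated in-sample slope

# 'Forecast period': the regime has shifted, the true slope is now 0.5.
x1, y1 = simulate(slope=0.5)
rmse = np.sqrt(np.mean((y1 - beta * x1) ** 2))

print(f"estimated slope in sample:   {beta:.2f}")
print(f"out-of-regime forecast RMSE: {rmse:.2f}  (in-regime noise sd was 1.0)")
```

Nothing in the estimation-period diagnostics would have flagged the problem; the in-sample fit is excellent. The failure shows up only when the parameter is asked to travel.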

Econometric modelling should never be a substitute for thinking.

The general line you take is interesting and useful. It is, of course, not exactly comparable with mine. I was raising the logical difficulties. You say in effect that, if one was to take these seriously, one would give up the ghost in the first lap, but that the method, used judiciously as an aid to more theoretical enquiries and as a means of suggesting possibilities and probabilities rather than anything else, taken with enough grains of salt and applied with superlative common sense, won’t do much harm. I should quite agree with that. That is how the method ought to be used.

Keynes, letter to E.J. Broster, December 19, 1939
