Stockholmsmelodi (personal)

30 July, 2017 at 12:44 | Posted in Varia | Comments Off on Stockholmsmelodi (personal)

 

With all due respect to Evert and Sven-Bertil, for me it is Totte's version that counts!


The Venice of the North (personal)

30 July, 2017 at 09:49 | Posted in Varia | 1 Comment

For the third time in a year yours truly will make a guest appearance in Hamburg — The Venice of the North. Regular blogging will be resumed next weekend. Tschüss!

Ways in which economists overbid their cards

29 July, 2017 at 17:58 | Posted in Theory of Science & Methodology | 1 Comment

 

Why should we care about Sonnenschein-Mantel-Debreu?

26 July, 2017 at 10:47 | Posted in Economics | 7 Comments

Along with the Arrow-Debreu existence theorem and some results on regular economies, SMD (Sonnenschein-Mantel-Debreu) theory fills in many of the gaps we might have in our understanding of general equilibrium theory …

It is also a deeply negative result. SMD theory means that assumptions guaranteeing good behavior at the microeconomic level do not carry over to the aggregate level or to qualitative features of the equilibrium. It has been difficult to make progress on the elaborations of general equilibrium theory that were put forth in Arrow and Hahn 1971 …

Given how sweeping the changes wrought by SMD theory seem to be, it is understandable that some very broad statements about the character of general equilibrium theory were made. Fifteen years after General Competitive Analysis, Arrow (1986) stated that the hypothesis of rationality had few implications at the aggregate level. Kirman (1989) held that general equilibrium theory could not generate falsifiable propositions, given that almost any set of data seemed consistent with the theory. These views are widely shared. Bliss (1993, 227) wrote that the “near emptiness of general equilibrium theory is a theorem of the theory.” Mas-Colell, Michael Whinston, and Jerry Green (1995) titled a section of their graduate microeconomics textbook “Anything Goes: The Sonnenschein-Mantel-Debreu Theorem.” There was a realization of a similar gap in the foundations of empirical economics. General equilibrium theory “poses some arduous challenges” as a “paradigm for organizing and synthesizing economic data” so that “a widely accepted empirical counterpart to general equilibrium theory remains to be developed” (Hansen and Heckman 1996). This seems to be the now-accepted view thirty years after the advent of SMD theory …

S. Abu Turab Rizvi

And so what? Why should we care about Sonnenschein-Mantel-Debreu?

Because Sonnenschein-Mantel-Debreu ultimately explains why New Classical, Real Business Cycles, Dynamic Stochastic General Equilibrium (DSGE) and “New Keynesian” microfounded macromodels are such bad substitutes for real macroeconomic analysis!

These models try to describe and analyze complex and heterogeneous real economies with a single rational-expectations-robot-imitation-representative-agent. That is, with something that has absolutely nothing to do with reality. And — worse still — something that is not even amenable to the kind of general equilibrium analysis that they are thought to give a foundation for, since Hugo Sonnenschein (1972), Rolf Mantel (1976) and Gerard Debreu (1974) unequivocally showed that there are no conditions under which assumptions about individuals guarantee either stability or uniqueness of the equilibrium solution.
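In slightly more formal terms (this is a standard textbook paraphrase of the result, not the original statements), the theorem says that if z(p) is the aggregate excess demand function of an exchange economy with n goods, then individual rationality restricts z to nothing more than:

```latex
\begin{align*}
&\text{(i)}\quad z \text{ is continuous (on prices bounded away from zero),}\\
&\text{(ii)}\quad z(\lambda p) = z(p) \ \text{ for all } \lambda > 0 \quad \text{(homogeneity of degree zero),}\\
&\text{(iii)}\quad p \cdot z(p) = 0 \quad \text{(Walras' law).}
\end{align*}
```

Conversely, any function satisfying (i)-(iii) is the aggregate excess demand of some economy with at least n consumers. That is why neither uniqueness nor stability of equilibrium can be deduced from micro-level rationality alone.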

Opting for cloned representative agents that are all identical is of course not a real solution to the fallacy of composition that the Sonnenschein-Mantel-Debreu theorem points to. Representative agent models are — as I have argued at length here — rather an evasion whereby issues of distribution, coordination, heterogeneity — everything that really defines macroeconomics — are swept under the rug.

Of course, most macroeconomists know that using a representative agent is a flagrantly illegitimate way of ignoring real aggregation issues. They keep on with their business nevertheless, simply because it significantly simplifies what they are doing. It is reminiscent — not a little — of the drunkard who has lost his keys somewhere in the dark and deliberately chooses to look for them under a neighbouring street light just because it is easier to see there …

On quantity and quality in higher education

25 July, 2017 at 10:33 | Posted in Education & School | Comments Off on On quantity and quality in higher education

Those who get into higher education should also leave with a degree, the government believes. According to Helene Hellmark Knutsson (Social Democrats), minister for higher education, universities and university colleges must “see to it that once you have been admitted to your programme and have the required qualifications, you also get the support you need to complete your studies”.

That sounds a little too easy.

The Higher Education Act’s current wording, that higher education institutions shall “actively promote and broaden recruitment”, is therefore to be changed to “actively promote broad participation in education”, according to the proposal sent out for consultation this week.

On the one hand, Hellmark Knutsson presents the proposal as an important step towards reducing socially skewed recruitment. On the other hand, the consultation document describes the change almost as a formality, an adjustment of the letter of the law to how universities and university colleges already work. Nor does the government give the institutions any additional financial resources. So which is it?

Socially skewed recruitment to higher education has proved difficult to remedy. And of course it is important that everyone who wants to study, and has what it takes to cope with the studies, also gets the chance. Regardless of background.

Universities and university colleges must provide support adapted to students’ differing circumstances — within reasonable limits. But it is not self-evident that those who are admitted also have what is actually required. It is a long time since top grades were needed to get into university. In many cases it is enough to have scraped through upper secondary school.

To pretend that there is no tension between quantity and quality is not serious. Teachers at universities and university colleges have long expressed concern that students are poorly prepared, have troubling gaps in their knowledge, and struggle to express themselves in writing.

Sydsvenskan

Swedish universities and university colleges wrestle with many problems today. Two of the more acute are how to handle shrinking finances and the fact that ever more students are poorly prepared for higher education.
Why have things turned out like this? Yours truly has repeatedly been approached by the media about these questions, and has then, beyond ‘the usual suspects’, also tried to raise an issue that — out of fear of not being ‘politically correct’ — is seldom raised in the debate.

Over the last fifty years we have seen a veritable explosion of new groups of students going on to university and university college studies. In one way this is clearly gratifying. Today we have as many doctoral students in our education system as we had upper secondary pupils in the 1950s. But this educational expansion has unfortunately largely come at the price of diminished possibilities for students to live up to the competence requirements of higher education. Many programmes have given in and lowered their demands.

Unfortunately, the students who come to our universities and university colleges are, on the whole, ever less well equipped for their studies. The restructuring of the school system in the form of decentralisation, deregulation and management by objectives has, contrary to political promises, not delivered. In step with the post-secondary educational expansion, a corresponding contraction of knowledge has taken place among large groups of students. The school policy that has led to this situation hits hardest those it claims to protect — those with little or no ‘cultural capital’ in their baggage from home.

Against this background it is actually remarkable that what the educational explosion may itself lead to has not been problematised to a greater extent.

Since our universities fifty years ago educated only a fraction of the population, it is no bold guess — assuming that ‘ability’ in a population is at least approximately normally distributed — that the lion’s share of those students lay, ability-wise, to the right of the midpoint of the normal curve. If we today admit five times as many students to our universities and university colleges, we can — under the same assumption — hardly expect an equally large share of them to be individuals to the right of the midpoint of the normal curve. Reasonably, this should — ceteris paribus — mean that as the proportion of the population going on to university and university college increases, so do the difficulties for many of them in meeting traditionally high academic standards.
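The arithmetic of that argument can be sketched in a few lines. Below is a back-of-the-envelope illustration (the admission shares are hypothetical, and ‘ability’ is assumed to be standard-normally distributed):

```python
# Back-of-the-envelope sketch: if 'ability' is standard-normally distributed
# and universities admit the top fraction q of a cohort, the average ability
# of the admitted group falls as q rises. Admission shares are hypothetical.
from scipy.stats import norm

def mean_ability_of_admitted(q):
    """Mean of a standard normal truncated to its upper tail of mass q."""
    z = norm.ppf(1 - q)       # admission cut-off in standard deviations
    return norm.pdf(z) / q    # E[X | X > z] = phi(z) / (1 - Phi(z))

for q in (0.05, 0.15, 0.30, 0.50):
    print(f"admit top {q:.0%}: cut-off {norm.ppf(1 - q):+.2f} sd, "
          f"mean ability of the admitted {mean_ability_of_admitted(q):+.2f} sd")
```

Admitting the top 5 per cent gives an admitted group averaging roughly two standard deviations above the population mean; admitting half the cohort brings that average down to about 0.8 standard deviations. The point is not the exact numbers but the direction: widening intake, other things equal, lowers the average prior attainment of those admitted.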

If so, the state would have yet another strong reason to increase the resources going to universities and university colleges, instead of, as today, running programmes on starvation rations with few teacher-led lectures in record-sized student groups. With new categories of students, increasingly recruited from homes unaccustomed to study, it is hard to see how, with ever tighter resources, we are to resolve the dilemma of higher demands on students who, in terms of prior attainment, perform ever more weakly.

The conundrum of unknown unknowns

24 July, 2017 at 17:18 | Posted in Economics | 2 Comments

Short-term weather forecasting is possible because most of the factors that determine tomorrow’s weather are, in a sense, already there … But when you look further ahead you encounter the intractable problem that, in non-linear systems, small changes in initial conditions can lead to cumulatively larger and larger changes in outcomes over time. In these circumstances imperfect knowledge may be no more useful than no knowledge at all.

Much the same is true in economics and business. What gross domestic product will be tomorrow is, like tomorrow’s rain or the 1987 hurricane, more or less already there: tomorrow’s output is already in production, tomorrow’s sales are already on the shelves, tomorrow’s business appointments already made. Big data will help us analyse this. We will know more accurately and more quickly what GDP is, we will be more successful in predicting output in the next quarter, and our estimates will be subject to fewer revisions …

Big data can help us understand the past and the present but it can help us understand the future only to the extent that the future is, in some relevant way, contained in the present. That requires a constancy of underlying structure that is true of some physical processes but can never be true of a world that contains Hitler and Napoleon, Henry Ford and Steve Jobs; a world in which important decisions or discoveries are made by processes that are inherently unpredictable and not susceptible to quantitative description.

John Kay
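Kay’s point about non-linear systems is easy to make concrete. The sketch below (my illustration, not Kay’s) runs two trajectories of the logistic map, a textbook non-linear system, from starting points that differ by one part in a million; they track each other for a while and then part company completely:

```python
# Sensitivity to initial conditions in a simple non-linear system:
# the logistic map x_{t+1} = r * x_t * (1 - x_t) in its chaotic regime (r = 4).
def logistic_trajectory(x0, r=4.0, steps=40):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.400000)
b = logistic_trajectory(0.400001)   # initial condition off by one part in a million

for t in range(0, 41, 5):
    print(f"t={t:2d}  x={a[t]:.6f}  x'={b[t]:.6f}  gap={abs(a[t]-b[t]):.6f}")
```

Beyond the short run, the imperfectly known starting point buys essentially no forecasting power, which is exactly the predicament Kay describes for GDP further out than the next quarter.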

The central problem with the present ‘machine learning’ and ‘big data’ hype is that so many — falsely — think that they can get away with analysing real world phenomena without any (commitment to) theory. But — data never speaks for itself. Without a prior statistical set-up there actually are no data at all to process. And — using a machine learning algorithm will only produce what you are looking for.

Theory matters.

When ignorance is bliss

22 July, 2017 at 22:49 | Posted in Economics | 3 Comments

The production function has been a powerful instrument of miseducation.
The student of economic theory is taught to write Q = f(L, K) where L is a quantity of labor, K a quantity of capital and Q a rate of output of commodities. He is instructed to assume all workers alike, and to measure L in man-hours of labor; he is told something about the index-number problem in choosing a unit of output; and then he is hurried on to the next question, in the hope that he will forget to ask in what units K is measured. Before he ever does ask, he has become a professor, and so sloppy habits of thought are handed on from one generation to the next.

Joan Robinson The Production Function and the Theory of Capital (1953)
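Robinson’s question about units is easy to make concrete. With heterogeneous capital goods there is no physical unit in which to add up ‘capital’; the only available aggregate is a value sum,

```latex
K \;=\; \sum_{i=1}^{m} p_i \, k_i ,
```

where the k_i are quantities of different capital goods and the p_i their prices. But the prices of produced means of production themselves depend on the wage rate and the rate of profit, that is, on distribution, so the ‘quantity of capital’ is not a technical datum that can be defined prior to, and independently of, the theory it is supposed to help establish. That, at any rate, is the gist of the point Robinson went on to press in the capital controversy.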

The ultimate insider

21 July, 2017 at 20:27 | Posted in Varia | Comments Off on The ultimate insider

 

One of my absolute favourite movies. Great, true, story. Marvelous actors — Russell Crowe, Al Pacino, Christopher Plummer. Fabulous music by e.g. Lisa Gerrard and Jan Garbarek.

What so many critiques of economics get right

21 July, 2017 at 12:39 | Posted in Economics | 2 Comments

Yours truly had a post up the other day on John Rapley’s Twilight of the Money Gods.

In the main I think Rapley is right in his attack on contemporary economics and its ‘priesthood,’ although he often seems to forget that there is — yes, it’s true — more than one approach in economics, and that his critique mainly pertains to mainstream neoclassical economics.

Noah Smith, however, is not too happy about the book:

There are certainly some grains of truth in this standard appraisal. I’ve certainly lobbed my fair share of criticism at the econ profession over the years. But the problem with critiques like Rapley’s is that they offer no real way forward for the discipline … Simply calling for humility and methodological diversity accomplishes little.

Instead, pundits should focus on what is going right in the economics discipline — because there are some very good things happening.

First, economists have developed some theories that really work. A good scientific theory makes testable predictions that apply to situations other than those that motivated the creation of the theory. Slowly, econ is building up a repertoire of these gems. One of them is auction theory … Another example is matching theory, which has made it a lot easier to get an organ transplant …

Second, economics is becoming a lot more empirical, focusing more on examining the data than on constructing yet more theories.

Noah Smith maintains that new imaginative empirical methods — such as natural experiments, field experiments, lab experiments, RCTs — help us to answer questions concerning the validity of economic theories and models.

Yours truly begs to differ. Although one, of course, has to agree with Noah’s view that discounting empirical evidence is not the right way to solve economic issues, when looked at carefully there are in fact few real reasons to share his optimism about this so-called ‘empirical revolution’ in economics.

Field studies and experiments face the same basic problem as theoretical models — they are built on rather artificial conditions and have difficulties with the ‘trade-off’ between internal and external validity. The more artificial the conditions, the more internal validity, but also the less external validity. The more we rig experiments/field studies/models to avoid ‘confounding factors’, the less the conditions are reminiscent of the real ‘target system.’ You could of course discuss field studies vs. experiments vs. theoretical models in terms of realism — but the nodal issue is not that; it is basically about how economists, using different isolation strategies in different ‘nomological machines’, attempt to learn about causal relationships. I have strong doubts about the generalizability of all three research strategies, because the probability is high that causal mechanisms are different in different contexts, and lack of homogeneity and invariance does not give us warranted export licences to ‘real’ societies or economies.

If we see experiments or field studies as theory tests or models that ultimately aspire to say something about the real ‘target system,’ then the problem of external validity is central (and was for a long time also a key reason why behavioural economists had trouble getting their research results published).

Assume that you have examined how the work performance of Chinese workers (A) is affected by some ‘treatment’ (B). How can we extrapolate/generalize to new samples outside the original population (e.g. to the US)? How do we know that any replication attempt ‘succeeds’? How do we know when these replicated experimental results can be said to justify inferences made from samples of the original population? If, for example, P(A|B) is the conditional density function for the original sample, and we are interested in making an extrapolative prediction of E[P(A|B)], how can we know that the new sample’s density function is identical with the original? Unless we can give some really good argument for this being the case, inferences built on P(A|B) do not really say anything about the target system’s P'(A|B).

As I see it, this is the heart of the matter. External validity and generalization are founded on the assumption that we can make inferences based on P(A|B) that are exportable to other populations for which P'(A|B) applies. Sure, if one can convincingly show that P and P' are similar enough, the problems are perhaps surmountable. But just arbitrarily introducing functional specification restrictions such as invariance and homogeneity is, at least for an epistemological realist, far from satisfactory. And it is — unfortunately — exactly this that I often see when I look at mainstream neoclassical economists’ models/experiments/field studies.
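To see how little an estimate travels on its own, consider a toy simulation (all numbers invented for illustration): we estimate the ‘effect’ of B in a source population and then apply the very same estimator to a target population whose conditional distribution P'(A|B) happens to differ.

```python
# Toy illustration of the external-validity problem: the relationship P(A|B)
# estimated in the source population does not carry over to a target
# population with a different P'(A|B). All numbers are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
B = rng.binomial(1, 0.5, n)                      # 'treatment' indicator

# Source population: the treatment raises the outcome by 2 on average.
A_source = 1.0 + 2.0 * B + rng.normal(0, 1, n)
# Target population: same treatment, but no effect at all (P'(A|B) differs).
A_target = 1.0 + 0.0 * B + rng.normal(0, 1, n)

effect_source = A_source[B == 1].mean() - A_source[B == 0].mean()
effect_target = A_target[B == 1].mean() - A_target[B == 0].mean()

print(f"estimated effect in the source population: {effect_source:+.2f}")
print(f"same estimator in the target population:   {effect_target:+.2f}")
```

The estimator is identical in both cases; all the work is done by the unverified assumption that P and P' coincide.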

By this I do not mean to say that empirical methods per se are so problematic that they can never be used. On the contrary, I am basically — though not without reservations — in favour of the increased use of experiments and field studies within economics. Not least as an alternative to completely barren ‘bridge-less’ axiomatic-deductive theory models. My criticism is more about aspiration levels and what we believe that we can achieve with our mediational epistemological tools and methods in the social sciences.

Many ‘experimentalists’ claim that it is easy to replicate experiments under different conditions and therefore a fortiori easy to test the robustness of experimental results. But is it really that easy? If, in the example given above, we run a test and find that our predictions were not correct — what can we conclude? That B ‘works’ in China but not in the US? That B ‘works’ in a backward agrarian society, but not in a post-modern service society? That B ‘worked’ in the field study conducted in 2008 but not in 2016? Population selection is almost never simple. Had the problem of external validity only been about inference from sample to population, this would be no critical problem. But the really interesting inferences are those we try to make from specific labs/experiments/fields to the specific real-world situations/institutions/structures that we are interested in understanding or (causally) explaining. And then the population problem is more difficult to tackle.

The increasing use of natural and quasi-natural experiments in economics during the last couple of decades has led, not only Noah Smith, but several prominent economists to triumphantly declare it as a major step on a recent path toward empirics, where instead of being a deductive philosophy, economics is now increasingly becoming an inductive science.

In randomized trials the researchers try to find out the causal effects that different variables of interest may have by changing circumstances randomly — a procedure somewhat (‘on average’) equivalent to the usual ceteris paribus assumption.

Besides the fact that ‘on average’ is not always ‘good enough,’ it amounts to nothing but hand waving to simpliciter assume, without argumentation, that it is tenable to treat social agents and relations as homogeneous and interchangeable entities.

Randomization is used basically to allow the econometrician to treat the population as consisting of interchangeable and homogeneous groups (‘treatment’ and ‘control’). The regression models one arrives at by using randomized trials tell us the average effect that variations in variable X have on the outcome variable Y, without having to explicitly control for the effects of other explanatory variables R, S, T, etc. Everything is assumed to be essentially equal except the values taken by variable X.

Limiting model assumptions in economic science always have to be closely examined. If we are going to be able to show that the mechanisms or causes we isolate and handle in our models are stable, in the sense that they do not change when we ‘export’ them to our ‘target systems,’ we have to be able to show that they hold not only under ceteris paribus conditions. If they do not, they are a fortiori of only limited value to our understanding, explanation or prediction of real economic systems.

Real-world social systems are not governed by stable causal mechanisms or capacities. The kinds of ‘laws’ and relations that econometrics has established are laws and relations about entities in models that presuppose causal mechanisms being atomistic and additive. When causal mechanisms operate in real-world social target systems, they only do so in ever-changing and unstable combinations where the whole is more than a mechanical sum of parts. If economic regularities obtain, they do so (as a rule) only because we engineered them for that purpose. Outside man-made ‘nomological machines’ they are rare, or even non-existent.

I also think that most ‘randomistas’ really underestimate the heterogeneity problem. It does not just turn up as an external validity problem when trying to ‘export’ regression results to different times or different target populations. It is also often an internal problem to the millions of regression estimates that economists produce every year.

Just as econometrics, randomization promises more than it can deliver, basically because it requires assumptions that in practice are not possible to maintain.

Like econometrics, randomization is basically a deductive method. Given the assumptions (such as manipulability, transitivity, separability, additivity, linearity, etc.) these methods deliver deductive inferences. The problem, of course, is that we will never completely know when the assumptions are right. And although randomization may contribute to controlling for confounding, it does not guarantee it, since genuine randomness presupposes infinite experimentation and we know all real experimentation is finite. And even if randomization may help to establish average causal effects, it says nothing of individual effects unless homogeneity is added to the list of assumptions. Real target systems are seldom epistemically isomorphic to our axiomatic-deductive models/systems, and even if they were, we would still have to argue for the external validity of the conclusions reached from within these epistemically convenient models/systems. Causal evidence generated by randomization procedures may be valid in ‘closed’ models, but what we usually are interested in is causal evidence in the real target system we happen to live in.
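That last point, that average causal effects are silent about individual effects unless homogeneity is assumed, can be illustrated with a toy randomized trial (invented numbers): half the population gains from the ‘treatment’, the other half is harmed, and the RCT dutifully reports a comfortably positive average.

```python
# Average treatment effect vs. heterogeneous individual effects (invented
# numbers). Half the population gains +3 from treatment, the other half
# loses 1; the RCT recovers the average effect, about +1, and is silent
# about who gains and who loses.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
gains = rng.choice([3.0, -1.0], size=n)          # individual treatment effects
treated = rng.binomial(1, 0.5, n)                # random assignment
outcome = 10.0 + treated * gains + rng.normal(0, 1, n)

ate = outcome[treated == 1].mean() - outcome[treated == 0].mean()
print(f"estimated average treatment effect: {ate:+.2f}")
print(f"share of individuals actually harmed by treatment: {(gains < 0).mean():.0%}")
```

‘On average’ the treatment looks unambiguously good, while for half the individuals it is anything but.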

When does a conclusion established in population X hold for target population Y? Only under very restrictive conditions!

‘Ideally controlled experiments’ tell us with certainty what causes what effects — but only given the right ‘closures.’ Making appropriate extrapolations from (ideal, accidental, natural or quasi) experiments to different settings, populations or target systems is not easy. ‘It works there’ is no evidence for ‘it will work here.’ Causes deduced in an experimental setting still have to show that they come with an export-warrant to the target population/system. The causal background assumptions made have to be justified, and without licences to export, the value of ‘rigorous’ and ‘precise’ methods — and ‘on-average-knowledge’ — is despairingly small.

So, no, I find it hard to share Noah Smith’s and others’ enthusiasm and optimism about the value of (quasi-)natural experiments and all the statistical-econometric machinery that comes with them. Guess I’m still waiting for the export-warrant …

I would, contrary to Noah Smith’s optimism, argue that although different ’empirical’ approaches have been — more or less — integrated into mainstream economics, there is still a long way to go before economics has become a truly empirical science.

Taking assumptions like utility maximization or market equilibrium as a matter of course leads to the ‘standing presumption in economics that, if an empirical statement is deduced from standard assumptions then that statement is reliable’ …

The ongoing importance of these assumptions is especially evident in those areas of economic research, where empirical results are challenging standard views on economic behaviour like experimental economics or behavioural finance … From the perspective of Model-Platonism, these research-areas are still framed by the ‘superior insights’ associated with early 20th century concepts, essentially because almost all of their results are framed in terms of rational individuals, who engage in optimizing behaviour and, thereby, attain equilibrium. For instance, the attitude to explain cooperation or fair behaviour in experiments by assuming an ‘inequality aversion’ integrated in (a fraction of) the subjects’ preferences is strictly in accordance with the assumption of rational individuals, a feature which the authors are keen to report …

So, while the mere emergence of research areas like experimental economics is sometimes deemed a clear sign for the advent of a new era … a closer look at these fields allows us to illustrate the enduring relevance of the Model-Platonism-topos and, thereby, shows the pervasion of these fields with a traditional neoclassical style of thought.

Jakob Kapeller

Ricardo’s trade paradigm — a formerly true theory

21 July, 2017 at 10:59 | Posted in Economics | 6 Comments

Two hundred years ago, on 19 April 1817, David Ricardo’s Principles was published. In it he presented a theory that was meant to explain why countries trade and, based on the concept of opportunity cost, how the pattern of export and import is ruled by countries exporting goods in which they have comparative advantage and importing goods in which they have a comparative disadvantage.
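The logic is easiest to see in the two-country, two-good case. The sketch below uses the labour-input figures usually attributed to Ricardo’s England and Portugal illustration (wine and cloth): even though Portugal is absolutely more productive in both goods, opportunity costs differ, and each country gains by specialising where its opportunity cost is lowest.

```python
# Labour requirements (man-years per unit of output), the figures usually
# attributed to Ricardo's England/Portugal illustration in the Principles.
labour = {
    "Portugal": {"wine": 80, "cloth": 90},    # absolutely more productive in both goods
    "England":  {"wine": 120, "cloth": 100},
}

for country, req in labour.items():
    # Opportunity cost of one unit of wine, measured in units of cloth forgone.
    oc_wine = req["wine"] / req["cloth"]
    print(f"{country}: producing 1 wine costs {oc_wine:.2f} cloth")

# Portugal: 0.89 cloth per wine; England: 1.20 cloth per wine. Portugal has
# the lower opportunity cost in wine, England in cloth, so the pattern of
# comparative advantage is: Portugal exports wine, England exports cloth.
```

Whether this logic still carries over once capital itself is internationally mobile is, of course, another matter.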

Although a great accomplishment per se, Ricardo’s theory of comparative advantage didn’t explain why the comparative advantage was the way it was. In the beginning of the 20th century, two Swedish economists — Eli Heckscher and Bertil Ohlin — presented a theory/model/theorem according to which comparative advantages arise from differences in factor endowments between countries. Countries have a comparative advantage in producing goods that intensively use the production factors that are most abundant in them. Countries would mostly export goods that use the abundant factors of production and import goods that mostly use factors of production that are scarce.

The Heckscher-Ohlin theorem — as do the elaborations on it by e.g. Vanek, Stolper and Samuelson — builds on a series of restrictive and unrealistic assumptions. The most critically important — besides the standard market-clearing equilibrium assumptions — are

(1) Countries use identical production technologies.

(2) Production takes place with a constant returns to scale technology.

(3) Within countries the factor substitutability is more or less infinite.

(4) Factor-prices are equalised (the Stolper-Samuelson extension of the theorem).

These assumptions are, as almost all empirical testing of the theorem has shown, totally unrealistic. That is, they are empirically false. 

That said, one could indeed wonder why on earth anyone should be interested in applying this theorem to real-world situations. Like so many other mainstream mathematical models taught to economics students today, this theorem has very little to do with the real world.

Using false assumptions, mainstream modelers can derive whatever conclusions they want. Wanting to show that ‘free trade is great’? Just assume, e.g., that ‘all economists from Chicago are right’ and that ‘all economists from Chicago consider free trade to be great.’ The conclusion follows by deduction — but is of course factually totally wrong. Models and theories built on that kind of reasoning are nothing but a pointless waste of time.

What mainstream economics took over from Ricardo was not only the theory of comparative advantage. The whole deductive-axiomatic approach to economics that is still at the core of mainstream methodology was taken over from Ricardo. Nothing has been more detrimental to the development of economics than going down that barren path.

Ricardo shunted the car of economic science on to the wrong track. Mainstream economics is still on that track. It’s high time to get on the right track and make economics a realist and relevant science.

This having been said, I think the most powerful argument against the Ricardian paradigm is that what counts today is not comparative advantage, but absolute advantage.

What has changed since Ricardo’s days is that the assumption of internationally immobile factors of production has become totally untenable in our globalised world. When our modern corporations maximize their profits they do it by moving capital and technologies to where it is cheapest to produce. So we are actually in a situation today where absolute — not comparative — advantage rules the roost when it comes to free trade.

And in that world, what is good for corporations is not necessarily good for nations.
