The root of all evil

17 oktober, 2014 kl. 18:49 | Publicerat i Politics & Society | Kommentarer inaktiverade för The root of all evil

 

The UKIP Councillor for Henley on Thames in Oxfordshire has written an odd, homophobic letter to a local newspaper.

David Silvester, who resigned from the Conservative Party over David Cameron’s same-sex marriage policy, has said gay marriage is to blame for Britain’s recent spell of bad weather in a letter to The Henley Standard.

He wrote: ”Since the passage of the Marriage (Same Sex Couples) Act, the nation has been beset by serious storms and floods.”

Huffington Post

Wow! Who would have thought anything like that?
Impressive indeed …
Isn’t it just splendid to have all these intelligent and unprejudiced conservative politicians …

The wall that took a tumble

17 oktober, 2014 kl. 18:10 | Publicerat i Politics & Society | 4 kommentarer

 
[Photo: yours truly at Café Einstein, Berlin, summer 1988]
Photo by barnilsson

During my first ten years traveling back and forth between Lund and Berlin it was still there, even when this photo of yours truly — leisurely reading taz — was taken at Café Einstein back in the summer of 1988.

Had anyone told me then that the wall would soon come tumbling down, I would probably just have shaken my head and laughed. At the time everyone thought it was there for good.

For twenty-five years now I’ve been happy we were all so wrong, so wrong.
 

Regression analysis of how corruption harms investment and growth

17 oktober, 2014 kl. 14:45 | Publicerat i Statistics & Econometrics | Kommentarer inaktiverade för Regression analysis of how corruption harms investment and growth

 

Questo tempo grigio mi piace

17 oktober, 2014 kl. 08:29 | Publicerat i Varia | Kommentarer inaktiverade för Questo tempo grigio mi piace

 

The Holy Grail of econometrics — ‘true models’

16 oktober, 2014 kl. 10:16 | Publicerat i Economics | Kommentarer inaktiverade för The Holy Grail of econometrics — ‘true models’

Having mastered all the technicalities of regression analysis and econometrics, students often feel as though they are masters of the universe. I usually cool them down with a required reading of Christopher Achen’s modern classic Interpreting and Using Regression. It usually gets them back on track, and they understand that

no increase in methodological sophistication … alter the fundamental nature of the subject. It remains a wondrous mixture of rigorous theory, experienced judgment, and inspired guesswork. And that, finally, is its charm.

When giving an introductory econometrics course, yours truly usually asks the students at the exam to explain how one should correctly interpret p-values. Although the correct definition is p(data|null hypothesis), a majority of the students misinterpret the p-value either as the likelihood of a sampling error (which of course is wrong, since the very computation of the p-value is based on the assumption that sampling errors are what cause the sample statistic not to coincide with the null hypothesis) or as the probability of the null hypothesis being true, given the data (which of course is also wrong, since that would be p(null hypothesis|data) rather than the correct p(data|null hypothesis)).
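To see how far apart p(data|null hypothesis) and p(null hypothesis|data) can be, here is a minimal sketch. The prior probability of the null and the power of the test are my own illustrative assumptions, not anything taken from the exam or from Achen:

```python
# Why p(data|H0) is not p(H0|data): Bayes' rule also needs a prior and the
# test's power. The numbers below are illustrative assumptions only.

alpha = 0.05       # significance threshold: P(significant result | H0 true)
power = 0.80       # assumed P(significant result | H0 false)
prior_h0 = 0.50    # assumed prior probability that the null hypothesis is true

p_significant = alpha * prior_h0 + power * (1 - prior_h0)
p_h0_given_significant = alpha * prior_h0 / p_significant

print(f"P(significant | H0) = {alpha:.2f}")
print(f"P(H0 | significant) = {p_h0_given_significant:.2f}")
# With these numbers the two differ only a little (0.05 vs about 0.06), but with
# prior_h0 = 0.9 and power = 0.5 the second becomes about 0.47, nowhere near 0.05.
```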

This is not to be blamed on the students’ ignorance, but rather on significance testing not being particularly transparent – conditional probability inference is difficult even for those of us who teach and practice it. A lot of researchers fall prey to the same mistakes. So – given that it is in any case very unlikely that any population parameter is exactly zero, and that, contrary to assumption, most samples in social science and economics are neither random nor have the right distributional shape – why continue to press students and researchers to do null hypothesis significance testing, a kind of testing that relies on a weird backward logic that students and researchers usually don’t understand? As Achen writes:

Significance testing as a search for specification errors substitutes calculations for substantive thinking. Worse, it channels energy toward the hopeless search for functionally correct specifications and diverts attention from the real tasks, which are to formulate a manageable description of the data and to exclude competing ones.

Modern macroeconomics and the perils of using ‘Mickey Mouse’ models

15 oktober, 2014 kl. 10:23 | Publicerat i Economics | 4 kommentarer

The techniques we use affect our thinking in deep and not always conscious ways. This was very much the case in macroeconomics in the decades preceding the crisis. The techniques were best suited to a worldview in which economic fluctuations occurred but were regular, and essentially self correcting. The problem is that we came to believe that this was indeed the way the world worked.

To understand how that view emerged, one has to go back to the so-called rational expectations revolution of the 1970s … These techniques however made sense only under a vision in which economic fluctuations were regular enough so that, by looking at the past, people and firms (and the econometricians who apply statistics to economics) could understand their nature and form expectations of the future, and simple enough so that small shocks had small effects and a shock twice as big as another had twice the effect on economic activity. The reason for this assumption, called linearity, was technical: models with nonlinearities—those in which a small shock, such as a decrease in housing prices, can sometimes have large effects, or in which the effect of a shock depends on the rest of the economic environment—were difficult, if not impossible, to solve under rational expectations.

Thinking about macroeconomics was largely shaped by those assumptions. We in the field did think of the economy as roughly linear, constantly subject to different shocks, constantly fluctuating, but naturally returning to its steady state over time …

From the early 1980s on, most advanced economies experienced what has been dubbed the “Great Moderation,” a steady decrease in the variability of output and its major components—such as consumption and investment … Whatever caused the Great Moderation, for a quarter century the benign, linear view of fluctuations looked fine.

Olivier Blanchard

Blanchard’s piece is a confirmation of what I argued in my paper Capturing causality in economics and the limits of statistical inference — since ”modern” macroeconom(etr)ics doesn’t content itself with only making ”optimal” predictions, but also aspires to explain things in terms of causes and effects, macroeconomists and econometricians need loads of assumptions — and one of the more important of these is linearity.

So bear with me while I take the opportunity to elaborate a little more on why I — and Olivier Blanchard — find that assumption to be of such paramount importance, and why it ought to be argued for much more carefully — on both epistemological and ontological grounds — if it is to be used at all.
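To fix ideas first, here is a toy numerical illustration (my own, not Blanchard’s) of what the linearity assumption buys you: in a linear model a shock twice as big has exactly twice the effect, while in a nonlinear model it need not.

```python
# Toy illustration: a linear response scales proportionally with the shock,
# a nonlinear response (here with an assumed amplification threshold) does not.

def linear_response(shock, beta=0.5):
    return beta * shock

def nonlinear_response(shock, beta=0.5, threshold=1.0):
    # assumed amplification once the shock exceeds a threshold
    amplification = 3.0 if shock > threshold else 1.0
    return beta * amplification * shock

for f in (linear_response, nonlinear_response):
    small, big = f(1.0), f(2.0)
    print(f"{f.__name__}: effect of shock 1 = {small:.2f}, "
          f"shock 2 = {big:.2f}, ratio = {big / small:.1f}")
# linear_response:    ratio = 2.0  (twice the shock, twice the effect)
# nonlinear_response: ratio = 6.0  (twice the shock, six times the effect)
```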

Limiting model assumptions in economic science always have to be closely examined, since if we are going to be able to show that the mechanisms or causes that we isolate and handle in our models are stable, in the sense that they do not change when we ”export” them to our “target systems”, we have to be able to show that they do not hold only under ceteris paribus conditions. If they do, they are a fortiori of only limited value for our understanding, explanations or predictions of real economic systems. As the always eminently quotable Keynes wrote (emphasis added) in A Treatise on Probability (1921):

The kind of fundamental assumption about the character of material laws, on which scientists appear commonly to act, seems to me to be [that] the system of the material universe must consist of bodies … such that each of them exercises its own separate, independent, and invariable effect, a change of the total state being compounded of a number of separate changes each of which is solely due to a separate portion of the preceding state … Yet there might well be quite different laws for wholes of different degrees of complexity, and laws of connection between complexes which could not be stated in terms of laws connecting individual parts … If different wholes were subject to different laws qua wholes and not simply on account of and in proportion to the differences of their parts, knowledge of a part could not lead, it would seem, even to presumptive or probable knowledge as to its association with other parts … These considerations do not show us a way by which we can justify induction … (p. 427) No one supposes that a good induction can be arrived at merely by counting cases. The business of strengthening the argument chiefly consists in determining whether the alleged association is stable, when accompanying conditions are varied … (p. 468) In my judgment, the practical usefulness of those modes of inference … on which the boasted knowledge of modern science depends, can only exist … if the universe of phenomena does in fact present those peculiar characteristics of atomism and limited variety which appears more and more clearly as the ultimate result to which material science is tending.

Econometrics may be an informative tool for research. But if its practitioners do not investigate and make an effort to provide a justification for the credibility of the assumptions on which they erect their building, it will not fulfill its tasks. There is a gap between its aspirations and its accomplishments, and without more supportive evidence to substantiate its claims, critics will continue to consider its ultimate argument a mixture of rather unhelpful metaphors and metaphysics. Maintaining that economics is a science in the “true knowledge” business, yours truly remains a skeptic of the pretences and aspirations of econometrics. So far, I cannot really see that it has yielded very much in terms of relevant, interesting economic knowledge.

The marginal return on its ever higher technical sophistication in no way makes up for the lack of serious under-labouring of its deeper philosophical and methodological foundations that Keynes already complained about. The rather one-sided emphasis on usefulness and its concomitant instrumentalist justification cannot hide the fact that neither Haavelmo nor the legions of probabilistic econometricians following in his footsteps give supportive evidence for considering it “fruitful to believe” in the possibility of treating unique economic data as the observable results of random drawings from an imaginary sampling of an imaginary population. After having analyzed some of its ontological and epistemological foundations, I cannot but conclude that econometrics on the whole has not delivered “truth”. And I doubt if it has ever been the intention of its main protagonists.

Our admiration for technical virtuosity should not blind us to the fact that we have to have a cautious attitude towards probabilistic inferences in economic contexts. Science should help us penetrate to “the true process of causation lying behind current events” and disclose “the causal forces behind the apparent facts” [Keynes 1971-89 vol XVII:427]. We should look out for causal relations, but econometrics can never be more than a starting point in that endeavour, since econometric (statistical) explanations are not explanations in terms of mechanisms, powers, capacities or causes. Firmly stuck in an empiricist tradition, econometrics is only concerned with the measurable aspects of reality. But there is always the possibility that there are other variables – of vital importance and, although perhaps unobservable and non-linear, not necessarily epistemologically inaccessible – that were not considered for the model. The variables that were included can hence never be guaranteed to be more than potential causes, not real causes.

A rigorous application of econometric methods in economics really presupposes that the phenomena of our real-world economies are ruled by stable causal relations between variables. A perusal of the leading econom(etr)ic journals shows that most econometricians still concentrate on fixed-parameter models and that parameter values estimated in specific spatio-temporal contexts are presupposed to be exportable to totally different contexts. To warrant this assumption one, however, has to convincingly establish that the targeted acting causes are stable and invariant, so that they maintain their parametric status after the bridging. The endemic lack of predictive success of the econometric project indicates that this hope of finding fixed parameters is a hope for which there really is no other ground than hope itself.

Real-world social systems are not governed by stable causal mechanisms or capacities. As Keynes wrote in his critique of econometrics and inferential statistics as early as the 1920s (emphasis added):

The atomic hypothesis which has worked so splendidly in Physics breaks down in Psychics. We are faced at every turn with the problems of Organic Unity, of Discreteness, of Discontinuity – the whole is not equal to the sum of the parts, comparisons of quantity fail us, small changes produce large effects, the assumptions of a uniform and homogeneous continuum are not satisfied. Thus the results of Mathematical Psychics turn out to be derivative, not fundamental, indexes, not measurements, first approximations at the best; and fallible indexes, dubious approximations at that, with much doubt added as to what, if anything, they are indexes or approximations of.

The kinds of ”laws” and relations that mainstream econ(ometr)ics has established are laws and relations about entities in models that presuppose causal mechanisms to be atomistic and linear (additive). When causal mechanisms operate in real-world social target systems, they only do so in ever-changing and unstable combinations where the whole is more than a mechanical sum of parts. If economic regularities obtain, they do so (as a rule) only because we engineered them for that purpose. Outside man-made “nomological machines” they are rare, or even non-existent. Unfortunately, that also makes most of the achievements of econometrics — like most of the contemporary endeavours of mainstream economic theoretical modelling — rather useless.

Market clearing and rational expectations — ideas that are neat, plausible and wrong

14 oktober, 2014 kl. 20:45 | Publicerat i Economics | Kommentarer inaktiverade för Market clearing and rational expectations — ideas that are neat, plausible and wrong


Unfortunately, in case it needs restating, freshwater economics turned out to be based on two ideas that aren’t true. The first (Fama) is that financial markets are efficient. The second (Lucas/Sargent/Wallace) is that the economy as a whole is a stable and self-correcting mechanism. The rational-expectations theorists didn’t refute Keynesianism: they assumed away the reason for its existence. Their models were based not just on rational expectations but on the additional assertion that markets clear more or less instantaneously. But were that true, there wouldn’t be any such thing as involuntary unemployment, or any need for counter-cyclical monetary policy.

John Cassidy

Nobel economics

13 oktober, 2014 kl. 21:37 | Publicerat i Economics | 3 kommentarer

One story that can be told about today’s announcement is the Royal Swedish Academy of Sciences’ own explanation: that French economist Jean Tirole has been awarded the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel for 2014 because he “has clarified how to understand and regulate industries with a few powerful firms.”

The other story is: Tirole has shown how much the real world of capitalism—industries that are dominated by a few firms that have extensive market power, which can charge prices much higher than costs and block the entry of other firms—differs from the fantasy taught in countless introductory courses in economics: a world of perfectly competitive firms, which have no negative effects on society and which therefore don’t need to be regulated …

Last year, the Academy tried to have it both ways, offering the Prize to both Eugene Fama and Robert Shiller. This year, the message is both clearer and yet unspoken: the neoclassical model of perfect competition and individual incentives bears no relation to the kinds of capitalism that exist anywhere in the world.

David Ruccio

Causal inference and implicit superpopulations (wonkish)

13 oktober, 2014 kl. 10:01 | Publicerat i Theory of Science & Methodology | 1 kommentar

The most expedient population and data generation model to adopt is one in which the population is regarded as a realization of an infinite superpopulation. This setup is the standard perspective in mathematical statistics, in which random variables are assumed to exist with fixed moments for an uncountable and unspecified universe of events …

This perspective is tantamount to assuming a population machine that spawns individuals forever (i.e., the analog to a coin that can be flipped forever). Each individual is born as a set of random draws from the distributions of Y¹, Y°, and additional variables collectively denoted by S …

Because of its expediency, we will usually write with the superpopulation model in the background, even though the notions of infinite superpopulations and sequences of sample sizes approaching infinity are manifestly unrealistic.

In econometrics one often gets the feeling that many of its practitioners think of it as a kind of automatic inferential machine: input data and out comes causal knowledge. This is like pulling a rabbit from a hat. Great — but first you have to put the rabbit in the hat. And this is where assumptions come into the picture.

The assumption of imaginary ”superpopulations” is one of the many dubious assumptions used in modern econometrics.

As social scientists — and economists — we have to confront the all-important question of how to handle uncertainty and randomness. Should we define randomness with probability? If we do, we have to accept that to speak of randomness we also have to presuppose the existence of nomological probability machines, since probabilities cannot be spoken of – and actually, to be strict, do not exist at all – without specifying such system-contexts. Accepting a domain of probability theory and a sample space of infinite populations also implies that judgments are made on the basis of observations that are actually never made!

Infinitely repeated trials or samplings never take place in the real world. So that cannot be a sound inductive basis for a science with aspirations of explaining real-world socio-economic processes, structures or events. It’s not tenable.


In his great book Statistical Models and Causal Inference: A Dialogue with the Social Sciences, David Freedman also touched on this fundamental problem, which arises when you try to apply statistical models outside overly simple nomological machines like coin tossing and roulette wheels:

Lurking behind the typical regression model will be found a host of such assumptions; without them, legitimate inferences cannot be drawn from the model. There are statistical procedures for testing some of these assumptions. However, the tests often lack the power to detect substantial failures. Furthermore, model testing may become circular; breakdowns in assumptions are detected, and the model is redefined to accommodate. In short, hiding the problems can become a major goal of model building.

Using models to make predictions of the future, or the results of interventions, would be a valuable corrective. Testing the model on a variety of data sets – rather than fitting refinements over and over again to the same data set – might be a good second-best … Built into the equation is a model for non-discriminatory behavior: the coefficient d vanishes. If the company discriminates, that part of the model cannot be validated at all.

Regression models are widely used by social scientists to make causal inferences; such models are now almost a routine way of demonstrating counterfactuals. However, the “demonstrations” generally turn out to depend on a series of untested, even unarticulated, technical assumptions. Under the circumstances, reliance on model outputs may be quite unjustified. Making the ideas of validation somewhat more precise is a serious problem in the philosophy of science. That models should correspond to reality is, after all, a useful but not totally straightforward idea – with some history to it. Developing appropriate models is a serious problem in statistics; testing the connection to the phenomena is even more serious …

In our days, serious arguments have been made from data. Beautiful, delicate theorems have been proved, although the connection with data analysis often remains to be established. And an enormous amount of fiction has been produced, masquerading as rigorous science.

And as if this wasn’t enough, one could — as we’ve seen — also seriously wonder what kind of ”populations” these statistical and econometric models ultimately are based on. Why should we as social scientists — and not as pure mathematicians working with formal-axiomatic systems without the urge to confront our models with real target systems — unquestioningly accept models based on concepts like the ”infinite superpopulations” used in e.g. the potential outcome framework that has become so popular lately in social sciences?

Of course one could treat observational or experimental data as random samples from real populations. I have no problem with that. But probabilistic econometrics does not content itself with that kind of population. Instead it creates imaginary populations of ”parallel universes” and assumes that our data are random samples from such ”infinite superpopulations.”

But this is actually nothing but hand-waving! And it is inadequate for real science. As David Freedman writes:

With this approach, the investigator does not explicitly define a population that could in principle be studied, with unlimited resources of time and money. The investigator merely assumes that such a population exists in some ill-defined sense. And there is a further assumption, that the data set being analyzed can be treated as if it were based on a random sample from the assumed population. These are convenient fictions … Nevertheless, reliance on imaginary populations is widespread. Indeed regression models are commonly used to analyze convenience samples … The rhetoric of imaginary populations is seductive because it seems to free the investigator from the necessity of understanding how data were generated.

In social sciences — including economics — it’s always wise to ponder C. S. Peirce’s remark that universes are not as common as peanuts …
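To make the contrast a little more concrete, here is a minimal sketch (my own toy example) of one place where the choice of fiction shows up in practice: the standard error of a simple sample mean differs depending on whether the data are treated as a sample from a real, finite population or as i.i.d. draws from an imagined infinite superpopulation.

```python
import numpy as np

# Toy example: the standard error of a sample mean when the data are a random
# sample from a real, finite population versus when they are treated as i.i.d.
# draws from an infinite superpopulation. The finite-population correction is
# exactly the piece the superpopulation fiction throws away.

rng = np.random.default_rng(42)
population = rng.lognormal(mean=10, sigma=0.5, size=5000)   # a real, finite population
n = 1000
sample = rng.choice(population, size=n, replace=False)

s = sample.std(ddof=1)
se_superpop = s / np.sqrt(n)                                # i.i.d. / superpopulation formula
fpc = np.sqrt((len(population) - n) / (len(population) - 1))
se_finite = se_superpop * fpc                               # finite-population correction applied

print(f"SE assuming an infinite superpopulation: {se_superpop:.1f}")
print(f"SE for the actual finite population:     {se_finite:.1f}")   # about 10 % smaller here
```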

Inequality and the culprit economists overlook — their own wage theory

12 oktober, 2014 kl. 20:20 | Publicerat i Economics | 2 kommentarer

Though many economists today are sounding the alarm over rising income inequality, one culprit somehow has been overlooked: their own wage theory.

Wage theory — one of the sacred truths of modern economics — suggests that competitive labor markets are self-regulating. Each worker is paid his or her productive worth. Unions, minimum wages, or any other interference — all just cause unemployment. Nearly all contemporary public policy is dictated by some version of this theory, but it simply no longer holds up.

Adam Smith, often called the father of classical economics, told a very different story. Smith believed that each society sets a living wage to cover “whatever the custom of the country renders it indecent for creditable people, even of the lowest order, to be without.” His successor David Ricardo similarly saw the “habits and customs of the people” as determining how to divide income between profits and wages. Marx’s class struggle was just a more confrontational version of the idea.

Around the turn of the 20th century, economists grew dissatisfied with this squishy sociologist’s answer, and some found it morally problematic. “The indictment that hangs over society is that of ‘exploiting labor,’” conceded John Bates Clark, a founder of the American Economic Association. He set out to disprove it.

Clark and other colleagues posited that firms shop for the best deal among “factors of production” — labor and capital — just as smart consumers shop for the best deal at the supermarket. Automakers, for example, could build cars by employing more workers and less machinery, or vice versa. By seeking the least expensive combination, the firms will pay only wages equal to a worker’s “marginal productivity” — the gain in output added when he or she was hired. …

With an entire organization cooperating to produce goods or services, and no individual contributing any ascertainable productivity, we are back to Smith, Ricardo, and Marx. The habits and customs of the people or class struggle, call it what you will, determine the wage structure. Of course, there are limits. The sum of the slices of the pie — the profits and wages paid to different workers — cannot be bigger than the pie. But how to slice the pie is a fundamentally social decision.

Jonathan Schlefer

A nice piece by the author of The Assumptions Economists Make.

Here is my own latest take on marginal productivity theory.
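For readers who want the theory under attack spelled out, here is a minimal sketch of the textbook marginal-productivity story, with a Cobb-Douglas production function chosen purely for illustration:

```python
# A sketch of the textbook claim Schlefer criticizes: with a Cobb-Douglas
# production function (an illustrative choice, not anything from his piece),
# a competitive firm is supposed to hire labour until the real wage equals
# the marginal product of labour.

def output(L, K=100.0, A=1.0, alpha=0.3):
    """Cobb-Douglas production function: Y = A * K**alpha * L**(1 - alpha)."""
    return A * K**alpha * L**(1 - alpha)

def marginal_product_of_labour(L, K=100.0, A=1.0, alpha=0.3, dL=1e-6):
    """Numerical derivative of output with respect to labour."""
    return (output(L + dL, K, A, alpha) - output(L, K, A, alpha)) / dL

L = 50.0
print(f"MPL at L = {L}: {marginal_product_of_labour(L):.3f}")
# In the theory, this number *is* the real wage. Schlefer's point is that no
# individual worker's 'ascertainable productivity' exists to anchor it.
```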

Branko Milanovic on the economics of inequality

12 oktober, 2014 kl. 11:14 | Publicerat i Economics | Kommentarer inaktiverade för Branko Milanovic on the economics of inequality

 

[h/t Jesper Roine]

On the importance of the time scale of growth and inequality

11 oktober, 2014 kl. 19:55 | Publicerat i Economics | Kommentarer inaktiverade för On the importance of the time scale of growth and inequality

The time scale of growth is clearly an important factor. Most studies look at the level of growth instead of the duration of growth. To better understand the time dimension of these trends, International Monetary Fund economists Andrew G. Berg and Jonathan D. Ostry looked at periods of growth instead of fixed durations. They find that “countries with more equal income distributions tend to have significantly longer growth spells.” Inequality outweighed other factors in explaining such sustainable growth across 174 countries. Indeed, inequality was a stronger determinant of the quality of economic growth than many other commonly studied factors that were also included in Berg and Ostry’s model, such as external demand and price shocks, the initial income of the country, the institutional make-up of the country, its openness to trade, and its macroeconomic stability. Focusing on the question of stability also underscores the key point that inequality may indirectly affect economic growth in profoundly important ways.

In a 2014 extension of this work, Ostry, Berg, and their IMF colleague Charalambos Tsangarides … find that economic growth is lower and periods of growth are shorter in countries that have high inequality … This most recent work provides strong evidence that higher levels of income inequality are detrimental to long-term economic growth and that the policies some nations have taken to redress inequality not only do not adversely impact growth but, instead, spur faster growth. Notably, this finding applies to both developed and developing countries.

Heather Boushey & Carter Price

Piketty and the non-existence of economic science

10 oktober, 2014 kl. 08:54 | Publicerat i Economics | 1 kommentar

 

[h/t Jan Milch]

Brad DeLong on NAIRU hubris

9 oktober, 2014 kl. 23:00 | Publicerat i Economics | 1 kommentar

Back before 2008, we neoclassical new Keynesian-new monetarist types were highly confident that the U.S. macroeconomy as then constituted had very powerful stabilizing forces built into it: if the unemployment rate rose above the so-called natural rate of unemployment, the NAIRU, it would within a very few years return to normal.

This is why we were confident:

1948-2014: Share of deviation of unemployment from trend erased after …

NAIRU trend    1 yr     2 yrs    3 yrs
Cubic          33.7%    67.4%    88.3%
Quadratic      32.4%    63.6%    84.1%
Linear         31.8%    61.5%    80.5%
No trend       27.8%    52.5%    69.2%

We were wrong.

Brad DeLong
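For the curious, here is a sketch of one way that ‘share of deviation erased after k years’ numbers like those in the table could be computed from an unemployment series. It is a guess at the general procedure, not DeLong’s actual calculation:

```python
import numpy as np

# Sketch of one way a 'share of deviation erased after k years' figure could be
# computed (a guess at the procedure, not necessarily DeLong's): fit a trend to
# the unemployment rate, take deviations from it, and ask how much of a given
# deviation is gone, on average, k years later.

def share_erased(u, years_ahead, trend_degree=1):
    """u: annual unemployment rates; trend_degree 0 = constant, 1 = linear, ..."""
    t = np.arange(len(u))
    trend = np.polyval(np.polyfit(t, u, trend_degree), t)
    dev = u - trend
    k = years_ahead
    x, y = dev[:-k], dev[k:]
    slope = np.dot(x, y) / np.dot(x, x)   # share of a deviation still *remaining*
    return 1.0 - slope                    # so the share *erased* is one minus that

# usage with made-up data: a linear trend plus AR(1) deviations
rng = np.random.default_rng(0)
dev = np.zeros(60)
for i in range(1, 60):
    dev[i] = 0.6 * dev[i - 1] + rng.normal(0, 0.3)
u = 5.5 + 0.01 * np.arange(60) + dev
print(f"share erased after 2 years: {share_erased(u, 2):.1%}")
```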

Why Paul Romer should be awarded the ‘Nobel Prize in economics’

9 oktober, 2014 kl. 21:12 | Publicerat i Economics | 4 kommentarer

[Figure: world GDP per capita, 1500-2003]

People in the Western world are today ten to twenty times richer than they were a century and a half ago. Their life expectancy is almost twice that of their forebears. What has created this increase in living standards?

”A truly good explanation is practically seamless,” wrote Adam Smith in 1776 in the book that inaugurated the era of scientific economics – the Wealth of Nations. Is there such an explanation for what is perhaps the most important problem field of the social sciences and of humanity – economic growth? As soon as one starts thinking about these questions it is, as Robert Lucas has observed, ”hard to think about anything else”.

For the classical economists – from Adam Smith to John Stuart Mill – the search for the source of the wealth of nations and of long-run economic growth was the central issue that defined their science. Besides the distribution of income, technological change always played a decisive role in their analyses. Technological change made possible a greater division of labour and thereby increased specialization and an expansion of markets, which laid the ground for a rapid expansion of trade and industry. The absence of, or limited scope for, such changes in agriculture – where diminishing returns to scale were thought to prevail – together with the population pressure emphasized by Robert Malthus, nevertheless led to unavoidable stagnation and decline.

When neoclassical economics emerged at the end of the nineteenth century, the questions of distribution and long-run growth disappeared from view. Instead, research efforts were concentrated on characterizing short-run, static equilibria under assumptions of perfect information and competition. Technological change was, together with natural resources and preferences, treated as exogenously given and outside the proper framework of economic analysis. Short-run, static analysis dominated.

As a consequence of this development within economic science, technological change was long a neglected subject. During the interwar period economists were – unsurprisingly – more interested in developing theories of stagnation than theories of growth. But in the period after the Second World War the growth problem attracted renewed interest. Competition with the so-called socialist economies and increased state intervention led to a greater interest in the question of what made balanced growth possible in the long run.

The newly won independence of many developing countries also contributed to the interest in growth. How could high and sustained economic growth be achieved in a country with low incomes and underdeveloped infrastructure? Theories of stagnation and of the business cycle could not provide any answers. New theories were needed.

In the 1950s and 1960s the question of the role of technology in economic growth was approached anew. Some – with Robert Solow at the forefront – wanted above all to measure the impact of technological change on growth via an aggregate production function, but found that the increase in output was not so much the result of a larger quantity of capital as of the residual factor – technological change. Others were more interested in comparing developments across countries. This empirically oriented growth accounting mainly tried to describe growth and, if possible, find some general patterns in it. The interest was often tied to explanations of the process of industrialization and the transition from agricultural production to industry-led growth. No convincing explanations of the role of the technology factor were, however, produced in the growth models that were constructed or in the empirical comparative studies that were carried out.

Researchers with a more economic-historical orientation tried to study economic growth from a more purely historical perspective. They emphasized that technological change is to a large extent path-dependent and that one has to know its history in order to say anything about its future. Technologies do not fall like manna from heaven. They develop out of a given historical context and are by nature evolutionary and cumulative – they build on and presuppose one another. Technology is largely endogenous to an economic system and therefore also subject to its constant influence. From this it also follows that history ought to be the proper object of economic analysis.

Over time it is evident that growth theory has been made more realistic simply by incorporating ever more factors. Technological progress is now held up as the engine of growth, rather than purely quantitative increases in investment. And technological progress is in turn explained by investments in research and development.

In the traditional neoclassical growth models, capital is assumed to be paid in accordance with its marginal productivity and not to give rise to external effects. But in the American economist Paul Romer’s celebrated and revolutionary 1990 article ”Endogenous Technological Change”, knowledge is made the most important driving force of growth. In Romer’s model the increasing returns of knowledge production outweigh the tendency towards diminishing returns of the other factors of production. Technological knowledge has in part the character of a public good, from whose consumption one cannot be fully excluded even if one does not pay for it. The creation of new knowledge in one firm is assumed to have positive external effects on the production possibilities of other firms, since knowledge cannot be completely kept secret or protected by patent. In Romer’s model the spillover effects from private research and development work play an especially central role.
Read more …

Bootstrapping made easy (wonkish)

9 oktober, 2014 kl. 11:03 | Publicerat i Statistics & Econometrics | Kommentarer inaktiverade för Bootstrapping made easy (wonkish)

 

In Gretl it’s extremely simple to do this kind of bootstrapping. Run the regression and you get an output window with the regression results. Click on Analysis at the top of the window, then on Bootstrap, and select the options Confidence interval and Resample residuals. After having selected the coefficient for which you want bootstrapped estimates, you just click OK and a window will appear showing the 95% confidence interval for the coefficient. It’s as simple as that!
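For those who want to see roughly what the Resample residuals option amounts to, here is a sketch of the standard residual bootstrap spelled out by hand. It is an illustration of the method, not Gretl’s actual code:

```python
import numpy as np

# Roughly what a residual bootstrap does: refit the regression on artificial
# samples built from the fitted values plus resampled residuals, and take
# percentiles of the bootstrapped coefficient as the confidence interval.

def residual_bootstrap_ci(x, y, coef_index, n_boot=1999, level=0.95, seed=1):
    X = np.column_stack([np.ones(len(y)), x])        # add an intercept column
    beta = np.linalg.lstsq(X, y, rcond=None)[0]      # OLS on the original data
    resid = y - X @ beta
    rng = np.random.default_rng(seed)
    boot = []
    for _ in range(n_boot):
        y_star = X @ beta + rng.choice(resid, size=len(y), replace=True)
        boot.append(np.linalg.lstsq(X, y_star, rcond=None)[0][coef_index])
    lo, hi = np.percentile(boot, [(1 - level) / 2 * 100, (1 + level) / 2 * 100])
    return beta[coef_index], (lo, hi)

# usage with made-up data
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 1.0 + 2.0 * x + rng.standard_t(df=3, size=200)
print(residual_bootstrap_ci(x, y, coef_index=1))     # slope estimate and its 95% CI
```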

The riddle of induction

8 oktober, 2014 kl. 15:20 | Publicerat i Theory of Science & Methodology | 2 kommentarer

Recall [Russell’s famous] turkey problem. You look at the past and derive some rule about the future. Well, the problems in projecting from the past can be even worse than what we have already learned, because the same past data can confirm a theory and also its exact opposite …

For the technical version of this idea, consider a series of dots on a page representing a number through time … Let’s say your high school teacher asks you to extend the series of dots. With a linear model, that is, using a ruler, you can run only a single straight line from the past to the future. The linear model is unique. There is one and only one straight line that can project a series of points …

This is what philosopher Nelson Goodman called the riddle of induction: we project a straight line only because we have a linear model in our head — the fact that a number has risen for 1 000 days straight should make you more confident that it will rise in the future. But if you have a nonlinear model in your head, it might confirm that the number should decline on day 1 001 …

The severity of Goodman’s riddle of induction is as follows: if there is no longer even a single unique way to ‘generalize’ from what you see, to make an inference about the unknown, then how should you operate? The answer, clearly, will be that you should employ ‘common sense’.

Nassim Taleb

And economists standardly — and without even the slightest justification — assume linearity in their models …
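Goodman’s point is easy to make concrete. In this toy illustration (my own, not Taleb’s) two models agree perfectly on 1 000 observed days and still disagree about day 1 001:

```python
import numpy as np

# Two models that fit the same 1 000 rising observations exactly, yet make
# different predictions for day 1 001.

days = np.arange(1, 1001)

def linear_model(t):
    return t * 1.0                      # keeps rising forever

def kinked_model(t):
    # identical to the linear model in-sample, but turns down after day 1 000
    return np.where(t <= 1000, t * 1.0, 2000.0 - t)

print("max in-sample disagreement:",
      np.max(np.abs(linear_model(days) - kinked_model(days))))      # 0.0
print("prediction for day 1001:",
      linear_model(1001), "vs", float(kinked_model(1001)))          # 1001.0 vs 999.0
```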

Swedes high on housing

8 oktober, 2014 kl. 09:30 | Publicerat i Economics | 4 kommentarer

[Figure: inflation-adjusted property price index]

Despite the financial crisis and the worries over the euro, Swedes seem to keep borrowing more and more to buy homes – further adding to a real debt volume that has grown by more than 60 % over the last fifteen years alone.

Every level-headed observer realizes that this is a problem that has to be solved before the housing bubble bursts. Otherwise there is a high risk that the laissez-faire policy will come back to haunt us – and then it is the unemployed, the homeless and the indebted who will take the blows.

Household indebtedness is primarily rooted in the rise in asset values driven by increased lending to households and the housing bubble it has given rise to. In the long run it is obviously not possible to sustain this trend. Asset prices fundamentally reflect expectations of future returns on investment. If asset prices keep rising faster than incomes, the result will be higher inflation and a corresponding downward adjustment of the real value of the assets.

[Figure: household debt, 2014]
Source: SCB (Statistics Sweden) and own calculations

With the debt ratio households have now taken on, we risk a debt-deflation crisis that will hit Swedish households extremely hard.

It is deeply worrying that Swedish households are prepared to take on loans as large as they do today. It is high time that the almost exponential growth of indebtedness is reined in. Otherwise the dream of owning one’s home may very well turn out to be a nightmare.

In 1638 the price of a tulip bulb in the Netherlands could be so high that it corresponded to two years’ wages. And households were convinced that prices would just keep rising and rising. Like all other bubbles, however, this bubble – the ”tulip mania” – also burst, leaving crowds of destitute and ruined people behind. Similar things played out in, for example, the Mississippi bubble of 1720 and the IT bubble ten years ago. How hard can it be to learn from history?

As Cornucopia shows in the chart below, real prices of tenant-owned flats have risen by almost 900 % over the last 30 years! If this is not a bubble, I don’t know what could be. But perhaps it is best to ask L E O Svensson first …
 
[Figure: real price index for houses and tenant-owned flats, 1952-2013]
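A quick back-of-the-envelope check (my own arithmetic) of what a rise like that implies per year:

```python
# Back-of-the-envelope: a 900 % real price rise over 30 years means prices
# ended at 10 times their starting level, so the implied annual real growth is
annual_real_growth = 10 ** (1 / 30) - 1
print(f"{annual_real_growth:.1%} per year")   # roughly 8 % a year in real terms
```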

It is frightening, to say the least, that the housing bubble just keeps on growing. When it finally bursts – which it will – the crisis will be all the worse.

Real-world economics review special issue on Piketty’s Capital

8 oktober, 2014 kl. 07:48 | Publicerat i Economics | Kommentarer inaktiverade för Real-world economics review special issue on Piketty’s Capital

Real-world economics review special issue (no. 69) on Piketty’s Capital

The Piketty phenomenon and the future of inequality
Robert Wade

Egalitarianism’s latest foe
Yanis Varoufakis

Piketty and the limits of marginal productivity theory
Lars Syll

Piketty’s determinism?
Ann Pettifor and Geoff Tily

Piketty’s global tax on capital
Heikki Patomäki

Reading Piketty in Athens
Richard Parker

Pondering Mexican hurdles while reading Capital in the XXI Century
Alicia Puyana Mutis

Piketty’s inequality and local versus global Lewis turning points
Richard Koo

The growth of capital
Merijn Knibbe

Piketty vs. the classical economic reformers
Michael Hudson

Is Capital in the Twenty-first century Das Kapital for the twenty-first century?
Claude Hillinger

Piketty and the resurgence of patrimonial capitalism
Jayati Ghosh

Unpacking the first fundamental law
James K. Galbraith

Capital and capital: The second most fundamental confusion
Edward Fullbrook

Piketty’s policy proposals: How to effectively redistribute income
David Colander

Piketty: Inequality, poverty and managerial capitalism
Victor A. Beker

Capital in the Twenty-First Century: Are we doomed without a wealth tax?
Dean Baker

Who really pays for the recovery

7 oktober, 2014 kl. 07:40 | Publicerat i Economics, Politics & Society | 1 kommentar

 

For more on the topic, check out this Vox material.
