Keynes on the ‘devastating inconsistencies’ of econometrics

30 November, 2016 at 11:05 | Posted in Statistics & Econometrics | 7 Comments

In practice Prof. Tinbergen seems to be entirely indifferent whether or not his basic factors are independent of one another … But my mind goes back to the days when Mr. Yule sprang a mine under the contraptions of optimistic statisticians by his discovery of spurious correlation. In plain terms, it is evident that if what is really the same factor is appearing in several places under various disguises, a free choice of regression coefficients can lead to strange results. It becomes like those puzzles for children where you write down your age, multiply, add this and that, subtract something else, and eventually end up with the number of the Beast in Revelation.

Prof. Tinbergen explains that, generally speaking, he assumes that the correlations under investigation are linear … I have not discovered any example of curvilinear correlation in this book, and he does not tell us what kind of evidence would lead him to introduce it. If, as he suggests above, he were in such cases to use the method of changing his linear coefficients from time to time, it would certainly seem that quite easy manipulation on these lines would make it possible to fit any explanation to any facts. Am I right in thinking that the uniqueness of his results depends on his knowing beforehand that the correlation curve must be a particular kind of function, whether linear or some other kind?

Apart from this, one would have liked to be told emphatically what is involved in the assumption of linearity. It means that the quantitative effect of any causal factor on the phenomenon under investigation is directly proportional to the factor’s own magnitude … But it is a very drastic and usually improbable postulate to suppose that all economic forces are of this character, producing independent changes in the phenomenon under investigation which are directly proportional to the changes in themselves; indeed, it is ridiculous. Yet this is what Prof. Tinbergen is throughout assuming …

J M Keynes

Keynes’ comprehensive critique of econometrics and the assumptions it is built around — completeness, measurability, independence, homogeneity, and linearity — is still valid today.

Most work in econometrics is done on the assumption that the researcher has a theoretical model that is ‘true.’ But to think that we are able to construct a model in which all relevant variables are included and the functional relationships between them are correctly specified is not only a belief without support, it is a belief impossible to support.

The theories we work with when building our econometric regression models are insufficient. No matter what we study, there are always some variables missing, and we don’t know the correct way to functionally specify the relationships between the variables.

Every econometric model constructed is misspecified. There is always an endless list of possible variables to include, and endless possible ways to specify the relationships between them. So every applied econometrician comes up with his or her own specification and ‘parameter’ estimates. The econometric Holy Grail of consistent and stable parameter values is nothing but a dream.
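A small simulation can make the point concrete. The sketch below is entirely my own illustration: the data-generating process, the variable names (x, z, w) and all coefficients are made-up assumptions, chosen only to show how the estimated ‘parameter’ on x shifts with the chosen specification when a relevant variable (z) is unobserved.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5_000

# Hypothetical data-generating process (pure illustration):
# z is a confounder that drives both x and y but is never observed in practice.
z = rng.normal(size=n)
x = 0.8 * z + rng.normal(size=n)
w = rng.normal(size=n)                      # an irrelevant 'control' variable
y = 1.0 * x + 2.0 * z + rng.normal(size=n)  # the true effect of x on y is 1.0

def ols_slopes(y, regressors):
    """OLS with an intercept; returns the slope coefficients."""
    X = np.column_stack([np.ones(len(y))] + regressors)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

print("x only:  ", ols_slopes(y, [x]))      # omitted-variable bias: roughly 2.0, not 1.0
print("x and w: ", ols_slopes(y, [x, w]))   # adding an irrelevant control changes nothing
print("x and z: ", ols_slopes(y, [x, z]))   # only the infeasible regression recovers ~1.0
```

Since z is never observed, each feasible specification delivers its own, equally confident-looking ‘parameter’ estimate for x, which is exactly the instability described above.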

A rigorous application of econometric methods in economics really presupposes that the phenomena of our real-world economies are ruled by stable causal relations between variables. Parameter values estimated in specific spatio-temporal contexts are presupposed to be exportable to totally different contexts. To warrant this assumption one would, however, have to convincingly establish that the targeted acting causes are stable and invariant, so that they maintain their parametric status after the bridging. The endemic lack of predictive success of the econometric project indicates that this hope of finding fixed parameters is a hope for which there really is no other ground than hope itself.

The theoretical conditions that have to be fulfilled for econometrics to really work are nowhere even close to being met in reality. Making outlandish statistical assumptions does not provide a solid ground for doing relevant social science and economics. Although econometric methods have become the most widely used quantitative tools in economics today, the inferences made from them are as a rule invalid.

Econometrics is basically a deductive method. Given its assumptions, it delivers deductive inferences. The problem, of course, is that we will never completely know when the assumptions are right. Conclusions can only be as certain as their premises — and that also applies to econometrics.


Three suggestions to ‘save’ econometrics

29 November, 2016 at 11:33 | Posted in Economics, Statistics & Econometrics | 6 Comments

Reading an applied econometrics paper could leave you with the impression that the economist (or any social science researcher) first formulated a theory, then built an empirical test based on the theory, then tested the theory. But in my experience what generally happens is more like the opposite: with some loose ideas in mind, the econometrician runs a lot of different regressions until they get something that looks plausible, then tries to fit it into a theory (existing or new) … Statistical theory itself tells us that if you do this for long enough, you will eventually find something plausible by pure chance!

This is bad news because as tempting as that final, pristine looking causal effect is, readers have no way of knowing how it was arrived at. There are several ways I’ve seen to guard against this:

(1) Use a multitude of empirical specifications to test the robustness of the causal links, and pick the one with the best predictive power …

(2) Have researchers submit their paper for peer review before they carry out the empirical work, detailing the theory they want to test, why it matters and how they’re going to do it. Reasons for inevitable deviations from the research plan should be explained clearly in an appendix by the authors and (re-)approved by referees.

(3) Insist that the paper be replicated. Firstly, by having the authors submit their data and code and seeing if referees can replicate it (think this is a low bar? Most empirical research in ‘top’ economics journals can’t even manage it). Secondly — in the truer sense of replication — wait until someone else, with another dataset or method, gets the same findings in at least a qualitative sense. The latter might be too much to ask of researchers for each paper, but it is a good thing to have in mind as a reader before you are convinced by a finding.

All three of these should, in my opinion, be a prerequisite for research that uses econometrics …

Naturally, this would result in a lot more null findings and probably a lot less research. Perhaps it would also result in fewer attempts at papers which attempt to tell the entire story: that is, which go all the way from building a new model to finding (surprise!) that even the most rigorous empirical methods support it.

Unlearning Economics

Good suggestions, but unfortunately there are many more deep problems with econometrics that have to be ‘solved.’

In econometrics one often gets the feeling that many of its practitioners think of it as a kind of automatic inferential machine: input data and out comes causal knowledge. This is like pulling a rabbit from a hat. Great — but first you have to put the rabbit in the hat. And this is where assumptions come into the picture. The assumption of imaginary ‘superpopulations’ is one of the many dubious assumptions used in modern econometrics.

Misapplication of inferential statistics to non-inferential situations is a non-starter for doing proper science. And when choosing which models to use in our analyses, we cannot get around the fact that the evaluation of our hypotheses, explanations, and predictions cannot be made without reference to a specific statistical model or framework. The probabilistic-statistical inferences we make from our samples depend decisively on what population we choose to refer to. The reference class problem shows that there are usually many such populations to choose from, and that the one we choose decides which probabilities we come up with and, a fortiori, which predictions we make. Not consciously contemplating the relativity effects that this choice of ‘nomological-statistical machine’ has is probably one of the reasons econometricians have a false sense of the amount of uncertainty that really afflicts their models.
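A toy computation (with entirely made-up frequencies) illustrates the point: the ‘probability’ we attach to one and the same individual changes as soon as we condition on a different reference class.

```python
# Purely synthetic counts, for illustration only: each reference class
# yields a different estimated 'probability' for the very same person.
reference_classes = {
    "all adults":                   {"events": 500, "population": 100_000},
    "adults aged 60+":              {"events": 300, "population": 20_000},
    "adults aged 60+, non-smokers": {"events": 120, "population": 15_000},
}

for name, counts in reference_classes.items():
    p = counts["events"] / counts["population"]
    print(f"{name:30s} -> estimated probability {p:.2%}")
```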

As economists and econometricians we have to confront the all-important question of how to handle uncertainty and randomness. Should we define randomness with probability? If we do, we have to accept that to speak of randomness we also have to presuppose the existence of nomological probability machines, since probabilities cannot be spoken of – and actually, to be strict, do not at all exist – without specifying such system-contexts. Accepting Haavelmo’s domain of probability theory and sample space of infinite populations – just as Fisher’s ‘hypothetical infinite population,’ von Mises’s ‘collective’ or Gibbs’s ‘ensemble’ – also implies that judgments are made on the basis of observations that are actually never made! Infinitely repeated trials or samplings never take place in the real world. So that cannot be a sound inductive basis for a science with aspirations of explaining real-world socio-economic processes, structures or events. It’s not tenable.

Economists — and econometricians — have (uncritically and often without argument) come to simply assume that one can apply probability distributions from statistical theory to their own area of research. However, fundamental problems arise when you try to apply statistical models outside overly simple nomological machines like coin tossing and roulette wheels.

Of course one could arguably treat our observational or experimental data as random samples from real populations. But probabilistic econometrics does not content itself with such populations. Instead it creates imaginary populations of ‘parallel universes’ and assumes that our data are random samples from them. This is actually nothing but hand-waving! Doing econometrics, it’s always wise to remember C. S. Peirce’s remark that universes are not as common as peanuts …

‘Teaching-to-the-test’ — the wrong way forward for Swedish schools

28 November, 2016 at 19:13 | Posted in Education & School | Comments Off on ‘Teaching-to-the-test’ — the wrong way forward for Swedish schools

It is worth noting that the decline in knowledge seen in the international surveys is not reflected at all in the results on the national tests (I am aware that the axes in the figure below are not optimally scaled). One interpretation of this is that teaching today is so focused on the tests that pupils, despite a falling underlying level of knowledge, still manage fairly well on them. When the pupils are confronted with new types of tasks, however, they fall short. This would suggest that the design of the tests makes them easy to see through and to learn to perform well on, without the pupils having acquired deeper subject knowledge and understanding.

[Figure: results on the national tests, referred to in the text above]

These figures show what ought to be common knowledge in education research: the evaluation system affects what goes on in schools to a much higher degree than steering documents and general objectives do … Pupils tend to get better at precisely the type of test that is used as the evaluation instrument, but not necessarily at other types of tests. Even if a larger element of ‘teaching-to-the-test’ is not bad by definition, the Swedish experience does not suggest that it is an obvious route to better results.

It is certainly possible that the development would have been even worse without the increased importance of the national tests, but at the same time there is an obvious possibility that the opposite is true.

Jonas Vlachos

The elite illusion

28 November, 2016 at 15:25 | Posted in Statistics & Econometrics | Comments Off on The elite illusion

The results reported here suggest that an exam school education produces only scattered gains for applicants, even among students with baseline scores close to or above the mean in the target school. Because the exam school experience is associated with sharp increases in peer achievement, these results weigh against the importance of peer effects in the education production function …

Of course, test scores and peer effects are only part of the exam school story. It may be that preparation for exam school entrance is itself worth-while … The many clubs and activities found at some exam schools may expose students to ideas and concepts not easily captured by achievement tests or our post-secondary outcomes. It is also possible that exam school graduates earn higher wages, a question we plan to explore in future work. Still, the estimates reported here suggest that any labor market gains are likely to come through channels other than peer composition and increased cognitive achievement …

Our results are also relevant to the economic debate around school quality and school choice … As with the jump in house prices at school district boundaries, heavy rates of exam school oversubscription suggest that parents believe peer composition matters a great deal for their children’s welfare. The fact that we find little support for causal peer effects suggests that parents either mistakenly equate attractive peers with high value added, or that they value exam schools for reasons other than their impact on learning. Both of these scenarios reduce the likelihood that school choice in and of itself has strong salutary demand-side effects in education production.

A. Abdulkadiroglu, J. D. Angrist, and P. A. Pathak

Results based on one of the latest fads in econometrics — regression discontinuity design. If unfamiliar with the ‘technique,’ here’s a video giving some of the basics:
 
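For readers who would rather see the basic idea in code than on video, here is a minimal simulated sketch of a sharp regression discontinuity design. It is purely my own toy illustration: the running variable, cutoff, bandwidth and effect size are invented and have nothing to do with the exam-school data used by Abdulkadiroglu, Angrist and Pathak.

```python
import numpy as np

rng = np.random.default_rng(0)
n, cutoff, true_jump = 10_000, 0.0, 2.0

score = rng.uniform(-10, 10, n)              # running variable (e.g. an entrance exam score)
treated = score >= cutoff                    # sharp assignment: everyone above the cutoff is treated
outcome = 0.5 * score + true_jump * treated + rng.normal(scale=3.0, size=n)

def linear_fit(x, y):
    """Least-squares fit of y = a + b*x; returns (a, b)."""
    X = np.column_stack([np.ones_like(x), x])
    (a, b), *_ = np.linalg.lstsq(X, y, rcond=None)
    return a, b

bandwidth = 2.0                              # use only observations close to the cutoff
left = (score >= cutoff - bandwidth) & (score < cutoff)
right = (score >= cutoff) & (score < cutoff + bandwidth)

a_l, b_l = linear_fit(score[left], outcome[left])
a_r, b_r = linear_fit(score[right], outcome[right])

# The RD estimate is the gap between the two fitted lines evaluated at the cutoff.
rd_estimate = (a_r + b_r * cutoff) - (a_l + b_l * cutoff)
print(f"estimated jump at the cutoff: {rd_estimate:.2f} (true value {true_jump})")
```

The identifying idea is that units just below and just above the cutoff are assumed comparable, so the jump in outcomes at the cutoff is read as the causal effect of treatment.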

Serenity (personal)

28 November, 2016 at 13:44 | Posted in Varia | Comments Off on Serenity (personal)


[h/t Eric Schüldt]

The Economist — Economics prone to fads and methodological crazes

27 November, 2016 at 18:49 | Posted in Economics | 2 Comments

When a hot new tool arrives on the scene, it should extend the frontiers of economics and pull previously unanswerable questions within reach. What might seem faddish could in fact be economists piling in to help shed light on the discipline’s darkest corners. Some economists, however, argue that new methods also bring new dangers; rather than pushing economics forward, crazes can lead it astray, especially in their infancy …

A paper by Angus Deaton, a Nobel laureate and expert data digger, and Nancy Cartwright, an economist (sic!) at Durham University, argues that randomised control trials, a current darling of the discipline, enjoy misplaced enthusiasm. RCTs involve randomly assigning a policy to some people and not to others, so that researchers can be sure that differences are caused by the policy. Analysis is a simple comparison of averages between the two. Mr Deaton and Ms Cartwright have a statistical gripe; they complain that researchers are not careful enough when calculating whether two results are significantly different from one another. As a consequence, they suspect that a sizeable portion of published results in development and health economics using RCTs are “unreliable”.

With time, economists should learn when to use their shiny new tools. But there is a deeper concern: that fashions and fads are distorting economics, by nudging the profession towards asking particular questions, and hiding bigger ones from view. Mr Deaton’s and Ms Cartwright’s fear is that RCTs yield results while appearing to sidestep theory, and that “without knowing why things happen and why people do things, we run the risk of worthless causal (‘fairy story’) theorising, and we have given up on one of the central tasks of economics.” Another fundamental worry is that by offering alluringly simple ways of evaluating certain policies, economists lose sight of policy questions that are not easily testable using RCTs, such as the effects of institutions, monetary policy or social norms.

The Economist

For my own take on the RCT fad — see here, here and here.

Still no. 1!

27 November, 2016 at 11:58 | Posted in Varia | 1 Comment

 

On economic knowledge — Fronesis no. 54–55

26 November, 2016 at 15:37 | Posted in Economics | Comments Off on On economic knowledge — Fronesis no. 54–55

Since the global financial crisis of 2008, economics has come into the spotlight. Student movements and heterodox economists have criticized the dominant economic paradigm and demanded greater pluralism. Recent political developments have exposed the shortcomings of neoliberalism and made the question of its connection to economic science topical. In Fronesis no. 54–55 we take a closer look at the conditions of economic knowledge.

The left has long directed its social-theoretical and political interest towards cultural and symbolic aspects of power and domination, but has more or less abandoned the economic field. With Fronesis no. 54–55 we want to move beyond a simplistic critique of economics and deepen the understanding of the conditions of economic knowledge. The issue introduces to a Swedish audience a number of key contemporary theorists who shed light on these questions from different perspectives.

Contents:

Kajsa Borgnäs and Anders Hylmö: Economic knowledge in transformation (download as PDF)
Anders Hylmö: Modern economics as a scientific style and discipline
Dimitris Milonakis: Lessons from the crisis
Marion Fourcade, Étienne Ollion and Yann Algan: The superiority of economists
Kajsa Borgnäs: Outside the box, or What is heterodox economics?
Lars Pålsson Syll: Tony Lawson and the critique of economic science – an introduction
Tony Lawson: The nature of heterodox economics
Josef Taalbi: Realistic economic theory?
Erik Bengtsson: The material conditions of heterodox economics
Julie A. Nelson: Gender metaphors and economics
Linda Nyberg: Neoliberalism, politics and economics
Philip Mirowski: The political movement that dared not speak its own name
Jason Read: A genealogy of homo oeconomicus
Kajsa Borgnäs: The political power of scientific economics
Daniel Hirschman and Elizabeth Popp Berman: Do economists make policy?
Peter Gerlach, Marika Lindgren Åsbrink and Ola Pettersson, interviewed by Daniel Mathisen: All else equal

The use of mathematics in physics and economics

26 November, 2016 at 12:31 | Posted in Economics | Comments Off on The use of mathematics in physics and economics

My idea is to examine the most well-known works of a selection of the most famous neoclassical economists in the period from 1945 to the present.


My survey of well-known works by four famous mathematical neoclassical economists (Samuelson, Arrow, Debreu, Prescott) who all won the Nobel Prize for economics, has not revealed any precise explanations or successful predictions. This supports my conjecture that the use of mathematics in mainstream (or neoclassical) economics has not produced any precise explanations or successful predictions. This, I would claim, is the main difference between neoclassical economics and physics, where both precise explanations and successful predictions have often been obtained by the use of mathematics.

Donald Gillies

What we do in life echoes in eternity

25 November, 2016 at 18:45 | Posted in Varia | Comments Off on What we do in life echoes in eternity


In science, courage is to follow the motto of the Enlightenment and Kant’s dictum — Sapere Aude! To use your own understanding, to have the courage to think for yourself and to question ‘received opinion,’ authority or orthodoxy.

In our daily lives, courage is the capability to confront fear: when facing the powerful and mighty, not to step back, but to stand up for one’s right not to be humiliated or abused in any way by the rich and powerful.

Dignity, a better life, justice and the rule of law are things worth fighting for. Not stepping back creates courageous acts that stay in our memories and mean something. As when Rosa Parks, just over sixty years ago, on December 1, 1955, in Montgomery, Alabama, refused to give up her seat to make room for a white passenger.

Courage is to do the right thing in spite of danger and fear. To keep on even if opportunities to turn back are given. Like in the great stories. The ones where people have lots of chances of turning back — but don’t.

As when Sir Nicholas Winton organised the rescue of 669 children destined for Nazi concentration camps during World War II.

Or as when Ernest Shackleton, in April 1916, aboard the small boat ‘James Caird’, spent 16 days crossing 1,300 km of ocean to reach South Georgia, then trekked across the island to a whaling station, and finally rescued the remaining men of the crew of ‘Endurance’ left on Elephant Island. Not a single member of the expedition died.

What we do in life echoes in eternity.

1980s nostalgia (personal)

25 November, 2016 at 17:16 | Posted in Varia | 1 Comment

My youngest — born in 1999 — asked me the other day what kind of music her dad listened to back in the swinging 80s. So here, Linnea, is a little taste of paternal nostalgia:






Why IS-LM doesn’t capture Keynes’ approach to the economy

25 November, 2016 at 12:39 | Posted in Economics | 2 Comments

Suppose workers are unemployed. As a result, although willing to work even at lower wages, they are unable to buy consumption goods. As a result, firms are unable to sell those goods if they produced them. So they do not employ the workers who, as a consequence, do not have the wages to buy the consumption goods. The economy is caught in a vicious cycle of deficient demand. According to the IS/LM framework, this would lead to a fall in prices and wages, raise real balances and boost demand. But falling prices and wages might have the effect of both reducing effective demand and confidence, deepening rather than resolving the problem of unemployment.

These considerations raise serious doubts whether the IS/LM approach, despite being the standard representation, fully captures the Keynesian approach to the economy other than in name …

The appeal of the IS/LM lay not only in its formalisation of what is falsely taken to be Keynes’ specific contribution but also in compromising with a Walrasian approach to the economy.

Ben Fine and Ourania Dimakou have some further interesting references for those wanting to dwell on the question of how much Keynes there really is in Hicks’s IS-LM model.

My own view is that IS-LM doesn’t adequately reflect the width and depth of Keynes’s insights on the workings of modern market economies, for the following six reasons:

1. Almost nothing in the post-General Theory writings of Keynes suggests that he considered Hicks’s IS-LM anywhere near a faithful rendering of his thought. In Keynes’s canonical statement of the essence of his theory — the famous 1937 Quarterly Journal of Economics article — there is nothing to even suggest that Keynes would have thought the existence of a Keynes-Hicks-IS-LM theory anything but pure nonsense. John Hicks, the man who invented IS-LM in his 1937 Econometrica review of Keynes’s General Theory — “Mr. Keynes and the ‘Classics’. A Suggested Interpretation” — returned to it in a 1980 article in the Journal of Post Keynesian Economics — “IS-LM: an explanation.” Self-critically he wrote that “the only way in which IS-LM analysis usefully survives — as anything more than a classroom gadget, to be superseded, later on, by something better — is in application to a particular kind of causal analysis, where the use of equilibrium methods, even a drastic use of equilibrium methods, is not inappropriate.” What Hicks acknowledges in 1980 is basically that his original IS-LM model ignored significant parts of Keynes’s theory. IS-LM is inherently a temporary general equilibrium model. However, much of the discussion we have in macroeconomics is about timing and the speed of relative adjustments of quantities, commodity prices and wages — on which IS-LM doesn’t have much to say.

2. IS-LM forces the analysis, to a large extent, into a static comparative-equilibrium setting that doesn’t in any substantial way reflect the processual nature of what takes place in historical time. To me, Keynes’s analysis is in fact inherently dynamic — at least in the sense that it was based on real historical time and not the logical-ergodic-non-entropic time concept used in most neoclassical model building. And as Niels Bohr used to say — thinking is not the same as just being logical …

3. IS-LM reduces the interaction between real and nominal entities to a rather constrained interest-rate mechanism, which is far too simplistic for analyzing complex, financialised modern market economies.

4. IS-LM gives no place for real money, but rather trivializes the role that money and finance play in modern market economies. As Hicks, commenting on his IS-LM construct, had it in 1980 — “one did not have to bother about the market for loanable funds.” From the perspective of modern monetary theory, it’s obvious that IS-LM to a large extent ignores the fact that money in modern market economies is created in the process of financing — and not, as IS-LM depicts it, something that central banks determine.

5. IS-LM is typically set in a current-values numéraire framework that definitely downgrades the importance of expectations and uncertainty — and a fortiori gives too large a role to the rate of interest in ruling the roost when it comes to investments and liquidity preferences. In this regard it is actually as bad as all the modern microfounded Neo-Walrasian-New-Keynesian models, where genuine Keynesian uncertainty and expectations aren’t really modelled. Especially the two-dimensionality of Keynesian uncertainty — a question of both probability and “confidence” — has been impossible to incorporate into this framework, which basically presupposes people following the dictates of expected utility theory (a high probability may mean nothing if the agent has low “confidence” in it). Reducing uncertainty to risk — implicit in most analyses building on IS-LM models — is nothing but hand waving. According to Keynes we live in a world permeated by unmeasurable uncertainty — not quantifiable stochastic risk — which often forces us to make decisions based on anything but “rational expectations.” Keynes rather thought that we base our expectations on the “confidence” or “weight” we put on different events and alternatives. To Keynes, expectations are a question of weighing probabilities by “degrees of belief,” beliefs that often have preciously little to do with the kind of stochastic probabilistic calculations made by the rational agents modelled by “modern” social sciences. And often we “simply do not know.”

6. IS-LM ignores not only genuine uncertainty, but also the essentially complex and cyclical character of economies and investment activities, speculation, endogenous money, labour market conditions, and the importance of income distribution. As Axel Leijonhufvud so eloquently notes on IS-LM economics — “one doesn’t find many inklings of the adaptive dynamics behind the explicit statics.” Most of the insights on dynamic coordination problems that made Keynes write the General Theory are lost in the translation into the IS-LM framework.

Given this, it’s difficult not to agree with Fine and Dimakou: the IS/LM approach doesn’t capture Keynes’ approach to the economy other than in name.

Endogenous growth theory — a crash course

24 November, 2016 at 17:11 | Posted in Economics | 2 Comments

 

This year’s dunce cap in Malmö politics

23 November, 2016 at 22:26 | Posted in Education & School | Comments Off on This year’s dunce cap in Malmö politics

MALMÖ. Complaints about the municipal preschools have more than doubled over the past two years, but that does not worry Rose-Marie Carlsson (S), chair of the preschool committee.

– An increase in the number of complaints does not have to mean that things have got worse. On the contrary, she says. The reorganisation in 2013, when the preschools got their own committee and administration, also brought a new way of working with complaints handling.

– It is easier now for guardians to submit complaints; among other things, it can be done directly on the City of Malmö website, says Rose-Marie Carlsson …

Rose-Marie Carlsson argues that the increase in the number of complaints is actually something positive – it proves that complaints handling works better than it did before the 2013 reorganisation.

– One has to remember that we have gone from ten city districts with ten different ways of handling these matters to a single way of working. Complaints handling is part of our quality work, and we now work more systematically and follow up complaints in a completely different way than before. We have better control and oversight now. I am more concerned about the units where no complaints at all have been filed – there one may suspect deficient routines, she says.

Markus Celander

And this croaking of frogs and splashing of ducks is what one has to read in the year 2016. Good grief! It makes you clutch your forehead. Rarely, if ever, has such an utterly ridiculous attempt to talk away criticism been heard.

What is ergodicity?

23 November, 2016 at 10:27 | Posted in Economics | 2 Comments

Why are election polls often inaccurate? Why is racism wrong? Why are your assumptions often mistaken? The answers to all these questions and to many others have a lot to do with the non-ergodicity of human ensembles. Many scientists agree that ergodicity is one of the most important concepts in statistics. So, what is it?

Suppose you are concerned with determining what the most visited parks in a city are. One idea is to take a momentary snapshot: to see how many people are this moment in park A, how many are in park B and so on. Another idea is to look at one individual (or few of them) and to follow him for a certain period of time, e.g. a year. Then, you observe how often the individual is going to park A, how often he is going to park B and so on.

Thus, you obtain two different results: one statistical analysis over the entire ensemble of people at a certain moment in time, and one statistical analysis for one person over a certain period of time. The first one may not be representative for a longer period of time, while the second one may not be representative for all the people.

The idea is that an ensemble is ergodic if the two types of statistics give the same result. Many ensembles, like the human populations, are not ergodic.

The importance of ergodicity becomes manifest when you think about how we all infer various things, how we draw some conclusion about something while having information about something else. For example, one goes once to a restaurant and likes the fish and next time he goes to the same restaurant and orders chicken, confident that the chicken will be good. Why is he confident? Or one observes that a newspaper has printed some inaccurate information at one point in time and infers that the newspaper is going to publish inaccurate information in the future. Why are these inferences ok, while others such as “more crimes are committed by black persons than by white persons, therefore each individual black person is not to be trusted” are not ok?

The answer is that the ensemble of articles published in a newspaper is more or less ergodic, while the ensemble of black people is not at all ergodic. If one searches how many mistakes appear in an entire newspaper in one issue, and then searches how many mistakes one news editor does over time, one finds the two results almost identical (not exactly, but nonetheless approximately equal). However, if one takes the number of crimes committed by black people in a certain day divided by the total number of black people, and then follows one random-picked black individual over his life, one would not find that, e.g. each month, this individual commits crimes at the same rate as the crime rate determined over the entire ensemble. Thus, one cannot use ensemble statistics to properly infer what is and what is not probable that a certain individual will do.

Or take an even clearer example: In an election each party gets some percentage of votes, party A gets a%, party B gets b% and so on. However, this does not mean that over the course of their lives each individual votes with party A in a% of elections, with B in b% of elections and so on …

A similar problem is faced by scientists in general when they are trying to infer some general statement from various particular experiments. When is a generalization correct and when it isn’t? The answer concerns ergodicity. If the generalization is done towards an ergodic ensemble, then it has a good chance of being correct.

Vlad Tarko

Paul Samuelson once famously claimed that the “ergodic hypothesis” is essential for advancing economics from the realm of history to the realm of science. But is it really tenable to assume — as Samuelson and most other mainstream economists do — that ergodicity is essential to economics?

In this video Ole Peters shows why ergodicity is such an important concept for understanding the deep fundamental flaws of mainstream economics:

Sometimes ergodicity is mistaken for stationarity. But although all ergodic processes are stationary, not all stationary processes are ergodic.

Let’s say we have a stationary process. That does not guarantee that it is also ergodic. The long-run time average of a single output function of the stationary process may not converge to the expectation of the corresponding variables — and so the long-run time average may not equal the probabilistic (expectational) average. Say we have two coins, where coin A has a probability of 1/2 of coming up heads and coin B has a probability of 1/4 of coming up heads. We pick one of these coins with a probability of 1/2 and then toss the chosen coin over and over again. Now let H1, H2, … be either one or zero as the coin comes up heads or tails. This process is obviously stationary, but the time average [H1 + … + Hn]/n converges to 1/2 if coin A is chosen and to 1/4 if coin B is chosen. Each of these time averages occurs with probability 1/2, so their expectational average is 1/2 x 1/2 + 1/2 x 1/4 = 3/8, which obviously is not equal to 1/2 or 1/4. The time average depends on which coin you happen to choose, while the probabilistic (expectational) average is calculated for the whole “system” consisting of both coin A and coin B.
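A short simulation of this two-coin set-up (my own sketch; the seed and sample sizes are arbitrary) makes the gap between the time averages and the ensemble average visible.

```python
import numpy as np

rng = np.random.default_rng(1)
n_tosses, n_runs = 100_000, 10_000

def one_run(rng, n_tosses):
    """Pick coin A (p=1/2) or coin B (p=1/4) once, then toss that coin n_tosses times."""
    p = 0.5 if rng.random() < 0.5 else 0.25
    return rng.random(n_tosses) < p

# Time average within a single run: close to 1/2 or to 1/4, never to 3/8.
single_run = one_run(rng, n_tosses)
print("time average of one long run:", single_run.mean())

# Ensemble average over many runs at a fixed toss: close to 3/8 = 0.375.
first_tosses = np.array([one_run(rng, 1)[0] for _ in range(n_runs)])
print("ensemble average of the first toss:", first_tosses.mean())
```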

In an ergodic system time is irrelevant and has no direction. Nothing changes in any significant way; at most you will see some short-lived fluctuations. An ergodic system is indifferent to its initial conditions: if you re-start it, after a little while it always falls into the same equilibrium behavior.

For example, say I gave 1,000 people one die each, had them roll their die once, added all the points rolled, and divided by 1,000. That would be a finite-sample average, approaching the ensemble average as I include more and more people.

Now say I rolled a die 1,000 times in a row, added all the points rolled and divided by 1,000. That would be a finite-time average, approaching the time average as I keep rolling that die.

One implication of ergodicity is that ensemble averages will be the same as time averages. In the first case, it is the size of the sample that eventually removes the randomness from the system. In the second case, it is the time that I’m devoting to rolling that removes randomness. But both methods give the same answer, within errors. In this sense, rolling dice is an ergodic system.

I say “in this sense” because if we bet on the results of rolling a die, wealth does not follow an ergodic process under typical betting rules. If I go bankrupt, I’ll stay bankrupt. So the time average of my wealth will approach zero as time passes, even though the ensemble average of my wealth may increase.

A precondition for ergodicity is stationarity, so there can be no growth in an ergodic system. Ergodic systems are zero-sum games: things slosh around from here to there and back, but nothing is ever added, invented, created or lost. No branching occurs in an ergodic system, no decision has any consequences because sooner or later we’ll end up in the same situation again and can reconsider. The key is that most systems of interest to us, including finance, are non-ergodic.

Ole Peters
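Both cases in the quoted passage are easy to reproduce numerically. The sketch below is my own; the multiplicative bet (wealth multiplied by 1.5 or 0.6 each round) and all sample sizes are arbitrary choices for illustration, not taken from Peters’ own papers.

```python
import numpy as np

rng = np.random.default_rng(7)

# 1. Rolling dice is ergodic: the ensemble average and the time average both approach 3.5.
ensemble_avg = rng.integers(1, 7, size=1_000).mean()    # 1,000 people roll once each
time_avg = rng.integers(1, 7, size=1_000).mean()        # one person rolls 1,000 times
print(f"dice: ensemble average {ensemble_avg:.2f}, time average {time_avg:.2f}")

# 2. A multiplicative bet is not ergodic.  Each round, wealth is multiplied by 1.5 or 0.6
#    with equal probability.  The expected one-round factor is 1.05 > 1, so the ensemble
#    (cross-sectional) average grows with the number of rounds, but the median per-round
#    growth factor is sqrt(1.5 * 0.6) ~ 0.95 < 1, so the typical individual path shrinks.
n_people, n_rounds = 100_000, 50
factors = np.where(rng.random((n_people, n_rounds)) < 0.5, 1.5, 0.6)
wealth = factors.prod(axis=1)                            # everyone starts with wealth 1

print(f"bet: ensemble average wealth after {n_rounds} rounds: {wealth.mean():.2f}")
print(f"bet: median individual wealth after {n_rounds} rounds: {np.median(wealth):.3f}")
```

The average across people stays well above the starting wealth while the typical individual ends up with only a fraction of it, which is the sense in which what looks good ‘on average over the ensemble’ can be ruinous ‘on average over time’.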

Mainstream economics — sacrificing realism at the altar of mathematical purity

22 November, 2016 at 18:05 | Posted in Economics | 6 Comments

Economists are too detached from the real world and have failed to learn from the financial crisis, insisting on using mathematical models which do not reflect reality, according to the Bank of England’s chief economist Andy Haldane.

The public has lost faith in economists since the credit crunch, he said, but the profession has failed to thoroughly re-examine its failings to come up with a new model of operating.

“The various reports into the economic costs of the UK leaving the EU most likely fell at the same hurdle. They are written, in the main, by the elite for the elite,” said Mr Haldane, writing the foreword to a new book, called ‘The Econocracy: the perils of leaving economics to the experts’ …

The chief economist said that the Great Depression of the 1930s resulted in a major overhaul of economic thinking, led by John Maynard Keynes, who emerged “as the most influential economist of the twentieth century”.

But the recent financial crisis and slow recovery has not yet prompted this great re-thinking …

For now, economists need to focus on reviewing their models, accepting a diversity of thought rather than one solid orthodoxy, and on communicating more clearly.

Economists should focus on other disciplines as well as maths, he said.

“Mainstream economic models have sacrificed too much realism at the altar of mathematical purity. Their various simplifying assumptions have served aesthetic rather than practical ends,” Mr Haldane wrote.

“As a profession, economics has become too much of a methodological monoculture. And that lack of intellectual diversity cost the profession dear when the single crop failed spectacularly during the crisis.”

Tim Wallace/The Telegraph

The rebel who blew up macroeconomics

22 November, 2016 at 15:18 | Posted in Economics | 1 Comment

Paul Romer says he really hadn’t planned to trash macroeconomics as a math-obsessed pseudoscience. Or infuriate countless colleagues. It just sort of happened …

The upshot was “The Trouble With Macroeconomics,” a scathing critique that landed among Romer’s peers like a grenade. In a time of febrile politics, with anti-establishment revolts breaking out everywhere, faith in economists was already ebbing: They got blamed for failing to see the Great Recession coming and, later, to suggest effective remedies. Then, along came one of the leading practitioners of his generation, to say that the skeptics were onto something.

“For more than three decades, macroeconomics has gone backwards,” the paper began. Romer closed out his argument, some 20 pages later, by accusing a cohort of economists of drifting away from science, more interested in preserving reputations than testing their theories against reality, “more committed to friends than facts.” In between, he offers a wicked parody of a modern macro argument: “Assume A, assume B, … blah blah blah … and so we have proven that P is true.” …

Romer said he hopes at least to have set an example, for younger economists, of how scientific inquiry should proceed — on Enlightenment lines. No authority-figures should command automatic deference, or be placed above criticism, and voices from outside the like-minded group shouldn’t be ignored. He worries that those principles are at risk, well beyond his own field … And at the deepest level, he thinks it’s a misunderstanding of science that has sent so many economists down the wrong track. “Essentially, their belief was that math could tell you the deep secrets of the universe,” he said.

Bloomberg

‘Post-real’ macroeconomics — three decades of intellectual regress

22 November, 2016 at 10:40 | Posted in Economics | 1 Comment

Macroeconomists got comfortable with the idea that fluctuations in macroeconomic aggregates are caused by imaginary shocks, instead of actions that people take, after Kydland and Prescott (1982) launched the real business cycle (RBC) model …

In response to the observation that the shocks are imaginary, a standard defence invokes Milton Friedman’s (1953) methodological assertion from unnamed authority that “the more significant the theory, the more unrealistic the assumptions.” More recently, “all models are false” seems to have become the universal hand-wave for dismissing any fact that does not conform to the model that is the current favourite.

The noncommittal relationship with the truth revealed by these methodological evasions and the “less than totally convinced …” dismissal of fact goes so far beyond post-modern irony that it deserves its own label. I suggest “post-real.”

Paul Romer

There are many kinds of useless economics held in high regard within the mainstream economics establishment today. Few deserve that regard less than the post-real macroeconomic theory — mostly connected with Finn Kydland, Robert Lucas, Edward Prescott and Thomas Sargent — called RBC.

In Chicago economics one cultivates the view that scientific theories have nothing to do with truth. Constructing theories and building models is not even considered an activity with the intent of approximating truth. For Chicago economists it is only an endeavour to organize their thoughts in a ‘useful’ manner.

What a handy view of science!

What these defenders of scientific storytelling ‘forget’ is that potential explanatory power achieved in thought experimental models is not enough for attaining real explanations. Model explanations are at best conjectures, and whether they do or do not explain things in the real world is something we have to test. To just believe that you understand or explain things better with thought experiments is not enough.

Without a warranted export certificate to the real world, model explanations are pretty worthless. Proving things in models is not enough — not even after having put ‘New Keynesian’ sticky-price DSGE lipstick on the RBC pig.

Truth is an important concept in real science — and models based on meaningless calibrated ‘facts’ and ‘assumptions’ with unknown truth value are poor substitutes.

Public debt should not be zero. Ever!

22 November, 2016 at 09:26 | Posted in Economics | 3 Comments

Nation states borrow to provide public capital: For example, rail networks, road systems, airports and bridges. These are examples of large expenditure items that are more efficiently provided by government than by private companies.

The benefits of public capital expenditures are enjoyed not only by the current generation of people, who must sacrifice consumption to pay for them, but also by future generations who will travel on the rail networks, drive on the roads, fly to and from the airports and drive over the bridges that were built by previous generations. Interest on the government debt is a payment from current taxpayers, who enjoy the fruits of public capital, to past generations, who sacrificed consumption to provide that capital.

To maintain the roads, railways, airports and bridges, the government must continue to invest in public infrastructure. And public investment should be financed by borrowing, not from current tax revenues.

Investment in public infrastructure was, on average, equal to 4.3% of GDP in the period from 1948 through 1983. It has since fallen to 1.6% of GDP. There is a strong case to be made for increasing investment in public infrastructure. First, the public capital that was constructed in the post WWII period must be maintained in order to allow the private sector to function effectively. Second, there is a strong case for the construction of new public infrastructure to promote and facilitate future private sector growth.

The debt raised by a private sector company should be strictly less than the value of assets, broadly defined. That principle does not apply to a nation state. Even if government provided no capital services, the value of its assets or liabilities should not be zero except by chance.

National treasuries have the power to transfer resources from one generation to another. By buying and selling assets in the private markets, government creates opportunities for those of us alive today to transfer resources to or from those who are yet to be born. If government issues less debt than the value of public capital, there will be an implicit transfer from current to future generations. If it owns more debt, the implicit transfer is in the other direction.

The optimal value of debt, relative to public capital, is a political decision … Whatever principle the government does choose to fund its expenditure, the optimal value of public sector borrowing will not be zero, except by chance.

Roger Farmer

Today there seems to be a rather widespread consensus that public debt is acceptable as long as it doesn’t increase too much or too fast: if the public debt-to-GDP ratio becomes higher than X %, the likelihood of a debt crisis and/or lower growth is said to increase.

But in discussing within which margins public debt is feasible, the focus is solely on the upper limit of indebtedness, and very few ask whether there might also be a problem if public debt becomes too low.

The government’s ability to conduct an “optimal” public debt policy may be negatively affected if public debt becomes too small. To guarantee a well-functioning secondary market in bonds, it is essential that the government has access to a functioning market. If turnover and liquidity in the secondary market become too small, increased volatility and uncertainty will in the long run lead to an increase in borrowing costs. Ultimately there is even a risk that market makers would disappear, leaving bond market trading to be operated solely through brokered deals. As a kind of precautionary measure against this eventuality it may be argued — especially in times of financial turmoil and crises — that it is necessary to increase government borrowing and debt to ensure, in the longer run, good borrowing preparedness and a sustained (government) bond market.

The question of whether public debt is good, and whether we may actually have too little of it, is one of our time’s biggest questions. Giving the wrong answer to it will be costly.

One of the most effective ways of clearing up this most serious of all semantic confusions is to point out that private debt differs from national debt in being external. It is owed by one person to others. That is what makes it burdensome. Because it is interpersonal the proper analogy is not to national debt but to international debt…. But this does not hold for national debt which is owed by the nation to citizens of the same nation. There is no external creditor. We owe it to ourselves.

A variant of the false analogy is the declaration that national debt puts an unfair burden on our children, who are thereby made to pay for our extravagances. Very few economists need to be reminded that if our children or grandchildren repay some of the national debt these payments will be made to our children or grandchildren and to nobody else. Taking them altogether they will no more be impoverished by making the repayments than they will be enriched by receiving them.

Abba Lerner, ‘The Burden of the National Debt’ (1948)

Slim by chocolate — a severe case of goofed p-hacking

21 November, 2016 at 17:12 | Posted in Statistics & Econometrics | 2 Comments

Frank randomly assigned the subjects to one of three diet groups. One group followed a low-carbohydrate diet. Another followed the same low-carb diet plus a daily 1.5 oz. bar of dark chocolate. And the rest, a control group, were instructed to make no changes to their current diet. They weighed themselves each morning for 21 days, and the study finished with a final round of questionnaires and blood tests …

Both of the treatment groups lost about 5 pounds over the course of the study, while the control group’s average body weight fluctuated up and down around zero. But the people on the low-carb diet plus chocolate? They lost weight 10 percent faster. Not only was that difference statistically significant, but the chocolate group had better cholesterol readings and higher scores on the well-being survey.

I know what you’re thinking. The study did show accelerated weight loss in the chocolate group—shouldn’t we trust it? Isn’t that how science works?

Here’s a dirty little science secret: If you measure a large number of things about a small number of people, you are almost guaranteed to get a “statistically significant” result. Our study included 18 different measurements—weight, cholesterol, sodium, blood protein levels, sleep quality, well-being, etc.—from 15 people. (One subject was dropped.) That study design is a recipe for false positives.

Think of the measurements as lottery tickets. Each one has a small chance of paying off in the form of a “significant” result that we can spin a story around and sell to the media. The more tickets you buy, the more likely you are to win. We didn’t know exactly what would pan out—the headline could have been that chocolate improves sleep or lowers blood pressure—but we knew our chances of getting at least one “statistically significant” result were pretty good.

Whenever you hear that phrase, it means that some result has a small p value. The letter p seems to have totemic power, but it’s just a way to gauge the signal-to-noise ratio in the data. The conventional cutoff for being “significant” is 0.05, which means that there is just a 5 percent chance that your result is a random fluctuation. The more lottery tickets, the better your chances of getting a false positive. So how many tickets do you need to buy?

P(winning) = 1 – (1 – p)^n

With our 18 measurements, we had a 60% chance of getting some “significant” result with p < 0.05. (The measurements weren’t independent, so it could be even higher.) The game was stacked in our favor.

It’s called p-hacking—fiddling with your experimental design and data to push p under 0.05—and it’s a big problem. Most scientists are honest and do it unconsciously. They get negative results, convince themselves they goofed, and repeat the experiment until it “works.” Or they drop “outlier” data points.

John Bohannon
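The multiple-comparisons arithmetic in the passage above is easy to check by simulation. The sketch below is my own: the two group sizes (8 and 7, echoing the 15 subjects) and the use of scipy’s independent-samples t-test are assumptions of the illustration, and every ‘measurement’ is pure noise.

```python
import numpy as np
from scipy import stats

# Probability of at least one 'significant' result among n independent tests at p < 0.05.
p, n_tests = 0.05, 18
print(f"1 - (1 - p)^n = {1 - (1 - p) ** n_tests:.2f}")   # about 0.60, as in the quote

# Simulate many fake studies: 15 subjects in two groups, 18 outcomes that are pure noise.
rng = np.random.default_rng(123)
n_sims = 2_000
hits = 0
for _ in range(n_sims):
    group_a = rng.normal(size=(8, n_tests))   # no real effect anywhere
    group_b = rng.normal(size=(7, n_tests))
    pvalues = stats.ttest_ind(group_a, group_b, axis=0).pvalue
    hits += bool((pvalues < 0.05).any())

print(f"share of noise-only studies with at least one 'significant' outcome: {hits / n_sims:.2f}")
```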

Statistical inferences depend on both what actually happens and what might have happened. And Bohannon’s (in)famous chocolate con more than anything else underscores the dangers of confusing the model with reality. Or as W. V. O. Quine had it: “Confusion of sign and object is the original sin.”

There are no such things as free-standing probabilities – simply because probabilities are, strictly seen, only defined relative to chance set-ups – probabilistic nomological machines like flipping coins or roulette wheels. And even these machines can be tricky to handle. Although prob(fair coin lands heads | I toss it) = prob(fair coin lands heads & I toss it)/prob(I toss it) may be well-defined, it’s not certain we can use it, since we cannot define the probability that I will toss the coin, given the fact that I am not a nomological machine producing coin tosses.

No nomological machine – no probability.

