Keynes on the ‘devastating inconsistencies’ of econometrics

30 Nov, 2016 at 11:05 | Posted in Statistics & Econometrics | 7 Comments

In practice Prof. Tinbergen seems to be entirely indifferent whether or not his basic factors are independent of one another … But my mind goes back to the days when Mr. Yule sprang a mine under the contraptions of optimistic statisticians by his discovery of spurious correlation. In plain terms, it is evident that if what is really the same factor is appearing in several places under various disguises, a free choice of regression coefficients can lead to strange results. It becomes like those puzzles for children where you write down your age, multiply, add this and that, subtract something else, and eventually end up with the number of the Beast in Revelation.

Prof. Tinbergen explains that, generally speaking, he assumes that the correlations under investigation are linear … I have not discovered any example of curvilinear correlation in this book, and he does not tell us what kind of evidence would lead him to introduce it. If, as he suggests above, he were in such cases to use the method of changing his linear coefficients from time to time, it would certainly seem that quite easy manipulation on these lines would make it possible to fit any explanation to any facts. Am I right in thinking that the uniqueness of his results depends on his knowing beforehand that the correlation curve must be a particular kind of function, whether linear or some other kind?

Apart from this, one would have liked to be told emphatically what is involved in the assumption of linearity. It means that the quantitative effect of any causal factor on the phenomenon under investigation is directly proportional to the factor’s own magnitude … But it is a very drastic and usually improbable postulate to suppose that all economic forces are of this character, producing independent changes in the phenomenon under investigation which are directly proportional to the changes in themselves; indeed, it is ridiculous. Yet this is what Prof. Tinbergen is throughout assuming …

J M Keynes

Keynes’ comprehensive critique of econometrics and the assumptions it is built around — completeness, measurability, independence, homogeneity, and linearity — is still valid today.

Most work in econometrics is done on the assumption that the researcher has a theoretical model that is ‘true.’ But to think that we are able to construct a model in which all relevant variables are included and the functional relationships between them correctly specified is not only a belief without support; it is a belief impossible to support.

The theories we work with when building our econometric regression models are insufficient. No matter what we study, some variables are always missing, and we do not know the correct functional form for the relationships between the variables we do include.

Every econometric model ever constructed is misspecified. There is always an endless list of possible variables to include, and endless possible ways to specify the relationships between them. So every applied econometrician comes up with his own specification and ‘parameter’ estimates. The econometric Holy Grail of consistent and stable parameter-values is nothing but a dream.
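To see how much those ‘parameter’ estimates can hinge on the chosen specification, consider a minimal simulated sketch (my own illustration, with made-up data, not anyone’s actual study). The outcome depends on two correlated factors, and leaving the second one out nearly doubles the estimated coefficient on the first:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000

# Two correlated 'causal' factors: x2 is partly driven by x1.
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(scale=0.6, size=n)
y = 1.0 * x1 + 1.0 * x2 + rng.normal(size=n)

def ols(columns, y):
    """Least-squares coefficients; intercept comes first."""
    X = np.column_stack([np.ones(len(y))] + list(columns))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

full = ols([x1, x2], y)   # both factors included
short = ols([x1], y)      # x2 omitted

print(f"coefficient on x1 with x2 included: {full[1]:.2f}")   # ~1.0
print(f"coefficient on x1 with x2 omitted:  {short[1]:.2f}")  # ~1.8
```

Both regressions ‘fit’ the data, but they tell entirely different causal stories, and with real-world data there is no way of knowing which specification, if either, is the right one.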

A rigorous application of econometric methods in economics really presupposes that the phenomena of our real world economies are ruled by stable causal relations between variables.  Parameter-values estimated in specific spatio-temporal contexts are presupposed to be exportable to totally different contexts. To warrant this assumption one, however, has to convincingly establish that the targeted acting causes are stable and invariant so that they maintain their parametric status after the bridging. The endemic lack of predictive success of the econometric project indicates that this hope of finding fixed parameters is a hope for which there really is no other ground than hope itself.

The theoretical conditions that have to be fulfilled for econometrics to really work are nowhere even closely met in reality. Making outlandish statistical assumptions does not provide a solid ground for doing relevant social science and economics. Although econometrics has become the most widely used quantitative method in economics today, the inferences drawn from it are as a rule invalid.

Econometrics is basically a deductive method. Given its assumptions, it delivers deductive inferences. The problem, of course, is that we never fully know whether the assumptions hold. Conclusions can only be as certain as their premises — and that also applies to econometrics.

Three suggestions to ‘save’ econometrics

29 Nov, 2016 at 11:33 | Posted in Economics, Statistics & Econometrics | 6 Comments

Reading an applied econometrics paper could leave you with the impression that the economist (or any social science researcher) first formulated a theory, then built an empirical test based on the theory, then tested the theory. But in my experience what generally happens is more like the opposite: with some loose ideas in mind, the econometrician runs a lot of different regressions until they get something that looks plausible, then tries to fit it into a theory (existing or new) … Statistical theory itself tells us that if you do this for long enough, you will eventually find something plausible by pure chance!

This is bad news because, as tempting as that final, pristine-looking causal effect is, readers have no way of knowing how it was arrived at. There are several ways I’ve seen to guard against this:

(1) Use a multitude of empirical specifications to test the robustness of the causal links, and pick the one with the best predictive power …

(2) Have researchers submit their paper for peer review before they carry out the empirical work, detailing the theory they want to test, why it matters and how they’re going to do it. Reasons for inevitable deviations from the research plan should be explained clearly in an appendix by the authors and (re-)approved by referees.

(3) Insist that the paper be replicated. Firstly, by having the authors submit their data and code and seeing if referees can replicate it (think this is a low bar? Most empirical research in ‘top’ economics journals can’t even manage it). Secondly — in the truer sense of replication — wait until someone else, with another dataset or method, gets the same findings in at least a qualitative sense. The latter might be too much to ask of researchers for each paper, but it is a good thing to have in mind as a reader before you are convinced by a finding.

All three of these should, in my opinion, be a prerequisite for research that uses econometrics …

Naturally, this would result in a lot more null findings and probably a lot less research. Perhaps it would also result in fewer papers which attempt to tell the entire story: that is, which go all the way from building a new model to finding (surprise!) that even the most rigorous empirical methods support it.

Unlearning Economics
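The point about finding ‘something plausible by pure chance’ is easy to demonstrate. In the little simulation below (my own sketch, not from the quoted post), the outcome is pure noise with no true effects at all, yet screening enough candidate regressors reliably turns up ‘significant’ ones:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 100, 50               # 100 observations, 50 candidate regressors

y = rng.normal(size=n)            # outcome: pure noise, no true effects
X = rng.normal(size=(n, k))       # candidate explanatory variables

spurious = []
for j in range(k):
    x = X[:, j]
    beta = (x @ y) / (x @ x)      # slope of y on x (variables are ~mean zero)
    resid = y - beta * x
    se = np.sqrt((resid @ resid) / (n - 1) / (x @ x))
    if abs(beta / se) > 1.96:     # nominal 5% two-sided test
        spurious.append(j)

print(f"'significant' regressors found: {len(spurious)} of {k}")
# At a 5% false-positive rate we expect two or three purely spurious
# 'findings', and a determined specification search will report one of them.
```

Run enough specifications and the five per cent error rate guarantees a publishable-looking result; the reader who only sees the final regression has no way of knowing this.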

Good suggestions, but unfortunately there are many more deep problems with econometrics that have to be ‘solved.’

In econometrics one often gets the feeling that many of its practitioners think of it as a kind of automatic inferential machine: input data and out comes causal knowledge. This is like pulling a rabbit from a hat. Great — but first you have to put the rabbit in the hat. And this is where assumptions come into the picture. The assumption of imaginary ‘superpopulations’ is one of the many dubious assumptions used in modern econometrics.

Misapplication of inferential statistics to non-inferential situations is a non-starter for doing proper science. And when choosing which models to use in our analyses, we cannot get around the fact that the evaluation of our hypotheses, explanations, and predictions cannot be made without reference to a specific statistical model or framework. The probabilistic-statistical inferences we make from our samples depend decisively on what population we choose to refer to. The reference class problem shows that there are usually many such populations to choose from, and that the one we choose determines which probabilities we come up with and, a fortiori, which predictions we make. Not consciously contemplating the relativity effects of this choice of ‘nomological-statistical machine’ is probably one of the reasons econometricians have a false sense of the amount of uncertainty that really afflicts their models.
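The reference class problem can be stated in a few lines of code (the frequencies below are invented purely for the sake of illustration): the very same individual belongs to many populations, each of which yields a different ‘probability’:

```python
# Hypothetical event frequencies, invented for illustration only.
reference_classes = {
    "all adults":      {"events": 40_000, "size": 1_000_000},
    "adults over 60":  {"events": 25_000, "size":   200_000},
    "smokers over 60": {"events": 12_000, "size":    50_000},
}

for name, d in reference_classes.items():
    p = d["events"] / d["size"]
    print(f"P(event | {name}): {p:.1%}")

# P(event | all adults):      4.0%
# P(event | adults over 60):  12.5%
# P(event | smokers over 60): 24.0%
# One individual can belong to all three classes: which probability is 'his'?
```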

As economists and econometricians we have to confront the all-important question of how to handle uncertainty and randomness. Should we define randomness with probability? If we do, we have to accept that to speak of randomness we also have to presuppose the existence of nomological probability machines, since probabilities cannot be spoken of – and actually, to be strict, do not at all exist – without specifying such system-contexts. Accepting Haavelmo’s domain of probability theory and sample space of infinite populations – just as Fisher’s ‘hypothetical infinite population,’ von Mises’s ‘collective’ or Gibbs’s ‘ensemble’ – also implies that judgments are made on the basis of observations that are actually never made! Infinitely repeated trials or samplings never take place in the real world. So that cannot be a sound inductive basis for a science with aspirations of explaining real-world socio-economic processes, structures or events. It’s not tenable.

Economists — and econometricians — have (uncritically and often without argument) come simply to assume that one can apply probability distributions from statistical theory to their own area of research. However, fundamental problems arise when you try to apply statistical models outside overly simple nomological machines like coin tossing and roulette wheels.

Of course one could arguably treat our observational or experimental data as random samples from real populations. But probabilistic econometrics does not content itself with such populations. Instead it creates imaginary populations of ‘parallel universes’ and assumes that our data are random samples from them. This is nothing but hand-waving! Doing econometrics, it is always wise to remember C. S. Peirce’s remark that universes are not as common as peanuts …

‘Teaching-to-the-test’ — the wrong way forward for Swedish schools

28 Nov, 2016 at 19:13 | Posted in Education & School | Comments Off on ‘Teaching-to-the-test’ — the wrong way forward for Swedish schools

It is worth noting that the decline in knowledge seen in the international surveys is not at all reflected in the results on the national tests (I am aware that the axes in the figure below are not optimally scaled). One interpretation is that teaching today is so focused on the tests that pupils, despite a falling underlying level of knowledge, still do reasonably well on them. When confronted with new types of tasks, however, pupils fall short. This would suggest that the design of the tests makes them easy to see through and to learn to perform well on, without pupils having acquired any deeper subject knowledge or understanding.

[Figure: results on the Swedish national tests over time, showing no decline corresponding to the international surveys]

These figures show what ought to be common knowledge in educational research: the evaluation system shapes what goes on in schools to a considerably higher degree than steering documents and general policy goals do … Pupils tend to get better at precisely the type of test used as the evaluation instrument, but not necessarily at other types of tests. Even if a larger element of ‘teaching-to-the-test’ is not bad by definition, the Swedish experience does not suggest that it is an obvious road to better results.

It is certainly possible that the development would have been even worse without the increased importance of the national tests, but at the same time there is an obvious possibility that the opposite is true.

Jonas Vlachos

The elite illusion

28 Nov, 2016 at 15:25 | Posted in Statistics & Econometrics | Comments Off on The elite illusion

The results reported here suggest that an exam school education produces only scattered gains for applicants, even among students with baseline scores close to or above the mean in the target school. Because the exam school experience is associated with sharp increases in peer achievement, these results weigh against the importance of peer effects in the education production function …

Of course, test scores and peer effects are only part of the exam school story. It may be that preparation for exam school entrance is itself worthwhile … The many clubs and activities found at some exam schools may expose students to ideas and concepts not easily captured by achievement tests or our post-secondary outcomes. It is also possible that exam school graduates earn higher wages, a question we plan to explore in future work. Still, the estimates reported here suggest that any labor market gains are likely to come through channels other than peer composition and increased cognitive achievement …

Our results are also relevant to the economic debate around school quality and school choice … As with the jump in house prices at school district boundaries, heavy rates of exam school oversubscription suggest that parents believe peer composition matters a great deal for their children’s welfare. The fact that we find little support for causal peer effects suggests that parents either mistakenly equate attractive peers with high value added, or that they value exam schools for reasons other than their impact on learning. Both of these scenarios reduce the likelihood that school choice in and of itself has strong salutary demand-side effects in education production.

A. Abdulkadiroglu, J. D. Angrist, and P. A. Pathak

Results based on one of the latest fads in econometrics — regression discontinuity design. If you are unfamiliar with the ‘technique,’ here’s a video giving some of the basics:
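And for those who prefer code to video, here is a minimal simulated sketch of the idea (entirely made-up data; the cutoff, bandwidth and effect size are my assumptions). Units just above an admission cutoff are compared with units just below it, using separate linear fits on each side:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2_000

score = rng.uniform(0, 100, n)       # running variable, e.g. an entrance exam
cutoff = 60.0
treated = score >= cutoff            # treatment assignment jumps at the cutoff
# Outcome: smooth in the score, plus a true jump of 2.0 at the cutoff.
outcome = 0.05 * score + 2.0 * treated + rng.normal(scale=1.0, size=n)

# Local linear fits within a bandwidth on each side of the cutoff.
h = 10.0
left = (score >= cutoff - h) & (score < cutoff)
right = (score >= cutoff) & (score < cutoff + h)
slope_l, icept_l = np.polyfit(score[left], outcome[left], 1)
slope_r, icept_r = np.polyfit(score[right], outcome[right], 1)

# The RD estimate is the gap between the two fitted lines at the cutoff.
rd = (slope_r * cutoff + icept_r) - (slope_l * cutoff + icept_l)
print(f"estimated jump at the cutoff: {rd:.2f} (true effect: 2.0)")
```

Everything, of course, hinges on the assumption that units just below and just above the cutoff are comparable in all other respects.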
 

Serenity (personal)

28 Nov, 2016 at 13:44 | Posted in Varia | Comments Off on Serenity (personal)


[h/t Eric Schüldt]

The Economist — Economics prone to fads and methodological crazes

27 Nov, 2016 at 18:49 | Posted in Economics | 2 Comments

When a hot new tool arrives on the scene, it should extend the frontiers of economics and pull previously unanswerable questions within reach. What might seem faddish could in fact be economists piling in to help shed light on the discipline’s darkest corners. Some economists, however, argue that new methods also bring new dangers; rather than pushing economics forward, crazes can lead it astray, especially in their infancy …

A paper by Angus Deaton, a Nobel laureate and expert data digger, and Nancy Cartwright, an economist (sic!) at Durham University, argues that randomised control trials, a current darling of the discipline, enjoy misplaced enthusiasm. RCTs involve randomly assigning a policy to some people and not to others, so that researchers can be sure that differences are caused by the policy. Analysis is a simple comparison of averages between the two. Mr Deaton and Ms Cartwright have a statistical gripe; they complain that researchers are not careful enough when calculating whether two results are significantly different from one another. As a consequence, they suspect that a sizeable portion of published results in development and health economics using RCTs are “unreliable”.

With time, economists should learn when to use their shiny new tools. But there is a deeper concern: that fashions and fads are distorting economics, by nudging the profession towards asking particular questions, and hiding bigger ones from view. Mr Deaton’s and Ms Cartwright’s fear is that RCTs yield results while appearing to sidestep theory, and that “without knowing why things happen and why people do things, we run the risk of worthless causal (‘fairy story’) theorising, and we have given up on one of the central tasks of economics.” Another fundamental worry is that by offering alluringly simple ways of evaluating certain policies, economists lose sight of policy questions that are not easily testable using RCTs, such as the effects of institutions, monetary policy or social norms.

The Economist
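The ‘simple comparison of averages’ the article describes, and the significance calculation Deaton and Cartwright worry about, looks roughly like this in code (simulated data, my own illustration):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200

# Randomly assign half the sample to the policy ('treatment').
treat = rng.permutation(np.repeat([True, False], n // 2))
outcome = rng.normal(size=n) + 0.3 * treat      # assumed true effect: 0.3

y1, y0 = outcome[treat], outcome[~treat]
diff = y1.mean() - y0.mean()

# Standard error of the difference in means (unequal variances).
se = np.sqrt(y1.var(ddof=1) / len(y1) + y0.var(ddof=1) / len(y0))
print(f"estimated effect: {diff:.2f} +/- {1.96 * se:.2f} (95% interval)")
# Deciding whether two such estimates from different trials differ
# 'significantly' requires combining BOTH standard errors, the step
# Deaton and Cartwright say researchers are often not careful enough about.
```

Nothing in the arithmetic is hard; the trouble starts when the resulting number is treated as if it travelled effortlessly to other populations and settings.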

For my own take on the RCT fad — here, here, and here.

Still no. 1!

27 Nov, 2016 at 11:58 | Posted in Varia | 1 Comment

 

On economic knowledge — Fronesis no. 54-55

26 Nov, 2016 at 15:37 | Posted in Economics | Comments Off on On economic knowledge — Fronesis no. 54-55

Since the global financial crisis of 2008, economics has found itself in the spotlight. Student movements and heterodox economists have criticised the dominant economic paradigm and demanded greater pluralism. Recent political developments have laid bare the shortcomings of neoliberalism and raised the question of its connection to economic science. In Fronesis no. 54-55 we take a closer look at the preconditions of economic knowledge.

The left has long directed its social-theoretical and political attention towards cultural and symbolic aspects of power and domination, but has more or less abandoned the economic field. With Fronesis no. 54-55 we want to move beyond a simple critique of mainstream economics and deepen the understanding of the conditions of economic knowledge. The issue introduces to a Swedish audience a number of key contemporary theorists who shed light on these questions from different perspectives.

Contents:

Kajsa Borgnäs and Anders Hylmö: Economic knowledge in transition (Download as PDF)
Anders Hylmö: Modern economics as a scientific style and discipline
Dimitris Milonakis: Lessons from the crisis
Marion Fourcade, Étienne Ollion and Yann Algan: The superiority of economists
Kajsa Borgnäs: Outside the box, or What is heterodox economics?
Lars Pålsson Syll: Tony Lawson and the critique of economic science – an introduction
Tony Lawson: The nature of heterodox economics
Josef Taalbi: Realistic economic theory?
Erik Bengtsson: The material preconditions of heterodox economics
Julie A. Nelson: Gender metaphors and economics
Linda Nyberg: Neoliberalism, politics and economics
Philip Mirowski: The political movement that dared not speak its name
Jason Read: A genealogy of homo oeconomicus
Kajsa Borgnäs: The political power of scientific economics
Daniel Hirschman and Elizabeth Popp Berman: Do economists make policy?
Peter Gerlach, Marika Lindgren Åsbrink and Ola Pettersson, interviewed by Daniel Mathisen: All else equal

The use of mathematics in physics and economics

26 Nov, 2016 at 12:31 | Posted in Economics | Comments Off on The use of mathematics in physics and economics

My idea is to examine the most well-known works of a selection of the most famous neoclassical economists in the period from 1945 to the present.


My survey of well-known works by four famous mathematical neoclassical economists (Samuelson, Arrow, Debreu, Prescott), who all won the Nobel Prize for economics, has not revealed any precise explanations or successful predictions. This supports my conjecture that the use of mathematics in mainstream (or neoclassical) economics has not produced any precise explanations or successful predictions. This, I would claim, is the main difference between neoclassical economics and physics, where both precise explanations and successful predictions have often been obtained by the use of mathematics.

Donald Gillies

What we do in life echoes in eternity

25 Nov, 2016 at 18:45 | Posted in Varia | Comments Off on What we do in life echoes in eternity


In science, courage is to follow the motto of the Enlightenment and Kant’s dictum — Sapere aude! To use your own understanding, having the courage to think for yourself and question ‘received opinion,’ authority, or orthodoxy.

In our daily lives, courage is the capacity to confront fear — when facing the powerful and mighty, not to step back, but to stand up for one’s right not to be humiliated or abused in any way by the rich and powerful.

Dignity, a better life, justice and the rule of law are things worth fighting for. Not stepping back creates courageous acts that stay in our memories and mean something — as when Rosa Parks, on December 1, 1955, in Montgomery, Alabama, refused to give up her seat to make room for a white passenger.

Courage is to do the right thing in spite of danger and fear — to keep on even when given the opportunity to turn back. Like in the great stories, the ones where people have lots of chances to turn back but don’t.

As when Sir Nicholas Winton organised the rescue of 669 children destined for Nazi concentration camps during World War II.

Or as when Ernest Shackleton, in April 1916, aboard the small boat ‘James Caird’, spent 16 days crossing 1,300 km of ocean to reach South Georgia, then trekked across the island to a whaling station, and could finally rescue the remaining men of the crew of ‘Endurance’ left on Elephant Island.
Not a single member of the expedition died.

What we do in life echoes in eternity.

1980s nostalgia (personal)

25 Nov, 2016 at 17:16 | Posted in Varia | 1 Comment

My youngest — born in 1999 — asked me the other day what kind of music her dad listened to back in the swinging 80s. So here, Linnea, is a little taste of paternal nostalgia:






Why IS-LM doesn’t capture Keynes’ approach to the economy

25 Nov, 2016 at 12:39 | Posted in Economics | 2 Comments

Suppose workers are unemployed. As a result, although willing to work even at lower wages, they are unable to buy consumption goods. As a result, firms are unable to sell those goods if they produced them. So they do not employ the workers who, as a consequence, do not have the wages to buy the consumption goods. The economy is caught in a vicious cycle of deficient demand. According to the IS/LM framework, this would lead to a fall in prices and wages, raise real balances and boost demand. But falling prices and wages might have the effect of both reducing effective demand and confidence, deepening rather than resolving the problem of unemployment.

These considerations raise serious doubts whether the IS/LM approach, despite being the standard representation, fully captures the Keynesian approach to the economy other than in name …

The appeal of the IS/LM lay not only in its formalisation of what is falsely taken to be Keynes’ specific contribution but also in compromising with a Walrasian approach to the economy.

Ben Fine and Ourania Dimakou have some further interesting references for those wanting to dwell upon the question of how much Keynes there really is in Hicks’s IS-LM model.

My own view is that IS-LM doesn’t adequately reflect the breadth and depth of Keynes’s insights on the workings of modern market economies, for the following six reasons:

1  Almost nothing in the post-General Theory writings of Keynes suggests that he considered Hicks’s IS-LM anything like a faithful rendering of his thought. In Keynes’s canonical statement of the essence of his theory — in the famous 1937 Quarterly Journal of Economics article — there is nothing to even suggest that Keynes would have thought the existence of a Keynes-Hicks-IS-LM-theory anything but pure nonsense. John Hicks, the man who invented IS-LM in his 1937 Econometrica review of Keynes’ General Theory — “Mr. Keynes and the ‘Classics’. A Suggested Interpretation” — returned to it in an article in 1980 — “IS-LM: an explanation” — in the Journal of Post Keynesian Economics. Self-critically he wrote that “the only way in which IS-LM analysis usefully survives — as anything more than a classroom gadget, to be superseded, later on, by something better — is in application to a particular kind of causal analysis, where the use of equilibrium methods, even a drastic use of equilibrium methods, is not inappropriate.” What Hicks acknowledges in 1980 is basically that his original IS-LM model ignored significant parts of Keynes’ theory. IS-LM is inherently a temporary general equilibrium model. However, much of the discussion we have in macroeconomics is about timing and the speed of relative adjustments of quantities, commodity prices and wages — on which IS-LM doesn’t have much to say.

2  IS-LM to a large extent forces the analysis into a static comparative-equilibrium setting that doesn’t in any substantial way reflect the processual nature of what takes place in historical time. To me, Keynes’s analysis is in fact inherently dynamic — at least in the sense that it was based on real historical time and not on the logical-ergodic-non-entropic time concept used in most neoclassical model building. And as Niels Bohr used to say — thinking is not the same as just being logical …

3  IS-LM reduces the interaction between real and nominal entities to a rather constrained interest-rate mechanism, which is far too simplistic for analyzing complex financialised modern market economies.

4  IS-LM gives no place for real money, but rather trivializes the role that money and finance play in modern market economies. As Hicks, commenting on his IS-LM construct, had it in 1980 — “one did not have to bother about the market for loanable funds.” From the perspective of modern monetary theory, it’s obvious that IS-LM to a large extent ignores the fact that money in modern market economies is created in the process of financing — and is not, as IS-LM depicts it, something that central banks determine.

5  IS-LM is typically set in a current-values numéraire framework that definitely downgrades the importance of expectations and uncertainty — and a fortiori gives too large a role to interest rates as ruling the roost when it comes to investment and liquidity preferences. In this regard it is actually as bad as all the modern microfounded Neo-Walrasian-New-Keynesian models, where Keynesian genuine uncertainty and expectations aren’t really modelled. Especially the two-dimensionality of Keynesian uncertainty — both a question of probability and “confidence” — has been impossible to incorporate into this framework, which basically presupposes people following the dictates of expected utility theory (high probability may mean nothing if the agent has low “confidence” in it). Reducing uncertainty to risk — implicit in most analyses building on IS-LM models — is nothing but hand-waving. According to Keynes we live in a world permeated by unmeasurable uncertainty — not quantifiable stochastic risk — which often forces us to make decisions based on anything but “rational expectations.” Keynes rather thinks that we base our expectations on the “confidence” or “weight” we put on different events and alternatives. To Keynes, expectations are a question of weighing probabilities by “degrees of belief,” beliefs that often have preciously little to do with the kind of stochastic probabilistic calculations made by the rational agents as modeled by “modern” social sciences. And often we “simply do not know.”

6  IS-LM not only ignores genuine uncertainty, but also the essentially complex and cyclical character of economies and investment activities, speculation, endogenous money, labour market conditions, and the importance of income distribution. And as Axel Leijonhufvud so eloquently notes on IS-LM economics — “one doesn’t find many inklings of the adaptive dynamics behind the explicit statics.” Most of the insights on dynamic coordination problems that made Keynes write General Theory are lost in the translation into the IS-LM framework.

Given this, it is difficult not to agree with Fine and Dimakou. The IS/LM approach doesn’t capture Keynes’ approach to the economy other than in name.

Endogenous growth theory — a crash course

24 Nov, 2016 at 17:11 | Posted in Economics | 2 Comments

 

The dunce cap of the year in Malmö politics

23 Nov, 2016 at 22:26 | Posted in Education & School | Comments Off on The dunce cap of the year in Malmö politics

MALMÖ. Complaints about the municipal preschools have more than doubled in the past two years, but this does not worry Rose-Marie Carlsson (S), chair of the preschool committee.

– An increase in complaints does not have to mean that things have got worse. Quite the contrary, she says. The 2013 reorganisation, when the preschools got their own committee and administration, also meant a new way of working with complaint handling.

– It is easier now for guardians to submit complaints; among other things, it can be done directly on the City of Malmö website, says Rose-Marie Carlsson …

Rose-Marie Carlsson argues that the increase in the number of complaints is actually something positive – it proves that complaint handling works better than it did before the 2013 reorganisation.

– One has to remember that we have gone from ten city districts, with ten different ways of handling these matters, to a single way of working. Complaint handling is part of our quality work, and we work more systematically now and follow up complaints in a completely different way than before. We have better control and oversight now. I am more concerned about the units where no complaints at all have been filed – there one may suspect inadequate routines, she says.

Markus Celander

And this frogs’ plopping and ducks’ splashing is what we are made to read in the year 2016. Good grief! You clutch your forehead. Seldom if ever has a more utterly ridiculous attempt to talk away criticism been heard.

What is ergodicity?

23 Nov, 2016 at 10:27 | Posted in Economics | 2 Comments

Why are election polls often inaccurate? Why is racism wrong? Why are your assumptions often mistaken? The answers to all these questions and to many others have a lot to do with the non-ergodicity of human ensembles. Many scientists agree that ergodicity is one of the most important concepts in statistics. So, what is it?

Suppose you are concerned with determining what the most visited parks in a city are. One idea is to take a momentary snapshot: to see how many people are at this moment in park A, how many are in park B and so on. Another idea is to look at one individual (or a few of them) and to follow them for a certain period of time, e.g. a year. Then you observe how often the individual goes to park A, how often to park B and so on.

Thus, you obtain two different results: one statistical analysis over the entire ensemble of people at a certain moment in time, and one statistical analysis for one person over a certain period of time. The first one may not be representative for a longer period of time, while the second one may not be representative for all the people.

The idea is that an ensemble is ergodic if the two types of statistics give the same result. Many ensembles, like the human populations, are not ergodic.

The importance of ergodicity becomes manifest when you think about how we all infer various things, how we draw some conclusion about something while having information about something else. For example, one goes once to a restaurant and likes the fish and next time he goes to the same restaurant and orders chicken, confident that the chicken will be good. Why is he confident? Or one observes that a newspaper has printed some inaccurate information at one point in time and infers that the newspaper is going to publish inaccurate information in the future. Why are these inferences ok, while others such as “more crimes are committed by black persons than by white persons, therefore each individual black person is not to be trusted” are not ok?

The answer is that the ensemble of articles published in a newspaper is more or less ergodic, while the ensemble of black people is not at all ergodic. If one checks how many mistakes appear in an entire newspaper in one issue, and then checks how many mistakes one news editor makes over time, one finds the two results almost identical (not exactly, but nonetheless approximately equal). However, if one takes the number of crimes committed by black people on a certain day divided by the total number of black people, and then follows one randomly picked black individual over his life, one would not find that, e.g. each month, this individual commits crimes at the same rate as the crime rate determined over the entire ensemble. Thus, one cannot use ensemble statistics to properly infer what a certain individual is and is not likely to do.

Or take an even clearer example: In an election each party gets some percentage of votes, party A gets a%, party B gets b% and so on. However, this does not mean that over the course of their lives each individual votes with party A in a% of elections, with B in b% of elections and so on …

A similar problem is faced by scientists in general when they are trying to infer some general statement from various particular experiments. When is a generalization correct and when isn’t it? The answer concerns ergodicity. If the generalization is made about an ergodic ensemble, then it has a good chance of being correct.

Vlad Tarko

Paul Samuelson once famously claimed that the “ergodic hypothesis” is essential for advancing economics from the realm of history to the realm of science. But is it really tenable to assume — as Samuelson and most other mainstream economists do — that ergodicity is essential to economics?

In this video Ole Peters shows why ergodicity is such an important concept for understanding the deep fundamental flaws of mainstream economics:

Sometimes ergodicity is mistaken for stationarity. But although all ergodic processes are stationary, they are not equivalent.

Let’s say we have a stationary process. That does not guarantee that it is also ergodic. The long-run time average of a single output function of the stationary process may not converge to the expectation of the corresponding variables — and so the long-run time average may not equal the probabilistic (expectational) average.

Say we have two coins, where coin A has a probability of 1/2 of coming up heads and coin B has a probability of 1/4 of coming up heads. We pick one of these coins with a probability of 1/2 and then toss the chosen coin over and over again. Now let H1, H2, … be either one or zero as the coin comes up heads or tails. This process is obviously stationary, but the time average — [H1 + … + Hn]/n — converges to 1/2 if coin A is chosen and to 1/4 if coin B is chosen. Both these time averages have a probability of 1/2, so their expectational average is 1/2 x 1/2 + 1/2 x 1/4 = 3/8, which obviously is not equal to 1/2 or 1/4. The time average depends on which coin you happen to choose, while the probabilistic (expectational) average is calculated for the whole “system” consisting of both coin A and coin B.
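A quick simulation confirms the arithmetic (a sketch; the coin probabilities are those of the example above):

```python
import numpy as np

rng = np.random.default_rng(3)

# Time average: pick ONE coin (A: p=1/2, B: p=1/4), then toss it many times.
p = rng.choice([0.5, 0.25])
time_avg = (rng.random(100_000) < p).mean()

# Expectational (ensemble) average: repeat the WHOLE experiment many times,
# i.e. choose a coin, toss it, record heads (1) or tails (0).
ps = rng.choice([0.5, 0.25], size=100_000)
ensemble_avg = (rng.random(100_000) < ps).mean()

print(f"time average:     {time_avg:.3f}  (1/2 or 1/4, depending on the coin)")
print(f"ensemble average: {ensemble_avg:.3f}  (converges to 3/8 = 0.375)")
```

However long you keep tossing, the time average never finds its way to 3/8: the process is stationary but not ergodic.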

In an ergodic system time is irrelevant and has no direction. Nothing changes in any significant way; at most you will see some short-lived fluctuations. An ergodic system is indifferent to its initial conditions: if you re-start it, after a little while it always falls into the same equilibrium behavior.

For example, say I gave 1,000 people one die each, had them roll their die once, added all the points rolled, and divided by 1,000. That would be a finite-sample average, approaching the ensemble average as I include more and more people.

Now say I rolled a die 1,000 times in a row, added all the points rolled and divided by 1,000. That would be a finite-time average, approaching the time average as I keep rolling that die.

One implication of ergodicity is that ensemble averages will be the same as time averages. In the first case, it is the size of the sample that eventually removes the randomness from the system. In the second case, it is the time that I’m devoting to rolling that removes randomness. But both methods give the same answer, within errors. In this sense, rolling dice is an ergodic system.
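In code, the two kinds of averages look like this (a sketch of the dice example above):

```python
import numpy as np

rng = np.random.default_rng(5)

# Ensemble average: 1,000 people each roll one die once.
ensemble_avg = rng.integers(1, 7, size=1_000).mean()

# Time average: one person rolls a single die 1,000 times.
time_avg = rng.integers(1, 7, size=1_000).mean()

print(f"ensemble average: {ensemble_avg:.2f}")  # both approach 3.5,
print(f"time average:     {time_avg:.2f}")      # so rolling dice is ergodic
```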

I say “in this sense” because if we bet on the results of rolling a die, wealth does not follow an ergodic process under typical betting rules. If I go bankrupt, I’ll stay bankrupt. So the time average of my wealth will approach zero as time passes, even though the ensemble average of my wealth may increase.
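A simulation makes the non-ergodicity of wealth vivid. The particular game below (wealth multiplied by 1.5 on heads, by 0.6 on tails) is a standard illustration in the spirit of Peters’ argument, not taken from his text: the expected multiplier per round is 1.05, so the ensemble average grows, yet almost every individual trajectory decays towards zero.

```python
import numpy as np

rng = np.random.default_rng(11)
n_players, n_rounds = 100_000, 50

# Each round, wealth is multiplied by 1.5 (heads) or 0.6 (tails).
# Expected multiplier per round: 0.5 * 1.5 + 0.5 * 0.6 = 1.05 > 1.
factors = rng.choice([1.5, 0.6], size=(n_players, n_rounds))
wealth = factors.prod(axis=1)        # each player's final wealth (start = 1)

print(f"ensemble average wealth: {wealth.mean():.2f}")      # expectation: 1.05**50 = 11.5
print(f"median player's wealth:  {np.median(wealth):.2f}")  # ~0.07: the typical player loses
# Time-average growth factor: exp(0.5*ln(1.5) + 0.5*ln(0.6)) = sqrt(0.9) < 1,
# so nearly every individual path shrinks even though the ensemble mean grows.
```

The ensemble average is propped up by a vanishingly small number of extremely lucky trajectories, which is exactly why it is the wrong quantity for an individual to optimise.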

A precondition for ergodicity is stationarity, so there can be no growth in an ergodic system. Ergodic systems are zero-sum games: things slosh around from here to there and back, but nothing is ever added, invented, created or lost. No branching occurs in an ergodic system, no decision has any consequences because sooner or later we’ll end up in the same situation again and can reconsider. The key is that most systems of interest to us, including finance, are non-ergodic.

Ole Peters
