Why game theory will be nothing but a footnote in the history of social science

30 Sep, 2017 at 12:35 | Posted in Economics | Comments Off on Why game theory will be nothing but a footnote in the history of social science

Nash equilibrium has, since it was introduced back in the 1950s, come to be the standard solution concept used by game theorists. The justification for its use has mainly rested on dubious and contentious assumptions such as ‘common knowledge’ and individuals identified exclusively as instrumentally rational. And as if that were not enough, to ‘save’ the Holy Equilibrium Grail one has had to make the further, ridiculously unreal assumption that those individuals hold ‘consistently aligned beliefs’ — effectively treating different individuals as incarnations of the microfoundationalist ‘representative agent.’

In the beginning — in the 1950s and 1960s — hopes were high that game theory would enhance our ability to understand and explain the behaviour of interacting actors in non-parametric settings. And this is where we have ended up! A sad story indeed, showing the limits of methodological individualism and instrumental rationality.

Why not give up on the Nash concept altogether? Why not give up the vain dream of trying to understand social interaction by reducing it to something that can be analyzed within a grotesquely unreal model of instrumentally interacting identical robot imitations?

We believe that a variety of contributory factors can be identified …

It is possible that the strange philosophical moorings of neoclassical economics and game theory have played a part. They are strange in at least two respects. The first is a kind of amnesia or lobotomy which the discipline seems to have suffered regarding most things philosophical during the postwar period … The second is the utilitarian historical roots of modern economics … Thirdly, the sociology of the discipline may provide further clues … All academics have fought their corner in battles over resources and they always use the special qualities of their discipline as ammunition in one way or another. Thus one might explain in functionalist terms the mystifying attachments of economics and game theory to Nash.

Half a century ago there were widespread hopes that game theory would provide a unified theory of all social science. Today it is obvious that those hopes did not materialize. This ought to come as no surprise. Reductionist and atomistic models of social interaction — such as those mainstream neoclassical economics is founded on — can never deliver good building blocks for a realist and relevant social science.

Rational expectations — the triumph of ideology over science

29 Sep, 2017 at 18:53 | Posted in Economics | 2 Comments

Research shows not only that individuals sometimes act differently than standard economic theories predict, but that they do so regularly, systematically, and in ways that can be understood and interpreted through alternative hypotheses, competing with those utilised by orthodox economists.

To most market participants — and, indeed, ordinary observers — this does not seem like big news … In fact, this irrationality is no news to the economics profession either. John Maynard Keynes long ago described the stock market as based not on rational individuals struggling to uncover market fundamentals, but as a beauty contest in which the winner is the one who guesses best what the judges will say …

Adam Smith’s invisible hand — the idea that free markets lead to efficiency as if guided by unseen forces — is invisible, at least in part, because it is not there …

For more than 20 years, economists were enthralled by so-called “rational expectations” models which assumed that all participants have the same (if not perfect) information and act perfectly rationally, that markets are perfectly efficient, that unemployment never exists (except when caused by greedy unions or government minimum wages), and where there is never any credit rationing.

That such models prevailed, especially in America’s graduate schools, despite evidence to the contrary, bears testimony to a triumph of ideology over science. Unfortunately, students of these graduate programmes now act as policymakers in many countries, and are trying to implement programmes based on the ideas that have come to be called market fundamentalism … Good science recognises its limitations, but the prophets of rational expectations have usually shown no such modesty.

Joseph Stiglitz

 
Those who want to build macroeconomics on microfoundations usually maintain that the only robust policies and institutions are those based on rational expectations and representative actors. As yours truly has tried to show in On the use and misuse of theories and models in economics, there is really no support for this conviction at all. On the contrary. If we want to have anything of interest to say about real economies, financial crises and the decisions and choices real people make, it is high time to place macroeconomic models built on representative actors and rational expectations microfoundations in the dustbin of pseudo-science.

For if this microfounded macroeconomics has nothing to say about the real world and the economic problems out there, why should we care about it? The final court of appeal for macroeconomic models is the real world, and as long as no convincing justification is put forward for how the inferential bridging is de facto made, macroeconomic model-building is little more than hand-waving that gives us rather little warrant for making inductive inferences from models to real-world target systems. If substantive questions about the real world are being posed, it is the formalistic-mathematical representations utilized to analyze them that have to match reality, not the other way around.

The real macroeconomic challenge is to accept uncertainty and still try to explain why economic transactions take place — instead of simply conjuring the problem away by assuming rational expectations and treating uncertainty as if it were possible to reduce it to stochastic risk. That is scientific cheating. And it has been going on for too long now.

Like Tears in Rain

29 Sep, 2017 at 15:34 | Posted in Varia | Comments Off on Like Tears in Rain

 

Hicks on neoclassical ‘uncertainty laundering’

28 Sep, 2017 at 09:56 | Posted in Economics | Comments Off on Hicks on neoclassical ‘uncertainty laundering’

To understand real world ‘non-routine’ decisions and unforeseeable changes in behaviour, ergodic probability distributions are of no avail. In a world full of genuine uncertainty — where real historical time rules the roost — the probabilities that ruled the past are not necessarily those that will rule the future.

When we cannot accept that the observations, along the time-series available to us, are independent … we have, in strict logic, no more than one observation, all of the separate items having to be taken together. For the analysis of that the probability calculus is useless; it does not apply … I am bold enough to conclude, from these considerations that the usefulness of ‘statistical’ or ‘stochastic’ methods in economics is a good deal less than is now conventionally supposed … We should always ask ourselves, before we apply them, whether they are appropriate to the problem in hand. Very often they are not … The probability calculus is no excuse for forgetfulness.

John Hicks, Causality in Economics, 1979:121
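
A small simulation can make Hicks's point concrete. This is a sketch of my own in Python, not anything Hicks wrote: for a non-ergodic process such as a random walk, the time average of one historical realisation tells us next to nothing about the ensemble of possible futures, whereas for an ergodic benchmark it does.

```python
import numpy as np

rng = np.random.default_rng(42)

# Ergodic benchmark: i.i.d. noise. The time average of one long realisation
# converges to the ensemble mean, so past frequencies are informative.
iid = rng.normal(0, 1, 10_000)
print("i.i.d. time average:", round(iid.mean(), 3))

# Non-ergodic case: a random walk. Each realisation wanders off on its own,
# so the time average of one historical path says little about other paths
# or about the future of this one.
paths = np.cumsum(rng.normal(0, 1, (5, 10_000)), axis=1)
print("random-walk time averages, one per path:", paths.mean(axis=1).round(1))
print("cross-path (ensemble) average at t = 10,000:", round(paths[:, -1].mean(), 1))
```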

Time to abandon statistical significance

27 Sep, 2017 at 10:55 | Posted in Statistics & Econometrics | 6 Comments

We recommend dropping the NHST [null hypothesis significance testing] paradigm — and the p-value thresholds associated with it — as the default statistical paradigm for research, publication, and discovery in the biomedical and social sciences. Specifically, rather than allowing statistical significance as determined by p < 0.05 (or some other statistical threshold) to serve as a lexicographic decision rule in scientific publication and statistical decision making more broadly as per the status quo, we propose that the p-value be demoted from its threshold screening role and instead, treated continuously, be considered along with the neglected factors [such factors as prior and related evidence, plausibility of mechanism, study design and data quality, real world costs and benefits, novelty of finding, and other factors that vary by research domain] as just one among many pieces of evidence.

We make this recommendation for three broad reasons. First, in the biomedical and social sciences, the sharp point null hypothesis of zero effect and zero systematic error used in the overwhelming majority of applications is generally not of interest because it is generally implausible. Second, the standard use of NHST — to take the rejection of this straw man sharp point null hypothesis as positive or even definitive evidence in favor of some preferred alternative hypothesis — is a logical fallacy that routinely results in erroneous scientific reasoning even by experienced scientists and statisticians. Third, p-value and other statistical thresholds encourage researchers to study and report single comparisons rather than focusing on the totality of their data and results.

Andrew Gelman et al.

As shown over and over again when significance tests are applied, people have a tendency to read ‘not disconfirmed’ as ‘probably confirmed.’ Standard scientific methodology tells us that when there is only, say, a 10 per cent probability that pure sampling error could account for the observed difference between the data and the null hypothesis, it would be more ‘reasonable’ to conclude that we have a case of disconfirmation. Especially if we perform many independent tests of our hypothesis and they all give about the same 10 per cent result as our reported one, I guess most researchers would count the hypothesis as even more disconfirmed.

We should never forget that the underlying parameters we use when performing significance tests are model constructions. Our p-values mean nothing if the model is wrong. And most importantly — statistical significance tests DO NOT validate models!

In journal articles a typical regression equation will have an intercept and several explanatory variables. The regression output will usually include an F-test, with p − 1 degrees of freedom in the numerator and n − p in the denominator. The null hypothesis will not be stated. The missing null hypothesis is that all the coefficients vanish, except the intercept.

If F is significant, that is often thought to validate the model. Mistake. The F-test takes the model as given. Significance only means this: if the model is right and the coefficients are 0, it is very unlikely to get such a big F-statistic. Logically, there are three possibilities on the table:
i) An unlikely event occurred.
ii) Or the model is right and some of the coefficients differ from 0.
iii) Or the model is wrong.
So?
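
A minimal simulation of my own (not an example from the quoted text) shows how easily possibility (iii) bites: data generated by a process that is flatly wrong for the regression we fit can still deliver a 'highly significant' F-statistic.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(0, 3, n)

# The true data-generating process is quadratic, so the linear regression
# fitted below is simply the wrong model.
y = x**2 + rng.normal(0, 0.5, n)

fit = sm.OLS(y, sm.add_constant(x)).fit()
print(f"F = {fit.fvalue:.0f}, p = {fit.f_pvalue:.1e}")

# The p-value is essentially zero, but that only rules out possibility (i)
# *given* that the model is right. Possibility (iii), a wrong model, is what
# is actually true here, and no F-test will tell us so.
```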

Neoliberal ‘ethics’

26 Sep, 2017 at 13:20 | Posted in Politics & Society | 3 Comments

As we all know, neoliberalism is nothing but a self-serving con endorsing pernicious moral cynicism. But it’s still sickening to read its gobsmacking trash, maintaining that unregulated capitalism is a ‘superlatively moral system’:

The rich man may feast on caviar and champagne, while the poor woman starves at his gate. And she may not even take the crumbs from his table, if that would deprive him of his pleasure in feeding them to his birds.
 

David Gauthier Morals by Agreement

Now, compare that unashamed neoliberal apologetics with what two truly great economists and liberals — John Maynard Keynes and Robert Solow — have to say:

The outstanding faults of the economic society in which we live are its failure to provide for full employment and its arbitrary and inequitable distribution of wealth and incomes … I believe that there is social and psychological justification for significant inequalities of income and wealth, but not for such large disparities as exist to-day.

John Maynard Keynes General Theory (1936)

Who could be against allowing people their ‘just deserts?’ But there is that matter of what is ‘just.’ Most serious ethical thinkers distinguish between deservingness and happenstance. Deservingness has to be rigorously earned. You do not ‘deserve’ that part of your income that comes from your parents’ wealth or connections or, for that matter, their DNA. You may be born just plain gorgeous or smart or tall, and those characteristics add to the market value of your marginal product, but not to your deserts. It may be impractical to separate effort from happenstance numerically, but that is no reason to confound them, especially when you are thinking about taxation and redistribution. That is why we want to temper the wind to the shorn lamb, and let it blow on the sable coat.

Robert Solow Journal of Economic Perspectives (2014)

Public debt — an economic necessity

26 Sep, 2017 at 09:06 | Posted in Economics | 6 Comments

We are not going to get out of the economic doldrums as long as we continue to be obsessed with the unreasoned ideological goal of reducing the so-called deficit. The “deficit” is not an economic sin but an economic necessity. […]

The administration is trying to bring the Titanic into harbor with a canoe paddle, while Congress is arguing over whether to use an oar or a paddle, and the Perots and budget balancers seem eager to lash the helm hard-a-starboard towards the iceberg. Some of the argument seems to be over which foot is the better one to shoot ourselves in. We have the resources in terms of idle manpower and idle plants to do so much, while the preachers of austerity, most of whom are in little danger of themselves suffering any serious consequences, keep telling us to tighten our belts and refrain from using the resources that lie idle all around us.

Alexander Hamilton once wrote “A national debt, if it be not excessive, would be for us a national treasure.” William Jennings Bryan used to declaim, “You shall not crucify mankind upon a cross of gold.” Today’s cross is not made of gold, but is concocted of a web of obfuscatory financial rectitude from which human values have been expunged.

William Vickrey

Right-wing extremist party in German parliament for the first time since WWII

25 Sep, 2017 at 13:57 | Posted in Politics & Society | Comments Off on Right-wing extremist party in German parliament for the first time since WWII

 

Trump’s ‘alternative facts’ exported to German politics …

Seven sins of economics

23 Sep, 2017 at 11:44 | Posted in Economics | 4 Comments

There has always been some level of scepticism about the ability of economists to offer meaningful predictions and prognoses about economic and social phenomena. That scepticism has heightened in the wake of the global financial crisis, leading to what is arguably the biggest credibility crisis the discipline has faced in the modern era.

Some of the criticisms against economists are misdirected. But the major thrust of the criticisms does have bite.

There are seven key failings, or the ‘seven sins’, as I am going to call them, that have led economists to their current predicament. These include sins of commission as well as sins of omission.

Sin 1: Alice in Wonderland assumptions

The problem with economists is not that they make assumptions. After all, any theory or model will have to rely on simplifying assumptions … But when critical assumptions are made just to circumvent well-identified complexities in the quest to build elegant theories, such theories will simply end up being elegant fantasies.

Sin 2: Abuse of modelling

What compounds the sin of wild assumptions is the sin of careless modelling, and then selling that model as if it were a true depiction of an economy or society …

Sin 3: Intellectual capture

Several post-crisis assessments of the economy and of economics have pointed to intellectual capture as a key reason the profession, as a whole, failed to sound alarm bells about problems in the global economy, and failed to highlight flaws in the modern economic architecture …

Sin 4: The science obsession

The excessive obsession in the discipline to identify itself as science has been costly. This has led to a dangerous quest for standardization in the profession, leading many economists to mistake a model of the economy for ‘the model’ of the economy …

The science obsession has diminished the diversity of the profession, and arguably allowed complacency to take root in the run-up to the global financial crisis …

Sin 5: Perpetuating the myth of ‘the textbook’ and Econ 101

The quest for standardization has also led to an astonishing level of uniformity in the manner in which economists are trained, and in the manner in which economists train others. Central to this exercise are textbooks that help teach the lessons of ‘Econ 101’—lessons as disconnected from reality as they are from the frontiers of economic research …

Sin 6: Ignoring society

What makes Econ 101 and a lot of mainstream economics particularly limiting is its neglect of the role of culture and social norms in determining economic outcomes even though classical economists such as Adam Smith and Karl Marx took care to emphasize how social norms and social interactions shape economic outcomes …

Economists typically don’t engage with other social sciences, even though insights from those disciplines have a direct bearing on the subjects of economic enquiry …

Sin 7: Ignoring history

One way in which economists could have compensated for the lack of engagement with other social sciences is by studying economic history. After all, studying economic history carefully can help us understand the social and institutional contexts in which particular economic models worked, or did not work …

But economic history has been relegated to the margins over the past several years, and many graduate students remain unacquainted with the subject still.

Pramit Bhattacharya

Game theory and the shaping of neoliberal capitalism

21 Sep, 2017 at 17:00 | Posted in Economics | 9 Comments

Neoliberal subjectivity arises from the intricate pedagogy of game theory that comes to the fore in the Prisoner’s Dilemma game and is interchangeable with contemporary paradigmatic instrumental rationality. Rational choice is promoted as an exhaustive science of decision making, but only by smuggling in a characteristic confusion suggesting that everything of value to agents can be reflected in their appraisal of existential worth even though this is patently not the case in life viewed as a ‘fixed game.’ Without a critical and scrupulous pedagogy that carefully identifies as optional the assumptions necessary to operationalize strategic rationality, a new neoliberal understanding of capitalism will dominate the worldview of the student of game theory and inhabitant of neoliberal institutions.

When criticising game theory you often get the rather uninformative and vacuous answer that we all have to remember that game theory — as is mainstream neoclassical theory at large — is nothing but ‘as-if-theory’ built on ‘as-if-rationality.’ As Ariel Rubinstein has it, however, this only shows that “the phrase ‘as if’ is a way to avoid taking responsibility for the strong assumptions upon which economic models are founded” …
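
For readers who want to see the machinery being criticised here, a minimal sketch of my own (not taken from the quoted book) shows how the Nash solution concept works in the Prisoner's Dilemma: mutual defection is the only equilibrium, even though both players would be better off cooperating.

```python
# Prisoner's Dilemma payoffs: (row player, column player)
# Strategies: 0 = cooperate, 1 = defect
payoffs = {
    (0, 0): (3, 3),  # both cooperate
    (0, 1): (0, 5),  # row cooperates, column defects
    (1, 0): (5, 0),  # row defects, column cooperates
    (1, 1): (1, 1),  # both defect
}

def is_nash(profile):
    """A profile is a Nash equilibrium if no player gains by deviating alone."""
    r, c = profile
    row_ok = payoffs[(r, c)][0] >= payoffs[(1 - r, c)][0]
    col_ok = payoffs[(r, c)][1] >= payoffs[(r, 1 - c)][1]
    return row_ok and col_ok

equilibria = [p for p in payoffs if is_nash(p)]
print(equilibria)  # [(1, 1)]: mutual defection is the only Nash equilibrium
```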

Missing the point — the quantitative ambitions of DSGE models

19 Sep, 2017 at 17:00 | Posted in Economics | Comments Off on Missing the point — the quantitative ambitions of DSGE models

A typical modern approach to writing a paper in DSGE macroeconomics is as follows:

• to establish “stylized facts” about the quantitative interrelationships of certain macroeconomic variables (e.g. moments of the data such as variances, autocorrelations, covariances, …) that have hitherto not been jointly explained;

• to write down a DSGE model of an economy subject to a defined set of shocks that aims to capture the described interrelationships; and

• to show that the model can “replicate” or “match” the chosen moments when it is fed with stochastic shocks generated by the assumed shock process …

However, the test imposed by matching DSGE models to the data is problematic in at least three respects:

First, the set of moments chosen to evaluate the model is largely arbitrary …

Second, for a given set of moments, there is no well-defined statistic to measure the goodness of fit of a DSGE model or to establish what constitutes an improvement in such a framework …

Third, the evaluation is complicated by the fact that, at some level, all economic models are rejected by the data … In addition, DSGE models frequently impose a number of restrictions that are in direct conflict with micro evidence. If a model has been rejected along some dimensions, then a statistic that measures the goodness-of-fit along other dimensions is meaningless …

Focusing on the quantitative fit of models also creates powerful incentives for researchers (i) to introduce elements that bear little resemblance to reality for the sake of achieving a better fit (ii) to introduce opaque elements that provide the researcher with free (or almost free) parameters and (iii) to introduce elements that improve the fit for the reported moments but deteriorate the fit along other unreported dimensions.

Albert Einstein observed that “not everything that counts can be counted, and not everything that can be counted counts.” DSGE models make it easy to offer a wealth of numerical results by following a well-defined set of methods (that requires one or two years of investment in graduate school, but is relatively straightforward to apply thereafter). There is a risk for researchers to focus too much on numerical predictions of questionable reliability and relevance that absorb a lot of time and effort rather than focusing on deeper conceptual questions that are of higher relevance for society.

Anton Korinek
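
To make the moment-matching exercise Korinek describes concrete, here is a minimal sketch of my own (not Korinek's): two quite different 'models' can reproduce the same small set of moments, which is exactly why matching them proves so little.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_ar1(rho, shocks):
    """AR(1) process y_t = rho * y_{t-1} + e_t driven by a given shock series."""
    y = np.zeros(len(shocks))
    for t in range(1, len(shocks)):
        y[t] = rho * y[t - 1] + shocks[t]
    return y

def moments(y):
    return {"variance": round(y.var(), 2),
            "autocorr": round(np.corrcoef(y[:-1], y[1:])[0, 1], 2)}

n = 100_000
# "Data": an AR(1) with Gaussian shocks.
data = simulate_ar1(0.9, rng.normal(size=n))

# "Model": same persistence but heavy-tailed t-distributed shocks, rescaled
# to unit variance. A very different world, yet it matches the chosen moments.
t_shocks = rng.standard_t(df=5, size=n) / np.sqrt(5 / 3)
model = simulate_ar1(0.9, t_shocks)

print("data  moments:", moments(data))
print("model moments:", moments(model))
# Both match on variance (about 5.3) and first-order autocorrelation (about
# 0.9) despite implying very different tail risks.
```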

A great essay, showing that ‘rigorous’ and ‘precise’ DSGE models cannot be considered anything other than unsubstantiated conjectures as long as they are not supported by evidence from outside the theory or model. To my knowledge, no decisive empirical evidence of that kind has ever been presented.

No matter how precise and rigorous the analysis, and no matter how hard one tries to cast the argument in modern mathematical form, such models do not push economic science forward one single millimetre if they do not stand the acid test of relevance to the target. No matter how clear, precise, rigorous or certain the inferences delivered inside these models are, they say nothing about real-world economies.

Proving things ‘rigorously’ in DSGE models is at most a starting point for doing an interesting and relevant economic analysis. Forgetting to supply export warrants to the real world makes the analysis an empty exercise in formalism without real scientific value.

Mainstream economists think there is a gain from the DSGE style of modelling in its capacity to offer some kind of structure around which to organise discussions. To me, that sounds more like a religious theoretical-methodological dogma, where one paradigm rules in divine hegemony. That’s not progress. That’s the death of economics as a science.

As Korinek argues, using DSGE models “creates a bias towards models that have a well-behaved ergodic steady state.” Since we know that most real-world processes do not follow an ergodic distribution, this is, to say the least, problematic. To understand real-world ‘non-routine’ decisions and unforeseeable changes in behaviour, stationary probability distributions are of no avail. In a world full of genuine uncertainty — where real historical time rules the roost — the probabilities that ruled the past are not those that will rule the future. Imposing invalid probabilistic assumptions on the data makes all DSGE models statistically misspecified.

Advocates of DSGE modelling want to have deductively automated answers to fundamental causal questions. But to apply ‘thin’ methods we have to have ‘thick’ background knowledge of what’s going on in the real world, and not in idealized models. Conclusions can only be as certain as their premises — and that also applies to the quest for causality and forecasting predictability in DSGE models.

If substantive questions about the real world are being posed, it is the formalistic-mathematical representations utilized that have to match reality, not the other way around. The modelling convention used when constructing DSGE models makes it impossible to fully incorporate things that we know are of paramount importance for understanding modern economies — such as income and wealth inequality, asymmetrical power relations and information, liquidity preference, just to mention a few.

Given all these fundamental problems for the use of these models and their underlying methodology, it is beyond understanding how the DSGE approach has come to be the standard approach in ‘modern’ macroeconomics. DSGE models are based on assumptions profoundly at odds with what we know about real-world economies. That also makes them little more than overconfident story-telling devoid of real scientific value. Macroeconomics would do much better with more substantive diversity and plurality.

Dangers of ‘running with the mainstream pack’

18 Sep, 2017 at 14:43 | Posted in Economics | 3 Comments


An absolutely fabulous speech. And the textbook by Carlin and Soskice, Macroeconomics: Institutions, Instability, and the Financial System, which Dullien mentions at the beginning of his speech, is really a very good example of the problems you run into if you want to be ‘pluralist’ within the mainstream pack.

Carlin and Soskice explicitly adopt a ‘New Keynesian’ framework, adding price rigidities and a financial system to the usual neoclassical macroeconomic set-up. But although I find things like the latter amendment an improvement, their methodological stance is definitely more difficult to swallow, especially their non-problematized acceptance of the need for macroeconomic microfoundations.

Some months ago, another sorta-kinda ‘New Keynesian’, Paul Krugman, argued on his blog that the problem with the academic profession is that some macroeconomists aren’t “bothered to actually figure out” how the New Keynesian model with its Euler conditions — “based on the assumption that people have perfect access to capital markets, so that they can borrow and lend at the same rate” — really works. According to Krugman, this shouldn’t be hard at all — “at least it shouldn’t be for anyone with a graduate training in economics.”

Carlin & Soskice seem to share Krugman’s attitude. From the first page of the book, they start to elaborate their preferred 3-equations ‘New Keynesian’ macromodel. And after twenty-two pages, they have already come to specifying the demand side with the help of the Permanent Income Hypothesis and its Euler equations.

But if people — not the representative agent — at least sometimes cannot help being off their labour supply curve, as in the real world, then what good are the hordes of Euler equations that you find ad nauseam in these ‘New Keynesian’ macromodels? Yours truly’s doubt regarding the ‘New Keynesian’ modellers’ obsession with Euler equations is basically that, as with so many other assumptions in ‘modern’ macroeconomics, the Euler equations do not fit reality.

All empirical sciences use simplifying or unrealistic assumptions in their modelling activities. That is not the issue – as long as the assumptions made are not unrealistic in the wrong way or for the wrong reasons.

If we cannot show that the mechanisms or causes we isolate and handle in our models are stable, in the sense that when we export them from our models to our target systems they do not change from one situation to another, then they only hold under ceteris paribus conditions and are a fortiori of limited value for our understanding, explanation and prediction of our real-world target system.

No matter how many convoluted refinements of concepts are made in the model, if the “successive approximations” do not result in models similar to reality in the appropriate respects (such as structure, isomorphism, etc.), the surrogate system becomes a substitute system that does not bridge to the world but rather misses its target.

From this methodological perspective, yours truly has to conclude that Carlin’s and Soskice’s microfounded macroeconomic model is a rather unimpressive attempt at legitimizing the use of fictitious idealizations — such as Euler equations — for reasons that have more to do with model tractability than with a genuine interest in understanding and explaining features of real economies.

Running with the mainstream pack is not a good idea if you want to develop realist and relevant economics.

Putting predictions to the test

17 Sep, 2017 at 11:58 | Posted in Economics | 1 Comment

It is the somewhat gratifying lesson of Philip Tetlock’s new book that people who make prediction their business — people who appear as experts on television, get quoted in newspaper articles, advise governments and businesses, and participate in punditry roundtables — are no better than the rest of us. When they’re wrong, they’re rarely held accountable, and they rarely admit it, either. They insist that they were just off on timing, or blindsided by an improbable event, or almost right, or wrong for the right reasons. They have the same repertoire of self-justifications that everyone has, and are no more inclined than anyone else to revise their beliefs about the way the world works, or ought to work, just because they made a mistake. No one is paying you for your gratuitous opinions about other people, but the experts are being paid, and Tetlock claims that the better known and more frequently quoted they are, the less reliable their guesses about the future are likely to be. The accuracy of an expert’s predictions actually has an inverse relationship to his or her self-confidence, renown, and, beyond a certain point, depth of knowledge. People who follow current events by reading the papers and newsmagazines regularly can guess what is likely to happen about as accurately as the specialists whom the papers quote. Our system of expertise is completely inside out: it rewards bad judgments over good ones.

The New Yorker

Mainstream neoclassical economists often maintain — usually referring to the methodological instrumentalism of Milton Friedman — that it doesn’t matter if the assumptions of the models they use are realistic or not. What matters is whether the predictions are right or not. But if so, then the only conclusion we can draw is — throw away the garbage! Because, oh dear, oh dear, how wrong they have been!

When Simon Potter a couple of years ago analyzed the forecasts that the Federal Reserve Bank of New York made for the development of real GDP and unemployment over the years 2007-2010, it turned out that the predictions were off by 5.9 and 4.4 percentage points respectively — the latter equivalent to more than 6 million unemployed workers:

Economic forecasters never expect to predict precisely. One way of measuring the accuracy of their forecasts is against previous forecast errors. When judged by forecast error performance metrics from the macroeconomic quiescent period that many economists have labeled the Great Moderation, the New York Fed research staff forecasts, as well as most private sector forecasts for real activity before the Great Recession, look unusually far off the mark …

Using a similar approach to Reifschneider and Tulip but including forecast errors for 2007, one would have expected that 70 percent of the time the unemployment rate in the fourth quarter of 2009 should have been within 0.7 percentage point of a forecast made in April 2008. The actual forecast error was 4.4 percentage points, equivalent to an unexpected increase of over 6 million in the number of unemployed workers. Under the erroneous assumption that the 70 percent projection error band was based on a normal distribution, this would have been a 6 standard deviation error, a very unlikely occurrence indeed.

In other words — the “rigorous” and “precise” macroeconomic mathematical-statistical forecasting models were wrong. And the rest of us have to pay.
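
A back-of-the-envelope check of the numbers in the quote (my own arithmetic, using only the figures reported above): reading a 70 per cent band of plus or minus 0.7 percentage points as a normal distribution implies a standard deviation of about 0.68, so the realised 4.4 point error sits roughly six and a half standard deviations out, an event with essentially zero probability under the assumed distribution.

```python
from scipy import stats

band_halfwidth = 0.7   # 70 per cent projection band: +/- 0.7 percentage points
actual_error = 4.4     # realised unemployment forecast error, percentage points

# Under normality a central 70 per cent band spans +/- z_0.85 standard deviations.
z = stats.norm.ppf(0.85)             # about 1.04
sigma = band_halfwidth / z           # implied standard deviation, about 0.68

n_sigmas = actual_error / sigma      # about 6.5 standard deviations
tail_prob = stats.norm.sf(n_sigmas)  # chance of an error at least this large

print(f"implied sigma: {sigma:.2f} percentage points")
print(f"error expressed in standard deviations: {n_sigmas:.1f}")
print(f"one-sided tail probability: {tail_prob:.1e}")
```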

Potter is not the only one who lately has criticized the forecasting business. John Mingers comes to essentially the same conclusion when scrutinizing it from a somewhat more theoretical angle:

It is clearly the case that experienced modellers could easily come up with significantly different models based on the same set of data thus undermining claims to researcher-independent objectivity. This has been demonstrated empirically by Magnus and Morgan (1999) who conducted an experiment in which an apprentice had to try to replicate the analysis of a dataset that might have been carried out by three different experts (Leamer, Sims, and Hendry) following their published guidance. In all cases the results were different from each other, and different from that which would have been produced by the expert, thus demonstrating the importance of tacit knowledge in statistical analysis.

The empirical and theoretical evidence is clear. Predictions and forecasts are inherently difficult to make in a socio-economic domain where genuine uncertainty and unknown unknowns often rule the roost. The real processes that underlie the time series that economists use to make their predictions and forecasts do not conform with the assumptions made in the applied statistical and econometric models. A fortiori, much less is predictable than is standardly — and uncritically — assumed. The forecasting models fail to a large extent because the kind of uncertainty that faces humans and societies makes the models, strictly speaking, inapplicable. The future is inherently unknowable — and using statistics, econometrics, decision theory or game theory does not in the least overcome this ontological fact. The economic future is not something that we normally can predict in advance. Better, then, to accept that as a rule “we simply do not know.”

In New York State, Section 899 of the Code of Criminal Procedure provides that persons “Pretending to Forecast the Future” shall be considered disorderly under subdivision 3, Section 901 of the Code and liable to a fine of $250 and/or six months in prison. Although the law does not apply to “ecclesiastical bodies acting in good faith and without fees,” I’m not sure where that leaves macroeconomic model-builders and other forecasters.

The accuracy of the predictions that experts make certainly seems to have an inverse relationship to their self-confidence. Being cocky and wrong is a lethal combination — and economists are often wrong and hardly known for being particularly modest people …

The growth of the Internet will slow drastically, as the flaw in “Metcalfe’s law”–which states that the number of potential connections in a network is proportional to the square of the number of participants–becomes apparent: most people have nothing to say to each other! By 2005 or so, it will become clear that the Internet’s impact on the economy has been no greater than the fax machine’s.

Paul Krugman

Sign Your Name

16 Sep, 2017 at 22:09 | Posted in Varia | Comments Off on Sign Your Name

 

Nights in White Satin

16 Sep, 2017 at 19:22 | Posted in Varia | Comments Off on Nights in White Satin

 
