In the realm of science, it ought to count for little or nothing simply to make claims about the model while losing sight of reality.
There is a difference between having evidence for some hypothesis and having evidence for the hypothesis relevant for a given purpose. The difference is important because scientific methods tend to be good at addressing hypotheses of a certain kind and not others: scientific methods come with particular applications built into them … The advantage of mathematical modelling is that its method of deriving a result is that of mathematical proof: the conclusion is guaranteed to hold given the assumptions. However, the evidence generated in this way is valid only in abstract model worlds while we would like to evaluate hypotheses about what happens in economies in the real world … The upshot is that valid evidence does not seem to be enough. What we also need is to evaluate the relevance of the evidence in the context of a given purpose.
Even if some people think that there has been a kind of empirical revolution in economics lately, I would still argue that empirical evidence only plays a minor role in economic theory, where models largely function as a substitute for empirical evidence. The one-sided, almost religious, insistence on axiomatic-deductivist modeling as the only scientific activity worthy of pursuing in economics still rules the roost.
But mainstream economists’ belief that having theories and models ‘consistent with’ data will somehow make them a success story is nothing but an empty hope. Mere consistency with the facts is never sufficient to prove models or theories true. The fact that the US presently has a president named Donald Trump is ‘consistent with’ the US being a democracy — but that doesn’t in any way whatsoever explain why a witless clown came to be elected to a post previously held by people like George Washington and Thomas Jefferson.
Theories and models are always ‘under-determined’ by facts. So a good way to help us choose between different ‘consistent’ theories and models is to actually look at what happens out there in the economy and why it happens.
History and good ordinary social science can also help us. And if we’re not too busy doing the things we do, but once in a while take a break and do some methodological reflection on why we do what we do — well, that takes us a long way too.
Gödel’s incompleteness theorems raise important questions about the foundations of mathematics.
The most important concerns the question of how to select the specific system of axioms that mathematics is supposed to be founded on. Gödel’s theorems irrevocably show that no matter what system is chosen, there will always be truths it cannot prove — proving them requires adding further axioms.
This, of course, ought to be of paramount interest for those mainstream economists who still adhere to the dream of constructing a deductive-axiomatic economics with analytic truths that do not require empirical verification. Since Gödel showed that any sufficiently rich and consistent axiomatic system is incomplete — containing true statements that cannot be proved within it — any such deductive-axiomatic economics will always contain undecidable statements. If the dream of a complete and consistent axiomatic foundation could not be fulfilled even for mathematics, it’s totally incomprehensible that some people still think it could be achieved for economics.
The master-economist must possess a rare combination of gifts …. He must be mathematician, historian, statesman, philosopher—in some degree. He must understand symbols and speak in words. He must contemplate the particular, in terms of the general, and touch abstract and concrete in the same flight of thought. He must study the present in the light of the past for the purposes of the future. No part of man’s nature or his institutions must be entirely outside his regard. He must be purposeful and disinterested in a simultaneous mood, as aloof and incorruptible as an artist, yet sometimes as near to earth as a politician.
Economics students today are complaining more and more about the way economics is taught. The lack of fundamental diversity — not just path-dependent elaborations of the mainstream canon — and the narrowing of the curriculum dissatisfy economics students all over the world. The frustrating lack of real-world relevance has led many of them to demand that the discipline start to develop a more open and pluralistic theoretical and methodological attitude.
There are many things about the way economics is taught today that worry yours truly. Today’s students are force-fed with mainstream neoclassical theories and models. That lack of pluralism is cause for serious concern.
However, I find the most salient deficiency in ‘modern’ economics education to be the total absence of courses in the history of economic thought and economic methodology. That is deeply worrying, since a science that doesn’t self-reflect and ask important methodological and science-theoretical questions about its own activity is a science in dire straits.
Methodology is about how we do economics, how we evaluate theories, models and arguments. To know and think about methodology is important for every economist. Without methodological awareness it’s really impossible to understand what you are doing and why you’re doing it. Dismissing methodology is dismissing a necessary and vital part of science.
For someone who has spent forty years in economics academia, it’s hopeful to see all these young economics students who want to see a real change in economics and the way it’s taught. Never give up. Never give in!
Little in the discipline has changed in the wake of the crisis. Mirowski thinks that this is at least in part a result of the impotence of the loyal opposition — those economists such as Joseph Stiglitz or Paul Krugman who attempt to oppose the more viciously neoliberal articulations of economic theory from within the camp of neoclassical economics. Though Krugman and Stiglitz have attacked concepts like the efficient markets hypothesis … Mirowski argues that their attempt to do so while retaining the basic theoretical architecture of neoclassicism has rendered them doubly ineffective.
First, their adoption of the battery of assumptions that accompany most neoclassical theorizing — about representative agents, treating information like any other commodity, and so on — make it nearly impossible to conclusively rebut arguments like the efficient markets hypothesis. Instead, they end up tinkering with it, introducing a nuance here or a qualification there … Stiglitz’s and Krugman’s arguments, while receiving circulation through the popular press, utterly fail to transform the discipline.
Despite all their radical rhetoric, Krugman and Stiglitz are — where it really counts — nothing but die-hard mainstream neoclassical economists. Just like Milton Friedman, Robert Lucas or Greg Mankiw.
The only economic analysis that Krugman and Stiglitz — like other mainstream economists — accept is the one that takes place within the analytic-formalistic modeling strategy that makes up the core of mainstream economics. All models and theories that do not live up to the precepts of the mainstream methodological canon are pruned. You’re free to take your models — not using (mathematical) models at all is considered totally unthinkable — and apply them to whatever you want — as long as you do it within the mainstream approach and its modeling strategy. If you do not follow this particular mathematical-deductive analytical formalism you’re not even considered to be doing economics. ‘If it isn’t modeled, it isn’t economics.’
That isn’t pluralism.
That’s a methodological reductionist straitjacket.
So, even though we have seen a proliferation of models, it has almost exclusively taken place as a kind of axiomatic variation within the standard ‘urmodel’, which is always used as a self-evident benchmark.
Krugman and Stiglitz want to purvey the view that the proliferation of economic models during the last twenty to thirty years is a sign of great diversity and an abundance of new ideas.
But, again, it’s not, really, that simple.
Although mainstream economists like to portray mainstream economics as an open and pluralistic ‘let a hundred flowers bloom,’ in reality it is rather ‘plus ça change, plus c’est la même chose.’
Applying closed analytical-formalist-mathematical-deductivist-axiomatic models, built on atomistic-reductionist assumptions, to a world assumed to consist of atomistic-isolated entities is a sure recipe for failure when the real world is known to be an open system where complex and relational structures and agents interact. Validly deducing things in models of that kind doesn’t much help us understand or explain what is taking place in the real world we happen to live in. Validly deducing things from patently unreal assumptions — that we all know are purely fictional — makes most of the modeling exercises pursued by mainstream economists rather pointless. It’s simply not the stuff that real understanding and explanation in science is made of. Just telling us that the plethora of mathematical models that make up modern economics “expand the range of the discipline’s insights” is nothing short of hand waving.
No matter how many thousands of technical working papers or models mainstream economists come up with, as long as they are just ‘wildly inconsistent’ axiomatic variations of the same old mathematical-deductive ilk, they will not take us one single inch closer to giving us relevant and usable means to further our understanding and possible explanations of real economies.
In many social sciences, p values and null hypothesis significance testing (NHST) are often used to draw far-reaching scientific conclusions – despite the fact that they are as a rule poorly understood and that there exist alternatives that are easier to understand and more informative.
Not least, confidence intervals (CIs) and effect sizes are to be preferred to the Neyman-Pearson-Fisher mishmash approach that is so often practised by applied researchers.
Running a Monte Carlo simulation with 100 replications of a fictitious sample (N = 20) drawn from a normally distributed population with mean 10 and standard deviation 20, and calculating 95% confidence intervals and two-tailed p values on a zero null hypothesis, we get varying CIs (since they are based on varying sample standard deviations). But with a minimum of 3.2 and a maximum of 26.1, the CIs still give a clear picture of what would happen in an infinite limit sequence. The p values, on the other hand (even though in a purely mathematical-statistical sense more or less equivalent to CIs), vary strongly from sample to sample, and jumping around between a minimum of 0.007 and a maximum of 0.999 they don’t give you a clue about what would happen in an infinite limit sequence! So I can’t but agree with Geoff Cumming:
The problems are so severe we need to shift as much as possible from NHST … The first shift should be to estimation: report and interpret effect sizes and CIs … I suggest p should be given only a marginal role, its problem explained, and it should be interpreted primarily as an indicator of where the 95% CI falls in relation to a null hypothesised value.
In case you want to do your own Monte Carlo simulation, here’s an example I’ve made using Gretl:
# set up a dataset with N = 20 observations
nulldata 20

loop 100 --progressive
    series y = normal(10,20)
    scalar ybar = mean(y)
    scalar ybarsd = sd(y)/sqrt($nobs)   # standard error of the mean
    scalar df = $nobs - 1
    scalar tstat = (ybar-10)/ybarsd
    scalar pval = 2*pvalue(t,df,abs(tstat))   # two-tailed p value
    scalar lowb = ybar - critical(t,df,0.025)*ybarsd
    scalar uppb = ybar + critical(t,df,0.025)*ybarsd
    store E:\pvalcoeff.gdt lowb uppb pval
endloop
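For readers without Gretl, here is a rough Python equivalent of the same experiment — my own sketch, not part of the original script, assuming NumPy and SciPy are available (variable names are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1234)
N, REPS = 20, 100           # sample size and number of replications
MU, SD, H0 = 10, 20, 10     # population mean, population s.d., null value

pvals, lows, upps = [], [], []
for _ in range(REPS):
    y = rng.normal(MU, SD, N)
    se = y.std(ddof=1) / np.sqrt(N)            # standard error of the mean
    tstat = (y.mean() - H0) / se
    pvals.append(2 * stats.t.sf(abs(tstat), df=N - 1))   # two-tailed p value
    crit = stats.t.ppf(0.975, df=N - 1)                  # 95% CI critical value
    lows.append(y.mean() - crit * se)
    upps.append(y.mean() + crit * se)

# The CIs vary, but stay in a recognizable band around the true mean,
# while the p values bounce all over the (0, 1) interval.
print(f"CI bounds: min {min(lows):.1f}, max {max(upps):.1f}")
print(f"p values:  min {min(pvals):.3f}, max {max(pvals):.3f}")
```

Exact minima and maxima will of course differ from run to run (and from the Gretl numbers above), but the qualitative contrast between stable CIs and wildly jumping p values comes through either way.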
Oxford professor Simon Wren-Lewis isn’t pleased with heterodox attacks on mainstream economics. One of the reasons is that he doesn’t share the heterodox view that mainstream economics and neoliberal ideas are highly linked.
In a post on his blog, Wren-Lewis defends the mainstream economics establishment against critique waged against it by Phil Mirowski:
Mirowski overestimates the extent to which neoliberal ideas have become “embedded in economic theory”, and underestimates the power that economic theory and evidence can have over even those academic economists who might have a neoliberal disposition. If the tide of neoliberal thought is going to be turned back, economics is going to be important in making that happen.
Wren-Lewis admits that “Philip Mirowski is a historian who has written a great deal about both the history of economics as a discipline and about neoliberalism” and that Mirowski “knows much more about the history of both subjects than I [W-L] do.”
Fair enough, but there are simple remedies for the lack of knowledge.
Read this essay, where yours truly tries to further analyze — much inspired by the works of Amartya Sen — what kind of philosophical-ideological-political-economic doctrine neoliberalism is, and why it so often comes naturally for mainstream economists to embrace neoliberal ideals.
Or maybe — if your Swedish isn’t too rusty … — you could take in the book-length argument in Den dystra vetenskapen (‘The Dismal Science,’ Atlas 2001) for why there has been such a deep and long-standing connection between the dismal science and different varieties of neoliberalism.
Well, sort of, at least.
For those of us who can’t get enough of English eccentrics, Brewer’s Rogues, Villains, Eccentrics by William Donaldson is probably the funniest book ever written. I mean, just to take one example, where else would you find an entry like this one?
Carlton, Sydney (1949- ), painter and decorator. Those who argue that bestiality should be treated with understanding had a setback in 1998 when Carlton, a married man from Bradford, was sentenced to a year in prison for having intercourse with a Staffordshire bull terrier named Badger. His defence was that Badger had made the first move. ‘I can’t help it if the dog took a liking to me,’ he told the court. This was not accepted.
If I ask myself what I could legitimately assume a person to have rational expectations about, the technical answer would be, I think, about the realization of a stationary stochastic process, such as the outcome of the toss of a coin or anything that can be modeled as the outcome of a random process that is stationary. I don’t think that the economic implications of the outbreak of World War II were regarded by most people as the realization of a stationary stochastic process. In that case, the concept of rational expectations does not make any sense. Similarly, the major innovations cannot be thought of as the outcome of a random process. In that case the probability calculus does not apply.
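The distinction drawn in the quotation can be sketched numerically. The following is my own illustration, not part of the quoted text: a coin toss is stationary, so its time average settles on a stable probability, while a random walk (a stand-in for a non-stationary process) offers no fixed population value for expectations to latch onto.

```python
import numpy as np

rng = np.random.default_rng(42)
T = 10_000  # length of each simulated realization

# Stationary process: i.i.d. coin tosses (heads = 1, tails = 0).
# Its distribution never changes, so the sample mean settles down —
# there is a stable 'objective' probability to form expectations about.
coin = rng.integers(0, 2, T)
print("coin-toss mean:", coin.mean())  # hovers near 0.5

# Non-stationary process: a Gaussian random walk. Its distribution
# drifts over time, so the time average of a single realization does
# not converge to any fixed population value — nothing stable for
# 'rational expectations' to latch onto.
walk = np.cumsum(rng.standard_normal(T))
halves = walk[: T // 2].mean(), walk[T // 2 :].mean()
print("random-walk half-sample means:", halves)  # typically far apart
```

The coin-toss mean is pinned down ever more tightly as T grows; the two half-sample means of the walk generally disagree, and keep disagreeing no matter how long the series runs.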
‘Modern’ macroeconomic theories are as a rule founded on the assumption of rational expectations — where the world evolves in accordance with fully predetermined models where uncertainty has been reduced to stochastic risk describable by some probabilistic distribution.
The tiny little problem that there is no hard empirical evidence that verifies these models — cf. Michael Lovell (1986) & Nikolay Gertchev (2007) — usually doesn’t bother its protagonists too much. Rational expectations überpriest Thomas Sargent has the following to say on the epistemological status of the rational expectations hypothesis:
Partly because it focuses on outcomes and does not pretend to have behavioral content, the hypothesis of rational expectations has proved to be a powerful tool for making precise statements about complicated dynamic economic systems.
Precise, yes, in the celestial world of models. But relevant and realistic? I’ll be dipped!