Modelling by the construction of analogue economies is a widespread technique in economic theory nowadays … As Lucas urges, the important point about analogue economies is that everything is known about them … and within them the propositions we are interested in ‘can be formulated rigorously and shown to be valid’ … For these constructed economies, our views about what will happen are ‘statements of verifiable fact.’
The method of verification is deduction … We are, however, faced with a trade-off: we can have totally verifiable results but only about economies that are not real …
How then do these analogue economies relate to the real economies that we are supposed to be theorizing about? … My overall suspicion is that the way deductivity is achieved in economic models may undermine the possibility to teach genuine truths about empirical reality.
Trumponomics: causes and consequences

Trumponomics: everything to fear including fear itself?
Can Trump overcome secular stagnation? (James K. Galbraith)
Trump through a Polanyi lens: considering community well-being
Trump is Obama’s legacy. Will this break up the Democratic Party?
Causes and consequences of President Donald Trump
Explaining the rise of Donald Trump
Class and Trumponomics (David F. Ruccio)
Trump’s Growthism: its roots in neoclassical economic theory
Trumponomics: causes and prospects (L. Randall Wray)
The fall of the US middle class and the hair-raising ascent of Donald Trump
Mourning in America: the corporate/government/media complex
How the Donald can save America from capital despotism (Stephen T. Ziliak)
Prolegomenon to a defense of the City of Gold (David A. Westbrook)
Trump’s bait and switch: job creation in the midst of welfare state sabotage (Pavlina R. Tcherneva)
Can ‘Trumponomics’ extend the recovery?
In the realm of science, it ought to be considered of little or no value simply to make claims about a model while losing sight of reality.
There is a difference between having evidence for some hypothesis and having evidence for the hypothesis relevant for a given purpose. The difference is important because scientific methods tend to be good at addressing hypotheses of a certain kind and not others: scientific methods come with particular applications built into them … The advantage of mathematical modelling is that its method of deriving a result is that of mathematical proof: the conclusion is guaranteed to hold given the assumptions. However, the evidence generated in this way is valid only in abstract model worlds while we would like to evaluate hypotheses about what happens in economies in the real world … The upshot is that valid evidence does not seem to be enough. What we also need is to evaluate the relevance of the evidence in the context of a given purpose.
Even if some people think that there has been a kind of empirical revolution in economics lately, I would still argue that empirical evidence only plays a minor role in economic theory, where models largely function as a substitute for empirical evidence. The one-sided, almost religious, insistence on axiomatic-deductivist modeling as the only scientific activity worth pursuing in economics still rules the roost.
But mainstream economists’ belief that theories and models being ‘consistent with’ data will somehow make them a success story is nothing but an empty hope. Mere consistency with the facts is never sufficient to prove models or theories true. The fact that the US presently has a president named Donald Trump is ‘consistent with’ the US being a democracy, but that doesn’t in any way whatsoever explain why a witless clown came to be elected to a post previously held by people like George Washington and Thomas Jefferson.
Theories and models are always ‘under-determined’ by facts. So a good way to help us choose between different ‘consistent’ theories and models is to actually look at what happens out there in the economy and why it happens.
History and good ordinary social science can also help us. And if we’re not too busy doing the things we do, but once in a while take a break and do some methodological reflection on why we do what we do, that takes us a long way too.
Gödel’s incompleteness theorems raise important questions about the foundations of mathematics.
The most important concerns the question of how to select the specific systems of axioms that mathematics is supposed to be founded on. Gödel’s theorems irrevocably show that no matter what system is chosen, there will always be truths within it that cannot be proved without adding further axioms.
This, of course, ought to be of paramount interest for those mainstream economists who still adhere to the dream of constructing a deductive-axiomatic economics with analytic truths that do not require empirical verification. Since Gödel showed that any consistent axiomatic system rich enough to express arithmetic is incomplete, any such deductive-axiomatic economics will always contain undecidable statements. When the dream of a complete and consistent axiomatic foundation cannot be fulfilled even for mathematics, it is totally incomprehensible that some people still think it could be achieved for economics.
The master-economist must possess a rare combination of gifts …. He must be mathematician, historian, statesman, philosopher—in some degree. He must understand symbols and speak in words. He must contemplate the particular, in terms of the general, and touch abstract and concrete in the same flight of thought. He must study the present in the light of the past for the purposes of the future. No part of man’s nature or his institutions must be entirely outside his regard. He must be purposeful and disinterested in a simultaneous mood, as aloof and incorruptible as an artist, yet sometimes as near to earth as a politician.
Economics students today are complaining more and more about the way economics is taught. The lack of fundamental diversity (not just path-dependent elaborations of the mainstream canon) and the narrowing of the curriculum dissatisfy economics students all over the world. The frustrating lack of real-world relevance has led many of them to demand that the discipline develop a more open and pluralistic theoretical and methodological attitude.
There are many things about the way economics is taught today that worry yours truly. Today’s students are force-fed with mainstream neoclassical theories and models. That lack of pluralism is cause for serious concern.
However, I find the most salient deficiency in ‘modern’ economics education to be the total absence of courses in the history of economic thought and economic methodology. That is deeply worrying, since a science that doesn’t self-reflect and ask important methodological and science-theoretical questions about its own activity is a science in dire straits.
Methodology is about how we do economics, how we evaluate theories, models and arguments. To know and think about methodology is important for every economist. Without methodological awareness it’s really impossible to understand what you are doing and why you’re doing it. Dismissing methodology is dismissing a necessary and vital part of science.
For someone who has spent forty years in economics academia, it’s hopeful to see all these young economics students who want to see real change in economics and the way it’s taught. Never give up. Never give in!
Little in the discipline has changed in the wake of the crisis. Mirowski thinks that this is at least in part a result of the impotence of the loyal opposition — those economists such as Joseph Stiglitz or Paul Krugman who attempt to oppose the more viciously neoliberal articulations of economic theory from within the camp of neoclassical economics. Though Krugman and Stiglitz have attacked concepts like the efficient markets hypothesis … Mirowski argues that their attempt to do so while retaining the basic theoretical architecture of neoclassicism has rendered them doubly ineffective.
First, their adoption of the battery of assumptions that accompany most neoclassical theorizing — about representative agents, treating information like any other commodity, and so on — makes it nearly impossible to conclusively rebut arguments like the efficient markets hypothesis. Instead, they end up tinkering with it, introducing a nuance here or a qualification there … Stiglitz’s and Krugman’s arguments, while receiving circulation through the popular press, utterly fail to transform the discipline.
Despite all their radical rhetoric, Krugman and Stiglitz are — where it really counts — nothing but die-hard mainstream neoclassical economists. Just like Milton Friedman, Robert Lucas or Greg Mankiw.
The only economic analysis that Krugman and Stiglitz — like other mainstream economists — accept is the one that takes place within the analytic-formalistic modeling strategy that makes up the core of mainstream economics. All models and theories that do not live up to the precepts of the mainstream methodological canon are pruned. You’re free to take your models — not using (mathematical) models at all is considered totally unthinkable — and apply them to whatever you want, as long as you do it within the mainstream approach and its modeling strategy. If you do not follow this particular mathematical-deductive analytical formalism you’re not even considered to be doing economics. ‘If it isn’t modeled, it isn’t economics.’
That isn’t pluralism.
That’s a methodological reductionist straitjacket.
So, even though we have seen a proliferation of models, it has almost exclusively taken place as a kind of axiomatic variation within the standard ‘urmodel’, which is always used as a self-evident benchmark.
Krugman and Stiglitz want to purvey the view that the proliferation of economic models during the last twenty to thirty years is a sign of great diversity and abundance of new ideas.
But, again, it’s not, really, that simple.
Although mainstream economists like to portray mainstream economics as an open and pluralistic ‘let a hundred flowers bloom,’ in reality it is rather ‘plus ça change, plus c’est la même chose.’
Applying closed analytical-formalist-mathematical-deductivist-axiomatic models, built on atomistic-reductionist assumptions, to a world assumed to consist of atomistic-isolated entities is a sure recipe for failure when the real world is known to be an open system where complex and relational structures and agents interact. Validly deducing things in models of that kind doesn’t much help us understand or explain what is taking place in the real world we happen to live in. Validly deducing things from patently unreal assumptions that we all know are purely fictional makes most of the modeling exercises pursued by mainstream economists rather pointless. It’s simply not the stuff that real understanding and explanation in science is made of. Just telling us that the plethora of mathematical models that make up modern economics “expand the range of the discipline’s insights” is nothing short of hand waving.
No matter how many thousands of technical working papers or models mainstream economists come up with, as long as they are just ‘wildly inconsistent’ axiomatic variations of the same old mathematical-deductive ilk, they will not take us one single inch closer to giving us relevant and usable means to further our understanding and possible explanations of real economies.
In many social sciences, p values and null hypothesis significance testing (NHST) are often used to draw far-reaching scientific conclusions, despite the fact that they are as a rule poorly understood and that there exist alternatives that are easier to understand and more informative.
Not least, confidence intervals (CIs) and effect sizes are to be preferred to the Neyman–Pearson–Fisher mishmash approach that is so often practised by applied researchers.
Running a Monte Carlo simulation with 100 replications of a fictitious sample of N = 20, 95% confidence intervals, a normally distributed population with mean = 10 and standard deviation = 20, and two-tailed p values for the null hypothesis that the population mean equals 10, we get varying CIs (since they are based on varying sample standard deviations), but with a minimum of 3.2 and a maximum of 26.1 we still get a clear picture of what would happen in an infinite limit sequence. The p values, on the other hand (even though in a purely mathematical-statistical sense they are more or less equivalent to CIs), vary strongly from sample to sample, and jumping around between a minimum of 0.007 and a maximum of 0.999 they don’t give you a clue about what will happen in an infinite limit sequence! So, I can’t but agree with Geoff Cumming:
The problems are so severe we need to shift as much as possible from NHST … The first shift should be to estimation: report and interpret effect sizes and CIs … I suggest p should be given only a marginal role, its problem explained, and it should be interpreted primarily as an indicator of where the 95% CI falls in relation to a null hypothesised value.
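Cumming’s closing suggestion, reading p mainly as an indicator of where the 95% CI falls relative to the null value, rests on an exact duality for standard tests. A minimal stdlib-only Python sketch can show it for a z test (the sample data and the assumed known population sigma below are hypothetical, chosen only for illustration): the two-tailed p value falls below 0.05 exactly when the null value lies outside the 95% CI.

```python
# Illustrates the p-value/CI duality: for a z test with known sigma, the
# two-tailed p value is below 0.05 precisely when the null value falls
# outside the 95% confidence interval. Data and sigma are hypothetical.
import math
from statistics import NormalDist

def z_test_and_ci(sample, null_mean, sigma, conf=0.95):
    n = len(sample)
    mean = sum(sample) / n
    se = sigma / math.sqrt(n)                     # standard error of the mean
    z = (mean - null_mean) / se
    p = 2 * (1 - NormalDist().cdf(abs(z)))        # two-tailed p value
    zcrit = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    return p, (mean - zcrit * se, mean + zcrit * se)

sample = [12.1, 9.4, 14.8, 7.9, 11.2, 10.6, 13.3, 8.8]
p, (lo, hi) = z_test_and_ci(sample, null_mean=10, sigma=2.5)
print(p, lo, hi)   # here p > 0.05, and 10 indeed lies inside (lo, hi)
```

The same equivalence holds for t-based intervals, with the t critical value in place of zcrit, which is why reporting the CI carries the information the p value is usually asked to supply.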
In case you want to do your own Monte Carlo simulation, here’s an example I’ve made using Gretl:
nulldata 20
loop 100 --progressive
    series y = normal(10,20)                  # N = 20 draws from N(10,20)
    scalar ybar = mean(y)
    scalar df = $nobs - 1
    scalar ybarsd = sd(y)/sqrt($nobs)         # standard error of the mean
    scalar tstat = (ybar - 10)/ybarsd         # t statistic, null: mean = 10
    scalar lowb = ybar - critical(t,df,0.025)*ybarsd
    scalar uppb = ybar + critical(t,df,0.025)*ybarsd
    scalar pval = 2*pvalue(t,df,abs(tstat))   # two-tailed p value
    store E:\pvalcoeff.gdt lowb uppb pval
endloop
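For readers who would rather run the experiment without Gretl, here is a stdlib-only Python sketch of the same exercise, using the parameters stated above (100 replications, N = 20, population mean 10, standard deviation 20). The standard library has no Student-t quantiles, so the normal distribution is used as an approximation to t(19), and the seed is arbitrary.

```python
# Monte Carlo sketch: 100 samples of N = 20 from N(10, 20). For each sample
# we record a 95% CI for the mean and a two-tailed p value for the null
# that the mean equals 10 (normal approximation in place of t(19)).
import math
import random
from statistics import NormalDist

random.seed(1)                        # arbitrary seed, for reproducibility
N, MU, SIGMA, REPS = 20, 10, 20, 100
zcrit = NormalDist().inv_cdf(0.975)   # two-sided 95% critical value

pvals, lows, upps = [], [], []
for _ in range(REPS):
    y = [random.gauss(MU, SIGMA) for _ in range(N)]
    ybar = sum(y) / N
    s = math.sqrt(sum((v - ybar) ** 2 for v in y) / (N - 1))  # sample sd
    se = s / math.sqrt(N)                                     # std. error
    z = (ybar - 10) / se
    pvals.append(2 * (1 - NormalDist().cdf(abs(z))))
    lows.append(ybar - zcrit * se)
    upps.append(ybar + zcrit * se)

# The CIs wobble (their widths depend on the sample sd) but stay in a
# comparable range, while the p values jump wildly between 0 and 1.
print("p range:", min(pvals), max(pvals))
print("CI bounds range:", min(lows), max(upps))
```

Since the null is true by construction here, the p values are approximately uniformly distributed, which is exactly why they bounce around so much from sample to sample.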