Master class

31 March, 2017 at 20:14 | Posted in Economics | Comments Off on Master class

 

Elzbieta Towarnicka — with a totally unbelievable and absolutely fabulous voice.

This is as good as it gets in the world of music.


Prayer

31 March, 2017 at 18:55 | Posted in Politics & Society | Comments Off on Prayer


This one is for all you, brothers and sisters, fighting oppression, struggling to survive, and risking your lives on your long walk to freedom. May God be with you.

Min värld är fattig och död när barnasinnet berövats sin glöd

31 March, 2017 at 17:01 | Posted in Varia | Comments Off on Min värld är fattig och död när barnasinnet berövats sin glöd

Poetry and music in beautiful union.
Hansson de Wolfe United — a phenomenon without parallel in Swedish pop music.
 

Probability and economics

30 March, 2017 at 16:02 | Posted in Economics | 2 Comments

Modern mainstream (neoclassical) economics relies to a large degree on the notion of probability.

To be at all amenable to applied economic analysis, economic observations allegedly have to be conceived of as random events that are analyzable within a probabilistic framework.

But is it really necessary to model the economic system as one in which randomness can only be analyzed and understood on the basis of an a priori notion of probability?

When attempting to convince us of the necessity of founding empirical economic analysis on probability models, neoclassical economics actually forces us to (implicitly) interpret events as random variables generated by an underlying probability density function.

This is at odds with reality. Randomness obviously is a fact of the real world. Probability, on the other hand, attaches (if at all) to the world via intellectually constructed models, and a fortiori is only a fact of a probability generating (nomological) machine or a well constructed experimental arrangement or ‘chance set-up.’

Just as there is no such thing as a ‘free lunch,’ there is no such thing as a ‘free probability.’

To be able to talk about probabilities at all, you have to specify a model. In statistics, any process in which you observe or measure something is referred to as an experiment (rolling a die), and the results obtained are the outcomes or events of the experiment (the number of points rolled with the die, e.g. 3 or 5). If there is no chance set-up or model that generates the probabilistic outcomes or events, then strictly speaking there is no event at all.

Probability is a relational element. It must always come with a specification of the model from which it is calculated. And to be of any empirical scientific value, it has to be shown to coincide with (or at least converge to) real data-generating processes or structures — something seldom if ever done.

And this is the basic problem with economic data. If you have a fair roulette wheel, you can arguably specify probabilities and probability density distributions. But how do you conceive of analogous nomological machines for prices, gross domestic product, income distribution, etc.? Only by a leap of faith. And that does not suffice. You have to come up with some really good arguments if you want to persuade people to believe in the existence of socio-economic structures that generate data with characteristics conceivable as stochastic events portrayed by probabilistic density distributions.

We simply have to admit that the socio-economic states of nature that we talk of in most social sciences — and certainly in economics — are not amenable to analysis in terms of probabilities, simply because in real-world open systems there are no probabilities to be had!

The processes that generate socio-economic data in the real world cannot just be assumed to always be adequately captured by a probability measure. And so it cannot be maintained that it should even be mandatory to treat observations and data — whether cross-section, time series or panel data — as events generated by some probability model. The important activities of most economic agents do not usually include throwing dice or spinning roulette wheels. Data-generating processes — at least outside of nomological machines like dice and roulette wheels — are not self-evidently best modeled with probability measures.

If we agree on this, we also have to admit that much of modern neoclassical economics lacks sound foundations.

When economists and econometricians — uncritically and without arguments — simply assume that one can apply probability distributions from statistical theory to their own area of research, they are really skating on thin ice.

Mathematics (by which I shall mean pure mathematics) has no grip on the real world; if probability is to deal with the real world it must contain elements outside mathematics; the meaning of 'probability' must relate to the real world, and there must be one or more 'primitive' propositions about the real world, from which we can then proceed deductively (i.e. mathematically). We will suppose (as we may by lumping several primitive propositions together) that there is just one primitive proposition, the 'probability axiom', and we will call it A for short. Although it has got to be true, A is by the nature of the case incapable of deductive proof, for the sufficient reason that it is about the real world …

We will begin with the … school which I will call philosophical. This attacks directly the 'real' probability problem; what are the axiom A and the meaning of 'probability' to be, and how can we justify A? It will be instructive to consider the attempt called the 'frequency theory'. It is natural to believe that if (with the natural reservations) an act like throwing a die is repeated n times the proportion of 6's will, with certainty, tend to a limit, p say, as n goes to infinity … If we take this proposition as 'A' we can at least settle off-hand the other problem, of the meaning of probability; we define its measure for the event in question to be the number p. But for the rest this A takes us nowhere. Suppose we throw 1000 times and wish to know what to expect. Is 1000 large enough for the convergence to have got under way, and how far? A does not say. We have, then, to add to it something about the rate of convergence. Now an A cannot assert a certainty about a particular number n of throws, such as 'the proportion of 6's will certainly be within p ± e for large enough n (the largeness depending on e)'. It can only say 'the proportion will lie between p ± e with at least such and such probability (depending on e and n*) whenever n > n*'. The vicious circle is apparent. We have not merely failed to justify a workable A; we have failed even to state one which would work if its truth were granted. It is generally agreed that the frequency theory won't work. But whatever the theory it is clear that the vicious circle is very deep-seated: certainty being impossible, whatever A is made to state can only be in terms of 'probability'.

John Edensor Littlewood 

This importantly also means that if you cannot show that data satisfies all the conditions of the probabilistic nomological machine, then the statistical inferences made in mainstream economics lack sound foundations!
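Littlewood's point about the frequency theory can be made concrete with a small simulation — a sketch of my own, not part of the quoted text. Even with n = 1000 throws, the observed proportion of sixes scatters noticeably around 1/6, and any claim about how close it will lie is itself only a probability statement.

import random

# Simulate many repetitions of 'throw a die 1000 times' and look at the
# spread of the observed proportion of sixes. The limiting claim (p = 1/6)
# says nothing certain about any single run of 1000 throws.
def proportion_of_sixes(n_throws, rng):
    return sum(rng.randint(1, 6) == 6 for _ in range(n_throws)) / n_throws

rng = random.Random(42)
proportions = [proportion_of_sixes(1000, rng) for _ in range(200)]

print("theoretical p :", 1 / 6)
print("min observed  :", min(proportions))
print("max observed  :", max(proportions))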

 

The problem with unjustified assumptions

29 March, 2017 at 16:12 | Posted in Economics, Statistics & Econometrics | Comments Off on The problem with unjustified assumptions

An ongoing concern is that excessive focus on formal modeling and statistics can lead to neglect of practical issues and to overconfidence in formal results … Analysis interpretation depends on contextual judgments about how reality is to be mapped onto the model, and how the formal analysis results are to be mapped back into reality. But overconfidence in formal outputs is only to be expected when much labor has gone into deductive reasoning. First, there is a need to feel the labor was justified, and one way to do so is to believe the formal deduction produced important conclusions. Second, there seems to be a pervasive human aversion to uncertainty, and one way to reduce feelings of uncertainty is to invest faith in deduction as a sufficient guide to truth. Unfortunately, such faith is as logically unjustified as any religious creed, since a deduction produces certainty about the real world only when its assumptions about the real world are certain …

Unfortunately, assumption uncertainty reduces the status of deductions and statistical computations to exercises in hypothetical reasoning – they provide best-case scenarios of what we could infer from specific data (which are assumed to have only specific, known problems). Even more unfortunate, however, is that this exercise is deceptive to the extent it ignores or misrepresents available information, and makes hidden assumptions that are unsupported by data …

Despite assumption uncertainties, modelers often express only the uncertainties derived within their modeling assumptions, sometimes to disastrous consequences. Econometrics supplies dramatic cautionary examples in which complex modeling has failed miserably in important applications …

Sander Greenland

Yes, indeed, econometrics fails miserably over and over again. One reason why it does is that the error term in the regression models used is thought of as representing the effect of the variables that were omitted from the models. The error term is somehow thought to be a 'cover-all' term representing omitted content in the model and necessary to include to 'save' the assumed deterministic relation between the other random variables included in the model. Error terms are usually assumed to be orthogonal to (uncorrelated with) the explanatory variables. But since they are unobservable, they are also impossible to test empirically. And without justification of the orthogonality assumption, there is as a rule nothing to ensure identifiability:

With enough math, an author can be confident that most readers will never figure out where a FWUTV (facts with unknown truth value) is buried. A discussant or referee cannot say that an identification assumption is not credible if they cannot figure out what it is and are too embarrassed to ask.

Distributional assumptions about error terms are a good place to bury things because hardly anyone pays attention to them. Moreover, if a critic does see that this is the identifying assumption, how can she win an argument about the true expected value of the level of aether? If the author can make up an imaginary variable, "because I say so" seems like a pretty convincing answer to any question about its properties.

Paul Romer
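To see why the untestable orthogonality assumption matters, here is a minimal simulation of my own — an illustrative sketch, not Romer's example. When an omitted variable w is correlated with the regressor x, the error term of the short regression is correlated with x and the OLS estimate of the effect of x is biased, with nothing in the observed data to flag the problem.

import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# w is an omitted variable correlated with the observed regressor x
w = rng.normal(size=n)
x = 0.8 * w + rng.normal(size=n)
y = 2.0 * x + 3.0 * w + rng.normal(size=n)   # true effect of x is 2.0

# Regress y on x alone: the error term (3w + noise) is not orthogonal to x
beta_hat = np.polyfit(x, y, 1)[0]
print("true effect of x:", 2.0)
print("OLS estimate    :", round(beta_hat, 3))   # biased upward, roughly 3.5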

Don’t leave me this way

29 March, 2017 at 15:02 | Posted in Varia | 1 Comment

 

Mainstream flimflam defender Wren-Lewis gets it wrong — again!

26 March, 2017 at 20:56 | Posted in Economics | 4 Comments

Again and again, Oxford professor Simon Wren-Lewis rides out to defend orthodox macroeconomic theory against attacks from heterodox critics.

A couple of years ago, it was rational expectations, microfoundations, and representative agent modeling he wanted to save.

And now he is back with new flimflamming against heterodox attacks and pluralist demands from economics students all over the world:

Attacks [against mainstream economics] are far from progressive.

[D]evoting a lot of time to exposing students to contrasting economic frameworks (feminist, Austrian, post-Keynesian) to give them a range of ways to think about the economy, as suggested here, means cutting time spent on learning the essential tools that any economist needs … [E]conomics is a vocational subject, not a liberal arts subject …

This is the mistake that progressives make. They think that by challenging mainstream economics they will somehow make the economic arguments for regressive policies go away. They will not go away. Instead all you have done is thrown away the chance of challenging those arguments on their own ground, using the strength of an objective empirical science …

Economics, as someone once said, is a separate and inexact science. That it is a science, with a mainstream that has areas of agreement and areas of disagreement, is its strength. It is what allows economists to claim that some things are knowledge, and should be treated as such. Turn it into separate schools of thought, and it degenerates into sets of separate opinions. There is plenty wrong with mainstream economics, but replacing it with schools of thought is not the progressive endeavor that some believe. It would just give you more idiotic policies …

Mainstream economics is here depicted by Wren-Lewis as nothing but "essential tools that any economist needs." Not a theory among other competing theories. Not a "separate school of thought," but an "objective empirical science" capable of producing "knowledge."

I’ll be dipped!

Reading that kind of nonsense one has to wonder if this guy is for real!

Wren-Lewis always tries hard to give a picture of modern macroeconomics as a pluralist enterprise. But the change and diversity that get Wren-Lewis' approval only take place within the analytic-formalistic modeling strategy that makes up the core of mainstream economics. You're free to take your analytical formalist models and apply them to whatever you want — as long as you do it with a modeling methodology that is acceptable to the mainstream. If you do not follow this particular mathematical-deductive analytical formalism you're not even considered to be doing economics. If you haven't modeled your thoughts, you're not in the economics business. But this isn't pluralism. It's a methodological reductionist straightjacket.

Validly deducing things from patently unreal assumptions — that we all know are purely fictional — makes most of the modeling exercises pursued by mainstream macroeconomists rather pointless. It’s simply not the stuff that real understanding and explanation in science is made of. Had mainstream economists like Wren-Lewis not been so in love with their models, they would have perceived this too. Telling us that the plethora of models that make up modern macroeconomics are not right or wrong, but just more or less applicable to different situations, is nothing short of hand waving.

Wren-Lewis seems to have no problem with the lack of fundamental diversity — not just path-dependent elaborations of the mainstream canon — and the vanishingly little real-world relevance that characterize modern mainstream macroeconomics. And he obviously shares the view that there is nothing basically wrong with 'standard theory.' As long as policy makers and economists stick to 'standard economic analysis' everything is just fine. Economics is just a common language and method that makes us think straight, reach correct answers, and produce 'knowledge.'

Just like his mainstream colleagues Paul Krugman and Greg Mankiw, Wren-Lewis is a mainstream neoclassical economist fanatically defending the insistence on using an axiomatic-deductive economic modeling strategy. To yours truly, this attitude is nothing but a late confirmation of Alfred North Whitehead's complaint that "the self-confidence of learned people is the comic tragedy of civilization."

Contrary to what Wren-Lewis seems to argue, I would say the recent economic and financial crises — and the fact that mainstream economics has had next to nothing to contribute to understanding them — show that mainstream economics is a degenerative research program in dire need of replacement.

No matter how precise and rigorous the analysis is, and no matter how hard one tries to cast the argument in modern ‘the model is the message’ form, mainstream economists like Wren-Lewis do not push economic science forwards one millimeter since they simply do not stand the acid test of relevance to the target. No matter how clear, precise, rigorous or certain the inferences delivered inside their mainstream models are, they do not per se say anything about real world economies.

 

Added March 27: Brad DeLong isn't too happy either about some of Wren-Lewis' dodgings:

Simon needs to face that fact squarely, rather than to dodge it. The fact is that the “mainstream economists, and most mainstream economists” who were heard in the public sphere were not against austerity, but rather split, with, if anything, louder and larger voices on the pro-austerity side. (IMHO, Simon Wren-Lewis half admits this with his denunciations of “City economists”.) When Unlearning Economics seeks the destruction of “mainstream economics”, he seeks the end of an intellectual hegemony that gives Reinhart and Rogoff’s very shaky arguments a much more powerful institutional intellectual voice by virtue of their authors’ tenured posts at Harvard than the arguments in fact deserve. Simon Wren-Lewis, in response, wants to claim that strengthening the “mainstream” would somehow diminish the influence of future Reinharts and Rogoffs in analogous situations. But the arguments for austerity that turned out to be powerful and persuasive in the public sphere came from inside the house!

Textbooks problem — teaching the wrong things all too well

25 March, 2017 at 16:01 | Posted in Statistics & Econometrics | 2 Comments

It is well known that even experienced scientists routinely misinterpret p-values in all sorts of ways, including confusion of statistical and practical significance, treating non-rejection as acceptance of the null hypothesis, and interpreting the p-value as some sort of replication probability or as the posterior probability that the null hypothesis is true …

It is shocking that these errors seem so hard-wired into statisticians' thinking, and this suggests that our profession really needs to look at how it teaches the interpretation of statistical inferences. The problem does not seem just to be technical misunderstandings; rather, statistical analysis is being asked to do something that it simply can't do, to bring out a signal from any data, no matter how noisy. We suspect that, to make progress in pedagogy, statisticians will have to give up some of the claims we have implicitly been making about the effectiveness of our methods …

It would be nice if the statistics profession was offering a good solution to the significance testing problem and we just needed to convey it more clearly. But, no, … many statisticians misunderstand the core ideas too. It might be a good idea for other reasons to recommend that students take more statistics classes—but this won’t solve the problems if textbooks point in the wrong direction and instructors don’t understand what they are teaching. To put it another way, it’s not that we’re teaching the right thing poorly; unfortunately, we’ve been teaching the wrong thing all too well.

Andrew Gelman & John Carlin

Teaching both statistics and economics, yours truly can’t but notice that the statements “give up some of the claims we have implicitly been making about the effectiveness of our methods” and “it’s not that we’re teaching the right thing poorly; unfortunately, we’ve been teaching the wrong thing all too well” obviously apply not only to statistics …

And the solution? Certainly not — as Gelman and Carlin also underline — to reform p-values. Instead we have to accept that we live in a world permeated by genuine uncertainty and that it takes a lot of variation to make good inductive inferences.

Sound familiar? It definitely should!

The standard view in statistics – and the axiomatic probability theory underlying it – is to a large extent based on the rather simplistic idea that 'more is better.' But as Keynes argues in his seminal A Treatise on Probability (1921), 'more of the same' is not what is important when making inductive inferences. It's rather a question of 'more but different' — i.e., variation.

Variation, not replication, is at the core of induction. Finding that p(x|y) = p(x|y & w) doesn't make w 'irrelevant.' Knowing that the probability is unchanged when w is present gives p(x|y & w) another evidential weight ('weight of argument'). Running 10 replicative experiments does not make you as 'sure' of your inductions as running 10,000 varied experiments – even if the probability values happen to be the same.
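A minimal numerical illustration of the point — my own sketch, not Keynes's: the estimated probability can be identical in a small and a large sample, while the 'weight' behind it, here crudely proxied by the width of a standard confidence interval, differs enormously.

import math

def proportion_ci(successes, n, z=1.96):
    # Approximate 95% confidence interval for a proportion (normal approximation)
    p = successes / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p, (p - half_width, p + half_width)

# Same estimated probability (0.6) from 10 observations and from 10,000
for successes, n in [(6, 10), (6_000, 10_000)]:
    p, (lo, hi) = proportion_ci(successes, n)
    print(f"n = {n:6d}: estimate = {p:.2f}, 95% CI = ({lo:.3f}, {hi:.3f})")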

According to Keynes we live in a world permeated by unmeasurable uncertainty – not quantifiable stochastic risk – which often forces us to make decisions based on anything but ‘rational expectations.’ Keynes rather thinks that we base our expectations on the confidence or ‘weight’ we put on different events and alternatives. To Keynes expectations are a question of weighing probabilities by ‘degrees of belief,’ beliefs that often have preciously little to do with the kind of stochastic probabilistic calculations made by the rational agents as modeled by “modern” social sciences. And often we ‘simply do not know.’

Heterodoxy — necessary for the renewal of economics

24 March, 2017 at 08:20 | Posted in Economics | Comments Off on Heterodoxy — necessary for the renewal of economics


A sense of failure is, for all intents and purposes, being translated into a context of relative success requiring more limited changes – though these are still being seen as significant. Part of the reason that they are seen as significant is that changes from within mainstream economics do not have to be major in order to appear radical. It is our contention that heterodox economics is being marginalised in this process of ‘change’ and that this is to the detriment of the positive potential for transforming the discipline …

Marginalising heterodoxy creates problems for teaching economics as a discipline in which economists constructively disagree and can be in error. This is important because it is through a conformity that suppresses a continual and diverse critical awareness that economics becomes a dangerous discourse prone to lack of realism, complacency, and dogmatism. Marginalising heterodoxy reduces the potential realisation of the different components of economics one might expect to be transformed as part of a project to transform the discipline …

Highlighting the points we have may seem like simple griping by a special interest. But there is far more involved than that. Remember we are talking about the failure of a discipline and how it is to be transformed. The marginalisation of heterodoxy has real consequences. In a general sense the marginalisation creates manifest problems that hamper teaching economics in a plural and critically aware way. For example, the marginalisation promotes a Whig history approach. It is also important to bear in mind that heterodoxy is a natural home of pluralism and of critical thinking in economics … Unlike the mainstream, heterodoxy does not have to be made compatible with pluralism and with critical thinking; it is predisposed to these and is already a resource for their development. So, marginalising heterodoxy really does narrow the base by which the discipline seeks to be renewed. That narrowing contributes to restricting the potential for good teaching in economics (including the profoundly important matter of how economists disagree and how they can be in error).

The Association for Heterodox Economics

Analogue economies and reality

23 March, 2017 at 00:54 | Posted in Economics | 2 Comments

Modelling by the construction of analogue economies is a widespread technique in economic theory nowadays … As Lucas urges, the important point about analogue economies is that everything is known about them … and within them the propositions we are interested in 'can be formulated rigorously and shown to be valid' … For these constructed economies, our views about what will happen are 'statements of verifiable fact.'

The method of verification is deduction … We are however, faced with a trade-off: we can have totally verifiable results but only about economies that are not real …

How then do these analogue economies relate to the real economies that we are supposed to be theorizing about? … My overall suspicion is that the way deductivity is achieved in economic models may undermine the possibility to teach genuine truths about empirical reality.

The power that really counts

22 March, 2017 at 17:38 | Posted in Varia | Comments Off on The power that really counts

 

Trumponomics: causes and consequences

22 March, 2017 at 17:13 | Posted in Economics | Comments Off on Trumponomics: causes and consequences

Real-world economics review, issue no. 78, 22 March 2017

Trumponomics: causes and consequences

Trumponomics: everything to fear including fear itself? 3
Jamie Morgan

Can Trump overcome secular stagnation? 20
James K. Galbraith

Trump through a Polanyi lens: considering community well-being 28
Anne Mayhew

Trump is Obama’s legacy. Will this break up the Democratic Party? 36
Michael Hudson

Causes and consequences of President Donald Trump 44
Ann Pettifor

Explaining the rise of Donald Trump 54
Marshall Auerback

Class and Trumponomics 62
David F. Ruccio

Trump’s Growthism: its roots in neoclassical economic theory 86
Herman Daly

Trumponomics: causes and prospects 98
L. Randall Wray

The fall of the US middle class and the hair-raising ascent of Donald Trump 112
Steven Pressman

Mourning in America: the corporate/government/media complex 125
Neva Goodwin

How the Donald can save America from capital despotism 132
Stephen T. Ziliak

Prolegomenon to a defense of the City of Gold 141
David A. Westbrook

Trump’s bait and switch: job creation in the midst of welfare state sabotage 148
Pavlina R. Tcherneva

Can ‘Trumponomics’ extend the recovery? 159
Stephanie Kelton

Your model is consistent? So what!

21 March, 2017 at 18:07 | Posted in Economics | 1 Comment

In the realm of science it ought to be considered of little or no value to simply make claims about the model and lose sight of reality.

There is a difference between having evidence for some hypothesis and having evidence for the hypothesis relevant for a given purpose. The difference is important because scientific methods tend to be good at addressing hypotheses of a certain kind and not others: scientific methods come with particular applications built into them … The advantage of mathematical modelling is that its method of deriving a result is that of mathematical proof: the conclusion is guaranteed to hold given the assumptions. However, the evidence generated in this way is valid only in abstract model worlds while we would like to evaluate hypotheses about what happens in economies in the real world … The upshot is that valid evidence does not seem to be enough. What we also need is to evaluate the relevance of the evidence in the context of a given purpose.

Even if some people think that there has been a kind of empirical revolution in economics lately, I would still argue that empirical evidence only plays a minor role in economic theory, where models largely function as a substitute for empirical evidence. The one-sided, almost religious, insistence on axiomatic-deductivist modeling as the only scientific activity worthy of pursuing in economics still rules the roost.

But mainstream economists' belief that theories and models being 'consistent with' data will somehow make the theories and models a success story is nothing but an empty hope. Mere consistency with the facts is never sufficient to prove models or theories true. The fact that the US presently has a president named Donald Trump is 'consistent with' the US being a democracy — but that doesn't in any way whatsoever explain why a witless clown came to be elected to a post previously held by people like George Washington and Thomas Jefferson.

Theories and models are always ‘under-determined’ by facts. So a good way to help us choose between different ‘consistent’ theories and models is to actually look at what happens out there in the economy and why it happens.

History and good ordinary social science can also help us. And if we're not too busy doing the things we do, but once in a while take a break and do some methodological reflection on why we do what we do — well, that takes us a long way too.

The man who crushed the mathematical dream

21 March, 2017 at 16:17 | Posted in Economics | 1 Comment

Gödel's incompleteness theorems raise important questions about the foundations of mathematics.

The most important concerns the question of how to select the specific systems of axioms that mathematics is supposed to be founded on. Gödel's theorems irrevocably show that no matter what system is chosen, there will always have to be other axioms to prove previously unproved truths.

This, of course, ought to be of paramount interest for those mainstream economists who still adhere to the dream of constructing a deductive-axiomatic economics with analytic truths that do not require empirical verification. Since Gödel showed that any sufficiently rich consistent axiomatic system is incomplete, any such deductive-axiomatic economics will always contain undecidable statements. If not even the dream of a complete and consistent axiomatic foundation for mathematics can be fulfilled, it is totally incomprehensible that some people still think it could be achieved for economics.

To be a good economist one cannot only be an economist

20 March, 2017 at 17:53 | Posted in Economics | 1 Comment

 

The master-economist must possess a rare combination of gifts …. He must be mathematician, historian, statesman, philosopher—in some degree. He must understand symbols and speak in words. He must contemplate the particular, in terms of the general, and touch abstract and concrete in the same flight of thought. He must study the present in the light of the past for the purposes of the future. No part of man’s nature or his institutions must be entirely outside his regard. He must be purposeful and disinterested in a simultaneous mood, as aloof and incorruptible as an artist, yet sometimes as near to earth as a politician.

John Maynard Keynes

Economics students today are complaining more and more about the way economics is taught. The lack of fundamental diversity — not just path-dependent elaborations of the mainstream canon — and the narrowing of the curriculum dissatisfy econ students all over the world. The frustrating lack of real-world relevance has led many of them to demand that the discipline develop a more open and pluralistic theoretical and methodological attitude.

There are many things about the way economics is taught today that worry yours truly. Today’s students are force-fed with mainstream neoclassical theories and models. That lack of pluralism is cause for serious concern.

However, I find the most salient deficiency in 'modern' economics education to be the total absence of courses in the history of economic thought and economic methodology. That is deeply worrying, since a science that doesn't self-reflect and ask important methodological and science-theoretical questions about its own activity is a science in dire straits.

Methodology is about how we do economics, how we evaluate theories, models and arguments. To know and think about methodology is important for every economist. Without methodological awareness it’s really impossible to understand what you are doing and why you’re doing it. Dismissing methodology is dismissing a necessary and vital part of science.

For someone who has spent forty years in academic economics, it's heartening to see all these young economics students who want to see real change in economics and the way it's taught. Never give up. Never give in!

Always on my mind

19 March, 2017 at 23:25 | Posted in Varia | Comments Off on Always on my mind

 

Why Krugman and Stiglitz are no real alternatives to mainstream economics

19 March, 2017 at 13:48 | Posted in Economics | 8 Comments

Little in the discipline has changed in the wake of the crisis. Mirowski thinks that this is at least in part a result of the impotence of the loyal opposition — those economists such as Joseph Stiglitz or Paul Krugman who attempt to oppose the more viciously neoliberal articulations of economic theory from within the camp of neoclassical economics. Though Krugman and Stiglitz have attacked concepts like the efficient markets hypothesis … Mirowski argues that their attempt to do so while retaining the basic theoretical architecture of neoclassicism has rendered them doubly ineffective.

First, their adoption of the battery of assumptions that accompany most neoclassical theorizing — about representative agents, treating information like any other commodity, and so on — make it nearly impossible to conclusively rebut arguments like the efficient markets hypothesis. Instead, they end up tinkering with it, introducing a nuance here or a qualification there … Stiglitz’s and Krugman’s arguments, while receiving circulation through the popular press, utterly fail to transform the discipline.

Paul Heideman

Despite all their radical rhetoric, Krugman and Stiglitz are — where it really counts — nothing but die-hard mainstream neoclassical economists. Just like Milton Friedman, Robert Lucas or Greg Mankiw.

The only economic analysis that Krugman and Stiglitz — like other mainstream economists — accept is the one that takes place within the analytic-formalistic modeling strategy that makes up the core of mainstream economics. All models and theories that do not live up to the precepts of the mainstream methodological canon are pruned. You're free to take your models — not using (mathematical) models at all is considered totally unthinkable — and apply them to whatever you want — as long as you do it within the mainstream approach and its modeling strategy. If you do not follow this particular mathematical-deductive analytical formalism you're not even considered to be doing economics. 'If it isn't modeled, it isn't economics.'

That isn't pluralism.

That’s a methodological reductionist straightjacket.

So, even though we have seen a proliferation of models, it has almost exclusively taken place as a kind of axiomatic variation within the standard 'urmodel', which is always used as a self-evident benchmark.

Krugman and Stiglitz want to purvey the view that the proliferation of economic models during the last twenty-thirty years is a sign of great diversity and abundance of new ideas.

But, again, it’s not, really, that simple.

Although mainstream economists like to portray mainstream economics as an open and pluralistic ‘let a hundred flowers bloom,’ in reality it is rather ‘plus ça change, plus c’est la même chose.’

Applying closed analytical-formalist-mathematical-deductivist-axiomatic models, built on atomistic-reductionist assumptions, to a world assumed to consist of atomistic-isolated entities is a sure recipe for failure when the real world is known to be an open system where complex and relational structures and agents interact. Validly deducing things in models of that kind doesn't help us much in understanding or explaining what is taking place in the real world we happen to live in. Validly deducing things from patently unreal assumptions — that we all know are purely fictional — makes most of the modeling exercises pursued by mainstream economists rather pointless. It's simply not the stuff that real understanding and explanation in science is made of. Just telling us that the plethora of mathematical models that make up modern economics "expand the range of the discipline's insights" is nothing short of hand waving.

No matter how many thousands of technical working papers or models mainstream economists come up with, as long as they are just ‘wildly inconsistent’ axiomatic variations of the same old mathematical-deductive ilk, they will not take us one single inch closer to giving us relevant and usable means to further our understanding and possible explanations of real economies.

When one look says it all

18 March, 2017 at 19:22 | Posted in Politics & Society | Comments Off on When one look says it all

 

Monte Carlo simulation on p-values (wonkish)

18 March, 2017 at 16:41 | Posted in Statistics & Econometrics | Comments Off on Monte Carlo simulation on p-values (wonkish)

In many social sciences, p values and null hypothesis significance testing (NHST) are often used to draw far-reaching scientific conclusions – despite the fact that they are as a rule poorly understood and that there exist alternatives that are easier to understand and more informative.

Not least, confidence intervals (CIs) and effect sizes are to be preferred to the Neyman-Pearson-Fisher mishmash approach that is so often practised by applied researchers.

Running a Monte Carlo simulation with 100 replications of a fictitious sample with N = 20, 95% confidence intervals, a normally distributed population with mean = 10 and a standard deviation of 20, and taking two-tailed p values on a zero null hypothesis, we get varying CIs (since they are based on varying sample standard deviations); but with a minimum of 3.2 and a maximum of 26.1 we still get a clear picture of what would happen in an infinite limit sequence. The p values, on the other hand (even though in a purely mathematical-statistical sense more or less equivalent to CIs), vary strongly from sample to sample; jumping around between a minimum of 0.007 and a maximum of 0.999, they don't give you a clue about what will happen in an infinite limit sequence! So, I can't but agree with Geoff Cumming:

The problems are so severe we need to shift as much as possible from NHST … The first shift should be to estimation: report and interpret effect sizes and CIs … I suggest p should be given only a marginal role, its problem explained, and it should be interpreted primarily as an indicator of where the 95% CI falls in relation to a null hypothesised value.

Geoff Cumming

In case you want to do your own Monte Carlo simulation, here’s an example I’ve made using Gretl:

nulldata 20                             # create an empty dataset with N = 20 observations
loop 100 --progressive                  # 100 Monte Carlo replications
series y = normal(10,15)                # draw a normal sample around a population mean of 10
scalar zs = (10-mean(y))/sd(y)          # distance between 10 and the sample mean, in sample s.d. units (not used below)
scalar df = $nobs-1                     # degrees of freedom
scalar ybar = mean(y)                   # sample mean
scalar ysd = sd(y)                      # sample standard deviation
scalar ybarsd = ysd/sqrt($nobs)         # standard error of the mean
scalar tstat = (ybar-10)/ybarsd         # t statistic for H0: population mean = 10
pvalue t df tstat                       # print the p value for this replication
scalar lowb = mean(y) - critical(t,df,0.025)*ybarsd   # lower 95% CI bound
scalar uppb = mean(y) + critical(t,df,0.025)*ybarsd   # upper 95% CI bound
scalar pval = pvalue(t,df,tstat)        # p value for the t statistic
store E:\pvalcoeff.gdt lowb uppb pval   # save CI bounds and p value from each replication
endloop
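For readers without Gretl, here is a rough Python equivalent of the same exercise — a sketch of my own under the same setup as the script above (sample size 20, 100 replications, null value 10); exact numbers will of course differ from run to run.

import numpy as np
from scipy import stats

rng = np.random.default_rng(123)
n, reps, mu0 = 20, 100, 10.0

pvals, lowbs, uppbs = [], [], []
for _ in range(reps):
    y = rng.normal(loc=10, scale=15, size=n)             # same population as in the Gretl script
    se = y.std(ddof=1) / np.sqrt(n)                      # standard error of the mean
    tstat = (y.mean() - mu0) / se
    crit = stats.t.ppf(0.975, df=n - 1)                  # 95% two-tailed critical value
    lowbs.append(y.mean() - crit * se)                   # lower CI bound
    uppbs.append(y.mean() + crit * se)                   # upper CI bound
    pvals.append(2 * stats.t.sf(abs(tstat), df=n - 1))   # two-tailed p value

print("CI lower bounds range:", round(min(lowbs), 1), "to", round(max(lowbs), 1))
print("CI upper bounds range:", round(min(uppbs), 1), "to", round(max(uppbs), 1))
print("p value range        :", round(min(pvals), 3), "to", round(max(pvals), 3))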

Neoliberalism and mainstream economics

18 March, 2017 at 10:25 | Posted in Economics | 1 Comment

Oxford professor Simon Wren-Lewis isn’t pleased with heterodox attacks on mainstream economics. One of the reasons is that he doesn’t share the heterodox view that mainstream economics and neoliberal ideas are highly linked.

In a post on his blog, Wren-Lewis defends the mainstream economics establishment against critique waged against it by Phil Mirowski:

Mirowski overestimates the extent to which neoliberal ideas have become “embedded in economic theory”, and underestimates the power that economic theory and evidence can have over even those academic economists who might have a neoliberal disposition. If the tide of neoliberal thought is going to be turned back, economics is going to be important in making that happen.

Wren-Lewis admits that “Philip Mirowski is a historian who has written a great deal about both the history of economics as a discipline and about neoliberalism” and that Mirowski “knows much more about the history of both subjects than I [W-L] do.”

Fair enough, but there are simple remedies for the lack of knowledge.

Read this essay, where yours truly tries to analyze further — much inspired by the works of Amartya Sen — what kind of philosophical-ideological-political-economic doctrine neoliberalism is, and why it so often comes naturally to mainstream economists to embrace neoliberal ideals.

Or maybe — if your Swedish isn't too rusty … — you could read the book-length argument in Den dystra vetenskapen ('The Dismal Science,' Atlas 2001) for why there has been such a deep and long-standing connection between the dismal science and different varieties of neoliberalism.

One of my favourite books

18 March, 2017 at 09:51 | Posted in Varia | 1 Comment

Well, sort of, at least.

For those of us who can’t get enough of English eccentrics, Brewer’s Rogues, Villains, Eccentrics by William Donaldson is probably the funniest book ever written. I mean, just to take one example, where else would you find an entry like this one?

Carlton, Sydney (1949- ), painter and decorator. Those who argue that bestiality should be treated with understanding had a setback in 1998 when Carlton, a married man from Bradford, was sentenced to a year in prison for having intercourse with a Staffordshire bull terrier named Badger. His defence was that Badger had made the first move. 'I can't help it if the dog took a liking to me,' he told the court. This was not accepted.

Beating a dead horse

17 March, 2017 at 16:27 | Posted in Economics | 2 Comments

If I ask myself what I could legitimately assume a person to have rational expectations about, the technical answer would be, I think, about the realization of a stationary stochastic process, such as the outcome of the toss of a coin or anything that can be modeled as the outcome of a random process that is stationary. I don't think that the economic implications of the outbreak of World War II were regarded by most people as the realization of a stationary stochastic process. In that case, the concept of rational expectations does not make any sense. Similarly, the major innovations cannot be thought of as the outcome of a random process. In that case the probability calculus does not apply.

Robert Solow

'Modern' macroeconomic theories are as a rule founded on the assumption of rational expectations — where the world evolves in accordance with fully predetermined models in which uncertainty has been reduced to stochastic risk describable by some probabilistic distribution.

The tiny little problem that there is no hard empirical evidence that verifies these models — cf. Michael Lovell (1986) & Nikolay Gertchev (2007) — usually doesn’t bother its protagonists too much. Rational expectations überpriest Thomas Sargent has the following to say on the epistemological status of the rational expectations hypothesis:

Partly because it focuses on outcomes and does not pretend to have behavioral content, the hypothesis of rational expectations has proved to be a powerful tool for making precise statements about complicated dynamic economic systems.

Precise, yes, in the celestial world of models. But relevant and realistic? I’ll be dipped!

Feynman’s trick and Leibniz rule (student stuff)

16 March, 2017 at 15:52 | Posted in Economics | Comments Off on Feynman’s trick and Leibniz rule (student stuff)

In his autobiography Surely You're Joking, Mr. Feynman!, Richard Feynman among other things discussed his box of tools, and mentioned a wonderful little tool by which he was able to differentiate under the integral sign more easily than by the methods usually taught at our universities:

The more standard procedure is well described here:
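For reference, here is the general Leibniz rule together with the classic textbook example of Feynman's trick — a standard illustration of my own choosing, not taken from the videos:

$$\frac{d}{dx}\int_{a(x)}^{b(x)} f(x,t)\,dt \;=\; f\bigl(x,b(x)\bigr)\,b'(x)\;-\;f\bigl(x,a(x)\bigr)\,a'(x)\;+\;\int_{a(x)}^{b(x)}\frac{\partial f}{\partial x}(x,t)\,dt .$$

For example, to evaluate

$$I(a)=\int_0^1\frac{x^a-1}{\ln x}\,dx ,\qquad a>-1 ,$$

differentiate under the integral sign with respect to the parameter $a$:

$$I'(a)=\int_0^1 x^a\,dx=\frac{1}{a+1} ,$$

and since $I(0)=0$, it follows that $I(a)=\ln(a+1)$.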

What can economists know?

16 March, 2017 at 13:02 | Posted in Economics | 2 Comments

The early concerns voiced by such critics as Keynes and Hayek, while they may indeed have been exaggerated, were not misplaced. I believe that much of the difficulty economists have encountered over the past fifty years can be traced to the fact that the economic environment we seek to model are sometimes too messy to be fitted into the mold of a well-behaved, complete model of the standard kind. It is not generally the case that some sharp dividing line separates a set of important systematic influences that we can measure, proxy, or control for, from the many small unsystematic influences that we can bundle into a 'noise' term. So when we set out to test economic theories in the framework of the standard paradigm, we face quite serious and deep-seated difficulties. The problem of model selection may be such that the embedded test ends up being inconclusive, or unpersuasive.

Although advances have been made using a modern empiricist approach in modern economics, there are still some unsolved 'problematics' with its epistemological and ontological presuppositions. There is, e.g., an implicit assumption that the data generating process (DGP) fundamentally has an invariant property and that models that are structurally unstable just have not been able to get hold of that invariance. But one cannot just presuppose or take for granted that kind of invariance. It has to be argued and justified. Grounds have to be given for viewing reality as satisfying conditions of model-closure. It is as if the lack of closure that shows up in the form of structurally unstable models somehow could be solved by searching for more autonomous and invariable 'atomic uniformity.' But whether reality is 'congruent' with this analytical prerequisite has to be argued for, and not simply taken for granted.

Even granted that closures come in degrees, we should not compromise on ontology. Some methods simply introduce improper closures, closures that make the disjuncture between models and real world target systems inappropriately large. Garbage in, garbage out.

Underlying the search for these immutable 'fundamentals' lies the implicit view of the world as consisting of material entities with their own separate and invariable effects. These entities are thought of as being able to be treated as separate and additive causes, thereby making it possible to infer complex interaction from knowledge of individual constituents with limited independent variety. But whether this is a justified analytical procedure cannot be answered without confronting it with the nature of the objects the models are supposed to describe, explain or predict.

Nothing compares

16 March, 2017 at 12:49 | Posted in Varia | Comments Off on Nothing compares


This one is in loving memory of Kristina, beloved wife and mother of David and Tora.

But in dreams,
I can hear your name.
And in dreams,
We will meet again.

When the seas and mountains fall
And we come to end of days,
In the dark I hear a call
Calling me there
I will go there
And back again.

The Euler-Lagrange equation (student stuff)

14 March, 2017 at 22:53 | Posted in Economics | 1 Comment


Excellent lecture.

And here's another one, just as excellent!
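For reference, the result these lectures derive — a minimal textbook statement of my own, not a transcript of the videos: for a functional

$$J[y]=\int_{x_1}^{x_2} L\bigl(x,\,y(x),\,y'(x)\bigr)\,dx ,$$

a smooth extremal $y(x)$ must satisfy the Euler-Lagrange equation

$$\frac{\partial L}{\partial y}-\frac{d}{dx}\frac{\partial L}{\partial y'}=0 .$$

For example, with $L=\sqrt{1+y'^2}$ (arc length), $\partial L/\partial y=0$, so $y'/\sqrt{1+y'^2}$ is constant along an extremal and the shortest paths between two points are straight lines.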

Factor analysis — like telling time with a stopped clock

14 March, 2017 at 17:33 | Posted in Statistics & Econometrics | 1 Comment

Exploratory factor analysis exploits correlations to summarize data, and confirmatory factor analysis — stuff like testing that the right partial correlations vanish — is a prudent way of checking whether a model with latent variables could possibly be right. What the modern g-mongers do, however, is try to use exploratory factor analysis to uncover hidden causal structures. I am very, very interested in the latter pursuit, and if factor analysis was a solution I would embrace it gladly. But if factor analysis was a solution, when my students asked me (as they inevitably do) "so, how do we know how many factors we need?", I would be able to do more than point them to rules of thumb based on squinting at "scree plots" like this and guessing where the slope begins. (There are ways of estimating the intrinsic dimension of noisily-sampled manifolds, but that's not at all the same.) More broadly, factor analysis is part of a larger circle of ideas which all more or less boil down to some combination of least squares, linear regression and singular value decomposition, which are used in the overwhelming majority of work in quantitative social science, including, very much, work which tries to draw causal inferences without the benefit of experiments. A natural question — but one almost never asked by users of these tools — is whether they are reliable instruments of causal inference. The answer, unequivocally, is "no".

I will push extra hard, once again, Clark Glymour’s paper on The Bell Curve, which patiently explains why these tools are just not up to the job of causal inference … The conclusions people reach with such methods may be right and may be wrong, but you basically can’t tell which from their reports, because their methods are unreliable.

This is why I said that using factor analysis to find causal structure is like telling time with a stopped clock. It is, occasionally, right. Maybe the clock stopped at 12, and looking at its face inspires you to look at the sun and see that it’s near its zenith, and look at shadows and see that they’re short, and confirm that it’s near noon. Maybe you’d not have thought to do those things otherwise; but the clock gives no evidence that it’s near noon, and becomes no more reliable when it’s too cloudy for you to look at the sun.

Cosma Shalizi

Reasons to reject the standard rationality axioms in economics

13 March, 2017 at 16:22 | Posted in Economics | Comments Off on Reasons to reject the standard rationality axioms in economics

Those axioms—and you can find a whole series of people from Pareto onwards who make the same argument—come from economists’ introspection and what they think is necessary for their work, not from observation of what people are doing.


Some of these axioms seem natural, at least at first sight. For example, transitivity seems a natural idea—if you prefer A to B and B to C, you also prefer A to C. But if you look carefully at how economists define the things over which you are making choices, you could never observe whether or not an individual is making transitive choices.

My main problem is that none of these axioms is taken by observing lots and lots of people. In other disciplines, that is what you do. You look and then you try to develop a model which might explain what you observe. In economics, we started out by doing the formalization and building models which were internally consistent but often far removed from reality. To construct models which we could analyse formally, we needed to make some formal assumptions … These assumptions are somehow not natural, they are not about what people do, they are more about what we need in order to pursue our analysis. So that is my real objection.

What do you replace that with? Do you just say that people just make arbitrary random choices? Well of course not. The argument I would make would be that, in some sense, people see directions in which they think their welfare improves, and they try to move in those directions. A simple way to model this is to give simple rules to agents that you find plausible and then look at how that works. In such a model, people are not irrational, but rationality must have a much more open definition …

One of the problems that we find is that people have now somehow absorbed the economist’s notion of rationality, so that when people say ‘rational’, they immediately have in mind something like what economists define as rationality. In fact, rationality can be thought of in many different ways. Rationality for me would mean something more like coherent or interpretable behaviour; behaviour that is not just random.

Alan Kirman

When yours truly gave graduate courses in microeconomics back in the 1980s, Alan Kirman's books were self-evidently on the reading list.

They still are.

Edward Snowden — unbroken and unconquerable

13 March, 2017 at 09:08 | Posted in Varia | Comments Off on Edward Snowden — unbroken and unconquerable


This one is for you.
Bravest of the brave.
Never give in.
Never give up.

Modern macroeconomics — a walk down a blind alley

12 March, 2017 at 20:46 | Posted in Economics | Comments Off on Modern macroeconomics — a walk down a blind alley

I would say that macroeconomic theory has gone down a blind alley in the sense that we have locked onto a particular model: general equilibrium. But it is not really general equilibrium, I mean, it is a one-man model! In particular, it has become mathematically sophisticated without representing the fundamental features of the macro-economy.

So I would say that people like Kydland and Prescott, and so forth, people like that … changed the way that people do macroeconomics. But in my view it was not a positive change … One predominant idea is that of external shocks—and in particular the idea that the shocks that happen to the economy should essentially be the technological shocks. As Joe Stiglitz said, what could we mean by a negative technological shock? That people forget what they could do before?

So we have this idea that we have a system which is in equilibrium and that every now and then it gets knocked off the equilibrium by ‘a shock’. But shocks are part of the system! We have gone down a track that actually does not allow us to say much about the real, major movements in the macro-economy … We should be studying non-normal periods, instead of normal ones, because that is what causes real problems. And we do not do that.

So my vision of the state of macroeconomics is that it somehow has the wrong view: an equilibrium view and a stationary state view. But what is important and interesting about macroeconomics is precisely when those two things do not hold. How can you talk of equilibrium when we move from 5% unemployment to 10% unemployment? If you are in Chicago, you say "Well, those extra 5% have made the calculation that it was better for them to be out of work". But look at the reality; that is not what happens. People do not want to be out of work … Millions of people are out of work, and we are not worried about that?

That is the major failure in macroeconomics. It does not address the serious problems that we face when we get out of equilibrium. And we are out of equilibrium most of the time.

Alan Kirman

Yours truly is extremely fond of economists like Alan Kirman. With razor-sharp intellects they immediately go for the essentials. They have no time for bullshit. And neither should we.
