Master class

31 March, 2017 at 20:14 | Posted in Economics | Comments Off on Master class


Elzbieta Towarnicka — with a totally unbelievable and absolutely fabulous voice.

This is as good as it gets in the world of music.



Prayer

31 March, 2017 at 18:55 | Posted in Politics & Society | Comments Off on Prayer

This one is for all of you, brothers and sisters, fighting oppression, struggling to survive, and risking your lives on your long walk to freedom. May God be with you.

Min värld är fattig och död när barnasinnet berövats sin glöd

31 March, 2017 at 17:01 | Posted in Varia | Comments Off on Min värld är fattig och död när barnasinnet berövats sin glöd

Poetry and music in beautiful union.
Hansson de Wolfe United — a phenomenon without equal in Swedish pop music.

Probability and economics

30 March, 2017 at 16:02 | Posted in Economics | 2 Comments

Modern mainstream (neoclassical) economics relies to a large degree on the notion of probability.

To be amenable to applied economic analysis at all, economic observations allegedly have to be conceived of as random events that are analyzable within a probabilistic framework.

But is it really necessary to model the economic system as a system where randomness can only be analyzed and understood when based on an a priori notion of probability?

When attempting to convince us of the necessity of founding empirical economic analysis on probability models, neoclassical economics actually forces us to (implicitly) interpret events as random variables generated by an underlying probability density function.

This is at odds with reality. Randomness obviously is a fact of the real world. Probability, on the other hand, attaches (if at all) to the world via intellectually constructed models, and a fortiori is only a fact of a probability generating (nomological) machine or a well constructed experimental arrangement or ‘chance set-up.’

Just as there is no such thing as a ‘free lunch,’ there is no such thing as a ‘free probability.’

To be able to talk about probabilities at all, you have to specify a model. If there is no chance set-up or model that generates the probabilistic outcomes or events – in statistics, any process in which you observe or measure is referred to as an experiment (rolling a die), and the results obtained as the outcomes or events of the experiment (the number of points rolled with the die, e.g. 3 or 5) – then, strictly seen, there is no event at all.
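The point about chance set-ups can be made concrete with a minimal simulation. The sketch below (in Python, assuming a fair die as the specified model) shows how a relative frequency only becomes interpretable as a probability once the generating model has been written down:

```python
import random

random.seed(42)

# The 'chance set-up': the experiment is rolling a fair six-sided die,
# the outcome is the number of points rolled.
def roll_die():
    return random.randint(1, 6)

n = 100_000
rolls = [roll_die() for _ in range(n)]

# The relative frequency of rolling a 5 is only interpretable because
# the generating model (a fair die) has been specified first.
freq_of_5 = rolls.count(5) / n
print(f"relative frequency of 5: {freq_of_5:.4f} (model probability: {1/6:.4f})")
```

Here the probability 1/6 attaches to the model – the specified chance set-up – not to the world; the observed frequency acquires meaning only against that model.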

Probability is a relational element. It always must come with a specification of the model from which it is calculated. And then, to be of any empirical scientific value, it has to be shown to coincide with (or at least converge to) real data-generating processes or structures – something seldom or never done.

And this is the basic problem with economic data. If you have a fair roulette wheel, you can arguably specify probabilities and probability density distributions. But how do you conceive of the analogous nomological machines for prices, gross domestic product, income distribution, etc.? Only by a leap of faith. And that does not suffice. You have to come up with some really good arguments if you want to persuade people to believe in the existence of socio-economic structures that generate data with characteristics conceivable as stochastic events portrayed by probabilistic density distributions.

We simply have to admit that the socio-economic states of nature that we talk of in most social sciences – and certainly in economics – are not amenable to analysis in terms of probabilities, simply because in real-world open systems there are no probabilities to be had!

The processes that generate socio-economic data in the real world cannot just be assumed to always be adequately captured by a probability measure. And, so, it cannot be maintained that it even should be mandatory to treat observations and data – whether cross-section, time series or panel data – as events generated by some probability model. The important activities of most economic agents do not usually include throwing dice or spinning roulette-wheels. Data generating processes – at least outside of nomological machines like dice and roulette-wheels – are not self-evidently best modeled with probability measures.

If we agree on this, we also have to admit that much of modern neoclassical economics lacks sound foundations.

When economists and econometricians – uncritically and without arguments – simply assume that one can apply probability distributions from statistical theory to their own area of research, they are really skating on thin ice.

Mathematics (by which I shall mean pure mathematics) has no grip on the real world; if probability is to deal with the real world it must contain elements outside mathematics; the meaning of ‘probability’ must relate to the real world, and there must be one or more ‘primitive’ propositions about the real world, from which we can then proceed deductively (i.e. mathematically). We will suppose (as we may by lumping several primitive propositions together) that there is just one primitive proposition, the ‘probability axiom’, and we will call it A for short. Although it has got to be true, A is by the nature of the case incapable of deductive proof, for the sufficient reason that it is about the real world …

We will begin with the … school which I will call philosophical. This attacks directly the ‘real’ probability problem; what are the axiom A and the meaning of ‘probability’ to be, and how can we justify A? It will be instructive to consider the attempt called the ‘frequency theory’. It is natural to believe that if (with the natural reservations) an act like throwing a die is repeated n times the proportion of 6’s will, with certainty, tend to a limit, p say, as n goes to infinity … If we take this proposition as ‘A’ we can at least settle off-hand the other problem, of the meaning of probability; we define its measure for the event in question to be the number p. But for the rest this A takes us nowhere. Suppose we throw 1000 times and wish to know what to expect. Is 1000 large enough for the convergence to have got under way, and how far? A does not say. We have, then, to add to it something about the rate of convergence. Now an A cannot assert a certainty about a particular number n of throws, such as ‘the proportion of 6’s will certainly be within p ± e for large enough n (the largeness depending on e)’. It can only say ‘the proportion will lie between p ± e with at least such and such probability (depending on e and n*) whenever n > n*’. The vicious circle is apparent. We have not merely failed to justify a workable A; we have failed even to state one which would work if its truth were granted. It is generally agreed that the frequency theory won’t work. But whatever the theory it is clear that the vicious circle is very deep-seated: certainty being impossible, whatever A is made to state can only be in terms of ‘probability’.

John Edensor Littlewood 
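Littlewood's point about the rate of convergence can be made concrete by simulation. In the sketch below (a hypothetical simulated die, not Littlewood's own example), the 1000-throw experiment is repeated many times: the proportion of sixes lies near 1/6 in most runs, but only with some probability, never with certainty.

```python
import random

random.seed(0)

def proportion_of_sixes(n_throws):
    """One run of the experiment: throw a fair die n_throws times."""
    return sum(random.randint(1, 6) == 6 for _ in range(n_throws)) / n_throws

# Repeat the 1000-throw experiment many times and check how often the
# observed proportion falls within epsilon of p = 1/6.
p, eps = 1 / 6, 0.02
n_throws, n_repeats = 1000, 2000
props = [proportion_of_sixes(n_throws) for _ in range(n_repeats)]
share_within = sum(abs(q - p) <= eps for q in props) / n_repeats

# The statement 'the proportion lies within p +- eps' holds only with
# some probability, never with certainty: Littlewood's vicious circle.
print(f"share of runs with |proportion - 1/6| <= {eps}: {share_within:.3f}")
```

Any guarantee one states about the n = 1000 case is itself couched in terms of probability – which is exactly the circularity Littlewood describes.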

Importantly, this also means that if you cannot show that your data satisfy all the conditions of the probabilistic nomological machine, then the statistical inferences made in mainstream economics lack sound foundations!


The problem with unjustified assumptions

29 March, 2017 at 16:12 | Posted in Economics, Statistics & Econometrics | Comments Off on The problem with unjustified assumptions

An ongoing concern is that excessive focus on formal modeling and statistics can lead to neglect of practical issues and to overconfidence in formal results … Analysis interpretation depends on contextual judgments about how reality is to be mapped onto the model, and how the formal analysis results are to be mapped back into reality. But overconfidence in formal outputs is only to be expected when much labor has gone into deductive reasoning. First, there is a need to feel the labor was justified, and one way to do so is to believe the formal deduction produced important conclusions. Second, there seems to be a pervasive human aversion to uncertainty, and one way to reduce feelings of uncertainty is to invest faith in deduction as a sufficient guide to truth. Unfortunately, such faith is as logically unjustified as any religious creed, since a deduction produces certainty about the real world only when its assumptions about the real world are certain …

Unfortunately, assumption uncertainty reduces the status of deductions and statistical computations to exercises in hypothetical reasoning – they provide best-case scenarios of what we could infer from specific data (which are assumed to have only specific, known problems). Even more unfortunate, however, is that this exercise is deceptive to the extent it ignores or misrepresents available information, and makes hidden assumptions that are unsupported by data …

Despite assumption uncertainties, modelers often express only the uncertainties derived within their modeling assumptions, sometimes to disastrous consequences. Econometrics supplies dramatic cautionary examples in which complex modeling has failed miserably in important applications …

Sander Greenland

Yes, indeed, econometrics fails miserably over and over again. One reason why it does is that the error terms in the regression models used are thought of as representing the effect of the variables that were omitted from the models. The error term is somehow thought of as a ‘cover-all’ term representing omitted content in the model, necessary to include to ‘save’ the assumed deterministic relation between the other random variables included in the model. Error terms are usually assumed to be orthogonal to (uncorrelated with) the explanatory variables. But since they are unobservable, they are also impossible to test empirically. And without justification of the orthogonality assumption, there is as a rule nothing to ensure identifiability:
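What goes wrong when the orthogonality assumption fails can be shown in a few lines of simulation. In this sketch (all variable names and coefficients hypothetical), an omitted variable w is correlated with the regressor x; once w is shoved into the error term, that term is no longer orthogonal to x and the estimated slope is biased:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# True data-generating process: y = 2*x + 1*w + noise, where the
# (later omitted) variable w is correlated with the regressor x.
w = rng.normal(size=n)
x = 0.8 * w + rng.normal(size=n)           # x and w are correlated
y = 2.0 * x + 1.0 * w + rng.normal(size=n)

# Regress y on x alone: w ends up in the error term, which is then
# NOT orthogonal to x, and the OLS slope is biased away from 2.0.
slope = np.cov(x, y)[0, 1] / np.var(x, ddof=1)
print(f"estimated slope: {slope:.3f} (true causal coefficient: 2.0)")
```

The crucial point is that nothing in the observed (x, y) data reveals this bias: the orthogonality violation lives entirely in the unobservable error term.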

With enough math, an author can be confident that most readers will never figure out where a FWUTV (facts with unknown truth value) is buried. A discussant or referee cannot say that an identification assumption is not credible if they cannot figure out what it is and are too embarrassed to ask.

Distributional assumptions about error terms are a good place to bury things because hardly anyone pays attention to them. Moreover, if a critic does see that this is the identifying assumption, how can she win an argument about the true expected value of the level of aether? If the author can make up an imaginary variable, “because I say so” seems like a pretty convincing answer to any question about its properties.

Paul Romer

Don’t leave me this way

29 March, 2017 at 15:02 | Posted in Varia | 1 Comment


Mainstream flimflam defender Wren-Lewis gets it wrong — again!

26 March, 2017 at 20:56 | Posted in Economics | 4 Comments

Again and again, Oxford professor Simon Wren-Lewis rides out to defend orthodox macroeconomic theory against attacks from heterodox critics.

A couple of years ago, it was rational expectations, microfoundations, and representative agent modeling he wanted to save.

And now he is back with new flimflamming against heterodox attacks and pluralist demands from economics students all over the world:

Attacks [against mainstream economics] are far from progressive.

[D]evoting a lot of time to exposing students to contrasting economic frameworks (feminist, Austrian, post-Keynesian) to give them a range of ways to think about the economy, as suggested here, means cutting time spent on learning the essential tools that any economist needs … [E]conomics is a vocational subject, not a liberal arts subject …

This is the mistake that progressives make. They think that by challenging mainstream economics they will somehow make the economic arguments for regressive policies go away. They will not go away. Instead all you have done is thrown away the chance of challenging those arguments on their own ground, using the strength of an objective empirical science …

Economics, as someone once said, is a separate and inexact science. That it is a science, with a mainstream that has areas of agreement and areas of disagreement, is its strength. It is what allows economists to claim that some things are knowledge, and should be treated as such. Turn it into separate schools of thought, and it degenerates into sets of separate opinions. There is plenty wrong with mainstream economics, but replacing it with schools of thought is not the progressive endeavor that some believe. It would just give you more idiotic policies …

Mainstream economics is here depicted by Wren-Lewis as nothing but “essential tools that any economist needs.” Not a theory among other competing theories. Not “separate schools of thought,” but an “objective empirical science” capable of producing “knowledge.”

I’ll be dipped!

Reading that kind of nonsense one has to wonder if this guy is for real!

Wren-Lewis always tries hard to give a picture of modern macroeconomics as a pluralist enterprise. But the change and diversity that get Wren-Lewis’s approval only take place within the analytic-formalistic modeling strategy that makes up the core of mainstream economics. You’re free to take your analytical formalist models and apply them to whatever you want — as long as you do it with a modeling methodology that is acceptable to the mainstream. If you do not follow this particular mathematical-deductive analytical formalism, you’re not even considered to be doing economics. If you haven’t modeled your thoughts, you’re not in the economics business. But this isn’t pluralism. It’s a methodological reductionist straitjacket.

Validly deducing things from patently unreal assumptions — that we all know are purely fictional — makes most of the modeling exercises pursued by mainstream macroeconomists rather pointless. It’s simply not the stuff that real understanding and explanation in science is made of. Had mainstream economists like Wren-Lewis not been so in love with their models, they would have perceived this too. Telling us that the plethora of models that make up modern macroeconomics are not right or wrong, but just more or less applicable to different situations, is nothing short of hand waving.

Wren-Lewis seems to have no problem with the lack of fundamental diversity — of anything beyond path-dependent elaborations of the mainstream canon — and the vanishingly little real-world relevance that characterize modern mainstream macroeconomics. And he obviously shares the view that there is nothing basically wrong with ‘standard theory.’ As long as policy makers and economists stick to ‘standard economic analysis’ everything is just fine. Economics is just a common language and method that makes us think straight, reach correct answers, and produce ‘knowledge.’

Like his mainstream colleagues Paul Krugman and Greg Mankiw, Wren-Lewis is a mainstream neoclassical economist fanatically defending the insistence on using an axiomatic-deductive economic modeling strategy. To yours truly, this attitude is nothing but a late confirmation of Alfred North Whitehead’s complaint that “the self-confidence of learned people is the comic tragedy of civilization.”

Contrary to what Wren-Lewis seems to argue, I would say the recent economic and financial crises, and the fact that mainstream economics has had next to nothing to contribute to understanding them, show that mainstream economics is a degenerative research program in dire need of replacement.

No matter how precise and rigorous the analysis is, and no matter how hard one tries to cast the argument in modern ‘the model is the message’ form, mainstream economists like Wren-Lewis do not push economic science forward one millimeter, since they simply do not pass the acid test of relevance to the target. No matter how clear, precise, rigorous or certain the inferences delivered inside their mainstream models are, they do not per se say anything about real-world economies.


Added March 27: Brad DeLong isn’t too happy either about some of Wren-Lewis’ dodgings:

Simon needs to face that fact squarely, rather than to dodge it. The fact is that the “mainstream economists, and most mainstream economists” who were heard in the public sphere were not against austerity, but rather split, with, if anything, louder and larger voices on the pro-austerity side. (IMHO, Simon Wren-Lewis half admits this with his denunciations of “City economists”.) When Unlearning Economics seeks the destruction of “mainstream economics”, he seeks the end of an intellectual hegemony that gives Reinhart and Rogoff’s very shaky arguments a much more powerful institutional intellectual voice by virtue of their authors’ tenured posts at Harvard than the arguments in fact deserve. Simon Wren-Lewis, in response, wants to claim that strengthening the “mainstream” would somehow diminish the influence of future Reinharts and Rogoffs in analogous situations. But the arguments for austerity that turned out to be powerful and persuasive in the public sphere came from inside the house!

Textbooks problem — teaching the wrong things all too well

25 March, 2017 at 16:01 | Posted in Statistics & Econometrics | 2 Comments

It is well known that even experienced scientists routinely misinterpret p-values in all sorts of ways, including confusion of statistical and practical significance, treating non-rejection as acceptance of the null hypothesis, and interpreting the p-value as some sort of replication probability or as the posterior probability that the null hypothesis is true …

It is shocking that these errors seem so hard-wired into statisticians’ thinking, and this suggests that our profession really needs to look at how it teaches the interpretation of statistical inferences. The problem does not seem just to be technical misunderstandings; rather, statistical analysis is being asked to do something that it simply can’t do, to bring out a signal from any data, no matter how noisy. We suspect that, to make progress in pedagogy, statisticians will have to give up some of the claims we have implicitly been making about the effectiveness of our methods …

It would be nice if the statistics profession was offering a good solution to the significance testing problem and we just needed to convey it more clearly. But, no, … many statisticians misunderstand the core ideas too. It might be a good idea for other reasons to recommend that students take more statistics classes—but this won’t solve the problems if textbooks point in the wrong direction and instructors don’t understand what they are teaching. To put it another way, it’s not that we’re teaching the right thing poorly; unfortunately, we’ve been teaching the wrong thing all too well.

Andrew Gelman & John Carlin

Teaching both statistics and economics, yours truly can’t but notice that the statements “give up some of the claims we have implicitly been making about the effectiveness of our methods” and “it’s not that we’re teaching the right thing poorly; unfortunately, we’ve been teaching the wrong thing all too well” obviously apply not only to statistics …

And the solution? Certainly not — as Gelman and Carlin also underline — to reform p-values. Instead we have to accept that we live in a world permeated by genuine uncertainty and that it takes a lot of variation to make good inductive inferences.

Sounds familiar? It definitely should!

The standard view in statistics – and the axiomatic probability theory underlying it – is to a large extent based on the rather simplistic idea that ‘more is better.’ But as Keynes argues in his seminal A Treatise on Probability (1921), ‘more of the same’ is not what is important when making inductive inferences. It’s rather a question of ‘more but different’ — i.e., variation.

Variation, not replication, is at the core of induction. Finding that p(x|y) = p(x|y & w) doesn’t make w ‘irrelevant.’ Knowing that the probability is unchanged when w is present gives p(x|y & w) another evidential weight (‘weight of argument’). Running 10 replicative experiments does not make you as ‘sure’ of your inductions as running 10 000 varied experiments – even if the probability values happen to be the same.
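Keynes's 'weight of argument' has no simple formal stand-in, but a crude proxy for one aspect of it can be sketched: two estimates of the same probability can coincide while the evidential backing behind them differs enormously. The example below is a hypothetical Bernoulli set-up (it illustrates the amount, not the variation, of evidence, so it captures only part of Keynes's point):

```python
import math
import random

random.seed(7)

def estimate(n_trials, p_true=0.5):
    """Estimate a probability from n_trials Bernoulli observations."""
    successes = sum(random.random() < p_true for _ in range(n_trials))
    p_hat = successes / n_trials
    half_width = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n_trials)
    return p_hat, half_width

small_p, small_hw = estimate(10)
large_p, large_hw = estimate(10_000)

# The point estimates can be close, but the evidential weight behind
# them is very different, as the interval half-widths show.
print(f"n=10:     p_hat={small_p:.2f} +- {small_hw:.2f}")
print(f"n=10000:  p_hat={large_p:.3f} +- {large_hw:.3f}")
```

A single probability number, detached from the evidence behind it, conveys nothing about how much confidence it deserves – which is precisely Keynes's complaint.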

According to Keynes we live in a world permeated by unmeasurable uncertainty – not quantifiable stochastic risk – which often forces us to make decisions based on anything but ‘rational expectations.’ Keynes rather thinks that we base our expectations on the confidence or ‘weight’ we put on different events and alternatives. To Keynes, expectations are a question of weighing probabilities by ‘degrees of belief,’ beliefs that often have precious little to do with the kind of stochastic probabilistic calculations made by the rational agents modeled by ‘modern’ social sciences. And often we ‘simply do not know.’

Heterodoxy — necessary for the renewal of economics

24 March, 2017 at 08:20 | Posted in Economics | Comments Off on Heterodoxy — necessary for the renewal of economics


A sense of failure is, for all intents and purposes, being translated into a context of relative success requiring more limited changes – though these are still being seen as significant. Part of the reason that they are seen as significant is that changes from within mainstream economics do not have to be major in order to appear radical. It is our contention that heterodox economics is being marginalised in this process of ‘change’ and that this is to the detriment of the positive potential for transforming the discipline …

Marginalising heterodoxy creates problems for teaching economics as a discipline in which economists constructively disagree and can be in error. This is important because it is through a conformity that suppresses a continual and diverse critical awareness that economics becomes a dangerous discourse prone to lack of realism, complacency, and dogmatism. Marginalising heterodoxy reduces the potential realisation of the different components of economics one might expect to be transformed as part of a project to transform the discipline …

Highlighting the points we have may seem like simple griping by a special interest. But there is far more involved than that. Remember we are talking about the failure of a discipline and how it is to be transformed. The marginalisation of heterodoxy has real consequences. In a general sense the marginalisation creates manifest problems that hamper teaching economics in a plural and critically aware way. For example, the marginalisation promotes a Whig history approach. It is also important to bear in mind that heterodoxy is a natural home of pluralism and of critical thinking in economics … Unlike the mainstream, heterodoxy does not have to be made compatible with pluralism and with critical thinking; it is predisposed to these and is already a resource for their development. So, marginalising heterodoxy really does narrow the base by which the discipline seeks to be renewed. That narrowing contributes to restricting the potential for good teaching in economics (including the profoundly important matter of how economists disagree and how they can be in error).

The Association for Heterodox Economics

Analogue economies and reality

23 March, 2017 at 00:54 | Posted in Economics | 2 Comments

Modelling by the construction of analogue economies is a widespread technique in economic theory nowadays … As Lucas urges, the important point about analogue economies is that everything is known about them … and within them the propositions we are interested in ‘can be formulated rigorously and shown to be valid’ … For these constructed economies, our views about what will happen are ‘statements of verifiable fact.’

The method of verification is deduction … We are however, faced with a trade-off: we can have totally verifiable results but only about economies that are not real …

How then do these analogue economies relate to the real economies that we are supposed to be theorizing about? … My overall suspicion is that the way deductivity is achieved in economic models may undermine the possibility to teach genuine truths about empirical reality.

