Krugman on math and economics

22 Nov, 2018 at 00:26 | Posted in Economics | 3 Comments

Yours truly has no problem with Krugman’s views on the use of math in the video in his recent tweet. I have said similar things myself for decades.

But there is a BUT here …

At other times Krugman — although admitting that economists have a tendency to use “excessive math” and to “equate hard math with quality” — has vehemently defended the mathematization of economics:

I’ve seen quite a lot of what economics without math and models looks like — and it’s not good.

And when it comes to modeling philosophy, Krugman has more than once defended a ‘the model is the message’ position (my italics):

I don’t mean that setting up and working out microfounded models is a waste of time. On the contrary, trying to embed your ideas in a microfounded model can be a very useful exercise — not because the microfounded model is right, or even better than an ad hoc model, but because it forces you to think harder about your assumptions, and sometimes leads to clearer thinking. In fact, I’ve had that experience several times.

For years Krugman has in more than one article criticized mainstream economists for using too much (bad) mathematics and axiomatics in their model-building endeavours. But when it comes to defending his own position on various issues, he usually ends up falling back on the same kind of models himself. In his End This Depression Now — just to take one example — Krugman maintains that although he doesn’t buy “the assumptions about rationality and markets that are embodied in many modern theoretical models, my own included,” he still finds them useful “as a way of thinking through some issues carefully.” When it comes to methodology and assumptions, Krugman obviously has a lot in common with the kind of model-building he otherwise — sometimes — criticizes.

My advice to Krugman: stick with Marshall and ‘burn the math’!

Die erschöpfte Gesellschaft

21 Nov, 2018 at 23:34 | Posted in Politics & Society | Comments Off on Die erschöpfte Gesellschaft

 

P-values are no substitute for thinking

21 Nov, 2018 at 22:18 | Posted in Statistics & Econometrics | Comments Off on P-values are no substitute for thinking

 

A non-trivial part of statistics education is made up of teaching students to perform significance testing. A problem I have noticed repeatedly over the years, however, is that no matter how careful you try to be in explicating what the probabilities generated by these statistical tests really are, most students still misinterpret them.

This is not to be blamed on students’ ignorance, but rather on significance testing not being particularly transparent (conditional probability inference is difficult even for those of us who teach and practise it). A lot of researchers fall prey to the same mistakes.

If anything, the above video underlines how important it is not to equate science with statistical calculation. All science entails human judgement, and using statistical models doesn’t relieve us of that necessity. When we work with misspecified models, the scientific value of significance testing is actually zero — even though the statistical inferences themselves may be valid! Statistical models and concomitant significance tests are no substitutes for doing real science.

In its standard form, a significance test is not the kind of ‘severe test’ that we are looking for when we want to confirm or disconfirm empirical scientific hypotheses. This is problematic for many reasons, one being a strong tendency to accept the null hypothesis simply because it cannot be rejected at the standard 5% significance level. In their standard form, significance tests thus bias against new hypotheses by making it hard to disconfirm the null hypothesis.

And as shown over and over again when it is applied, people have a tendency to read “not disconfirmed” as “probably confirmed.” Standard scientific methodology tells us that when there is only, say, a 10% probability that pure sampling error could account for the observed difference between the data and the null hypothesis, it would be more “reasonable” to conclude that we have a case of disconfirmation. Especially if we perform many independent tests of our hypothesis and they all give the same 10% result as our reported one, I guess most researchers would count the hypothesis as even more disconfirmed.
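The intuition that several independent tests, each coming in around p = 0.10, jointly constitute strong disconfirmation can be made concrete with Fisher’s method for combining p-values. A minimal sketch in Python (the closed-form chi-square tail used here is exact for even degrees of freedom, so no statistics library is needed):

```python
import math

def fisher_combined_p(pvalues):
    """Fisher's method: -2 * sum(ln p_i) follows a chi-square distribution
    with 2k degrees of freedom under the joint null hypothesis. For even
    df = 2k the survival function has the closed form
    exp(-x/2) * sum_{i<k} (x/2)^i / i!."""
    k = len(pvalues)
    x = -2.0 * sum(math.log(p) for p in pvalues)
    half = x / 2.0
    return math.exp(-half) * sum(half**i / math.factorial(i) for i in range(k))

# Five independent tests, each reporting p = 0.10:
combined = fisher_combined_p([0.10] * 5)
print(round(combined, 4))  # roughly 0.011 -- far stronger evidence than any single test
```

Five ‘weak’ 10% results combine to a joint p of about one percent, which is just the “even more disconfirmed” verdict described above.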

Most importantly — we should never forget that the underlying parameters we use when performing significance tests are model constructions. Our p-values mean next to nothing if the model is wrong. Statistical significance tests DO NOT validate models!

In journal articles a typical regression equation will have an intercept and several explanatory variables. The regression output will usually include an F-test, with p-1 degrees of freedom in the numerator and n-p in the denominator. The null hypothesis will not be stated. The missing null hypothesis is that all the coefficients vanish, except the intercept.

If F is significant, that is often thought to validate the model. Mistake. The F-test takes the model as given. Significance only means this: if the model is right and the coefficients are 0, it is very unlikely to get such a big F-statistic. Logically, there are three possibilities on the table:
i) An unlikely event occurred.
ii) Or the model is right and some of the coefficients differ from 0.
iii) Or the model is wrong.
So?
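The quoted point, that a significant F-statistic does not validate the model, is easy to check by simulation. A sketch in Python on made-up data: fit a deliberately wrong straight-line model to data generated by a quadratic relation, and the F-test comes out hugely ‘significant’ while the residuals still betray the misspecification:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x = rng.uniform(0, 2, n)
y = x**2 + rng.normal(0, 0.1, n)   # the true relation is quadratic

# Fit the (wrong) linear model y = a + b*x by ordinary least squares.
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# F-test of the null that the slope is zero (p = 2 parameters).
r2 = 1 - resid.var() / y.var()
p = 2
F = (r2 / (p - 1)) / ((1 - r2) / (n - p))
print(F > 100)                     # True: a hugely 'significant' F-statistic ...

# ... yet the model is wrong -- the residuals are systematically curved in x,
# which is exactly Freedman's possibility (iii).
curvature = np.corrcoef(resid, (x - x.mean())**2)[0, 1]
print(abs(curvature) > 0.5)        # True: strong leftover quadratic structure
```

The F-test took the linear model as given and merely rejected a zero slope; it never asked whether the model itself was right.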

Why Minsky matters

21 Nov, 2018 at 14:04 | Posted in Economics | Comments Off on Why Minsky matters

Listen to BBC 4, where Duncan Weldon tries to explain in what way Hyman Minsky’s thoughts on banking and finance offer a radical challenge to mainstream economic theory.

As a young research fellow in the U.S., yours truly had the great pleasure and privilege of having Hyman Minsky as a teacher. He was a great inspiration at the time. He still is.

Gender pay gap and transparency

20 Nov, 2018 at 12:42 | Posted in Economics | Comments Off on Gender pay gap and transparency

It is often assumed that more transparency could remedy this: if everyone knew what everyone earns, these differences would no longer be tenable. Those who have lost out would protest, demand more, and thus bring about more equality and fairness. Since 6 January of this year, employees have had the right to find out how much colleagues of the other sex earn. (Provided the firm has at least 200 employees and there are six or more people doing an equivalent job.) This pay transparency law is meant to ensure that pay differences between men and women shrink.

It does not seem to be quite that simple, however. In two studies, Zoe Cullen, a professor of business administration at Harvard University, has now come to the conclusion that more income transparency does not automatically help. Not only does it by no means lead to a convergence of salaries; overall it even resulted in lower wages, and to a considerable extent of seven to 25 percent.

How can that be? The main reason is that under transparent conditions there are fewer outliers at the top: when nobody knows who earns how much, employers do not hesitate to pay employees they are keen to keep more than others. If that became generally known, however, it could cause resentment in the workforce. So firms forgo the especially high salaries. This effect apparently more than offsets any wage increases aimed at equalizing pay.

Die Zeit

Johan Asplund

19 Nov, 2018 at 23:22 | Posted in Politics & Society | Comments Off on Johan Asplund

One of Sweden’s foremost and most inventive sociologists, Johan Asplund (1937-2018), has passed away.

When yours truly, as a young student in the 1970s, took a few terms of sociology, one of my great sources of inspiration was Johan Asplund’s wonderful little classic Om mättnadsprocesser (Argos, 1967).

I think the feeling many of us have about the “quality improvements” of social development is not a little one of “rolling out the red carpet and rolling it back in behind us.” Asplund’s reflections on satiation processes also offer an interesting perspective:

As long as a fashion is fresh, one generally does best by joining it. Only when it has undergone a satiation process does it pay to come forward with a new programme … The Beatles lost their popularity in certain groups in precisely the way all glory usually passes: through satiation processes … [R]esearcher weariness is the same phenomenon as satiation processes – in rats … [T]he extremely specialized scientist has only one form of activity, and the satiation process should therefore be rapid. He has also put himself in a situation where he cannot alternate his behaviour … A society without satiation processes would be a desolate land, frozen in an eternal contentment with the present state of things. It would be a millennial kingdom.

Not least, Asplund taught me how necessary the time dimension ought to be in economic theory, a lesson I carried with me as an economist and economic historian. There are no time-independent preferences.

On health and inequality

19 Nov, 2018 at 16:26 | Posted in Economics, Politics & Society | Comments Off on On health and inequality

 

Mainstream economics and the We-Have-To-Do-Something fallacy

17 Nov, 2018 at 15:49 | Posted in Economics | 5 Comments

Twenty-five years ago, Phil Mirowski was invited to give a speech on themes from his book More Heat than Light at my old economics department in Lund, Sweden. All the mainstream professors were there. Their theories were totally mangled and no one — absolutely no one — had anything to say even remotely reminiscent of a defence. Nonplussed, one of them finally asked in total desperation: “But what shall we do then?”

Yes indeed — what shall they do? Because, sure, they have to do something. Or do they?

Not all fallacies are what they seem … To avoid fallacy, we always must take the information or evidence supplied as given and concrete, as sacrosanct, even, just as we do in any mathematical problem … Perhaps the worst fallacy is the We-Have-To-Do-Something fallacy. Interest centers around some proposition Y, about which little is known. The suggestion will arise that some inferior, or itself fallacious, method be used to judge Y because action on Y must take place. The inferiority or fallaciousness of the method used to judge Y will then be forgotten. You may think that is rare. It isn’t.

Blacklisted economics professor found dead

17 Nov, 2018 at 14:13 | Posted in Economics | 3 Comments

Professor Outis Philalithopoulos was found dead in his home three days ago; the coroner’s report cited natural causes that were left unspecified. Unfortunately, all of the professor’s academic work has disappeared; the only trace left appears to be the following letter, which he sent to an admirer shortly before his death. The understandably concerned recipient of the letter has shared its contents with Naked Capitalism, and has insisted that her identity be protected.

Dear * * *,

Reading your generous letter was an unexpectedly encouraging experience. I rarely feel that others truly understand the purport of my theories, but when I see a high school student such as yourself navigate her way through the vilifications that surround my work, it makes me want to redouble my efforts to explain my ideas to a larger audience.

How did you become the most courageous economics professor of our time?
Really, you are far too kind. I never thought of myself as anyone out of the ordinary while working as a young PhD on technical questions in Public Choice theory. As you probably know, Public Choice is the pathbreaking theory that demystified the decisions of politicians, showing that they act rationally in order to maximize their own economic benefits.

Soon after receiving tenure, it occurred to me that we were being profoundly inconsistent. While we had correctly criticized the previous mainstream view that politics involved benevolent efforts to serve the common good, we had failed to apply the same rigor to the community of academic economists. As a result, we were modeling both economic and political actors as self-interested utility-maximizing agents, while continuing to see economics professors as idealistic pursuers of truth. I decided to correct this oversight by developing my theory of Academic Choice, in which economists are theorized as rational agents who continually seek to maximize their future earnings potential.

The way I would describe Academic Choice theory is that it is “the sociology of economists, without romance.” Is this right?
What an insightful comment. As you say, Academic Choice theory is a descriptive project, with no normative orientation. We apply a critical approach in order to counterbalance pervasive earlier notions of economists as scientific heroes struggling against popular ignorance in order to serve the common good …

Isn’t it offensive to assume that economists, for motives of personal gain, shade their theoretical allegiances in the directions preferred by powerful interest groups?
How could it ever be offensive to assume that a person acts rationally in pursuit of maximizing his or her own utility? I’m afraid I don’t understand this question.

Is there a “behavioral” version of Academic Choice theory, in which the basic premises are enriched by the possibility that economists sometimes act irrationally?
Great question. One of my students developed just such a theory – he postulated that economists sometimes do act benevolently, but they have access to limited information and are subject to cognitive biases …

However, while his dissertation was unquestionably a valuable contribution to the literature, I am personally convinced that the original Academic Choice theory is more empirically realistic. Studies have shown that many people do act irrationally, but not economists …

Yves Smith/Naked Capitalism

Was heißt es in einem reichen Land arm zu sein?

17 Nov, 2018 at 13:46 | Posted in Politics & Society | Comments Off on Was heißt es in einem reichen Land arm zu sein?

#Unten illustrates what German educational research as a whole has demonstrated anew year after year. Origin determines, particularly in Germany, one’s path in life; those born at the bottom, as a rule, stay there. Social advancement becomes an unattainable goal. Yet on social networks it is above all the upwardly mobile who report their experiences under the hashtag. Those who know how to express themselves, often academically educated, who are thus the exception and not the rule. And it is precisely against them that the criticism is now directed. “What do you actually want?” people say. “You made it, after all. Stop making such a fuss.”

It is all the more remarkable, then, that it is now the upwardly mobile who are making their experiences a public topic. After all, they were always said to have distanced themselves from their origins as quickly as possible after their ascent. They trained away their dialect and spoke little about their childhood. With #unten, this “shame of origin” (Didier Eribon) now gives way to a self-confident speaking about experiences of devaluation and discrimination. And where the humbled become accusers, emancipation begins.

Robert Pausch/Die Zeit

The New Classical counterrevolution

17 Nov, 2018 at 09:52 | Posted in Economics | 3 Comments

In a post on his blog, Oxford macroeconomist Simon Wren-Lewis discusses whether modern academic macroeconomics is eclectic or not. When it comes to methodology, his conclusion seems to be that it is not:

The New Classical Counter Revolution of the 1970s and 1980s … was primarily a revolution about methodology, about arguing that all models should be microfounded, and in terms of mainstream macro it was completely successful … Mainstream academic macro is very eclectic in the range of policy questions it can address, and conclusions it can arrive at, but in terms of methodology it is quite the opposite.

In an earlier post he elaborated on why the New Classical Counterrevolution was so successful in replacing older theories, despite the fact that the New Classical models weren’t able to explain what happened to output and inflation in the 1970s and 1980s:

The new theoretical ideas New Classical economists brought to the table were impressive, particularly to those just schooled in graduate micro. Rational expectations is the clearest example …

If mainstream academic macroeconomists were seduced by anything, it was a methodology — a way of doing the subject which appeared closer to what at least some of their microeconomic colleagues were doing at the time, and which was very different to the methodology of macroeconomics before the New Classical Counterrevolution. The old methodology was eclectic and messy, juggling the competing claims of data and theory. The new methodology was rigorous!

Wren-Lewis seems to be impressed by the ‘rigour’ brought to macroeconomics by the New Classical counterrevolution and its rational expectations, microfoundations and ‘Lucas Critique’.

I fail to see why.

Wren-Lewis’ portrayal of rational expectations is not as innocent as it may look. Rational expectations in the mainstream economists’ world imply that relevant distributions have to be time-independent. This amounts to assuming that an economy is a closed system with known stochastic probability distributions for all different events. In reality, it strains one’s beliefs to try to represent economies as outcomes of stochastic processes. An existing economy is a single realization tout court, and hardly conceivable as one realization out of an ensemble of economy-worlds, since an economy can hardly be conceived as being completely replicated over time.

The similarity between these modelling assumptions and the expectations of real persons is vanishingly small. In the world of the rational expectations hypothesis, we are never disappointed in any other way than when we lose at the roulette wheel. But real life is not an urn or a roulette wheel. That is also why allowing for cases where agents ‘make predictable errors’ in the New Keynesian models doesn’t take us any closer to a relevant and realist depiction of actual economic decisions and behaviours. If we really want to have anything of interest to say about real economies, financial crises and the decisions and choices real people make, we have to replace the rational expectations hypothesis with more relevant and realistic assumptions concerning economic agents and their expectations than childish roulette and urn analogies.
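The gap between a single realization and an ensemble of economy-worlds can be illustrated with a toy multiplicative process (a sketch only, with made-up numbers, not a model of any actual economy): the ensemble average grows every period, yet almost every individual trajectory decays.

```python
import numpy as np

rng = np.random.default_rng(1)
steps, n_paths = 1000, 2000
# Each period, 'wealth' is multiplied by 1.5 or 0.6 with equal probability.
factors = rng.choice([1.5, 0.6], size=(n_paths, steps))

ensemble_growth = factors.mean()                  # estimates E[f] = 1.05 > 1
time_avg_growth = np.exp(np.log(factors).mean())  # estimates exp(E[ln f]) ~ 0.95 < 1

print(ensemble_growth > 1)   # True: the 'ensemble of economy-worlds' grows on average ...
print(time_avg_growth < 1)   # True: ... yet almost every single trajectory decays
```

Statistics computed across the ensemble say nothing reliable about the one path actually lived through, which is precisely why treating an economy as one draw from a known stochastic ensemble is so treacherous.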

‘Rigorous’ and ‘precise’ New Classical or ‘New Keynesian’ models cannot be considered anything other than unsubstantiated conjectures as long as they aren’t supported by evidence from outside the theory or model. To my knowledge, no decisive empirical evidence has ever been presented.

The failed attempt to anchor the analysis in the allegedly stable deep parameters ‘tastes’ and ‘technology’ shows that if you neglect ontological considerations pertaining to real-world economies, reality ultimately gets its revenge when questions of bridging and exporting the model exercises are at last laid on the table.


Mainstream economists are proud of having an ever-growing smorgasbord of models to cherry-pick from (as long as, of course, the models do not question the standard modelling strategy) when performing their analyses. The ‘rigorous’ and ‘precise’ deductions made in these closed models, however, are not in any way matched by a similar stringency or precision when it comes to what ought to be the most important stage of any research: making statements about, and explaining things in, real economies. Although almost every mainstream economist holds the view that thought-experimental modelling has to be followed by confronting the models with reality — which is what they indirectly want to predict/explain/understand using their models — at that point they suddenly become exceedingly vague and imprecise. It is as if all the intellectual force has been invested in the modelling stage, with nothing left for what really matters: what exactly these models teach us about real economies.

No matter how precise and rigorous the analysis, and no matter how hard one tries to cast the argument in modern mathematical form, it does not push economic science forward a single millimetre if it does not stand the acid test of relevance to the target.

Proving things ‘rigorously’ in mathematical models is at most a starting point for doing an interesting and relevant economic analysis. Forgetting to supply export warrants to the real world makes the analysis an empty exercise in formalism without real scientific value.

My favourite girls (personal)

15 Nov, 2018 at 16:33 | Posted in Varia | Comments Off on My favourite girls (personal)

 

Hedda (5), Linnea (19), and Tora (25)

Being a class mongrel

15 Nov, 2018 at 14:05 | Posted in Politics & Society | 3 Comments

We were working class, and you don’t lose that. Later on, I bolted on middle classness but I think the working-class thing hasn’t gone away and it never will go away. Quite a few of my interactions and responses are still the responses I had when I was 18 or 19. And the other things are bolted on and it is a mix. It is what it is, and a lot of people are like that. I’m a class mongrel.

Melvyn Bragg

Most people think of social mobility as something unproblematically positive. Sharing much the same experience as the one Bragg describes, I find it difficult to share that sentiment. Becoming — basically through educational prowess — part of the powers and classes that for centuries have oppressed and belittled the working classes can be a rather mixed experience. As a rags-to-riches traveller, you always find yourself somewhere in between the world you are leaving and the world you are entering. Moving up the social ladder does not erase your past. Forget that, and rest assured there are others more than happy to remind you. The social mobility many of us who grew up in the 1950s and ’60s experienced only underscores that the real freedom of the working classes has to transcend the individual. It has to be a collective endeavour, whereby we rise with our class and not out of it.

Kalecki and Keynes on the loanable funds fallacy

14 Nov, 2018 at 23:17 | Posted in Economics | 8 Comments

It should be emphasized that the equality between savings and investment … will be valid under all circumstances. In particular, it will be independent of the level of the rate of interest which was customarily considered in economic theory to be the factor equilibrating the demand for and supply of new capital. In the present conception investment, once carried out, automatically provides the savings necessary to finance it. Indeed, in our simplified model, profits in a given period are the direct outcome of capitalists’ consumption and investment in that period. If investment increases by a certain amount, savings out of profits are pro tanto higher …

One important consequence of the above is that the rate of interest cannot be determined by the demand for and supply of new capital because investment ‘finances itself.’

The loanable funds theory is in many regards nothing but an approach where the ruling rate of interest in society is — pure and simple — conceived as nothing else than the price of loans or credits set by banks and determined by supply and demand — as Bertil Ohlin put it — “in the same way as the price of eggs and strawberries on a village market.”

It is a beautiful fairy tale, but the problem is that banks are not barter institutions that transfer pre-existing loanable funds from depositors to borrowers. Why? Because, in the real world, there simply are no pre-existing loanable funds. Banks create new funds — credit — only when someone takes on new debt! Banks are monetary institutions, not barter vehicles.
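The point can be illustrated with a toy balance sheet (a purely hypothetical sketch, not any real bank’s accounting): granting a loan creates a matching deposit on the spot, so no pre-existing pool of savings is needed.

```python
# Toy balance sheet of a single bank (all numbers invented for illustration).
bank = {"assets": {"loans": 0}, "liabilities": {"deposits": 0}}

def make_loan(bank, amount):
    """Granting a loan credits the borrower's deposit account at the same
    moment: the deposit is created by the loan, not transferred from some
    pre-existing fund of savings."""
    bank["assets"]["loans"] += amount
    bank["liabilities"]["deposits"] += amount

make_loan(bank, 100)
print(bank["assets"]["loans"], bank["liabilities"]["deposits"])  # 100 100
# Both sides of the balance sheet expanded together: no saver was needed first.
```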

In the traditional loanable funds theory — as presented in mainstream macroeconomics textbooks — the amount of loans and credit available for financing investment is constrained by how much saving is available. Saving is the supply of loanable funds; investment is the demand for loanable funds and is assumed to be negatively related to the interest rate. In this story, lowering households’ consumption increases saving, which pushes the interest rate down and thereby stimulates investment.
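The textbook story being summarized here can be sketched with stylized, made-up supply and demand schedules for funds, solved for the interest rate that is supposed to clear the ‘market for funds’:

```python
def saving(r):      # supply of loanable funds, increasing in the interest rate
    return 10 + 200 * r

def investment(r):  # demand for loanable funds, decreasing in the interest rate
    return 40 - 400 * r

# The textbook equilibrium: solve 10 + 200*r = 40 - 400*r  =>  600*r = 30.
r_star = 30 / 600
print(r_star)                                             # 0.05, i.e. a 5% rate
print(abs(saving(r_star) - investment(r_star)) < 1e-9)    # True: S = I at r*
```

This is exactly the eggs-and-strawberries picture Ohlin described, with the interest rate doing all the equilibrating work.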

That view has been shown to have very little to do with reality. It’s nothing but an otherworldly neoclassical fantasy. But there are many other problems as well with the standard presentation and formalization of the loanable funds theory:

As already noticed by James Meade decades ago, the causal story told to explicate the accounting identities used gives the picture of “a dog called saving wagged its tail labelled investment.” In Keynes’s view — and later over and over again confirmed by empirical research — it’s not so much the interest rate at which firms can borrow that causally determines the amount of investment undertaken, but rather their internal funds, profit expectations and capacity utilization.

As is typical of most mainstream macroeconomic formalizations and models, there is precious little mention of real-world phenomena, such as money, credit rationing and the existence of multiple interest rates, in the loanable funds theory. Loanable funds theory essentially reduces modern monetary economies to something akin to barter systems, which they definitely are not. As Minsky in particular emphasized, to understand and explain how much investment/loaning/crediting is going on in an economy, it is much more important to focus on the workings of financial markets than to stare at accounting identities like S = Y – C – G. The problems we meet in modern markets today have more to do with inadequate financial institutions than with the size of loanable-funds-savings.

The loanable funds theory in the ‘New Keynesian’ approach means that the interest rate is endogenized by assuming that central banks can (try to) adjust it in response to an eventual output gap. This, of course, is essentially nothing but an assumption that Walras’ law is valid and applicable, and that a fortiori the attainment of equilibrium is secured by the central banks’ interest rate adjustments. From a realist Keynes-Minsky point of view, this cannot be considered anything other than a belief resting on nothing but sheer hope. [Not to mention that more and more central banks actually choose not to follow Taylor-like policy rules.] The age-old belief that central banks control the money supply has more and more come to be questioned and replaced by an ‘endogenous money’ view, and I think the same will happen to the view that central banks determine “the” rate of interest.

A further problem in the traditional loanable funds theory is that it assumes that saving and investment can be treated as independent entities. This is seriously wrong:

The classical theory of the rate of interest [the loanable funds theory] seems to suppose that, if the demand curve for capital shifts or if the curve relating the rate of interest to the amounts saved out of a given income shifts or if both these curves shift, the new rate of interest will be given by the point of intersection of the new positions of the two curves. But this is a nonsense theory. For the assumption that income is constant is inconsistent with the assumption that these two curves can shift independently of one another. If either of them shifts, then, in general, income will change; with the result that the whole schematism based on the assumption of a given income breaks down … In truth, the classical theory has not been alive to the relevance of changes in the level of income or to the possibility of the level of income being actually a function of the rate of investment.

There are always (at least) two parties to an economic transaction. Savers and investors have different liquidity preferences and face different choices — and their interactions usually take place only as intermediated by financial institutions. This, importantly, also means that there is no ‘direct and immediate’ automatic interest mechanism at work in modern monetary economies. What this ultimately boils down to is — again — that what happens at the microeconomic level, both in and out of equilibrium, is not always compatible with the macroeconomic outcome. The fallacy of composition (the ‘atomistic fallacy’ of Keynes) has many faces — loanable funds is one of them.

Contrary to the loanable funds theory, finance in the world of Keynes and Minsky precedes investment and saving. Highlighting the loanable funds fallacy, Keynes wrote in “The Process of Capital Formation” (1939):

Increased investment will always be accompanied by increased saving, but it can never be preceded by it. Dishoarding and credit expansion provides not an alternative to increased saving, but a necessary preparation for it. It is the parent, not the twin, of increased saving.

What is ‘forgotten’ in the loanable funds theory is the insight that finance — in all its different shapes — has its own dimension, and, if taken seriously, its effect on an analysis must modify the whole theoretical system and not just be added as an unsystematic appendage. Finance is fundamental to our understanding of modern economies, and acting like the baker’s apprentice who, having forgotten to add yeast to the dough, throws it into the oven afterwards simply isn’t enough.

All real economic activities nowadays depend on a functioning financial machinery. But institutional arrangements, states of confidence, fundamental uncertainties, asymmetric expectations, the banking system, financial intermediation, loan granting processes, default risks, liquidity constraints, aggregate debt, cash flow fluctuations, etc., etc. — things that play decisive roles in channelling money/savings/credit — are more or less left in the dark in modern formalizations of the loanable funds theory.

So, yes, the ‘secular stagnation’ will be over as soon as we free ourselves from the loanable funds theory — and from scholastic gibbering about the ZLB — and start using good old Keynesian fiscal policies.

In search of causality

14 Nov, 2018 at 14:41 | Posted in Statistics & Econometrics | 3 Comments


One of the few statisticians that yours truly has on the blogroll is Andrew Gelman. Although I do not share his Bayesian leanings, I find his open-minded, thought-provoking and non-dogmatic statistical thinking highly recommendable. The plaidoyer below for ‘reverse causal questioning’ is typical Gelmanian:

When statistical and econometric methodologists write about causal inference, they generally focus on forward causal questions. We are taught to answer questions of the type “What if?”, rather than “Why?” Following the work by Rubin (1977), causal questions are typically framed in terms of manipulations: if x were changed by one unit, how much would y be expected to change? But reverse causal questions are important too … In many ways, it is the reverse causal questions that motivate the research, including experiments and observational studies, that we use to answer the forward questions …

Reverse causal reasoning is different; it involves asking questions and searching for new variables that might not yet even be in our model. We can frame reverse causal questions as model checking. It goes like this: what we see is some pattern in the world that needs an explanation. What does it mean to “need an explanation”? It means that existing explanations — the existing model of the phenomenon — do not do the job …

By formalizing reverse causal reasoning within the process of data analysis, we hope to make a step toward connecting our statistical reasoning to the ways that we naturally think and talk about causality. This is consistent with views such as Cartwright (2007) that causal inference in reality is more complex than is captured in any theory of inference … What we are really suggesting is a way of talking about reverse causal questions in a way that is complementary to, rather than outside of, the mainstream formalisms of statistics and econometrics.

In a time when scientific relativism is expanding, it is important to uphold the claim that science cannot be reduced to a purely discursive level. We have to maintain the Enlightenment tradition of thinking of reality as principally independent of our views of it, and of the main task of science as studying the structure of this reality. Perhaps the most important contribution a researcher can make is to reveal what this reality that is the object of science actually looks like.

Science is made possible by the fact that there are structures that are durable and are independent of our knowledge or beliefs about them. There exists a reality beyond our theories and concepts of it. It is this independent reality that our theories in some way deal with. Contrary to positivism, I would as a critical realist argue that the main task of science is not to detect event-regularities between observed facts. Rather, that task must be conceived as identifying the underlying structures and forces that produce the observed events.


In Gelman’s essay there is no explicit argument for abduction — inference to the best explanation — but I would still argue that it is de facto nothing but a very strong argument for why scientific realism and inference to the best explanation are the best alternatives for explaining what is going on in the world we live in. The focus on causality, model checking, anomalies and context-dependence — although here expressed in statistical terms — is as close to abductive reasoning as we get in statistics and econometrics today.

Kalecki on wage-led growth

13 Nov, 2018 at 11:33 | Posted in Economics | Comments Off on Kalecki on wage-led growth

One of the main features of the capitalist system is the fact that what is to the advantage of a single entrepreneur does not necessarily benefit all entrepreneurs as a class. If one entrepreneur reduces wages he is able ceteris paribus to expand production; but once all entrepreneurs do the same thing — the result will be entirely different.

Let us assume that wages have been in fact generally reduced … and in consequence unemployment vanishes. Has depression thus been overcome? By no means, as the goods produced have still to be sold … A precondition for an equilibrium at this new higher level is that the part of production which is not consumed by workers or by civil servants should be acquired by capitalists for their increased profits; in other words, the capitalists must spend immediately all their additional profits on consumption or investment. It is however most unlikely that this should happen … It is true that increased profitability stimulates investment but this stimulus will not work right away since the entrepreneurs will temporise until they are convinced that higher profitability is going to last … A reduction of wages does not constitute a way out of depression, because the gains are not used immediately by the capitalists for purchase of investment goods.

The power of self-belief …

13 Nov, 2018 at 11:24 | Posted in Varia | 3 Comments

 

Instead of therapy (personal)

12 Nov, 2018 at 16:00 | Posted in Varia | Comments Off on Instead of therapy (personal)

In every modern person’s life there has to be time for breathing space and reflection. And sometimes, when all the possible and impossible musts and demands from our surroundings simply become too many and too loud, it can be good to withdraw a little and slow down for a while.

We all have our own ways of doing this. Personally, I like to go to Öppet Arkiv and watch Gubben i stugan, Nina Hedenius’ wonderfully fine documentary about the life of Ragnar, a retired forestry worker in the Finn-forests of Dalarna.

Simple. Beautiful. A balm for the soul.

Causal models and heterogeneity (wonkish)

12 Nov, 2018 at 13:50 | Posted in Statistics & Econometrics | Comments Off on Causal models and heterogeneity (wonkish)

In The Book of Why, Judea Pearl advances several weighty reasons why the now so popular causal graph-theoretic approach is preferable to more traditional regression-based explanatory models. One of them is that causal graphs are non-parametric and therefore do not have to assume, for example, additivity and/or the absence of interaction effects: arrows and nodes replace regression analysis’ obligatory specifications of the functional relations between the variables entering the equations.

But even if Pearl and other adherents of the graph-theoretic approach mostly emphasize the advantages of the flexibility the new tool gives us, there are also clear risks and drawbacks to the use of causal graphs. The lack of clarity about whether additivity, interaction, or other variable and relational characteristics are present, and how in that case they are to be specified, can sometimes create more problems than it solves.

Many of the problems, just as with regression analysis, are connected with the presence and degree of heterogeneity. Let me take an example from the field of education research to illustrate the point.

A question recurrently posed in recent years by both politicians and researchers (see e.g. here and here) is whether independent schools (‘friskolor’) raise the level of knowledge and the test scores of the country’s pupils. To be able to answer this (in reality very difficult) causal question, we need knowledge of a multitude of known, observable variables and background factors (parents’ income and education, ethnicity, housing, etc.), and, in addition, of factors that we know matter but that are unobservable and/or more or less unmeasurable.

The problems begin as soon as we ask what lies behind the general term ‘independent school’. Not all independent schools are equivalent (homogeneity); we know that there are often large differences between them (heterogeneity). To lump them all together and try to answer the causal question without taking these differences into account is often pointless, and sometimes completely misleading.

Another problem is that a different kind of heterogeneity, one that has to do with the specification of the functional relations, may turn up. Suppose that the independent-school effect is connected with, say, ethnicity, and that pupils with a ‘Swedish background’ perform better than pupils with an ‘immigrant background.’ This need not mean that pupils with different ethnic backgrounds are themselves affected differently by attending an independent school. The effect may instead stem from the fact that the alternative municipal schools available to the immigrant pupils were worse than those available to the ‘Swedish’ pupils. If one does not take these differences in the basis of comparison into account, the estimated independent-school effects become misleading.

Further heterogeneity problems arise if the mechanisms at work in producing the independent-school effect look substantially different for different groups of pupils. Independent schools with a ‘focus’ on immigrant groups may, for example, be more aware of the need to support these pupils and take compensatory measures against prejudice and the like. Beyond the effects of the (presumed) better teaching at independent schools in general, the effects for this category of pupils are then also an effect of the heterogeneity just described, and will consequently not coincide with those for the other group of pupils.

Unfortunately, the problems do not end here. We are also confronted with a hard-to-solve and often overlooked selectivity problem. When we want to answer the causal question about the effects of independent schools, a common procedure in regression analysis is to ‘hold constant’ or ‘control’ for influencing factors other than the ones we are primarily interested in. In the case of independent schools, a common control variable is the parents’ income or educational background. The logic is that we should thereby be able to simulate an (ideal) situation resembling a randomized experiment as closely as possible, in which we only ‘compare’ (match) pupils whose parents have comparable education or income, and thus hope to obtain a better measure of the ‘pure’ independent-school effect. The crux is that within each income and education category there may lurk a further, sometimes hidden and perhaps unmeasurable, heterogeneity, having to do with, for example, attitude and motivation, which makes some pupils tend to choose (select) independent schools because they believe they will perform better there than at municipal schools (in the independent-school debate, a recurrent argument about the segregation effects is that pupils whose parents have high ‘socio-economic status’ have better access to information about the effects of school choice than other pupils do). The income or education variable may thus de facto ‘mask’ other factors that sometimes play a more decisive role. The estimates of the independent-school effect may therefore, once again, be misleading, and sometimes even more misleading than if we had not ‘held constant’ any control variable at all (cf. the ‘second-best’ theorem in welfare economics)!

‘Controlling’ for possible ‘confounders’ is thus not always self-evidently the right way to go. If the very relation between independent school (X) and study results (Y) is affected by the introduction of the control variable ‘socio-economic status’ (W), this is probably the result of there being some kind of association between X and W. It also means that we do not have an ideal ‘experiment simulation,’ since there obviously are factors that affect Y and that are not randomly distributed (randomized). Before we can proceed, we must then ask why the association in question exists. To be able to causally explain the relation between X and Y, we must know more about how W affects the choice of X. Among other things, we may then find that the choice of X differs between different parts of the group with high ‘socio-economic status’ W. Without knowledge of this selection mechanism, we cannot reliably measure the effect of X on Y; the randomized explanatory model is simply not applicable. Without knowing why there is an association between X and W, and what it looks like, the ‘controlling’ does not help us, since it does not allow for the operative selection mechanism.

Beyond the problems touched upon here, we have other long-familiar problems. The so-called context or group effect (for a pupil attending an independent school, the results may partly be an effect of her schoolmates having a similar background, so that she in some sense benefits from her environment in a way she would not at a municipal school) again means that eliminating ‘confounders’ via control variables does not self-evidently work when there is an association between the control variable and unmeasurable or hard-to-measure unobservable attributes that themselves affect the dependent variable. In our school example, one may assume that parents of a given socio-economic status who send their children to independent schools differ from parents in the same group who let their children attend municipal schools. The control variables, once again, do not function as full substitutes for the randomized assignment of a real experiment.

Am I right in thinking that the method of multiple correlation analysis essentially depends on the economist having furnished, not merely a list of the significant causes, which is correct so far as it goes, but a complete list? For example, suppose three factors are taken into account, it is not enough that these should be in fact vera causa; there must be no other significant factor. If there is a further factor, not taken account of, then the method is not able to discover the relative quantitative importance of the first three. If so, this means that the method is only applicable where the economist is able to provide beforehand a correct and indubitably complete analysis of the significant factors. The method is one neither of discovery nor of criticism. It is a means of giving quantitative precision to what, in qualitative terms, we know already as the result of a complete theoretical analysis …

John Maynard Keynes

As regards the use of control variables, one must also not overlook an important aspect that is seldom touched upon by those who use these statistical methods. The variables included in the studies are treated ‘as if’ the relations between them in the population were random. But variables may in fact have the values they have precisely because of the consequences they give rise to. The outcome thus to some extent determines why the ‘independent’ variables have the values they have. The ‘randomized’ independent variables turn out to be something other than what they are assumed to be, which also makes it impossible for observational studies and quasi-experiments to come even close to being real experiments. Things often look the way they do for a reason. Sometimes the reasons are precisely the consequences that rules, institutions and other factors are anticipated to give rise to! What is taken to be ‘exogenous’ is in fact not ‘exogenous’ at all.

Those variables that have been left outside of the causal system may not actually operate as assumed; they may produce effects that are nonrandom and that may become confounded with those of the variables directly under consideration.

Hubert Blalock

What conclusion do we draw from all this, then? Causality is hard, and despite the criticism we should of course not throw the baby out with the bathwater. But a healthy scepticism and caution in assessing the ability of statistical methods, whether causal graph theory or more traditional regression analysis, to really establish causal relations is definitely to be recommended.
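The selectivity problem discussed above can be made concrete with a small simulation (my own sketch; all names and numbers are invented for illustration): the true causal effect of the independent school is set to zero, pupils select into it partly on unobserved motivation, and yet ‘controlling’ for socio-economic status still yields a positive estimated ‘effect’ within every stratum.

```python
import random

random.seed(1)
n = 20000

rows = []
for _ in range(n):
    ses = random.random() < 0.5        # high socio-economic status?
    motivation = random.gauss(0, 1)    # unobserved by the analyst
    # Selection: motivated (and high-SES) pupils choose the independent school.
    p_free = 0.2 + 0.3 * ses + 0.15 * motivation
    free = random.random() < min(max(p_free, 0.0), 1.0)
    # True causal effect of the school is ZERO: 'free' does not enter the score.
    score = 50 + 5 * ses + 4 * motivation + random.gauss(0, 3)
    rows.append((ses, free, score))

def mean(xs):
    return sum(xs) / len(xs)

# 'Control' for SES by comparing within each stratum.
for ses in (False, True):
    treated = [s for g, f, s in rows if g == ses and f]
    control = [s for g, f, s in rows if g == ses and not f]
    print(f"high SES={ses}: estimated 'effect' = {mean(treated) - mean(control):.2f}")
# Both strata show a clearly positive 'effect' although the true effect is
# zero: motivation, not the school, drives the gap, and holding SES constant
# cannot remove a selection mechanism operating within each SES category.
```

The point is not that controlling is useless, but that it only substitutes for randomization with respect to the variables actually held constant.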

Truth and probability

11 Nov, 2018 at 13:05 | Posted in Theory of Science & Methodology | 1 Comment

Truth exists, and so does uncertainty. Uncertainty acknowledges the existence of an underlying truth: you cannot be uncertain of nothing: nothing is the complete absence of anything. You are uncertain of something, and if there is some thing, there must be truth. At the very least, it is that this thing exists. Probability, which is the science of uncertainty, therefore aims at truth. Probability presupposes truth; it is a measure or characterization of truth. Probability is not necessarily the quantification of the uncertainty of truth, because not all uncertainty is quantifiable. Probability explains the limitations of our knowledge of truth, it never denies it. Probability is purely epistemological, a matter solely of individual understanding. Probability does not exist in things; it is not a substance. Without truth, there could be no probability.

William Briggs’ approach is — as he acknowledges in the preface of his interesting and thought-provoking book — “closely aligned to Keynes’s.”

Almost a hundred years after John Maynard Keynes wrote his seminal A Treatise on Probability (1921), it is still very difficult to find statistics textbooks that seriously try to incorporate his far-reaching and incisive analysis of induction and evidential weight.

The standard view in statistics — and the axiomatic probability theory underlying it — is to a large extent based on the rather simplistic idea that ‘more is better.’ But as Keynes argues – ‘more of the same’ is not what is important when making inductive inferences. It’s rather a question of ‘more but different.’

Variation, not replication, is at the core of induction. Finding that p(x|y) = p(x|y & w) doesn’t make w ‘irrelevant.’ Knowing that the probability is unchanged when w is present gives p(x|y & w) another evidential weight (‘weight of argument’). Running 10 replicative experiments does not make you as ‘sure’ of your inductions as running 10 000 varied experiments, even if the probability values happen to be the same.
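A minimal numerical sketch of this point (my own, using a uniform-prior Beta-Binomial model, which is an assumption not found in the text): 5 successes in 10 trials and 5 000 in 10 000 yield exactly the same probability value, yet carry very different evidential weight.

```python
from fractions import Fraction

def beta_summary(successes, trials):
    """Posterior Beta(a, b) under a uniform prior; return (mean, sd)."""
    a = Fraction(1 + successes)
    b = Fraction(1 + trials - successes)
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return float(mean), float(var) ** 0.5

m1, s1 = beta_summary(5, 10)          # 10 experiments
m2, s2 = beta_summary(5000, 10000)    # 10 000 experiments

print(m1, m2)  # both exactly 0.5 -- identical probability values
print(s1, s2)  # ~0.139 vs ~0.005 -- very different 'weight of argument'
```

The single number 0.5 is the same in both cases; what differs, and what a lone probability value cannot express, is how much evidence stands behind it.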

According to Keynes we live in a world permeated by unmeasurable uncertainty – not quantifiable stochastic risk – which often forces us to make decisions based on anything but ‘rational expectations.’ Keynes rather thinks that we base our expectations on the confidence or ‘weight’ we put on different events and alternatives. To Keynes, expectations are a question of weighing probabilities by ‘degrees of belief,’ beliefs that often have precious little to do with the kind of stochastic probabilistic calculations made by the rational agents as modelled by “modern” social sciences. And often we ‘simply do not know.’ As Keynes writes in the Treatise:

The kind of fundamental assumption about the character of material laws, on which scientists appear commonly to act, seems to me to be [that] the system of the material universe must consist of bodies … such that each of them exercises its own separate, independent, and invariable effect, a change of the total state being compounded of a number of separate changes each of which is solely due to a separate portion of the preceding state …  In my judgment, the practical usefulness of those modes of inference … on which the boasted knowledge of modern science depends, can only exist … if the universe of phenomena does in fact present those peculiar characteristics of atomism and limited variety which appears more and more clearly as the ultimate result to which material science is tending.

Science according to Keynes should help us penetrate to “the true process of causation lying behind current events” and disclose “the causal forces behind the apparent facts.” Models can never be more than a starting point in that endeavour. He further argued that it was inadmissible to project history onto the future. Consequently, we cannot presuppose that what has worked before, will continue to do so in the future. That statistical models can get hold of correlations between different ‘variables’ is not enough. If they cannot get at the causal structure that generated the data, they are not really ‘identified.’

How strange that writers of statistics textbooks, as a rule, do not even touch upon these aspects of scientific methodology, which seem so fundamental and important for anyone trying to understand how we learn and orient ourselves in an uncertain world. An educated guess as to why this is so would be that Keynes’s concepts cannot be squeezed into a single calculable numerical ‘probability.’ In the quest for quantities one turns a blind eye to qualities and looks the other way – but Keynes’s ideas keep creeping out from under the statistics carpet.

It’s high time that statistics textbooks give Keynes his due.

