Uncertainty and reflexivity — two things missing from Krugman’s economics

18 September, 2014 at 09:36 | Posted in Theory of Science & Methodology | 7 Comments

One thing that’s missing from Krugman’s treatment of useful economics is the explicit recognition of what Keynes, and before him Frank Knight, emphasized: the persistent presence of enormous uncertainty in the economy. Most people most of the time don’t just face quantifiable risks, to be tamed by statistics and probabilistic reasoning. We have to take decisions in the prospect of events, big and small, that we can’t predict even with probabilities. Keynes famously argued that classical economics had no role for money just because it didn’t allow for uncertainty. Knight similarly noted that it made no room for the entrepreneur for the same reason. That to this day standard economic theory continues to rule out money and exclude entrepreneurs may strike the non-economist as odd, to say the least. But there it is. Why is uncertainty so important? Because the more of it there is in the economy, the less scope there is for successful maximizing, and the more unstable are the equilibria the economy exhibits, if it exhibits any at all. Uncertainty is just what the New Classicals neglected when they endorsed the efficient market hypothesis and the Black-Scholes formula for pumping returns out of well-behaved risks.

If uncertainty is an ever-present, pervasive feature of the economy, then we can be confident, along with Krugman, that New Classical models won’t be useful over the long haul. Even if people are perfectly rational, too many uncertain, “exogenous” events will divert each new equilibrium path before it can even get started.

There is a second feature of the economy that Krugman’s useful economics needs to reckon with, one that Keynes, and after him George Soros, emphasized. Along with uncertainty, the economy exhibits pervasive reflexivity: expectations about the economic future tend to actually shift that future. This will be true whether those expectations are those of speculators, regulators, or even garden-variety consumers and producers. Reflexivity is everywhere in the economy, though it is only easily detectable when it goes to extremes, as in bubbles and busts, or regulatory capture …

When combined, uncertainty and reflexivity greatly limit the power of maximizing and equilibrium to do useful economics … Between them, they make the economy a moving target for the economist. Models get into people’s heads and change their behavior, usually in ways that undermine the model’s predictive usefulness.
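The reflexivity mechanism can be sketched in a few lines of code. This is a toy illustration of my own, not Rosenberg’s model — the feedback parameter and all prices are invented. The realized outcome depends partly on what participants expect it to be, so a model’s own published forecast shifts the very outcome the model was built to predict:

```python
# Toy "reflexive" market: the outcome partly tracks fundamentals and
# partly tracks what participants expect. All numbers are illustrative.

def realized_price(fundamental, expectation, feedback=0.5):
    """Price is a weighted mix of fundamentals and market expectations."""
    return (1 - feedback) * fundamental + feedback * expectation

fundamental = 100.0

# A forecasting model fitted while expectations sat at the fundamental
# value predicts the price perfectly ...
baseline = realized_price(fundamental, expectation=100.0)   # 100.0

# ... but once the model publishes a forecast of a boom (say 120) and
# the market adopts it as its expectation, the outcome itself moves,
# and the original model is no longer accurate.
shifted = realized_price(fundamental, expectation=120.0)    # 110.0

print(baseline, shifted)
```

The point of the sketch is only that prediction and outcome are coupled: any model good enough to be believed changes the data-generating process it was estimated on.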

Which models do this and how they work is not a matter of quantifiable risk, but radical uncertainty …

Between them reflexivity and uncertainty make economics into a retrospective, historical science, one whose models—simple or complex—are continually made obsolete by events, and so cannot be improved in the direction of greater predictive power, even by more complication. The way expectations reflexively drive future economic events, and are driven by past ones, is constantly being changed by the intervention of unexpected, uncertain, exogenous ones.

Alex Rosenberg

[h/t Jan Milch]

Rethinking methodology and theory in economics

16 September, 2014 at 07:59 | Posted in Theory of Science & Methodology | Leave a comment

 

Original sin in economics

30 August, 2014 at 13:32 | Posted in Theory of Science & Methodology | 1 Comment

Ever since the Enlightenment various economists had been seeking to mathematise the study of the economy. In this, at least prior to the early years of the twentieth century, economists keen to mathematise their discipline felt constrained in numerous ways, and not least by pressures by (non-social) natural scientists and influential peers to conform to the ‘standards’ and procedures of (non-social) natural science, and thereby abandon any idea of constructing an autonomous tradition of mathematical economics. Especially influential, in due course, was the classical reductionist programme, the idea that all mathematical disciplines should be reduced to or based on the model of physics, in particular on the strictly deterministic approach of mechanics, with its emphasis on methods of infinitesimal calculus …

However, in the early part of the twentieth century, changes occurred in the interpretation of the very nature of mathematics, changes that caused the classical reductionist programme itself to fall into disarray. With the development of relativity theory and especially quantum theory, the image of nature as continuous came to be re-examined in particular, and the role of infinitesimal calculus, which had previously been regarded as having almost ubiquitous relevance within physics, came to be re-examined even within that domain.

The outcome, in effect, was a switch away from the long-standing emphasis on mathematics as an attempt to apply the physics model, and specifically the mechanics metaphor, to an emphasis on mathematics for its own sake.

Mathematics, especially through the work of David Hilbert, became increasingly viewed as a discipline properly concerned with providing a pool of frameworks for possible realities. No longer was mathematics seen as the language of (non-social) nature, abstracted from the study of the latter. Rather, it was conceived as a practice concerned with formulating systems comprising sets of axioms and their deductive consequences, with these systems in effect taking on a life of their own. The task of finding applications was henceforth regarded as being of secondary importance at best, and not of immediate concern.

This emergence of the axiomatic method removed at a stroke various hitherto insurmountable constraints facing those who would mathematise the discipline of economics. Researchers involved with mathematical projects in economics could, for the time being at least, postpone the day of interpreting their preferred axioms and assumptions. There was no longer any need to seek the blessing of mathematicians and physicists or of other economists who might insist that the relevance of metaphors and analogies be established at the outset. In particular it was no longer regarded as necessary, or even relevant, to economic model construction to consider the nature of social reality, at least for the time being. Nor, it seemed, was it possible for anyone to insist with any legitimacy that the formulations of economists conform to any specific model already found to be successful elsewhere (such as the mechanics model in physics). Indeed, the very idea of fixed metaphors or even interpretations, came to be rejected by some economic ‘modellers’ (albeit never in any really plausible manner).

The result was that in due course deductivism in economics, through morphing into mathematical deductivism on the back of developments within the discipline of mathematics, came to acquire a new lease of life, with practitioners (once more) potentially oblivious to any inconsistency between the ontological presuppositions of adopting a mathematical modelling emphasis and the nature of social reality. The consequent rise of mathematical deductivism has culminated in the situation we find today.

Tony Lawson

The core of the discipline of education finally found

29 August, 2014 at 10:32 | Posted in Theory of Science & Methodology | 1 Comment

In the latest issue of Pedagogisk Forskning i Sverige (2-3 2014), the author of the article “En pedagogisk relation mellan människa och häst. På väg mot en pedagogisk filosofisk utforskning av mellanrummet” (“A pedagogical relation between human and horse. Towards a pedagogical-philosophical exploration of the in-between space”) offers the following interesting “programme statement”:

With a posthumanist approach, I illuminate and reflect on how both human and horse transcend their beings and how this opens up an in-between space with dimensions of subjectivity, corporeality and mutuality.

And then people say that the discipline of education is in crisis. One wonders why …

How to prove labour market discrimination

27 August, 2014 at 11:40 | Posted in Theory of Science & Methodology | 1 Comment

A 2005 governmental inquiry led to a trial period involving anonymous job applications in seven public sector workplaces during 2007. In doing so, the public sector aims to improve the recruitment process and to increase the ethnic diversity of its workforce. There is evidence that gender and ethnicity influence the hiring process, even though this is regarded as discrimination under current legislation …

The process of ‘depersonalising’ job applications is to make these applications anonymous. In the case of the Gothenburg trial, certain information about the applicant – such as name, sex, country of origin or other identifiable traits of ethnicity and gender – is hidden during the first phase of the job application procedure. The recruiting managers therefore do not see the full content of applications when deciding on whom to invite for interview. Once a candidate has been selected for interview, this information can then be seen.

The trial involving job applications of this nature in the city of Gothenburg is so far the most extensive in Sweden. For this reason, the Institute for Labour Market Policy Evaluation (IFAU) has carried out an evaluation of the impact of anonymous job applications in Gothenburg …

The data used in the IFAU study derive from three districts in Gothenburg … Information on the 3,529 job applicants and a total of 109 positions were collected from all three districts …

A difference-in-differences model was used to test the findings and to estimate the effects on the outcome variables: whether a difference emerges regarding invitations to interview and job offers in relation to gender and ethnicity in the case of anonymous job applications compared with traditional application procedures.

For job openings where anonymous job applications were applied, the IFAU study reveals that gender and the ethnic origin of the applicant do not affect the probability of being invited for interview. As would be expected from previous research, these factors do have an impact when compared with recruitment processes using traditional application procedures where all the information on the applicant, such as name, sex, country of origin or other identifiable traits of ethnicity and gender, is visible during the first phase of the hiring process. As a result, anonymous applications are estimated to increase the probability of being interviewed regardless of gender and ethnic origin, showing an increase of about 8% for both non-western migrant workers and women.
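The difference-in-differences logic behind the IFAU evaluation can be sketched as follows. This is a minimal illustration, not IFAU’s actual code or data — all interview rates below are invented. The estimated effect is the change in the “treated” districts (anonymous applications) minus the change in the “control” districts (traditional applications), which nets out any common trend:

```python
# Difference-in-differences on group means (illustrative numbers only).

def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """DiD effect = (change in treated group) - (change in control group)."""
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# Hypothetical interview rates for, say, women applicants:
effect = did_estimate(
    treat_pre=0.20,   # treated districts, before anonymisation
    treat_post=0.30,  # treated districts, after
    ctrl_pre=0.21,    # control districts, before
    ctrl_post=0.23,   # control districts, after
)
print(effect)  # ≈ 0.08, i.e. roughly an 8-percentage-point increase
```

In practice the same estimate comes out of a regression with a treatment-by-period interaction term, which also yields standard errors; the subtraction of the control group’s change is what protects the estimate against trends affecting all districts alike.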

Paul Andersson/EWCO

 

RCTs — false validity claims

23 August, 2014 at 19:33 | Posted in Theory of Science & Methodology | 4 Comments

As yours truly has repeatedly argued (here, here and here) on this blog, RCTs usually do not provide evidence that their results are exportable to other target systems. The almost religious belief with which their propagators portray them cannot hide the fact that RCTs cannot be taken for granted to give generalizable results. That something works somewhere is no warranty that it will work for us, or even that it works generally.

In an extremely interesting article on the grand claims to external validity often raised by advocates of RCTs, Lant Pritchett and Justin Sandefur now confirm this view and show that using an RCT is not at all the “gold standard” it is portrayed as:

 

Our point here is not to argue against any well-founded generalization of research findings, nor against the use of experimental methods. Both are central pillars of scientific research. As a means of quantifying the impact of a given development project, or measuring the underlying causal parameter of a clearly-specified economic model, field experiments provide unquestioned advantages over observational studies.

But the popularity of RCTs in development economics stems largely from the claim that they provide a guide to making “evidence-based” policy decisions. In the vast majority of cases, policy recommendations based on experimental results hinge not only on the internal validity of the treatment effect estimates, but also on their external validity across contexts.

Inasmuch as development economics is a worthwhile, independent field of study – rather than a purely parasitic form of regional studies, applying the lessons of rich-country economies to poorer settings – its central conceit is that development is different. The economic, social, and institutional systems of poor countries operate differently than in rich countries in ways that are sufficiently fundamental to require different models and different data.

It is difficult if not impossible to adjudicate the external validity of an individual experimental result in isolation. But experimental results do not exist in a vacuum. On many development policy questions, the literature as a whole — i.e., the combination of experimental and non-experimental results across multiple contexts — collectively invalidates any claim of external validity for any individual experimental result.

Lant Pritchett & Justin Sandefur

Five reasons to question randomised controlled trials

23 August, 2014 at 10:17 | Posted in Theory of Science & Methodology | Leave a comment

Yours truly has in a number of posts on this blog — see e.g. here, here and here — questioned the value of randomised controlled trials (RCTs) on philosophy-of-science and methodological grounds. In a guest post on Ekonomistas well worth reading, Björn Ekman strongly questions the value of RCTs as guidance for development aid:

Randomisation is difficult. It is in fact so tricky to achieve a “clean” randomisation that it can be a challenge even in the laboratory, i.e. the setting where the method first arose and where one can, in principle, control for all the factors one wants to take into account …

Randomisation is not needed …

Randomisation only partly answers the relevant questions. One problem with RCT interventions that even many “randomistas” at least partly concede is the relatively limited set of questions such a study answers. Roughly speaking, a randomised study tells us whether this particular intervention worked in this particular context at this particular point in time, i.e. it has high internal validity. These are not unimportant answers, but they do not tell us whether the intervention should be extended to other areas and circumstances. To answer such questions, studies with high external validity are needed.

Randomisation is not applicable to large parts of development aid. It is quite possible, perhaps even likely, that there are aid-financed projects and interventions that should be designed and carried out in a randomised fashion in order to evaluate their effects. But those interventions probably make up a minority of everything that aid finances. Most of it cannot be carried out in a randomised and controlled way.

Moreover, there are obvious ethical problems with an intervention design in which one group is given a demonstrably effective treatment while another group is denied it …

Randomisation is not cost-effective. Carrying out a randomised controlled trial is expensive, very expensive.
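Ekman’s distinction between internal and external validity can be illustrated with a toy simulation of my own (all parameters invented): a perfectly randomised trial in one context recovers that context’s treatment effect very precisely, yet the estimate can be badly misleading as a guide to another context where the effect happens to differ:

```python
import random

# Toy simulation: high internal validity in context A says nothing by
# itself about context B. All effect sizes and noise levels are invented.

random.seed(0)  # fixed seed so the run is reproducible

def run_trial(true_effect, n=10_000, noise=1.0):
    """Randomised trial: estimated average treatment effect from n+n units."""
    treated = [true_effect + random.gauss(0, noise) for _ in range(n)]
    control = [random.gauss(0, noise) for _ in range(n)]
    return sum(treated) / n - sum(control) / n

effect_A = 2.0   # the intervention works well in context A
effect_B = 0.0   # ... and not at all in context B

estimate_A = run_trial(effect_A)  # close to 2.0: high internal validity
# Exporting estimate_A to context B would be off by roughly 2.0,
# however clean the randomisation in A was.
print(estimate_A, effect_B)
```

Nothing in the trial data from A can reveal the gap: diagnosing it requires exactly the kind of cross-context, often non-experimental, evidence that the external-validity critique calls for.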

The vanity of deductivity

14 August, 2014 at 14:44 | Posted in Theory of Science & Methodology | 3 Comments

Modelling by the construction of analogue economies is a widespread technique in economic theory nowadays … As Lucas urges, the important point about analogue economies is that everything is known about them … and within them the propositions we are interested in ‘can be formulated rigorously and shown to be valid’ … For these constructed economies, our views about what will happen are ‘statements of verifiable fact.’

The method of verification is deduction … We are, however, faced with a trade-off: we can have totally verifiable results, but only about economies that are not real …

How then do these analogue economies relate to the real economies that we are supposed to be theorizing about? … My overall suspicion is that the way deductivity is achieved in economic models may undermine the possibility … to teach genuine truths about empirical reality.

Non-contagious rigour and microfounded DSGE models

13 August, 2014 at 20:59 | Posted in Theory of Science & Methodology | Leave a comment

Microfounded DSGE models standardly assume rational expectations, Walrasian market clearing, unique equilibria, time invariance, linear separability and homogeneity of both inputs/outputs and technology, infinitely lived intertemporally optimizing representative household/consumer/producer agents with homothetic and identical preferences, etc., etc. At the same time the models standardly ignore complexity, diversity, uncertainty, coordination problems, non-market clearing prices, real aggregation problems, emergence, expectations formation, etc., etc.

Behavioural and experimental economics – not to speak of psychology – show beyond any doubt that “deep parameters” — people’s preferences, choices and forecasts — are regularly influenced by those of other participants in the economy. And how about the homogeneity assumption? If all actors are the same, why and with whom do they transact? And why does economics have to be exclusively teleological (concerned with intentional states of individuals)? Where are the arguments for that ontological reductionism? And what about collective intentionality and constitutive background rules?

These are all justified questions – so, in what way can one maintain that these models give workable microfoundations for macroeconomics?

I think science philosopher Nancy Cartwright gives a good hint at how to answer that question:

Our assessment of the probability of effectiveness is only as secure as the weakest link in our reasoning to arrive at that probability. We may have to ignore some issues or make heroic assumptions about them. But that should dramatically weaken our degree of confidence in our final assessment. Rigor isn’t contagious from link to link. If you want a relatively secure conclusion coming out, you’d better be careful that each premise is secure going in.

Simply the best

13 August, 2014 at 17:35 | Posted in Theory of Science & Methodology | 4 Comments

Science is made possible by the fact that there are structures that are durable and are independent of our knowledge or beliefs about them. There exists a reality beyond our theories and concepts of it. It is this independent reality that our theories in some way deal with. Contrary to positivism, I would as a critical realist argue that the main task of science is not to detect event-regularities between observed facts. Rather, it is to identify the underlying structure and forces that produce the observed events.

In a truly wonderful essay in Error and Inference (Cambridge University Press, 2010, eds. Deborah Mayo and Aris Spanos), Alan Musgrave explains why scientific realism and inference to the best explanation (IBE) are the best alternatives for explaining what’s going on in the world we live in:

For realists, the name of the scientific game is explaining phenomena, not just saving them. Realists typically invoke ‘inference to the best explanation’ or IBE …

IBE is a pattern of argument that is ubiquitous in science and in everyday life as well. van Fraassen has a homely example:

“I hear scratching in the wall, the patter of little feet at midnight, my cheese disappears – and I infer that a mouse has come to live with me. Not merely that these apparent signs of mousely presence will continue, not merely that all the observable phenomena will be as if there is a mouse, but that there really is a mouse.” (1980: 19-20)

Here, the mouse hypothesis is supposed to be the best explanation of the phenomena, the scratching in the wall, the patter of little feet, and the disappearing cheese.

What exactly is the inference in IBE, what are the premises, and what the conclusion? van Fraassen says “I infer that a mouse has come to live with me”. This suggests that the conclusion is “A mouse has come to live with me” and that the premises are statements about the scratching in the wall, etc. Generally, the premises are the things to be explained (the explanandum) and the conclusion is the thing that does the explaining (the explanans). But this suggestion is odd. Explanations are many and various, and it will be impossible to extract any general pattern of inference taking us from explanandum to explanans. Moreover, it is clear that inferences of this kind cannot be deductively valid ones, in which the truth of the premises guarantees the truth of the conclusion. For the conclusion, the explanans, goes beyond the premises, the explanandum. In the standard deductive model of explanation, we infer the explanandum from the explanans, not the other way around – we do not deduce the explanatory hypothesis from the phenomena, rather we deduce the phenomena from the explanatory hypothesis …

