My philosophy of economics

18 Jun, 2019 at 13:45 | Posted in Economics, Theory of Science & Methodology | 4 Comments

A critique yours truly sometimes encounters is that as long as I cannot come up with an alternative of my own to the failing mainstream theory, I shouldn’t expect people to pay attention.

This is, however, to totally and utterly misunderstand the role of philosophy and methodology of economics!

As John Locke wrote in An Essay Concerning Human Understanding:

The Commonwealth of Learning is not at this time without Master-Builders, whose mighty Designs, in advancing the Sciences, will leave lasting Monuments to the Admiration of Posterity; But every one must not hope to be a Boyle, or a Sydenham; and in an Age that produces such Masters, as the Great-Huygenius, and the incomparable Mr. Newton, with some other of that Strain; ’tis Ambition enough to be employed as an Under-Labourer in clearing Ground a little, and removing some of the Rubbish, that lies in the way to Knowledge.

That’s what philosophy and methodology can contribute to economics — clearing obstacles to science by clarifying limits and consequences of choosing specific modelling strategies, assumptions, and ontologies.

Every now and then I also get some upset comments from people wondering why I’m not always ‘respectful’ of people like Eugene Fama, Robert Lucas, Greg Mankiw, Paul Krugman, Simon Wren-Lewis, and others of the same ilk.

But sometimes it might actually, from a Lockean perspective, be quite appropriate to be disrespectful.

New Classical and ‘New Keynesian’ macroeconomics is rubbish that ‘lies in the way to Knowledge.’

And when New Classical and ‘New Keynesian’ economists resurrect fallacious ideas and theories that were proven wrong already in the 1930s, then I think a less respectful and more colourful language is called for.

The LOGIC of science vs the METHODS of science

10 Jun, 2019 at 10:02 | Posted in Theory of Science & Methodology | Leave a comment

 

Postmodern mumbo jumbo

30 May, 2019 at 13:18 | Posted in Theory of Science & Methodology | 2 Comments

Four important features are common to the different movements:

    1. Central ideas are not explained.
    2. The grounds for a conviction are not stated.
    3. The presentation of the doctrine is linguistically stereotyped …
    4. When it comes to the invocation of founding fathers, the same stereotypy prevails — a limited number of names recur. Heidegger, Foucault, and Derrida come back, again and again …

To these four points I would, however, … add a fifth:

5. The person in question has nothing essentially new to convey.

Exaggerated? Unkind? Well, tastes differ. But taste this soup and then try to say that there is nothing to the old Lund professor’s characterisation …

The move from a structuralist account in which capital is understood to structure social relations in relatively homologous ways to a view of hegemony in which power relations are subject to repetition, convergence, and rearticulation brought the question of temporality into the thinking of structure, and marked a shift from a form of Althusserian theory that takes structural totalities as theoretical objects to one in which the insights into the contingent possibility of structure inaugurate a renewed conception of hegemony as bound up with the contingent sites and strategies of the rearticulation of power.

Judith Butler

RCTs — gold standard or monster?

8 May, 2019 at 16:11 | Posted in Theory of Science & Methodology | Comments Off on RCTs — gold standard or monster?

One important comment, repeated — but not unanimously — can perhaps be summarized as ‘All that said and done, RCTs are still generally the best that can be done in estimating average treatment effects and in warranting causal conclusions.’ It is this claim that is the monster that seemingly can never be killed, no matter how many stakes are driven through its heart. We strongly endorse Robert Sampson’s statement “That experiments have no special place in the hierarchy of scientific evidence seems to me to be clear.” Experiments are sometimes the best that can be done, but they are often not. Hierarchies that privilege RCTs over any other evidence irrespective of context or quality are indefensible and can lead to harmful policies. Different methods have different relative advantages and disadvantages.

Angus Deaton & Nancy Cartwright

Revisiting the foundations of randomness and probability

30 Apr, 2019 at 14:17 | Posted in Statistics & Econometrics, Theory of Science & Methodology | 5 Comments

Regarding models as metaphors leads to a radically different view regarding the interpretation of probability. This view has substantial advantages over conventional interpretations …

Probability does not exist in the real world. We must search for her in the Platonic world of ideals. We have shown that the interpretation of probability as a metaphor leads to several substantial changes in interpretations and justifications for conventional frequentist procedures. These changes remove several standard objections which have been made to these procedures. Thus our model seems to offer a good foundation for re-building our understanding of how probability should be interpreted in real world applications. More generally, we have also shown that regarding scientific models as metaphors resolves several puzzles in the philosophy of science.

Asad Zaman

Although yours truly has to confess to not being totally convinced that redefining probability as a metaphor is the right way forward on these foundational issues, Zaman’s article certainly raises some very interesting questions about the way the concepts of randomness and probability are used in economics.

Modern mainstream economics relies to a large degree on the notion of probability. To be amenable to applied economic analysis at all, economic observations have to be conceived of as random events analyzable within a probabilistic framework. But is it really necessary to model the economic system as one in which randomness can only be analyzed and understood on the basis of an a priori notion of probability?

When attempting to convince us of the necessity of founding empirical economic analysis on probability models, mainstream economics actually forces us to (implicitly) interpret events as random variables generated by an underlying probability density function.

This is at odds with reality. Randomness obviously is a fact of the real world (although I am not sure Zaman agrees, since he seems to place randomness, too, in ‘the Platonic world of ideals’). Probability, on the other hand, attaches (if at all) to the world via intellectually constructed models, and a fortiori is only a fact of a probability-generating (nomological) machine, a well-constructed experimental arrangement, or ‘chance set-up.’

Just as there is no such thing as a ‘free lunch,’ there is no such thing as a ‘free probability.’

To be able to talk about probabilities at all, you have to specify a model. If there is no chance set-up or model that generates the probabilistic outcomes or events — in statistics, any process in which you observe or measure something is called an experiment (rolling a die), and the results obtained are called the outcomes or events of that experiment (the number of points rolled with the die, e.g. 3 or 5) — then, strictly speaking, there is no event at all.

Probability is a relational element: it must always come with a specification of the model from which it is calculated. And to be of any empirical scientific value, it has to be shown to coincide with (or at least converge to) real data-generating processes or structures — something seldom, if ever, done.

And this is the basic problem with economic data. If you have a fair roulette-wheel, you can arguably specify probabilities and probability density distributions. But how do you conceive of the analogous nomological machines for prices, gross domestic product, income distribution, etc.? Only by a leap of faith. And that does not suffice. You have to come up with some really good arguments if you want to persuade people to believe in the existence of socio-economic structures that generate data with characteristics conceivable as stochastic events portrayed by probabilistic density distributions.
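To make the contrast concrete, here is a minimal Python sketch of my own (the die is just the stock textbook example, and the numbers are arbitrary) of what a well-specified chance set-up looks like: the probability model, each face having probability 1/6, is built into the machine itself, and observed relative frequencies converge to it. Nothing hands us an analogous generating model for prices, GDP or income distributions.

```python
import random
from collections import Counter

def roll_fair_die(n_rolls: int, seed: int = 42) -> Counter:
    """Simulate a nomological machine: a fair six-sided die.

    The probability model (each face has probability 1/6) is part of the
    set-up itself; it is not something we have to infer from the data.
    """
    rng = random.Random(seed)
    return Counter(rng.randint(1, 6) for _ in range(n_rolls))

n = 60_000
counts = roll_fair_die(n)
for face in range(1, 7):
    # Relative frequencies converge to the model probability 1/6 (about 0.167).
    print(f"face {face}: observed {counts[face] / n:.3f}, model 0.167")
```

For socio-economic data the roles are reversed: we only have the observed column, and the ‘model’ column has to be argued into existence.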

We simply have to admit that the socio-economic states of nature that we talk of in most social sciences — and certainly in economics — are not amenable to analysis in terms of probabilities, simply because in real-world open systems there are no probabilities to be had!

The processes that generate socio-economic data in the real world cannot just be assumed to always be adequately captured by a probability measure. And so it cannot be maintained that it should even be mandatory to treat observations and data — whether cross-section, time-series or panel data — as events generated by some probability model. The important activities of most economic agents do not usually include throwing dice or spinning roulette-wheels. Data-generating processes — at least outside of nomological machines like dice and roulette-wheels — are not self-evidently best modelled with probability measures.

If we agree on this, we also have to admit that much of modern neoclassical economics lacks sound foundations.

When economists and econometricians — often uncritically and without argument — simply assume that probability distributions from statistical theory can be applied to their own areas of research, they are really skating on thin ice.

This also means, importantly, that if you cannot show that the data satisfy all the conditions of the probabilistic nomological machine, then the statistical inferences made in mainstream economics lack sound foundations!

On the impossibility of objectivity in science

25 Apr, 2019 at 14:16 | Posted in Theory of Science & Methodology | Comments Off on On the impossibility of objectivity in science

Operations Research does not incorporate the arts and humanities largely because of its distorted belief that doing so would reduce its objectivity, a misconception it shares with much of science. The meaning of objectivity is less clear than that of optimality. Nevertheless, most scientists believe it is a good thing. They also believe that objectivity in research requires the exclusion of any ethical-moral values held by the researchers. We need not argue the desirability of objectivity so conceived; it is not possible.

Most, if not all, scientific inquiries involve either testing hypotheses or estimating the values of variables. Both of these procedures necessarily involve balancing two types of error. Hypotheses-testing procedures require use of a significance level, the significance of which appears to escape most scientists. Their choice of such a level is usually made unconsciously, dictated by convention. This level, as many of you know, is a probability of rejecting a hypothesis when it is true. Naturally, we would like to make this probability as small as possible. Unfortunately, however, the lower we set this probability, the higher is the probability of accepting a hypothesis when it is false. Therefore, choice of a significance level involves a value judgment by the scientist about the relative seriousness of these two types of error. The fact that he usually makes this value judgment unconsciously does not attest to his objectivity, but to his ignorance.

There is a significance level at which any hypothesis is acceptable, and a level at which it is not. Therefore, statistical significance is not a property of data or a hypothesis but is a consequence of an implicit or explicit value judgment applied to them.

The choice of an estimating procedure can also be shown to require the evaluation of the relative importance of negative and positive errors of estimation. The most commonly used procedures are “unbiased”; therefore, they provide best estimates only when errors of equal magnitude but of opposite sign are equally serious — a condition I have never found in the real world.

Russell L. Ackoff
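The trade-off Ackoff describes can be made concrete with a small back-of-the-envelope calculation (a sketch of my own, assuming a one-sided z-test with known variance and arbitrarily chosen effect size and sample size): lowering the significance level α mechanically raises β, the probability of ‘accepting’ a false null hypothesis.

```python
from statistics import NormalDist

def type_ii_error(alpha: float, effect: float, n: int, sigma: float = 1.0) -> float:
    """Probability of failing to reject H0: mu = 0 when the true mean is
    `effect`, for a one-sided z-test with known standard deviation `sigma`."""
    z = NormalDist()                     # standard normal distribution
    z_crit = z.inv_cdf(1 - alpha)        # rejection threshold in z-units
    shift = effect * n ** 0.5 / sigma    # where the test statistic sits under H1
    return z.cdf(z_crit - shift)         # P(do not reject H0 | H1 is true)

for alpha in (0.10, 0.05, 0.01, 0.001):
    beta = type_ii_error(alpha, effect=0.3, n=50)
    print(f"alpha = {alpha:<5} -> beta = {beta:.2f}")
```

With these made-up numbers, β climbs from roughly 0.2 to over 0.8 as α is tightened from 0.10 to 0.001. Where to sit on that curve is exactly the value judgement Ackoff says is usually made unconsciously, and the same judgement hides in his estimation example: an ‘unbiased’ estimate is only ‘best’ when over- and underestimates are equally costly.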

RCTs — a method in search of ontological foundations

29 Mar, 2019 at 19:41 | Posted in Theory of Science & Methodology | 3 Comments

RCTs treat social reality as though some simulacrum of laboratory conditions was a feasible and appropriate scientific method to apply, but in development research, unlike laboratory condition treatments, interventions are not manipulations of individuated and additive or simply combinable material components … but rather intervention into material social relations. While for the former, assuming away or stripping away everything other than a given effect focus can reveal the underlying invariant mechanics of that effect, in the latter one cannot take it as given that there is an underlying invariant mechanics that will continue to apply and one is just as liable to be assuming or stripping away what is important to the constitution of the material social relations … As such, RCTs may make for poor social science, because the approach is based on a mismatch between the RCT procedure and the constitution of reality under investigation—including the treatment of humans as deliberative centers of ultimate concern. In any case, technical sophistication is no guarantor of appropriately conceived “rigour” if the orientation of methods is inappropriate …

Jamie Morgan

Morgan’s reasoning confirms what yours truly has repeatedly argued on this blog and in On the use and misuse of theories and models in mainstream economics — RCTs usually do not provide evidence that their results are exportable to other target systems. The almost religious belief with which their proponents portray them cannot hide the fact that RCTs cannot be taken for granted to give generalizable results. That something works somewhere is no warranty that it will work for us, or even that it works generally.
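The point that ‘something works somewhere’ need not travel can be illustrated with a toy simulation (a sketch of my own, with entirely made-up effect sizes and population shares): if individual treatment effects depend on a covariate whose distribution differs between the trial population and the target population, the RCT’s average treatment effect is internally valid yet a poor guide to what the intervention will do elsewhere.

```python
import random

def average_treatment_effect(p_type_a: float, n: int = 100_000, seed: int = 1) -> float:
    """Toy population in which the individual treatment effect depends on a
    binary covariate: +2.0 for 'type A' individuals, -1.0 for everyone else.
    The average treatment effect is the share-weighted mix of the two."""
    rng = random.Random(seed)
    effects = [2.0 if rng.random() < p_type_a else -1.0 for _ in range(n)]
    return sum(effects) / n

# Trial population: 80% type A, so the RCT finds a clearly positive effect.
print(f"trial ATE : {average_treatment_effect(0.8):+.2f}")
# Target population: 20% type A, and the same intervention backfires.
print(f"target ATE: {average_treatment_effect(0.2):+.2f}")
```

Nothing in the trial itself warns us of this; the warrant for exporting the result has to come from knowledge about the structures that the RCT deliberately brackets.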

What makes collective action more likely?

25 Feb, 2019 at 18:09 | Posted in Theory of Science & Methodology | Comments Off on What makes collective action more likely?

Why I am not a Bayesian

17 Feb, 2019 at 18:20 | Posted in Theory of Science & Methodology | 16 Comments

What I do not believe is that the relation that matters is simply the entailment relation between the theory, on the one hand, and the evidence on the other. The reasons that the relation cannot be simply that of entailment are exactly the reasons why the hypothetico-deductive account … is inaccurate; but the suggestion is at least correct in sensing that our judgment of the relevance of evidence to theory depends on the perception of a structural connection between the two, and that degree of belief is, at best, epiphenomenal. In the determination of the bearing of evidence on theory there seem to be mechanisms and strategems that have no apparent connection with degrees of belief, which are shared alike by people advocating different theories. Save for the most radical innovations, scientists seem to be in close agreement regarding what would or would not be evidence relevant to a novel theory; claims as to the relevance to some hypothesis of some observation or experiment are frequently buttressed by detailed calculations and arguments. All of these features of the determination of evidential relevance suggest that that relation depends somehow on structural, objective features connecting statements of evidence and statements of theory. But if that is correct, what is really important and really interesting is what these structural features may be. The condition of positive relevance, even if it were correct, would simply be the least interesting part of what makes evidence relevant to theory.

None of these arguments is decisive against the Bayesian scheme of things … But taken together, I think they do at least strongly suggest that there must be relations between evidence and hypotheses that are important to scientific argument and to confirmation but to which the Bayesian scheme has not yet penetrated.

Clark Glymour

The vain search for The Holy Grail of Science

13 Feb, 2019 at 17:40 | Posted in Theory of Science & Methodology | Comments Off on The vain search for The Holy Grail of Science

Traditionally, philosophers have focused mostly on the logical template of inference. The paradigm-case has been deductive inference, which is topic-neutral and context-insensitive. The study of deductive rules has engendered the search for the Holy Grail: syntactic and topic-neutral accounts of all prima facie reasonable inferential rules. The search has hoped to find rules that are transparent and algorithmic, and whose following will just be a matter of grasping their logical form. Part of the search for the Holy Grail has been to show that the so-called scientific method can be formalised in a topic-neutral way. We are all familiar with Carnap’s inductive logic, or Popper’s deductivism or the Bayesian account of scientific method.

There is no Holy Grail to be found. There are many reasons for this pessimistic conclusion. First, it is questionable that deductive rules are rules of inference. Second, deductive logic is about updating one’s belief corpus in a consistent manner and not about what one has reasons to believe simpliciter. Third, as Duhem was the first to note, the so-called scientific method is far from algorithmic and logically transparent. Fourth, all attempts to advance coherent and counterexample-free abstract accounts of scientific method have failed. All competing accounts seem to capture some facets of scientific method, but none can tell the full story. Fifth, though the new Dogma, Bayesianism, aims to offer a logical template (Bayes’s theorem plus conditionalisation on the evidence) that captures the essential features of non-deductive inference, it is betrayed by its topic-neutrality. It supplements deductive coherence with the logical demand for probabilistic coherence among one’s degrees of belief. But this extended sense of coherence is (almost) silent on what an agent must infer or believe.

Stathis Psillos

The quest for certainty — a new substitute for religion

10 Feb, 2019 at 16:13 | Posted in Theory of Science & Methodology | 6 Comments

In this post-rationalist age of ours, more and more books are written in symbolic languages, and it becomes more and more difficult to see why: what it is all about, and why it should be necessary, or advantageous, to allow oneself to be bored by volumes of symbolic trivialities. It almost seems as if the symbolism were becoming a value in itself, to be revered for its sublime ‘exactness’: a new expression of the old quest for certainty, a new symbolic ritual, a new substitute for religion.

As a critic of mainstream economics’ mathematical-formalist Glasperlenspiel, it is easy to share the feeling of despair …

Bayesian ‘old evidence’ problems

9 Feb, 2019 at 21:08 | Posted in Theory of Science & Methodology | Comments Off on Bayesian ‘old evidence’ problems

Why is the subjective Bayesian supposed to have an old evidence problem?

The allegation … goes like this: If probability is a measure of degree of belief, then if an agent already knows that e has occurred, the agent must assign P(e) the value 1. Hence P(e|H) is assigned a value of 1. But this means no Bayesian support accrues from e. For if P(e) = P(e|H) = 1, then P(H|e) = P(H). The Bayesian condition for support is not met …

How do subjective Bayesians respond to the charge that they have an old evidence problem? The standard subjective Bayesian response is  …

“The Bayesian interprets P(e|H) as how likely you think e would be were h to be false” …

But many people — Bayesians included — are not too clear about how this “would be” probability is supposed to work.

Yes indeed — how is such a “would be” probability to be interpreted? The only feasible solution is arguably to restrict the Bayesian calculus to problems where well-specified nomological machines are operating. Throwing a die or pulling balls from an urn is fine, but then the Bayesian calculus would of course not have much to say about science …
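The formal point in the quoted passage is easy to check numerically (a minimal sketch of my own; the priors and likelihoods are arbitrary): once the evidence is already known, so that it is assigned probability 1 both unconditionally and conditionally, Bayes’ theorem simply returns the prior and no support accrues.

```python
def posterior(prior_h: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Bayes' theorem: P(H|e) = P(e|H) * P(H) / P(e), with P(e) expanded by
    the law of total probability over H and not-H."""
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    return p_e_given_h * prior_h / p_e

# Evidence not yet known: e is likelier under H than under not-H, so it supports H.
print(posterior(prior_h=0.3, p_e_given_h=0.9, p_e_given_not_h=0.4))  # about 0.49

# 'Old evidence': the agent already knows e, so P(e|H) = P(e|not-H) = P(e) = 1
# and the posterior collapses back to the prior.
print(posterior(prior_h=0.3, p_e_given_h=1.0, p_e_given_not_h=1.0))  # exactly 0.3
```

The subjectivist escape route is to read P(e|not-H) counterfactually, as how likely e would be were H false, which is precisely the ‘would be’ probability whose interpretation is at issue above.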

The postmodern slippery slope

4 Feb, 2019 at 17:20 | Posted in Theory of Science & Methodology | Comments Off on The postmodern slippery slope

These theories of the social can then be turned on the social institution we call knowledge. Knowledge thereby becomes, for instance, internalised, objectivated and legitimated externalisations of human behaviour … This is in itself unproblematic, provided one can maintain a distinction between what we in society call “knowledge” and knowledge in its philosophical and epistemological sense, as knowledge about reality (“justified true belief”) … But it is exactly here that the problems begin. For the very instance that was supposed to uphold and make good this distinction between what we take to be knowledge and real — that is, true — knowledge is science itself … The classic relativism problem of the sociology of knowledge thus resurfaces: scientific knowledge is not specially warranted, and in the end has no epistemological precedence over, say, pseudo-scientific knowledge claims.

One starts out as a good critical social scientist and ends up as a relativist. That is what I call the social-constructivist slippery slope. There are no natural places to jump off along the way. At least not once science can no longer distinguish between societal knowledge and real (scientific) knowledge. Indeed, the slide can in fact continue even further. One can — also in light of the latest developments in the understanding of science — have such trouble distinguishing between knowledge and reality that one ends up in an ontological idealism in which reality itself is socially constructed …

Because of the many different social constructivisms, many of social constructivism’s adherents fail to see the gap between this unproblematic starting point and the radical epistemological relativism in which they end up, and so they unconcernedly declare their social-constructivist standpoint.

Søren Barlebo Wenneberg

The ‘slippery slope’ of postmodern social constructivism is also discussed in this PowerPoint presentation that yours truly gave a few years ago.

Postmodernism — an anti-intellectual abyss

3 Feb, 2019 at 11:28 | Posted in Theory of Science & Methodology | Comments Off on Postmodernism — an anti-intellectual abyss

The anti-intellectual abyss is close at hand when postmodern truth relativism infects public discourse at every level, including academia.

In Sweden, the discipline of education seems to be the worst infected. A few years ago an associate professor of education was commissioned by the Swedish National Agency for Education (Skolverket) to write a report on physics teaching in Swedish schools, and to propose how it could attract more girls.

From the report:

“The notion of the self-evident supremacy of scientific thinking sits ill with the ideals of equality and democracy. […] Certain ways of thinking and reasoning are rewarded more than others in natural-science contexts. […] If this is not acknowledged, one risks making misleading assessments. For example, by unreflectively assuming that scientific thinking is more rational and therefore ought to replace everyday thinking” …

The author goes on to write in the report: “A gender-aware and gender-sensitive physics presupposes a relational approach to physics, and that a good deal of the traditional scientific knowledge content of physics be removed.”

So the scientific knowledge content of physics is to be “removed” in order to “make things easier” for girls. Not only is this an appalling view of knowledge, it is also insulting to regard girls as incapable of, or worse at, absorbing knowledge of physics.

The author of the report, Moira von Wright, is now professor of education and vice-chancellor of Södertörn University. When such an epistemological outlook has taken root in our institutions of higher learning, we have a problem.

Martin Ingvar, Christer Sturmark & Åsa Wikforss

Postmodern mumbo jumbo at our universities

2 Feb, 2019 at 13:08 | Posted in Theory of Science & Methodology | 1 Comment

Four important features are common to the different movements:

1. Central ideas are not explained.

2. The grounds for a conviction are not stated.

3. The presentation of the doctrine is linguistically stereotyped … the same formulations recur time and again, without being nuanced and without being developed.

4. When it comes to the invocation of founding fathers, the same stereotypy prevails — a limited number of names appear. Heidegger, Foucault and Derrida come back, again and again …

To these four points I would, however, quite polemically, add a fifth:

5. The person in question has nothing essentially new to convey …

The lack of content must be concealed, and contrived and convoluted formulations then come into play …

And nowhere in Swedish academia does a discipline stand more obviously at the edge of this anti-intellectual postmodern abyss than education. In no other academic subject have postmodern truth relativism and quasi-scientific mumbo jumbo gained such a prominent position.

Clear evidence of the deplorable state of affairs in Swedish so-called ‘educational research’ can be had by reading, for example, the article “En pedagogisk relation mellan människa och häst — på väg mot en pedagogisk filosofisk utforskning av mellanrummet” (“A pedagogical relation between human and horse — towards a pedagogical-philosophical exploration of the in-between”) in Pedagogisk Forskning i Sverige:

With a posthumanist approach, I illuminate and reflect on how both human and horse transcend their beings and how this opens up an in-between space with dimensions of subjectivity, corporeality and mutuality.

Or, if you like, by looking at the contributions to a recently published ‘research anthology’ in education:

Posthumanist pedagogy challenges us to produce new realities in which the human being no longer places herself at the centre. Body, matter, animals and nature become active participants when knowledge comes into being. This makes it possible to see learning and knowledge in a new and different way.

Instead of focusing on boundaries, posthumanist pedagogy shows how fruitful it can be to follow undirected and asymmetrical movements in unpredictable and uncontrollable directions …

The book offers various entry points to posthumanist pedagogy, such as philosophy, ethics, feminism, poetry, visual knowledge and documentation, but also reading dogs, schoolbooks, pencil cases, computer screens, mounted animals and ultrasound images.

And then they say that the discipline of education is in crisis. One wonders why …
