Postmodern mumbo jumbo
29 Jan, 2020 at 13:09 | Posted in Theory of Science & Methodology | 3 Comments
Four important features are common to the various movements:
- Central ideas are not explained.
- The grounds for a conviction are not stated.
- The presentation of the doctrine displays a linguistic stereotypy …
- The same stereotypy holds for the invocation of doctrinal authorities — a limited number of names recur. Heidegger, Foucault, and Derrida come back, again and again …
To these four points I would, however, … like to add a fifth:
5. The person in question has nothing essentially new to say.
Exaggerated? Malicious? Well, tastes differ. But taste this soup and then try to say that there is nothing to the old Lund professor's characterisation …
The move from a structuralist account in which capital is understood to structure social relations in relatively homologous ways to a view of hegemony in which power relations are subject to repetition, convergence, and rearticulation brought the question of temporality into the thinking of structure, and marked a shift from a form of Althusserian theory that takes structural totalities as theoretical objects to one in which the insights into the contingent possibility of structure inaugurate a renewed conception of hegemony as bound up with the contingent sites and strategies of the rearticulation of power.
Models and evidence in economics
28 Jan, 2020 at 21:36 | Posted in Economics | Comments Off on Models and evidence in economics
Analogue-economy models may picture Galilean thought experiments or they may describe credible worlds. In either case we have a problem in taking lessons from the model to the world. The problem is the venerable one of unrealistic assumptions, exacerbated in economics by the fact that the paucity of economic principles with serious empirical content makes it difficult to do without detailed structural assumptions. But the worry is not just that the assumptions are unrealistic; rather, they are unrealistic in just the wrong way.
One of the limitations with economics is the restricted possibility to perform experiments, forcing it to mainly rely on observational studies for knowledge of real-world economies.
But still — the idea of performing laboratory experiments holds a firm grip on our wish to discover (causal) relationships between economic ‘variables.’ If only we could isolate and manipulate variables in controlled environments, we would probably find ourselves in a situation where we, with greater ‘rigour’ and ‘precision,’ could describe, predict, or explain economic happenings in terms of ‘structural’ causes, ‘parameter’ values of relevant variables, and economic ‘laws.’
Galileo Galilei’s experiments are often held up as exemplary of how to perform experiments to learn something about the real world. Galileo’s heavy balls dropping from the tower of Pisa confirmed that the distance an object falls is proportional to the square of the time elapsed, and that this law (empirical regularity) of falling bodies is applicable outside a vacuum tube whenever, e.g., air resistance is negligible.
The big problem is to decide or find out exactly for which objects air resistance (and other potentially ‘confounding’ factors) is ‘negligible.’ In the case of heavy balls, air resistance is obviously negligible, but what about feathers or plastic bags?
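To make the negligibility question concrete, here is a minimal numerical sketch in Python (the masses and the drag coefficient are purely illustrative assumptions) comparing the Galilean law d = gt²/2 with a simple linear-drag fall. For the heavy ball the drag term hardly matters; for the feather it dominates, so the 'law' holds for the one object but not for the other.

```python
import numpy as np

def fall_distance(mass, drag_coeff, t_max, dt=1e-3, g=9.81):
    """Distance fallen under linear drag: m dv/dt = m g - k v (Euler integration)."""
    v, d = 0.0, 0.0
    for _ in np.arange(0.0, t_max, dt):
        a = g - (drag_coeff / mass) * v
        v += a * dt
        d += v * dt
    return d

t = 2.0
ideal = 0.5 * 9.81 * t**2                                      # Galilean law: d = g t^2 / 2
ball = fall_distance(mass=5.0, drag_coeff=0.05, t_max=t)       # heavy ball: drag negligible
feather = fall_distance(mass=0.005, drag_coeff=0.05, t_max=t)  # feather: drag dominates

print(f"ideal: {ideal:.2f} m   heavy ball: {ball:.2f} m   feather: {feather:.2f} m")
```

The heavy ball lands almost exactly where the ideal law says it should, while the feather falls a small fraction of that distance, which is precisely why the domain of applicability has to be established empirically for each kind of object.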
One possibility is to take the all-encompassing-theory road and find out everything about the possible disturbing/confounding factors — not only air resistance — influencing the fall, and build that into one great model delivering accurate predictions of what happens when the falling object is not just a heavy ball but also a feather or a plastic bag. This usually amounts, in the end, to stating some kind of ceteris paribus interpretation of the ‘law.’
Another road to take would be to concentrate on the negligibility assumption and to specify the domain of applicability to be only heavy compact bodies. The price you have to pay for this is that (1) ‘negligibility’ may be hard to establish in open real-world systems, (2) the generalisation you can make from ‘sample’ to ‘population’ is heavily restricted, and (3) you actually have to use some ‘shoe leather’ and empirically try to find out how large is the ‘reach’ of the ‘law.’
In mainstream economics, one has usually settled for the ‘theoretical’ road (and in case you think the present ‘natural experiments’ hype has changed anything, remember that to mimic real experiments, exceedingly stringent special conditions have to obtain).
In the end, it all boils down to one question — are there any Galilean ‘heavy balls’ to be found in economics, so that we can indisputably establish the existence of economic laws operating in real-world economies?
As far as I can see there are some heavy balls out there, but not a single real economic law.
Economic factors/variables are more like feathers than heavy balls — non-negligible factors (like air resistance and chaotic turbulence) are hard to rule out as having no influence on the object studied.
Galilean experiments are hard to carry out in economics, and the theoretical ‘analogue’ models economists construct and in which they perform their ‘thought experiments’ build on assumptions that are far removed from the kind of idealized conditions under which Galileo performed his experiments. The ‘nomological machines’ that Galileo and other scientists have been able to construct have no real analogues in economics. The stability, autonomy, modularity, and interventional invariance that we may find between entities in nature simply are not there in real-world economies. That is a real-world fact, and contrary to the beliefs of most mainstream economists, it won’t go away simply by applying deductive-axiomatic economic theory with tons of more or less unsubstantiated assumptions.
By this, I do not mean to say that we have to discard all (causal) theories/laws building on modularity, stability, invariance, etc. But we have to acknowledge the fact that outside the systems that possibly fulfil these requirements/assumptions, they are of little substantial value. Running paper-and-pen experiments on artificial ‘analogue’ model economies is a sure way of ‘establishing’ (causal) economic laws or solving intricate econometric problems of autonomy, identification, invariance and structural stability — in the model world. But they are poor substitutes for the real thing and they don’t have much bearing on what goes on in real-world open social systems. Setting up convenient circumstances for conducting Galilean experiments may tell us a lot about what happens under those kinds of circumstances. But — few, if any, real-world social systems are ‘convenient.’ So most of those systems, theories and models are irrelevant for letting us know what we really want to know.
To solve, understand, or explain real-world problems you actually have to know something about them — logic, pure mathematics, data simulations or deductive axiomatics don’t take you very far. Most econometrics and economic theories/models are splendid logic machines. But — applying them to the real world is a totally hopeless undertaking! The assumptions one has to make in order to successfully apply these deductive-axiomatic theories/models/machines are devastatingly restrictive and mostly empirically untestable — and hence make their real-world scope ridiculously narrow. To fruitfully analyse real-world phenomena, you cannot build models and theories on assumptions that are patently, and known to be, ridiculously absurd. No matter how much you would like the world to consist entirely of heavy balls, the world is not like that. The world also has its fair share of feathers and plastic bags.
The problem articulated by Cartwright is that most of the ‘idealizations’ we find in mainstream economic models are not ‘core’ assumptions, but rather structural ‘auxiliary’ assumptions. Without those supplementary assumptions, the core assumptions deliver next to nothing of interest. So to come up with interesting conclusions you have to rely heavily on those other — ‘structural’ — assumptions.
Whenever model-based causal claims are made, experimentalists quickly find that these claims do not hold under disturbances that were not written into the model. Our own stock example is from auction design – models say that open auctions are supposed to foster better information exchange leading to more efficient allocation. Do they do that in general? Or at least under any real world conditions that we actually know about? Maybe. But we know that introducing the smallest unmodelled detail into the setup, for instance complementarities between different items for sale, unleashes a cascade of interactive effects. Careful mechanism designers do not trust models in the way they would trust genuine Galilean thought experiments. Nor should they.
In physics, we have theories and centuries of experience and experiments that show how gravity makes bodies move. In economics, we know there is nothing equivalent. So instead mainstream economists necessarily have to load their theories and models with sets of auxiliary structural assumptions to get any results at all in their models.
So why do mainstream economists keep on pursuing this modelling project?
A guaranteed non-PC individual
27 Jan, 2020 at 13:41 | Posted in Varia | 2 Comments
Love is something he does not hide. There are six children, and he loves them in different ways. The grandchildren too, but he doesn't spend time with them.
– When they're in the room they're all quite cute. But set aside a whole evening to sit and babysit? I'd go mad. They're too hard to talk to! I don't share their interests. Bolibompa and that sort of rubbish. Drawing.
But you were a child yourself once.
– Yes, but I'm not any longer. I like children. Preferably at a bit of a distance. I like patting them on the head and poking them in the tummy so they laugh.
So when are they big enough to talk to?
– At your age.
For God's sake, I'm thirty-six.
– Yes, exactly. My children are that age, and I have no problem with them.
Leif GW Persson interviewed in SDS, 9/10 2011
The Swedish for-profit ‘free’ school disaster
27 Jan, 2020 at 11:38 | Posted in Education & School | 3 Comments
Neo-liberals and libertarians have always provided a lot of ideologically founded ideas and ‘theories’ to underpin their Panglossian view of markets. But when these are tested against reality, they usually turn out to be wrong. The promised results are simply not to be found. And that goes for for-profit private schools too.
Sweden introduced a voucher-style reform in the 1990s and opened the market to for-profit schools. Since then the performance of the school system has deteriorated. The experiment soon turned out to be a momentous mistake. In Chile — the only other country in the world that has tried the same policy — it turned out as badly as in Sweden, and the country decided a couple of years ago to abandon the experiment. Sweden is now the only country in the world that allows profit-driven companies to run publicly financed schools.
What’s caused the recent crisis in Swedish education? Researchers and policy analysts are increasingly pointing the finger at many of the choice-oriented reforms that are being championed as the way forward for American schools. While this doesn’t necessarily mean that adding more accountability and discipline to American schools would be a bad thing, it does hint at the many headaches that can come from trying to do so by aggressively introducing marketlike competition to education …
In the wake of the country’s nose dive in the PISA rankings, there’s widespread recognition that something’s wrong with Swedish schooling … Competition was meant to discipline government schools, but it may have instead led to a race to the bottom …
It’s the darker side of competition that Milton Friedman and his free-market disciples tend to downplay: If parents value high test scores, you can compete for voucher dollars by hiring better teachers and providing a better education—or by going easy in grading national tests. Competition was also meant to discipline government schools by forcing them to up their game to maintain their enrollments, but it may have instead led to a race to the bottom as they too started grading generously to keep their students …
It’s a lesson that Swedish parents and students have learned all too well: Simply opening the floodgates to more education entrepreneurs doesn’t disrupt education. It’s just plain disruptive.
Henry M. Levin — distinguished economist and director of the National Center for the Study of Privatization in Education at Teachers College, Columbia University — wrote this a couple of years ago when he reviewed the evidence about the effects of vouchers:
On December 3, 2012, Forbes Magazine recommended for the U.S. that: “…we can learn something about when choice works by looking at Sweden’s move to vouchers.” On March 11 and 12, 2013, the Royal Swedish Academy of Sciences did just that by convening a two day conference to learn what vouchers had accomplished in the last two decades … The following was my verdict:
- On the criterion of Freedom of Choice, the approach has been highly successful. Parents and students have many more choices among both public schools and independent schools than they had prior to the voucher system.
- On the criterion of productive efficiency, the research studies show virtually no difference in achievement between public and independent schools for comparable students. Measures of the extent of competition in local areas also show a trivial relation to achievement. The best study measures the potential choices, public and private, within a particular geographical area. For a 10 percent increase in choices, the achievement difference is about one-half of a percentile. Even this result must be understood within the constraint that the achievement measure is not based upon standardized tests, but upon teacher grades. The so-called national examination result that is also used in some studies is actually administered and graded by the teacher with examination copies available to the school principal and teachers well in advance of the “testing”. Another study found no difference in these achievement measures between public and private schools, but an overall achievement effect for the system of a few percentiles. Even this author agreed that the result was trivial.
- With respect to equity, a comprehensive, national study sponsored by the government found that socio-economic stratification had increased as well as ethnic and immigrant segregation. This also affected the distribution of personnel where the better qualified educators were drawn to schools with students of higher socio-economic status and native students. The international testing also showed rising variance or inequality in test scores among schools. No evidence existed to challenge the rising inequality. Accordingly, I rated the Swedish voucher system as negative on equity.
A recent Swedish study on the effects of school-choice concluded:
The results from the analyses made in this paper confirm that school choice, rather than residential segregation, is the more important factor determining variation in grades.
The empirical analysis in this paper confirms the PISA-based finding that between-school variance in student performance in the Swedish school system has increased rapidly since 2000. We have also been able to show that this trend towards increasing performance gaps cannot be explained by shifting patterns of residential segregation. A more likely explanation is that increasing possibilities for school choice have triggered a process towards a more unequal school system. A rapid growth in the number of students attending voucher-financed, independent schools has been an important element of this process …
The idea of voucher-based independent school choice is commonly ascribed to Milton Friedman. Friedman’s argument was that vouchers would decrease the role of government and expand the opportunities for free enterprise. He also believed that the introduction of competition would lead to improved school results. As we have seen in the Swedish case, this has not happened. As school choice has increased, differences between schools have increased but overall results have gone down. As has proved to be the case with other neo-liberal ideas, school choice—when tested—has not been able to deliver the results promised by theoretical speculation.
What have we learned from this expensive experiment? School education should be publicly funded and provided by public schools and not by companies driven by profit interests!
For more on my own take on this issue — only in Swedish, sorry — see here and here.
Simone de Beauvoir — a pedophilia supporter?
26 Jan, 2020 at 17:08 | Posted in Politics & Society | 3 Comments
It has to be said that Beauvoir’s interest in these matters was not purely theoretical … She was dismissed from her teaching job in 1943 for “behavior leading to the corruption of a minor.” The minor in question was one of her pupils at a Paris lycée. It is well established that she and Jean-Paul Sartre developed a pattern, which they called the “trio,” in which Beauvoir would seduce her students and then pass them on to Sartre …
Beauvoir’s “Lolita Syndrome” … offers an evangelical defence of the sexual emancipation of the young … Beauvoir posits Bardot as the incarnation of “authenticity” and natural, pure “desire,” with “aggressive” sexuality devoid of any hypocrisy. The author of “The Second Sex” is keen to stress sexual equality and autonomy, but she also insists on the “charms of the ‘nymph’ in whom the fearsome image of the wife and the mother is not yet visible.”
The pretence-of-knowledge syndrome
26 Jan, 2020 at 14:05 | Posted in Economics | 3 Comments
The reaction of human beings to the truly unknown is fundamentally different from the way they deal with the risks associated with a known situation and environment … In realistic, real-time settings, both economic agents and researchers have a very limited understanding of the mechanisms at work … In trying to add a degree of complexity to the current core models, by bringing in aspects of the periphery, we are simultaneously making the rationality assumptions behind that core approach less plausible …
The challenges are big, but macroeconomists can no longer continue playing internal games … I suspect that whatever the solution ultimately is, we will accelerate our convergence to it, and reduce the damage we do along the transition, if we focus on reducing the extent of our pretense-of-knowledge syndrome.
Caballero’s article underlines — especially when it comes to forecasting and implementing economic policies — that the future is inherently unknowable, and that using statistics, econometrics, decision theory or game theory does not in the least overcome this ontological fact.
Uncertainty is something that has to be addressed and not only assumed away. To overcome the feeling of hopelessness when confronting ‘unknown unknowns’, it is important — in economics in particular — to incorporate Keynes’s far-reaching and incisive analysis of induction and evidential weight in A Treatise on Probability (1921).
According to Keynes we live in a world permeated by unmeasurable uncertainty – not quantifiable stochastic risk – which often forces us to make decisions based on anything but “rational expectations.” Keynes rather thinks that we base our expectations on the confidence or “weight” we put on different events and alternatives. To Keynes, expectations are a question of weighing probabilities by “degrees of belief,” beliefs that often have precious little to do with the kind of stochastic probabilistic calculations made by the rational agents as modelled by “modern” social sciences. And often we “simply do not know.”
How strange that social scientists and mainstream economists, as a rule, do not even touch upon these aspects of scientific methodology, which seem so fundamental and important for anyone trying to understand how we learn and orient ourselves in an uncertain world. An educated guess as to why this is so would be that Keynes’s concepts cannot be squeezed into a single calculable numerical “probability.” In the quest for measurable quantities, one turns a blind eye to qualities and looks the other way.
So why do economists, companies and governments continue with the expensive, but obviously worthless, activity of trying to forecast/predict the future?
Some time ago yours truly was interviewed by a public radio journalist working on a series on Great Economic Thinkers. We were discussing the monumental failures of the predictions-and-forecasts business. But — the journalist asked — if these cocksure economists with their “rigorous” and “precise” mathematical-statistical-econometric models are so wrong again and again — why do they persist in wasting time on it?
In a discussion on uncertainty and the hopelessness of accurately modelling what will happen in the real world — in M. Szenberg’s Eminent Economists: Their Life Philosophies — Nobel laureate Kenneth Arrow comes up with what is probably the most plausible reason:
It is my view that most individuals underestimate the uncertainty of the world. This is almost as true of economists and other specialists as it is of the lay public. To me our knowledge of the way things work, in society or in nature, comes trailing clouds of vagueness … Experience during World War II as a weather forecaster added the news that the natural world was also unpredictable.
An incident illustrates both uncertainty and the unwillingness to entertain it. Some of my colleagues had the responsibility of preparing long-range weather forecasts, i.e., for the following month. The statisticians among us subjected these forecasts to verification and found they differed in no way from chance. The forecasters themselves were convinced and requested that the forecasts be discontinued. The reply read approximately like this: ‘The Commanding General is well aware that the forecasts are no good. However, he needs them for planning purposes.’
Uncertainty in economics
25 Jan, 2020 at 16:42 | Posted in Economics | 3 Comments
Not accounting for uncertainty may result in severe confusion about what we do indeed understand about the economy. In the financial crisis of 2007/2008 the demon has lashed out at this ignorance and challenged the credibility of the whole economic community by laying bare economists’ incapability to prevent the crisis …
Economics itself cannot be regarded as a purely analytical science. It has the amazing and exciting property of shaping the object of its own analysis. This feature clearly distinguishes it from physics, chemistry, archaeology and many other sciences. While biologists, chemists, engineers, physicists and many more are very able to transform whole societies by their discoveries and inventions — like penicillin or the internet — the laws of nature they study remain unaffected by these inventions. In economics, this constancy of the object under study simply does not exist.
The financial crisis of 2007-2008 hit most laymen and economists with surprise. What was it that went wrong with our macroeconomic models, since they obviously did not foresee the collapse or even make it conceivable?
There are many who have ventured to answer that question. And they have come up with a variety of answers, ranging from the exaggerated mathematization of economics, to irrational and corrupt politicians.
But the root of our problem goes much deeper. It ultimately goes back to how we look upon the data we are handling. In ‘modern’ macroeconomics — Dynamic Stochastic General Equilibrium, New Synthesis, New Classical and New ‘Keynesian’ — variables are treated as if drawn from a known “data-generating process” that unfolds over time and on which we therefore have access to heaps of historical time-series. If we do not assume that we know the ‘data-generating process’ – if we do not have the ‘true’ model – the whole edifice collapses. And of course it has to. I mean, who honestly believes that we should have access to this mythical Holy Grail, the data-generating process?
‘Modern’ macroeconomics obviously did not anticipate the enormity of the problems that unregulated ‘efficient’ financial markets created. Why? Because it builds on the myth of us knowing the ‘data-generating process’ and that we can describe the variables of our evolving economies as drawn from an urn containing stochastic probability functions with known means and variances.
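To see how much hangs on the assumed 'data-generating process', here is a minimal sketch; the AR(1) setup, the coefficient values and the structural break are illustrative assumptions of mine, not anyone's actual model. A modeller who estimates a supposedly known DGP on part of the sample and trusts the implied forecast intervals finds that nominal coverage says little about actual coverage once the world changes.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400

# The 'true' world (unknown to the modeller): an AR(1) whose shock variance
# quadruples halfway through the sample, i.e. a structural break.
y = np.zeros(n)
for t in range(1, n):
    sigma = 0.5 if t < n // 2 else 2.0
    y[t] = 0.9 * y[t - 1] + rng.normal(0, sigma)

# The modeller assumes a fixed, known data-generating process and estimates it
# on the first half of the sample only.
first = y[: n // 2]
phi = np.sum(first[1:] * first[:-1]) / np.sum(first[:-1] ** 2)   # OLS AR(1) coefficient
resid_sd = np.std(first[1:] - phi * first[:-1])

# One-step-ahead 95% 'forecast intervals' built on that assumption, evaluated
# on the second half of the sample, where the assumed DGP no longer holds.
second = y[n // 2:]
errors = second[1:] - phi * second[:-1]
coverage = np.mean(np.abs(errors) < 1.96 * resid_sd)
print(f"nominal coverage: 0.95   actual coverage: {coverage:.2f}")
```

The intervals that are supposed to contain 95 percent of the outcomes end up containing far fewer, because the 'urn' the modeller assumed was fixed has quietly been swapped for another one.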
Describing the economy this way is like saying that you are going on a holiday trip and that you know that the chance of the weather being sunny is at least 30%, and that this is enough for you to decide whether to bring your sunglasses along or not. You are supposed to be able to calculate the expected utility based on the given probability of sunny weather and make a simple either-or decision. Uncertainty is reduced to risk.
But as Keynes convincingly argued in his monumental Treatise on Probability (1921), this is not always possible. Often we simply do not know. According to one model the chance of sunny weather is perhaps somewhere around 10% and according to another – equally good – model the chance is perhaps somewhere around 40%. We cannot put exact numbers on these assessments. We cannot calculate means and variances. There are no given probability distributions that we can appeal to.
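A toy calculation may make the contrast between risk and uncertainty plainer. The action labels and utility numbers below are invented purely for illustration: with one agreed probability, expected utility singles out a unique choice; with two equally good models giving different probabilities, the recommended choice flips and no calculation can settle the matter.

```python
# Decision 'under risk': one agreed probability of sun lets you compute
# expected utilities and read off a unique 'rational' choice.
# (All utility numbers are invented for illustration.)
u = {"bring sunglasses": {"sun": 1.0, "rain": 0.6},
     "leave them home":  {"sun": 0.4, "rain": 0.8}}

def best_action(p_sun):
    eu = {a: p_sun * out["sun"] + (1 - p_sun) * out["rain"] for a, out in u.items()}
    return max(eu, key=eu.get), eu

print(best_action(0.30))   # the textbook case: risk, one number, one answer

# Keynesian uncertainty: two equally good models give different probabilities,
# the recommended action flips, and there is no warranted way to average them
# into a single number.
for p in (0.10, 0.40):
    print(p, best_action(p)[0])
```

With p = 0.10 the calculation tells you to leave the sunglasses at home, with p = 0.40 it tells you to bring them, and nothing in the data tells you which of the two models to trust.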
In the end this is what it all boils down to. We all know that many activities, relations, processes and events are of the Keynesian uncertainty-type. The data do not unequivocally single out one decision as the only ‘rational’ one. Neither the economist, nor the deciding individual, can fully pre-specify how people will decide when facing uncertainties and ambiguities that are ontological facts of the way the world works.
Some macroeconomists, however, still want to be able to use their hammer. So they decide to pretend that the world looks like a nail, and pretend that uncertainty can be reduced to risk. So they construct their mathematical models on that assumption. The result: financial crises and economic havoc.
How much better it would be — and how much greater the chance that we do not lull ourselves into the comforting thought that we know everything, that everything is measurable and that we have everything under control — if instead we could just admit that we often simply do not know, and that we have to live with that uncertainty as best we can.
Fooling people into believing that one can cope with an unknown economic future in a way similar to playing at the roulette wheels, is a sure recipe for only one thing — economic disaster.
A kiss with a taste of oil-mixed petrol and grease
25 Jan, 2020 at 12:45 | Posted in Varia | Comments Off on A kiss with a taste of oil-mixed petrol and grease
Model and reality in economics
24 Jan, 2020 at 17:21 | Posted in Economics | Comments Off on Model and reality in economics
Economics is, more than any other social science, model-oriented. There are many reasons for this — the history of the discipline, ideals imported from the natural sciences, claims to universality, the ambition to explain as much as possible with as little as possible, rigour, precision, and so on.
The approach is basically analytical — the whole is broken down into its constituent parts so that it becomes possible to explain (reduce) the aggregate (macro) as the result of interaction between the parts (micro).
Mainstream economists usually base their models on a set of core assumptions (CA) — basically describing the agents as ‘rational’ — together with a set of auxiliary assumptions (AA). Together, (CA) and (AA) make up what we may call the ‘base model’ (M) of all mainstream models. On the basis of these two sets of assumptions, one tries to explain and predict both individual (micro) and social (macro) phenomena.
The core assumptions typically consist of:
CA1 Completeness — the rational agent is always able to compare different alternatives and decide which one she prefers
CA2 Transitivity — if the agent prefers A to B, and B to C, she must prefer A to C
CA3 Non-satiation — more is always better than less
CA4 Maximization of expected utility — in situations characterized by risk, the agent always maximizes expected utility
CA5 Consistent economic equilibrium — the actions of different agents are consistent, and the interaction between them results in an equilibrium
When the agents in these models are described as rational, what is meant is instrumental rationality: agents are assumed to choose the alternatives that have the best consequences given their preferences. How these given preferences have arisen is usually regarded as lying outside the ‘scope’ of the concept of rationality, and hence as not being part of economic theory proper.
The picture the core assumptions (‘rational choice’) give us is of a rational agent with strong cognitive capacities, who knows what she wants, carefully considers her alternatives and, given her preferences, chooses what she believes will have the best consequences for her. Weighing the different alternatives against each other, the agent makes a consistent, rational choice and acts accordingly.
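For readers who want the axioms spelled out operationally, the following small sketch (with a made-up preference ranking and made-up lotteries) shows what CA1, CA2 and CA4 amount to as simple checks and as a decision rule.

```python
from itertools import combinations, permutations

# A stylized (made-up) preference ranking over three alternatives;
# a higher number means 'more preferred'.
ranking = {"A": 3, "B": 2, "C": 1}
prefers = lambda x, y: ranking[x] > ranking[y]

# CA1 Completeness: any two alternatives can be compared (or are indifferent).
complete = all(prefers(x, y) or prefers(y, x) or ranking[x] == ranking[y]
               for x, y in combinations(ranking, 2))

# CA2 Transitivity: x preferred to y and y to z must imply x preferred to z.
transitive = all(prefers(x, z) or not (prefers(x, y) and prefers(y, z))
                 for x, y, z in permutations(ranking, 3))

# CA4 Expected-utility maximization over lotteries with known probabilities.
lotteries = {"safe":  [(1.0, 2.0)],              # (probability, utility) pairs
             "risky": [(0.5, 5.0), (0.5, 0.0)]}
expected = {name: sum(p * util for p, util in outcomes)
            for name, outcomes in lotteries.items()}

print("complete:", complete, " transitive:", transitive)
print("expected utilities:", expected, " chosen:", max(expected, key=expected.get))
```

The point of the sketch is only that the core assumptions are formal consistency requirements on given preferences and given probabilities; everything substantive about where those preferences and probabilities come from is pushed into the auxiliary assumptions described next.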
The auxiliary assumptions (AA) specify, spatially and temporally, what kind of interaction can take place between the ‘rational’ agents. The assumptions typically answer questions such as:
AA1 who the agents are, and where and when they interact
AA2 what their goals and aspirations are
AA3 what interests they have
AA4 what their expectations are
AA5 what scope for action they have
AA6 what kinds of agreements they can enter into
AA7 how much, and what kind of, information they possess
AA8 how their actions interact with one another
So the ‘base model’ of all mainstream models consists of a general specification of what (axiomatically) constitutes optimizing rational agents (CA), together with a more specific description (AA) of the kinds of situations in which these agents act (which means that AA functions as a restriction determining the intended domain of application of CA and of the theorems deductively derived from it). The list of assumptions can never be complete, since there are always also unspecified ‘background assumptions’ and unmentioned omissions (such as transaction costs, closure conditions and the like, often based on some kind of negligibility or applicability considerations). The hope is that this ‘thin’ set of assumptions will suffice to explain and predict the ‘thick’ phenomena of the real, complex world.
The intellectual regress of macroeconomics
21 Jan, 2020 at 17:45 | Posted in Economics | 1 Comment
Real business cycle theory — RBC — is one of the theories that has put macroeconomics on a path of intellectual regress for three decades now. And although there are many kinds of useless ‘post-real’ economics held in high regard within the mainstream economics establishment today, few — if any — deserve that regard less than real business cycle theory.
The future is not reducible to a known set of prospects. It is not like sitting at the roulette table and calculating what the future outcomes of spinning the wheel will be. So instead of — as RBC economists do — assuming calibration and rational expectations to be right, one ought to confront the hypothesis with the available evidence. It is not enough to construct models. Anyone can construct models. To be seriously interesting, models have to come with an aim. They have to have an intended use. If the intention of calibration and rational expectations is to help us explain real economies, it has to be evaluated from that perspective. A model or hypothesis without specific applicability is not really deserving of our interest.
Without strong evidence, all kinds of absurd claims and nonsense may pretend to be science. We have to demand more of a justification than rather watered-down versions of ‘anything goes’ when it comes to rationality postulates. If one proposes rational expectations one also has to support its underlying assumptions. None is given by RBC economists, which makes it rather puzzling how rational expectations has become the standard modelling assumption made in much of modern macroeconomics. Perhaps the reason is that economists often mistake mathematical beauty for truth.
In the hands of Lucas, Prescott and Sargent, rational expectations have been transformed from an — in-principle — testable hypothesis to an irrefutable proposition. Believing in a set of irrefutable propositions may be comfortable – like religious convictions or ideological dogmas – but it is not science.
So where does this all lead us? What is the trouble ahead for economics? Putting a sticky-price DSGE lipstick on the RBC pig sure won’t do. Neither will — as Paul Romer noticed — just looking the other way and pretending it’s raining:
The trouble is not so much that macroeconomists say things that are inconsistent with the facts. The real trouble is that other economists do not care that the macroeconomists do not care about the facts. An indifferent tolerance of obvious error is even more corrosive to science than committed advocacy of error.
The documentary that cleanses my soul (personal)
21 Jan, 2020 at 17:22 | Posted in Varia | Comments Off on The documentary that cleanses my soul (personal)
In every modern person's life there needs to be time for breathing space and reflection. And sometimes — when all the possible and impossible musts and demands from the world around us just become too many and too loud — it can be nice to withdraw a little and slow down for a while.
We all have our own ways of doing this. Personally, I go to Öppet Arkiv and watch Gubben i stugan — Nina Hedenius's wonderful documentary about the life of Ragnar, a retired forestry worker, in the Finn forests of Dalarna.
Simple. Beautiful. A balm for the soul.
On causality and econometrics
20 Jan, 2020 at 11:45 | Posted in Statistics & Econometrics | Comments Off on On causality and econometrics
The point is that a superficial analysis, which only looks at the numbers, without attempting to assess the underlying causal structures, cannot lead to a satisfactory data analysis … We must go out into the real world and look at the structural details of how events occur … The idea that the numbers by themselves can provide us with causal information is false. It is also false that a meaningful analysis of data can be done without taking any stand on the real-world causal mechanism … These issues are of extreme importance with reference to Big Data and Machine Learning. Machines cannot expend shoe leather, and enormous amounts of data cannot provide us knowledge of the causal mechanisms in a mechanical way. However, a small amount of knowledge of real-world structures used as causal input can lead to substantial payoffs in terms of meaningful data analysis. The problem with current econometric techniques is that they do not have any scope for input of causal information – the language of econometrics does not have the vocabulary required to talk about causal concepts.
What Asad Zaman tells us in his splendid set of lectures is that causality in the social sciences can never be solely a question of statistical inference. Causality entails more than predictability, and really explaining social phenomena in depth requires theory. Analysis of variation — the foundation of all econometrics — can never in itself reveal how these variations are brought about. Only when we are able to tie actions, processes or structures to the statistical relations detected can we say that we are getting at relevant explanations of causation.
Most facts have many different, possible, alternative explanations, but we want to find the best of all contrastive explanations (since all real explanation takes place relative to a set of alternatives). So which is the best explanation? Many scientists, influenced by statistical reasoning, think that the likeliest explanation is the best explanation. But the likelihood of x is not in itself a strong argument for thinking it explains y. I would rather argue that what makes one explanation better than another are things like aiming for and finding powerful, deep, causal features and mechanisms that we have warranted and justified reasons to believe in. Statistical reasoning — especially the variety based on a Bayesian epistemology — generally has no room for these kinds of explanatory considerations. The only thing that matters is the probabilistic relation between evidence and hypothesis. That is also one of the main reasons I find abduction — inference to the best explanation — a better description and account of what constitutes actual scientific reasoning and inference.
Some statisticians and data scientists think that algorithmic formalisms somehow give them access to causality. That is, however, simply not true. Assuming ‘convenient’ things like faithfulness or stability is not to give proofs. It is to assume what has to be proven. Deductive-axiomatic methods used in statistics do not produce evidence for causal inferences. The real causality we are searching for is the one existing in the real world around us. If there is no warranted connection between axiomatically derived theorems and the real world, well, then we haven’t really obtained the causation we are looking for.
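A small simulation illustrates the point that numbers alone cannot settle causal questions. With the illustrative parameter values chosen below, a world where X causes Y and a world where Y causes X generate statistically indistinguishable data, so no amount of correlational evidence can, by itself, tell us which causal structure we are in.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# World 1: X causes Y (coefficients are illustrative assumptions).
x1 = rng.normal(0, 1, n)
y1 = 0.8 * x1 + rng.normal(0, 0.6, n)

# World 2: Y causes X, with parameters chosen to reproduce the same joint moments.
y2 = rng.normal(0, 1, n)
x2 = 0.8 * y2 + rng.normal(0, 0.6, n)

# Both worlds deliver (essentially) the same correlations and variances:
# the numbers alone cannot tell us which way the causal arrow points.
for label, (x, y) in {"X -> Y": (x1, y1), "Y -> X": (x2, y2)}.items():
    print(label, f"corr={np.corrcoef(x, y)[0, 1]:.3f}",
          f"var(x)={x.var():.3f}", f"var(y)={y.var():.3f}")
```

Telling the two worlds apart requires exactly the kind of causal input — structural knowledge, interventions, shoe leather — that the data themselves cannot supply.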