## The force from above

15 July 2018, 16:45 | Published in Varia | Comments off for The force from above

## Interview with Esther Duflo

14 July 2018, 09:29 | Published in Economics | Comments off for Interview with Esther Duflo

ZEITmagazin: I could tell you about similar absurdities in the German education system.

Duflo: And certainly not only from the schools. That is why we have exported our method from the developing countries to Europe and the United States. Many EU countries, for example, pay for job-interview training for young unemployed people. We have run randomized experiments: the unemployed who receive the training do indeed find a job faster — but only as long as not all applicants in a given place get the training. So if interview training becomes a standard programme, it is quite simply money thrown away.

ZEITmagazin: What should be done instead?

Duflo: The sad answer is: at present we have no good programmes for helping the unemployed find jobs.

ZEITmagazin: And your experiments alone will not find any either, since what your trials show, first of all, is everything that does not work.

Duflo: The method is useful nonetheless. Before you can help effectively, you first have to go down many blind alleys. And in doing so you come to understand the connections better and better over time. Thus one experiment leads to the next, and in the end a helpful solution sometimes emerges after all.

## Regression analysis — a case of wishful thinking

13 July 2018, 18:34 | Published in Statistics & Econometrics | Comments off for Regression analysis — a case of wishful thinking

The impossibility of proper specification is true generally in regression analyses across the social sciences, whether we are looking at the factors affecting occupational status, voting behavior, etc. The problem is that, as implied by the conditions for regression analyses to yield accurate, unbiased estimates, you need to investigate a phenomenon that has underlying mathematical regularities — and, moreover, you need to know what they are. Neither seems true. I have no reason to believe that the way in which multiple factors affect earnings, student achievement, and GNP has some underlying mathematical regularity across individuals or countries. More likely, each individual or country has a different function, and one that changes over time. Even if there were some constancy, the processes are so complex that we have no idea of what the function looks like.

Researchers recognize that they do not know the true function and seem to treat, usually implicitly, their results as a good-enough approximation. But there is no basis for the belief that the results of what is run in practice are anything close to the underlying phenomenon, even if there is an underlying phenomenon. This just seems to be wishful thinking. Most regression analysis research doesn’t even pay lip service to theoretical regularities. But you can’t just regress anything you want and expect the results to approximate reality. And even when researchers take somewhat seriously the need to have an underlying theoretical framework – as they have, at least to some extent, in the examples of studies of earnings, educational achievement, and GNP that I have used to illustrate my argument – they are so far from the conditions necessary for proper specification that one can have no confidence in the validity of the results.

The theoretical conditions that have to be fulfilled for regression analysis and econometrics to really work are nowhere even closely met in reality. Making outlandish statistical assumptions does not provide a solid ground for doing relevant social science and economics. Although regression analysis and econometrics have become the most used quantitative methods in the social sciences and economics today, it is still a fact that the inferences made from them are — strictly speaking — invalid.

## In need of medication

13 July 2018, 13:42 | Published in Politics & Society | 1 comment

In Trump-era US, reality sure beats fiction in the worst way possible, day after day.

## The main reason why almost all econometric models are wrong

13 July 2018, 09:33 | Published in Statistics & Econometrics | 3 comments

How come econometrics and statistical regression analyses still have not taken us very far in discovering, understanding, or explaining causation in socio-economic contexts? That is the question yours truly has tried to answer in an article published in the latest issue of World Economic Association Commentaries:

The processes that generate socio-economic data in the real world cannot just be assumed to always be adequately captured by a probability measure. And, so, it cannot be maintained that it even should be mandatory to treat observations and data — whether cross-section, time series or panel data — as events generated by some probability model. The important activities of most economic agents do not usually include throwing dice or spinning roulette-wheels. Data generating processes — at least outside of nomological machines like dice and roulette-wheels — are not self-evidently best modelled with probability measures.

When economists and econometricians — often uncritically and without arguments — simply assume that one can apply probability distributions from statistical theory to their own area of research, they are really skating on thin ice. If you cannot show that the data satisfy *all* the conditions of the probabilistic nomological machine, then the statistical inferences made in mainstream economics lack sound foundations.

Statistical — and econometric — patterns should never be seen as anything other than possible clues to follow. Behind observable data, there are real structures and mechanisms operating, things that are — if we really want to understand, explain and (possibly) predict things in the real world — more important to get hold of than simply correlating and regressing observable variables.

Statistics cannot establish the truth value of a fact. Never has. Never will.

## Everything — and a little more — you want to know about causality

12 July 2018, 18:00 | Published in Theory of Science & Methodology | 1 comment

Rolf Sandahl and Gustav Jakob Petersson’s *Kausalitet: i filosofi, politik och utvärdering* is an exceptionally well-written and readable survey of the most influential theories of causality in use in science today.

Take it up and read!

In the positivist (hypothetico-deductive, deductive-nomological) model of explanation, an explanation is understood as a subsumption or derivation of specific phenomena from universal regularities. To explain a phenomenon (explanandum) is the same as deducing a description of it from a set of premises and universal laws of the type ”If A, then B” (explanans). To explain simply means to be able to subsume something under a definite law-like regularity, and the approach is therefore sometimes also called the ”covering-law model”. But the theories are not to be used to explain specific individual phenomena; rather, they explain the universal regularities that enter into a hypothetico-deductive explanation. The positivist model of explanation also comes in a weaker variant: the probabilistic variant, according to which explaining in principle means showing that the probability of an event B is very high if event A occurs. In the social sciences this variant dominates. From a methodological point of view, this probabilistic relativization of the positivist approach to explanation makes no great difference.

The original idea behind the positivist model of explanation was that it would (1) give a complete clarification of what an explanation is and show that an explanation that did not meet its requirements was in fact a pseudo-explanation, (2) provide a method for testing explanations, and (3) show that explanations in accordance with the model are the goal of science. All three claims can obviously be questioned on good grounds.

An important reason why this model has had such an impact in science is that it appeared to be able to explain things without having to use ”metaphysical” causal concepts. Many scientists regard causality as a problematic concept, one best avoided. Simple, observable quantities are supposed to suffice. The only problem is that specifying these quantities and their possible correlations does not explain anything at all. That union representatives often appear in grey jackets and employer representatives in pinstriped suits does not explain why youth unemployment in Sweden is so high today. What is missing in these ”explanations” is the necessary adequacy, relevance, and causal depth without which science risks becoming empty science fiction and model-play for the sake of play.

Many social scientists seem convinced that for research to count as science it must apply some variant of the hypothetico-deductive method. Out of reality’s complicated swirl of facts and events, one is supposed to sift out some shared law-like correlations that can serve as explanations. In parts of social science, this ambition to reduce explanations of social phenomena to a few general principles or laws has been an important driving force. With the help of a few general assumptions, one wants to explain what the entire macro-phenomenon we call a society amounts to. Unfortunately, no really tenable arguments are given for why the fact that a theory can explain different phenomena in a unified way should be a decisive reason for accepting or preferring it. Unification and adequacy are not the same thing.

## Hard and soft science — a flawed dichotomy

11 July 2018, 19:08 | Published in Theory of Science & Methodology | 1 comment

The distinctions between hard and soft sciences are part of our culture … But the important distinction is really not between the hard and the soft sciences. Rather, it is between the hard and the easy sciences. Easy-to-do science is what those in physics, chemistry, geology, and some other fields do. Hard-to-do science is what the social scientists do and, in particular, it is what we educational researchers do. In my estimation, we have the hardest-to-do science of them all! We do our science under conditions that physical scientists find intolerable. We face particular problems and must deal with local conditions that limit generalizations and theory building — problems that are different from those faced by the easier-to-do sciences …

Huge context effects cause scientists great trouble in trying to understand school life … A science that must always be sure the myriad particulars are well understood is harder to build than a science that can focus on the regularities of nature across contexts …

Doing science and implementing scientific findings are so difficult in education because humans in schools are embedded in complex and changing networks of social interaction. The participants in those networks have variable power to affect each other from day to day, and the ordinary events of life (a sick child, a messy divorce, a passionate love affair, migraine headaches, hot flashes, a birthday party, alcohol abuse, a new principal, a new child in the classroom, rain that keeps the children from a recess outside the school building) all affect doing science in school settings by limiting the generalizability of educational research findings. Compared to designing bridges and circuits or splitting either atoms or genes, the science to help change schools and classrooms is harder to do because context cannot be controlled.

Amen!

When applying deductivist thinking to economics, mainstream economists set up their easy-to-do ‘as if’ models based on a set of tight axiomatic assumptions from which consistent and precise inferences are made. The beauty of this procedure is, of course, that if the axiomatic premises are true, the conclusions necessarily follow. The snag is that if the models are to be real-world relevant, we also have to argue that their precision and rigour still holds when they are applied to real-world situations. They often do not, and one of the main reasons for that is that context matters. When addressing real-world systems, the idealizations and abstractions necessary for the deductivist machinery to work simply do not hold.

If the real world is fuzzy, vague and indeterminate, then why should our models build upon a desire to describe it as precise and predictable? The logic of idealization is a marvellous tool in an easy-to-do science like physics, but a poor guide for action in real-world systems in which concepts and entities are without clear boundaries and continually interact and overlap.

## Trump’s handshake game

11 July 2018, 08:32 | Published in Varia | Comments off for Trump’s handshake game

## What are axiomatizations good for?

10 July 2018, 19:55 | Published in Economics | 2 comments

Axiomatic decision theory was pioneered in the early 20th century by Ramsey (1926) and de Finetti (1931, 1937), and achieved remarkable success in shaping economic theory … A remarkable amount of economic research is now centered around axiomatic models of decision …

What have these axiomatizations done for us lately? What have we gained from them? Are they leading to advances in economic analysis, or are they perhaps attracting some of the best minds in the field to deal with difficult problems that are of little import? Why is it the case that in other sciences, such as psychology, biology, and chemistry, such axiomatic work is so rarely found? Are we devoting too much time for axiomatic derivations at the expense of developing theories that fit the data?

This paper addresses these questions … Section 4 provides our response, namely that axiomatic derivations are powerful rhetorical devices …

‘Powerful rhetorical devices’? What an impressive achievement indeed …

Some of us have for years been urging economists to pay attention to the ontological foundations of their assumptions and models. Sad to say, economists have not paid much attention — and so modern economics has become increasingly irrelevant to the understanding of the real world.

Within mainstream economics, internal validity is still everything and external validity nothing. Why anyone should be interested in those kinds of theories and models is beyond imagination. As long as mainstream economists do not come up with any export licenses for their theories and models to the real world in which we live, they really should not be surprised if people say that this is not science, but autism!

Studying mathematics and logic is interesting and fun. It sharpens the mind. In pure mathematics and logic, we do not have to worry about external validity. But economics is not pure mathematics or logic. It is about society. The real world. Forgetting that, economics is really in dire straits.

Mathematical axiomatic systems lead to analytic truths, which do not require empirical verification, since they are true by virtue of definitions and logic. It is a startling discovery of the twentieth century that sufficiently complex axiomatic systems are undecidable and incomplete. That is, the system of theorem and proof can never lead to ALL the true sentences about the system, and ALWAYS contain statements which are undecidable – their truth values cannot be determined by proof techniques. More relevant to our current purpose is that applying an axiomatic hypothetico-deductive system to the real world can only be done by means of a mapping, which creates a model for the axiomatic system. These mappings then lead to assertions about the real world which require empirical verification. These assertions (which are proposed scientific laws) can NEVER be proven in the sense that mathematical theorems can be proven …

Many more arguments can be given to explain the difference between analytic and synthetic truths, which corresponds to the difference between mathematical and scientific truths … The scientific method arose as a rejection of the axiomatic method used by the Greeks for scientific methodology. It was this rejection of axiomatics and logical certainty in favour of empirical and observational approach which led to dramatic progress in science. However, this did involve giving up the certainties of mathematical argumentation and learning to live with the uncertainties of induction. Economists need to do the same – abandon current methodology borrowed from science and develop a new methodology suited for the study of human beings and societies.

## The core problem with ‘New Keynesian’ macroeconomics

10 July 2018, 11:51 | Published in Economics | 3 comments

Whereas the Great Depression of the 1930s produced Keynesian economics, and the stagflation of the 1970s produced Milton Friedman’s monetarism, the Great Recession has produced no similar intellectual shift.

This is deeply depressing to young students of economics, who hoped for a suitably challenging response from the profession. Why has there been none?

Krugman’s answer is typically ingenious: the old macroeconomics was, as the saying goes, “good enough for government work” … Krugman is a New Keynesian, and his essay was intended to show that the Great Recession vindicated standard New Keynesian models. But there are serious problems with Krugman’s narrative …

Since the New Keynesian models did not offer a sufficient basis for maintaining Keynesian policies once the economic emergency had been overcome, they were quickly abandoned …

The problem for New Keynesian macroeconomists is that they fail to acknowledge radical uncertainty in their models, leaving them without any theory of what to do in good times in order to avoid the bad times. Their focus on nominal wage and price rigidities implies that if these factors were absent, equilibrium would readily be achieved …

Without acknowledgement of uncertainty, saltwater economics is bound to collapse into its freshwater counterpart. New Keynesian “tweaking” will create limited political space for intervention, but not nearly enough to do a proper job.

Skidelsky’s article shows why we all ought to be sceptical of the pretences and aspirations of ‘New Keynesian’ macroeconomics. So far it has been impossible to see that it has yielded very much in terms of *realist* and *relevant* economic knowledge. And — as if that wasn’t enough — there’s nothing new or Keynesian about it!

‘New Keynesianism’ doesn’t have its roots in Keynes. It has its intellectual roots in Paul Samuelson’s ill-founded ‘neoclassical synthesis’ project, whereby he thought he could save the ‘classical’ view of the market economy as a (long run) self-regulating market clearing equilibrium mechanism, by adding some (short run) frictions and rigidities in the form of sticky wages and prices.

But — putting a sticky-price lipstick on the ‘classical’ pig sure won’t do. The ‘New Keynesian’ pig is still neither Keynesian nor new.

The rather one-sided emphasis on usefulness and its concomitant instrumentalist justification cannot hide the fact that ‘New Keynesians’ cannot give supportive evidence for considering it fruitful to analyze macroeconomic structures and events as the aggregated result of optimizing representative actors. After having analyzed some of its ontological and epistemological foundations, yours truly cannot but conclude that ‘New Keynesian’ macroeconomics, on the whole, has not delivered anything other than ‘as if’ unreal and irrelevant models.

The purported strength of New Classical and ‘New Keynesian’ macroeconomics is that they have firm anchorage in preference-based microeconomics, and especially the decisions taken by inter-temporal utility maximizing ‘forward-looking’ individuals.

To some of us, however, this has come at too high a price. The almost quasi-religious insistence that macroeconomics has to have microfoundations — without ever presenting either ontological or epistemological justifications for this claim — has turned a blind eye to the weakness of the whole enterprise of trying to depict a complex economy based on an all-embracing representative actor equipped with superhuman knowledge, forecasting abilities and forward-looking rational expectations. It is as if these economists want to resurrect the omniscient Walrasian auctioneer in the form of all-knowing representative actors equipped with rational expectations and assumed to somehow know the true structure of our model of the world.

And then, of course, there is that weird view on unemployment that makes you wonder on which planet those ‘New Keynesians’ live …

## What instrumental variables analysis is all about

9 July 2018, 18:11 | Published in Statistics & Econometrics | 2 comments

## Aghni Parthene

9 July 2018, 09:41 | Published in Varia | Comments off for Aghni Parthene

## Om vi hade en dag (personal)

9 July 2018, 00:01 | Published in Varia | Comments off for Om vi hade en dag (personal)

## USA today

8 July 2018, 09:03 | Published in Politics & Society | 1 comment

The reckless, untruthful, outrageous, incompetent & undignified buffoon Donald Trump is debasing the nation day after day.

A grandmother — Liz DeCou — gets arrested in California for attempting to deliver toys and books to migrant children separated from their parents at the border.

Sickening to see how decent people are being treated.

## The randomistas revolution

7 July 2018, 10:54 | Published in Statistics & Econometrics | 2 comments

In his new history of experimental social science — *Randomistas: How radical researchers are changing our world* — Andrew Leigh gives an introduction to the RCT (randomized controlled trial) method for conducting experiments in medicine, psychology, development economics, and policy evaluation. Although he mentions that there are criticisms that can be levelled against it, the author does not let that overshadow his overwhelmingly enthusiastic view of RCTs.

Among mainstream economists, this uncritical attitude towards RCTs has become standard. Nowadays many mainstream economists maintain that ‘imaginative empirical methods’ — such as natural experiments, field experiments, lab experiments, RCTs — can help us to answer questions concerning the external validity of economic models. In their view, they are more or less tests of ‘an underlying economic model’ and enable economists to make the right selection from the ever-expanding ‘collection of potentially applicable models.’

When looked at carefully, however, there are in fact few real reasons to share this optimism on the alleged ’empirical turn’ in economics.

If we see experiments or field studies as theory tests or models that ultimately aspire to say something about the real ‘target system,’ then the problem of external validity is central (and was for a long time also a key reason why behavioural economists had trouble getting their research results published).

Assume that you have examined how the performance of a group of people (A) is affected by a specific ‘treatment’ (B). How can we extrapolate/generalize to new samples outside the original population? How do we know that any replication attempt ‘succeeds’? How do we know when these replicated experimental results can be said to justify inferences made in samples from the original population? If, for example, P(A|B) is the conditional density function for the original sample, and we are interested in doing an extrapolative prediction of E[P(A|B)], how can we know that the new sample’s density function is identical with the original? Unless we can give some really good argument for this being the case, inferences built on P(A|B) do not really say anything about the target system’s P'(A|B).

External validity/extrapolation/generalization is founded on the assumption that we can make inferences based on P(A|B) that are exportable to other populations for which P'(A|B) applies. Sure, if one can convincingly show that P and P' are similar enough, the problems are perhaps surmountable. But arbitrarily introducing functional specification restrictions of the type invariance/stability/homogeneity is, at least for an epistemological realist, far from satisfactory. And, unfortunately, this is often exactly what I see when I look at mainstream economists’ RCTs and ‘experiments.’
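The extrapolation problem can be illustrated with a small simulation. All numbers here are invented for the sketch: a ‘treatment’ whose causal effect depends on a background trait Z yields a positive average effect in a study population where Z is common, and a negative one in a target population where Z is rare.

```python
import numpy as np

rng = np.random.default_rng(42)

def average_effect(p_z, n=100_000):
    """Average treatment effect in a population where a share p_z
    carries background trait Z (effect +2.0 with Z, -1.0 without)."""
    z = rng.random(n) < p_z
    effect = np.where(z, 2.0, -1.0)  # heterogeneous causal effect
    return effect.mean()

study = average_effect(p_z=0.8)   # study population: Z common
target = average_effect(p_z=0.2)  # target population: Z rare

print(f"study ATE:  {study:+.2f}")   # close to +1.40: 'it works there'
print(f"target ATE: {target:+.2f}")  # close to -0.40: but not here
```

Nothing in the study data alone tells us which population P'(A|B) describes; the export-warrant has to come from causal background knowledge, not from the trial itself.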

Many ‘experimentalists’ claim that it is easy to replicate experiments under different conditions and therefore a fortiori easy to test the robustness of experimental results. But is it really that easy? Population selection is almost never simple. Had the problem of external validity only been about inference from sample to population, this would be no critical problem. But the really interesting inferences are those we try to make from specific labs/experiments/fields to specific real-world situations/institutions/structures that we are interested in understanding or (causally) to explain. And then the population problem is more difficult to tackle.

In randomized trials the researchers try to find out the causal effects that different variables of interest may have by changing circumstances randomly — a procedure somewhat (‘on average’) equivalent to the usual ceteris paribus assumption.

Besides the fact that ‘on average’ is not always ‘good enough,’ it amounts to nothing but hand waving to simpliciter assume, without argumentation, that it is tenable to treat social agents and relations as homogeneous and interchangeable entities.

Randomization is used to basically allow the econometrician to treat the population as consisting of interchangeable and homogeneous groups (‘treatment’ and ‘control’). The regression models one arrives at by using randomized trials tell us the average effect that variations in variable X has on the outcome variable Y, without having to explicitly control for effects of other explanatory variables R, S, T, etc., etc. Everything is assumed to be essentially equal except the values taken by variable X.

In a usual regression context one would apply an ordinary least squares estimator (OLS) in trying to get an unbiased and consistent estimate:

Y = α + βX + ε,

where α is a constant intercept, β a constant ‘structural’ causal effect and ε an error term.

The problem here is that although we may get an estimate of the ‘true’ average causal effect, this may ‘mask’ important heterogeneous effects of a causal nature. Although we may get the right answer that the average causal effect is 0, those who are ‘treated’ (X = 1) may have causal effects equal to −100 and those ‘not treated’ (X = 0) may have causal effects equal to 100. Contemplating being treated or not, most people would probably be interested in knowing about this underlying heterogeneity and would not consider the OLS average effect particularly enlightening.
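A minimal simulation of this ±100 example (the baseline outcome of 50 and the unit-variance noise are invented for the sketch) shows OLS dutifully reporting an average effect near zero while the individual-level effects are anything but:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Individual causal effects: +100 for half the population, -100 for
# the other half, so the average treatment effect is exactly 0.
effect = np.where(rng.random(n) < 0.5, 100.0, -100.0)
x = (rng.random(n) < 0.5).astype(float)      # randomized treatment
y = 50.0 + effect * x + rng.normal(0, 1, n)  # observed outcome

# The OLS slope estimates the *average* causal effect ...
beta = np.polyfit(x, y, 1)[0]
print(f"OLS estimate: {beta:+.2f}")  # close to 0

# ... while saying nothing about the underlying heterogeneity:
print(np.unique(effect))  # [-100.  100.]
```

The regression output is not wrong; it is simply silent on the one thing a person deciding whether to be treated would most want to know.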

Most ‘randomistas’ underestimate the heterogeneity problem. It does not just turn up as an external validity problem when trying to ‘export’ regression results to different times or different target populations. It is also often an internal problem to the millions of regression estimates that economists produce every year.

‘Ideally controlled experiments’ tell us with certainty what causes what effects — but only given the right ‘closures.’ Making appropriate extrapolations from (ideal, accidental, natural or quasi) experiments to different settings, populations or target systems, is not easy. “It works there” is no evidence for “it will work here”. Causes deduced in an experimental setting still have to show that they come with an export-warrant to the target population/system. The causal background assumptions made have to be justified, and without licenses to export, the value of ‘rigorous’ and ‘precise’ methods — and ‘on-average-knowledge’ — is despairingly small.

RCTs have very little reach beyond giving descriptions of what has happened in the past. From the perspective of the future and for policy purposes they are as a rule of limited value since they cannot tell us what background factors were held constant when the trial intervention was being made.

RCTs usually do not provide evidence that the results are exportable to other target systems. RCTs cannot be taken for granted to give generalizable results. That something works *somewhere* for someone is no warranty for us to believe it to work for us *here* or even that it works *generally*. RCTs are simply not the best method for all questions and in all circumstances. And insisting on using only *one* tool often means using the *wrong* tool.

## Econometrics cannot establish the truth value of a fact. Never has. Never will.

6 July 2018, 14:08 | Published in Statistics & Econometrics | 1 comment

There seems to be a pervasive human aversion to uncertainty, and one way to reduce feelings of uncertainty is to invest faith in deduction as a sufficient guide to truth. Unfortunately, such faith is as logically unjustified as any religious creed, since a deduction produces certainty about the real world only when its assumptions about the real world are certain …

Assumption uncertainty reduces the status of deductions and statistical computations to exercises in hypothetical reasoning – they provide best-case scenarios of what we could infer from specific data (which are assumed to have only specific, known problems). Even more unfortunate, however, is that this exercise is deceptive to the extent it ignores or misrepresents available information, and makes hidden assumptions that are unsupported by data …

Econometrics supplies dramatic cautionary examples in which complex modelling has failed miserably in important applications …

Yes, indeed, econometrics fails miserably over and over again. One reason why it does is that the error term in the regression models used is thought of as representing the effect of the variables that were omitted from the models. The error term is somehow thought to be a ‘cover-all’ term representing omitted content in the model and necessary to include to ‘save’ the assumed deterministic relation between the other random variables included in the model. Error terms are usually assumed to be orthogonal (uncorrelated) to the explanatory variables. But since they are unobservable, they are also impossible to test empirically. And without justification of the orthogonality assumption, there is, as a rule, nothing to ensure identifiability:

With enough math, an author can be confident that most readers will never figure out where a FWUTV (facts with unknown truth value) is buried. A discussant or referee cannot say that an identification assumption is not credible if they cannot figure out what it is and are too embarrassed to ask.

Distributional assumptions about error terms are a good place to bury things because hardly anyone pays attention to them. Moreover, if a critic does see that this is the identifying assumption, how can she win an argument about the true expected value of the level of aether? If the author can make up an imaginary variable, ”because I say so” seems like a pretty convincing answer to any question about its properties.
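The orthogonality point can be illustrated with a minimal simulation (the coefficients and the 0.8 correlation are invented for the sketch): when a variable absorbed into the ‘error term’ is correlated with the regressor, OLS is biased, and nothing in the observed data for x and y alone reveals it.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# True model: y = 1.0*x + 1.0*z + noise, but z is omitted and thus
# absorbed into the 'error term'. Since z is correlated with x, the
# orthogonality assumption fails.
z = rng.normal(size=n)
x = 0.8 * z + rng.normal(size=n)
y = 1.0 * x + 1.0 * z + rng.normal(size=n)

beta_omitted = np.polyfit(x, y, 1)[0]
print(f"slope, z omitted:  {beta_omitted:.2f}")  # near 1.49, not 1.0

# Including z restores the correct estimate:
X = np.column_stack([np.ones(n), x, z])
beta_full, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"slope, z included: {beta_full[1]:.2f}")  # near 1.00
```

The orthogonality assumption is exactly what separates the two runs, and since the omitted z is unobservable in practice, it cannot be tested from within the regression itself.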

Nowadays it has almost become a self-evident truism among economists that you cannot expect people to take your arguments seriously unless they are based on or backed up by advanced econometric modelling. So legions of mathematical-statistical theorems are proved — and heaps of fiction are being produced, masquerading as science. The rigour of the econometric modelling is, however, frequently not matched by any data support for the far-reaching assumptions it is built on.

Econometrics is basically a deductive method. Given the assumptions, it delivers deductive inferences. The problem, of course, is that we almost never know when the assumptions are right. Conclusions can only be as certain as their premises — and that also applies to econometrics.

Econometrics cannot establish the truth value of a fact. Never has. Never will.

## The illusion of certainty

4 July 2018, 22:37 | Published in Statistics & Econometrics | 3 comments
