The shocking truth about econometric ‘precision’ and ‘rigour’

28 Jun, 2022 at 09:39 | Posted in Statistics & Econometrics | Leave a comment

Leverage is a measure of the degree to which a single observation on the right-hand-side variable takes on extreme values and is influential in estimating the slope of the regression line. A concentration of leverage in even a few observations can make coefficients and standard errors extremely volatile and even bias robust standard errors towards zero, leading to higher rejection rates.

To illustrate this problem, Young (2019) went through a simple exercise. He collected over fifty experimental (lab and field) articles from the American Economic Association’s flagship journals: American Economic Review, American Economic Journal: Applied Economics, and American Economic Journal: Economic Policy. He then reanalyzed these papers, using the authors’ own models, by dropping one observation or cluster at a time and re-estimating the entire model. What he found was shocking:

With the removal of just one observation, 35% of 0.01-significant reported results in the average paper can be rendered insignificant at that level. Conversely, 16% of 0.01-insignificant reported results can be found to be significant at that level.

Scott Cunningham, Causal Inference: The Mixtape
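To get a feel for what such a leave-one-out exercise involves, here is a minimal sketch — not Young's code or data, just simulated numbers with hypothetical variable names. It plants one high-leverage observation in a toy experiment, re-estimates a simple OLS model with robust standard errors after dropping each observation in turn, and reports how often the 5% significance verdict flips.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical experimental data: a binary treatment and an outcome,
# with one planted high-leverage observation.
rng = np.random.default_rng(42)
n = 200
df = pd.DataFrame({
    "treatment": rng.integers(0, 2, n).astype(float),
    "outcome": rng.normal(size=n),
})
df.loc[0, ["treatment", "outcome"]] = [1.0, 8.0]  # the influential point

def treatment_p_value(data):
    """p-value on the treatment coefficient, heteroskedasticity-robust SEs."""
    X = sm.add_constant(data[["treatment"]])
    fit = sm.OLS(data["outcome"], X).fit(cov_type="HC1")
    return fit.pvalues["treatment"]

full_p = treatment_p_value(df)

# Leave-one-out: drop each observation in turn and re-estimate the model.
loo_p = np.array([treatment_p_value(df.drop(index=i)) for i in df.index])

flips = np.mean((loo_p < 0.05) != (full_p < 0.05))
print(f"full-sample p-value: {full_p:.4f}")
print(f"share of single-observation drops that flip the 5% verdict: {flips:.1%}")
```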

The Gray Ghost (personal)

27 Jun, 2022 at 17:32 | Posted in Varia | Leave a comment


For my daughter Tora and son David — with whom, when they were just little kids, yours truly spent hours and hours watching this series back in the 1990s.
You were my heroes then.
You still are.

How to tame inflation

27 Jun, 2022 at 13:20 | Posted in Economics | 5 Comments


The government and the Riksbank are mishandling inflation

26 Jun, 2022 at 11:31 | Posted in Economics | Leave a comment

What is causing the rise in inflation?

Max Jerneck (MJ): It is largely driven by higher prices for energy and food, caused by things like how the electricity market is regulated, drought, and the war in Ukraine. Prices for furniture and other goods, and for basic components and inputs such as semiconductors and steel, also play a part. During the pandemic, demand for services fell while demand for goods rose rapidly, which manufacturers and supply chains struggled to handle. And this happened in a world economy that for a decade had been marked by weak demand and by insufficient investment in new capacity. The problems can probably be expected to persist for a while yet, not least because China is still shutting down factories and ports under strict covid restrictions …

What, in your view, is the best way to deal with inflation?

MJ: The best approach would be to address every cause with tailored solutions: if expensive fuel is the problem, one should get people to drive less, lower the speed limits on the roads, make public transport cheaper, and, in the longer run, speed up electrification, which in turn requires investment in batteries, mines, and so on. More general tools such as higher taxes and interest rates can also be used, but they work best against inflation that arises from overheated demand rather than from supply problems …

What do you think of the government's and the Riksbank's handling of the problem so far?

MJ: The government's measures have mostly consisted of short-term emergency fixes, which may reduce costs in the short run but do nothing about the underlying problem and may even make it worse. I am thinking of handouts to car and home owners so that they can keep driving up the prices of electricity and petrol. The Riksbank wants to raise the interest rate. It is open about the fact that this does not address the causes of inflation, such as higher energy prices; the stated reason is instead to keep inflation from becoming entrenched. That kind of analysis places great weight on the idea of inflation expectations, which make inflation self-reinforcing. The aim is to avoid a wage and price spiral … Across-the-board interest rate hikes are a blunt tool that reduces inflation by, for example, raising the cost of housing, so one may ask how wise they are. They also dampen demand by reducing investment. I believe that more traditional credit guidance, steering credit away from overheated sectors and towards the sectors that need investment, is a more precise tool.

Flamman

And just as in the rest of the world, today's Swedish inflation is also partly about some firms and owners of capital seizing the opportunity to widen their profit margins without there being, in any real sense, actual cost increases that would 'justify' it. Such stealthy price increases become ever easier to push through as inflation expectations rise. As always in the market, it is those with the least resources who ultimately have to foot the bill …

The Riksbank has in principle only one tool at its disposal: the interest rate. And yes, that 'hammer' can be used to try to break the spiral of rising prices. But if history has taught us anything, it is that this will likely succeed only at the price of higher unemployment and reduced welfare. Today's inflation is not primarily a demand problem. It is mainly about supply. And administering the wrong medicine can unfortunately leave the patient even sicker …

The Law of Demand

23 Jun, 2022 at 11:45 | Posted in Economics | 2 Comments

Mainstream economics is usually considered to be very ‘rigorous’ and ‘precise.’ And yes, it’s certainly full of ‘rigorous’ and ‘precise’ statements like “the state of the economy will remain the same as long as it doesn’t change.” Although ‘true,’ such statements are, like most other analytical statements, neither particularly interesting nor informative.

As is well known, the law of demand is usually tagged with a clause that entails numerous interpretation problems: the ceteris paribus clause. In the strict sense this must thus at least be formulated as follows to be acceptable to the majority of theoreticians: ceteris paribus – that is, all things being equal – the demanded quantity of a consumer good is a monotone-decreasing function of its price …

If the factors that are to be left constant remain undetermined, as not so rarely happens, then the law of demand under question is fully immunized to facts, because every case which initially appears contrary must, in the final analysis, be shown to be compatible with this law. The clause here produces something of an absolute alibi, since, for every apparently deviating behavior, some altered factors can be made responsible. This makes the statement untestable, and its informational content decreases to zero.

One might think that it is in any case possible to avert this situation by specifying the factors that are relevant for the clause. However, this is not the case. In an appropriate interpretation of the clause, the law of demand that comes about will become, for example, an analytic proposition, which is, in fact, true for logical reasons, but which is thus precisely for this reason not informative …

Various widespread formulations of the law of demand contain an interpretation of the clause that does not result in a tautology, but that has another weakness. The list of the factors to be held constant includes, among other things, the structure of the needs of the purchasing group in question. This leads to a difficulty connected with the identification of needs. As long as there is no independent test for the constancy of the structures of needs, any law that is formulated in this way has an absolute ‘alibi’. Any apparent counter case can be traced back to a change in the needs, and thus be discounted. Thus, in this form, the law is also immunized against empirical facts. To counter this situation, it is in fact necessary to dig deeper into the problem of needs and preferences; in many cases, however, this is held to be unacceptable, because it would entail crossing the boundaries into social psychology.

Hans Albert
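To make the point concrete, the law Albert discusses can be written schematically as follows — my notation, not Albert's, with z standing for whatever the ceteris paribus clause holds constant:

```latex
% Demand for a consumer good as a function of its own price p,
% with z collecting every factor the ceteris paribus clause holds fixed.
q_{d} = f(p, \mathbf{z}),
\qquad
\frac{\partial f(p, \mathbf{z})}{\partial p} < 0
\quad \text{for fixed } \mathbf{z}.
```

Everything then turns on what goes into z: left unspecified, any apparent counterexample can be attributed to some change in z, and the inequality becomes untestable, which is exactly Albert's charge.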

In mainstream economics there’s still a lot of talk about ‘economic laws.’ The crux of these laws and regularities that allegedly exist in economics is that they only hold ceteris paribus. That fundamentally means that they only hold when the right conditions are at hand for giving rise to them. Unfortunately, from an empirical point of view, those conditions are only at hand in artificially closed nomological models purposely designed to give rise to the kind of regular associations that economists want to explain. But since these laws and regularities do not exist outside these ‘socio-economic machines,’ what’s the point of constructing thought-experimental models showing these non-existent laws and regularities? When the almost endless list of narrow and specific assumptions needed to allow the ‘rigorous’ deductions is known to be at odds with reality, what good do these models do?

Deducing laws in theoretical models is of no avail if you cannot show that the models, and the assumptions they build on, are realistic representations of what goes on in real life.

Conclusion? Instead of restricting our methodological endeavours to building ever more rigorous and precise deducible models, we ought to spend much more time improving our methods for choosing models!

Propensity scores — bias-reduction gone awry

23 Jun, 2022 at 08:28 | Posted in Statistics & Econometrics | Leave a comment


Hegel in 60 Minuten

22 Jun, 2022 at 18:14 | Posted in Theory of Science & Methodology | Leave a comment


The inflation lie

21 Jun, 2022 at 14:02 | Posted in Economics | 4 Comments


Mainstream economics — the art of building fantasy worlds

20 Jun, 2022 at 15:50 | Posted in Economics | 3 Comments

Mainstream macroeconomic models standardly assume things like rational expectations, Walrasian market clearing, unique equilibria, time invariance, linear separability and homogeneity of both inputs/outputs and technology, infinitely lived intertemporally optimizing representative household/consumer/producer agents with homothetic and identical preferences, etc., etc. At the same time, the models standardly ignore complexity, diversity, uncertainty, coordination problems, non-market clearing prices, real aggregation problems, emergence, expectations formation, etc., etc.

Behavioural and experimental economics — not to speak of psychology — show beyond doubt that “deep parameters” — people’s preferences, choices and forecasts — are regularly influenced by those of other economic participants. And how about the homogeneity assumption? And if all actors are the same, why and with whom do they transact? And why does economics have to be exclusively teleological (concerned with intentional states of individuals)? Where are the arguments for that ontological reductionism? And what about collective intentionality and constitutive background rules?

These are all justified questions — so, in what way can one maintain that these models give workable microfoundations for macroeconomics? Science philosopher Nancy Cartwright gives a good hint at how to answer that question:

Our assessment of the probability of effectiveness is only as secure as the weakest link in our reasoning to arrive at that probability. We may have to ignore some issues or make heroic assumptions about them. But that should dramatically weaken our degree of confidence in our final assessment. Rigor isn’t contagious from link to link. If you want a relatively secure conclusion coming out, you’d better be careful that each premise is secure going in.
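One crude way to see the force of the 'weakest link' point is a back-of-the-envelope calculation with made-up numbers (mine, not Cartwright's): the credibility of a conclusion resting on several premises is bounded by the weakest of them and, if the premises are roughly independent, shrinks multiplicatively.

```python
# Hypothetical credibilities for the premises behind a model-based conclusion.
premise_credibility = [0.95, 0.90, 0.90, 0.60, 0.90]

weakest = min(premise_credibility)

joint = 1.0
for p in premise_credibility:
    joint *= p  # assumes the premises are (roughly) independent

print(f"weakest link: {weakest:.2f}")
print(f"joint credibility under independence: {joint:.2f}")  # ~0.42
```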

Avoiding logical inconsistencies is crucial in all science. But it is not enough. Just as important is avoiding factual inconsistencies. And without showing, or at least giving warranted arguments, that the assumptions and premises of their models are in fact true, mainstream economists aren’t really reasoning, but only playing games. Formalistic deductive ‘Glasperlenspiel’ can be very impressive and seductive. But in the realm of science, it ought to be considered of little or no value to simply make claims about the model and lose sight of reality.

Instead of making the model the message, I think we are better served by economists who more than anything else try to contribute to solving real problems. And then the motto of John Maynard Keynes is more valid than ever:

It is better to be vaguely right than precisely wrong

Bayesian absurdities

20 Jun, 2022 at 15:23 | Posted in Statistics & Econometrics | 1 Comment

In other words, if a decision-maker thinks something cannot be true and interprets this to mean it has zero probability, he will never be influenced by any data, which is surely absurd.

So leave a little probability for the moon being made of green cheese; it can be as small as 1 in a million, but have it there since otherwise an army of astronauts returning with samples of the said cheese will leave you unmoved.

Dennis V. Lindley, Making Decisions

To get the Bayesian probability calculus going you sometimes have to assume strange things — so strange that you should perhaps start wondering if maybe there is something wrong with your theory …
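The arithmetic behind Lindley's warning can be sketched in a few lines (hypothetical numbers, plain Bayes updating for a binary hypothesis): a prior of exactly zero never moves, no matter how strong the evidence, while even a minute positive prior can eventually be driven towards one.

```python
def update(prior, lik_h, lik_not_h):
    """One Bayes update for a binary hypothesis H given evidence E."""
    num = lik_h * prior
    return num / (num + lik_not_h * (1.0 - prior))

# Evidence a million times more likely if H is true than if it is false.
lik_h, lik_not_h = 1e6, 1.0

for prior in (0.0, 1e-6):
    p = prior
    for _ in range(5):  # five independent pieces of such evidence
        p = update(p, lik_h, lik_not_h)
    print(f"prior {prior:g} -> posterior after 5 updates: {p:.6g}")
# prior 0     stays exactly 0, untouched by any evidence
# prior 1e-6  ends up essentially at 1
```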

Added: For those interested in these questions concerning the reach and application of statistical theories, do read Sander Greenland’s insightful comment:

My take is that the quoted passage is a poster child for what’s wrong with statistical foundations for applications. Mathematics only provides contextually void templates for what might be theories if some sensible mapping can be found between the math and the application context. Just as with frequentist and all other statistical “theories”, Bayesian mathematical theory (template) works fine as a tool when the problem can be defined in a very small world of an application in which the axioms make contextual sense under the mapping and the background information is not questioned. There is no need for leaving any probability on “green cheese” if you aren’t using Bayes as a philosophy, for if green cheese is really found, the entire contextual knowledge base is undermined and all well-informed statistical analyses sink with it.

The problems often pointed out for econometrics are general ones of statistical theories, which can quickly degenerate into math gaming and are usually misrepresented as scientific theories about the world. Of course, with a professional sales job to do, statistics has encouraged such reification through use of deceptive labels like “significance”, “confidence”, “power”, “severity” etc. for what are only properties of objects in mathematical spaces (much like identifying social group dynamics with algebraic group theory or crop fields with vector field theory). Those stat-theory objects require extraordinary physical control of unit selection and experimental conditions to even begin to connect to the real-world meaning of those conventional labels. Such tight controls are often possible with inanimate materials (although even then they can cost billions of dollars to achieve, as with large particle colliders). But they are infrequently possible with humans, and I’ve never seen them approached when whole societies are the real-world target, as in macroeconomics, sociology, and social medicine. In those settings, at best our analyses only provide educated guesses about what will happen as a consequence of our decisions.

That’s life

19 Jun, 2022 at 22:20 | Posted in Varia | Leave a comment


Evidence-based economics — the fundamentals

19 Jun, 2022 at 09:52 | Posted in Economics | Leave a comment

Many economists still think that “evidence” is only of one kind, i.e. statistical/econometric analysis. Whilst this is important, it is not enough on its own. One reason for its privileged position may be that it is typically contrasted with “anecdotal evidence”, which is unreliable. But the truth is richer than that.

It is true that basing one’s thinking about the economy on one or more stories carries the danger that one will just favour the narrative that suits one’s preconceptions. Any item of evidence needs to be representative of the underlying reality in some way – in fact this is true also of statistical analyses. And subjective bias (wishful thinking) needs to be avoided, whatever the type of evidence. In addition, any kind of evidence is fallible, so that caution is required. This applies as much to statistical analyses as to any other type – in medicine, it is commonplace to find that the results of epidemiological studies fail to be replicated – the rule of thumb is that a plurality of studies, as with cigarette smoking and lung cancer, is required before one accepts a finding as truly established.

The history of science shows clearly that secure knowledge derives not only from iteration between evidence and theory, but also that it typically depends on a variety of types of evidence – the more diverse the better. The implication for economics is that econometrics needs to be augmented with other approaches. A particularly valuable one is comparative economic history – highlighting the similarities and the differences between the experiences of different economies. Other important methods include behavioural and experimental economics, field trials, randomised controlled trials, institutional analysis, survey analysis, etc.

Ideally, the evidence base should encompass evidence both of the “difference-making” variety – e.g. that lung cancer rates are far higher among cigarette smokers than in non-smokers – and evidence that throws light on the mechanisms or capacities involved.

Michael Joffe

The Fed is making a big mistake

19 Jun, 2022 at 09:27 | Posted in Economics | Leave a comment


The geometry of Bayes’ theorem

17 Jun, 2022 at 10:50 | Posted in Statistics & Econometrics | 1 Comment


An informative visualization of a theorem that shows how to update probabilities — calculating conditional probabilities — when new information/evidence becomes available.
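For reference, this is the theorem the video visualizes, stated for a hypothesis H and evidence E:

```latex
P(H \mid E)
  = \frac{P(E \mid H)\,P(H)}{P(E)}
  = \frac{P(E \mid H)\,P(H)}
         {P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}.
```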

But …

Although Bayes’ theorem is mathematically unquestionable, that doesn’t qualify it as indisputably applicable to scientific questions. Bayesian statistics is one thing, and Bayesian epistemology is something else. Science is not reducible to betting, and scientific inference is not a branch of probability theory. It always transcends mathematics. The unfulfilled dream of constructing an inductive logic of probabilism — the Bayesian Holy Grail — will always remain unfulfilled.

Bayesian probability calculus is far from the automatic inference engine that its protagonists maintain it is. That probabilities may work for expressing uncertainty when we pick balls from an urn does not automatically make them relevant for making inferences in science. Where do the priors come from? Wouldn’t it be better in science to do some experimentation and observation when we are uncertain, rather than to start calculating on the basis of often vague and subjective personal beliefs? People have a lot of beliefs, and when those beliefs are plainly wrong, we should not do any calculations whatsoever on them. We simply reject them. Is it, from an epistemological point of view, really credible to think that the Bayesian probability calculus makes it possible to somehow fully assess people’s subjective beliefs? And is it really possible, as many Bayesians maintain, to explain all scientific controversies and disagreements in terms of differences in prior probabilities? I strongly doubt it.

Mindless statistics

17 Jun, 2022 at 09:13 | Posted in Statistics & Econometrics | 1 Comment

Knowing the contents of a toolbox, of course, requires statistical thinking, that is, the art of choosing a proper tool for a given problem. Instead, one single procedure that I call the “null ritual” tends to be featured in texts and practiced by researchers. Its essence can be summarized in a few lines:

The null ritual:
1. Set up a statistical null hypothesis of “no mean difference” or “zero correlation.” Don’t specify the predictions of your research hypothesis or of any alternative substantive hypotheses.
2. Use 5% as a convention for rejecting the null. If significant, accept your research hypothesis. Report the result as p < 0.05, p < 0.01, or p < 0.001 (whichever comes next to the obtained p-value).
3. Always perform this procedure …

The routine reliance on the null ritual discourages not only statistical thinking but also theoretical thinking. One does not need to specify one’s hypothesis, nor any challenging alternative hypothesis … The sole requirement is to reject a null that is identified with “chance.” Statistical theories such as Neyman–Pearson theory and Wald’s theory, in contrast, begin with two or more statistical hypotheses …

We know but often forget that the problem of inductive inference has no single solution. There is no uniformly most powerful test, that is, no method that is best for every problem. Statistical theory has provided us with a toolbox with effective instruments, which require judgment about when it is right to use them … Judgment is part of the art of statistics.

To stop the ritual, we also need more guts and nerves. We need some pounds of courage to cease playing along in this embarrassing game. This may cause friction with editors and colleagues, but it will in the end help them to enter the dawn of statistical thinking.

Gerd Gigerenzer
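For concreteness, here is the ritual reduced to a few lines of code, with simulated data and the conventional defaults; the point is what the procedure never asks, not the particular numbers.

```python
import numpy as np
from scipy import stats

# Two hypothetical samples, a null of 'no mean difference',
# and the mechanical 5% cut-off: the 'null ritual' in miniature.
rng = np.random.default_rng(0)
group_a = rng.normal(loc=0.0, scale=1.0, size=30)
group_b = rng.normal(loc=0.3, scale=1.0, size=30)

t_stat, p_value = stats.ttest_ind(group_a, group_b)
verdict = "reject H0" if p_value < 0.05 else "fail to reject H0"
print(f"t = {t_stat:.2f}, p = {p_value:.3f} -> {verdict}")

# What the ritual never asks: what effect size the research hypothesis
# predicts, what power the test has against it, or whether a t-test
# is the right tool for the problem in the first place.
```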
