## Mainstream economics — a form of brain damage

3 Oct, 2022 at 09:43 | Posted in Economics | 1 Comment.

It is difficult to understand why mainstream economists keep on using their unreal and irrelevant models! Sure, you get academic accolades and give the impression of having something deep and ‘scientific’ to say, but that should count for nothing if you’re in the truth business. As long as that kind of modelling output doesn’t come with the accompanying warning text “NB! These are model-based results resting on tons of more or less unsubstantiated assumptions,” we should keep on scrutinising and criticising it.

Yours truly appreciates scientists like David Suzuki. With razor-sharp intellects, they immediately go for the essentials. They have no time for bullshit. And neither should we.

## Tory trickle-down mumbo jumbo

29 Sep, 2022 at 16:34 | Posted in Varia | 4 Comments.

Absolutely gobsmacking … This MP certainly has bad luck when trying to think …

## Free trade delusions

29 Sep, 2022 at 13:51 | Posted in Economics | 10 Comments

We must ask why economists still ignore the obvious reality that application of their standard free trade model failed to generate broad-based income gains. Why do many still turn a blind eye to the mounting evidence of the social, economic, and human costs of the globalization experiment? Some were genuinely misled by the fancy algebra. But many know their models are irrelevant. They were seduced by the surprising willingness of political leaders to believe their sophistries and appoint them to positions of money, power, and influence … Paul Samuelson, himself a lifelong skeptic of free trade, once said that economics advances funeral by funeral. Old economists find it hard to give up on the theories that made their careers.

I will give the final word to John Maynard Keynes, who was also a free trade skeptic:

“Free trade assumes that if you throw men out of work in one direction you re-employ them in another. As soon as that link is broken the whole of the free trade argument breaks down.”

Thought-provoking and interesting. But I think Ferry misses the most powerful argument against the Ricardian free trade paradigm — what counts today is not *comparative* advantage, but *absolute* advantage.

Since Ricardo’s days, the assumption of internationally immobile factors of production has become totally untenable in our globalised world. Modern corporations maximize their profits by moving capital and technology to wherever production is cheapest. So we are actually in a situation today where absolute — not comparative — advantage rules the roost when it comes to free trade.

And in that world, what is good for corporations is not necessarily good for nations.

## The Riksbank’s lousy forecasts

28 Sep, 2022 at 14:35 | Posted in Economics | 1 Comment

Economists often claim — usually with reference to Milton Friedman and his methodological instrumentalism — that it does not matter much whether the assumptions their models build on are realistic or not. What matters is whether the forecasts the models produce turn out to be right or not.

**If that really is the case, the only conclusion we can draw is that these models and forecasts belong in the waste-paper basket. Because oh, how wrong they have been!**

Since 2007 the Riksbank has used so-called interest rate paths as a forecasting tool. Their impact has been considerable — even though it is easy to establish that the forecasts have consistently been way off the mark. Like most other forecasters, the Riksbank missed both the financial crisis of 2007 and its severe effects. But over the past few years, too, the Riksbank’s interest rate and inflation forecasts have proved to be lousy.

Not least in 2021 and 2022, the Riksbank’s inflation forecasts have become ever less accurate. During these years the forecasts have, on average, deviated from the outcome by more than one percentage point at a three-month horizon. A clear fail.
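For concreteness, this is how such forecast accuracy is typically scored, here with entirely hypothetical numbers rather than the Riksbank’s actual figures: the mean absolute error of three-month-ahead inflation forecasts against outcomes, in percentage points.

```python
# Toy forecast-evaluation sketch with HYPOTHETICAL numbers (not the
# Riksbank's actual figures): three-month-ahead inflation forecasts
# versus realized outcomes, both in per cent.
forecasts = [2.1, 2.3, 2.6, 3.0, 3.4, 4.0]
outcomes = [2.9, 3.8, 4.6, 5.1, 6.4, 7.2]

# Mean absolute error, in percentage points
mae = sum(abs(f - o) for f, o in zip(forecasts, outcomes)) / len(forecasts)
print(f"mean absolute error: {mae:.2f} percentage points")
```

An average miss above one percentage point, as reported for the Riksbank, is large relative to inflation targets of around two per cent.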

A few years ago yours truly invited the fund manager Alf Riple — with a background as chief analyst at Nordea and adviser at the Norwegian Ministry of Finance — to lecture in my course *Finanskriser – orsaker, förlopp och konsekvenser* (Financial crises — causes, course and consequences). According to Alf, there was no reason to be particularly surprised at the Riksbank’s lousy forecasts:

Which is worse, a bad forecast or no forecast? The answer is simple. The moment you are exposed to a forecast, you are in a worse position than you were before …

Expert forecasts in all likelihood do more harm than good. That is why it pays to flip quickly past newspaper articles with headlines like ‘How the stock market will perform this year’ …

Imagine that your job is to handle your company’s currency exchanges … You must decide either to hedge the exchange rate right away, or to wait until the amount arrives and exchange at whatever rate applies then … Luckily, you have the analysts’ dollar forecasts to help you. They do not make it one bit easier to predict the dollar rate. But they can help you all the same.

If you happen to get it right, the analyses do not matter much. But if the dollar drops like a stone and you have chosen not to hedge the exchange rate, company management will want to know why you have squandered the company’s money … You can spin a long tale about historical currency trends, economic growth, the balance of payments and interest rate differentials. In the end everyone will agree that you acted correctly given the information you had beforehand. The analyses let you off the hook. Especially the ones that were most wrong … The forecasts have no economic value, either for the company or for society. Their value is that they save your skin.

In other words — these arrogantly self-assured economists, with their ‘rigorous’ and ‘precise’ mathematical-statistical-econometric models, are consistently wrong. And it is all the rest of us who have to pay for it!

## A different way to solve quadratic equations

28 Sep, 2022 at 12:25 | Posted in Statistics & Econometrics | Leave a comment.

This one is for Linnea, my youngest daughter, who has now begun studying mathematics at my university 🙂
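The video itself is not reproduced here. Assuming the post refers to the root-averaging method popularized by Po-Shen Loh (an assumption on my part, based on the title), the idea can be sketched in a few lines of Python:

```python
import math

def solve_quadratic(b, c):
    """Solve x^2 + b*x + c = 0 by the 'averaging' trick: the two roots
    sum to -b, so both lie at -b/2 plus or minus some u, and their
    product (b/2)^2 - u^2 must equal c. Solving for u gives the roots
    without memorizing the quadratic formula."""
    m = -b / 2                    # midpoint (average) of the two roots
    u = math.sqrt(m * m - c)      # half the distance between them
    return m - u, m + u

print(solve_quadratic(-5, 6))     # x^2 - 5x + 6 = 0: roots 2 and 3
```

The standard quadratic formula drops out of the same algebra, which is what makes the method pedagogically attractive.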

## Why exam schools may reduce achievement

27 Sep, 2022 at 11:16 | Posted in Education & School | Leave a comment.

## Women’s fight for freedom

24 Sep, 2022 at 10:12 | Posted in Politics & Society | Leave a comment.

Courage is the capability to confront fear: when facing the powerful and mighty, not to step back, but to stand up for one’s right not to be humiliated or abused in any way by the rich and powerful.

Courage is to do the right thing in spite of danger and fear. To keep going even when opportunities to turn back are offered. As in the great stories. The ones where people have plenty of chances to turn back — but don’t.

Dignity, a better life, or justice and the rule of law are things worth fighting for. Not stepping back — in spite of confronting the mighty and powerful — creates courageous acts that stay in our memories and mean something, as when women today in Iran and around the world burn their headscarves and cut their hair in protest against the Iranian government’s restrictions on social freedoms.

## Liz Truss’ trickle-down economics

22 Sep, 2022 at 22:45 | Posted in Economics | 6 Comments

U.K. Prime Minister Liz Truss has said she’s ready to make “unpopular decisions” such as tax cuts and boosting bonuses for wealthy bankers to grow the economy, even though they obviously benefit the wealthiest more than the poor …

Truss and her Conservative government are about to give U.K. corporations and shareholders a gift. The only trickle-down to workers going on is probably best described in the picture below …

## Science — the need for causal explanation

22 Sep, 2022 at 16:29 | Posted in Theory of Science & Methodology | Leave a comment

Many journal editors request authors to avoid causal language, and many observational researchers, trained in a scientific environment that frowns upon causality claims, spontaneously refrain from mentioning the C-word (“causal”) in their work …

The proscription against the C-word is harmful to science because causal inference is a core task of science, regardless of whether the study is randomized or nonrandomized. Without being able to make explicit references to causal effects, the goals of many observational studies can only be expressed in a roundabout way. The resulting ambiguity impedes a frank discussion about methodology because the methods used to estimate causal effects are not the same as those used to estimate associations. Confusion then ensues at the most basic levels of the scientific process and, inevitably, errors are made …

We all agree: confounding is always a possibility and therefore association is not necessarily causation. One possible reaction is to completely ditch causal language in observational studies. This reaction, however, does not solve the tension between causation and association; it just sweeps it under the rug …

Without causally explicit language, the means and ends of much of observational research get hopelessly conflated … Carefully distinguishing between causal aims and associational methods is not just a matter of enhancing scientific communication and transparency. Eliminating the causal–associational ambiguity has practical implications for the quality of observational research too.

Highly recommended reading!

Causality in the social sciences — and economics — can never be solely a question of statistical inference. Causality entails more than predictability, and really explaining social phenomena requires theory. Analysis of variation — the foundation of all econometrics — can never in itself reveal how these variations are brought about. Only when we are able to tie actions, processes or structures to the statistical relations detected can we say that we are getting at relevant explanations of causation.

Most facts have many different possible alternative explanations. Still, we want to find the best of all contrastive explanations (since all real explanation takes place relative to a set of alternatives). So which is the best explanation? Many scientists, influenced by statistical reasoning, think the likeliest explanation is the best. But the likelihood of x is not in itself a strong argument for thinking it explains y. I would rather argue that what makes one explanation better than another is aiming for and finding powerful, deep, causal features and mechanisms that we have warranted and justified reasons to believe in. Statistical reasoning — especially the variety based on a Bayesian epistemology — generally has no room for these explanatory considerations. The only thing that matters is the probabilistic relation between evidence and hypothesis. That is also one of the main reasons I find abduction — inference to the best explanation — a better description and account of what constitutes actual scientific reasoning and inference.

Some statisticians and data scientists think that algorithmic formalisms somehow give them access to causality. That is, however, simply not true. Assuming ‘convenient’ things like faithfulness or stability is not to give proof; it is to assume what has to be proven. Deductive-axiomatic methods used in statistics do not produce evidence for causal inferences. The real causality we are searching for is the one existing in the real world around us. If there is no warranted connection between axiomatically derived theorems and the real world, then we have not really obtained the causation we are looking for.

## Statistical models and the assumptions on which they build

21 Sep, 2022 at 19:47 | Posted in Statistics & Econometrics | Leave a comment

Every method of statistical inference depends on a complex web of assumptions about how data were collected and analyzed, and how the analysis results were selected for presentation. The full set of assumptions is embodied in a statistical model that underpins the method … Many problems arise however because this statistical model often incorporates unrealistic or at best unjustified assumptions …

The difficulty of understanding and assessing underlying assumptions is exacerbated by the fact that the statistical model is usually presented in a highly compressed and abstract form—if presented at all. As a result, many assumptions go unremarked and are often unrecognized by users as well as consumers of statistics. Nonetheless, all statistical methods and interpretations are premised on the model assumptions; that is, on an assumption that the model provides a valid representation of the variation we would expect to see across data sets, faithfully reflecting the circumstances surrounding the study and phenomena occurring within it.

If anything, the common abuse of statistical tests underlines how important it is not to equate science with statistical calculation. All science entails human judgment, and using statistical models does not relieve us of that necessity. When we work with misspecified models, the scientific value of our statistics is actually zero — even if we are making valid statistical inferences! Statistical models are no substitute for doing real science. Or as a famous German philosopher once wrote:

There is no royal road to science, and only those who do not dread the fatiguing climb of its steep paths have a chance of gaining its luminous summits.

We should never forget that the underlying parameters we use when performing statistical tests are *model constructions*. And if the model is wrong, the value of our calculations is nil. As ‘shoe-leather researcher’ David Freedman wrote in *Statistical Models and Causal Inference*:

I believe model validation to be a central issue. Of course, many of my colleagues will be found to disagree. For them, fitting models to data, computing standard errors, and performing significance tests is “informative,” even though the basic statistical assumptions (linearity, independence of errors, etc.) cannot be validated. This position seems indefensible, nor are the consequences trivial. Perhaps it is time to reconsider.
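Freedman’s point can be made concrete with a deliberately misspecified model (a hypothetical example of my own, not his): fit a straight line to data generated by a quadratic law, and the in-sample fit looks excellent while out-of-sample predictions are badly wrong.

```python
def ols(xs, ys):
    """Ordinary least squares fit of y ~ a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(xs, ys))
         / sum((xi - mx) ** 2 for xi in xs))
    return my - b * mx, b

# The true process is quadratic, but we (mis)specify a straight line.
xs = list(range(11))
ys = [xi ** 2 for xi in xs]          # y = x^2 exactly, no noise at all

a, b = ols(xs, ys)                   # fitted line: y = -15 + 10*x
sse = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(xs, ys))
sst = sum((yi - sum(ys) / len(ys)) ** 2 for yi in ys)
r2 = 1 - sse / sst                   # about 0.93: looks like a fine model

print(round(r2, 2), a + b * 20)      # yet at x = 20 it predicts 185, truth is 400
```

Every in-sample statistic can look respectable while the model, and hence every conclusion drawn from it, is simply wrong.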

## Support Vector Machines (student stuff)

20 Sep, 2022 at 10:42 | Posted in Statistics & Econometrics | Leave a comment.
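The embedded video is not reproduced here. As a toy illustration only (my own sketch, not the video’s content), a linear SVM can be trained by sub-gradient descent on the regularized hinge loss:

```python
# Minimal linear SVM via sub-gradient descent on the hinge loss:
# minimize  lam/2 * ||w||^2 + mean(max(0, 1 - y * (w.x + b)))

def train_svm(points, labels, lam=0.01, lr=0.1, epochs=500):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(points, labels):
            margin = y * (w[0] * x1 + w[1] * x2 + b)
            if margin < 1:                 # inside margin: hinge is active
                w[0] += lr * (y * x1 - lam * w[0])
                w[1] += lr * (y * x2 - lam * w[1])
                b += lr * y
            else:                          # only regularization shrinks w
                w[0] -= lr * lam * w[0]
                w[1] -= lr * lam * w[1]
    return w, b

# Two linearly separable clusters labelled -1 / +1
pts = [(0.0, 0.0), (0.5, 0.2), (3.0, 3.0), (3.5, 2.8)]
ys = [-1, -1, 1, 1]
w, b = train_svm(pts, ys)
preds = [1 if w[0] * x1 + w[1] * x2 + b > 0 else -1 for x1, x2 in pts]
```

Real applications would use a tuned library implementation; this sketch only shows the margin idea behind the method.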

## Avoiding statistical ‘dichotomania’

19 Sep, 2022 at 11:49 | Posted in Statistics & Econometrics | 2 Comments

We are calling for a stop to the use of P values in the conventional, dichotomous way — to decide whether a result refutes or supports a scientific hypothesis …

The rigid focus on statistical significance encourages researchers to choose data and methods that yield statistical significance for some desired (or simply publishable) result, or that yield statistical non-significance for an undesired result, such as potential side effects of drugs — thereby invalidating conclusions …

Again, we are not advocating a ban on P values, confidence intervals or other statistical measures — only that we should not treat them categorically. This includes dichotomization as statistically significant or not, as well as categorization based on other statistical measures such as Bayes factors.

One reason to avoid such ‘dichotomania’ is that all statistics, including P values and confidence intervals, naturally vary from study to study, and often do so to a surprising degree. In fact, random variation alone can easily lead to large disparities in P values, far beyond falling just to either side of the 0.05 threshold …

We must learn to embrace uncertainty. One practical way to do so is to rename confidence intervals as ‘compatibility intervals’ and interpret them in a way that avoids overconfidence. Specifically, we recommend that authors describe the practical implications of all values inside the interval, especially the observed effect (or point estimate) and the limits. In doing so, they should remember that all the values between the interval’s limits are reasonably compatible with the data, given the statistical assumptions used to compute the interval. Therefore, singling out one particular value (such as the null value) in the interval as ‘shown’ makes no sense.
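As a minimal sketch of the renaming the authors propose, assuming the simplest case of a sample mean with known standard deviation, a ‘compatibility interval’ is computed exactly like a confidence interval; only the interpretation changes.

```python
import math

def compatibility_interval(mean, sd, n, z=1.96):
    """95% interval for a sample mean (known sd), read as a
    'compatibility' interval: every value inside it is reasonably
    compatible with the data under the model's assumptions -- not only
    the point estimate, and not especially the null value."""
    half = z * sd / math.sqrt(n)
    return mean - half, mean + half

# Hypothetical study: observed mean effect 1.2, sd 3.0, n = 100
lo, hi = compatibility_interval(1.2, 3.0, 100)
print(f"effects from {lo:.2f} to {hi:.2f} are compatible with the data")
```

The reported sentence describes the whole range of compatible effects rather than delivering a significant/non-significant verdict.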

In its standard form, a significance test is not the kind of ‘severe test’ we are looking for when trying to confirm or disconfirm empirical scientific hypotheses. This is problematic for many reasons, one being the strong tendency to accept the null hypothesis whenever it cannot be rejected at the standard 5% significance level. In their standard form, significance tests bias against new hypotheses by making it hard to disconfirm the null hypothesis.

And as is shown over and over again when they are applied, people tend to read “not disconfirmed” as “probably confirmed.” Yet standard scientific methodology tells us that when there is only, say, a 10% probability that pure sampling error could account for the observed difference between the data and the null hypothesis, it would be more ‘reasonable’ to conclude that we have a case of disconfirmation. Especially if we perform many independent tests of our hypothesis and they all give roughly the same 10% result as our reported one, most researchers would count the hypothesis as even more disconfirmed.
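The arithmetic behind that intuition can be sketched with Fisher’s method for combining independent p-values (a textbook technique, not one named in the post): several modest results each at 0.10 combine into strong evidence against the null.

```python
import math

def fisher_combined_p(pvals):
    """Fisher's method for k independent p-values:
    X = -2 * sum(ln p_i) is chi-square with 2k degrees of freedom,
    and for even df the survival function has a closed form."""
    k = len(pvals)
    x = -2.0 * sum(math.log(p) for p in pvals)
    term, total = 1.0, 1.0
    for i in range(1, k):            # sum_{i=0}^{k-1} (x/2)^i / i!
        term *= (x / 2.0) / i
        total += term
    return math.exp(-x / 2.0) * total

# Five independent studies, each reporting p = 0.10
print(fisher_combined_p([0.10] * 5))   # about 0.011
```

Five independent p = 0.10 results combine to roughly p = 0.011, which is why replicated near-misses should count as mounting disconfirmation rather than repeated “non-significance.”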

Most importantly — we should never forget that the underlying parameters we use when performing significance tests are *model constructions*. Our P values mean next to nothing if the model is wrong.
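The study-to-study variability of P values noted in the quoted Nature comment can be made vivid with a small simulation (all numbers hypothetical): the very same experiment, repeated, produces wildly different P values.

```python
import math
import random

random.seed(42)

def one_study(n=50, true_effect=0.3):
    """One simulated study: n observations with mean true_effect and
    sd 1, returning the two-sided p-value of a z-test of 'mean = 0'."""
    xs = [random.gauss(true_effect, 1.0) for _ in range(n)]
    z = (sum(xs) / n) * math.sqrt(n)         # known sd = 1
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

# Twenty replications of the SAME experiment
pvals = sorted(one_study() for _ in range(20))
print(f"smallest p = {pvals[0]:.4f}, largest p = {pvals[-1]:.4f}")
print("below 0.05:", sum(p < 0.05 for p in pvals), "of 20")
```

Identical designs routinely land on both sides of the 0.05 threshold, which is exactly why a dichotomous significant/non-significant reading is untenable.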
