Leva mitt liv

18 December 2018 at 14:42 | Posted in Varia | Comments Off on Leva mitt liv

 

Why bloggers are so negative

17 December 2018 at 19:13 | Posted in Varia | 3 comments

Rather than getting into a discussion of whether blogs, or academic sociology, or movie reviews, should be more positive or negative, let’s get into the more interesting question of Why.

Why is negativity such a standard response?

1. Division of labor. Within social science, sociology’s “job” is to confront us with the bad news, to push us to study inconvenient truths. If you want to hear good news, you can go listen to the economists …

2. Efficient allocation of resources. Where can we do the most good? Reporting positive news is fine, but we can do more good by focusing on areas of improvement …

3. Status. Sociology doesn’t have the prestige of economics (more generally, social science doesn’t have the prestige of the natural sciences); blogs have only a fraction of the audience of the mass media (and we get paid even less for blogging than they get paid for their writing); and movie reviewers, of course, are nothing but parasites on the movie industry …

4. Urgency … As a blogger, I might not bother saying much about a news article that was well reported, because the article itself did a good job of sending its message. But it might seem more urgent to correct an error …

5. Man bites dog. Failures are just more interesting to write about, and to read about, than successes …

And, yes, I see the irony that this post, which is all about why sociologists and bloggers are so negative, has been sparked by a negative remark made by a sociologist on a blog. And I’m sure you will have some negative things to say in the comments. After all, the only people more negative than bloggers are blog commenters!

Andrew Gelman

Why statistical significance is worthless in science

17 December 2018 at 14:33 | Posted in Statistics & Econometrics | Comments Off on Why statistical significance is worthless in science

There are around 20 common misunderstandings and abuses of p-values and NHST [Null Hypothesis Significance Testing]. Most of them are related to the definition of the p-value … Other misunderstandings are about the implications of statistical significance.

Statistical significance does not mean substantive significance: just because an observation (or a more extreme observation) was unlikely had there been no differences in the population does not mean that the observed differences are large enough to be of practical relevance. At high enough sample sizes, any difference will be statistically significant regardless of effect size.

Statistical non-significance does not entail equivalence: a failure to reject the null hypothesis is just that. It does not mean that the two groups are equivalent, since statistical non-significance can be due to low sample size.

A low p-value does not imply a large effect size: p-values depend on several other things besides effect size, such as sample size and spread.

It is not the probability of the null hypothesis: as we saw, it is the conditional probability of the data, or more extreme data, given the null hypothesis.

It is not the probability of the null hypothesis given the results: this is the fallacy of transposed conditionals, as the p-value is the other way around: the probability of at least as extreme data, given the null.

It is not the probability of falsely rejecting the null hypothesis: that would be alpha, not p.

It is not the probability that the results are a statistical fluke: the test statistic is calculated under the assumption that all deviations from the null are due to chance. Thus, it cannot be used to estimate the probability of a statistical fluke, since that probability is already assumed to be 100%.

Rejecting the null hypothesis is not confirmation of a causal mechanism: you can imagine a great number of potential explanations for deviations from the null. Rejecting the null does not prove any specific one. See the above example with suicide rates.

NHST promotes arbitrary data dredging (“p-value fishing”): if you test your entire dataset and do not attain statistical significance, it is tempting to test a number of subgroups. Maybe the real effect occurs in men, women, the old, the young, whites, blacks, Hispanics, Asians, the thin, the obese, etc.? More likely, you will get a number of spurious results that appear statistically significant but are really false positives. In the quest for statistical significance, this unethical behaviour is common.

Debunking Denialism
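The first two misunderstandings in the list (statistical vs substantive significance, and the role of sample size) are easy to demonstrate with a small simulation. The sketch below is my own, not from the quoted text; the numbers are illustrative, and it uses only the Python standard library:

```python
import math
import random

def z_test_p(sample, mu0, sigma):
    """Two-sided z-test p-value for a sample mean against mu0,
    assuming a known population standard deviation sigma."""
    n = len(sample)
    se = sigma / math.sqrt(n)          # standard deviation of the mean
    z = abs(sum(sample) / n - mu0) / se
    # two-sided p-value via the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

random.seed(1)
# A practically negligible effect: true mean 100.2 against a null of 100,
# with standard deviation 15 (think of IQ-style scores).
scores = [random.gauss(100.2, 15) for _ in range(1_000_000)]

p_large_n = z_test_p(scores, 100, 15)        # n = 1,000,000
p_small_n = z_test_p(scores[:100], 100, 15)  # same effect, n = 100

print(p_large_n < 0.05)        # the trivial 0.2-point difference is 'significant'
print(p_small_n > p_large_n)   # the small sample yields a much larger p-value
```

The 0.2-point difference is of no practical relevance whatsoever, yet at a large enough sample size it is decisively “statistically significant,” while the identical effect in a small sample is not.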

As shown over and over again when significance tests are applied, people have a tendency to read ‘not disconfirmed’ as ‘probably confirmed.’ Standard scientific methodology tells us that when there is only, say, a 10% probability that pure sampling error could account for the observed difference between the data and the null hypothesis, it would be more ‘reasonable’ to conclude that we have a case of disconfirmation. Especially if we perform many independent tests of our hypothesis and they all give about the same 10% result as our reported one, I guess most researchers would count the hypothesis as even more disconfirmed.

We should never forget that the underlying parameters we use when performing significance tests are model constructions. Our p-values mean nothing if the model is wrong. And most importantly — statistical significance tests DO NOT validate models! If you run a regression and get significant values (p < .05) on the coefficients, that only means that if the model is right and the true values of the coefficients are zero, it would be extremely unlikely to get those low p-values. But — one of the possible reasons for the result, a reason you can never dismiss, is that your model simply is wrong!
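A minimal illustration of this last point (my own sketch, standard library only, using a normal approximation in place of the exact t-distribution): fit a straight line to data generated by a quadratic relation, and the slope coefficient comes out highly ‘significant’ even though the model is plainly wrong.

```python
import math
import random

def ols_slope_p(x, y):
    """Slope of a simple linear regression of y on x, and an approximate
    two-sided p-value for H0: slope = 0 (normal approximation to the
    t-distribution, adequate at this sample size)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    a = my - b * mx
    resid_var = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y)) / (n - 2)
    se = math.sqrt(resid_var / sxx)
    z = abs(b) / se
    p = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))
    return b, p

random.seed(2)
# The true relation is quadratic, but we insist on fitting a straight line.
x = [random.uniform(0, 10) for _ in range(500)]
y = [xi ** 2 + random.gauss(0, 5) for xi in x]

slope, p = ols_slope_p(x, y)
print(p < 0.001)  # a highly 'significant' slope from a misspecified model
```

The tiny p-value on the slope says nothing about whether a straight line is the right model; here it demonstrably is not.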

The present excessive reliance on significance testing in science is disturbing and should be fought. But it is also important to put significance-testing abuse in perspective. The real problem in today’s social sciences is not significance testing per se. No, the real problem is the unqualified and mechanistic application of statistical methods to real-world phenomena, often without even the slightest idea of how the assumptions behind the statistical models condition and severely limit the value of the inferences made.

When the herd turns

16 December 2018 at 12:54 | Posted in Economics | Comments Off on When the herd turns

 

Annie Lööf — a stale neoliberal who should be ashamed of herself

16 December 2018 at 12:49 | Posted in Politics & Society | Comments Off on Annie Lööf — a stale neoliberal who should be ashamed of herself

That well-known neoliberals such as Alan Greenspan and Paul Ryan adore Ayn Rand, high priestess of ice-cold egoism, and her Übermensch ideals is perhaps not so surprising. But that Annie Lööf does so too is rather more remarkable.

In Lööf’s eyes, Rand is “one of the greatest thinkers of the twentieth century.” In the eyes of others, one of the twentieth century’s most repugnant figures.

That Mrs Lööf openly celebrates a psychopath like Ayn Rand and a dictator-hugger like Margaret Thatcher is reprehensible in itself. But the main reason I have kept criticizing this platitude-spouting career politician year after year is above all her mindless neoliberal argumentation and her motions to, for example,

  • Introduce a flat tax (lower taxes for high-income earners)
  • Abolish the Employment Protection Act
  • Restrict the right to strike
  • Introduce market-rate rents
  • Sell off SvT and SR
  • Make Sweden join NATO
  • Expand nuclear power

Annie Lööf and other neoliberals have long trudged along the same path, extolling an American model devoid of regulations and of welfare-enhancing collective agreements on the labour market. Well, thank you very much! Sure, in the US we may not see many examples of what Lööf et consortes call union “abuses,” but all the more of income and wealth inequality, something that has also contributed greatly to the economic malaise.

As all modern research in the field shows, inequality hampers growth and welfare. Instead of letting simple-mindedness replace analytical capacity and good judgement, these market matadors of unreason ought to be deeply self-critical and ask themselves how they could for so long peddle woolly notions that entirely lack scientific support.

Neoliberal arrogance with idols and role models like Ayn Rand and Margaret Thatcher should not win votes in twenty-first-century Sweden. Let us hope that reality soon catches up with our home-grown Rand-Thatcher. It is high time to wake from the neoliberal nightmare into which this political careerist and cliché-monger has managed to drag the once so proud Centre Party.

To those who, like Annie Lööf, want to dismantle the welfare-creating structures of the Swedish model on stale ideological grounds, many will no doubt exclaim with Fabian Månsson: “Shame on you, sevenfold shame on you.”

The capital controversy

16 December 2018 at 11:50 | Posted in Economics | 3 comments

The production function has been a powerful instrument of miseducation. The student of economic theory is taught to write Q = f(L, K) where L is a quantity of labor, K a quantity of capital and Q a rate of output of commodities. He is instructed to assume all workers alike, and to measure L in man-hours of labor; he is told something about the index-number problem in choosing a unit of output; and then he is hurried on to the next question, in the hope that he will forget to ask in what units K is measured. Before he ever does ask, he has become a professor, and so sloppy habits of thought are handed on from one generation to the next.

Joan Robinson, The Production Function and the Theory of Capital (1953)

Karl-Bertil — a saviour in time of need

16 December 2018 at 11:42 | Posted in Varia | Comments Off on Karl-Bertil — a saviour in time of need

 

La grande bellezza

15 December 2018 at 22:45 | Posted in Varia | Comments Off on La grande bellezza

 

The most dangerous equation in the world

15 December 2018 at 17:12 | Posted in Statistics & Econometrics | Comments Off on The most dangerous equation in the world

Failure to take sample size into account and inferring causality from outliers can lead to incorrect policy actions. For this reason, Howard Wainer refers to the formula for the standard deviation of the mean as the ”most dangerous equation in the world.” For example, in the 1990s the Gates Foundation and other nonprofits advocated breaking up schools based on evidence that the best schools were small. To see the flawed reasoning, imagine that schools come in two sizes — small schools with 100 students and large schools with 1600 students — and that students’ scores at both types of schools are drawn from the same distribution with a mean score of 100 and a standard deviation of 80. At small schools, the standard deviation of the mean equals 8. At large schools, it equals 2.

If we assign the label ‘high-performing’ to schools with means above 110 and the label ‘exceptional’ to schools with means above 120, then only small schools will meet either threshold. For the small schools, an average score of 110 is 1.25 standard deviations above the mean; such events occur about 10% of the time. A mean score of 120 is 2.5 standard deviations above the mean … When we do these same calculations for large schools, we find that the ‘high-performing’ threshold lies 5 standard deviations above the mean and the ‘exceptional’ threshold lies 10 standard deviations above the mean. Such events would, in practice, never occur. Thus, the fact that the very best schools are smaller is not evidence that smaller schools perform better.

Scott Page
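The arithmetic in the quote is easy to check with a small simulation (my own sketch, using the quoted numbers: mean 100, standard deviation 80, school sizes 100 and 1,600):

```python
import random

random.seed(3)

def school_mean(n_students, mu=100, sigma=80):
    """Average score of one school whose students are i.i.d. draws."""
    return sum(random.gauss(mu, sigma) for _ in range(n_students)) / n_students

small = [school_mean(100) for _ in range(500)]   # sd of the mean = 80/sqrt(100) = 8
large = [school_mean(1600) for _ in range(500)]  # sd of the mean = 80/sqrt(1600) = 2

high_small = sum(m > 110 for m in small)
high_large = sum(m > 110 for m in large)
# Essentially every 'high-performing' school turns out to be a small one,
# even though all students are drawn from the same distribution.
print(high_small, high_large)
```

Roughly a tenth of the small schools clear the 110 threshold while the large schools essentially never do, which is exactly the selection artefact the quote describes.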

Brian Arthur and the ‘El Farol Problem’

15 December 2018 at 15:38 | Posted in Economics | Comments Off on Brian Arthur and the ‘El Farol Problem’

 

Field

15 December 2018 at 15:03 | Posted in Varia | Comments Off on Field

 

Rome in the rain

14 December 2018 at 16:20 | Posted in Varia | Comments Off on Rome in the rain


Svante is king!

Preskriberade romanser

14 December 2018 at 16:11 | Posted in Varia | Comments Off on Preskriberade romanser

 

Invest in railways — not fossil expansion

14 December 2018 at 15:56 | Posted in Politics & Society | Comments Off on Invest in railways — not fossil expansion

The world faces acute climate problems at the same time as the security-policy situation grows ever more tense, with flaring trade conflicts and diminishing respect for international law. This increases the risks posed by our oil dependence and our vulnerable infrastructure. Preem AB and Trafikverket bear a great responsibility for reducing oil consumption and carbon dioxide emissions from the transport sector, and for national security. Unfortunately, they are doing the opposite …

A robust railway cannot be achieved by closing down railway lines. An analysis of all the threatened lines of nineteenth-century standard shows, moreover, that modernizing them would improve Trafikverket’s maintenance finances by up to SEK 1 billion a year … But Trafikverket is evidently incapable of managing and developing the country’s railway system. An ominous sign of this is also the introduction of the EU’s new signalling and safety system, ERTMS …

Instead of unprofitable megaprojects such as ERTMS, Ostlänken and Västlänken, with their large attendant carbon dioxide emissions, unglamorous and quick everyday measures across the entire network can deliver a more robust railway with many times the capacity. Preserving electrified lines, electrifying diesel lines and building double track, together with improved train services, can sharply reduce carbon dioxide emissions from the transport sector and our oil dependence. The country’s security situation would also improve.

Jan Du Rietz   Hans Albin Larsson   Lars P Syll

Why all models are wrong

13 December 2018 at 17:53 | Posted in Economics, Theory of Science & Methodology | 11 comments

Models share three common characteristics: First, they simplify, stripping away unnecessary details, abstracting from reality, or creating anew from whole cloth. Second, they formalize, making precise definitions. Models use mathematics, not words … Models create structures within which we can think logically … But the logic comes at a cost, which leads to their third characteristic: all models are wrong … Models are wrong because they simplify. They omit details. By considering many models, we can overcome the narrowing of rigor by crisscrossing the landscape of the possible.

To rely on a single model is hubris. It invites disaster … We need many models to make sense of complex systems.

Yes indeed. To rely on a single mainstream economic theory and its models is hubris. It certainly does invite disaster. To make sense of complex economic phenomena we need many theories and models. We need pluralism. Pluralism both in theories and methods.

Using ‘simplifying’ mathematical tractability assumptions — rational expectations, common knowledge, representative agents, linearity, additivity, ergodicity, etc. — because otherwise economists cannot ‘manipulate’ their models or come up with ‘rigorous’ and ‘precise’ predictions and explanations does not exempt them from having to justify their modelling choices. Being able to ‘manipulate’ things in models cannot per se be enough to warrant a methodological choice. If economists do not think their tractability assumptions make for good and realistic models, it is certainly fair to ask for clarification of the ultimate goal of the whole modelling endeavour.

The final court of appeal for models is not whether we — once we have made our tractability assumptions — can ‘manipulate’ them, but the real world. And as long as no convincing justification is put forward for how the inferential bridging is de facto made, model building is little more than hand-waving that gives us little warrant for making inductive inferences from models to the real world.

Mainstream economists construct closed formalistic-mathematical theories and models in order to deliver purportedly rigorous deductions that may somehow be exportable to the target system. By analyzing a few causal factors in their ‘laboratories’ they hope they can perform ‘thought experiments’ and observe how these factors operate on their own, without impediments or confounders.

Unfortunately, this is not so. The reason is that economic causes never act in a socio-economic vacuum. Causes have to be set in a contextual structure to be able to operate, and this structure has to take some form or other. But instead of incorporating structures that are true to the target system, the settings made in mainstream economic models are based on formalistic mathematical tractability. In the models they often appear as unrealistic ‘tractability’ assumptions, usually playing a decisive role in getting the deductive machinery to deliver ‘precise’ and ‘rigorous’ results. This, of course, makes exporting to real-world target systems problematic, since these models — as part of a deductivist covering-law tradition in economics — are thought to deliver general and far-reaching conclusions that are externally valid. But how can we be sure the lessons learned in these theories and models have external validity when they are based on highly specific unrealistic assumptions? As a rule, the more specific and concrete the structures, the less generalizable the results. Admitting that we can in principle move from (partial) falsehoods in theories and models to truth in real-world target systems does not take us very far unless a thorough explication of the relation between theory, model and real-world target system is made. Having a deductive warrant for things happening in a closed model is no guarantee that they are preserved when applied to an open real-world target system.

If the ultimate criterion of success for a deductivist system is the extent to which it predicts and coheres with (parts of) reality, modern mainstream economics seems a hopeless misallocation of scientific resources. To focus scientific endeavours on proving things in models is a gross misapprehension of what an economic theory ought to be about. Real-world economic systems do not conform to the restricted closed-system structure the mainstream modelling strategy presupposes.

What is wrong with mainstream economics is not that it employs models per se. What is wrong is that it employs poor models. They — and the tractability assumptions on which they to a large extent build — are poor because they do not bridge to the real world in which we live. And — as Page writes — ”if a model cannot explain, predict, or help us reason, we must set it aside.”

Disconfirming rational expectations

13 December 2018 at 11:12 | Posted in Economics | Comments Off on Disconfirming rational expectations

Empirical efforts at testing the correctness of the rational expectations hypothesis have resulted in a series of studies that have more or less concluded that it is not consistent with the facts. In one of the better-known and highly respected evaluations, Michael Lovell (1986) concluded:

it seems to me that the weight of empirical evidence is sufficiently strong to compel us to suspend belief in the hypothesis of rational expectations, pending the accumulation of additional empirical evidence.

And this is how Nikolay Gertchev summarizes studies on the empirical correctness of the hypothesis:

More recently, it even has been argued that the very conclusions of dynamic models assuming rational expectations are contrary to reality … If taken as an empirical behavioral assumption, the RE hypothesis is plainly false; if considered only as a theoretical tool, it is unfounded and self-contradictory.

Those who want to build macroeconomics on microfoundations usually maintain that the only robust policies and institutions are those based on rational expectations and representative actors. As I have tried to show elsewhere — in On the use and misuse of theories and models in economics and Rational expectations: a fallacious foundation for macroeconomics in a non-ergodic world — there is really no support for this conviction at all. On the contrary. If we want to have anything of interest to say on real economies, financial crises and the decisions and choices real people make, it is high time to dump macroeconomic models building on representative actors and rational expectations microfoundations in the pseudo-science dustbin.

Day of glory

12 December 2018 at 18:14 | Posted in Varia | 1 comment

 

Oíche Chiúin

11 December 2018 at 21:22 | Posted in Varia | Comments Off on Oíche Chiúin

 

Econometrics — analysis with incredible certitude

10 December 2018 at 18:41 | Posted in Statistics & Econometrics | Comments Off on Econometrics — analysis with incredible certitude

There have been over four decades of econometric research on business cycles …

But the significance of the formalization becomes more difficult to identify when it is assessed from the applied perspective …

The wide conviction of the superiority of the methods of the science has converted the econometric community largely into a group of fundamentalist guards of mathematical rigour … So much so that the relevance of the research to business cycles is reduced to empirical illustrations. To that extent, probabilistic formalisation has trapped econometric business cycle research in the pursuit of means at the expense of ends.

The limits of econometric forecasting have, as Qin notes, been critically pointed out many times before. Trygve Haavelmo assessed the role of econometrics in an article from 1958 and, although mainly positive about the ”repair work” and ”clearing-up work” done, also found some grounds for despair:

There is the possibility that the more stringent methods we have been striving to develop have actually opened our eyes to recognize a plain fact: viz., that the ”laws” of economics are not very accurate in the sense of a close fit, and that we have been living in a dream-world of large but somewhat superficial or spurious correlations.

Maintaining that economics is a science in the ‘true knowledge’ business, I remain a sceptic of the pretences and aspirations of econometrics. The marginal return on its ever-higher technical sophistication in no way makes up for the lack of serious under-labouring of its deeper philosophical and methodological foundations that Keynes already complained about. The rather one-sided emphasis on usefulness, and its concomitant instrumentalist justification, cannot hide the fact that the legions of probabilistic econometricians who give supportive evidence for considering it ‘fruitful to believe’ in the possibility of treating unique economic data as the observable results of random drawings from an imaginary sampling of an imaginary population are skating on thin ice.

A rigorous application of econometric methods in economics really presupposes that the phenomena of our real-world economies are ruled by stable causal relations between variables. The endemic lack of predictive success of the econometric project indicates that this hope of finding fixed parameters is an incredible hope for which there really is no other ground than hope itself.
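The worry about unstable parameters can be illustrated with a toy example (my own construction, not from the text, standard library only): when the ‘law’ generating the data shifts halfway through the sample, the single coefficient a pooled regression delivers describes neither regime.

```python
import random

def ols_slope(x, y):
    """OLS slope of y on x (with intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

random.seed(4)
x = [random.uniform(0, 10) for _ in range(400)]
# A structural break: slope 2 in the first regime, -1 in the second.
y = [2 * xi + random.gauss(0, 1) for xi in x[:200]] + \
    [-1 * xi + random.gauss(0, 1) for xi in x[200:]]

first = ols_slope(x[:200], y[:200])    # close to 2
second = ols_slope(x[200:], y[200:])   # close to -1
pooled = ols_slope(x, y)               # a 'fixed parameter' true of neither regime
print(round(first, 1), round(second, 1), round(pooled, 1))
```

The pooled estimate sits between the two regime slopes and would be a poor guide to either past behaviour or future forecasts, which is the sense in which hoping for fixed parameters is hoping against the evidence.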

Wald War II

10 December 2018 at 14:42 | Posted in Statistics & Econometrics | Comments Off on Wald War II

 
