## Inference to the best explanation

21 Dec, 2018 at 15:41 | Posted in Theory of Science & Methodology | 15 Comments

In a time when scientific relativism is on the rise, it is important to insist that science not be reduced to a purely discursive level. We have to maintain the Enlightenment tradition in which the main task of science is to study the structure of reality.

Science is made possible by the fact that there are structures that are durable and independent of our knowledge or beliefs about them. There exists a reality beyond our theories and concepts of it. Contrary to positivism, yours truly, as a critical realist, would argue that the main task of science is not to detect event-regularities between observed facts, but rather to identify and explain the underlying structures and forces that produce the observed events.

Given that what we are looking for is to be able to explain what is going on in the world we live in, it would — instead of building models based on logic-axiomatic, topic-neutral, context-insensitive and non-ampliative deductive reasoning, as in mainstream economic theory — be so much more fruitful and relevant to apply inference to the best explanation.

## Deborah Mayo on statistical significance testing

20 Dec, 2018 at 16:30 | Posted in Statistics & Econometrics | Comments Off on Deborah Mayo on statistical significance testing

Although yours truly appreciates much of Mayo’s philosophical-statistical work, it is essential to remember that her qualified use of ‘severe testing’ is actually pretty far from the day-to-day practice of significance testing among applied social scientists today. If, however, statistics users stuck to Mayo’s ‘severe testing,’ there would be less reason to criticize the modern frequentist position on statistical inference.

## Big data — Poor science

20 Dec, 2018 at 16:09 | Posted in Statistics & Econometrics | 2 Comments

Almost everything we do these days leaves some kind of data trace in some computer system somewhere. When such data is aggregated into huge databases it is called “Big Data”. It is claimed social science will be transformed by the application of computer processing and Big Data. The argument is that social science has, historically, been “theory rich” and “data poor” and now we will be able to apply the methods of “real science” to “social science” producing new validated and predictive theories which we can use to improve the world.

What’s wrong with this? … Firstly, what is this “data” we are talking about? In its broadest sense, it is some representation, usually in symbolic form, that is machine-readable and processable. And how will this data be processed? Using some form of machine learning or statistical analysis. But what will we find? Regularities or patterns … What do such patterns mean? Well, that will depend on who is interpreting them …

Looking for “patterns or regularities” presupposes a definition of what a pattern is and that presupposes a hypothesis or model, i.e. a theory. Hence big data does not “get us away from theory” but rather requires theory before any project can commence.

What is the problem here? The problem is that a certain kind of approach is being propagated within the “big data” movement that claims to not be a priori committed to any theory or view of the world. The idea is that data is real and theory is not real. That theory should be induced from the data in a “scientific” way.

I think this is wrong and dangerous. Why? Because it is not clear or honest while appearing to be so. Any statistical test or machine learning algorithm expresses a view of what a pattern or regularity is and any data has been collected for a reason based on what is considered appropriate to measure. One algorithm will find one kind of pattern and another will find something else. One data set will evidence some patterns and not others. Selecting an appropriate test depends on what you are looking for. So the question posed by the thought experiment remains “what are you looking for, what is your question, what is your hypothesis?”

Ideas matter. Theory matters. Big data is not a theory-neutral way of circumventing the hard questions. In fact it brings these questions into sharp focus and it’s time we discuss them openly.

## Well-regarded economics journals publishing rubbish

19 Dec, 2018 at 16:21 | Posted in Economics | Comments Off on Well-regarded economics journals publishing rubbish

In a new paper, Andrew Chang, an economist at the Federal Reserve, and Phillip Li, an economist with the Office of the Comptroller of the Currency, describe their attempt to replicate 67 papers from 13 well-regarded economics journals … Their results? Just under half, 29 out of the remaining 59, of the papers could be qualitatively replicated (that is to say, their general findings held up, even if the authors did not arrive at the exact same quantitative results). For the other half, whose results could not be replicated, the most common reason was “missing public data or code” …

H.D. Vinod, an economics professor at Fordham University … noted that … caution could be outweighed by the sheer amount of work it takes to clean up data files in order to make them reproducible. “It’s human laziness,” he said. “There’s all this work involved in getting the data together” …

Bruce McCullough said he thought the authors’ definition of what counted as replication – achieving the same qualitative, as opposed to quantitative, results – was far too generous. If a paper’s conclusions are correct, he argues, one should be able to arrive at the same numbers using the same data. “What these journals produce is not science,” he said. “People should treat the numerical results as if they were produced by a random number generator.”

## Chicago follies (XXXIII)

18 Dec, 2018 at 19:25 | Posted in Economics | 2 Comments

At the University of Chicago, where I went to graduate school, they sell a t-shirt that says “that’s all well and good in practice, but how does it work in theory?” That ode to nerdiness in the ivory tower captures the state of knowledge about rising wealth inequality, both its causes and its consequences. Economic models of the distribution of wealth tend to assume that it is “stationary.” In other words, some people become wealthier and others become poorer, but as a whole it stays pretty much the same. Yet both of those ideas are empirically wrong: Individual mobility within the wealth distribution is low, and the distribution has become much more unequal over the past several decades …

Economists typically highlight individual or inter-generational mobility within the wealth distribution as both a reason not to care that the distribution itself is unequal and as an argument that having wealthy parents (or not) doesn’t matter that much for children’s outcomes. In fact, an influential model by the late Gary Becker and Nigel Tomes, both of the University of Chicago, predicts that accumulated wealth reduces income inequality because parents who love all their children equally allocate their bequests to compensate for their stupid children’s likely lower earnings potential in the labor market. According to those two authors, families redistribute from the smart to the dumb, and therefore, by implication, governments don’t have to redistribute from the rich to the poor.

But as Thomas Piketty and numerous other scholars point out, those reasons not to care about wealth inequality are not empirically valid. There’s scant evidence that parents leave larger inheritances to stupid children. Nor is there much evidence that native ability is the major determinant of earnings in the labor market or other life outcomes. The weakness of these explanations gets to a much larger question, one of the most important (and unanswered) ones in economics: Why are some people rich while others are poor? What economists are just finding out (while others have known for a while now) is, essentially, “because their parents were.”

## Why bloggers are so negative

17 Dec, 2018 at 19:13 | Posted in Varia | 3 Comments

Rather than getting into a discussion of whether blogs, or academic sociology, or movie reviews, should be more positive or negative, let’s get into the more interesting question of Why.

Why is negativity such a standard response?

1. Division of labor. Within social science, sociology’s “job” is to confront us with the bad news, to push us to study inconvenient truths. If you want to hear good news, you can go listen to the economists …

2. Efficient allocation of resources. Where can we do the most good? Reporting positive news is fine, but we can do more good by focusing on areas of improvement …

3. Status. Sociology doesn’t have the prestige of economics (more generally, social science doesn’t have the prestige of the natural sciences); blogs have only a fraction of the audience of the mass media (and we get paid even less for blogging than they get paid for their writing); and movie reviewers, of course, are nothing but parasites on the movie industry …

4. Urgency … As a blogger, I might not bother saying much about a news article that was well reported, because the article itself did a good job of sending its message. But it might seem more urgent to correct an error …

5. Man bites dog. Failures are just more interesting to write about, and to read about, than successes …

And, yes, I see the irony that this post, which is all about why sociologists and bloggers are so negative, has been sparked by a negative remark made by a sociologist on a blog. And I’m sure you will have some negative things to say in the comments. After all, the only people more negative than bloggers are blog commenters!

## Why statistical significance is worthless in science

17 Dec, 2018 at 14:33 | Posted in Statistics & Econometrics | Comments Off on Why statistical significance is worthless in science

There are at least 20 common misunderstandings and abuses of p-values and NHST [Null Hypothesis Significance Testing]. Most of them are related to the definition of the p-value … Other misunderstandings are about the implications of statistical significance.

Statistical significance does not mean substantive significance: just because an observation (or a more extreme observation) was unlikely had there been no differences in the population does not mean that the observed differences are large enough to be of practical relevance. At high enough sample sizes, any difference will be statistically significant regardless of effect size.
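A minimal simulation (with made-up numbers, not from the text) of that point: at a large enough sample size, a mean difference of 0.02 standard deviations, far too small to matter in practice, still comes out ‘statistically significant’:

```python
import math
import numpy as np

rng = np.random.default_rng(42)
n = 200_000  # very large samples

# Two populations whose means differ by a practically irrelevant 0.02 sd
a = rng.normal(loc=0.00, scale=1.0, size=n)
b = rng.normal(loc=0.02, scale=1.0, size=n)

diff = b.mean() - a.mean()
se = math.sqrt(a.var(ddof=1) / n + b.var(ddof=1) / n)
z = diff / se
p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value, normal approximation

print(f"difference = {diff:.4f}, p = {p:.2e}")  # tiny effect, p well below 0.05
```

Reversing the exercise, the same 0.02-sd difference with n = 100 per group would almost never reach significance, which is exactly why the p-value alone says nothing about practical relevance.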

Statistical non-significance does not entail equivalence: a failure to reject the null hypothesis is just that. It does not mean that the two groups are equivalent, since statistical non-significance can be due to low sample size.

Low p-value does not imply large effect sizes: because p-values depend on several other things besides effect size, such as sample size and spread.

It is not the probability of the null hypothesis: as we saw, it is the conditional probability of the data, or more extreme data, given the null hypothesis.

It is not the probability of the null hypothesis given the results: this is the fallacy of transposed conditionals; the p-value runs the other way, giving the probability of at least as extreme data given the null.

It is not the probability of falsely rejecting the null hypothesis: that would be alpha, not p.

It is not the probability that the results are a statistical fluke: since the test statistic is calculated under the assumption that all deviations from the null are due to chance, it cannot be used to estimate the probability of a statistical fluke, because that probability is already assumed to be 100%.

Rejecting the null hypothesis is not confirmation of a causal mechanism: you can imagine a great number of potential explanations for deviations from the null. Rejecting the null does not prove any specific one. See the above example with suicide rates.

NHST promotes arbitrary data dredging (“p-value fishing”): if you test your entire dataset and do not attain statistical significance, it is tempting to test a number of subgroups. Maybe the real effect occurs in men, women, the old, the young, whites, blacks, Hispanics, Asians, the thin, the obese, etc.? More likely, you will get a number of spurious results that appear statistically significant but are really false positives. In the quest for statistical significance, this unethical behavior is common.
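The subgroup-fishing point is easy to check in a quick simulation (hypothetical data; the subgroup count and sample sizes are arbitrary choices for illustration). Even when the null is true everywhere, testing many subgroups reliably produces some ‘significant’ hits:

```python
import math
import numpy as np

rng = np.random.default_rng(0)
n_subgroups, n_per_arm, alpha = 40, 50, 0.05

false_positives = 0
for _ in range(n_subgroups):
    # The null is true in every subgroup: there is no real effect anywhere
    treat = rng.normal(0.0, 1.0, n_per_arm)
    control = rng.normal(0.0, 1.0, n_per_arm)
    diff = treat.mean() - control.mean()
    se = math.sqrt(treat.var(ddof=1) / n_per_arm
                   + control.var(ddof=1) / n_per_arm)
    p = math.erfc(abs(diff / se) / math.sqrt(2))  # two-sided, normal approx.
    false_positives += p < alpha

print(f"'significant' subgroups: {false_positives} of {n_subgroups}")
```

On average about alpha × n_subgroups ≈ 2 subgroups come out ‘significant’ by pure chance, and the probability of at least one spurious hit across 40 tests is roughly 1 − 0.95⁴⁰ ≈ 87%.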

As shown over and over again when significance tests are applied, people have a tendency to read ‘not disconfirmed’ as ‘probably confirmed.’ Standard scientific methodology tells us that when there is only, say, a 10% probability that pure sampling error could account for the observed difference between the data and the null hypothesis, it would be more ‘reasonable’ to conclude that we have a case of disconfirmation. Especially if we perform many independent tests of our hypothesis and they all give roughly the same 10% result as our reported one, I guess most researchers would count the hypothesis as even more disconfirmed.

We should never forget that the underlying parameters we use when performing significance tests are *model constructions*. Our p-values mean nothing if the model is wrong. And most importantly — statistical significance tests DO NOT validate models! If you run a regression and get significant values (p < .05) on the coefficients, that only means that *if* the model is right and the true values of the coefficients are zero, it would be extremely unlikely to get those low p-values. But one of the possible reasons for the result, a reason you can *never* dismiss, is that your model simply is wrong!
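A small sketch (with made-up data) of that last point: fit a straight line to data actually generated by a quadratic process, and the slope’s p-value is minuscule even though the model is plainly misspecified:

```python
import math
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 200)
y = 2.0 + 0.5 * x**2 + rng.normal(0.0, 1.0, x.size)  # true process is quadratic

# Deliberately misspecified model: y = b0 + b1*x + error
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
sigma2 = resid @ resid / (x.size - 2)          # residual variance estimate
cov = sigma2 * np.linalg.inv(X.T @ X)          # OLS covariance of coefficients
t_slope = beta[1] / math.sqrt(cov[1, 1])
p_slope = math.erfc(abs(t_slope) / math.sqrt(2))  # normal approximation

print(f"slope = {beta[1]:.2f}, p = {p_slope:.2e}")  # 'significant', wrong model
```

The low p-value only tells us the slope is unlikely to be zero *within* the linear model; a residual plot, which here shows a clear U-shape, immediately reveals that the model itself is wrong.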

The present excessive reliance on significance testing in science is disturbing and should be fought. But it is also important to put significance testing abuse in perspective. The real problem in today’s social sciences is not significance testing *per se*. No, the real problem has to do with the unqualified and mechanistic application of statistical methods to real-world phenomena, often without having even the slightest idea of how the assumptions behind the statistical models condition and severely limit the value of the inferences made.

## When the herd turns

16 Dec, 2018 at 12:54 | Posted in Economics | Comments Off on When the herd turns

## Annie Lööf — a stale neoliberal who ought to be ashamed of herself

16 Dec, 2018 at 12:49 | Posted in Politics & Society | Comments Off on Annie Lööf — a stale neoliberal who ought to be ashamed of herself

That well-known neoliberals such as Alan Greenspan and Paul Ryan gush over Ayn Rand, the high priestess of ice-cold egoism, and her Übermensch ideal is perhaps not so surprising. But that Annie Lööf does so too is rather more remarkable.

In Lööf’s eyes, Rand is “one of the twentieth century’s greatest thinkers.” In the eyes of others, one of the twentieth century’s most repugnant figures.

That Mrs Lööf openly celebrates a psychopath like Ayn Rand and a dictator-hugger like Margaret Thatcher is reprehensible in itself. But the main reason I have, year after year, kept criticizing this platitudinous career politician is her mindless neoliberal argumentation and her parliamentary motions to, for example,

- Introduce a flat tax (lower taxes for high-income earners)
- Abolish the Employment Protection Act
- Restrict the right to strike
- Introduce market rents
- Sell off SvT and SR
- Take Sweden into NATO
- Expand nuclear power

Annie Lööf and other neoliberals have long marched in lockstep, praising an American model devoid of regulations and of welfare-enhancing collective agreements on the labour market. Well, thank you very much! Admittedly, in the US we may not see many examples of what Lööf et consortes call trade-union “abuses,” but we see all the more income and wealth inequality, something that has also contributed greatly to the economy’s sluggishness.

As all modern research in the field shows, inequality hampers growth and welfare. Instead of letting simple-mindedness replace analytical ability and good judgement, the market matadors of unreason ought to be deeply self-critical and ask themselves how they could for so long peddle woolly thinking for which there is no scientific basis whatsoever.

Neoliberal arrogance with idols and role models like Ayn Rand and Margaret Thatcher should not win votes in twenty-first-century Sweden. Let us hope that reality soon catches up with our very own Rand-Thatcher. It is high time to wake up from the neoliberal nightmare into which this political careerist and cliché-monger has managed to drag the once so proud Centre Party.

To those who, like Annie Lööf, want to dismantle the welfare-creating structures of the Swedish model on stale ideological grounds, many will surely want to exclaim with Fabian Månsson: “Shame on you, sevenfold shame on you.”

## The capital controversy

16 Dec, 2018 at 11:50 | Posted in Economics | 3 Comments

The production function has been a powerful instrument of miseducation. The student of economic theory is taught to write Q = f(L, K) where L is a quantity of labor, K a quantity of capital and Q a rate of output of commodities. He is instructed to assume all workers alike, and to measure L in man-hours of labor; he is told something about the index-number problem in choosing a unit of output; and then he is hurried on to the next question, in the hope that he will forget to ask in what units K is measured. Before he ever does ask, he has become a professor, and so sloppy habits of thought are handed on from one generation to the next.

Joan Robinson, ‘The Production Function and the Theory of Capital’ (1953)

## Karl-Bertil — saviour in our hour of need

16 Dec, 2018 at 11:42 | Posted in Varia | Comments Off on Karl-Bertil — saviour in our hour of need

## The most dangerous equation in the world

15 Dec, 2018 at 17:12 | Posted in Statistics & Econometrics | Comments Off on The most dangerous equation in the world

Failure to take sample size into account and inferring causality from outliers can lead to incorrect policy actions. For this reason, Howard Wainer refers to the formula for the standard deviation of the mean as the “most dangerous equation in the world.” For example, in the 1990s the Gates Foundation and other nonprofits advocated breaking up schools based on evidence that the best schools were small. To see the flawed reasoning, imagine that schools come in two sizes — small schools with 100 students and large schools with 1600 students — and that students’ scores at both types of schools are drawn from the same distribution with a mean score of 100 and a standard deviation of 80. At small schools, the standard deviation of the mean equals 8. At large schools, the standard deviation of the mean equals 2.

If we assign the label ‘high-performing’ to schools with means above 110 and the label ‘exceptional’ to schools with means above 120, then only small schools will meet either threshold. For the small schools, an average score of 110 is 1.25 standard deviations above the mean; such events occur about 10% of the time. A mean score of 120 is 2.5 standard deviations above the mean … When we do these same calculations for large schools, we find that the ‘high-performing’ threshold lies 5 standard deviations above the mean and the ‘exceptional’ threshold lies 10 standard deviations above the mean. Such events would, in practice, never occur. Thus, the fact that the very best schools are smaller is not evidence that smaller schools perform better.
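The arithmetic of the schools example can be checked in a few lines (a sketch that simulates school means directly from the stated distribution; the number of schools is an arbitrary illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(7)
n_schools, sd = 1000, 80.0

# Standard deviation of the mean is sd / sqrt(n): 8 for small schools (n=100),
# 2 for large schools (n=1600), even though students are drawn identically.
small_means = rng.normal(100.0, sd / np.sqrt(100), n_schools)
large_means = rng.normal(100.0, sd / np.sqrt(1600), n_schools)

print("small schools above 110:", (small_means > 110).mean())  # roughly 0.10
print("large schools above 110:", (large_means > 110).mean())  # essentially 0
```

About a tenth of the small schools clear the ‘high-performing’ cutoff purely through sampling noise, while essentially no large school does, exactly as the 1.25-versus-5 standard deviation calculation in the passage predicts.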

## Brian Arthur and the ‘El Farol Problem’

15 Dec, 2018 at 15:38 | Posted in Economics | Comments Off on Brian Arthur and the ‘El Farol Problem’
