Universities and colleges are letting weaker students through

16 March, 2018 at 12:48 | Posted in Education & School | 7 Comments

The crisis in the Swedish school system has reached higher education. In today's Kaliber, university teachers talk about students with weak prior knowledge, about standards being lowered, and about a funding system that pays out more money the more students are passed …

Lars Pålsson Syll is a professor of social studies and, among other things, teaches statistics to prospective teachers. Since he started teaching 30 years ago, the number of university students has more than doubled. Lars Pålsson Syll points out that there are still brilliant students, but that the average seems to have declined.

– They arrive with less in their baggage than I and other university teachers expect. They have been through school, and the requirement for being admitted to the teacher education programme and becoming a social studies teacher is Maths B, yet you notice that they do not seem to understand even the basics they should have learned in mathematics already in lower secondary school.

How do you handle this? Do more students fail, or are the standards lowered, or what happens?

– Well, if one were to be politically correct, one would prefer to answer that we tackle it by failing more students, but reality is probably not quite that Panglossian; that is not what we usually do. I think that, even if we do not do it consciously, what happens is that if we get weaker material to work with, we adjust the standards somewhat to that material. It is not very plausible that a teacher would want to fail 9 out of 10 students and hold on to the level of requirements we had perhaps 20-30 years ago. But of course, no one says that we lower the standards to get these students through; de facto, though, I think we do, and one of the worrying reasons we do so is also the kind of funding system we have in higher education, where student throughput shows up in our budget.



Science’s brightest star just went out

15 March, 2018 at 20:31 | Posted in Varia | 2 Comments

Stephen Hawking

Abduction — the induction that constitutes the essence of scientific reasoning

15 March, 2018 at 17:15 | Posted in Theory of Science & Methodology | 3 Comments

In science we standardly use a logically non-valid inference — the fallacy of affirming the consequent — of the following form:

(1) p => q
(2) q
----------
(3) p

or, in instantiated form

(1) ∀x (Gx => Px)
(2) Pa
----------
(3) Ga

Although logically invalid, it is nonetheless a kind of inference — abduction — that may be factually strongly warranted and truth-producing.

Following the general pattern ‘Evidence => Explanation => Inference’ we infer something based on what would be the best explanation given the law-like rule (premise 1) and an observation (premise 2). The truth of the conclusion (explanation) is not logically given, but something we have to justify, argue for, and test in different ways in order to establish it with any degree of confidence. And as always when we deal with explanations, what is considered best is relative to what we know of the world. In the real world, all evidence is relational (e only counts as evidence in relation to a specific hypothesis H) and has an irreducible holistic aspect. We never conclude that evidence follows from a hypothesis simpliciter, but always given some more or less explicitly stated contextual background assumptions. All non-deductive inferences and explanations are necessarily context-dependent.

If we extend the abductive scheme to incorporate the demand that the explanation has to be the best among a set of plausible competing potential and satisfactory explanations, we have what is nowadays usually referred to as inference to the best explanation.

In inference to the best explanation we start with a body of (purported) data/facts/evidence and search for explanations that can account for these data/facts/evidence. Having the best explanation means that you, given the context-dependent background assumptions, have a satisfactory explanation that can explain the evidence better than any other competing explanation — and so it is reasonable to consider the hypothesis to be true. Even if we (inevitably) do not have deductive certainty, our reasoning gives us a license to consider our belief in the hypothesis as reasonable.
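One crude way to make this comparative step concrete is to score a set of rival hypotheses by how well they account for the evidence at hand. The following toy sketch in Python (the hypotheses, the coin-flip data, and the log_likelihood helper are all invented for illustration) simply ranks three candidate explanations of a short sequence of observations:

    import numpy as np

    # Toy 'inference to the best explanation': three rival hypotheses about a
    # coin (fair, heads-biased, tails-biased) and a short run of evidence.
    evidence = np.array([1, 1, 0, 1, 1, 1, 0, 1])  # 1 = heads, 0 = tails

    hypotheses = {
        "fair coin (p = 0.5)": 0.5,
        "heads-biased coin (p = 0.8)": 0.8,
        "tails-biased coin (p = 0.2)": 0.2,
    }

    def log_likelihood(p, data):
        """How well a hypothesised heads-probability p accounts for the data."""
        return np.sum(data * np.log(p) + (1 - data) * np.log(1 - p))

    scores = {name: log_likelihood(p, evidence) for name, p in hypotheses.items()}
    for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(f"{name:30s} log-likelihood = {score:6.2f}")

    best = max(scores, key=scores.get)
    print("Best explanation, relative to this set:", best)

The point of the sketch is also its limitation: the 'best' explanation is best only relative to the hypotheses actually considered and the background assumptions baked into the scoring, which is precisely the context-dependence stressed above.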

Accepting a hypothesis means that you believe it does explain the available evidence better than any other competing hypothesis. Knowing that we — after having earnestly considered and analysed the other available potential explanations — have been able to eliminate the competing potential explanations, warrants and enhances the confidence we have that our preferred explanation is the best explanation, i. e., the explanation that provides us (given it is true) with the greatest understanding.

This, of course, does not in any way mean that we cannot be wrong. Of course, we can. Inferences to the best explanation are fallible inferences — since the premises do not logically entail the conclusion — so from a logical point of view, inference to the best explanation is a weak mode of inference. But if the arguments put forward are strong enough, they can be warranted and give us justified true belief, and hence knowledge, even though they are fallible inferences. As scientists we sometimes — much like Sherlock Holmes and other detectives who use inference-to-the-best-explanation reasoning — experience disillusion. We thought we had reached a strong conclusion by ruling out the alternatives in the set of contrasting explanations. But what we thought was true turned out to be false.

That does not necessarily mean that we had no good reasons for believing what we believed. If we cannot live with that contingency and uncertainty, well, then we are in the wrong business. If it is deductive certainty you are after, rather than the ampliative and defeasible reasoning in inference to the best explanation — well, then get into math or logic, not science.

Keynes and econometrics

15 March, 2018 at 12:23 | Posted in Statistics & Econometrics | Leave a comment

After the 1920s, the theoretical and methodological approach to economics deeply changed … A new generation of American and European economists developed Walras’ and Pareto’s mathematical economics. As a result of this trend, the Econometric Society was founded in 1930 …

In the late 1930s, John Maynard Keynes and other economists objected to this recent “mathematizing” approach … At the core of Keynes’ concern lay the question of methodology.

Maria Alejandra Madi

Keynes’ comprehensive critique of econometrics and the assumptions it is built around — completeness, measurability, independence, homogeneity, and linearity — is still valid today.

Most work in econometrics is done on the assumption that the researcher has a theoretical model that is ‘true.’ But to think that we are able to construct a model in which all relevant variables are included and the functional relationships between them are correctly specified is not only a belief without support, it is a belief impossible to support.

The theories we work with when building our econometric regression models are insufficient. No matter what we study, there are always some variables missing, and we do not know how to correctly specify the functional relationships between the variables we do include.

Every econometric model ever constructed is misspecified. There is always an endless list of possible variables to include and endless possible ways to specify the relationships between them. So every applied econometrician comes up with his own specification and ‘parameter’ estimates. The econometric Holy Grail of consistent and stable parameter values is nothing but a dream.

A rigorous application of econometric methods in economics presupposes that the phenomena of our real-world economies are ruled by stable causal relations between variables. Parameter values estimated in specific spatio-temporal contexts are presupposed to be exportable to totally different contexts. To warrant this assumption, however, one has to convincingly establish that the targeted acting causes are stable and invariant, so that they maintain their parametric status after the bridging. The endemic lack of predictive success of the econometric project indicates that this hope of finding fixed parameters is a hope for which there really is no other ground than hope itself.
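To see the exportability problem in miniature, consider the following simulation sketch (Python with numpy; the ols_slope helper and all numbers are invented for illustration). A simple regression estimated in two 'contexts' with non-invariant causal structure delivers very different parameter values, and the pooled estimate describes neither:

    import numpy as np

    rng = np.random.default_rng(0)

    def ols_slope(x, y):
        """Slope from an OLS regression of y on x (with intercept)."""
        X = np.column_stack([np.ones_like(x), x])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return beta[1]

    # Two spatio-temporal 'contexts' whose causal structure is not invariant.
    x1 = rng.normal(size=500)
    y1 = 1.0 + 2.0 * x1 + rng.normal(scale=0.5, size=500)    # slope 2 in context 1
    x2 = rng.normal(size=500)
    y2 = 1.0 - 0.5 * x2 + rng.normal(scale=0.5, size=500)    # slope -0.5 in context 2

    print("slope estimated in context 1:", round(ols_slope(x1, y1), 2))
    print("slope estimated in context 2:", round(ols_slope(x2, y2), 2))
    print("slope from pooling the data :",
          round(ols_slope(np.concatenate([x1, x2]), np.concatenate([y1, y2])), 2))

Nothing in the pooled estimate warns the user that the 'parameter' being exported does not exist as a stable feature of either context.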

The theoretical conditions that have to be fulfilled for econometrics to really work are nowhere near being met in reality. Making outlandish statistical assumptions does not provide a solid ground for doing relevant social science and economics. Although econometric methods have become the most widely used quantitative tools in economics today, it is still a fact that the inferences made from them are, as a rule, invalid.

Econometrics is basically a deductive method. Given its assumptions, it delivers deductive inferences. The problem, of course, is that we will never completely know when the assumptions are right. Conclusions can only be as certain as their premises — and that also applies to econometrics.

Critique of Adorno from the left

14 March, 2018 at 18:19 | Posted in Politics & Society | Leave a comment


The weapon of criticism cannot, however, replace the criticism of weapons

Ricardian equivalence — nothing but total horseshit!

14 March, 2018 at 17:05 | Posted in Economics | 5 Comments

Ricardian equivalence basically means that financing government expenditures through taxes or debt is equivalent, since debt financing must be repaid with interest and agents — equipped with ‘rational expectations’ — would simply increase their savings in order to be able to pay the higher future taxes, thus leaving total expenditure unchanged.



In the standard mainstream consumption model — used in DSGE macroeconomic modelling — people are basically portrayed as treating time as a dichotomous phenomenon (today and the future) when contemplating making decisions and acting. How much should one consume today and how much in the future? Facing an intertemporal budget constraint of the form

c_t + c_f/(1+r) = f_t + y_t + y_f/(1+r),

where c_t is consumption today, c_f is consumption in the future, f_t is holdings of financial assets today, y_t is labour income today, y_f is labour income in the future, and r is the real interest rate, and having a lifetime utility function of the form

U = u(c_t) + a·u(c_f),

where a is the time-discounting parameter, the representative agent (consumer) maximizes his utility when

u'(c_t) = a(1+r)u'(c_f).

This expression – the Euler equation – implies that the representative agent (consumer) is indifferent between consuming one more unit today or instead consuming it tomorrow. Typically using a logarithmic functional form – u(c) = log c – which gives u'(c) = 1/c, the Euler equation can be rewritten as

1/c_t = a(1+r)(1/c_f),

or

c_f/c_t = a(1+r).

This importantly implies that, according to the mainstream consumption model, changes in the (real) interest rate and consumption move in the same direction. It also follows that consumption is invariant to the timing of taxes, since wealth – f_t + y_t + y_f/(1+r) – has to be interpreted as present discounted value net of taxes. And so, according to the assumption of Ricardian equivalence, the timing of taxes does not affect consumption, simply because the maximization problem as specified in the model is unchanged.
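To see the invariance claim at work inside the model, here is a minimal numerical sketch in Python (the consumption helper and all numbers are invented for illustration; with log utility the optimum has the closed form c_t = W/(1+a) and c_f = a(1+r)W/(1+a), where W is lifetime wealth net of taxes). Two tax profiles with the same present value produce exactly the same consumption plan:

    def consumption(f_t, y_t, y_f, tax_t, tax_f, r, a=0.96):
        """Optimal (c_t, c_f) for max log(c_t) + a*log(c_f)
        subject to c_t + c_f/(1+r) = W, with W lifetime wealth net of taxes."""
        W = f_t + (y_t - tax_t) + (y_f - tax_f) / (1 + r)
        return W / (1 + a), a * (1 + r) * W / (1 + a)

    r = 0.03
    # Tax 20 today, or defer it and pay 20*(1+r) in the future: same present value.
    c_tax_now = consumption(f_t=10, y_t=100, y_f=100, tax_t=20.0, tax_f=0.0, r=r)
    c_tax_later = consumption(f_t=10, y_t=100, y_f=100, tax_t=0.0, tax_f=20.0 * (1 + r), r=r)

    print("tax levied today          :", tuple(round(c, 4) for c in c_tax_now))
    print("tax deferred with interest:", tuple(round(c, 4) for c in c_tax_later))

Both calls return the same consumption pair, and within the model that is all Ricardian equivalence amounts to.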

That the theory does not fit the facts we already knew.

Now Jonathan Parker has summarized a series of studies empirically testing the theory, reconfirming how out of line with reality Ricardian equivalence is.

This, once again, underlines that there is, of course, no reason for us to believe in that fairy tale. Or, as Nobel laureate Joseph Stiglitz has it:

Ricardian equivalence is taught in every graduate school in the country. It is also sheer nonsense.

Trump’s new torturer-in-chief

14 March, 2018 at 16:51 | Posted in Politics & Society | 1 Comment


Between us neighbours

14 March, 2018 at 14:24 | Posted in Varia | 1 Comment


Jordan Peterson on responsibility and political correctness

14 March, 2018 at 09:35 | Posted in Politics & Society | 4 Comments


Non-ergodicity and the poverty of kitchen sink modeling

13 March, 2018 at 11:21 | Posted in Statistics & Econometrics | 1 Comment


When I present this argument … one or more scholars say, “But shouldn’t I control for everything I can in my regressions? If not, aren’t my coefficients biased due to excluded variables?” This argument is not as persuasive as it may seem initially. First of all, if what you are doing is misspecified already, then adding or excluding other variables has no tendency to make things consistently better or worse … The excluded variable argument only works if you are sure your specification is precisely correct with all variables included. But no one can know that with more than a handful of explanatory variables.
Still more importantly, big, mushy linear regression and probit equations seem to need a great many control variables precisely because they are jamming together all sorts of observations that do not belong together. Countries, wars, racial categories, religious preferences, education levels, and other variables that change people’s coefficients are “controlled” with dummy variables that are completely inadequate to modeling their effects. The result is a long list of independent variables, a jumbled bag of nearly unrelated observations, and often a hopelessly bad specification with meaningless (but statistically significant with several asterisks!) results.

A preferable approach is to separate the observations into meaningful subsets—internally compatible statistical regimes … If this can’t be done, then statistical analysis can’t be done. A researcher claiming that nothing else but the big, messy regression is possible because, after all, some results have to be produced, is like a jury that says, “Well, the evidence was weak, but somebody had to be convicted.”

Christopher H. Achen
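A small simulation may illustrate the point about dummy-variable 'control' (Python with numpy; the two regimes, the ols helper, and all numbers are invented for illustration). Pooling two regimes whose slopes differ and 'controlling' for regime with a dummy shifts the intercept but forces one common, essentially meaningless slope; splitting the sample recovers the two very different coefficients:

    import numpy as np

    rng = np.random.default_rng(1)

    def ols(X, y):
        """OLS coefficients via least squares."""
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return beta

    # Two internally compatible 'statistical regimes' with opposite slopes.
    n = 400
    x_a = rng.normal(size=n)
    y_a = 2.0 + 1.5 * x_a + rng.normal(scale=0.5, size=n)
    x_b = rng.normal(size=n)
    y_b = 5.0 - 1.5 * x_b + rng.normal(scale=0.5, size=n)

    # Kitchen-sink pooling: one big regression, regime 'controlled' with a dummy.
    x = np.concatenate([x_a, x_b])
    y = np.concatenate([y_a, y_b])
    dummy = np.concatenate([np.zeros(n), np.ones(n)])
    X_pooled = np.column_stack([np.ones(2 * n), x, dummy])
    print("pooled slope with dummy 'control':", round(ols(X_pooled, y)[1], 2))  # close to 0

    # Separate regressions per regime recover the genuinely different coefficients.
    print("regime A slope:", round(ols(np.column_stack([np.ones(n), x_a]), y_a)[1], 2))
    print("regime B slope:", round(ols(np.column_stack([np.ones(n), x_b]), y_b)[1], 2))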

The empirical and theoretical evidence is clear. Predictions and forecasts are inherently difficult to make in a socio-economic domain where genuine uncertainty and unknown unknowns often rule the roost. The real processes that underlie the time series that economists use to make their predictions and forecasts do not conform with the assumptions made in the applied statistical and econometric models. A fortiori, much less is predictable than standardly — and uncritically — assumed. The forecasting models fail to a large extent because the kind of uncertainty that faces humans and societies actually makes the models, strictly speaking, inapplicable. The future is inherently unknowable — and using statistics, econometrics, decision theory or game theory does not in the least overcome this ontological fact. The economic future is not something that we normally can predict in advance. Better, then, to accept that as a rule ‘we simply do not know.’

Ergodicity is a technical term used by statisticians to reflect the idea that we can learn something about the future by looking at the past. It is an idea that is essential to our use of probability models to forecast the future and it is the failure of economic systems to display this property that makes our forecasts so fragile.

Roger Farmer
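A standard toy example of what the failure of ergodicity looks like in practice is a multiplicative gamble (the sketch below is in Python with numpy; the payoffs and all numbers are invented for illustration). The average over many parallel histories grows, while the typical individual history followed through time shrinks, so a single past trajectory is a poor guide to the ensemble that forecasting models implicitly rely on:

    import numpy as np

    rng = np.random.default_rng(42)

    # Each period, wealth is multiplied by 1.5 or 0.6 with equal probability.
    # The expected one-period growth factor is 0.5*1.5 + 0.5*0.6 = 1.05 (> 1),
    # but the typical long-run growth factor is sqrt(1.5*0.6) ~ 0.95 (< 1).
    steps, paths = 30, 100_000
    factors = rng.choice([1.5, 0.6], size=(paths, steps))
    final_wealth = np.prod(factors, axis=1)   # every path starts from wealth = 1

    print("theoretical ensemble growth factor:", round(1.05 ** steps, 2))
    print("simulated ensemble average        :", round(float(final_wealth.mean()), 2))
    print("median individual path            :", round(float(np.median(final_wealth)), 2))
    print("share of paths that end below 1   :", round(float((final_wealth < 1).mean()), 2))

The ensemble statistics describe an average over parallel histories that no single agent ever experiences, which is exactly why time averages and probability-model forecasts can come apart so badly.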

