Why everything we know about modern economics is wrong

19 Dec, 2020 at 20:12 | Posted in Economics | 7 Comments

The proposition is about as outlandish as it sounds: Everything we know about modern economics is wrong. And the man who says he can prove it doesn’t have a degree in economics. But Ole Peters is no ordinary crank. A physicist by training, his theory draws on research done in close collaboration with the late Nobel laureate Murray Gell-Mann, father of the quark …

His beef is that all too often, economic models assume something called “ergodicity.” That is, the average of all possible outcomes of a given situation informs how any one person might experience it. But that’s often not the case, which Peters says renders much of the field’s predictions irrelevant in real life. In those instances, his solution is to borrow math commonly used in thermodynamics to model outcomes using the correct average …

If Peters is right — and it’s a pretty ginormous if — the consequences are hard to overstate. Simply put, his “fix” would upend three centuries of economic thought, and reshape our understanding of the field as well as everything it touches …

Peters asserts his methods will free economics from thinking in terms of expected values over non-existent parallel universes and focus on how people make decisions in this one. His theory will also eliminate the need for the increasingly elaborate “fudges” economists use to explain away the inconsistencies between their models and reality.

Brandon Kochkodin / BloombergQuint

Ole Peters’ fundamental critique of (mainstream) economics involves arguments about ergodicity and the all-important difference between time averages and ensemble averages. These are difficult concepts that many students of economics have trouble understanding. So let me try to explain the meaning of these concepts by means of a couple of simple examples.

Let’s say you’re offered a gamble where on a roll of a fair die you will get €10 billion if you roll a six, and pay me €1 billion if you roll any other number.

Would you accept the gamble?

If you’re an economics student you probably would, because that’s what you’re taught to be the only thing consistent with being rational. You would arrest the arrow of time by imagining six different ‘parallel universes’ where the independent outcomes are the numbers from one to six, and then weigh them using their probabilities. Calculating the expected value of the gamble — the ensemble average — by averaging over all these weighted outcomes, you would actually be a moron if you didn’t take the gamble (the expected value of the gamble being 5/6*(−€1 billion) + 1/6*€10 billion ≈ €0.83 billion).

If you’re not an economist you would probably trust your common sense and decline the offer, knowing that a large risk of bankrupting one’s economy is not a very rosy perspective for the future. Since you can’t really arrest or reverse the arrow of time, you know that once you have lost the €1 billion, it’s all over. The large likelihood that you go bust outweighs the 17% chance of becoming enormously rich. By computing the time average — imagining one real universe where the six different but dependent outcomes occur consecutively — we would soon be aware of our assets disappearing, and conclude that it would be irrational to accept the gamble.
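A quick Monte Carlo sketch (my own illustration, not from the post) makes the contrast concrete: the ensemble average of a single round is comfortably positive, yet a player who has to live through the rounds sequentially, starting from finite wealth, almost always goes bust. The starting capital of €1 billion and the 20-round horizon are assumptions for illustration:

```python
import random

random.seed(42)

# Monte Carlo sketch of the die gamble (amounts in billions of euros).
# Assumptions for illustration: a player starts with 1 of capital,
# plays the gamble over and over, and is ruined once capital hits zero.

def ensemble_average():
    """Expected value of one round: 1/6 chance of +10, 5/6 chance of -1."""
    return (1 / 6) * 10 + (5 / 6) * (-1)

def final_capital(rounds, capital=1.0):
    """Play sequentially through time; return final capital (0 = ruined)."""
    for _ in range(rounds):
        if capital <= 0:
            return 0.0
        capital += 10.0 if random.randint(1, 6) == 6 else -1.0
    return max(capital, 0.0)

n_players = 100_000
ruined = sum(final_capital(20) == 0.0 for _ in range(n_players))

print(f"ensemble average per round: +{ensemble_average():.2f}")  # +0.83
print(f"players ruined within 20 rounds: {ruined / n_players:.0%}")
```

The per-round ensemble average is +€0.83 billion, yet roughly five players in six are ruined on the very first roll — the positive expectation is carried by the lucky few who survive.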

From a mathematical point of view, you can (somewhat non-rigorously) describe the difference between ensemble averages and time averages as a difference between arithmetic averages and geometric averages. Tossing a fair coin and gaining 20% on the stake (S) if winning (heads) and having to pay 20% on the stake (S) if losing (tails), the arithmetic average of the return on the stake, assuming the outcomes of the coin-tosses to be independent, would be [(0.5*1.2S + 0.5*0.8S) − S]/S = 0%. If the stake is instead compounded over consecutive tosses, the relevant time average is the geometric average return, √(1.2 × 0.8) − 1 ≈ −2%.
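These two numbers are easy to verify:

```python
import math

# Check of the two averages from the coin-toss example: gain 20% on
# heads, lose 20% on tails.

arithmetic = (0.5 * 1.2 + 0.5 * 0.8) - 1   # ensemble perspective
geometric = math.sqrt(1.2 * 0.8) - 1       # time (sequential) perspective

print(f"arithmetic average return: {arithmetic:+.1%}")   # +0.0%
print(f"geometric average return:  {geometric:+.1%}")    # -2.0%
```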

Why is the difference between ensemble and time averages of such importance in economics? Well, basically, because when assuming the processes to be ergodic, ensemble and time averages are identical.

Assume we have a market with an asset priced at €100. Then imagine the price first goes up by 50% and then later falls by 50%. The ensemble average for this asset would be €100 — because we here envision two parallel universes (markets) where the asset price falls in one universe (market) by 50% to €50, and in the other universe (market) goes up by 50% to €150, giving an average of €100 ((150+50)/2). The time average for this asset would be €75 — because we here envision one universe (market) where the asset price first rises by 50% to €150 and then falls by 50% to €75 (0.5*150).

From the ensemble perspective nothing really, on average, happens. From the time perspective lots of things really, on average, happen. Assuming ergodicity there would have been no difference at all.
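A short simulation (an illustrative sketch, not from the original post) shows what ‘lots of things happen’ means in the time perspective: the theoretical ensemble mean stays at 100 forever, while almost every individual price path ends below its starting value:

```python
import random

random.seed(0)

# Sketch: an asset that each period either rises 50% or falls 50% with
# equal probability, starting at 100. The theoretical ensemble mean
# stays at 100 forever, but the per-period time-average growth factor
# is sqrt(1.5 * 0.5) ~= 0.866 < 1, so almost every individual price
# path decays towards zero.

def simulate_path(steps, price=100.0):
    for _ in range(steps):
        price *= 1.5 if random.random() < 0.5 else 0.5
    return price

paths = [simulate_path(100) for _ in range(10_000)]
below_start = sum(p < 100.0 for p in paths) / len(paths)

print(f"paths ending below the starting price: {below_start:.1%}")
```

The sample mean across paths is dominated by a handful of extremely lucky trajectories, which is exactly why the ensemble perspective misleads the individual investor.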

On a more economic-theoretical level, the difference between ensemble and time averages also highlights the problems concerning the neoclassical theory of expected utility that I have raised before (e.g. here).

When applied to the mainstream theory of expected utility, one thinks in terms of ‘parallel universes’ and asks what the expected return of an investment is, calculated as an average over these ‘parallel universes’. In our coin-tossing example, it is as if one supposes that various ‘I’s are tossing a coin and that the losses of many of them will be offset by the huge profits one of these ‘I’s makes. But this ensemble average does not work for an individual, for whom a time average better reflects the experience made in the ‘non-parallel universe’ in which we live.

Time averages give a more realistic answer: one thinks in terms of the only universe we actually live in and asks what the expected return of an investment is, calculated as an average over time.

Since we cannot go back in time – entropy and the arrow of time make this impossible – and the bankruptcy option is always at hand (extreme events and ‘black swans’ are always possible) we have nothing to gain from thinking in terms of ensembles.

Actual events unfold in time and are often linked to multiplicative processes (e.g. investment returns with ‘compound interest’), which are basically non-ergodic.


Instead of arbitrarily assuming that people have a certain type of utility function — as in the neoclassical theory — time average considerations show that we can obtain a less arbitrary and more accurate picture of real people’s decisions and actions by basically assuming that time is irreversible. When our assets are gone, they are gone. The fact that in a parallel universe they could conceivably have been replenished is of little comfort to those who live in the one and only possible world that we call the real world.


Känguru-Comics

19 Dec, 2020 at 09:48 | Posted in Politics & Society | Comments Off on Känguru-Comics

Marc-Uwe wonders who could possibly not care whether or not Trump would be elected

What can RCTs tell us?

16 Dec, 2020 at 17:22 | Posted in Economics | 1 Comment

From Paul Connolly et al., Using Randomised Controlled Trials in Education (BERA/SAGE):

We seek to promote an approach to RCTs that is tentative in its claims and that avoids simplistic generalisations about causality and replaces these with more nuanced and grounded accounts that acknowledge uncertainty, plausibility and statistical probability …

Whilst promoting the use of RCTs in education we also need to be acutely aware of their limitations … Whilst the strength of an RCT rests on strong internal validity, the Achilles heel of the RCT is external validity … Within education and the social sciences a range of cultural conditions is likely to influence the external validity of trial results across different contexts. It is precisely​ for this reason that qualitative components of an evaluation, and particularly the development of plausible accounts of generative mechanisms are so important …

Highly recommended reading.

Nowadays it is widely believed among mainstream economists that the scientific value of randomisation — contrary to other methods — is totally uncontroversial and that randomised experiments are free from bias. When looked at carefully, however, there are in fact few real reasons to share this optimism about the alleged ‘experimental turn’ in economics. Strictly seen, randomisation does not guarantee anything.

‘Ideally controlled experiments’ tell us with certainty what causes what effects — but only given the right ‘closures.’ Making appropriate extrapolations from (ideal, accidental, natural or quasi) experiments to different settings, populations or target systems is not easy. Causes deduced in an experimental setting still have to show that they come with an export warrant to their target populations.

The almost religious belief with which its propagators — like last year’s ‘Nobel prize’ winners Duflo, Banerjee and Kremer — portray it cannot hide the fact that RCTs cannot be taken for granted to give generalisable results. That something works somewhere is no warrant that it will work for us here, or even that it works generally.

The present RCT idolatry is dangerous. Believing there is only one really good evidence-based method on the market — and that randomisation is the only way to achieve scientific validity — blinds people to searching for and using other methods that in many contexts are better. RCTs are simply not the best method for all questions and in all circumstances. Insisting on using only one tool often means using the wrong tool.

‘Nobel prize’ winners like Duflo et consortes think that economics should be based on evidence from randomised experiments and field studies. They want to give up on ‘big ideas’ like political economy and institutional reform and instead go for solving more manageable problems the way plumbers do. But that modern-day ‘marginalist’ approach surely can’t be the right way to move economics forward and make it a relevant and realist science. A plumber can fix minor leaks in your system, but if the whole system is rotten, something more than good old-fashioned plumbing is needed. The big social and economic problems we face today are not going to be solved by plumbers performing RCTs.

Fuzzy RDD & IV (student stuff)

16 Dec, 2020 at 10:04 | Posted in Statistics & Econometrics | Comments Off on Fuzzy RDD & IV (student stuff)


The elite school illusion

15 Dec, 2020 at 19:28 | Posted in Economics | 1 Comment


A great set of lectures — but yours truly still warns his students that regression-based averages are something we have reason to be cautious about.

Suppose we want to estimate the average causal effect of a dummy variable (T) on an observed outcome variable (O). In a usual regression context one would apply an ordinary least squares estimator (OLS) in trying to get an unbiased and consistent estimate:

O = α + βT + ε,

where α is a constant intercept, β a constant ‘structural’ causal effect and ε an error term.

The problem here is that although we may get an estimate of the ‘true’ average causal effect, this may ‘mask’ important heterogeneous effects of a causal nature. Although we get the right answer that the average causal effect is 0, those who are ‘treated’ (T=1) may have causal effects equal to −100 while those ‘not treated’ (T=0) may have causal effects equal to +100. Contemplating being treated or not, most people would probably be interested in knowing about this underlying heterogeneity and would not consider the OLS average effect particularly enlightening.
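A small simulation makes the point concrete. This is a hypothetical sketch using the numbers in the text (individual effects of −100 and +100, averaging to 0); the baseline outcome, noise level and sample size are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sketch: half the population has an individual causal
# effect of -100, the other half +100, so the true average causal
# effect is 0. Treatment T is randomly assigned, independently of the
# individual effects.
n = 100_000
tau = np.where(rng.random(n) < 0.5, -100.0, 100.0)  # heterogeneous effects
T = rng.integers(0, 2, n)                            # treatment dummy
O = rng.normal(50.0, 5.0, n) + T * tau               # observed outcome

# With a single dummy regressor, the OLS slope equals the difference
# in group means E[O | T=1] - E[O | T=0].
beta_hat = O[T == 1].mean() - O[T == 0].mean()
neg_group = O[(T == 1) & (tau < 0)].mean() - O[(T == 0) & (tau < 0)].mean()

print(f"OLS 'average causal effect': {beta_hat:.1f}")           # close to 0
print(f"effect within the tau=-100 subgroup: {neg_group:.1f}")  # close to -100
```

The regression dutifully reports an average effect near zero while half the population is being hurt by 100 points — exactly the masking the text describes.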

The heterogeneity problem does not just turn up as an external validity problem when trying to ‘export’ regression results to different times or different target populations. It is also often an internal problem to the millions of OLS estimates that economists produce every year.

COVID19 and causal inference (wonkish)

14 Dec, 2020 at 19:06 | Posted in Statistics & Econometrics | 1 Comment


MMT perspectives on rising interest rates

14 Dec, 2020 at 12:24 | Posted in Economics | 34 Comments

The Bank of England is today wholly-owned by the UK government, and no other body is allowed to create UK pounds. It can create digital pounds in the payments system that it runs, thus marking up and down the accounts of banks, the government and other public institutions. It also acts as the bank of the government, facilitating its payments. The Bank of England also determines the bank rate, which is the interest rate it pays to commercial banks that hold money (reserves) at the Bank of England …

The interest rate that the UK government pays is a policy variable determined by the Bank of England. Furthermore, it is not the Bank of England’s remit to bankrupt the government that owns it. The institutional setup ensures that the Bank of England supports the liquidity and solvency of the government to the extent that it becomes an issuer of currency itself. Buying government bonds, it can create whatever amount of pounds it deems necessary to fulfil its functions. Given that the Bank of England stands ready to purchase huge amounts of gilts on the secondary market (for “used” gilts), it is clear to investors that gilts are just as good as reserves. There is no risk of default …

The government of the UK cannot “run out of money”. When it spends more into the economy than it collects through taxes, a “public deficit” is produced. This means that the private sector saves a part of its monetary income which it has not spent on paying taxes (yet). When the government spends less than it collects in taxes, a “public surplus” results. This reduces public debt. The public-debt-to-GDP ratio can be heavily influenced by GDP growth, which explains the fall in the public-debt-to-GDP ratio in the second half of the 20th century …

So, do rising interest rates in the future create a problem for the UK government? No. The Bank of England is the currency issuer. There is nothing that stops it from paying what HM Treasury instructs it to pay. Gilts can be issued in this process as an option. The government’s ability to pay is not put into doubt since the Bank of England acts as a lender of last resort, offering to buy up gilts on the market so that the price of gilts can never crash. Higher interest rates cannot bankrupt the UK government.

Dirk Ehnts

One of the main reasons behind the lack of understanding that mainstream economists repeatedly demonstrate when it comes to these policy issues is related to the loanable funds theory and the view that governments — in analogy with individual households — have to have income before they can spend. This is, of course, totally wrong. Most governments nowadays are monopoly issuers of their own currencies, not users.

The loanable funds theory is in many regards nothing but an approach in which the ruling rate of interest in society is conceived, pure and simple, as nothing else than the price of loans or credit, determined by supply and demand in the same way as the price of bread and butter on a village market. In the traditional loanable funds theory, the amount of loans and credit available for financing investment is constrained by how much saving is available. Saving is the supply of loanable funds; investment is the demand for loanable funds and is assumed to be negatively related to the interest rate.

There are many problems with this theory.

Loanable funds theory essentially reduces modern monetary economies to something akin to barter systems — something they definitely are not. As emphasised especially by Minsky, to understand and explain how much investment/lending/crediting is going on in an economy, it is much more important to focus on the workings of financial markets than to stare at accounting identities like S = Y – C – G. The problems we meet in modern markets today have more to do with inadequate financial institutions than with the size of loanable-funds savings.

A further problem in the traditional loanable funds theory is that it assumes that saving and investment can be treated as independent entities. This is seriously wrong.  There are always (at least) two parts in an economic transaction. Savers and investors have different liquidity preferences and face different choices — and their interactions usually only take place intermediated by financial institutions. This, importantly, also means that there is no “direct and immediate” automatic interest mechanism at work in modern monetary economies. What this ultimately boils down to is that what happens at the microeconomic level — both in and out of equilibrium —  is not always compatible with the macroeconomic outcome. The fallacy of composition has many faces — loanable funds is one of them.

All real economic activities nowadays depend on a functioning financial machinery. But institutional arrangements, states of confidence, fundamental uncertainties, asymmetric expectations, the banking system, financial intermediation, loan-granting processes, default risks, liquidity constraints, aggregate debt, cash-flow fluctuations, etc. — things that play decisive roles in channelling money/savings/credit — are more or less left in the dark in modern formalisations of the loanable funds theory. Thanks to MMT, that kind of evasion of the real policy issues we face today is now met with severe questioning and justified critique.

Comment juguler la pandémie

14 Dec, 2020 at 11:26 | Posted in Economics | Comments Off on Comment juguler la pandémie

The United States, the United Kingdom and the European countries as a whole … continue to face repeated resurgences of exponentially growing viral circulation. They can manage to curb the pandemic by following the example of the countries that have succeeded in doing so.

But for citizens, waiting for an official response that is already long overdue would be a fool’s game. Do not wait for your governments to act. Confine your family. Persuade your schools to return to distance learning. Follow your religious services online. Have your meals delivered to your home. Avoid bars, clubs, gyms, cafés and restaurants. Convince your friends, neighbours, work colleagues and fellow parishioners to do the same. Urge everyone to understand that wearing a mask and keeping social distance are public services in the interest of the common good.

Phillip Alvelda, Thomas Ferguson & John C. Mallery / Le Monde

Polyglot celebrity

13 Dec, 2020 at 14:21 | Posted in Economics | Comments Off on Polyglot celebrity

Yours truly knows nine (and decently speaks four) languages, but this guy actually speaks seven languages. Impressive!

The greatest Christmas duet ever recorded

13 Dec, 2020 at 11:12 | Posted in Varia | Comments Off on The greatest Christmas duet ever recorded


Cui bono?

13 Dec, 2020 at 10:48 | Posted in Politics & Society | Comments Off on Cui bono?
Wenn Verschwörungstheoretiker immer "cui bono?" ("wer profitiert") fragen, weiß man sofort: Leute mit Kindern sind es nicht

How to argue with silly economists

13 Dec, 2020 at 10:31 | Posted in Economics | 1 Comment

In the increasingly contentious world of pop economics, you … may find yourself in an argument with an economist. And when this happens, you should be prepared, because many of the arguments that may seem at first blush to be very powerful and devastating are, in fact, pretty weak tea …

Principle 1: Credentials are not an argument.

Example: “You say Theory X is wrong…but don’t you know that Theory X is supported by Nobel Prize winners A, B, and C, not to mention famous and distinguished professors D, E, F, G, and H?”

Suggested Retort: Loud, barking laughter …

Principle 2: “All theories are wrong” is false.

Example: “Sure, Theory X fails to forecast any variable of interest or match important features of the data. But don’t you know that all models are wrong? I mean, look at Newton’s Laws…THOSE ended up turning out to be wrong, ha ha ha.”

Suggested Retort: Empty an entire can of Silly String onto anyone who says this. (I carry Silly String expressly for this purpose.)

Alternative Suggested Retort: “Yeah, well, when your theory is anywhere near as useful as Newton’s Laws, come back and see me, K?” …

Principle 3: “We have theories for that” is not good enough.

Example: “How can you say that macroeconomists have ignored Phenomenon X? We have theories in which X plays a role! Several, in fact!”

Suggested Retort: “Then how come no one was paying attention to those theories before Phenomenon X emerged and slapped us upside the head?”

Reason You’re Right:  … If the profession doesn’t have a good way to choose which theories to apply and when, then simply having a bunch of theories sitting around gathering dust is a little pointless …

There are, of course, a lot more principles than these … The set of silly things that people can and will say to try to beat an interlocutor down is, well, very large. But I think these seven principles will guard you against much of the worst of the silliness.

Noah Smith

Racism in the Swedish labour market

12 Dec, 2020 at 11:57 | Posted in Politics & Society | 3 Comments

We study the effects of experience from a gig job by means of a so-called correspondence experiment. During 2018 we sent roughly 10,000 fictitious applications to 3,300 job ads collected from the Swedish Public Employment Service’s site Platsbanken. By randomly varying the work experience in the applications and counting the share of positive responses from employers, we can isolate the effect of different types of work experience on the probability of being called to a job interview. Our fictitious job seekers (all young men) had either been unemployed during the past year, held a gig job, or held a traditional job. We also varied the names on the applications between Swedish- and Arabic-sounding, in order to examine whether gig experience is valued differently for these groups …

Our results suggest that experience from gig jobs is valued positively by employers, but that experience from traditional jobs is valued more highly, for applicants with Swedish-sounding names … In line with earlier studies we find extensive ethnic discrimination: the share of positive responses for applicants with Arabic-sounding names is about half as high as for applicants with Swedish-sounding names. Furthermore, we find that employers do not value work experience for this group at all. Our results thus lend no support to the hypothesis that gig jobs can help minorities gain a foothold in the labour market.

Adrian Adermon & Lena Hensvik / IFAU

This readable and interesting report from IFAU is an expression of a trend that has marked economics for a couple of decades now, in which the discipline has become increasingly interested in experiments and, not least, in how these should be designed so as to answer, if possible, questions about causation and policy effects. A common point of departure is the ‘counterfactual approach’ developed mainly by Neyman and Rubin, which is often presented and discussed using examples of randomised controlled trials, natural experiments, ‘difference in difference’, matching, ‘regression discontinuity’, and so on.

Mainstream economists, like today’s ‘randomistas’, generally welcome this development of the economics toolbox. Since yours truly, like for example Nancy Cartwright and Angus Deaton, is not unreservedly positive towards the randomisation approach, it may perhaps be of interest to the reader to see some of my points of criticism.

A palpable limitation of counterfactual randomisation designs is that they only tell us how ‘treatment groups’ differ on average from ‘control groups’. Let me take an example to illustrate how restrictive this fact can be:

Among school commentators and politicians it is sometimes claimed that independent schools are better than municipal schools and lead to better results. To find out whether this really is the case, a number of secondary-school pupils are randomly selected to take a test. The result could then be: Test score = 20 + 5*T, where T = 1 if the pupil attends an independent school and T = 0 if the pupil attends a municipal school. This would seem to confirm the claim: pupils at independent schools score on average 5 points higher than pupils at municipal schools. Now politicians are (hopefully) not so dim as to be unaware that this statistical result cannot be interpreted in causal terms, since pupils attending independent schools typically do not have the same background (socio-economically, educationally, culturally, etc.) as those attending municipal schools (the relation between school type and test score is confounded via selection bias). To get, if possible, a better measure of the causal effects of school type, politicians propose to use a lottery — a classic example of a randomisation design in ‘natural experiments’ — that makes it possible for 1,000 secondary-school pupils to be admitted to an independent school. The ‘chance of winning’ is 10%, so 100 pupils get this opportunity. Of these, 20 accept the offer to attend an independent school. Of the 900 lottery participants who do not ‘win’, 100 choose to attend an independent school anyway. The lottery is often treated by education researchers as an ‘instrumental variable’, and when the analysis is carried out the result turns out to be: Test score = 20 + 2*T. This is standardly interpreted as a causal measure of how much better test scores secondary-school pupils would get on average if, instead of attending municipal schools, they chose to attend independent schools. But is that right? No!
Unless all pupils have exactly the same test score (which must surely be considered a far-fetched ‘homogeneity assumption’), the stated average causal effect only applies to those pupils who choose to attend an independent school if they ‘win’ the lottery, but who otherwise would not choose to do so (in statistics jargon we call these ‘compliers’). It is hard to see why this group of pupils should be of special interest in this example, given that the average causal effect estimated with the help of the instrumental variable says nothing at all about the effect for the majority (the 100 out of 120 who choose an independent school without having ‘won’ the lottery) of those who choose to attend an independent school.
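The compliers-only logic of the lottery estimate can be illustrated with a small simulation. This is a hypothetical sketch: the type shares and effect sizes below are invented for illustration, not taken from the example above.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical sketch of the school-lottery logic. Assumed numbers:
# 10% of pupils are 'always-takers' who attend an independent school
# regardless of the lottery, 10% are 'compliers' who attend only if
# they win, and the rest never attend. The causal effect of attending
# is assumed to be +10 points for always-takers but only +2 for
# compliers.
n = 1_000_000
u = rng.random(n)
always = u < 0.10
complier = (u >= 0.10) & (u < 0.20)

Z = rng.random(n) < 0.10                 # winning the lottery (instrument)
D = always | (complier & Z)              # actual attendance
effect = np.where(always, 10.0, np.where(complier, 2.0, 0.0))
score = 20.0 + D * effect + rng.normal(0.0, 1.0, n)

# Wald / IV estimator: reduced form divided by first stage. It recovers
# only the compliers' effect (+2), saying nothing about the +10 enjoyed
# by the majority of actual attenders.
wald = (score[Z].mean() - score[~Z].mean()) / (D[Z].mean() - D[~Z].mean())
print(f"IV (Wald) estimate: {wald:.2f}")   # ~2
```

The IV estimate lands near the compliers’ +2 even though most pupils who actually attend an independent school (the always-takers) gain +10 — precisely the limitation discussed above.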

Conclusion: researchers must be far more careful about interpreting ‘average estimates’ as causal. Reality exhibits a high degree of heterogeneity, and ‘average parameters’ then usually tell us next to nothing at all!

To randomise ideally means that we achieve orthogonality (independence) in our models. But it does not mean that we achieve this ideal in real experiments when we randomise. The ‘balance’ that randomisation is ideally supposed to produce cannot be taken for granted when the ideal is put into practice. Here one has to argue for, and check, that the ‘assignment mechanism’ really is stochastic and that ‘balance’ really has been achieved!

Even if we accept the limitation of only being able to say something about ‘average treatment effects’, another theoretical problem remains. An ideal randomised experiment presupposes that one first selects a number of individuals from a randomly chosen population and then randomly assigns these individuals to a ‘treatment group’ and a ‘control group’. Given that both selection and assignment are carried out randomly, one can show that the expected value of the difference in outcomes between the two groups equals the average causal effect in the population. The catch is that the experiments actually carried out almost never draw their participants from a random sample of a population! In most cases, experiments are started because there is a problem of some kind in a given population (for example school pupils or job seekers in country X) that one wants to address. An ideal randomised experiment presupposes that both selection and assignment are randomised; this means that, strictly speaking, essentially none of the empirical results that advocates of randomisation so eagerly praise today holds in a strict mathematical-statistical sense. That only randomisation in the assignment phase gets talked about is hardly a coincidence. When it comes to ‘as if’ randomisation in ‘natural experiments’, there is moreover the tiresome, but unavoidable, fact that there may always be dependence between the variables under study and unobservable factors in the error term, something that can never be tested!

Another palpable and large problem is that researchers using these randomisation-based research strategies, in their pursuit of ‘exact’ and ‘precise’ results, consistently formulate questions that are not at all the ones we really want answered. Design becomes the main thing, and as long as one can put more or less ingenious experiments in place, one believes one can draw far-reaching conclusions about both causality and the generalisability of experimental outcomes to larger populations. Unfortunately, this usually means that this kind of research gets pushed away from interesting and important problems towards prioritising the choice of method. Design and research planning are important, but the credibility of research ultimately rests on being able to answer the relevant questions that we, as both citizens and researchers, want answered.

O helga natt

11 Dec, 2020 at 17:38 | Posted in Varia | Comments Off on O helga natt


University teachers — people who quietly work for free

11 Dec, 2020 at 13:15 | Posted in Education & School | Comments Off on Universitetslärare — folk som jobbar gratis i det tysta

Universities and colleges are permeated by a culture in which we work for free — on research, on teaching, and on outreach. We do it because we are dedicated and passionate about our jobs, and because we want to give our students a good education. But the most important reason is that it is completely unavoidable if you want to have the slightest chance of making a career and getting research time …

When is it OK to work for free as a freelancer? Is it ever OK? While we wait for research time in our positions and a sensible number of hours for all our teaching, my suggestion is that we start by making the situation visible …

Or we could write that we have stopped playing this game. Here I want to urge SULF to take the lead, gather information from all its members, and point out in every single individual salary negotiation what the situation actually looks like.

Majken Jul Sørensen / Universitetsläraren

Well roared! We are many professors and senior lecturers who have had more than enough. Yours truly has worked at various Swedish universities and colleges for more than 40 years and can only confirm the picture Jul Sørensen paints. It is nothing less than a scandal that this has been allowed to go on quietly for so long. Not least, it is astonishing how little a toothless union — both centrally and locally — has done to stand up for the right of university teachers and researchers to reasonable working conditions.
