Is high government debt — really — the problem?

17 Apr, 2021 at 10:49 | Posted in Economics


When the depression struck the industrialised world of the 1930s, economic theory turned out to be of little help in finding a way out of the situation. The English economist John Maynard Keynes saw the need to develop a new theory that broke with the established truth. In The General Theory of Employment, Interest and Money (1936) he presented his alternative.

What is needed now is enlightened action grounded in relevant and realistic economic theory of the kind that Keynes stands for.

The imminent danger is that we fail to get consumption and credit going. Confidence and effective demand must be restored.

One of the fundamental errors of thought in today's discussion of government debt and budget deficits is the failure to distinguish between one kind of debt and another. Even though debts and assets necessarily balance each other at the macro level, it is far from irrelevant who holds the assets and who holds the debts.

There has long been a reluctance to increase public debt, since economic crises are still largely perceived as caused by too much debt. But this is where the distribution of debt comes in. If, in a situation with a risk of debt deflation and liquidity traps, the state 'borrows' money to expand railways, schools and health care, the social costs of doing so are minimal, since the resources would otherwise have lain idle. Once the wheels start turning, both public and private debts can be paid down. And even if this is not fully achieved, the economic situation still improves, because borrowers with bad balance sheets are replaced by borrowers with better ones.

Instead of 'safeguarding public finances' we should safeguard society's future. The problem with government debt in a situation with close to negative interest rates is not that it is too large, but that it is too small.
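The claim about near-zero interest rates can be illustrated with the standard debt-dynamics identity b(t+1) = b(t)(1+r)/(1+g) - s, where b is the debt-to-GDP ratio, r the nominal interest rate, g nominal GDP growth and s the primary surplus ratio. A minimal sketch (all numbers hypothetical, chosen for illustration only):

```python
def debt_ratio_path(b0, r, g, s, years):
    """Iterate the debt-dynamics identity b(t+1) = b(t)*(1+r)/(1+g) - s.

    b0: initial debt-to-GDP ratio, r: nominal interest rate,
    g: nominal GDP growth, s: primary surplus (negative = deficit).
    """
    path = [b0]
    for _ in range(years):
        path.append(path[-1] * (1 + r) / (1 + g) - s)
    return path

# Hypothetical numbers: zero interest, 3% nominal growth, a permanent
# primary deficit of 1% of GDP, starting from a 35% debt-to-GDP ratio.
path = debt_ratio_path(b0=0.35, r=0.0, g=0.03, s=-0.01, years=30)
print(f"debt ratio after 30 years: {path[-1]:.2%}")
# With r < g the ratio stabilises even under a permanent deficit.
```

With r below g, the ratio converges to a finite level even though the state never runs a surplus; it is the gap between r and g, not the deficit as such, that decides whether debt 'explodes'.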

Bernard Madoff — the biggest Ponzi scheme in history

16 Apr, 2021 at 10:59 | Posted in Economics

The author of the biggest financial scandal of the 20th century died on Wednesday 14 April, aged 82. He had managed to extract fortunes from his clients while playing on the pusillanimity of the regulators.

From the 1960s onwards, Bernard Madoff had built up a brokerage firm that over the years became one of the largest and most dynamic on the New York Stock Exchange. But in parallel he had created an investment company dedicated to growing the fortunes of selected clients: stars of film, of letters, even of finance. With an irresistible promise: a steady average return of 15 per cent a year over a very long period. From 1990 to 2008, despite the vagaries of the markets, not a single negative year troubled the horizon. How? A well-kept mystery.

Old hands in finance know that when you are promised both the safety of a Treasury bond and the exceptional returns of the stock market, something fishy is going on. And the fishy thing was, in this case, a gigantic Ponzi scheme, in which the money of new investors financed the payouts to existing clients.
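The unsustainable arithmetic of such a scheme can be sketched in a toy simulation (all figures hypothetical): reported client balances compound at the promised 15 per cent, while the cash actually on hand only grows with new deposits, and the scheme collapses as soon as a wave of withdrawals exceeds that cash:

```python
def run_ponzi(promised_return, inflows, withdrawal_rates):
    """Toy Ponzi dynamics: reported balances compound at the promised
    rate, but real assets only grow with new deposits. Returns the year
    (index) in which the scheme can no longer meet withdrawals, or None."""
    reported = 0.0   # what clients believe they own
    assets = 0.0     # cash actually available
    for year, (inflow, w) in enumerate(zip(inflows, withdrawal_rates)):
        reported = reported * (1 + promised_return) + inflow
        assets += inflow
        withdrawals = w * reported   # clients cash out at reported value
        if withdrawals > assets:
            return year              # collapse: the cash runs out
        assets -= withdrawals
        reported -= withdrawals
    return None

# Hypothetical: steady $1bn yearly inflows, 2% of balances withdrawn each
# year, until a crisis year in which 30% of clients demand their money back.
inflows = [1.0] * 18
withdrawals = [0.02] * 17 + [0.30]
print(run_ponzi(0.15, inflows, withdrawals))  # → 17: collapse in the crisis year
```

As long as inflows keep coming and withdrawals stay modest, the books appear to balance; a single crisis-year spike in redemptions exposes the gap between reported and real wealth.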

Some, however, did worry about the rise of such a spectacular machine. Three times between 2001 and 2005, the financier Harry Markopolos alerted the SEC. No investigation was carried out, despite damning evidence. Big banks turned a blind eye. Everyone was happy, the water was clear, so why stir up the mud? It took the crisis of 2008, and clients asking to withdraw their funds, to discover that the money had vanished.

Philippe Escande / Le Monde

Stephanie Kelton lectures Sweden's finance minister

15 Apr, 2021 at 22:48 | Posted in Economics | 2 Comments

One of the most noted and debated economists in the US, Stephanie Kelton, now criticises finance minister Magdalena Andersson. According to Kelton, the government could have chosen in the spring budget to borrow even more to get the economy going.

When the finance minister presented the spring budget on Thursday, she once again spoke of the government having saved in the barns and therefore being able to spend now during the pandemic. She has also said that the borrowed money must be paid back and the barns refilled. But this is a completely mistaken way of reasoning, and Sweden can do much more, Kelton tells SVT.

– It is a self-imposed restriction that stands in the way of economic well-being. You are holding yourself hostage, says Stephanie Kelton.

SVT Nyheter

Yours truly has for a couple of years now been asking why we in this country have a government that does not dare to commit forcefully to an expansionary fiscal policy and borrow more. Not least against the background of historically low interest rates, this is a golden opportunity to invest in infrastructure, health care, schools and welfare.

Unfortunately, it seems that Magdalena Andersson has sizeable gaps in her knowledge. A little 'functional finance' and MMT might not hurt. Or why not read Kelton's book, recently translated into Swedish?

A country's government debt is seldom a cause of economic crisis, but rather a symptom of a crisis that will in all likelihood get worse if public deficits are not allowed to grow.

Sweden's foreign debt is historically low. Government debt today stands at a little over 25 per cent of GDP. Given the great challenges Sweden faces in the wake of the coronavirus, continued talk of 'responsibility' for the state budget is, to put it mildly, irresponsible. Instead of talking about 'saving in the barns' and 'safeguarding public finances,' a responsible government should safeguard society's future. It is counterproductive to pursue an economic policy aimed at reducing government debt. It is nothing short of deplorable when a government has not realised that the problem with government debt in a situation with close to negative interest rates is not that it is too large, but that it is too small.

That the state has had to spend more during the corona year to keep the economy going does not mean that it must save later on to 'balance' the economy. The money does not run out. And if we need more to keep the economic wheels turning, we can simply 'print' new money.

What many politicians and so-called media experts do not seem to (want to) understand is that there is a crucial difference between private and public debt. If one individual tries to save and pay down her debts, that may well be rational. But if everyone tries to do so, the result is that aggregate demand falls and unemployment risks rising.

A single individual must always pay her debts. But a state can always pay back its old debts with new debts. The state is not an individual. Government debt is not like private debt. A state's debt is essentially a debt to itself, to its citizens.

When the government a little over a year ago decided to inject new billions and let government debt grow in order to get the economy going during and after the corona epidemic, some probably thought we stood before a paradigm shift. Unfortunately it now turns out, as so many times before, that once the worst of the crisis is over it is 'business as usual', and Göran Persson's mantra 'he who is in debt is not free' is dusted off again.

But government debt is neither good nor bad. It should be a means to achieve two overarching macroeconomic goals — full employment and price stability. What is 'sacred' is not having a balanced budget or keeping government debt down. If the idea of 'sound' public finances leads to higher unemployment and unstable prices, it ought to be obvious that it should be abandoned. 'Sound' public finances are unsound.

Debunking the trickle-down myth

15 Apr, 2021 at 15:09 | Posted in Economics


What we see happening in the US and many other countries is deeply disturbing. Rising inequality is outrageous — not least because it largely reflects income and wealth being increasingly concentrated in the hands of a very small and privileged elite.

Societies that allow inequality of incomes and wealth to increase without bounds sooner or later implode. The cement that keeps us together erodes, and in the end we are left only with people dipped in the ice-cold water of egoism and greed.

Barfuß Am Klavier (personal)

13 Apr, 2021 at 11:50 | Posted in Economics | 1 Comment


Yours truly. Café Einstein, Berlin 1988. Photo by Bengt Nilsson.

Filtering nonsense economics

13 Apr, 2021 at 09:30 | Posted in Economics | 4 Comments

Following the greatest economic depression since the 1930s, Robert Solow in 2010 gave a prepared statement on “Building a Science of Economics for the Real World” for a hearing in the U.S. Congress. According to Solow, modern macroeconomics has not only failed at solving present economic and financial problems, but is “bound” to fail. Building microfounded macromodels on “assuming the economy populated by a representative agent” — consisting of “one single combination worker-owner-consumer-everything-else who plans ahead carefully and lives forever” — does not pass the smell test: does this really make sense? Solow surmised that a thoughtful person “faced with the thought that economic policy was being pursued on this basis, might reasonably wonder what planet he or she is on.”

Conclusion: an economic theory or model that doesn’t pass the real world smell-test is just silly nonsense that doesn’t deserve our attention and therefore belongs in the dustbin.

Microfounded macroeconomic DSGE models immediately come to mind.

Those who want to build macroeconomics on microfoundations usually maintain that the only robust policies and institutions are those based on rational expectations and representative actors. As I tried to show in my paper Rational expectations — a fallacious foundation for macroeconomics in a non-ergodic world — there is really no support for that conviction at all. On the contrary. If we want to have anything of interest to say on real economies, financial crisis and the decisions and choices real people make, it is high time to replace macroeconomic models building on representative actors and rational expectations-microfoundations with more realist and relevant macroeconomic thinking.

If substantive questions about the real world are being posed, it is the formalistic-mathematical representations utilized to analyze them that have to match reality, not the other way around.

Whereas some theoretical models can be immensely useful in developing intuitions, in essence a theoretical model is nothing more than an argument that a set of conclusions follows from a given set of assumptions. Being logically correct may earn a place for a theoretical model on the bookshelf, but when a theoretical model is taken off the shelf and applied to the real world, it is important to question whether the model’s assumptions are in accord with what we know about the world. Is the story behind the model one that captures what is important or is it a fiction that has little connection to what we see in practice? Have important factors been omitted? Are economic agents assumed to be doing things that we have serious doubts they are able to do? These questions and others like them allow us to filter out models that are ill suited to give us genuine insights. To be taken seriously models should pass through the real world filter.

Paul Pfleiderer

Pfleiderer’s perspective may be applied to many of the issues involved when modelling complex and dynamic economic phenomena. Let me take just one example — simplicity.

When it comes to modelling, I do see the point often emphatically made for simplicity among economists and econometricians — but only as long as it doesn’t impinge on our truth-seeking. “Simple” macroeconom(etr)ic models may of course be an informative heuristic tool for research. But if practitioners of modern macroeconom(etr)ics do not investigate and make an effort to provide a justification for the credibility of the simplicity assumptions on which they erect their building, it will not fulfil its tasks. Maintaining that economics is a science in the “true knowledge” business, I remain a sceptic of the pretences and aspirations of “simple” macroeconom(etr)ic models and theories. So far, I can’t really see that e.g. “simple” microfounded models have yielded very much in terms of realistic and relevant economic knowledge.

All empirical sciences use simplifying or unrealistic assumptions in their modelling activities. That is not the issue – as long as the assumptions made are not unrealistic in the wrong way or for the wrong reasons.

But models do not only face theory. They also have to look to the world. Being able to model a “credible world,” a world that somehow could be considered real or similar to the real world, is not the same as investigating the real world. Even though — as Pfleiderer acknowledges — all theories are false, since they simplify, they may still possibly serve our pursuit of truth. But then they cannot be unrealistic or false in just any way. The falsehood or unrealism has to be qualified.

Explanation, understanding and prediction of real-world phenomena, relations and mechanisms therefore cannot be grounded on simpliciter assuming simplicity. If we cannot show that the mechanisms or causes we isolate and handle in our models are stable — in the sense that they do not change from one situation to another when we export them from our models to our target systems — then they, considered “simple” or not, only hold under ceteris paribus conditions and a fortiori are of limited value for our understanding, explanation and prediction of our real-world target system.

The obvious ontological shortcoming of a basically epistemic — rather than ontological — approach is that “similarity” or “resemblance” tout court does not guarantee that the correspondence between model and target is interesting, relevant, revealing or somehow adequate in terms of mechanisms, causal powers, capacities or tendencies. No matter how many convoluted refinements of concepts are made in the model, if the simplifications do not result in models similar to reality in the appropriate respects (such as structure, isomorphism, etc.), the surrogate system becomes a substitute system that does not bridge to the world but rather misses its target.

Constructing simple macroeconomic models somehow seen as “successively approximating” macroeconomic reality is a rather unimpressive attempt at legitimising the use of fictitious idealisations for reasons that have more to do with model tractability than with a genuine interest in understanding and explaining features of real economies. Many of the model assumptions standardly made by neoclassical macroeconomics — simplicity being one of them — are restrictive rather than harmless and could, a fortiori, not in any sensible sense be considered approximations at all.

If economists are not able to show that the mechanisms or causes they isolate and handle in their “simple” models are stable, in the sense that they do not change when exported to their “target systems,” these mechanisms only hold under ceteris paribus conditions and are a fortiori of limited value to our understanding, explanation or prediction of real economic systems.

That Newton’s theory in most regards is simpler than Einstein’s is of no avail. Today Einstein has replaced Newton. The ultimate arbiter of the scientific value of models cannot be simplicity.

As scientists we have to get our priorities right. Ontological under-labouring has to precede epistemology.

Tony Lawson and the nature of heterodox economics

9 Apr, 2021 at 18:17 | Posted in Economics | 4 Comments

Lawson believes that there is a ‘coherent core’ of heterodox economists who employ methods that are consistent with the social ontology they implicitly advance. However, Lawson also acknowledges that many also use mathematical modelling, a method that presupposes a social ontology that is in severe tension with it. Therefore, I repeat, Lawson proposes that heterodox economists in fact exist in two groups, those who use methods consistent with the social ontology they are committed to, and those who do not. But all are heterodox economists.

Lawson’s hope is that by making the kind of social ontology presupposed by mathematical modelling clear, heterodox economists will increasingly review the legitimacy of the modelling approach. However, Lawson still considers those who make such a methodological mistake to be heterodox economists. For they still, he argues, are committed to the social ontology he defends and always reveal it in some way in their analyses or pronouncements …

In recent years, Lawson has been increasingly frustrated by the continued use of mathematical modelling by heterodox economists, as well as by movements towards its increased usage. An argument made by such heterodox economists is that the problem identified by Lawson lies not with mathematical modelling per se but with the sort of mathematical methods used. They argue that poor mathematical modelling has been the problem and that better, more complex, models will be able to capture the reality of human existence.

Lawson clearly regards that methodological argument to be mistaken. For, as stated above, he finds that even complex mathematical models presuppose a closed system. However, he maintains that the social reality that such researchers reveal themselves to implicitly accept is at least quite similar to that which he defends. Their concern with being realistic, for one, speaks volumes. Therefore, these researchers should, he believes, still be distinguished from the mainstream …

Lawson does not argue for excluding mathematical models. Rather, as with all other methods, they should only be applied in conditions in which their use is appropriate, though admittedly Lawson does, as an empirical matter, assess the occurrence of the latter to be relatively rare. His stance is not anti-mathematical method but anti-mismatch of method and context of application … What Lawson does argue for regarding practice is an explicit, systematic and sustained ontological awareness, which he believes can only improve the methodological choices of heterodox economists.

Yannick Slade-Caffarel

If scientific progress in economics lies in our ability to tell ‘better and better stories,’ one would of course expect economics journals to be filled with articles supporting the stories with empirical evidence confirming the predictions. However, the journals still show a striking and embarrassing paucity of empirical studies that (try to) substantiate these predictive claims. Equally amazing is how little one has to say about the relationship between the model and real-world target systems. It is as though explicit discussion, argumentation and justification on the subject aren’t considered to be required.

In mathematics, the deductive-axiomatic method has worked just fine. But science is not mathematics. Conflating those two domains of knowledge has been one of the most fundamental mistakes made in modern — and as Lawson argues, both in mainstream and heterodox — economics. Applying it to real-world open systems immediately proves it to be excessively narrow and hopelessly irrelevant. Both the confirmatory and explanatory ilk of hypothetico-deductive reasoning fails since there is no way you can relevantly analyse confirmation or explanation as a purely logical relation between hypothesis and evidence or between law-like rules and explananda. In science, we argue and try to substantiate our beliefs and hypotheses with reliable evidence. Propositional and predicate deductive logic, on the other hand, is not about reliability, but the validity of the conclusions given that the premises are true.

Reasoning in economics

9 Apr, 2021 at 10:22 | Posted in Economics | 3 Comments

Reasoning is the process whereby we get from old truths to new truths, from the known to the unknown, from the accepted to the debatable … If the reasoning starts on firm ground, and if it is itself sound, then it will lead to a conclusion which we must accept, though previously, perhaps, we had not thought we should. And those are the conditions that a good argument must meet: true premises and a good inference. If either of those conditions is not met, you can’t say whether you’ve got a true conclusion or not.

Michael Scriven

Mainstream economic theory today is in the story-telling business whereby economic theorists create make-believe analogue models of the target system – usually conceived as the real economic system. This modeling activity is considered useful and essential. Since fully-fledged experiments on a societal scale as a rule are prohibitively expensive, ethically indefensible or unmanageable, economic theorists have to substitute experimenting with something else. To understand and explain relations between different entities in the real economy the predominant strategy is to build models and make things happen in these ‘analogue-economy models’ rather than engineering things happening in real economies.

Mainstream economics has long since given up on the real world and contents itself with proving things about thought-up worlds. Empirical evidence plays only a minor role in economic theory, where models largely function as a substitute for empirical evidence. The one-sided, almost religious, insistence on axiomatic-deductivist modelling as the only scientific activity worthy of pursuing in economics is a scientific cul-de-sac. To have valid evidence is not enough. What economics needs is sound evidence — evidence based on arguments that are valid in form and with premises that are true.

Avoiding logical inconsistencies is crucial in all science. But it is not enough. Just as important is avoiding factual inconsistencies. And without showing — or at least presenting a warranted argument — that the assumptions and premises of their models are in fact true, mainstream economists aren’t really reasoning, but only playing games. Formalistic deductive ‘Glasperlenspiel’ can be very impressive and seductive. But in the realm of science it ought to be considered of little or no value to simply make claims about the model and lose sight of reality.

Dune Mosse

9 Apr, 2021 at 09:55 | Posted in Economics


Un viaggio in fondo ai tuoi occhi “dai d’illusi smammai” /

Un viaggio in fondo ai tuoi occhi solcherò / Dune Mosse …

Dentro una lacrima / E verso il sole / Voglio gridare amore  /

Uuh, non ne posso più  / Vieni t’imploderò /

A rallentatore, e … / E nell’immenso morirò!

This song is a work of art. Music of the highest order. Marvellous!

Why ergodicity matters

2 Apr, 2021 at 11:04 | Posted in Economics | 8 Comments


Paul Samuelson once famously claimed that the ‘ergodic hypothesis’ is essential for advancing economics from the realm of history to the realm of science. But is it really tenable to assume — as Samuelson and most other mainstream economists do — that ergodicity is essential to economics?

In economics ergodicity is often mistaken for stationarity. But although all ergodic processes are stationary, they are not equivalent. So, if nothing else, ergodicity is an important concept for understanding one of the deep fundamental flaws of mainstream economics.

Let’s say we have a stationary process. That does not — as Adamou shows in the video — guarantee that it is also ergodic. The long-run time average of a single output function of the stationary process may not converge to the expectation of the corresponding variables — and so the long-run time average may not equal the probabilistic (expectational) average.

Say we have two coins, where coin A has a probability of 1/2 of coming up heads and coin B has a probability of 1/4 of coming up heads. We pick either of these coins with a probability of 1/2 and then toss the chosen coin over and over again. Now let H1, H2, … be one or zero as the coin comes up heads or tails. This process is obviously stationary, but the time average — [H1 + … + Hn]/n — converges to 1/2 if coin A was chosen and to 1/4 if coin B was chosen. Each of these time averages occurs with probability 1/2, so their expectational average is 1/2 x 1/2 + 1/2 x 1/4 = 3/8, which obviously equals neither 1/2 nor 1/4. The time average depends on which coin you happen to choose, while the probabilistic (expectational) average is calculated for the whole “system” consisting of both coin A and coin B.
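A minimal simulation of the two-coin example (Python standard library only; the limiting values follow from the argument above) makes the difference between the two averages concrete:

```python
import random

def coin_process(n_tosses, seed=None):
    """One realisation of the process: pick coin A (P(heads)=1/2) or
    coin B (P(heads)=1/4) once, then toss that same coin n times.
    Returns the time average of heads."""
    rng = random.Random(seed)
    p_heads = rng.choice([0.5, 0.25])          # pick A or B, once
    tosses = [rng.random() < p_heads for _ in range(n_tosses)]
    return sum(tosses) / n_tosses

# Ensemble (expectational) average over many independent realisations:
# 1/2 * 1/2 + 1/2 * 1/4 = 3/8 = 0.375
ensemble = sum(coin_process(1, seed=i) for i in range(100_000)) / 100_000

# Time average of a single long realisation: converges to 0.5 or 0.25,
# depending on which coin was drawn -- never to 0.375.
time_avg = coin_process(1_000_000, seed=42)

print(round(ensemble, 3), round(time_avg, 3))
```

Run the single-realisation line with different seeds: the long-run time average always settles near 0.5 or near 0.25, never near the ensemble value 0.375.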

Instead of arbitrarily assuming that people have a certain type of utility function — as in mainstream theory — time-average considerations show that we can obtain a less arbitrary and more accurate picture of real people’s decisions and actions by basically assuming that time is irreversible. When our assets are gone, they are gone. The fact that in a parallel universe they could conceivably have been replenished is of little comfort to those who live in the one and only possible world that we call the real world.

Time average considerations show that because we cannot go back in time, we should not take excessive risks. High leverage increases the risk of bankruptcy. This should also be a warning for the financial world, where the constant quest for greater and greater leverage — and risks — creates extensive and recurrent systemic crises.
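A standard illustration of this point from the ergodicity-economics literature (a hypothetical multiplicative gamble, not taken from the text above): multiply wealth by 1.5 on heads and by 0.6 on tails. The expected value per round is 0.5 x 1.5 + 0.5 x 0.6 = 1.05, yet the time-average growth factor is sqrt(1.5 x 0.6), roughly 0.95, so almost every individual trajectory decays:

```python
import random

def wealth_after(rounds, rng):
    """Multiplicative gamble: wealth x1.5 on heads, x0.6 on tails."""
    w = 1.0
    for _ in range(rounds):
        w *= 1.5 if rng.random() < 0.5 else 0.6
    return w

rng = random.Random(0)
trajectories = [wealth_after(1000, rng) for _ in range(10_000)]
losing = sum(w < 1.0 for w in trajectories) / len(trajectories)

# Expected wealth grows without bound (factor 1.05 per round) ...
print(f"analytic expectation after 1000 rounds: {1.05 ** 1000:.3g}")
# ... yet essentially every simulated path ends below its starting point,
# because the expectation is carried by astronomically unlikely paths.
print(f"share of paths ending below start: {losing:.1%}")
```

The ensemble expectation is real but irrelevant to any single player who cannot go back in time — which is exactly why high leverage in a multiplicative world courts ruin.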

The methods economists bring to their research

31 Mar, 2021 at 18:40 | Posted in Economics | 2 Comments

There are other sleights of hand that cause economists problems. In their quest for statistical “identification” of a causal effect, economists often have to resort to techniques that answer either a narrower or a somewhat different version of the question that motivated the research.

Results from randomized social experiments carried out in particular regions of, say, India or Kenya may not apply to other regions or countries. A research design exploiting variation across space may not yield the correct answer to a question that is essentially about changes over time: what happens when a region is hit with a bad harvest. The particular exogenous shock used in the research may not be representative; for example, income shortfalls not caused by water scarcity can have different effects on conflict than rainfall-related shocks.

So, economists’ research can rarely substitute for more complete works of synthesis, which consider a multitude of causes, weigh likely effects, and address spatial and temporal variation of causal mechanisms. Work of this kind is more likely to be undertaken by historians and non-quantitatively oriented social scientists.

Dani Rodrik / Project Syndicate

Nowadays it is widely believed among mainstream economists that the scientific value of randomisation — contrary to other methods — is totally uncontroversial and that randomised experiments are free from bias. When looked at carefully, however, there are in fact few real reasons to share this optimism about the alleged ‘experimental turn’ in economics. Strictly seen, randomisation does not guarantee anything.

As Rodrik notes, ‘ideally controlled experiments’ tell us with certainty what causes what effects — but only given the right ‘closures.’ Making appropriate extrapolations from (ideal, accidental, natural or quasi) experiments to different settings, populations or target systems, is not easy. Causes deduced in an experimental setting still have to show that they come with an export-warrant to their target populations.

The almost religious belief with which its propagators — like 2019’s ‘Nobel prize’ winners Duflo, Banerjee and Kremer — portray it, cannot hide the fact that randomized controlled trials, RCTs, cannot be taken for granted to give generalisable results. That something works somewhere is no warranty for us to believe it to work for us here or even that it works generally.

The present RCT idolatry is dangerous. Believing there is only one really good evidence-based method on the market — and that randomisation is the only way to achieve scientific validity — blinds people to searching for and using other methods that in many contexts are better. RCTs are simply not the best method for all questions and in all circumstances. Insisting on using only one tool often means using the wrong tool.

‘Nobel prize’ winners like Duflo et consortes think that economics should be based on evidence from randomised experiments and field studies. They want to give up on ‘big ideas’ like political economy and institutional reform and instead go for solving more manageable problems the way plumbers do. But that modern-day ‘marginalist’ approach surely cannot be the right way to move economics forward and make it a relevant and realist science. A plumber can fix minor leaks in your system, but if the whole system is rotten, something more than good old-fashioned plumbing is needed. The big social and economic problems we face today are not going to be solved by plumbers performing RCTs.

The point of making a randomized experiment is often said to be that it ‘ensures’ that any correlation between a supposed cause and effect indicates a causal relation. This is believed to hold since randomization (allegedly) ensures that a supposed causal variable does not correlate with other variables that may influence the effect.

The problem with that simplistic view on randomization is that the claims made are both exaggerated and false:

• Even if you manage to make the assignment to treatment and control groups ideally random, the sample selection certainly is — except in extremely rare cases — not random. Even with a proper randomized assignment, if we apply the results to a biased sample there is always the risk that the experimental findings will not apply. What works ‘there’ does not work ‘here.’ Randomization hence does not ‘guarantee’ or ‘ensure’ the right causal claim. Although randomization may help us rule out certain possible causal claims, randomization per se does not guarantee anything!

• Even if both sampling and assignment are made in an ideal random way, performing standard randomized experiments only gives you averages. The problem here is that although we may get an estimate of the ‘true’ average causal effect, this may ‘mask’ important heterogeneous effects of a causal nature. Even if we get the right answer — an average causal effect of 0 — those who are ‘treated’ may have causal effects equal to -100 and those ‘not treated’ causal effects equal to 100. Contemplating whether to be treated or not, most people would probably be interested in knowing about this underlying heterogeneity and would not consider the average effect particularly enlightening.

• There is almost always a trade-off between bias and precision. In real-world settings, a little bias is often outweighed by greater precision. And — most importantly — when we have a population with sizeable heterogeneity, the average treatment effect of the sample may differ substantially from the average treatment effect in the population. If so, the value of any extrapolating inferences made from trial samples to other populations is highly questionable.

• Since most real-world experiments and trials build on a single randomization, what would happen if you kept on randomizing forever does not help you ‘ensure’ or ‘guarantee’ that you avoid false causal conclusions in the one particular randomized experiment you actually do perform. It is indeed difficult to see why thinking about what you know you will never do would make you happy about what you actually do.
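The point about averages masking heterogeneity (the second bullet above) can be made concrete with a toy randomized experiment, using the hypothetical +/-100 individual effects from the text:

```python
import random

# Hypothetical population: half have an individual treatment effect of
# +100, half of -100, so the true average treatment effect (ATE) is 0.
effects = [+100] * 5_000 + [-100] * 5_000

rng = random.Random(1)
treated = set(rng.sample(range(len(effects)), k=len(effects) // 2))

# Baseline outcome is 0 for everyone; treatment adds the individual effect.
outcomes = [effects[i] if i in treated else 0 for i in range(len(effects))]

treated_mean = sum(outcomes[i] for i in treated) / len(treated)
control = [i for i in range(len(effects)) if i not in treated]
control_mean = sum(outcomes[i] for i in control) / len(control)

# The randomized comparison correctly recovers an ATE near zero, while
# every single individual effect is +100 or -100.
ate_estimate = treated_mean - control_mean
print(round(ate_estimate, 1))
```

The estimated average effect comes out close to zero, exactly as it should, even though no individual in this population has an effect anywhere near zero.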

Randomization is not a panacea. It is not the best method for all questions and circumstances. Proponents of randomization make claims about its ability to deliver causal knowledge that are simply wrong. There are good reasons to be skeptical of the now popular — and ill-informed — view that randomization is the only valid and best method on the market. It is not. So, as Rodrik has it:

Economists’ research can rarely substitute for more complete works of synthesis, which consider a multitude of causes, weigh likely effects, and address spatial and temporal variation of causal mechanisms. Work of this kind is more likely to be undertaken by historians and non-quantitatively oriented social scientists.

Why do economists never mention power?

29 Mar, 2021 at 22:49 | Posted in Economics | 8 Comments

The intransigence of Econ 101 points to a dark side of economics — namely that the absence of power-speak is by design. Could it be that economics describes the world in a way that purposely keeps the workings of power opaque? History suggests that this idea is not so far-fetched …

The key to wielding power successfully is to make control appear legitimate. That requires ideology. Before capitalism, rulers legitimised their power by tying it to divine right. In modern secular societies, however, that’s no longer an option. So rather than brag of their God-like power, modern corporate rulers use a different tactic: they turn to economics — an ideology that simply ignores the realities of power. Safe in this ideological obscurity, corporate rulers wield power that rivals, or even surpasses, the kings of old.

Are economists cognisant of this game? Some may be. Most economists, however, are likely just clever people who are willing to delve into the intricacies of neoclassical theory without ever questioning its core tenets. Meanwhile, with every student who gets hoodwinked by Econ 101, the Rockefellers of the world happily reap the benefits.

Blair Fix

The vanity of deductivity

29 Mar, 2021 at 22:27 | Posted in Economics | Leave a comment

Modelling by the construction of analogue economies is a widespread technique in economic theory nowadays … As Lucas urges, the important point about analogue economies is that everything is known about them … and within them the propositions we are interested in ‘can be formulated rigorously and shown to be valid’ … For these constructed economies, our views about what will happen are ‘statements of verifiable fact.’

The method of verification is deduction … We are, however, faced with a trade-off: we can have totally verifiable results, but only about economies that are not real …

How then do these analogue economies relate to the real economies that we are supposed to be theorizing about? … My overall suspicion is that the way deductivity is achieved in economic models may undermine the possibility … to teach genuine truths about empirical reality.

Learning from econophysics’ mistakes

27 Mar, 2021 at 11:19 | Posted in Economics | 13 Comments

By appealing to statistical mechanics, econophysicists hypothesize that we can explain the workings of the economy from simple first principles. I think that is a mistake.

To see the mistake, I’ll return to Richard Feynman’s famous lecture on atomic theory. Towards the end of the talk, he observes that atomic theory is important because it is the basis for all other branches of science, including biology:

“The most important hypothesis in all of biology, for example, is that everything that animals do, atoms do. In other words, there is nothing that living things do that cannot be understood from the point of view that they are made of atoms acting according to the laws of physics.”

Richard Feynman, Lectures on Physics

I like this quote because it is profoundly correct. There is no fundamental difference (we believe) between animate and inanimate matter. It is all just atoms. That is an astonishing piece of knowledge.

It is also, in an important sense, astonishingly useless. Imagine that a behavioral biologist complains to you that baboon behavior is difficult to predict. You console her by saying, “Don’t worry, everything that animals do, atoms do.” You are perfectly correct … and completely unhelpful.

Your acerbic quip illustrates an important asymmetry in science. Reduction does not imply resynthesis. As a particle physicist, Richard Feynman was concerned with reduction — taking animals and reducing them to atoms. But to be useful to our behavioral biologist, this reduction must be reversed. We must take atoms and resynthesize animals.

The problem is that this resynthesis is over our heads … vastly so. We can take atoms and resynthesize large molecules. But the rest (DNA, cells, organs, animals) is out of reach. When large clumps of matter interact for billions of years, weird and unpredictable things happen. That is what physicist Philip Anderson meant when he said ‘more is different’ …

The ultimate goal of science is to understand all of this structure from the bottom up. It is a monumental task. The easy part (which is still difficult) is to reduce the complex to the simple. The harder part is to take the simple parts and resynthesize the system. Often when we resynthesize, we fail spectacularly.

Economics is a good example of this failure. To be sure, the human economy is a difficult thing to understand. So there is no shame when our models fail. Still, there is a philosophical problem that hampers economics. Economists want to reduce the economy to ‘micro-foundations’ — simple principles that describe how individuals behave. Then economists want to use these principles to resynthesize the economy. It is a fool’s errand. The system is far too complex, the interconnections too poorly understood.

I have picked on econophysics because its models have the advantage of being exceptionally clear. Whereas mainstream economists obscure their assumptions in obtuse language, econophysicists are admirably explicit: “we assume humans behave like gas particles”. I admire this boldness, because it makes the pitfalls easier to see.

By throwing away ordered connections between individuals, econophysicists make the mathematics tractable. The problem is that it is these ordered connections — the complex relations between people — that define the economy. Throw them away and what you gain in mathematical traction, you lose in relevance. That’s because you are no longer describing the economy. You are describing an inert gas.

Blair Fix

Interesting blog post. Building an analysis (mostly) on tractability assumptions is a dangerous thing to do. And that goes for both mainstream economics and econophysics. Why would anyone listen to policy proposals that are based on foundations that deliberately misrepresent actual behaviour?
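The gas analogy can be made literal. The sketch below is a minimal version of the kind of kinetic exchange model used in econophysics (agents as ‘gas particles’ that collide pairwise and randomly re-split their combined money). The parameter values and variable names are my own, chosen only for illustration:

```python
import random

random.seed(1)

N, STEPS = 1000, 200_000
money = [100.0] * N  # everyone starts out perfectly equal

# 'Collisions': pick two agents at random and let them randomly
# re-split their combined money, like energy exchange in a gas.
for _ in range(STEPS):
    i, j = random.randrange(N), random.randrange(N)
    if i == j:
        continue
    pot = money[i] + money[j]
    share = random.random()
    money[i], money[j] = share * pot, (1 - share) * pot

# Total money is conserved, but the distribution drifts towards an
# exponential (Boltzmann-Gibbs) shape: a skewed distribution in which
# the median lies well below the mean, regardless of any real-world
# institutional structure, because none was put into the model.
mean = sum(money) / N
median = sorted(money)[N // 2]
```

Nothing in the exchange rule refers to wages, credit, firms, or institutions; the stylized inequality that emerges comes purely from the gas-particle assumption. That is exactly the tractability-versus-relevance trade-off at issue: the model is transparent and solvable, but it describes colliding particles, not an economy.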

Defenders of microfoundations and its rational-expectations-equipped representative agent’s intertemporal optimisation frequently argue as if sticking with simple representative agent macroeconomic models doesn’t impart a bias to the analysis. They also often maintain that there are no methodologically coherent alternatives to microfoundations modelling. That allegation is, of course, difficult to evaluate, substantially hinging on how coherence is defined. But one thing I do know is that the kind of microfoundationalist macroeconomics that New Classical economists and ‘New Keynesian’ economists are pursuing is not methodologically coherent according to the standard coherence definition (see e.g. here). And that ought to be rather embarrassing for those ilks of macroeconomists to whom axiomatics and deductive reasoning are the hallmark of science tout court.

How economic orthodoxy protects its dominant position

25 Mar, 2021 at 15:29 | Posted in Economics | 1 Comment

John Bryan Davis (2016) has offered a persuasive account of the way an economic orthodoxy protects its dominant position. Traditional ‘reflexive domains’ for judging research quality — the theory-evidence nexus, the history and philosophy of economics — are pushed aside. Instead, research quality is assessed through journal ranking systems. This is highly biased towards the status quo and reinforces stratification: top journals feature articles by top academics at top institutions, top academics and institutions are those who feature heavily in top journals.

Because departmental funding is so dependent on journal scores, career advancement is often made on the basis of these rankings — they are not to be taken lightly. It is not that competition is lacking, but it is confined to those who slavishly accept the paradigm, as defined by the gatekeepers — the journal editors. In this self-referential system it is faithful adherence to a preconceived notion of ‘good economics’ that pushes one ahead.

Robert Skidelsky


The only economic analysis that mainstream economists accept is the one that takes place within the analytic-formalistic modeling strategy that makes up the core of mainstream economics. All models and theories that do not live up to the precepts of the mainstream methodological canon are pruned. You’re free to take your models — not using (mathematical) models at all is considered totally unthinkable — and apply them to whatever you want — as long as you do it within the mainstream approach and its modeling strategy.

If you do not follow that particular mathematical-deductive analytical formalism you’re not even considered doing economics. ‘If it isn’t modeled, it isn’t economics.’

That isn’t pluralism.

That’s a methodological reductionist straitjacket.


Blog at WordPress.com.