Attending economics seminars — a total waste of time!

30 Apr, 2023 at 13:35 | Posted in Economics | 3 Comments

When visiting economics conferences and seminars, one finds that the sessions usually start with the presentation of mathematical-statistical models built on assumptions somewhat analogous to "let us assume that people are green and descended from Mars", after which long technical discussions follow on how good these models are at making us better understand contemporary societies and economies.

Yours truly finds it extremely hard to see why we should pay any attention at all to, or waste time on, modelling societies and economies on assumptions like ‘rational expectations’, ‘representative agents’, calculable uncertainties, ergodicity, ‘common knowledge’, ‘infinitely-lived dynasties’, ‘modern-macro Euler equations’, ‘overlapping generations of fixed size’, ‘wages determined by Nash bargaining’, ‘endogenous job openings’, etc., etc.

Without those patently absurd assumptions, the models presented and discussed deliver nothing. The assumptions matter crucially, and not even extreme idealizations in the form of invoking non-existent entities such as ‘hypothetical super populations’ or ‘actors maximizing expected utility’ deliver. Change only one of the assumptions and something completely different may happen.

Thirty years ago one of my intellectual heroes, Phil Mirowski, was invited to give a speech on themes from his book More Heat than Light at my old economics department in Lund, Sweden. All the mainstream professors were there. Their theories and model assumptions were totally mangled and no one, absolutely no one, had anything to say even remotely reminiscent of a defence. Nonplussed, one of them, in total desperation, finally asked: "But what shall we do then?"

Yes indeed — what shall they do? Because, sure, they have to do something. Before they come up with something better than the present fictional mainstream economics storytelling there is little reason for any of us to attend their seminars and conferences.

The econometric dream-world

30 Apr, 2023 at 09:50 | Posted in Statistics & Econometrics | Comments Off on The econometric dream-world

Trygve Haavelmo, with the completion (in 1958) of the twenty-fifth volume of Econometrica, assessed the role of econometrics in the advancement of economics, and although mainly positive about the "repair work" and "clearing-up work" done, he also found some grounds for despair:

We have found certain general principles which would seem to make good sense. Essentially, these principles are based on the reasonable idea that, if an economic model is in fact “correct” or “true,” we can say something a priori about the way in which the data emerging from it must behave. We can say something, a priori, about whether it is theoretically possible to estimate the parameters involved. And we can decide, a priori, what the proper estimation procedure should be … But the concrete results of these efforts have often been a seemingly lower degree of accuracy of the would-be economic laws (i.e., larger residuals), or coefficients that seem a priori less reasonable than those obtained by using cruder or clearly inconsistent methods.

There is the possibility that the more stringent methods we have been striving to develop have actually opened our eyes to recognize a plain fact: viz., that the "laws" of economics are not very accurate in the sense of a close fit, and that we have been living in a dream-world of large but somewhat superficial or spurious correlations.

Another of the founding fathers of modern probabilistic econometrics, Ragnar Frisch, shared Haavelmo’s doubts on the applicability of econometrics:

I have personally always been skeptical of the possibility of making macroeconomic predictions about the development that will follow on the basis of given initial conditions … I have believed that the analytical work will give higher yields – now and in the near future – if they become applied in macroeconomic decision models where the line of thought is the following: "If this or that policy is made, and these conditions are met in the period under consideration, probably a tendency to go in this or that direction is created".

Ragnar Frisch

Econometrics may be an informative tool for research. But if its practitioners do not investigate and make an effort to justify the credibility of the assumptions on which they erect their building, it will not fulfil its tasks. There is a gap between its aspirations and its accomplishments. Without more supportive evidence to substantiate its claims, critics will continue to consider its ultimate argument a mixture of rather unhelpful metaphors and metaphysics. Maintaining that economics should be a science in the ‘true knowledge’ business, yours truly remains a sceptic of the pretences and aspirations of econometrics. So far, I cannot really see that it has yielded very much in terms of relevant, interesting economic knowledge.

The marginal return on its ever-higher technical sophistication in no way makes up for the lack of serious under-labouring of its deeper philosophical and methodological foundations that Keynes already complained about. The rather one-sided emphasis on usefulness and its concomitant instrumentalist justification cannot hide the fact that the legions of probabilistic econometricians who consider it "fruitful to believe" in the possibility of treating unique economic data as the observable results of random drawings from an imaginary sampling of an imaginary population are skating on thin ice. After years of analyzing its ontological and epistemological foundations, I cannot but conclude that econometrics on the whole has delivered neither ‘truth’ nor robust forecasts.

Alone together

29 Apr, 2023 at 08:49 | Posted in Varia | Comments Off on Alone together


Twist in my sobriety

28 Apr, 2023 at 13:38 | Posted in Varia | Comments Off on Twist in my sobriety


What is this thing called Bayesianism?

27 Apr, 2023 at 09:05 | Posted in Theory of Science & Methodology | 1 Comment

A major, and notorious, problem with this approach, at least in the domain of science, concerns how to ascribe objective prior probabilities to hypotheses. What seems to be necessary is that we list all the possible hypotheses in some domain and distribute probabilities among them, perhaps ascribing the same probability to each employing the principle of indifference. But where is such a list to come from? It might well be thought that the number of possible hypotheses in any domain is infinite, which would yield zero for the probability of each and the Bayesian game cannot get started. All theories have zero probability and Popper wins the day. How is some finite list of hypotheses enabling some objective distribution of nonzero prior probabilities to be arrived at? My own view is that this problem is insuperable, and I also get the impression from the current literature that most Bayesians are themselves coming around to this point of view.

Chalmers is absolutely right here in his critique of ‘objective’ Bayesianism, but I think it could actually be extended to also encompass its ‘subjective’ variety.

A classic example — borrowed from Bertrand Russell — may perhaps be allowed to illustrate the main point of the critique:

Assume you’re a Bayesian turkey and hold a nonzero probability belief in hypothesis H that “people are nice vegetarians that do not eat turkeys and that every day I see the sun rise confirms my belief.” For every day you survive, you update your belief according to Bayes’ theorem

P(H|e) = [P(e|H)P(H)]/P(e),

where evidence e stands for “not being eaten” and P(e|H) = 1. Given there do exist other hypotheses than H, P(e) is less than 1 and a fortiori P(H|e) is greater than P(H). Every day you survive increases your probability belief that you will not be eaten. This is totally rational according to the Bayesian definition of rationality. Unfortunately, for every day that goes by, the traditional Christmas dinner also gets closer and closer …
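To make the updating concrete, here is a minimal sketch in Python; the prior and the likelihood P(e|not-H) are invented numbers, purely for illustration:

```python
# Minimal sketch of the Bayesian turkey's daily belief update.
# The prior and P(e | not-H) are invented numbers for illustration.
p_h = 0.5               # prior: "people are nice vegetarians"
p_e_given_h = 1.0       # a vegetarian never eats you, so P(e|H) = 1
p_e_given_not_h = 0.99  # even a non-vegetarian rarely eats you on a given day

for day in range(1, 301):
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)  # total probability
    p_h = p_e_given_h * p_h / p_e                          # Bayes' theorem
    if day % 100 == 0:
        print(f"day {day}: P(H|e) = {p_h:.3f}")

# P(H|e) creeps steadily towards 1 -- right up until Christmas dinner.
```

Each surviving day pushes the belief upwards, exactly as the theorem demands. That is precisely why the example bites: the updating is impeccable, the conclusion disastrous.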

The nodal point here is — of course — that although Bayes’ theorem is mathematically unquestionable, that doesn’t qualify it as indisputably applicable to scientific questions.

Bayesian probability calculus is far from the automatic inference engine that its protagonists maintain it is. Where do the priors come from? When we are uncertain, wouldn't it be better in science to do some scientific experimentation and observation, rather than to start making calculations based on people's often vague and subjective personal beliefs? Is it, from an epistemological point of view, really credible to think that the Bayesian probability calculus makes it possible to somehow fully assess people's subjective beliefs? And are all scientific controversies and disagreements, as most Bayesians maintain, really explicable in terms of differences in prior probabilities? I'll be dipped!

French hamburger dilemma

25 Apr, 2023 at 19:06 | Posted in Varia | Comments Off on French hamburger dilemma


Causal assumptions in need of careful justification

24 Apr, 2023 at 15:22 | Posted in Statistics & Econometrics | Comments Off on Causal assumptions in need of careful justification

As is brilliantly attested by the work of Pearl, an extensive and fruitful theory of causality can be erected upon the foundation of a Pearlian DAG. So, when we can assume that a certain DAG is indeed a Pearlian DAG representation of a system, we can apply that theory to further our causal understanding of the system. But this leaves entirely untouched the vital questions: when is a Pearlian DAG representation of a system appropriate at all?; and, when it is, when can a specific DAG D be regarded as filling this rôle? As we have seen, Pearlian representability requires many strong relationships to hold between the behaviours of the system under various kinds of interventions.

Causal discovery algorithms … similarly rely on strong assumptions … The need for such assumptions chimes with Cartwright’s maxim “No causes in, no causes out”, and goes to refute the apparently widespread belief that we are in possession of a soundly-based technology for drawing causal conclusions from purely observational data, without further assumptions …

In my view, the strong assumptions needed even to get started with causal interpretation of a DAG are far from self-evident as a matter of course, and whenever such an interpretation is proposed in a real-world context these assumptions should be carefully considered and justified. Without such justification, why should we have any faith at all in, say, the application of Pearl’s causal theory, or in the output of causal discovery algorithms?

But what would count as justification? … It cannot be conducted entirely within a model, but must, as a matter of logic, involve consideration of the interpretation of the terms in the model in the real world.

Philip Dawid
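To see concretely why causal assumptions cannot simply be read off the data, consider the following toy simulation (my own illustration, not Dawid's or Pearl's): two different causal structures that generate exactly the same observational distribution but disagree about what an intervention would do.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Model A: X -> Y  (X causes Y)
x_a = rng.normal(size=n)
y_a = x_a + rng.normal(size=n)

# Model B: Y -> X  (Y causes X), parameterised to give the same joint law
y_b = rng.normal(scale=np.sqrt(2), size=n)
x_b = 0.5 * y_b + rng.normal(scale=np.sqrt(0.5), size=n)

# Both models imply the same bivariate normal distribution ...
print(np.cov(x_a, y_a))  # ~ [[1, 1], [1, 2]]
print(np.cov(x_b, y_b))  # ~ [[1, 1], [1, 2]]

# ... yet an intervention do(X = 2) has different consequences:
# under A, E[Y | do(X=2)] = 2; under B, E[Y | do(X=2)] = 0.
```

No amount of observational data can distinguish the two models; only the causal assumption (here, the direction of the arrow) settles what do(X = 2) would bring about.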

Mainstream economics — slipping from the model of reality to the reality of the model

24 Apr, 2023 at 11:02 | Posted in Economics | Comments Off on Mainstream economics — slipping from the model of reality to the reality of the model

A couple of years ago, Paul Krugman had a piece up on his blog arguing that the ‘discipline of modelling’ is a sine qua non for tackling politically and emotionally charged economic issues:

In my experience, modeling is a helpful tool (among others) in avoiding that trap, in being self-aware when you're starting to let your desired conclusions dictate your analysis. Why? Because when you try to write down a model, it often seems to lead some place you weren't expecting or wanting to go. And if you catch yourself fiddling with the model to get something else out of it, that should set off a little alarm in your brain.

So when ‘modern’ mainstream economists use their models — standardly assuming rational expectations, Walrasian market clearing, unique equilibria, time invariance, linear separability and homogeneity of both inputs/outputs and technology, infinitely lived intertemporally optimizing representative agents with homothetic and identical preferences, etc. — and standardly ignoring complexity, diversity, uncertainty, coordination problems, non-market clearing prices, real aggregation problems, emergence, expectations formation, etc. — we are supposed to believe that this somehow helps them ‘to avoid motivated reasoning that validates what you want to hear.’

Yours truly is, to say the least, far from convinced. What sets off the alarm in my brain is that this, rather than being helpful for understanding real-world economic issues, is more of an ill-advised plaidoyer for voluntarily donning a methodological straitjacket of unsubstantiated assumptions known to be false.

Let me just give one example to illustrate my point.

In 1817 David Ricardo presented, in his Principles, a theory meant to explain why countries trade and, based on the concept of opportunity cost, how the pattern of exports and imports is governed by countries exporting goods in which they have a comparative advantage and importing goods in which they have a comparative disadvantage.
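Ricardo's own numbers from the Principles make the logic easy to check; a minimal sketch:

```python
# Ricardo's numbers from Principles (1817): labour needed per unit of output.
labour = {
    "Portugal": {"wine": 80, "cloth": 90},
    "England":  {"wine": 120, "cloth": 100},
}

for country, cost in labour.items():
    # Opportunity cost of one unit of wine, measured in cloth forgone
    print(f"{country}: 1 wine costs {cost['wine'] / cost['cloth']:.2f} cloth")

# Portugal (0.89 cloth) beats England (1.20 cloth), so Portugal has the
# comparative advantage in wine and England in cloth, even though Portugal
# needs less labour for BOTH goods.
```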

Ricardo's theory of comparative advantage, however, didn't explain why the comparative advantage was the way it was. At the beginning of the 20th century, two Swedish economists, Eli Heckscher and Bertil Ohlin, presented a theory/model/theorem according to which comparative advantages arise from differences in factor endowments between countries: countries have comparative advantages in producing goods that use intensively the production factors they have in abundance. Countries would accordingly mostly export goods that use their abundant factors of production and import goods that mostly use factors of production that are scarce at home.

The Heckscher-Ohlin theorem, like the elaborations on it by e.g. Vanek, Stolper and Samuelson, builds on a series of restrictive and unrealistic assumptions. The most critically important, besides the standard market-clearing equilibrium assumptions, are:

(1) Countries use identical production technologies.

(2) Production takes place with constant returns to scale technology.

(3) Within countries the factor substitutability is more or less infinite.

(4) Factor prices are equalised (the Stolper-Samuelson extension of the theorem).

These assumptions are, as almost all empirical testing of the theorem has shown, totally unrealistic. That is, they are empirically false. Building theories and models on unjustified, patently ridiculous assumptions we know to be false does not deliver real science. Science fiction is not science.

That said, one could indeed wonder why on earth anyone should be interested in applying this theorem to real-world situations. Like so many other mainstream mathematical models taught to economics students today, this theorem has very little to do with the real world.

From a methodological point of view, one can, of course, also wonder, how we are supposed to evaluate tests of a theorem building on known to be false assumptions. What is the point of such tests? What can those tests possibly teach us? From falsehoods, anything logically follows.

Some people have trouble with the fact that by allowing false assumptions mainstream economists can generate whatever conclusions they want in their models. But that’s really nothing very deep or controversial. What I’m referring to is the well-known ‘principle of explosion,’ according to which if both a statement and its negation are considered true, any statement whatsoever can be inferred.
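In formal terms this is ex falso quodlibet, and it is literally a one-line theorem; a minimal sketch in Lean 4:

```lean
-- Ex falso quodlibet: from a contradiction, any proposition Q follows.
example (P Q : Prop) (h : P ∧ ¬P) : Q :=
  absurd h.1 h.2
```

Popper made the same point about self-contradictory statements: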

Whilst tautologies, purely existential statements and other nonfalsifiable statements assert, as it were, too little about the class of possible basic statements, self-contradictory statements assert too much. From a self-contradictory statement, any statement whatsoever can be validly deduced. Consequently, the class of its potential falsifiers is identical with that of all possible basic statements: it is falsified by any statement whatsoever.

Using false assumptions, mainstream modellers can derive whatever conclusions they want. Wanting to show that ‘all economists consider austerity to be the right policy,’ simply assume, e.g., that ‘all economists are from Chicago’ and that ‘all economists from Chicago consider austerity to be the right policy.’ The conclusions follow by deduction, but are of course factually totally wrong. Models and theories built on that kind of reasoning are nothing but a pointless waste of time.

Désenchantée

23 Apr, 2023 at 21:02 | Posted in Varia | Comments Off on Désenchantée


Republican family values

23 Apr, 2023 at 16:32 | Posted in Politics & Society | Comments Off on Republican family values

[Screenshot of a POLITICO tweet: "I felt like Joe Biden," Donald Trump joked, retelling a time he wanted to hug and kiss a general after being told the military could defeat ISIS far ahead …]

‘New Keynesian’ DSGE models — unparalleled spectacular failures

23 Apr, 2023 at 08:37 | Posted in Economics | Comments Off on ‘New Keynesian’ DSGE models — unparalleled spectacular failures

The problem of the DSGE-models (and more generally of rational expectations macroeconomic models) is that they assume extraordinary cognitive capabilities of individual agents …

The fact that the assumption of rational expectations is implausible does not necessarily mean that models using such an assumption cannot be powerful tools in making empirical predictions. The problem, however, is that rational expectations macroeconomic models make systematically wrong predictions, in particular about the speed with which prices adjust. This empirical failure could have led the profession of macroeconomists to drop the model and to look for another one. Instead, macroeconomists decided to stick to the rational expectations model but to load it with a series of ad-hoc repairs that were motivated by a desire to improve its fit …

The success of the DSGE-model has much to do with the story it tells about how the macroeconomy functions. This is a story in which rationality of superbly informed and identical agents reigns … We have questioned this story … No individual can ever hope to understand and to process the full complexity of the world in which he lives. That’s why markets are so important. They are institutions that efficiently aggregate the diverse bits of information stored in individual brains …

Paul De Grauwe

De Grauwe's critique of the repair-shop treatment of DSGE modelling convincingly shows that ‘New Keynesian’ tweaking of DSGE models will not do the job. Why? Basically because the models do not acknowledge genuine real-world uncertainty, and without this acknowledgement the job will not be done appropriately.

It also underscores the necessity of being sceptical of the pretences and aspirations of ‘New Keynesian’ macroeconomics. So far it has been impossible to see that it has yielded very much in terms of realist and relevant economic knowledge. And — as if that wasn’t enough — there’s nothing new or Keynesian about it!

‘New Keynesianism’ doesn't have its roots in Keynes. It has its intellectual roots in Paul Samuelson's ill-founded ‘neoclassical synthesis’ project, whereby he thought he could save the ‘classical’ view of the market economy as a (long-run) self-regulating, market-clearing equilibrium mechanism by adding some (short-run) frictions and rigidities in the form of sticky wages and prices.

But putting sticky-price lipstick on the ‘classical’ pig will not do. The ‘New Keynesian’ pig is still neither Keynesian nor new.

The rather one-sided emphasis on usefulness and its concomitant instrumentalist justification cannot hide the fact that ‘New Keynesians’ cannot give supportive evidence for considering it fruitful to analyze macroeconomic structures and events as the aggregated result of optimizing representative actors. After having analyzed some of its ontological and epistemological foundations, yours truly cannot but conclude that ‘New Keynesian’ macroeconomics, on the whole and not only as regards its repaired DSGE models, has delivered nothing but ‘as if’ unreal and irrelevant models.

The purported strength of New Classical and ‘New Keynesian’ macroeconomics is that they have firm anchorage in preference-based microeconomics, especially the decisions taken by inter-temporal utility maximizing ‘forward-looking’ individuals.

To some of us, however, this has come at too high a price. The almost quasi-religious insistence that macroeconomics has to have microfoundations, without ever presenting either ontological or epistemological justifications for this claim, has turned a blind eye to the weakness of the whole enterprise of trying to depict a complex economy based on an all-embracing representative actor equipped with superhuman knowledge, forecasting abilities and forward-looking rational expectations. It is as if these economists want to resurrect the omniscient Walrasian auctioneer in the form of all-knowing representative actors equipped with rational expectations and assumed to somehow know the true structure of our model of the world. A model of the world that, as De Grauwe puts it, "like, in the socialist models of the past, assumes an all-knowing individual, who can compute the optimal plans and set the optimal prices."

The wretchedness of Swedish economic journalism

21 Apr, 2023 at 17:04 | Posted in Politics & Society | 1 Comment

The commercial banks are the impartial expert assessors of economic policy. But if you want to attract attention, you should be one of the business sector's think-tank pundits.

Those are the two most obvious conclusions to draw from Monday's Swedish media reporting on the spring budget.

In a TT article published already in the morning, Nordea's chief economist Annika Winsth and Swedbank's chief economist Mattias Persson are interviewed, along with, at the very end of the text, the director-general of the National Institute of Economic Research (Konjunkturinstitutet) … The experts turn out to be fairly pleased …

That things look like this is nothing new. When the newspaper Flamman in 2019 examined which economists had been interviewed in 100 press articles, six out of ten came from the business sector, above all from employers' organisations, banks and insurance companies. That was ten times more than from the trade unions and organisations close to them …

There is nothing innocent about this. Through the combination of the media's eternal love of bank economists and skilful interventions from Svenskt Näringsliv and Timbro, which have evidently been poised for exactly this, the whole discussion is pushed to the right. What would an economic historian like Lovisa Broström or an economist like Lars Pålsson Syll have to say about the strategy of meeting galloping inflation with interest-rate hikes alone? There is no lack of critical voices; they just aren't seen …

Petter Larsson / Aftonbladet

It is hard not to agree. Rarely, if ever, does one have reason to be satisfied with the economic journalism in the press, on radio and on television. Very few economic journalists and stock-market analysts take their job seriously and practise economic journalism worthy of the name. Most of us have therefore long since given up. A pity, then, that the public has to settle for the opinionated nonsense that business interests and the political right get to trumpet out in the media, entirely unchallenged, week after week.

The importance of not equating science with statistical calculations

21 Apr, 2023 at 12:27 | Posted in Statistics & Econometrics | Comments Off on The importance of not equating science with statistical calculations

All science entails human judgment, and using statistical models doesn't relieve us of that necessity. When we work with misspecified models, the scientific value of statistics is actually zero, even though the statistical inferences drawn may be formally valid! Statistical models are no substitute for doing real science. Or as a famous German philosopher wrote some 150 years ago:

There is no royal road to science, and only those who do not dread the fatiguing climb of its steep paths have a chance of gaining its luminous summits.

We should never forget that the underlying parameters we use when performing statistical tests are model constructions. And if the model is wrong, the value of our calculations is nil. As mathematical statistician and ‘shoe-leather researcher’ David Freedman wrote in Statistical Models and Causal Inference:

I believe model validation to be a central issue. Of course, many of my colleagues will be found to disagree. For them, fitting models to data, computing standard errors, and performing significance tests is “informative,” even though the basic statistical assumptions (linearity, independence of errors, etc.) cannot be validated. This position seems indefensible, nor are the consequences trivial. Perhaps it is time to reconsider.

All of this, of course, also applies when we use statistics in economics. Most work in econometrics and regression analysis is still done on the assumption that the researcher has a theoretical model that is ‘true.’ Based on this belief in having a correct specification for an econometric model or a regression, one proceeds as if the only problems remaining to be solved concern measurement and observation.

When things sound too good to be true, they usually aren't. And that goes for econometrics too. The snag is that there is precious little to support the perfect-specification assumption. Looking around in social science and economics, we don't find a single regression or econometric model that lives up to the standards set by the ‘true’ theoretical model, and there is precious little reason to believe things will be different in the future.

To think that we are able to construct a model in which all relevant variables are included and the functional relationships between them are correctly specified is not only a belief without support but a belief impossible to support.

The theories we work with when building our econometric regression models are insufficient. No matter what we study, there are always some variables missing, and we don’t know the correct way to functionally specify the relationships between the variables.

Every regression model constructed is misspecified. There is always an endless list of possible variables to include and endless possible ways to specify the relationships between them. So every applied econometrician comes up with his own specification and ‘parameter’ estimates. The econometric Holy Grail of consistent and stable parameter values is nothing but a dream.
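A small simulation (a hypothetical sketch, with made-up coefficients) shows how badly things can go when just one relevant variable is left out:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# "True" data-generating process, which the researcher never knows:
# y depends on x and on an unobserved confounder z.
z = rng.normal(size=n)
x = 0.8 * z + rng.normal(size=n)
y = 1.0 * x + 2.0 * z + rng.normal(size=n)

# Correctly specified regression: y on x and z
beta_full, *_ = np.linalg.lstsq(np.column_stack([x, z]), y, rcond=None)

# Misspecified regression omitting z: y on x alone
beta_omit, *_ = np.linalg.lstsq(x.reshape(-1, 1), y, rcond=None)

print(beta_full)  # ~ [1.0, 2.0]: recovers the true coefficients
print(beta_omit)  # ~ [1.98]: badly biased, x picks up z's effect
```

The omitted-variable bias here nearly doubles the estimated effect of x, and nothing in the misspecified regression's own output (fit, standard errors, significance tests) gives the game away.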

In order to draw inferences from data as described by econometric texts, it is necessary to make whimsical assumptions. The professional audience consequently and properly withholds belief until an inference is shown to be adequately insensitive to the choice of assumptions. The haphazard way we individually and collectively study the fragility of inferences leaves most of us unconvinced that any inference is believable. If we are to make effective use of our scarce data resource, it is therefore important that we study fragility in a much more systematic way. If it turns out that almost all inferences from economic data are fragile, I suppose we shall have to revert to our old methods …

Ed Leamer

A rigorous application of econometric methods in economics really presupposes that the phenomena of our real-world economies are ruled by stable causal relations between variables. Parameter values estimated in specific spatio-temporal contexts are presupposed to be exportable to totally different contexts. To warrant this assumption, one has to convincingly establish that the targeted acting causes are stable and invariant, so that they maintain their parametric status after the bridging. The endemic lack of predictive success of the econometric project indicates that this hope of finding fixed parameters is a hope for which there really is no other ground than hope itself.
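The same point can be made with a toy sketch (invented numbers again): estimate the 'same' regression in two different contexts and watch the parameter refuse to travel.

```python
import numpy as np

rng = np.random.default_rng(7)

def estimate_slope(true_slope: float, n: int = 5_000) -> float:
    """Fit y = b*x + noise by least squares and return the estimated slope."""
    x = rng.normal(size=n)
    y = true_slope * x + rng.normal(size=n)
    return np.polyfit(x, y, 1)[0]

# The same regression, run in two spatio-temporal contexts where the
# underlying causal relation differs:
print(estimate_slope(0.5))  # context 1: ~0.5
print(estimate_slope(2.0))  # context 2: ~2.0, the "parameter" did not travel
```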

The theoretical conditions that have to be fulfilled for regression analysis and econometrics to really work are nowhere even closely met in reality. Making outlandish statistical assumptions does not provide a solid ground for doing relevant social science and economics. Although regression analysis and econometrics have become the most used quantitative methods in social sciences and economics today, it’s still a fact that the inferences made from them are invalid.

Econometrics (and regression analysis) is basically a deductive method. Given the assumptions (such as manipulability, transitivity, separability, additivity, linearity, etc.), it delivers deductive inferences. The problem, of course, is that we will never completely know when the assumptions are right. Conclusions can only be as certain as their premises, and that also applies to econometrics and regression analysis.

Stephanie Kelton comes to Stockholm

20 Apr, 2023 at 18:11 | Posted in Economics | Comments Off on Stephanie Kelton comes to Stockholm


As the think tank Katalys celebrates its tenth anniversary, it has fittingly chosen to invite the author of one of the decade's biggest megahits in economics, The Deficit Myth (Underskottsmyten): Stephanie Kelton. If you find yourself in Stockholm on 6 May, I definitely think you should set aside a couple of hours to visit ABF-huset!

For several years now, yours truly has been asking why this country has been endowed with governments that do not dare to bet heavily on an expansionary fiscal policy and borrow more, not least during periods when we had historically low interest rates and there were golden opportunities to invest in infrastructure, health care, schools and welfare.

Judging by the recurring statements in the media in recent weeks, it unfortunately seems that our current finance minister, Elisabeth Svantesson, has serious gaps in her knowledge. A little ‘functional finance’ and MMT might not hurt. Or why not read Kelton's book, now available in Swedish translation?

A country's national debt is seldom a cause of economic crisis, but rather a symptom of a crisis that will in all likelihood get worse if the public-finance deficits are not allowed to grow.

Instead of talking about "saving up in the barns" and "safeguarding public finances", a responsible government should make sure to safeguard society's future.

What many politicians and so-called media experts do not seem to (want to) understand is that there is a crucial difference between private and public debt. If a single individual tries to save and pay down his debts, that may well be rational. But if everyone tries to do it, the consequence is that demand falls and unemployment risks rising.

An individual must always pay his debts. But a sovereign state with its own currency can always pay back its old debts with new debts. The state is not an individual. Government debt is not like private debt. A state's debts are essentially a debt to itself, to its citizens.

A national debt is neither good nor bad. It should be a means of achieving two overarching macroeconomic goals: full employment and price stability. What is ‘sacred’ is not a balanced budget or keeping the national debt down. If the idea of ‘sound’ public finances leads to higher unemployment and unstable prices, it ought to be obvious that it should be abandoned. ‘Sound’ public finances are unsound.

Budget deficits are not Sweden's problem today. And to keep talking about "saving up in the barns" is just plain foolishness. As my old mentor Sven Grassman put it: "sacrifices and restrained consumption do not lead to progress."

You Want It Darker

20 Apr, 2023 at 15:09 | Posted in Varia | Comments Off on You Want It Darker

