The inherent epistemological limitation of econometric testing

30 Jun, 2023 at 09:24 | Posted in Statistics & Econometrics | 5 Comments

To understand the relationship between economic data and economic phenomena, it is helpful first to be clear about what we mean by each of these terms. Following Jim Woodward (1989), we can characterize “phenomena” as features of our experience that we take to be “relatively stable” and “which are potential objects of explanation and prediction by general theory.” The phenomena themselves are in general not directly observable, and so in order to investigate claims about them, we require some observable representation. Data play this role. And although it is a crucial role, it is a supporting rather than a starring role. As Woodward suggests, “data are typically not viewed as potential objects of explanation by or derivation from general theory; indeed, they typically are of no theoretical interest except insofar as they constitute evidence” for claims about the phenomena. Data are simply matrices of numbers. Economically speaking, characterizing the internal relations of a matrix of numbers is not of inherent interest. It only becomes so when we claim that the numbers represent in some way actual phenomena of interest.

What is the nature of this representation? Data are, in a sense, meant to be a quantitative crystallization of the phenomena. In order to determine what will count as data for a particular phenomenon or set of phenomena, one must specify particular observable and quantifiable features of the world that can capture the meaning of the phenomena adequately for the purposes of one’s particular inquiry …

Inferences about the data are inferences about model objects and are therefore a part of the model narrative. We can validly interpret such inferences about the data as possible inferences about the underlying social phenomena only to the extent that we have established the plausibility of a homomorphic relationship between the data and the aspects of the underlying phenomena they are meant to represent. This homomorphism requirement, then, is an extension of the essential compatibility requirement: in empirical modeling exercises, the requirement of essential compatibility between model and target includes a requirement of homomorphism between data and target (because the data are a part of the model) …

Econometricians are, of course, well aware of the importance of the relationship between the data and the underlying phenomena of interest. In the literature, this relationship is generally couched in terms of a data-generating process (DGP) … If we were to be able to perceive the true DGP in its entirety, we would essentially know the complete underlying structure whose observable precipitates are the data. Our only evidence of the DGP, however, is the data …

It is important to note, however, that characterizing pieces of the data-generating process is an intra-model activity. It reveals the possible mathematical structure underlying a matrix of numbers, and it is properly judged according to (and only according to) the relevant rules of mathematics. In contrast, the requirement that a relation of homomorphism exist between the data and the underlying phenomena is concerned with the relationship between model and target entities. The extent to which data satisfy this requirement in any given case cannot be determined through econometric analysis, nor does econometric analysis obviate the need to establish that the requirement is met. On the contrary, the results of an econometric analysis of a given data set — i.e. the characterization of a piece of its DGP — can be validly interpreted as providing epistemic access to the target only if it is plausible that a relation of homomorphism holds between the data and the aspects of the target they ostensibly represent.

Econometrics is supposed to be able to test economic theories. But to serve as a testing device you have to make many assumptions, many of which cannot themselves be tested or verified. To make things worse, there are also rarely strong and reliable ways of telling us which set of assumptions is to be preferred. Trying to test and infer causality from data, you have to rely on assumptions such as disturbance terms being ‘independent and identically distributed’; functions being additive, linear, and with constant coefficients; parameters being ‘invariant under intervention’; variables being ‘exogenous’, ‘identifiable’, ‘structural’, and so on. Unfortunately, we are seldom or never informed of where that kind of ‘knowledge’ comes from, beyond referring to the economic theory that one is supposed to test.

That leaves us in the awkward position of admitting that if the assumptions made do not hold, the inferences, conclusions, and testing outcomes econometricians come up with simply do not follow from the data and statistics they use.

The central question is ‘How do we learn from empirical data?’ But we have to remember that the value of testing hinges on our ability to validate the — often unarticulated — assumptions on which the testing models are built. If the model is wrong, the test apparatus simply gives us fictional values. There is always a risk that one turns a blind eye to some of those non-fulfilled technical assumptions that actually make the testing results — and the inferences we build on them — unwarranted. Econometric testing builds on the assumption that the hypotheses can be treated as hypotheses about (joint) probability distributions and that economic variables can be treated as if pulled out of an urn as a random sample. Most economic phenomena are nothing of the kind.
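
To see what is at stake, here is a minimal simulation sketch (my own illustration, not part of the argument above, and the numbers are arbitrary): two random walks that are unrelated by construction are regressed on each other, and because the disturbances violate the ‘independent and identically distributed’ assumption behind the conventional standard errors, the t-test confidently ‘finds’ a relationship that does not exist.

```python
import numpy as np

# Spurious-regression sketch: when the i.i.d. assumption behind the test fails,
# the usual t-statistic gives 'fictional' results.

rng = np.random.default_rng(42)
n = 500

x = np.cumsum(rng.normal(size=n))   # a random walk
y = np.cumsum(rng.normal(size=n))   # another random walk, independent of x

# Ordinary least squares of y on a constant and x
X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta

# Conventional standard error, valid only under i.i.d. disturbances
sigma2 = resid @ resid / (n - 2)
se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
t_stat = beta[1] / se

print(f"estimated slope: {beta[1]:.3f}, t-statistic: {t_stat:.1f}")
# The t-statistic typically lies far beyond any conventional critical value,
# even though x and y are unrelated by construction.
```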

Most users of the econometric toolbox seem to have a built-in blindness to the fact that mathematical-statistical modelling in social sciences is inherently incomplete, since it builds on the presupposition — without serious argumentation or warrant — that the model properties also apply to the intended real-world target systems studied. Many of the processes and structures that we know play essential roles in the target systems do not show up — often for reasons of mathematical-statistical tractability — in the models. The bridge between model and reality is missing. Valid and relevant information goes unrecognized and is lost, making the models harmfully misleading and largely irrelevant if our goal is to learn, explain or understand anything about actual economies and societies. Without strong evidence of an essential compatibility between model and reality, the analysis becomes nothing but fictitious storytelling of questionable scientific value.

It is difficult to find any hard evidence that econometric testing has been able to exclude any economic theory. If we are to judge econometrics by its capacity to eliminate invalid theories, it has not been a very successful business.

The façade of precision in mainstream economics

29 Jun, 2023 at 11:11 | Posted in Economics | 1 Comment

[Jevons] is a man of some ability, but he seems to me to have a mania for encumbering questions with useless complications, and with a notation implying the existence of greater precision in the data than the questions admit of. 

John Stuart Mill

Fixation on constructing models — “implying the existence of greater precision in the data than the questions admit of” — showing the certainty of logical entailment — realiter simply collapsing the necessary ontological gap between model and reality — has since the days of Jevons and the marginalist revolution been detrimental to the development of a relevant and realist economics. Insisting on formalistic (mathematical) modelling forces the economist to give up on realism and substitute axiomatics for real-world relevance. The price for rigour and precision is far too high for anyone who is ultimately interested in using economics to pose and (hopefully) answer real-world questions and problems.

This deductivist orientation is the main reason behind the difficulty that mainstream economics has in terms of understanding, explaining, and predicting what takes place in our societies. But it has also given mainstream economics much of its discursive power — at least as long as no one starts asking tough questions about the veracity of — and justification for — the assumptions on which the deductivist foundation is erected. Asking these questions is an important ingredient in a sustained critical effort at showing how nonsensical is the embellishing of a smorgasbord of models founded on wanting (and often hidden) methodological foundations.

The mathematical-deductivist straitjacket used in mainstream economics presupposes atomistic closed systems — i.e., something that we find very little of in the real world, a world significantly at odds with the (implicitly) assumed logical world where deductive entailment rules the roost. Ultimately, then, the failings of modern mainstream economics have their root in a deficient ontology. The kind of formal-analytical and axiomatic-deductive mathematical modelling that makes up the core of mainstream economics is hard to make compatible with a real-world ontology. It is also the reason why so many critics find mainstream economic analysis patently and utterly unrealistic and irrelevant.

If we want theories and models to confront reality, there are obvious limits to what can be said rigorously in economics. In the deductivist approach, model consistency trumps coherence with the real world. That is surely getting the priorities wrong. Creating models for their own sake is not an acceptable scientific aspiration — impressive-looking formal-deductive (mathematical) models should never be mistaken for truth.

To construct and use an economic model you have to start by establishing that the phenomena modeled are ontologically compatible with the model. The rigour and precision in models have a devastatingly important trade-off: the higher the level of rigour and precision, the smaller the range of real-world applications. So the more mainstream economists insist on formal logic validity, the less they have to say about the real world. And to think we solve the problem by reforms to mathematical modelling is like looking for a spoon when what is needed is a knife.

Håll mitt hjärta

29 Jun, 2023 at 01:07 | Posted in Varia | Comments Off on Håll mitt hjärta

California Sun

28 Jun, 2023 at 17:52 | Posted in Varia | 2 Comments

Although it’s forty years now since I was a research student at the University of California, on a day like this, I sure wish I was back in Redondo Beach, Ocean Park, Venice Beach …

The lack of theory in social experiments

28 Jun, 2023 at 17:40 | Posted in Economics | Comments Off on The lack of theory in social experiments

Jason Collins discusses a paper by Milkman et al. that presented “a megastudy testing 54 interventions to increase the gym visits of 61,000 experimental participants” …

Collins’s discussion seems reasonable to me. In particular, I agree with his big problem about the design of this “mega-study,” which is that there’s all sorts of rigor in the randomization and analysis plan, but no rigor at all when it comes to deciding what interventions to test.

Unfortunately, this is standard practice in policy analysis! Indeed, if you look at a statistics book, including mine, you’ll see lots and lots on causal inference and estimation, but nothing on how to come up with the interventions to study in the first place …

What are those 54 interventions, anyway? Just some things that a bunch of well-connected economists wanted to try out. Well-connected economists know lots of things, but maybe not so much about motivating people to go to the gym.

A related problem is variation: These treatments, even when effective, are not simply push-button-X-and-then-you-get-outcome-Y. Effects will be zero for most people and will be highly variable among the people for whom effects are nonzero. The result is that the average treatment effect will be much smaller than you expect. This is not just a problem of “statistical power”; it’s also a conceptual problem with this whole “reduced-form” way of looking at the world. To put it another way: Lack of good theory has practical consequences.

Andrew Gelman
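
Gelman’s point about diluted average effects is easy to make concrete. Here is a minimal simulation sketch (my own illustration, not Gelman’s; the 10 per cent responder share and the effect sizes are assumptions chosen purely for illustration):

```python
import numpy as np

# If a treatment moves only a small subgroup, and by varying amounts, the
# average treatment effect is far smaller than the effect on those affected.

rng = np.random.default_rng(0)
n = 61_000                                   # participants, as in the megastudy

responder = rng.random(n) < 0.10             # assume only 10% respond at all
individual_effect = np.where(
    responder,
    rng.normal(2.0, 1.0, n),                 # assumed extra gym visits if responsive
    0.0,                                     # zero effect for everyone else
)

ate = individual_effect.mean()
effect_on_responders = individual_effect[responder].mean()

print(f"average treatment effect:     {ate:.2f} visits")
print(f"effect among responders only: {effect_on_responders:.2f} visits")
# The average effect is roughly a tenth of the responders' effect, before any
# sampling noise is even added.
```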

Hans-Jürgen Krahl and the Frankfurt School

27 Jun, 2023 at 16:04 | Posted in Politics & Society | Comments Off on Hans-Jürgen Krahl and the Frankfurt School

My favourite French teacher

27 Jun, 2023 at 13:22 | Posted in Varia | Comments Off on My favourite French teacher

Sraffa on Ricardo’s ‘corn model’ (wonkish)

26 Jun, 2023 at 17:29 | Posted in Economics | Comments Off on Sraffa on Ricardo’s ‘corn model’ (wonkish)

After being tasked with editing David Ricardo’s Collected Works in 1930, Sraffa, with the assistance of Maurice Dobb, published them between 1951 and 1973. This work earned him the 1961 Söderström Gold Medal from The Royal Swedish Academy of Sciences.

For the edition, Sraffa wrote an interesting and thought-provoking introduction. Its purpose was to demonstrate that the classical economists based their theory on the concept of surplus, defined as the remainder of the product after deducting the necessary production costs. Ricardo’s contribution to the classical tradition was primarily to specify in more detail the relationship between the shares of the various social classes in this surplus — the social net product — and its development and changes over time. Sraffa shows how Ricardo, throughout his adult life, wrestled with the problem of how to define the surplus in a way that allows for an unambiguous determination.

What has been fiercely debated among historians of economic thought is Sraffa’s thesis that Ricardo, up until 1815 and Essay on Profits, solved this problem by constructing a ‘corn model.’ According to Sraffa, Ricardo reasoned as follows:

In agriculture, only labour and corn (seed) are used to produce more corn. The real wage is assumed to be given at a certain level and defined as a specific quantity of corn. This means that inputs and outputs are homogeneous and can therefore be measured in physical terms. The rate of profit can then be defined as the quantity of output (expressed in corn) minus the quantity of input (expressed in corn), divided by the quantity of input (expressed in corn). This ratio can be determined without taking into account the other sectors of society. Since, according to the “equality assumption,” there can only exist one rate of profit, the individual rates of profit in the other sectors must adjust to that of agriculture, which thereby determines the general rate of profit in the economy.
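
Written out as a formula (a minimal restatement of the ratio just described, where Q_out denotes the corn harvested and Q_in the corn advanced as seed and wages, both measured in physical units of corn):

$$ r \;=\; \frac{Q_{\text{out}} - Q_{\text{in}}}{Q_{\text{in}}} $$

Since numerator and denominator are quantities of the same commodity, the ratio is determined without any reference to prices or to a theory of value.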

The way in which Ricardo, according to Sraffa, determines the rate of profit in Essay on Profits implies that there is no need for a theory of value to determine the distribution of the net product between profit, rent, and wages. However, as soon as Ricardo departs from the assumption that the economy is fundamentally single-sectorial, it becomes necessary to determine the profit on capital as it is determined in the market through commodity prices. When attempting to extend a corn model to include other production sectors that are mutually interdependent, problems arise. These force one onto partially new paths: one reaches a crossroads where the choice lies between divisible and indivisible production systems. It involves a possible conception of production as natural on the one hand and a necessary concept of the social character of production on the other.

In a single-sector model, distribution and production can be reduced from social entities to natural ones, but the idea of production confined to one sector contradicts the very concept of capitalist production, which is the production of value without physical boundaries. Therefore, it becomes necessary to go beyond the corn model to achieve a more accurate understanding of distribution and value within the framework of capitalist society. In doing so, one will also partially break with the views of some earlier classical economists on the nature of the economy and instead see production as socially determined. The importance of a theory of value becomes clear to Ricardo through the contradictions that this creates within the theory of distribution. According to Sraffa, what Ricardo is attempting to demonstrate in Principles is that the conclusions he has reached regarding distribution within the single-sector model are also generally valid in an interdependent multi-sector economy.

In the corn model, labour functions merely as a mediating link in the natural production process where corn is transformed into more corn. There is no conception of labour as socially equivalent. By having all production occur in the corn sector, all labour is transformed into corn labour. This is, to use Marx’s terminology, to reduce abstract labour to concrete labour and therefore to reduce the social character of labour to labour as nature. The root of the confusion between labour and labour power, which will complicate much of the subsequent work on the theory of value, can be found here.

The perception of production as a natural process essentially means that production loses its economic character and instead takes the form of general metabolism. It becomes a logical category with the meaning of transforming inputs into outputs. However, production in this general form cannot serve as the basis for economic relations because it is indifferent to the specificity of these relations.

In the distribution theory attributed to Ricardo by Sraffa, the starting point is the corn model because only through it can Ricardo separate the determination of the value of goods from their distribution. Capital in this model takes the form of circulating capital, specifically a wage fund of labour advanced, which is also the share of the previous production period that accrues to labour. In an economy of this kind, the simplicity of the distribution problem is a function of its character as the distribution of material entities. To measure the net product, we only need access to the natural unit of corn. It is worth noting further that the distribution of the good has nothing to do with the exchange of the product, and the secret of the corn model actually lies in this assumption of the absence of commodity exchange. Since there is only one product, there is, of course, no reason for the exchange of this universal product among the different actors.

Ricardo’s solution to the distribution problem in Essay on Profits is to dissolve the distributive parts into their natural class components. In order to present these shares as natural, the distribution must be separated from the value form of the product. Even in his later works, Ricardo, according to Sraffa, would attempt to reduce his theory of value to the only thing he assumes it is based on, namely the distribution of material products. Wages and rent acquire the character of natural distribution variables because they are not produced. Profit, which is the return on capital, on the other hand, expresses that production in a capitalist society is the production of capital. However, Ricardo tries to reduce profit to a natural factor by considering capital as an accumulation of material objects, as a capital stock.

Once capital and its self-increase in the form of profit have been reduced to natural categories, the only task remaining is to determine the proportions in which wages and profit should stand to each other. Establishing this proportion, and above all its development, is Ricardo’s main concern in Principles. The task of economics becomes to determine the distribution of a given value production. The inability of the corn model to treat distribution as the distribution of value is related to an unwillingness to view distribution as fundamentally a social process. The social form must be subordinated to natural reality.

However, Sraffa’s interpretation of the ‘early Ricardo’ is based on rather vague grounds, and he himself admits (in the Introduction to Ricardo’s Collected Works, vol. I, p. xxxi) that his rational reconstruction was never “explicitly expressed by Ricardo.” Upon closer examination, Sraffa’s argument proves to be untenable. When Sraffa claims that Ricardo provided an exact formula for determining the profit rate in the agricultural sector, it is more accurate to say that Ricardo indicated the dependence of the profit rate on the size of the surplus. In Essay on Profits capital is calculated in terms of corn — capital is estimated in quarters of corn — but nowhere is it assumed that it would consist only of corn, let alone that corn would be the sole input or the only commodity included in the wage basket.

Identity politics dilemmas …

25 Jun, 2023 at 16:02 | Posted in Politics & Society | Comments Off on Identity politics dilemmas …

It is not always easy to be ‘PK’ (politically correct) …

[Video: SVT Humor, ‘Stallet med vänner’: ‘PK-kris på förskolan’. SVT Play: https://bit.ly/3wERaxE]

Yours truly in MMT interview

25 Jun, 2023 at 14:58 | Posted in Economics | Comments Off on Yours truly in MMT interview

Flamman (F): First of all, what is MMT?

Lars Pålsson Syll (LPS): Fundamentally, it is a reaction against the way money, and how it is created, is described in the traditional economics literature. There, money is described as something you save by depositing it in the bank, which the bank in turn can lend out by creating credit. The banks’ creation of money thus presupposes that private individuals save. But that idea is completely wrong; that is not how money works. Especially not in an economy where the monetary base of notes and coins makes up only a few per cent of the money in circulation. What MMT emphasizes is that money is created ‘ex nihilo’, as the saying goes, that is, out of nothing. The bank presses a button, creates a loan, and thereby also creates new money. It is thus loans that create money, and not the other way around. That is the big difference.

F: What are the political implications of this?

LPS: We in MMT argue that the banks in this way acquire an incredible power over the economy. They have a self-interest in creating a lot of credit through loans that they can then profit from, since they pay a lower rate of interest than the one at which they lend. It is this net interest income that the banks live on. In a modern economy this creates rather perverse incentives in the banking system, since the banks constantly want credit to expand, which in turn leads to property bubbles and so on, which in the long run increases the risk of financial crises. That is what my old teacher Hyman Minsky, who is also one of the sources of inspiration for MMT, warned about as early as the 1960s and 1970s.

F: MMT is usually described as a way of conducting expansionary economic policy by simply creating new money, which the state then gets back through tax collection. This is because a country with a sovereign currency cannot go bankrupt. A precondition, however, is that the country has its own currency.

LPS: Yes, it becomes more complicated if you do not have your own sovereign currency, as is the case for the euro countries. The basis of the banks’ power is that they can create money that they know everyone is interested in, because everyone has to pay taxes. In Sweden that is done in Swedish kronor. According to MMT, the reason money can be created is precisely that taxes have to be paid in kronor. Otherwise people could have paid each other in whatever currencies they liked. I used to do research on alternative currencies, and here in Malmö there was something called Möllevångsdollar. That kind of thing can work on a small scale where people trust one another. But the problem with alternative currencies is that you cannot pay your taxes with them. That is why kronor are the only real money in Sweden. And that is why the banks have the power to create money through loans …

F: What would happen to inflation if MMT policies were pursued?

LPS: Many argue that inflation would be lower, because you would no longer be feeding the banks’ lending business, which fuels financial bubbles. That would slow down the rate of inflation. But it depends on which inflation you are talking about. The usual measure of inflation is the consumer price index. But today we have so much else that unfortunately matters more than consumption. The financial sector has taken over a huge part of the economy, and the changes taking place in its asset markets are not captured by the usual inflation measures. So if you gave more money to ordinary people, their purchasing power would increase, and since they consume a large share of their incomes, demand for traditional consumer goods would rise. On that basis one can argue that inflation would go up. But at the same time, less money would be left over for the financial sector, which has mainly been driving inflation. That is why MMT proponents argue that inflation could be kept in check. If the Riksbank takes over the creation of money, inflation can be regulated more easily than is possible today. Contrary to what the economics textbooks say, it is the Riksbank that today adapts to the commercial banks’ demand for loans, and not the other way around. A precondition for this, however, is that the Riksbank’s new currency completely replaces the commercial banks’ kronor.

Jonas Elvander / Flamman

The difference between rate and probability (wonkish)

25 Jun, 2023 at 11:44 | Posted in Statistics & Econometrics | 3 Comments

Suppose there is a series of Bernoulli trials, that each trial has the same probability p of success, and that the trials are independent—like the standard model of coin tossing, treating ‘heads’ as ‘success.’ Then the Law of Large Numbers guarantees that the rate of successes converges (in probability) to the probability of success.

If a sequence of trials is random and the chance of success is the same in each trial, then the empirical rate of success is an unbiased estimate of the underlying chance of success. If the trials are random and they have the same chance of success and you know the dependence structure of the trials (for example, if the trials are independent), then you can quantify the uncertainty of that estimate of the underlying chance of success. But the mere fact that something has a rate does not mean that it is the result of a random process.

For example, suppose a sequence of heads and tails results from a series of random, independent tosses of an ideal fair coin. Then the rate of heads will converge (in probability) to one half. But suppose I give you the sequence ‘heads, tails, heads, tails, heads, tails, heads, tails, heads, tails, …’ ad infinitum. The limiting rate of heads is 1/2. While that sequence could be the result of a sequence of fair random tosses, it is implausible, and it certainly need not be. Sequences of outcomes are not necessarily the result of anything random, and rates are not necessarily (estimates of) probabilities.

Philip Stark
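
A minimal simulation sketch (my own illustration, not part of Stark’s text) makes the contrast concrete: a genuinely random sequence of fair-coin tosses and a purely deterministic alternating sequence both have a rate of heads close to one half, but only for the former is that rate an estimate of an underlying probability.

```python
import random

# Two sequences with the same limiting rate of heads, only one of which is
# generated by a random process.

random.seed(1)
n = 100_000

# 1) Independent fair-coin tosses: the rate of heads converges to the probability 1/2.
random_tosses = [random.random() < 0.5 for _ in range(n)]

# 2) A deterministic alternating sequence: heads, tails, heads, tails, ...
deterministic = [i % 2 == 0 for i in range(n)]

rate_random = sum(random_tosses) / n
rate_deterministic = sum(deterministic) / n

print(f"rate of heads, random tosses:        {rate_random:.4f}")
print(f"rate of heads, deterministic series: {rate_deterministic:.4f}")
# Both rates are (close to) 1/2, but only the first estimates an underlying
# chance of success; the second sequence involves no randomness at all.
```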

Things you should not do in your kitchen …

24 Jun, 2023 at 14:45 | Posted in Varia | Comments Off on Things you should not do in your kitchen …

Fight against cancel culture and identity politics

24 Jun, 2023 at 12:08 | Posted in Politics & Society | Comments Off on Fight against cancel culture and identity politics

What to do about high inflation?

23 Jun, 2023 at 16:22 | Posted in Economics | Comments Off on What to do about high inflation?

An interesting discussion, not least Adam Tooze’s contribution.

Glenda Jackson (1936-2023)

22 Jun, 2023 at 16:50 | Posted in Politics & Society | Comments Off on Glenda Jackson (1936-2023)
