How to achieve exchangeability (student stuff)

21 Oct, 2021 at 16:53 | Posted in Statistics & Econometrics | Leave a comment


Why are all luxury brands French?

21 Oct, 2021 at 14:47 | Posted in Varia | 1 Comment


My favourite French teacher!

Economics — non-ideological and value-free? I’ll be dipped!

20 Oct, 2021 at 17:46 | Posted in Economics | Leave a comment

I’ve subsequently stayed away from the minimum wage literature for a number of reasons. First, it cost me a lot of friends. People that I had known for many years, for instance, some of the ones I met at my first job at the University of Chicago, became very angry or disappointed. They thought that in publishing our work we were being traitors to the cause of economics as a whole.

David Card

David Card and the minimum wage myth

18 Oct, 2021 at 22:07 | Posted in Economics | 2 Comments

Back in 1992, New Jersey raised the minimum wage by 18 per cent while its neighbour state, Pennsylvania, left its minimum wage unchanged. Unemployment in New Jersey should — according to mainstream economic theory’s competitive model — have increased relative to Pennsylvania. However, when ‘Nobel prize’ winning economist David Card and his colleague Alan Krueger gathered information on fast food restaurants in the two states to check what employment effects the minimum wage really has — using a basic difference-in-differences approach — it turned out that unemployment had actually decreased in New Jersey relative to that in Pennsylvania. Counter to mainstream theory we had an anomalous case of a backward-sloping supply curve.
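The difference-in-differences logic behind that comparison is easy to sketch. The numbers below are made up for illustration only (they are not Card and Krueger’s data); the point is simply how the estimator is formed: the change in New Jersey minus the change in Pennsylvania.

```python
import pandas as pd

# Hypothetical employment counts per fast-food restaurant (illustrative only,
# NOT Card & Krueger's actual data), before and after the 1992 NJ wage rise.
data = pd.DataFrame({
    "state":      ["NJ", "NJ", "NJ", "NJ", "PA", "PA", "PA", "PA"],
    "period":     ["before", "after"] * 4,
    "employment": [20.0, 21.0, 19.5, 20.5,   # NJ before/after pairs
                   23.0, 22.0, 21.5, 20.5],  # PA before/after pairs
})

# Average employment by state and period.
means = data.groupby(["state", "period"])["employment"].mean()

# Difference-in-differences: (NJ after - NJ before) - (PA after - PA before).
did = (means.loc[("NJ", "after")] - means.loc[("NJ", "before")]) \
    - (means.loc[("PA", "after")] - means.loc[("PA", "before")])

print(f"DiD estimate of the minimum-wage effect on employment: {did:+.2f}")
```

With these made-up numbers the estimate comes out positive, which is the kind of result that contradicted the textbook prediction.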

Lo and behold!

But of course — when facts and theory don’t agree, it’s the facts that have to be wrong …

The inverse relationship between quantity demanded and price is the core proposition in economic science, which embodies the pre-supposition that human choice behavior is sufficiently rational to allow predictions to be made. Just as no physicist would claim that “water runs uphill,” no self-respecting economist would claim that increases in the minimum wage increase employment. Such a claim, if seriously advanced, becomes equivalent to a denial that there is even minimal scientific content in economics, and that, in consequence, economists can do nothing but write as advocates for ideological interests. Fortunately, only a handful of economists are willing to throw over the teaching of two centuries; we have not yet become a bevy of camp-following whores.

James M. Buchanan in Wall Street Journal (April 25, 1996)

Does working from home work?

17 Oct, 2021 at 15:06 | Posted in Economics | Leave a comment


The frequency of WFH [working from home] has been rising rapidly … We report the results of the first randomized experiment on working from home, run in a 16,000-employee, NASDAQ-listed Chinese firm … We found a highly significant 13% increase in employee performance from WFH, of which about 9% was from employees working more minutes of their shift period (fewer breaks and sick days) and about 4% from higher performance per minute … Home workers also reported substantially higher work satisfaction and psychological attitude scores, and their job attrition rates fell by over 50%. Furthermore, when the experiment ended and workers were allowed to choose whether to work at home or in the office, selection effects almost doubled the gains in performance.

Nicholas Bloom et al.
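As a rough reading of the reported figures (my back-of-the-envelope decomposition, not the paper’s own formula): output per shift is minutes worked times output per minute, so in logs the two pieces add up approximately,

$$\Delta\ln(\text{output}) \;=\; \Delta\ln(\text{minutes worked}) + \Delta\ln(\text{output per minute}) \;\approx\; 0.09 + 0.04 \;\approx\; 0.13 .$$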

Germany – a paradise for money laundering

16 Oct, 2021 at 18:31 | Posted in Politics & Society | Leave a comment


‘Nobel prize’ econometrics

16 Oct, 2021 at 09:59 | Posted in Statistics & Econometrics | Leave a comment


Great presentation, but I do think Angrist ought to have also mentioned that although ‘ideally controlled experiments’ may tell us with certainty what causes what effects, this is so only when given the right ‘closures.’ Making appropriate extrapolations from — ideal, accidental, natural or quasi — experiments to different settings, populations or target systems is not easy. “It works there” is no evidence for “it will work here.” The causal background assumptions made have to be justified, and without licenses to export, the value of ‘rigorous’ and ‘precise’ methods used when analyzing ‘natural experiments’ is often despairingly small. Since the core assumptions on which IV analysis builds are NEVER directly testable, those of us who choose to use instrumental variables to find out about causality ALWAYS have to defend and argue for the validity of the assumptions the causal inferences build on. Especially when dealing with natural experiments, we should be very cautious when presented with causal conclusions without convincing arguments about the veracity of the assumptions made. If you are out to make causal inferences you have to rely on a trustworthy theory of the data generating process. The empirical results causal analysis supplies us with are only as good as the assumptions we make about the data generating process. Garbage in, garbage out.
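To make the point about untestable core assumptions concrete, here is a minimal instrumental-variables sketch on simulated data (my own toy example, not Angrist’s). The exclusion restriction holds here only because the simulation imposes it; in real applications it can only ever be argued for, never verified from the data.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Simulated data-generating process (known here by construction,
# never directly observable in real applications).
u = rng.normal(size=n)                       # unobserved confounder
z = rng.normal(size=n)                       # instrument: shifts x, not y directly
x = z + u + rng.normal(size=n)               # treatment, confounded by u
y = 2.0 * x + 3.0 * u + rng.normal(size=n)   # true causal effect of x on y is 2

# Naive OLS of y on x is biased, because u moves both x and y.
ols = np.cov(x, y)[0, 1] / np.var(x, ddof=1)

# IV (Wald) estimator: cov(z, y) / cov(z, x).
iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]

print(f"OLS estimate: {ols:.2f}  (biased away from 2)")
print(f"IV estimate:  {iv:.2f}  (near 2 only because the exclusion "
      f"restriction is true by construction here)")
```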

At last, a balanced budget!

15 Oct, 2021 at 15:24 | Posted in Economics | Leave a comment


Poem of the atoms

15 Oct, 2021 at 15:10 | Posted in Varia | Leave a comment


O day, arise! The atoms are dancing.
Thanks to Him the universe is dancing.
The souls are dancing, overcome with ecstasy.
I’ll whisper in your ear where their dance is taking them.
All the atoms in the air and in the desert know well, they seem insane.
Every single atom, happy or miserable,
Becomes enamoured of the sun, of which nothing can be said.

Jalāl ad-Dīn Muhammad Rūmī (1207-1273)

Econometric toolbox developers get this year’s ‘Nobel prize’ in economics

11 Oct, 2021 at 17:46 | Posted in Statistics & Econometrics | 2 Comments

Many of the big questions in the social sciences deal with cause and effect. How does immigration affect pay and employment levels? How does a longer education affect someone’s future income? …

This year’s Laureates have shown that it is possible to answer these and similar questions using natural experiments. The key is to use situations in which chance events or policy changes result in groups of people being treated differently, in a way that resembles clinical trials in medicine.

Using natural experiments, David Card has analysed the labour market effects of minimum wages, immigration and education …

Data from a natural experiment are difficult to interpret, however … In the mid-1990s, Joshua Angrist and Guido Imbens solved this methodological problem, demonstrating how precise conclusions about cause and effect can be drawn from natural experiments.

Press release: The Prize in Economic Sciences 2021

For economists interested in research methodology in general and natural experiments in particular, these three economists are well-known. A central part of their work is based on the idea that random or as-if random assignment in natural experiments obviates the need to control for potential confounders, and hence that this kind of ‘simple and transparent’ design-based research method is preferable to more traditional multivariate regression analysis, where the controlling only comes in ex post via statistical modelling.

But — there is always a but …

The point of making a randomized experiment is often said to be that it ‘ensures’ that any correlation between a supposed cause and effect indicates a causal relation. This is believed to hold since randomization (allegedly) ensures that a supposed causal variable does not correlate with other variables that may influence the effect.

The problem with that simplistic view on randomization is that the claims made are exaggerated and sometimes even false:

• Even if you manage to make the assignment to treatment and control groups ideally random, the sample selection certainly is — except in extremely rare cases — not random. Even if we make a proper randomized assignment, if we apply the results to a biased sample, there is always the risk that the experimental findings will not apply. What works ‘there’ does not work ‘here.’ Randomization hence does not ‘guarantee’ or ‘ensure’ making the right causal claim. Although randomization may help us rule out certain possible causal claims, randomization per se does not guarantee anything!

• Even if both sampling and assignment are made in an ideal random way, performing standard randomized experiments only gives you averages. The problem here is that although we may get an estimate of the ‘true’ average causal effect, this may ‘mask’ important heterogeneous effects of a causal nature. Although we get the right answer that the average causal effect is 0, those who are ‘treated’ may have causal effects equal to -100 and those ‘not treated’ may have causal effects equal to 100 (see the small simulation after this list). Contemplating being treated or not, most people would probably be interested in knowing about this underlying heterogeneity and would not consider the average effect particularly enlightening.

• There is almost always a trade-off between bias and precision. In real-world settings, a little bias often does not outweigh the gain in precision. And — most importantly — in case we have a population with sizeable heterogeneity, the average treatment effect of the sample may differ substantially from the average treatment effect in the population. If so, the value of any extrapolating inferences made from trial samples to other populations is highly questionable.

• Since most real-world experiments and trials are based on a single randomization, knowing what would happen if you kept on randomizing forever does not help you ‘ensure’ or ‘guarantee’ that you avoid false causal conclusions in the one particular randomized experiment you actually do perform. It is indeed difficult to see why thinking about what you know you will never do would make you happy about what you actually do.

• And then there is also the problem that ‘Nature’ may not always supply us with the random experiments we are most interested in. If we are interested in X, why should we study Y only because design dictates that? Method should never be prioritized over substance!
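To make the heterogeneity point in the second bullet concrete, here is a toy simulation (my own illustration, with made-up numbers): the estimated average treatment effect is essentially zero even though every single individual effect is either -100 or +100.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Two latent types with opposite individual-level causal effects.
effect = np.where(rng.random(n) < 0.5, -100.0, 100.0)

# Potential outcomes: y0 without treatment, y1 with treatment.
y0 = rng.normal(50.0, 10.0, size=n)
y1 = y0 + effect

# Ideal randomized assignment.
treated = rng.random(n) < 0.5

# The experiment only reveals the difference in group averages ...
ate_estimate = y1[treated].mean() - y0[~treated].mean()

# ... which hides the fact that no individual has an effect anywhere near zero.
print(f"Estimated average treatment effect: {ate_estimate:.2f}")
print(f"Share with individual effect -100:  {(effect == -100.0).mean():.2f}")
print(f"Share with individual effect +100:  {(effect == +100.0).mean():.2f}")
```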

Randomization is not a panacea. It is not the best method for all questions and circumstances. Proponents of randomization make claims about its ability to deliver causal knowledge that are simply wrong. There are good reasons to be sceptical of this nowadays popular — and ill-informed — view that randomization is the only valid and the best method on the market. It is not.

Trygve Haavelmo — the father of modern probabilistic econometrics — once wrote that he and other econometricians could not build a complete bridge between their models and reality by logical operations alone, but finally had to make “a non-logical jump.” To Haavelmo and his modern followers, econometrics is not really in the truth business. The explanations we can give of economic relations and structures based on econometric models are “not hidden truths to be discovered” but rather our own “artificial inventions”.

Rigour and elegance in the analysis do not make up for the gap between reality and model. A crucial ingredient to any economic theory that wants to use probabilistic models should be a convincing argument for the view that it is harmless to consider economic variables as stochastic variables. In most cases, no such arguments are given.

A rigorous application of econometric methods in economics really presupposes that the phenomena of our real-world economies are ruled by stable causal relations between variables. To warrant this assumption one, however, has to convincingly establish that the targeted acting causes are stable and invariant so that they maintain their parametric status after the bridging. The endemic lack of predictive success of the econometric project indicates that this hope of finding fixed parameters is a hope for which there really is no other ground than hope itself.

Evidence-based theories and policies are highly valued nowadays. Randomization is supposed to control for bias from unknown confounders. The received opinion is that evidence based on randomized experiments therefore is the best.

More and more economists have also lately come to advocate randomization as the principal method for ensuring valid causal inferences.

I would however rather argue that randomization — just as econometrics — promises more than it can deliver, basically because it requires assumptions that in practice are not possible to maintain.

Given the assumptions (such as manipulability, transitivity, separability, additivity, linearity, etc.), econometric methods deliver deductive inferences. The problem, of course, is that we will never completely know when the assumptions are right. And although randomization may contribute to controlling for confounding, it does not guarantee it, since genuine randomness presupposes infinite experimentation and we know all real experimentation is finite.

The prize committee says that econometrics and natural experiments “help answer important questions for society.” Maybe so, but it is far from evident to what extent they do. As a rule, the econometric practitioners of natural experiments have far too inflated hopes about their explanatory potential and value.

The Münchhausen Trilemma

9 Oct, 2021 at 15:15 | Posted in Theory of Science & Methodology | 2 Comments


The term ‘Münchhausen Trilemma’ is used in epistemology to stress the impossibility of proving any truth (even in logic and mathematics). The term was coined by Hans Albert in 1968 in reference to Popper’s trilemma of dogmatism vs. infinite regress vs. psychologism.

When do regressions give us causality?

9 Oct, 2021 at 09:13 | Posted in Economics | Leave a comment

The issue boils down to this. Does the conditional distribution of Y given X represent mere association, or does it represent the distribution Y would have had if we had intervened and set the values of X? There is a similar question for the distribution of Z given X and Y. These questions cannot be answered just by fitting the equations and doing data analysis on X, Y, and Z. Additional information is needed. From this perspective, the equations are “structural” if the conditional distributions inferred from the equations tell us the likely impact of interventions, thereby allowing a causal rather than an associational interpretation. The take-home message will be clear: You cannot infer a causal relationship from a data set by running regressions—unless there is substantial prior knowledge about the mechanisms that generated the data …

We want to use regression to draw causal inferences from nonexperimental data. To do that, we need to know that certain parameters and certain distributions would remain invariant if we were to intervene. That invariance can seldom if ever be demonstrated by intervention. What then is the source of the knowledge? “Economic theory” seems like a natural answer but an incomplete one. Theory has to be anchored in reality. Sooner or later, invariance needs empirical demonstration, which is easier said than done …

The lesson: Finding the mathematical consequences of assumptions matters, but connecting assumptions to reality matters even more.
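A minimal simulation of Freedman’s point (my own toy example, not his): the regression of Y on X recovers an association, not what Y would have been had we intervened on X, and the difference only becomes visible because we happen to know the mechanism that generated the data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Structural story (known only because we wrote it): Z causes both X and Y,
# while X has NO causal effect on Y at all.
z = rng.normal(size=n)
x = z + rng.normal(size=n)
y = 2.0 * z + rng.normal(size=n)

# Regressing Y on X alone reports a strong 'effect' ...
b_naive = np.cov(x, y)[0, 1] / np.var(x, ddof=1)

# ... which vanishes once the confounder Z is included -- but knowing that Z
# is the right thing to control for is prior knowledge, not data analysis.
X = np.column_stack([np.ones(n), x, z])
b_controlled = np.linalg.lstsq(X, y, rcond=None)[0][1]

print(f"Y-on-X coefficient, Z omitted:    {b_naive:.2f}")      # about 1.0
print(f"Y-on-X coefficient, Z controlled: {b_controlled:.2f}")  # about 0.0
```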

I got you babe (personal)

8 Oct, 2021 at 21:07 | Posted in Varia | Leave a comment


As always for you, Jeanette Meyer.

Raising Keynes

7 Oct, 2021 at 11:24 | Posted in Economics | 3 Comments

The defeat and suppression of the classical perspective — with its evolutionary, institutionalist, and developmental descendants — cleared the way for a dogmatic economics that exalted self-regulating competitive markets …

As we have seen, this perspective soon ran into serious — but temporary — difficulties with the Great Depression, mass unemployment, and the rise of Keynes, whose theory is revived in Harvard University economist Stephen A. Marglin’s Raising Keynes …

Marglin’s basic argument is stated in two parts. First, he focuses on the “Keynesian first-pass model” in the context of the static, general equilibrium framework favored by John Hicks (this is known in textbooks as the IS-LM model). He concludes that within that framework, Keynes’s theory is reduced to dealing with “frictions and rigidities,” implying that “if only” markets were competitive in the neoclassical mode, mass unemployment could not exist …

In his “second-pass model,” Marglin resets Keynes in a dynamic frame, dealing with events and changes that occur through time … Like Keynes, Marglin argues, correctly, that in this world, persistent involuntary unemployment cannot be resolved by cutting wages and breaking unions, even if you can get away with doing these things. Here, Marglin is, in effect, restating what Keynes’s closest collaborators always argued. My first encounter with Robinson came in a University of Cambridge lecture hall in 1974. She had been sitting in to heckle Frank Hahn, one of the leading neoclassicists there at the time. As undergraduates fled the scene, I introduced myself and she invited me to lunch. Once seated in the buttery of the University Library, she started in: “You can’t put time on the IS-LM diagram. Time comes out of the blackboard.” I had no idea what she was talking about, but she certainly did (and now so do I) …

Marglin has taken 80 years of neoclassical distortions of Keynes, presented them with great clarity in their own language, and then pounded them into dust, pushing the detritus back into the faces of the high priests of the neoclassical synthesis, the New Keynesians, and the New Classical Economists. Raising Keynes issues a challenge that they would be cowardly to refuse – which is not to suggest that they won’t do their best to ignore it.

James K. Galbraith

Again — as so often — it turns out that when we economists disagree it ultimately boils down to methodology. And here — again — we are back to the question of whether the ‘bastard Keynesian’ hobbyhorse, the IS-LM interpretation of Keynes, is fruitful and relevant for understanding monetary economies.
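For readers who want the notation on the table, the textbook IS-LM system at issue (a generic modern statement, not Hicks’s 1937 original) is usually written as

$$\text{IS:}\quad Y = C(Y - T) + I(r) + G, \qquad \text{LM:}\quad \frac{M}{P} = L(Y, r),$$

with income Y and the interest rate r determined simultaneously by goods-market and money-market clearing. It is precisely this static, simultaneous-equilibrium character that the criticism below is aimed at.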

Here yours truly’s view is basically the same as Marglin’s — IS-LM is not fruitful and relevant and it does not adequately reflect the width and depth of Keynes’s insights on the workings of monetary economies:

Almost nothing in the post-General Theory writings of Keynes suggests him considering Hicks’s IS-LM anywhere near a faithful rendering of his thought. In Keynes’s canonical statement of the essence of his theory — in the famous 1937 Quarterly Journal of Economics article — there is nothing to even suggest that Keynes would have thought the existence of a Keynes-Hicks-IS-LM-theory anything but pure nonsense. John Hicks, the man who invented IS-LM in his 1937 Econometrica review of Keynes’s General Theory — “Mr. Keynes and the ‘Classics’. A Suggested Interpretation” — returned to it in an article in 1980 — “IS-LM: an explanation” — in the Journal of Post Keynesian Economics. Self-critically he wrote that “the only way in which IS-LM analysis usefully survives — as anything more than a classroom gadget, to be superseded, later on, by something better — is in application to a particular kind of causal analysis, where the use of equilibrium methods, even a drastic use of equilibrium methods, is not inappropriate.” What Hicks acknowledges in 1980 is basically that his original IS-LM model ignored significant parts of Keynes’s theory. IS-LM is inherently a temporary general equilibrium model. However — much of the discussion we have in macroeconomics is about timing and the speed of relative adjustments of quantities, commodity prices and wages — on which IS-LM doesn’t have much to say.

IS-LM forces to a large extent the analysis into a static comparative equilibrium setting that doesn’t in any substantial way reflect the processual nature of what takes place in historical time. To me, Keynes’s analysis is in fact inherently dynamic — at least in the sense that it was based on real historical time and not the logical-ergodic-non-entropic time concept used in most neoclassical model building. And as Niels Bohr used to say — thinking is not the same as just being logical …

IS-LM reduces interaction between real and nominal entities to a rather constrained interest mechanism which is far too simplistic for analyzing complex financialised modern market economies.

IS-LM gives no place for real money, but rather trivializes the role that money and finance play in modern market economies. As Hicks, commenting on his IS-LM construct, had it in 1980 — “one did not have to bother about the market for loanable funds.” From the perspective of modern monetary theory, it’s obvious that IS-LM to a large extent ignores the fact that money in modern market economies is created in the process of financing — and not as IS-LM depicts it, something that central banks determine.

IS-LM is typically set in a current values numéraire framework that definitely downgrades the importance of expectations and uncertainty — and a fortiori gives too large a role for interest rates as ruling the roost when it comes to investments and liquidity preferences. In this regard, it is actually as bad as all the modern microfounded Neo-Walrasian-New-Keynesian models where Keynesian genuine uncertainty and expectations aren’t really modelled. Especially the two-dimensionality of Keynesian uncertainty — both a question of probability and “confidence” — has been impossible to incorporate into this framework, which basically presupposes people following the dictates of expected utility theory (high probability may mean nothing if the agent has low “confidence” in it). Reducing uncertainty to risk — implicit in most analyses building on IS-LM models — is nothing but hand waving. According to Keynes we live in a world permeated by unmeasurable uncertainty — not quantifiable stochastic risk — which often forces us to make decisions based on anything but “rational expectations.” Keynes rather thinks that we base our expectations on the “confidence” or “weight” we put on different events and alternatives. To Keynes, expectations are a question of weighing probabilities by “degrees of belief,” beliefs that often have preciously little to do with the kind of stochastic probabilistic calculations made by the rational agents as modelled by “modern” social sciences. And often we “simply do not know.”

IS-LM not only ignores genuine uncertainty, but also the essentially complex and cyclical character of economies and investment activities, speculation, endogenous money, labour market conditions, and the importance of income distribution. And as Axel Leijonhufvud so eloquently notes on IS-LM economics — “one doesn’t find many inklings of the adaptive dynamics behind the explicit statics.” Most of the insights on dynamic coordination problems that made Keynes write General Theory are lost in the translation into the IS-LM framework.

Given this, it’s difficult not to side with Marglin. The IS-LM approach is not fruitful or relevant for understanding modern monetary economies. And it does not capture Keynes’s approach to the economy other than in name.

Mainstream economics — the poverty of fictional story-telling

5 Oct, 2021 at 09:59 | Posted in Economics | 3 Comments

One of the limitations of economics is the restricted possibility of performing experiments, forcing it to rely mainly on observational studies for knowledge of real-world economies.

But still — the idea of performing laboratory experiments holds a firm grip on our wish to discover (causal) relationships between economic ‘variables.’ If only we could isolate and manipulate variables in controlled environments, we would probably find ourselves in a situation where we, with greater ‘rigour’ and ‘precision,’ could describe, predict, or explain economic happenings in terms of ‘structural’ causes, ‘parameter’ values of relevant variables, and economic ‘laws.’

Galileo Galilei’s experiments are often held up as exemplary of how to perform experiments to learn something about the real world. Galileo’s heavy balls dropping from the tower of Pisa confirmed that the distance an object falls is proportional to the square of time, and that this law (empirical regularity) of falling bodies can be applied outside a vacuum tube when, e.g., air resistance is negligible.

The big problem is to decide or find out exactly for which objects air resistance (and other potentially ‘confounding’ factors) is ‘negligible.’ In the case of heavy balls, air resistance is obviously negligible, but how about feathers or plastic bags?
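To put rough numbers on the negligibility question, here is a small sketch (illustrative parameter values only) comparing Galileo’s idealized law, d = ½gt², with a fall subject to linear air drag. For a heavy, compact ball the two curves are practically indistinguishable; for a feather-like object they are not.

```python
import numpy as np

g = 9.81                      # gravitational acceleration, m/s^2
t = np.linspace(0.0, 2.0, 201)
dt = t[1] - t[0]

def fall_with_drag(k_over_m):
    """Distance fallen under gravity with linear drag, dv/dt = g - (k/m) v,
    integrated with a simple Euler scheme."""
    v, dist, d = 0.0, 0.0, []
    for _ in t:
        d.append(dist)
        dist += v * dt
        v += (g - k_over_m * v) * dt
    return np.array(d)

ideal = 0.5 * g * t**2             # Galileo's law: drag assumed negligible
heavy_ball = fall_with_drag(0.01)  # small k/m: drag really is negligible
feather = fall_with_drag(5.0)      # large k/m: drag dominates the motion

print(f"Distance after 2 s, idealized law: {ideal[-1]:6.2f} m")
print(f"Distance after 2 s, heavy ball:    {heavy_ball[-1]:6.2f} m")
print(f"Distance after 2 s, feather-like:  {feather[-1]:6.2f} m")
```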

One possibility is to take the all-encompassing-theory road and find out all about the possible disturbing/confounding factors — not only air resistance — influencing the fall, and build that into one great model delivering accurate predictions on what happens when the object that falls is not only a heavy ball but also a feather or a plastic bag. This usually amounts to ultimately stating some kind of ceteris paribus interpretation of the ‘law.’

Another road to take would be to concentrate on the negligibility assumption and to specify the domain of applicability to be only heavy compact bodies. The price you have to pay for this is that (1) ‘negligibility’ may be hard to establish in open real-world systems, (2) the generalization you can make from ‘sample’ to ‘population’ is heavily restricted, and (3) you actually have to use some ‘shoe leather’ and empirically try to find out how large the ‘reach’ of the ‘law’ is.

In mainstream economics, one has usually settled for the ‘theoretical’ road (and in case you think the present ‘natural experiments’ hype has changed anything, remember that to mimic real experiments, exceedingly stringent special conditions have to obtain).

In the end, it all boils down to one question — are there any Galilean ‘heavy balls’ to be found in economics, so that we can indisputably establish the existence of economic laws operating in real-world economies?

As far as I can see there are some heavy balls out there, but not even one single real economic law.

Economic factors/variables are more like feathers than heavy balls — non-negligible factors (like air resistance and chaotic turbulence) are hard to rule out as having no influence on the object studied.

Galilean experiments are hard to carry out in economics, and the theoretical ‘analogue’ models economists construct and in which they perform their ‘thought experiments’ build on assumptions that are far away from the kind of idealized conditions under which Galileo performed his experiments. The ‘nomological machines’ that Galileo and other scientists have been able to construct have no real analogues in economics. The stability, autonomy, modularity, and interventional invariance that we may find between entities in nature simply are not there in real-world economies. Those are real-world facts, and contrary to the beliefs of most mainstream economists, they won’t go away simply by applying deductive-axiomatic economic theory with tons of more or less unsubstantiated assumptions.

By this, I do not mean to say that we have to discard all (causal) theories/laws building on modularity, stability, invariance, etc. But we have to acknowledge the fact that outside the systems that possibly fulfill these requirements/assumptions, they are of little substantial value. Running paper-and-pen experiments on artificial ‘analogue’ model economies is a sure way of ‘establishing’ (causal) economic laws or solving intricate econometric problems of autonomy, identification, invariance and structural stability — in the model world. But they are poor substitutes for the real thing, and they don’t have much bearing on what goes on in real-world open social systems. Setting up convenient circumstances for conducting Galilean experiments may tell us a lot about what happens under those kinds of circumstances. But — few, if any, real-world social systems are ‘convenient.’ So most of those systems, theories and models are irrelevant for letting us know what we really want to know.

To solve, understand, or explain real-world problems you actually have to know something about them — logic, pure mathematics, data simulations or deductive axiomatics don’t take you very far. Most econometrics and economic theories/models are splendid logic machines. But — applying them to the real world is a totally hopeless undertaking! The assumptions one has to make in order to successfully apply these deductive-axiomatic theories/models/machines are devastatingly restrictive and mostly empirically untestable — and hence make their real-world scope ridiculously narrow. To fruitfully analyze real-world phenomena with models and theories you cannot build on assumptions that are patently absurd and known to be false. No matter how much you would like the world to consist entirely of heavy balls, the world is not like that. The world also has its fair share of feathers and plastic bags.

Most of the ‘idealizations’ we find in mainstream economic models are not ‘core’ assumptions, but rather structural ‘auxiliary’ assumptions. Without those supplementary assumptions, the core assumptions deliver next to nothing of interest. So to come up with interesting conclusions you have to rely heavily on those other — ‘structural’ — assumptions.

Economic models frequently invoke entities that do not exist, such as perfectly rational agents, perfectly inelastic demand functions, and so on. As economists often defensively point out, other sciences too invoke non-existent entities, such as the frictionless planes of high-school physics. But there is a crucial difference: the false-ontology models of physics and other sciences are empirically constrained. If a physics model leads to successful predictions and interventions, its false ontology can be forgiven, at least for instrumental purposes – but such successful prediction and intervention is necessary for that forgiveness. The idealizations of economic models, by contrast, have not earned their keep in this way. So the problem is not the idealizations in themselves so much as the lack of empirical success they buy us in exchange. As long as this problem remains, claims of explanatory credit will be unwarranted.

A. Alexandrova & R. Northcott

In physics, we have theories and centuries of experience and experiments that show how gravity makes bodies move. In economics, we know there is nothing equivalent. So instead mainstream economists necessarily have to load their theories and models with sets of auxiliary structural assumptions to get any results at all in their models.

So why then do mainstream economists keep on pursuing this modeling project?

Mainstream ‘as if’ models are based on the logic of idealization and a set of tight axiomatic and ‘structural’ assumptions from which consistent and precise inferences are made. The beauty of this procedure is, of course, that if the assumptions are true, the conclusions necessarily follow. But it is a poor guide for real-world systems. As Hans Albert has it on this ‘style of thought’:

A theory is scientifically relevant first of all because of its possible explanatory power, its performance, which is coupled with its informational content …

Clearly, it is possible to interpret the ‘presuppositions’ of a theoretical system … not as hypotheses, but simply as limitations to the area of application of the system in question. Since a relationship to reality is usually ensured by the language used in economic statements, in this case the impression is generated that a content-laden statement about reality is being made, although the system is fully immunized and thus without content. In my view that is often a source of self-deception in pure economic thought …

The way axioms and theorems are formulated in mainstream economics often leaves their specification without almost any restrictions whatsoever, safely making every imaginable piece of evidence compatible with the all-embracing ‘theory’ — and a theory without informational content never risks being empirically tested and found falsified. Used in mainstream ‘thought experimental’ activities, it may, of course, be very ‘handy,’ but it is totally void of any empirical value.

Some economic methodologists have lately been arguing that economic models may well be considered ‘minimal models’ that portray ‘credible worlds’ without having to care about things like similarity, isomorphism, simplified ‘representationality’ or resemblance to the real world. These models are said to resemble ‘realistic novels’ that portray ‘possible worlds’. And sure: economists constructing and working with that kind of models learn things about what might happen in those ‘possible worlds’. But is that really the stuff real science is made of? I think not. As long as one doesn’t come up with credible export warrants to real-world target systems and show how those models — often building on idealizations with known-to-be-false assumptions — enhance our understanding or explanations of the real world, well, then they are nothing more than novels. Showing that something is possible in a ‘possible world’ doesn’t give us a justified license to infer that it therefore also is possible in the real world. ‘The Great Gatsby’ is a wonderful novel, but if you truly want to learn about what is going on in the world of finance, I would rather recommend reading Minsky or Keynes and directly confronting real-world finance.

Different models have different cognitive goals. Constructing models that aim for explanatory insight may not optimize them for making (quantitative) predictions or for delivering some other kind of ‘understanding’ of what’s going on in the intended target system. All modelling in science has trade-offs. There simply is no ‘best’ model. For one purpose in one context model A is ‘best’; for other purposes and contexts model B may be deemed ‘best’. Depending on the level of generality, abstraction, and depth, we come up with different models. But even so, I would argue that if we are looking for what I have called ‘adequate explanations’ (Syll, Ekonomisk teori och metod, Studentlitteratur, 2005) it is not enough to just come up with ‘minimal’ or ‘credible world’ models.

The assumptions and descriptions we use in our modeling have to be true — or at least ‘harmlessly’ false — and give a sufficiently detailed characterization of the mechanisms and forces at work. Models in mainstream economics do nothing of the kind.

Coming up with models that show how things may possibly be explained is not what we are looking for. It is not enough. We want models that build on assumptions that are not in conflict with known facts and that show how things actually are to be explained. Our aspirations have to be more far-reaching than just constructing coherent and ‘credible’ models about ‘possible worlds’. We want to understand and explain ‘difference-making’ in the real world and not just in some made-up fantasy world. No matter how many mechanisms or coherent relations you represent in your model, you still have to show that these mechanisms and relations are at work and exist in society if we are to do real science. Science has to be something more than just more or less realistic ‘story-telling’ or ‘explanatory fictionalism.’ You have to provide decisive empirical evidence that what you can infer in your model also helps us to uncover what actually goes on in the real world. It is not enough to present your students with epistemically informative insights about logically possible but non-existent general equilibrium models. You also, and more importantly, have to have a world-linking argumentation that shows how those models explain or teach us something about real-world economies. If you fail to support your models in that way, why should we care about them? And if you do not inform us about what the intended real-world target systems of your modelling are, how are we going to be able to value or test them? Without that kind of information it is impossible for us to check whether the ‘possible world’ models you come up with actually hold for the one world in which we live — the real world.
