DSGE models — a macroeconomic dead end

7 Dec, 2022 at 12:33 | Posted in Economics | 1 Comment

Both approaches to DSGE macroeconometrics (VAR and Bayesian) have evident vulnerabilities, which substantially derive from how parameters are handled in the technique. In brief, parameters from formally elegant models are calibrated in order to obtain simulated values that reproduce some stylized fact and/or some empirical data distribution, thus relating the underlying theoretical model and the observational data. But there are at least three main respects in which this practice fails.

First of all, DSGE models have substantial difficulties in taking account of many important mechanisms that actually govern real economies, for example, institutional constraints like the tax system, thereby reducing DSGE power in policy analysis … In the attempt to deal with this serious problem, various parameter constraints on the model policy block are provided. They derive from institutional analysis and reflect policymakers’ operational procedures. However, such model extensions, which are intended to reshape its predictions to reality and to deal with the underlying optimization problem, prove to be highly inflexible, turning DSGE into a “straitjacket tool” … In particular, the structure imposed on DSGE parameters entails various identification problems, such as observational equivalence, underidentification, and partial and weak identification.

These problems affect both empirical DSGE approaches. Fundamentally, they are ascribable to the likelihoods to estimate. In fact, the range of structural parameters that generate impulse response functions and data distributions fitting very closely to the true ones does include model specifications that show very different features and welfare properties. So which is the right model specification (i.e., parameter set) to choose? As a consequence, reasonable estimates do not derive from the informative contents of models and data, but rather from the ancillary restrictions that are necessary to make the likelihoods informative, which are often arbitrary. Thus, after Lucas’s super-exogeneity critique has been thrown out the door, it comes back through the window.

Roberto Marchionatti & Lisa Sella

Our admiration for technical virtuosity should not blind us to the fact that we have to have a cautious attitude toward probabilistic inferences in economic contexts. We should look out for causal relations, but econometrics can never be more than a starting point in that endeavour, since econometric (statistical) explanations are not explanations in terms of mechanisms, powers, capacities, or causes. Firmly stuck in an empiricist tradition, econometrics is only concerned with the measurable aspects of reality. But there is always the possibility that there are other variables – of vital importance, and although perhaps unobservable and non-additive, not necessarily epistemologically inaccessible – that were not considered for the model. The variables that were included can hence never be guaranteed to be more than potential causes, not real causes. A rigorous application of econometric methods in economics really presupposes that the phenomena of our real-world economies are ruled by stable causal relations between variables. The endemic lack of predictive success of the econometric project indicates that this hope of finding fixed parameters is a hope for which there really is no other ground than hope itself.

This is a more fundamental and radical problem than the celebrated ‘Lucas critique’ has suggested. It is not a question of whether deep parameters, absent on the macro level, exist in ‘tastes’ and ‘technology’ on the micro level. It goes deeper. Real-world social systems are not governed by stable causal mechanisms or capacities.

The kinds of laws and relations that econom(etr)ics has established, are laws and relations about entities in models that presuppose causal mechanisms being atomistic and additive. When causal mechanisms operate in real-world social systems they mostly do it in ever-changing and unstable combinations where the whole is more than a mechanical sum of parts. If economic regularities obtain they do it (as a rule) only because we engineered them for that purpose. Outside man-made ‘nomological machines’ they are rare, or even non-existent. Unfortunately, that also makes most of the achievements of econometrics rather useless.

Both the ‘Lucas critique’ and the ‘Keynes critique’ of econometrics argued that it was inadmissible to project history on the future. Consequently, an economic policy cannot presuppose that what has worked before will continue to do so in the future. That macroeconomic models could get hold of correlations between different ‘variables’ was not enough. If they could not get at the causal structure that generated the data, they were not really ‘identified’. Lucas himself drew the conclusion that the remedy for unstable relations was to construct models with clear microfoundations, where forward-looking optimizing individuals and robust, deep, behavioral parameters are seen to be stable even to changes in economic policies. As yours truly has argued in a couple of posts — e.g. here and here — this, however, is a dead end.

Brothers in arms

5 Dec, 2022 at 11:01 | Posted in Varia | Leave a comment


Then the righteous will answer him, ‘Lord, when did we see you hungry and feed you, or thirsty and give you something to drink? When did we see you a stranger and invite you in, or needing clothes and clothe you? When did we see you sick or in prison and go to visit you?’

The King will reply, ‘Truly I tell you, whatever you did for one of the least of these brothers and sisters of mine, you did for me.’

Economists — people biased toward overconfidence

4 Dec, 2022 at 20:50 | Posted in Economics | 2 Comments

Now consider what happened in November 2007. It was just one month before the Great Recession officially began …

Economists in the Survey of Professional Forecasters, a quarterly poll put out by the Federal Reserve Bank of Philadelphia, nevertheless foresaw a recession as relatively unlikely. Instead, they expected the economy to grow at a just slightly below average rate of 2.4 percent in 2008 … This was a very bad forecast: GDP actually shrank by 3.3 percent once the financial crisis hit. What may be worse is that the economists were extremely confident in their prediction. They assigned only a 3 percent chance to the economy’s shrinking by any margin over the whole of 2008 …

Indeed, economists have for a long time been much too confident in their ability to predict the direction of the economy … Their predictions have not just been overconfident but also quite poor in a real-world sense … Economic forecasters get more feedback than people in most other professions, but they haven’t chosen to correct for their bias toward overconfidence.

My Sweet Lord

4 Dec, 2022 at 17:40 | Posted in Varia | Leave a comment


Exile

3 Dec, 2022 at 17:08 | Posted in Varia | Leave a comment


Lacrimosa (personal)

2 Dec, 2022 at 14:32 | Posted in Varia | Leave a comment

Östra kyrkogården in Lund is a place that has always meant a great deal to me. I have come here many times to seek solace and rest for my soul. Three of the people I have loved most in my life are buried here — my beloved Kristina, my brother Peter, and my friend Bengt.


Weekend combinatorics (III)

2 Dec, 2022 at 13:11 | Posted in Statistics & Econometrics | 3 Comments

An easy one this week: At a small economics conference, a photographer wants to line up nine participants for a photo. Two of them — Robert and Milton — insist on standing next to each other. How many different arrangements (lineups) are possible?
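A quick sketch of the standard solution: glue Robert and Milton into one unit, giving 8! orderings of eight units, times 2 for the two ways the pair can stand internally, i.e. 80,640 lineups. The snippet below (the other participants' labels are made up) confirms the closed form by brute force:

```python
from itertools import permutations
from math import factorial

# Closed form: treat the pair as one unit -> 8! orderings of 8 units,
# times 2 for the two internal orders of Robert and Milton.
closed_form = 2 * factorial(8)

# Brute-force check over all 9! = 362,880 lineups (small enough to enumerate).
people = ["Robert", "Milton"] + [f"P{i}" for i in range(7)]
brute = sum(
    1
    for lineup in permutations(people)
    if abs(lineup.index("Robert") - lineup.index("Milton")) == 1
)

assert closed_form == brute == 80_640
```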

Da doo ron ron

1 Dec, 2022 at 16:08 | Posted in Varia | Leave a comment


Three pages — all it takes to change science forever

1 Dec, 2022 at 08:50 | Posted in Theory of Science & Methodology | Leave a comment


And here is Edmund Gettier’s three-page article.

The one logic lecture mainstream economists did not attend

30 Nov, 2022 at 13:58 | Posted in Economics | Leave a comment


Using formal mathematical modelling, mainstream economists can certainly guarantee that the conclusions hold given the assumptions. However, the validity we get in abstract model worlds does not warrant transfer to real-world economies. Validity may be good, but it is not enough.

Mainstream economists are proud of having an ever-growing smorgasbord of models to cherry-pick from (as long as, of course, the models do not question the standard modelling strategy) when performing their analyses. The ‘rigorous’ and ‘precise’ deductions made in these closed models, however, are not in any way matched by a similar stringency or precision when it comes to what ought to be the most important stage of any economic research — making statements about and explaining things in real economies. Although almost every mainstream economist holds the view that thought-experimental modelling has to be followed by confronting the models with reality — which is what they indirectly want to predict/explain/understand using their models — they then all of a sudden become exceedingly vague and imprecise. It is as if all the intellectual force has been invested in the modelling stage and nothing is left for what really matters — what exactly these models teach us about real economies.

No matter how precise and rigorous the analysis, and no matter how hard one tries to cast the argument in modern mathematical form, it does not push economic science forward one single iota if it does not stand the acid test of relevance to the target. Proving things ‘rigorously’ in mathematical models is not a good recipe for doing interesting and relevant economic analysis. Forgetting to supply export warrants to the real world makes the analysis an empty exercise in formalism without real scientific value. In the realm of true science, it is of little or no value to simply make claims about a model and lose sight of reality.

To have valid evidence is not enough. What economics needs is sound evidence. The premises of a valid argument do not have to be true; a sound argument, on the other hand, is not only valid but builds on premises that are true. Aiming only for validity, without soundness, is setting the aspiration level of economics too low for developing a realist and relevant science.

Necessary and sufficient (student stuff)

30 Nov, 2022 at 09:06 | Posted in Theory of Science & Methodology | Leave a comment


Is economics nothing but a library of models?

28 Nov, 2022 at 22:32 | Posted in Economics | 8 Comments

Chameleons arise and are often nurtured by the following dynamic. First a bookshelf model is constructed that involves terms and elements that seem to have some relation to the real world and assumptions that are not so unrealistic that they would be dismissed out of hand. The intention of the author, let’s call him or her “Q,” in developing the model may be to say something about the real world or the goal may simply be to explore the implications of making a certain set of assumptions … If someone skeptical about X challenges the assumptions made by Q, some will say that a model shouldn’t be judged by the realism of its assumptions, since all models have assumptions that are unrealistic …

Chameleons are models that are offered up as saying something significant about the real world even though they do not pass through the filter. When the assumptions of a chameleon are challenged, various defenses are made (e.g., one shouldn’t judge a model by its assumptions, any model has equal standing with all other models until the proper empirical tests have been run, etc.). In many cases the chameleon will change colors as necessary, taking on the colors of a bookshelf model when challenged, but reverting back to the colors of a model that claims to apply to the real world when not challenged.

Paul Pfleiderer

As we all know, economics has become a model-based science. And in many of the methodology and philosophy of economics books published during the last two decades, this is seen as something positive.

In Dani Rodrik’s Economics Rules (OUP 2015) — just to take one illustrative example — economics is looked upon as nothing but a smorgasbord of ‘thought experimental’ models. For every purpose you may have, there is always an appropriate model to pick. The proliferation of economic models is unproblematically presented as a sign of great diversity and abundance of new ideas:

Rather than a single, specific model, economics encompasses a collection of models … Economics is, in fact, a collection of diverse models … The possibilities of social life are too diverse to be squeezed into unique frameworks. But each economic model is like a partial map that illuminates a fragment of the terrain …

Different contexts … require different models … The correct answer to almost any question in economics is: It depends. Different models, each equally respectable, provide different answers.

But, really, there have to be some limits to the flexibility of a theory!

If you can freely substitute any part of the core and auxiliary sets of assumptions and still consider that you deal with the same theory, well, then it’s not a theory, but a chameleon picked from your model library.

The big problem with the mainstream cherry-picking view of models is of course that the theories and models presented get totally immunized against all critique. A sure way to get rid of all kinds of ‘anomalies,’ yes, but at a far too high price. So people do not optimize? No problem, we have models that assume satisficing! So people do not maximize expected utility? No problem, we have models that assume … etc., etc …

Clearly, it is possible to interpret the ‘presuppositions’ of a theoretical system … not as hypotheses, but simply as limitations to the area of application of the system in question. Since a relationship to reality is usually ensured by the language used in economic statements, in this case the impression is generated that a content-laden statement about reality is being made, although the system is fully immunized and thus without content. In my view that is often a source of self-deception in pure economic thought …

A further possibility for immunizing theories consists in simply leaving open the area of application of the constructed model so that it is impossible to refute it with counter examples. This of course is usually done without a complete knowledge of the fatal consequences of such methodological strategies for the usefulness of the theoretical conception in question, but with the view that this is a characteristic of especially highly developed economic procedures: the thinking in models, which, however, among those theoreticians who cultivate neoclassical thought, in essence amounts to a new form of Platonism.

Hans Albert

A theory that accommodates any observed phenomena whatsoever by creating a new special model for the occasion, and a fortiori having no chance of being tested severely and found wanting, is of little or no real value at all.

Chebyshev’s and Markov’s Inequality Theorems

28 Nov, 2022 at 14:45 | Posted in Statistics & Econometrics | Leave a comment

Chebyshev’s Inequality Theorem — named after the Russian mathematician Pafnuty Chebyshev (1821-1894) — states that for a population (or sample) at most 1/k² of the distribution’s values can be more than k standard deviations away from the mean. The beauty of the theorem is that although we may not know the exact distribution of the data — e.g. whether it is normally distributed — we may still say with certitude (since the theorem holds universally) that there are bounds on probabilities!
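As a sanity check, the bound can be verified numerically on data that are nothing like normally distributed. The sketch below (the log-normal sample is an arbitrary skewed choice, not anything from the text) compares the observed fraction of values lying more than k standard deviations from the mean with the universal bound 1/k²:

```python
import math
import random

def chebyshev_check(data, k):
    """Return (observed fraction beyond k std devs, universal bound 1/k**2)."""
    n = len(data)
    mean = sum(data) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in data) / n)
    observed = sum(1 for x in data if abs(x - mean) > k * std) / n
    return observed, 1 / k**2

random.seed(1)
# A heavily skewed (log-normal) sample: nothing like a normal distribution.
sample = [random.lognormvariate(0, 1.5) for _ in range(10_000)]
for k in (2, 3, 4):
    observed, bound = chebyshev_check(sample, k)
    assert observed <= bound  # the bound holds whatever the distribution
```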

Another beautiful result of probability theory is Markov’s inequality (after the Russian mathematician Andrei Markov (1856-1922)):

If X is a non-negative stochastic variable (X ≥ 0) with a finite expectation value E(X), then for every a > 0

P{X ≥ a} ≤ E(X)/a

If the production of cars in a factory during a week is assumed to be a stochastic variable with an expectation value (mean) of 50 units, we can — based on nothing but this inequality — conclude that the probability that the production in a given week exceeds 100 units cannot be greater than 50% [P(X ≥ 100) ≤ 50/100 = 0.5 = 50%].
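The car-factory example can also be checked by simulation. The distributional choice below (exponential with mean 50) is purely illustrative, since Markov's inequality needs nothing but non-negativity and the mean:

```python
import random

def markov_bound(mean, a):
    """Markov's inequality: for non-negative X, P(X >= a) <= E(X)/a."""
    return mean / a

random.seed(7)
# Model weekly car production (illustratively) as an exponential variable
# with mean 50 units, then compare the simulated tail frequency to the bound.
weeks = [random.expovariate(1 / 50) for _ in range(100_000)]
empirical = sum(1 for x in weeks if x >= 100) / len(weeks)

assert markov_bound(50, 100) == 0.5   # the 50% bound from the text
assert empirical <= 0.5               # the simulated frequency respects it
```

Whatever non-negative distribution with mean 50 one substitutes, the empirical tail frequency stays under the 0.5 bound.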

I still feel humble awe at this immensely powerful result. Without knowing anything else but an expected value (mean) of a probability distribution, we can deduce upper limits for probabilities. The result hits me as equally surprising today as forty-five years ago, when I first ran into it as a student of mathematical statistics.

Apache

27 Nov, 2022 at 20:55 | Posted in Varia | Leave a comment


The empirical turn in economics

25 Nov, 2022 at 18:44 | Posted in Economics | 2 Comments

What unifies the discipline is rather causal identification, that is, a set of statistical methods that make it possible to estimate cause-and-effect relationships between some factor and economic outcomes. From this perspective, the scientific approach aims to reproduce in vivo the laboratory experiment, where one can easily distinguish the difference in outcome between a group given a treatment and another, similar group that is left unaffected.

Statistical tools would allow economists to apply this method outside the laboratory, including to history and to any other subject. Here again, this claim would need considerable qualification. But it does not seem absurd to say that whereas, to grasp the canons of the discipline, every economist previously had to master at least the basics of rational-choice calculus, today it is above all a matter of mastering the basics of econometric identification (instrumental variables and the difference-in-differences method in particular).

While the canons of the discipline have changed, the relationship of mainstream economics to the other disciplines has not evolved. Some economists used to consider themselves superior because they thought that only formal models of rational individuals could explain behaviour scientifically, all other explanations amounting to non-rigorous subjective assessment.

Eric Monnet

Although discounting empirical evidence cannot be the right way to solve economic issues, there are still, as Monnet argues, several weighty reasons why we perhaps shouldn’t be too excited about the so-called ’empirical revolution’ in economics.

Behavioural experiments and laboratory research face the same basic problem as theoretical models — they are built on often rather artificial conditions and face a ‘trade-off’ between internal and external validity. The more artificial the conditions, the more internal validity, but also the less external validity. The more we rig experiments to avoid ‘confounding factors,’ the less the conditions resemble the real ‘target system.’ The nodal issue is how economists using different isolation strategies in different ‘nomological machines’ attempt to learn about causal relationships. One may have justified doubts about the generalizability of this research strategy, since the probability is high that causal mechanisms differ between contexts and that this lack of homogeneity and invariance doesn’t give us warranted export licenses to the ‘real’ societies or economies.

If we see experiments or laboratory research as theory tests or models that ultimately aspire to say something about the real ‘target system,’ then the problem of external validity is central (and was for a long time also a key reason why behavioural economists had trouble getting their research results published).

A standard procedure in behavioural economics — think of e.g. dictator or ultimatum games — is to set up a situation where one induces people to act according to the standard microeconomic — homo oeconomicus — benchmark model. In most cases, the results show that people do not behave as one would have predicted from the benchmark model, in spite of the setup almost invariably being ‘loaded’ for that purpose. [And in those cases where the result is consistent with the benchmark model, one, of course, has to remember that this in no way proves the benchmark model to be right or ‘true,’ since there, as a rule, are multiple outcomes that are consistent with that model.]

For most heterodox economists this is just one more reason for giving up on the standard model. But not so for mainstreamers and many behaviouralists. To them, the empirical results are not reasons for giving up on their preferred hardcore axioms. So they set out to ‘save’ or ‘repair’ their model and try to ‘integrate’ the empirical results into mainstream economics. Instead of accepting that the homo oeconomicus model has zero explanatory real-world value, one puts lipstick on the pig and hopes to go on with business as usual. Why we should keep on using that model as a benchmark when everyone knows it is false is something we are never told. Instead of using behavioural economics and its results as building blocks for a progressive alternative research program, the ‘save and repair’ strategy immunizes a hopelessly false and irrelevant model.

By this, I do not mean to say that empirical methods per se are so problematic that they can never be used. On the contrary, I am basically — though not without reservations — in favour of the increased use of behavioural experiments and laboratory research within economics. Not least as an alternative to completely barren ‘bridge-less’ axiomatic-deductive theory models. My criticism is more about aspiration levels and what we believe we can achieve with our mediational epistemological tools and methods in the social sciences.

The increasing use of natural and quasi-natural experiments in economics during the last couple of decades has led several prominent economists to triumphantly declare it as a major step on a recent path toward empirics, where instead of being a deductive philosophy, economics is now increasingly becoming an inductive science.

Limiting model assumptions in economic science always have to be closely examined. If we are going to be able to show that the mechanisms or causes that we isolate and handle in our models are stable (in the sense that they do not change when we ‘export’ them to our ‘target systems’), we have to be able to show that they hold not only under ceteris paribus conditions. Otherwise they are a fortiori of only limited value for our understanding, explanation, or prediction of real economic systems.

So — although it is good that much of the behavioural economics research has vastly undermined the lure of axiomatic-deductive mainstream economics — there is still a long way to go before economics becomes a truly empirical science. The great challenge for future economics is not to develop methodologies and theories for well-controlled laboratories, but to develop relevant methodologies and theories for the messy world in which we happen to live.
