The one logic lecture mainstream economists did not attend
30 Nov, 2022 at 13:58 | Posted in Economics | Comments Off on The one logic lecture mainstream economists did not attend.
Using formal mathematical modelling, mainstream economists sure can guarantee that the conclusions hold given the assumptions. However, the validity we get in abstract model worlds does not warrant transfer to real-world economies. Validity may be good, but it is not enough.
Mainstream economists are proud of having an ever-growing smorgasbord of models to cherry-pick from (as long as, of course, the models do not question the standard modelling strategy) when performing their analyses. The ‘rigorous’ and ‘precise’ deductions made in these closed models, however, are not in any way matched by a similar stringency or precision when it comes to what ought to be the most important stage of any economic research — making statements and explaining things in real economies. Although almost every mainstream economist holds the view that thought-experimental modelling has to be followed by confronting the models with reality — which is what they indirectly want to predict/explain/understand using their models — they then all of a sudden become exceedingly vague and imprecise. It is as if all the intellectual force has been invested in the modelling stage and nothing is left for what really matters — what exactly these models teach us about real economies.
No matter how precise and rigorous the analysis, and no matter how hard one tries to cast the argument in modern mathematical form, it does not push economic science forward one single iota if it does not stand the acid test of relevance to the target. Proving things ‘rigorously’ in mathematical models is not a good recipe for doing an interesting and relevant economic analysis. Forgetting to supply export warrants to the real world makes the analysis an empty exercise in formalism without real scientific value. In the realm of true science, it is of little or no value to simply make claims about a model and lose sight of reality.
To have valid evidence is not enough. What economics needs is sound evidence. The premises of a valid argument do not have to be true, but a sound argument, on the other hand, is not only valid but builds on premises that are true. Aiming only for validity, without soundness, is setting the economics aspiration level too low for developing a realist and relevant science.
Necessary and sufficient (student stuff)
30 Nov, 2022 at 09:06 | Posted in Theory of Science & Methodology | Comments Off on Necessary and sufficient (student stuff).
Is economics nothing but a library of models?
28 Nov, 2022 at 22:32 | Posted in Economics | 8 Comments
Chameleons arise and are often nurtured by the following dynamic. First a bookshelf model is constructed that involves terms and elements that seem to have some relation to the real world and assumptions that are not so unrealistic that they would be dismissed out of hand.
The intention of the author, let’s call him or her “Q,” in developing the model may be to say something about the real world or the goal may simply be to explore the implications of making a certain set of assumptions … If someone skeptical about X challenges the assumptions made by Q, some will say that a model shouldn’t be judged by the realism of its assumptions, since all models have assumptions that are unrealistic …
Chameleons are models that are offered up as saying something significant about the real world even though they do not pass through the filter. When the assumptions of a chameleon are challenged, various defenses are made (e.g., one shouldn’t judge a model by its assumptions, any model has equal standing with all other models until the proper empirical tests have been run, etc.). In many cases the chameleon will change colors as necessary, taking on the colors of a bookshelf model when challenged, but reverting back to the colors of a model that claims to apply to the real world when not challenged.
As we all know, economics has become a model-based science. And in many of the methodology and philosophy of economics books published during the last two decades, this is seen as something positive.
In Dani Rodrik’s Economics Rules (OUP 2015) — just to take one illustrative example — economics is looked upon as nothing but a smorgasbord of ‘thought experimental’ models. For every purpose you may have, there is always an appropriate model to pick. The proliferation of economic models is unproblematically presented as a sign of great diversity and abundance of new ideas:
Rather than a single, specific model, economics encompasses a collection of models … Economics is in fact, a collection of diverse models …The possibilities of social life are too diverse to be squeezed into unique frameworks. But each economic model is like a partial map that illuminates a fragment of the terrain …
Different contexts … require different models … The correct answer to almost any question in economics is: It depends. Different models, each equally respectable, provide different answers.
But, really, there have to be some limits to the flexibility of a theory!
If you can freely substitute any part of the core and auxiliary sets of assumptions and still consider that you are dealing with the same theory, well, then it’s not a theory, but a chameleon picked from your model library.
The big problem with the mainstream cherry-picking view of models is of course that the theories and models presented get totally immunized against all critique. A sure way to get rid of all kinds of ‘anomalies,’ yes, but at a far too high price. So people do not behave optimizing? No problem, we have models that assume satisficing! So people do not maximize expected utility? No problem, we have models that assume … etc., etc …
Clearly, it is possible to interpret the ‘presuppositions’ of a theoretical system … not as hypotheses, but simply as limitations to the area of application of the system in question. Since a relationship to reality is usually ensured by the language used in economic statements, in this case the impression is generated that a content-laden statement about reality is being made, although the system is fully immunized and thus without content. In my view that is often a source of self-deception in pure economic thought …
A further possibility for immunizing theories consists in simply leaving open the area of application of the constructed model so that it is impossible to refute it with counter examples. This of course is usually done without a complete knowledge of the fatal consequences of such methodological strategies for the usefulness of the theoretical conception in question, but with the view that this is a characteristic of especially highly developed economic procedures: the thinking in models, which, however, among those theoreticians who cultivate neoclassical thought, in essence amounts to a new form of Platonism.
A theory that accommodates any observed phenomena whatsoever by creating a new special model for the occasion, and a fortiori having no chance of being tested severely and found wanting, is of little or no real value at all.
Chebyshev’s and Markov’s Inequality Theorems
28 Nov, 2022 at 14:45 | Posted in Statistics & Econometrics | Comments Off on Chebyshev’s and Markov’s Inequality Theorems
Chebyshev’s Inequality Theorem — named after the Russian mathematician Pafnuty Chebyshev (1821-1894) — states that for a population (or sample) at most 1/k² of the distribution’s values can be more than k standard deviations away from the mean. The beauty of the theorem is that although we may not know the exact distribution of the data — e.g. whether it is normally distributed — we may still say with certitude (since the theorem holds universally) that there are bounds on probabilities!
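A quick simulation illustrates the universality of the bound. The sketch below draws from an exponential distribution, deliberately chosen because it is nothing like a normal distribution, and checks the empirical tail mass against 1/k². The choice of distribution and sample size are arbitrary; any distribution would do, which is precisely the point:

```python
import random
import statistics

random.seed(42)
# Draw from a decidedly non-normal distribution (exponential, mean 1)
data = [random.expovariate(1.0) for _ in range(100_000)]
mu = statistics.fmean(data)
sigma = statistics.pstdev(data)

# Chebyshev: at most 1/k^2 of the mass lies more than k standard
# deviations from the mean, whatever the distribution looks like
tail = {k: sum(abs(x - mu) > k * sigma for x in data) / len(data)
        for k in (2, 3, 4)}
for k, frac in tail.items():
    assert frac <= 1 / k**2
```

For the exponential the actual tail mass is far below the bound, which is also typical: Chebyshev trades sharpness for complete generality.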
Another beautiful result of probability theory is Markov’s inequality (after the Russian mathematician Andrei Markov (1856-1922)):
If X is a non-negative stochastic variable (X ≥ 0) with a finite expectation value E(X), then for every a > 0
P{X ≥ a} ≤ E(X)/a
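The inequality drops out of splitting the expectation at a. A two-line sketch:

```latex
\[
E(X) \;=\; E\bigl(X\,\mathbf{1}_{\{X \ge a\}}\bigr) + E\bigl(X\,\mathbf{1}_{\{X < a\}}\bigr)
\;\ge\; E\bigl(a\,\mathbf{1}_{\{X \ge a\}}\bigr) + 0
\;=\; a\,P\{X \ge a\},
\]
```

and dividing by a gives P{X ≥ a} ≤ E(X)/a. Applying this to the non-negative variable (X − μ)² with a = k²σ² yields Chebyshev’s inequality above.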
If the production of cars in a factory during a week is assumed to be a stochastic variable with an expectation value (mean) of 50 units, we can — based on nothing else but the inequality — conclude that the probability that the production for a week would be greater than 100 units cannot exceed 50% [P(X ≥ 100) ≤ 50/100 = 0.5 = 50%].
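The car-factory bound is easy to check by simulation. In this sketch weekly production is drawn from an exponential distribution with mean 50 — an arbitrary choice, since the whole point of Markov’s inequality is that the bound holds for any non-negative distribution with that mean:

```python
import random

random.seed(7)
a, mean = 100, 50
# Simulate weekly production as a non-negative variable with mean 50.
# The exponential distribution is an arbitrary stand-in here.
weeks = [random.expovariate(1 / mean) for _ in range(100_000)]

empirical = sum(x >= a for x in weeks) / len(weeks)
markov_bound = mean / a  # P(X >= 100) <= 50/100 = 0.5
assert empirical <= markov_bound
```

For this particular distribution the true tail probability is about e⁻² ≈ 0.135, comfortably below the universal bound of 0.5.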
I still feel humble awe at this immensely powerful result. Without knowing anything else but an expected value (mean) of a probability distribution we can deduce upper limits for probabilities. The result strikes me as just as surprising today as it did forty-five years ago when I first ran into it as a student of mathematical statistics.
The empirical turn in economics
25 Nov, 2022 at 18:44 | Posted in Economics | 2 Comments
What unifies the discipline is rather causal identification, that is, a set of statistical methods that make it possible to estimate cause-and-effect relationships between some factor and economic outcomes. From this perspective, the scientific approach aims to reproduce in vivo the laboratory experiment, where one can easily distinguish the difference in outcomes between a group that receives a treatment and a similar group that is left unaffected.
Statistical tools would allow economists to apply this method outside the laboratory, including to history and to any other subject. Here again, this observation would need considerable qualification. But it does not seem unreasonable to me to say that whereas every economist previously had to master at least the basics of rational-choice calculus in order to grasp the canons of the discipline, today it is above all a matter of mastering the basics of econometric identification (instrumental variables and the difference-in-differences method in particular).
While the canons of the discipline have changed, the relationship of mainstream economics to other disciplines has not evolved. Some economists used to consider themselves superior because they believed that only formal models of rational individuals could explain behaviour scientifically, other explanations amounting to non-rigorous subjective assessment.
Although discounting empirical evidence cannot be the right way to solve economic issues, there are still, as Monnet argues, several weighty reasons why we perhaps shouldn’t be too excited about the so-called ’empirical revolution’ in economics.
Behavioural experiments and laboratory research face the same basic problem as theoretical models — they are built on often rather artificial conditions and have difficulties with the ‘trade-off’ between internal and external validity. The more artificial the conditions, the greater the internal validity, but also the less the external validity. The more we rig experiments to avoid ‘confounding factors,’ the less the conditions are reminiscent of the real ‘target system.’ The nodal issue is how economists using different isolation strategies in different ‘nomological machines’ attempt to learn about causal relationships. One may have justified doubts about the generalizability of this research strategy, since the probability is high that causal mechanisms are different in different contexts, and lack of homogeneity and invariance doesn’t give us warranted export licences to ‘real’ societies or economies.
If we see experiments or laboratory research as theory tests or models that ultimately aspire to say something about the real ‘target system,’ then the problem of external validity is central (and was for a long time also a key reason why behavioural economists had trouble getting their research results published).
A standard procedure in behavioural economics — think of e.g. dictator or ultimatum games — is to set up a situation where one induces people to act according to the standard microeconomic — homo oeconomicus — benchmark model. In most cases, the results show that people do not behave as one would have predicted from the benchmark model, in spite of the setup almost invariably being ‘loaded’ for that purpose. [And in those cases where the result is consistent with the benchmark model, one, of course, has to remember that this in no way proves the benchmark model to be right or ‘true,’ since there, as a rule, are multiple outcomes that are consistent with that model.]
For most heterodox economists this is just one more reason for giving up on the standard model. But not so for mainstreamers and many behaviouralists. To them, the empirical results are not reasons for giving up on their preferred hardcore axioms. So they set out to ‘save’ or ‘repair’ their model and try to ‘integrate’ the empirical results into mainstream economics. Instead of accepting that the homo oeconomicus model has zero explanatory real-world value, one puts lipstick on the pig and hopes to go on with business as usual. Why we should keep on using that model as a benchmark when everyone knows it is false is something we are never told. Instead of using behavioural economics and its results as building blocks for a progressive alternative research program, the ‘save and repair’ strategy immunizes a hopelessly false and irrelevant model.
By this, I do not mean to say that empirical methods per se are so problematic that they can never be used. On the contrary, I am basically — though not without reservations — in favour of the increased use of behavioural experiments and laboratory research within economics. Not least as an alternative to completely barren ‘bridge-less’ axiomatic-deductive theory models. My criticism is more about aspiration levels and what we believe we can achieve with our mediational epistemological tools and methods in the social sciences.
The increasing use of natural and quasi-natural experiments in economics during the last couple of decades has led several prominent economists to triumphantly declare it as a major step on a recent path toward empirics, where instead of being a deductive philosophy, economics is now increasingly becoming an inductive science.
Limiting model assumptions in economic science always have to be closely examined. If we are to show that the mechanisms or causes we isolate and handle in our models are stable, in the sense that they do not change when we ‘export’ them to our ‘target systems,’ we have to be able to show that they hold not only under ceteris paribus conditions; otherwise they are a fortiori only of limited value for our understanding, explanation or prediction of real economic systems.
So — although it is good that much of the behavioural economics research has vastly undermined the lure of axiomatic-deductive mainstream economics, there is still a long way to go before economics has become a truly empirical science. The great challenge for future economics is not to develop methodologies and theories for well-controlled laboratories, but to develop relevant methodologies and theories for the messy world in which we happen to live.
Weekend combinatorics (II)
25 Nov, 2022 at 13:31 | Posted in Varia | 7 Comments

How many permutations exist of the ten digits (0-9) that either begin with “123”, contain “56” in the 6th and 7th positions, or end with “789”?
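Reading the ‘either … or’ as an inclusive or, inclusion-exclusion gives the count, and a brute-force enumeration over all 10! permutations can confirm it. A sketch:

```python
from itertools import permutations
from math import factorial

# Inclusion-exclusion over the three events:
# A: starts with "123" (7 free digits), B: "56" in positions 6-7
# (8 free digits), C: ends with "789" (7 free digits); pairwise and
# triple intersections fix correspondingly more digits.
ie = (factorial(7) + factorial(8) + factorial(7)
      - factorial(5) - factorial(4) - factorial(5)
      + factorial(2))

# Brute-force check over all 10! = 3,628,800 permutations
bf = sum(1 for p in permutations("0123456789")
         if (s := "".join(p)).startswith("123")
         or s[5:7] == "56"
         or s.endswith("789"))
assert bf == ie
```

The triple intersection is the pleasing detail: fixing “123”, “56” and “789” leaves only the digits 0 and 4 for the two remaining positions, hence the final 2! term.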
Knowledge and growth
24 Nov, 2022 at 13:54 | Posted in Economics | Comments Off on Knowledge and growth
If you have an apple and I have an apple and we exchange these apples then you and I will each have one apple.
But if you have an idea and I have an idea and we exchange these ideas, then each of us will have two ideas.
George Bernard Shaw
Adam Smith once wrote that a really good explanation is “practically seamless.” Is there any such theory within one of the most important fields of the social sciences — economic growth?
In Paul Romer’s Endogenous Technological Change (1990) knowledge is made the most important driving force of growth. Knowledge (ideas) is presented as the locomotive of growth — but as Allyn Young, Piero Sraffa and others had shown already in the 1920s, knowledge also has to do with increasing returns to scale and is therefore not really compatible with neoclassical economics and its emphasis on decreasing returns to scale.
Increasing returns generated by non-rivalry between ideas is simply not compatible with pure competition and the simplistic invisible hand dogma. That is probably also the reason why neoclassical economists have been so reluctant to embrace the theory wholeheartedly.
Mainstream economics has tried to save itself by more or less substituting human capital for knowledge/ideas. But knowledge or ideas should not be confused with human capital. Although some have problems with the distinction between ideas and human capital in modern endogenous growth theory, this passage gives a succinct and accessible account of the difference:
Of the three state variables that we endogenize, ideas have been the hardest to bring into the applied general equilibrium structure. The difficulty arises because of the defining characteristic of an idea, that it is a pure nonrival good. A given idea is not scarce in the same way that land or capital or other objects are scarce; instead, an idea can be used by any number of people simultaneously without congestion or depletion.
Because they are nonrival goods, ideas force two distinct changes in our thinking about growth, changes that are sometimes conflated but are logically distinct. Ideas introduce scale effects. They also change the feasible and optimal economic institutions. The institutional implications have attracted more attention but the scale effects are more important for understanding the big sweep of human history.
The distinction between rival and nonrival goods is easy to blur at the aggregate level but inescapable in any microeconomic setting. Picture, for example, a house that is under construction. The land on which it sits, capital in the form of a measuring tape, and the human capital of the carpenter are all rival goods. They can be used to build this house but not simultaneously any other. Contrast this with the Pythagorean Theorem, which the carpenter uses implicitly by constructing a triangle with sides in the proportions of 3, 4 and 5. This idea is nonrival. Every carpenter in the world can use it at the same time to create a right angle.
Of course, human capital and ideas are tightly linked in production and use. Just as capital produces output and forgone output can be used to produce capital, human capital produces ideas and ideas are used in the educational process to produce human capital. Yet ideas and human capital are fundamentally distinct. At the micro level, human capital in our triangle example literally consists of new connections between neurons in a carpenter’s head, a rival good. The 3-4-5 triangle is the nonrival idea. At the macro level, one cannot state the assertion that skill-biased technical change is increasing the demand for education without distinguishing between ideas and human capital.
On models and simplicity
20 Nov, 2022 at 14:44 | Posted in Economics | 5 Comments
When it comes to modelling, yours truly does see the point emphatically made time after time by e.g. Paul Krugman about simplicity — at least as long as it doesn’t impinge on our truth-seeking. ‘Simple’ macroeconomic models may of course be an informative heuristic tool for research. But if practitioners of modern macroeconomics do not investigate and make an effort to provide a justification for the credibility of the simplicity assumptions on which they erect their buildings, those models will not fulfil their tasks. Maintaining that economics is a science in the ‘true knowledge’ business, yours truly remains a sceptic of the pretences and aspirations of ‘simple’ macroeconomic models and theories. So far, I can’t really see that e.g. ‘simple’ microfounded models have yielded very much in terms of realistic and relevant economic knowledge.
All empirical sciences use simplifying or unrealistic assumptions in their modelling activities. That is not the issue – as long as the assumptions made are not unrealistic in the wrong way or for the wrong reasons.
Being able to model a ‘credible world,’ a world that somehow could be considered real or similar to the real world, is not the same as investigating the real world. Even though all theories are false since they simplify, they may still possibly serve our pursuit of truth. But then they cannot be unrealistic or false in any way. The falsehood or unrealisticness has to be qualified.
Explanation, understanding, and prediction of real-world phenomena, relations, and mechanisms therefore cannot be grounded on simpliciter assuming simplicity. If we cannot show that the mechanisms or causes we isolate and handle in our models are stable, in the sense that when we export them from our models to our target systems they do not change from one situation to another, then they — considered ‘simple’ or not — only hold under ceteris paribus conditions and a fortiori are of limited value for our understanding, explanation, and prediction of our real-world target system.
The obvious ontological shortcoming of a basically epistemic — rather than ontological — approach is that ‘similarity’ or ‘resemblance’ tout court does not guarantee that the correspondence between model and target is interesting, relevant, revealing, or somehow adequate in terms of mechanisms, causal powers, capacities or tendencies. No matter how many convoluted refinements of concepts are made in the model, if the simplifications made do not result in models similar to reality in the appropriate respects (such as structure, isomorphism, etc.), the surrogate system becomes a substitute system that does not bridge to the world but rather misses its target.
Constructing simple macroeconomic models somehow seen as ‘successively approximating’ macroeconomic reality is a rather unimpressive attempt at legitimizing the use of fictitious idealizations, for reasons that have more to do with model tractability than with a genuine interest in understanding and explaining features of real economies. Many of the model assumptions standardly made in mainstream macroeconomics — simplicity being one of them — are restrictive rather than harmless, and a fortiori cannot in any sensible meaning be considered approximations at all.
If economists aren’t able to show that the mechanisms or causes that they isolate and handle in their ‘simple’ models are stable in the sense that they do not change when exported to their ‘target systems,’ they do only hold under ceteris paribus conditions and are a fortiori of limited value to our understanding, explanations or predictions of real economic systems.
That Newton’s theory in most regards is simpler than Einstein’s is of no avail. Today Einstein has replaced Newton. The ultimate arbiter of the scientific value of models cannot be simplicity.
As scientists, we have to get our priorities right. Ontological under-labouring has to precede epistemology.
How to prove things
16 Nov, 2022 at 14:27 | Posted in Statistics & Econometrics | Comments Off on How to prove things.
Great lecture series.
Yours truly got Solow’s book when he was studying mathematics back in the 80s.
Now in its 6th edition, it’s better than ever.
Macroeconomics and the Friedman-Savage ‘as if’ logic
15 Nov, 2022 at 21:38 | Posted in Economics | 6 Comments
An objection to the hypothesis just presented that is likely to be raised by many … is that it conflicts with the way human beings actually behave and choose. … Is it not patently unrealistic to suppose that individuals … base their decision on the size of the expected utility? While entirely natural and understandable, this objection is not strictly relevant … The hypothesis asserts rather that, in making a particular class of decisions, individuals behave as if they calculated and compared expected utility and as if they knew the odds. The validity of this assertion … depends solely on whether it yields sufficiently accurate predictions about the class of decisions with which the hypothesis deals.
‘Modern’ macroeconomics — Dynamic Stochastic General Equilibrium, New Synthesis, New Classical and New ‘Keynesian’ — still follows the Friedman-Savage ‘as if’ logic of denying the existence of genuine uncertainty and treating variables as if drawn from a known ‘data-generating process’ with a known probability distribution that unfolds over time, and of which we therefore have access to heaps of historical time-series observations. If we do not assume that we know the ‘data-generating process’ — if we do not have the ‘true’ model — the whole edifice collapses. And of course, it has to. Who really, honestly believes that we have access to this mythical Holy Grail, the data-generating process?
‘Modern’ macroeconomics obviously did not anticipate the enormity of the problems that unregulated ‘efficient’ financial markets created. Why? Because it builds on the myth of us knowing the ‘data-generating process’ and that we can describe the variables of our evolving economies as drawn from an urn containing stochastic probability functions with known means and variances.
This is like saying that you are going on a holiday trip and that you know that the chance of sunny weather is at least 30%, and that this is enough for you to decide whether to bring along your sunglasses. You are supposed to be able to calculate the expected utility based on the given probability of sunny weather and make a simple either-or decision. Uncertainty is reduced to risk.
But as Keynes convincingly argued in his monumental Treatise on Probability (1921), this is not always possible. Often we simply do not know. According to one model, the chance of sunny weather is perhaps somewhere around 10% and according to another — equally good — model the chance is perhaps somewhere around 40%. We cannot put exact numbers on these assessments. We cannot calculate means and variances. There are no given probability distributions that we can appeal to.
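The difference between risk and Keynesian uncertainty in the sunglasses example can be made concrete in a few lines. This is a toy sketch in which every payoff number is invented for illustration: with a single known probability, expected utility singles out one action; with two equally good models disagreeing about the probability, the ‘rational’ choice can flip, and there is no fact of the matter about which action is best.

```python
# Made-up payoffs: utility of each (action, weather) combination
payoff = {("bring", "sun"): 4, ("bring", "rain"): -2,
          ("leave", "sun"): 0, ("leave", "rain"): 0}

def expected_utility(action, p_sun):
    return (p_sun * payoff[(action, "sun")]
            + (1 - p_sun) * payoff[(action, "rain")])

# Under 'risk' a single known probability yields a unique best action:
best_at_30 = max(("bring", "leave"),
                 key=lambda a: expected_utility(a, 0.30))

# Under Keynesian uncertainty two equally good models disagree
# (10% vs 40%), and the best action flips across that interval:
choices = {p: max(("bring", "leave"),
                  key=lambda a: expected_utility(a, p))
           for p in (0.10, 0.40)}
```

With these (assumed) payoffs the break-even probability is 1/3, so the 10%-model says leave the sunglasses at home while the 40%-model says bring them — and no amount of calculation can adjudicate between the two models.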
In the end, this is what it all boils down to. We all know that many activities, relations, processes, and events are of the Keynesian uncertainty type. The data do not unequivocally single out one decision as the only ‘rational’ one. Neither the economist nor the deciding individual can fully pre-specify how people will decide when facing uncertainties and ambiguities that are ontological facts of the way the world works.
Some macroeconomists, however, still want to be able to use their hammer. So they — like Friedman and Savage — decide to pretend that the world looks like a nail, and pretend that uncertainty can be reduced to risk. So they construct their mathematical models on that assumption. The result: financial crises and economic havoc.
How much better it would be — and how much smaller the risk of lulling ourselves into the comforting thought that we know everything, that everything is measurable, and that we have everything under control — if instead we could just admit that we often simply do not know, and that we have to live with that uncertainty as best we can.
Fooling people into believing that one can cope with an unknown economic future in a way similar to playing the roulette wheels, is a sure recipe for only one thing — economic catastrophe!
In state of emergency
12 Nov, 2022 at 18:57 | Posted in Varia | Comments Off on In state of emergency.
The concert in S:t Pauli church earlier this evening was for yours truly one of the year’s musical highlights.
If you have the chance, don’t miss the (free) concert in Allhelgonakyrkan in Lund tomorrow at 19.00!
Tentamen
12 Nov, 2022 at 14:19 | Posted in Varia | Comments Off on Tentamen
Yours truly has a reputation for being a demanding examiner. That is probably true.
But there are always those who have had it ‘worse’ …