That the press spokesman of a Bundestag parliamentary group meets a journalist in a bar for an open and confidential conversation is everyday business in Berlin's government district. But the dinner at which Christian Lüth, then spokesman of the AfD parliamentary group, and the right-wing YouTuber Lisa Licentia met on 23 February 2020 at the Newton Bar in Berlin-Mitte is anything but ordinary …
One has to make sure, Lüth says, that things get even worse for the Federal Republic, because that would play into the AfD's hands politically: “If everything were going well right now … the AfD would be at three percent. We don't want that. So we have to work out a tactic between: How bad can things get for Germany? And: How much can we provoke?”
Finally, Lüth even indulges in fantasies of violence. Lisa Licentia asks him: “Above all, it sounds as if it were in your interest that even more migrants come?” Lüth replies: “Yes. Because then the AfD does better. We can still shoot them all afterwards. That's no problem at all. Or gas them, whatever you like. I don't care!”
Rocker without an amputated brain
30 Sep, 2020 at 13:20 | Posted in Varia | Comments Off on Rocker without an amputated brain
Game theory — a severe case of Model Platonism
29 Sep, 2020 at 14:14 | Posted in Economics | Comments Off on Game theory — a severe case of Model Platonism
The critic may respond that the game theorist’s victory in the debate is at best Pyrrhic, since it is bought at the cost of reducing the propositions of game theory to the status of ‘mere’ tautologies. But such an accusation disturbs the game theorist not in the least. There is nothing a game theorist would like better than for his propositions to be entitled to the status of tautologies, just like proper mathematical theorems.
When applying deductivist thinking to economics, game theorists like Ken Binmore set up ‘as if’ models based on a set of tight axiomatic assumptions from which consistent and precise inferences are made. The beauty of this procedure is, of course, that if the axiomatic premises are true, the conclusions necessarily follow. The snag is that if the models are to be real-world relevant, we also have to argue that their precision and rigour still holds when they are applied to real-world situations. They often do not. When addressing real-world systems, the idealizations and abstractions necessary for the deductivist machinery to work simply do not hold.
If the real world is fuzzy, vague and indeterminate, then why should our models build upon a desire to describe it as precise and predictable? The logic of idealization is a marvellous tool in mathematics and axiomatic-deductivist systems, but a poor guide for action in real-world systems, in which concepts and entities are without clear boundaries and continually interact and overlap.
Clearly, it is possible to interpret the ‘presuppositions’ of a theoretical system … not as hypotheses, but simply as limitations to the area of application of the system in question. Since a relationship to reality is usually ensured by the language used in economic statements, in this case the impression is generated that a content-laden statement about reality is being made, although the system is fully immunized and thus without content. In my view that is often a source of self-deception in pure economic thought …
A further possibility for immunizing theories consists in simply leaving open the area of application of the constructed model so that it is impossible to refute it with counter examples. This of course is usually done without a complete knowledge of the fatal consequences of such methodological strategies for the usefulness of the theoretical conception in question, but with the view that this is a characteristic of especially highly developed economic procedures: the thinking in models, which, however, among those theoreticians who cultivate neoclassical thought, in essence amounts to a new form of Platonism.
Seen from a deductive-nomological perspective, typical economic models (M) usually consist of a theory (T) — a set of more or less general (typically universal) law-like hypotheses (H) — and a set of (typically spatio-temporal) auxiliary assumptions (A). The auxiliary assumptions give ‘boundary’ descriptions such that it is possible to deduce logically (meeting the standard of validity) a conclusion (explanandum) from the premises T & A. Using this kind of model game theorists are (portrayed as) trying to explain (predict) facts by subsuming them under T, given A.
An obvious problem with the formal-logical requirements of what counts as H is the often severely restricted reach of the ‘law.’ In the worst case, it may not be applicable to any real, empirical, relevant situation at all. And if A is not true, then M does not really explain (although it may predict) at all. Deductive arguments should be sound – valid and with true premises – so that we are assured of having true conclusions. Constructing game theoretical models assuming ‘common knowledge’ and ‘rational expectations’ says nothing of situations where knowledge is ‘non-common’ and expectations are ‘non-rational.’
Building theories and models that are ‘true’ in their own very limited ‘idealized’ domain is of limited value if we cannot supply bridges to the real world. ‘Laws’ that only apply in specific ‘idealized’ circumstances — in ‘nomological machines’ — are not the stuff that real science is built of.
When confronted with the massive empirical refutations of almost all the models they have set up, many game theorists react by saying that these refutations only hit A (the Lakatosian ‘protective belt’), and that by ‘successive approximations’ it is possible to make the models more readily testable and predictively accurate. Even if T & A1 do not have much empirical content, if by successive approximation we reach, say, T & A25, we are to believe that we can finally reach robust and true predictions and explanations.
Hans Albert’s ‘Model Platonism’ critique shows that there is a strong tendency for modellers to use the method of successive approximations as a kind of ‘immunization,’ taking for granted that there can never be any faults with the theory. Explanatory and predictive failures hinge solely on the auxiliary assumptions. That the kind of theories and models used by game theorists should all be held non-defeasibly corroborated, seems, however — to say the least — rather unwarranted.
Retreating — as Ken Binmore and other game theorists do — into viewing their models and theories as some kind of ‘conceptual exploration,’ and giving up any hope whatsoever of relating theories and models to the real world, is pure defeatism. Instead of trying to bridge the gap between models and the world, they simply decide to look the other way.
To me, this kind of scientific defeatism is equivalent to surrendering our search for understanding and explaining the world we live in. It cannot be enough to prove or deduce things in a model world. If theories and models do not directly or indirectly tell us anything about the world we live in – then why should we waste any of our precious time on them?
Why game theory fails to live up to its promise
28 Sep, 2020 at 11:40 | Posted in Economics | 1 Comment

Why, it might be objected, should the goal of social science be mere causal explanations of particular events? Isn’t such an attitude more the province of the historian? Social science should instead be concentrating on systematic knowledge. The Prisoner’s Dilemma, this objection concludes, is a laudable example of exactly that – a piece of theory that sheds light over many different cases.
In reply, we certainly agree that regularities or models that explain or that give heuristic value over many different cases are highly desirable. But ones that do neither are not – especially if they use up huge resources along the way. When looking at the details, the Prisoner’s Dilemma’s explanatory record so far is poor and its heuristic record mixed at best. The only way to get a reliable sense of what theoretical input would actually be useful is via detailed empirical investigations. What useful contribution – whether explanatory, heuristic or none at all – the Prisoner’s Dilemma makes to such investigations cannot be known until they are tried. Therefore resources would be better directed towards that rather than towards yet more theoretical development or laboratory experiments.
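For readers who have not met the game the quoted passage discusses, a minimal sketch (with conventional, purely hypothetical payoff numbers) shows the structure that gives the Prisoner's Dilemma its theoretical appeal: mutual defection is the only Nash equilibrium even though mutual cooperation pays both players more.

```python
from itertools import product

# One-shot Prisoner's Dilemma with conventional (hypothetical) payoffs:
# each entry maps (row_action, col_action) -> (row_payoff, col_payoff).
C, D = "cooperate", "defect"
payoffs = {
    (C, C): (3, 3),
    (C, D): (0, 5),
    (D, C): (5, 0),
    (D, D): (1, 1),
}

def is_nash(profile):
    """A profile is a Nash equilibrium if no player gains by deviating alone."""
    row, col = profile
    row_pay, col_pay = payoffs[profile]
    best_row = max(payoffs[(a, col)][0] for a in (C, D))
    best_col = max(payoffs[(row, a)][1] for a in (C, D))
    return row_pay == best_row and col_pay == best_col

equilibria = [p for p in product((C, D), repeat=2) if is_nash(p)]
print(equilibria)  # [('defect', 'defect')] — even though (3, 3) beats (1, 1)
```

The brute-force check over all four strategy profiles is exactly the tautological deduction the critics above have in mind: given the payoff matrix and the rationality assumptions, the conclusion follows necessarily; whether real people in real situations face anything like this matrix is the empirical question the formalism leaves open.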
Game theory is, like mainstream economics in general, model-oriented. There are many reasons for this – the history of the discipline, having ideals coming from the natural sciences (especially physics), the search for universality (explaining as much as possible with as little as possible), rigour, precision, etc. Most mainstream economists and game theorists want to explain social phenomena, structures and patterns, based on the assumption that the agents are acting in an optimizing (rational) way to satisfy given, stable and well-defined goals.
In building their economic models, modern mainstream economists ground them on a set of core assumptions describing the agents as ‘rational’ actors and a set of auxiliary assumptions. Based on these two sets of assumptions, they try to explain and predict both individual and social phenomena.
The model used is typically seen as a kind of thought-experimental ‘as if’ benchmark device for enabling a rigorous, mathematically tractable illustration of social interaction in an ideal-type model world, and for comparing that ‘ideal’ with reality. The ‘interpreted’ model is supposed to supply analytical and explanatory power, enabling us to detect and understand mechanisms and tendencies in what happens around us in real economies.
But if the models are to be relevant, we also have to argue that their precision and rigour still holds when they are applied to real-world situations. They often do not. When addressing real economies, the idealizations and abstractions necessary for the deductivist machinery to work simply do not hold. If the real world is fuzzy, vague and indeterminate, then why should our models build upon a desire to describe it as precise and predictable? Being told that the model is rigorous and amenable to ‘successive approximations’ to reality is of little avail, especially when the law-like (nomological) core assumptions are highly questionable and extremely difficult to test.
Many mainstream economists – still – think that game theory is useful and can be applied to real life to give important and interesting results. That, however, is a rather unsubstantiated view. What game theory does, strictly seen, is nothing more than investigate the logic of behaviour among non-existent robot imitations of humans. Knowing how those ‘rational fools’ play games does not help us to decide and act when interacting with real people. Knowing some game theory may actually make us behave in a way that hurts both ourselves and others. Decision-making and social interaction are always embedded in socio-cultural contexts. Since game theory takes no account of that, it will remain an analytical cul-de-sac that will never be able to come up with useful and relevant explanations.
Over-emphasizing the reach of instrumental rationality and abstracting away from the influence of many factors known to be important reduces the analysis to a pure thought experiment without any substantial connection to reality. Limiting theoretical economic analysis in this way – not incorporating both motivational and institutional factors when trying to explain human behaviour – makes economics insensitive to social facts.
Game theorists extensively exploit ‘rational choice’ assumptions in their explanations. That is probably also the reason why game theory has not been able to accommodate well-known anomalies within its theoretical framework. That should hardly come as a surprise to anyone. Game theory, with its axiomatic view of individuals’ tastes, beliefs, and preferences, cannot accommodate very much of real-life behaviour. It is hard to find really compelling arguments for continuing down its barren paths, since individuals obviously do not comply with, nor are they guided by, game theory.
Apart from a few notable exceptions, it is difficult to find really successful applications of game theory. Why? To a large extent simply because the boundary conditions of game theoretical models are false and baseless from a real-world perspective. And, perhaps even more importantly, since they are not even close to being good approximations of real life, game theory lacks predictive power. This should come as no surprise. As long as game theory sticks to its ‘rational choice’ foundations, there is not much to be hoped for.
Game theorists can, of course, marginally modify their toolbox and fiddle with the auxiliary assumptions to get whatever outcome they want. But as long as the ‘rational choice’ core assumptions are left intact, this seems a pointless exercise in tinkering with an already excessive deductive-axiomatic formalism. If you do believe in the real-world relevance of game theoretical ‘science fiction’ assumptions such as expected utility, ‘common knowledge,’ ‘backward induction,’ correct and consistent beliefs, etc., then adding things like ‘framing,’ ‘cognitive bias,’ and different kinds of heuristics does not ‘solve’ any problem. If we want to construct a theory that can provide us with explanations of individual cognition, decisions, and social interaction, we have to look for something else.
As noted by Northcott and Alexandrova, applications of game theory have on the whole resulted in massive predictive failures. People simply do not act according to the theory. They do not know or possess the assumed probabilities, utilities, beliefs or information to calculate the different (‘subgame perfect,’ ‘trembling-hand perfect’) Nash equilibria. They may be reasonable and make use of their given cognitive faculties as well as they can, but they are obviously not those perfect and costless hyper-rational expected utility maximizing calculators game theory posits. And fortunately so. Being ‘reasonable’ makes them avoid all those made-up ‘rationality’ traps that game theory would have put them in if they had tried to act as consistent players in a game theoretical sense.
“The worse Germany is doing, the better for the AfD”
28 Sep, 2020 at 09:32 | Posted in Politics & Society | Comments Off on “The worse Germany is doing, the better for the AfD”

What’s the use of economic models?
26 Sep, 2020 at 14:06 | Posted in Economics | Comments Off on What’s the use of economic models?
One can generally develop a theoretical model to produce any result within a wide range. Do you want a model that produces the result that banks should be 100% funded by deposits? Here is a set of assumptions and an argument that will give you that result. That such a model exists tells us very little …
Being logically correct may earn a place for a theoretical model on the bookshelf, but when a theoretical model is taken off the shelf and applied to the real world, it is important to question whether the model’s assumptions are in accord with what we know about the world. To be taken seriously models should pass through the real world filter.
Chameleons are models that are offered up as saying something significant about the real world even though they do not pass through the filter. When the assumptions of a chameleon are challenged, various defenses are made … In many cases the chameleon will change colors as necessary, taking on the colors of a bookshelf model when challenged, but reverting back to the colors of a model that claims to apply to the real world when not challenged.
Pfleiderer’s absolute gem of an article reminds me of what H. L. Mencken once famously said:
There is always an easy solution to every problem — neat, plausible and wrong.
Pfleiderer’s perspective may be applied to many of the issues involved when modelling complex and dynamic economic phenomena. Let me take just one example — simplicity.
‘Simple’ macroeconom(etr)ic models may of course be an informative heuristic tool for research. But if practitioners of modern macroeconom(etr)ics do not investigate and make an effort to provide a justification for the credibility of the simplicity assumptions on which they erect their building, it will not fulfil its tasks. Maintaining that economics is a science in the ‘true knowledge’ business, yours truly remains a skeptic of the pretences and aspirations of ‘simple’ macroeconom(etr)ic models and theories. So far, I can’t really see that e.g. ‘simple’ microfounded models have yielded very much in terms of realistic and relevant economic knowledge.
All empirical sciences use simplifying or unrealistic assumptions in their modelling activities. That is not the issue – as long as the assumptions made are not unrealistic in the wrong way or for the wrong reasons.
Being able to model a ‘credible world,’ a world that somehow could be considered real or similar to the real world, is not the same as investigating the real world. Even though — as Pfleiderer acknowledges — all theories are false, since they simplify, they may still possibly serve our pursuit of truth. But then they cannot be unrealistic or false in just any way. The falsehood or lack of realism has to be qualified.
If we cannot show that the mechanisms or causes we isolate and handle in our models are stable — in the sense that they do not change from one situation to another when we export them from our models to our target systems — then they, considered ‘simple’ or not, only hold under ceteris paribus conditions and are a fortiori of limited value for our understanding, explanation and prediction of our real-world target system.
The obvious ontological shortcoming of a basically epistemic — rather than ontological — approach is that ‘similarity’ or ‘resemblance’ tout court does not guarantee that the correspondence between model and target is interesting, relevant, revealing or somehow adequate in terms of mechanisms, causal powers, capacities or tendencies. If the simplifications made do not result in models similar to reality in the appropriate respects (such as structure, isomorphism, etc.), the surrogate system becomes a substitute system that does not bridge to the world but rather misses its target.
Many of the model assumptions standardly made in mainstream macroeconomics — simplicity being one of them — are restrictive rather than harmless, and a fortiori cannot in any sensible sense be considered approximations at all. If economists are not able to show that the mechanisms or causes they isolate and handle in their ‘simple’ models are stable, in the sense that they do not change when exported to their ‘target systems,’ those mechanisms only hold under ceteris paribus conditions and are a fortiori of limited value to our understanding, explanation or prediction of real economic systems.
That Newton’s theory in most regards is simpler than Einstein’s is of no avail. Today Einstein has replaced Newton. The ultimate arbiter of the scientific value of models cannot be simplicity.
MMT and the need for tax reforms
24 Sep, 2020 at 17:20 | Posted in Economics | 3 Comments
The analytical steps taken in this contribution point to a potential tax reform agenda, which would achieve useful macroeconomic and social policy objectives. The thumbnail sketch we have provided is intended to provoke others to further consider how tax reforms can serve a dual macroeconomic and social policy purpose. For example, taxing capital gains at the same rates as in the 1980s and similar moves to restore corporation tax to pre-2010 levels would discourage the diversion of income into company structures, which undermines both income tax and potentially the cancellation or withdrawal function of the tax system as a whole. Generous capital gains tax allowances on buy-to-let properties, which potentially fuel asset inflation and reduce access to affordable homes, could be substantially reduced. The proceeds from reduced allowances could then be invested in social housing. The fact that national insurance charges in the UK apply only to income from work, but not to investment income, also makes them a potential target for reform based on the application of similar MMT logic. At present, those who work for a living pay considerably more tax on identical levels of income than those who receive income as a return on investments of a variety of forms. When the saving process is not required to drive investment, the socially regressive nature of such policies becomes much clearer.
Evidence-based policies
23 Sep, 2020 at 17:09 | Posted in Economics | Comments Off on Evidence-based policies

Evidence-based theories and policies are highly valued nowadays. Randomization is supposed to control for bias from unknown confounders. The received opinion is that evidence based on randomized experiments, therefore, is the best.
More and more economists have lately also come to advocate randomization as the principal method for ensuring valid causal inferences.
Yours truly would, however, rather argue that randomization, just like econometrics, promises more than it can deliver, basically because it requires assumptions that in practice are not possible to maintain. Like econometrics, randomization is basically a deductive method. Given the assumptions (such as manipulability, transitivity, separability, additivity, linearity, etc.) these methods deliver deductive inferences. The problem, of course, is that we will never completely know when the assumptions are right. And although randomization may contribute to controlling for confounding, it does not guarantee it, since genuine randomness presupposes infinite experimentation and we know all real experimentation is finite. And even if randomization may help to establish average causal effects, it says nothing of individual effects unless homogeneity is added to the list of assumptions. Real target systems are seldom epistemically isomorphic to our axiomatic-deductive models/systems, and even if they were, we still have to argue for the external validity of the conclusions reached from within these epistemically convenient models/systems. Causal evidence generated by randomization procedures may be valid in ‘closed’ models, but what we usually are interested in is causal evidence in the real target system we happen to live in.
The point of making a randomized experiment is often said to be that it ‘ensures’ that any correlation between a supposed cause and effect indicates a causal relation. This is believed to hold since randomization (allegedly) ensures that a supposed causal variable does not correlate with other variables that may influence the effect.
The problem with that simplistic view on randomization is that the claims made are both exaggerated and false:
• Even if you manage to make the assignment to treatment and control groups ideally random, the sample selection certainly is — except in extremely rare cases — not random. Even if we make a proper randomized assignment, if we apply the results to a biased sample, there is always the risk that the experimental findings will not apply. What works ‘there’ does not work ‘here.’ Randomization hence does not ‘guarantee’ or ‘ensure’ the right causal claim. Although randomization may help us rule out certain possible causal claims, randomization per se does not guarantee anything!
• Even if both sampling and assignment are made in an ideal random way, performing standard randomized experiments only give you averages. The problem here is that although we may get an estimate of the ‘true’ average causal effect, this may ‘mask’ important heterogeneous effects of a causal nature. Although we get the right answer of the average causal effect being 0, those who are ‘treated’ may have causal effects equal to -100, and those ‘not treated’ may have causal effects equal to 100. Contemplating being treated or not, most people would probably be interested in knowing about this underlying heterogeneity and would not consider the average effect particularly enlightening.
• There is almost always a trade-off between bias and precision. In real-world settings, a little bias often does not overtrump greater precision. And — most importantly — in case we have a population with sizeable heterogeneity, the average treatment effect of the sample may differ substantially from the average treatment effect in the population. If so, the value of any extrapolating inferences made from trial samples to other populations is highly questionable.
• Since most real-world experiments and trials build on performing one single randomization, speculating about what would happen if you kept on randomizing forever does not help you ‘ensure’ or ‘guarantee’ that you avoid false causal conclusions in the one particular randomized experiment you actually do perform. It is indeed difficult to see why thinking about what you know you will never do would make you happy about what you actually do.
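The heterogeneity point above can be made concrete with a toy simulation (all numbers hypothetical). In a population where every individual's treatment effect is either +100 or -100, an ideally randomized trial dutifully estimates an average treatment effect close to zero, telling the individual contemplating treatment almost nothing:

```python
import random

random.seed(1)

# Hypothetical population: half would gain 100 from treatment, half would lose 100.
population = [{"effect": 100} for _ in range(500)] + [{"effect": -100} for _ in range(500)]

# Ideal randomized assignment: untreated outcome is 0, treated outcome is 'effect'.
random.shuffle(population)
treated, control = population[:500], population[500:]

mean_treated = sum(unit["effect"] for unit in treated) / len(treated)
mean_control = sum(0 for unit in control) / len(control)
ate_estimate = mean_treated - mean_control  # close to the true average effect of 0

true_individual_effects = {unit["effect"] for unit in population}
print(round(ate_estimate, 1), true_individual_effects)  # ATE near 0, effects are ±100
```

The randomization here is as good as it gets, and the average is estimated correctly; the problem is that the average is the wrong summary of a population split between large gains and large losses.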
Randomization is not a panacea. It is not the best method for all questions and circumstances. Proponents of randomization make claims about its ability to deliver causal knowledge that is simply wrong. There are good reasons to be skeptical of the now popular — and ill-informed — view that randomization is the only valid and best method on the market. It is not.
Dirty money
22 Sep, 2020 at 18:46 | Posted in Economics | Comments Off on Dirty money

Strategic and vital to the economy, banks have become, over successive financial crises, one of the most heavily regulated sectors in the world.
Yet, despite the rules and controls, the global banking sector remains porous to money laundering and struggles to fight the circulation of dirty money, according to the “FinCEN Files,” the new investigation conducted by the International Consortium of Investigative Journalists (ICIJ) together with the American news site BuzzFeed News and 108 international media outlets, including Le Monde …
In the age of financial globalization, the verdict is damning: despite the recent tightening of anti-money-laundering rules, banks remain eminently fallible. While the regulation of the global banking sector remains one of the great challenges of the decades to come, the “FinCEN Files” show the central role played by the big systemic banks in the circulation of flows of dirty money linked to fraud, corruption, organized crime and terrorism.
The reports of the American financial intelligence unit thus reveal that at least five major banks – JPMorgan, HSBC, Standard Chartered Bank, Deutsche Bank and the Bank of New York Mellon – failed to stem certain illicit capital transfers, sometimes even after having been sanctioned and having committed before the courts to strengthen their controls.
How to cope with inflation
22 Sep, 2020 at 10:19 | Posted in Economics | 5 Comments

Less well worked out is a technique of dealing with the other responsibility of the creator of money – the responsibility for maintaining its value. The key points here are not in the direct supply of money, or even in the regulation of the level of spending. The key points are in the determination of wage rates and in the determination of rates of markup of selling prices over costs … Higher wages relatively to prices are necessary for long-run prosperity but raising wages will do no good because they will only lead to higher prices and inflation.
The dilemma can be resolved only by the government going to work on both money wage determination and on markup rates. Both are problems of monopoly and as such are inevitably destructive of a free economy. Markup rates must be reduced by antimonopoly measures … The most important help to the government in this will be the policy of maintaining full employment which will make it profitable for business to work with smaller markups.
Most proponents of MMT argue that the aggregate demand impact of interest rate changes is unclear. Their effect depends on intricate and complex relations (especially distributional ones) and institutions, which makes it in practice impossible always to tell which way they work. Public budget deficits and higher interest rates may cause inflation to go up — or down. And if so, the neoliberal dream of the efficacy of central bank inflation targeting is nothing but a fantasy.
Expected utility theory — an ex-parrot
20 Sep, 2020 at 16:16 | Posted in Economics | Comments Off on Expected utility theory — an ex-parrot

If a friend of yours offered you a gamble on the toss of a coin where you could lose €100 or win €200, would you accept it? Many of us probably wouldn’t. But if you were offered to make one hundred such bets, you would probably be willing to accept it, since most of us see that the aggregated gamble of one hundred 50–50 lose €100/gain €200 bets has an expected return of €5000 (and making our probabilistic calculations we find out that there is only a 0.04% ‘risk’ of losing any money).
Unfortunately – at least if you want to adhere to the standard mainstream expected utility theory – you are then considered irrational! A mainstream utility maximizer that rejects the single gamble should also reject the aggregate offer.
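The arithmetic behind the example can be checked directly. For one hundred independent 50–50 bets that lose €100 or win €200, the expected total is €5,000, and the net result is a loss exactly when the number of wins k satisfies 300k - 10,000 < 0, i.e. k ≤ 33; the exact binomial tail probability of that event is on the order of 0.04 per cent:

```python
from math import comb

n, loss, gain = 100, -100, 200

# Expected value of the aggregated gamble: 100 * (0.5 * (-100) + 0.5 * 200).
expected_total = n * 0.5 * (loss + gain)

# Net result 200*k - 100*(100 - k) is negative exactly when k <= 33 wins.
# Exact binomial tail P(K <= 33) for 100 fair-coin bets:
p_lose_money = sum(comb(n, k) for k in range(34)) / 2**n

print(expected_total)        # 5000.0
print(p_lose_money)          # a small fraction of one per cent
```

The contrast the post draws is exactly this: the single bet has expected value €50 yet is widely refused, while the aggregate has a large expected value and a tiny loss probability, and is widely accepted; expected utility theory says a maximizer who refuses the first must refuse the second.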
Expected utility theory does not explain actual behaviour and choices. But still — although the theory is obviously descriptively inadequate — economists and microeconomics textbook writers gladly continue to use it, as though its deficiencies were unknown or unheard of.
That cannot be the right attitude when facing scientific anomalies!
When models are plainly wrong, you’d better replace them. Or as Matthew Rabin and Richard Thaler have it:
It is time for economists to recognize that expected utility is an ex-hypothesis, so that we can concentrate our energies on the important task of developing better descriptive models of choice under uncertainty.
Yes indeed — expected utility theory is an ‘ex-hypothesis.’
This parrot is no more! He has ceased to be! ‘E’s expired and gone to meet ‘is maker! ‘E’s a stiff! Bereft of life, ‘e rests in peace! If you hadn’t nailed ‘im to the perch ‘e’d be pushing up the daisies! ‘Is metabolic processes are now ‘istory! ‘E’s off the twig! ‘E’s kicked the bucket, ‘e’s shuffled off ‘is mortal coil, run down the curtain and joined the bleedin’ choir invisible!! THIS IS AN EX-PARROT!!
An ex-parrot that transmogrifies truth shouldn’t just be marginally mended. It should be replaced!
Erik Syll In Memoriam (personal)
18 Sep, 2020 at 08:24 | Posted in Varia | Comments Off on Erik Syll In Memoriam (personal).
In loving memory of my father-in-law, Erik Syll, whose funeral took place yesterday at Skeda church, Sweden.
Why economic models do not explain
16 Sep, 2020 at 12:33 | Posted in Theory of Science & Methodology | 6 Comments
Analogue-economy models may picture Galilean thought experiments or they may describe credible worlds. In either case we have a problem in taking lessons from the model to the world. The problem is the venerable one of unrealistic assumptions, exacerbated in economics by the fact that the paucity of economic principles with serious empirical content makes it difficult to do without detailed structural assumptions. But the worry is not just that the assumptions are unrealistic; rather, they are unrealistic in just the wrong way.
One of the limitations with economics is the restricted possibility to perform experiments, forcing it to mainly rely on observational studies for knowledge of real-world economies.
But still — the idea of performing laboratory experiments holds a firm grip on our wish to discover (causal) relationships between economic ‘variables.’ If we could only isolate and manipulate variables in controlled environments, we would probably find ourselves in a situation where we, with greater ‘rigour’ and ‘precision,’ could describe, predict, or explain economic happenings in terms of ‘structural’ causes, ‘parameter’ values of relevant variables, and economic ‘laws.’
Galileo Galilei’s experiments are often held up as exemplary of how to perform experiments to learn something about the real world. Galileo’s heavy balls dropped from the tower of Pisa confirmed that the distance an object falls is proportional to the square of the time, and that this law (empirical regularity) of falling bodies is applicable outside a vacuum tube whenever e.g. air resistance is negligible.
The big problem is to decide or find out exactly for which objects air resistance (and other potentially ‘confounding’ factors) is ‘negligible.’ In the case of heavy balls, air resistance is obviously negligible, but how about feathers or plastic bags?
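The negligibility question can be made concrete with a small numerical sketch (all parameters here are invented for illustration, not Galileo’s actual data): integrate a fall with quadratic air drag and compare the result with the vacuum law d = ½gt². For a heavy ball the drag term barely matters; for a feather-like object it dominates.

```python
# Illustrative sketch (made-up masses and drag coefficient): compare a fall
# with quadratic air drag against the vacuum law d = 1/2 * g * t^2.

G = 9.81  # gravitational acceleration, m/s^2

def fall_distance(mass, drag_coeff, t_end, dt=1e-4):
    """Distance fallen after t_end seconds, with drag force c * v^2 (Euler steps)."""
    v, d, t = 0.0, 0.0, 0.0
    while t < t_end:
        a = G - (drag_coeff / mass) * v * v   # net acceleration
        v += a * dt
        d += v * dt
        t += dt
    return d

t = 2.0
vacuum = 0.5 * G * t ** 2                                        # the 'law'
ball = fall_distance(mass=5.0, drag_coeff=0.01, t_end=t)         # heavy ball
feather = fall_distance(mass=0.005, drag_coeff=0.01, t_end=t)    # feather-like

print(f"vacuum law : {vacuum:.2f} m")
print(f"heavy ball : {ball:.2f} m")    # close to the law: drag negligible
print(f"feather    : {feather:.2f} m") # far from the law: drag dominates
```

The same equation governs both objects; only for the heavy ball is the ‘confounding’ drag term negligible enough for the idealized law to have any ‘reach.’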
One possibility is to take the all-encompassing-theory road and find out everything about possible disturbing/confounding factors — not only air resistance — influencing the fall, and build that into one great model delivering accurate predictions on what happens when the falling object is not only a heavy ball but also feathers and plastic bags. This usually amounts to ultimately stating some kind of ceteris paribus interpretation of the ‘law.’
Another road would be to concentrate on the negligibility assumption and to specify the domain of applicability as only heavy compact bodies. The price you have to pay for this is that (1) ‘negligibility’ may be hard to establish in open real-world systems, (2) the generalisation you can make from ‘sample’ to ‘population’ is heavily restricted, and (3) you actually have to use some ‘shoe leather’ and empirically try to find out how large the ‘reach’ of the ‘law’ is.
In mainstream economics, one has usually settled for the ‘theoretical’ road (and in case you think the present ‘natural experiments’ hype has changed anything, remember that to mimic real experiments, exceedingly stringent special conditions have to obtain).
In the end, it all boils down to one question — are there any Galilean ‘heavy balls’ to be found in economics, so that we can indisputably establish the existence of economic laws operating in real-world economies?
As far as I can see there are some heavy balls out there, but not a single real economic law.
Economic factors/variables are more like feathers than heavy balls — non-negligible factors (like air resistance and chaotic turbulence) are hard to rule out as having no influence on the object studied.
Galilean experiments are hard to carry out in economics, and the theoretical ‘analogue’ models economists construct, and in which they perform their ‘thought experiments,’ build on assumptions that are far removed from the kind of idealized conditions under which Galileo performed his experiments. The ‘nomological machines’ that Galileo and other scientists have been able to construct have no real analogues in economics. The stability, autonomy, modularity, and interventional invariance that we may find between entities in nature simply are not there in real-world economies. That is a real-world fact, and contrary to the beliefs of most mainstream economists, it won’t go away simply by applying deductive-axiomatic economic theory with tons of more or less unsubstantiated assumptions.
By this, I do not mean to say that we have to discard all (causal) theories/laws building on modularity, stability, invariance, etc. But we have to acknowledge that outside the systems that possibly fulfil these requirements/assumptions, they are of little substantial value. Running paper-and-pen experiments on artificial ‘analogue’ model economies is a sure way of ‘establishing’ (causal) economic laws or solving intricate econometric problems of autonomy, identification, invariance, and structural stability — in the model world. But they are poor substitutes for the real thing and don’t have much bearing on what goes on in real-world open social systems. Setting up convenient circumstances for conducting Galilean experiments may tell us a lot about what happens under those kinds of circumstances. But few, if any, real-world social systems are ‘convenient.’ So most of those systems, theories, and models are irrelevant for letting us know what we really want to know.
To solve, understand, or explain real-world problems you actually have to know something about them — logic, pure mathematics, data simulations, or deductive axiomatics don’t take you very far. Most econometrics and economic theories/models are splendid logic machines. But applying them to the real world is a totally hopeless undertaking! The assumptions one has to make in order to successfully apply these deductive-axiomatic theories/models/machines are devastatingly restrictive and mostly empirically untestable — and hence make their real-world scope ridiculously narrow. To fruitfully analyse real-world phenomena with models and theories, you cannot build on assumptions known to be patently absurd. No matter how much you would like the world to consist entirely of heavy balls, the world is not like that. The world also has its fair share of feathers and plastic bags.
The problem articulated by Cartwright is that most of the ‘idealizations’ we find in mainstream economic models are not ‘core’ assumptions, but rather structural ‘auxiliary’ assumptions. Without those supplementary assumptions, the core assumptions deliver next to nothing of interest. So to come up with interesting conclusions you have to rely heavily on those other — ‘structural’ — assumptions.
Whenever model-based causal claims are made, experimentalists quickly find that these claims do not hold under disturbances that were not written into the model. Our own stock example is from auction design – models say that open auctions are supposed to foster better information exchange leading to more efficient allocation. Do they do that in general? Or at least under any real world conditions that we actually know about? Maybe. But we know that introducing the smallest unmodelled detail into the setup, for instance complementarities between different items for sale, unleashes a cascade of interactive effects. Careful mechanism designers do not trust models in the way they would trust genuine Galilean thought experiments. Nor should they …
Economic models frequently invoke entities that do not exist, such as perfectly rational agents, perfectly inelastic demand functions, and so on. As economists often defensively point out, other sciences too invoke non-existent entities, such as the frictionless planes of high-school physics. But there is a crucial difference: the false-ontology models of physics and other sciences are empirically constrained. If a physics model leads to successful predictions and interventions, its false ontology can be forgiven, at least for instrumental purposes – but such successful prediction and intervention is necessary for that forgiveness. The idealizations of economic models, by contrast, have not earned their keep in this way. So the problem is not the idealizations in themselves so much as the lack of empirical success they buy us in exchange. As long as this problem remains, claims of explanatory credit will be unwarranted.
In physics, we have theories and centuries of experience and experiments that show how gravity makes bodies move. In economics, we know there is nothing equivalent. So instead mainstream economists necessarily have to load their theories and models with sets of auxiliary structural assumptions to get any results at all in their models.
So why then do mainstream economists keep on pursuing this modelling project?
The value of economics — a cost-benefit analysis
16 Sep, 2020 at 09:10 | Posted in Economics | Comments Off on The value of economics — a cost-benefit analysis
Economists cannot simply dismiss as “absurd” or “impossible” the possibility that our profession has imposed total costs that exceed total benefits. And no, building a model which shows that it is logically possible for economists to make a positive net contribution is not going to make questions about our actual effect go away. Why don’t we just stipulate that economists are now so clever at building models that they can use a model to show that almost anything is logically possible. Then we could move on to making estimates and doing the math.
In the 19th century, when it became clear that the net effect of having a doctor assist a woman in child-birth was to increase the probability that she would die, western society faced a choice:
– Get rid of doctors; or
– Insist that they wash their hands.

I do not want western society to get rid of economists. But to remain viable, our profession needs to be open to the possibility that in a few cases, a few of its members are doing enormous harm; then it must take on a collective responsibility for making sure that everyone keeps their hands clean.
Mainstream economic theory today is still in the story-telling business whereby economic theorists create mathematical make-believe analogue models of our real-world economic system.
The problem is that without strong evidence, all kinds of absurd claims and nonsense may pretend to be science. Mathematics and logic cannot establish the truth value of facts.
We have to demand more of a justification than these rather watered-down versions of ‘anything goes’ when it comes to the main postulates on which mainstream economics is founded. If one proposes ‘efficient markets’ or ‘rational expectations,’ one also has to support their underlying assumptions. As a rule, none is given, which makes it rather puzzling how things like ‘efficient markets’ and ‘rational expectations’ have become standard modelling assumptions in much of modern macroeconomics. The reason for this sad state of ‘modern’ economics is that economists often mistake mathematical beauty for truth. It would be far better if they instead made sure they “keep their hands clean”!
Crime and research
15 Sep, 2020 at 23:07 | Posted in Politics & Society | 2 Comments

But mapping and analyses of this kind of clan-based network — which through threats of violence and harassment wields great power in immigrant-dense suburban areas and thereby seriously obstructs integration — are conspicuous by their absence …
In Sweden, you may hold whatever values you like. Embracing and propagating, for example, communist, Islamist, Christian, reactionary, feminist, patriarchal, and even totalitarian values is part of the freedoms and rights enshrined in the constitution.

But in Sweden, whatever values you hold and wish to propagate, you must follow Swedish law. And it is on this point that the kind of criminal clan-based networks that Bäckström Lerneby has so commendably brought to light are especially problematic. What the book shows is that these networks establish, in their local environment, a legal order of their own that in many respects conflicts with Swedish law.

One has to ask why it is journalistic efforts, and not research, that have mapped and illuminated the consequences of this problem. One possible reason for the lack of research may be the field’s ideological and political charge. This may have meant that researchers who wanted to question the established picture of the integration problem — as a structural problem driven by the majority population’s discrimination — have not been able to work in the field.
That there is an ideological and political charge is, I believe, a correct hypothesis when it comes to explaining why so little research is done in this area.

A telling example is the reactions to last year’s report from Brottsförebyggande rådet (the Swedish National Council for Crime Prevention), in which the link between the large migrant influx of 2015 and the subsequent rise in reported sex crimes was discussed. The conclusion of the broad, tentative analyses and uncertain estimates was that the connection is “weak.”

This did not, however, stop Jerzy Sarnecki from claiming in DN that the study “shows that the immigration wave has not affected the number of sex crimes.” Which is remarkable, to say the least, since Brå’s report is not based on data about origin at all!

Men born abroad are heavily over-represented among those convicted of rape in Sweden. This is a fact — and precisely for that reason one may wonder why leading Swedish politicians and crime researchers have not considered it important or particularly interesting to establish statistically the ethnicity of the rapists. The reason invoked — not least by Sarnecki — is that one BELIEVES one knows that the main causal factors are socio-economic, and that a focus on ethnicity would only play into the hands of racism and xenophobia.

This attempted explaining-away is nothing strange or unusual — at least if we are talking about politics and the media. There, reasoning built on limping logic and half-truths is a daily affair. It is more remarkable, and more open to criticism, when researchers too indulge in it.

For most social phenomena there are mechanisms and causal chains that can to a large extent ultimately be traced back to socio-economic factors. This very likely also holds for violent crime, and more specifically for rape. But this by no means implies that, in for example a statistical regression analysis ‘holding constant’ socio-economic variables, one could in any causal sense conjure away, without residue, other important factors such as ethnicity, culture, etc.
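The statistical point can be illustrated with a minimal simulated-data sketch (all numbers are invented and the variables deliberately generic): in a regression with two correlated regressors, ‘holding constant’ the first does not make a genuine effect of the second disappear.

```python
# Minimal simulated-data sketch (all numbers invented): when an outcome really
# depends on two correlated factors, OLS that 'controls for' the first still
# recovers a non-zero coefficient on the second — it is not conjured away.
import random

random.seed(1)
n = 10_000
x1 = [random.gauss(0, 1) for _ in range(n)]        # generic 'factor one'
x2 = [0.6 * a + random.gauss(0, 1) for a in x1]    # correlated 'factor two'
y = [1.0 * a + 0.5 * b + random.gauss(0, 1)        # true model: both matter
     for a, b in zip(x1, x2)]

# OLS on both regressors via the 2x2 normal equations (no intercept needed:
# everything is mean zero by construction)
s11 = sum(a * a for a in x1)
s22 = sum(b * b for b in x2)
s12 = sum(a * b for a, b in zip(x1, x2))
s1y = sum(a * c for a, c in zip(x1, y))
s2y = sum(b * c for b, c in zip(x2, y))
det = s11 * s22 - s12 * s12
b1 = (s22 * s1y - s12 * s2y) / det
b2 = (s11 * s2y - s12 * s1y) / det

print(f"coefficient on factor one (controlled for): {b1:.2f}")  # near 1.0
print(f"coefficient on factor two (the 'other')   : {b2:.2f}")  # near 0.5, not zero
```

The sketch says nothing about which factors matter in any real data set; it only shows that ‘holding constant’ one set of variables is not, by itself, a reason to ignore others.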
And this is the heart of the matter! Socio-economic factors ARE important. But so are other factors. That these may in some sense be perceived as ‘sensitive’ to map is no defence for turning a blind eye to them in scientific contexts — something that ought to be self-evident for all researchers and public officials.

Not least Sarnecki has, over a long period and on repeated occasions, categorically asserted that rapes can only be understood and explained as the result of socio-economic factors. But there are no unambiguous evidence-based research results that could ground this certainty.

Claiming that there may be other ‘explanatory factors’ — such as ethnicity and culture — is branded ‘dangerous.’ This is far from the first time in history that new knowledge, data, and scientific theories have been questioned out of fear that they may have negative societal consequences (Galileo’s and Darwin’s new facts and knowledge about astronomy and evolution were first met with objections and demands for secrecy from the establishment of their day).

‘Facts kick,’ as Gunnar Myrdal used to say. Choosing, out of fear that facts may be misused, to black out information about large and important social problems such as crime and violence is completely unacceptable. It is a betrayal both of society at large and of the people subjected to the crime and violence.

More — not less — facts and knowledge are a precondition for effectively reducing the incidence of violence and crime in our society. A society must have confidence in its citizens’ ability to handle information. The absence of that confidence is something we associate with authoritarian societies. In a democracy, one does not black out information!