Rational expectations — the triumph of ideology over science

29 Nov, 2021 at 11:46 | Posted in Economics | 2 Comments

For more than 20 years, economists were enthralled by so-called “rational expectations” models which assumed that all participants have the same (if not perfect) information and act perfectly rationally, that markets are perfectly efficient, that unemployment never exists (except when caused by greedy unions or government minimum wages), and where there is never any credit rationing.

That such models prevailed, especially in America’s graduate schools, despite evidence to the contrary, bears testimony to a triumph of ideology over science. Unfortunately, students of these graduate programmes now act as policymakers in many countries, and are trying to implement programmes based on the ideas that have come to be called market fundamentalism … Good science recognises its limitations, but the prophets of rational expectations have usually shown no such modesty.

Joseph Stiglitz

Those who want to build macroeconomics on microfoundations usually maintain that the only robust policies and institutions are those based on rational expectations and representative actors. As yours truly has tried to show in On the use and misuse of theories and models in economics, there is really no support for this conviction at all. On the contrary. If we want to have anything of interest to say on real economies, financial crises, and the decisions and choices real people make, it is high time to place macroeconomic models building on representative actors and rational expectations microfoundations in the dustbin of pseudo-science.

For if this microfounded macroeconomics has nothing to say about the real world and the economic problems out there, why should we care about it? The final court of appeal for macroeconomic models is the real world, and as long as no convincing justification is put forward for how the inferential bridging de facto is made, macroeconomic model-building is little more than hand-waving that gives us rather little warrant for making inductive inferences from models to real-world target systems. If substantive questions about the real world are being posed, it is the formalistic-mathematical representations utilized to analyze them that have to match reality, not the other way around.

The real macroeconomic challenge is to accept uncertainty and still try to explain why economic transactions take place — instead of simply conjuring the problem away by assuming rational expectations and treating uncertainty as if it were possible to reduce it to stochastic risk. That is scientific cheating. And it has been going on for too long now.

The Sonnenschein-Mantel-Debreu Theorem

25 Nov, 2021 at 09:50 | Posted in Economics | Leave a comment

SMD theory means that assumptions guaranteeing good behavior at the microeconomic level do not carry over to the aggregate level or to qualitative features of the equilibrium. It has been difficult to make progress on the elaborations of general equilibrium theory that were put forth in Arrow and Hahn 1971 …

Fifteen years after General Competitive Analysis, Arrow (1986) stated that the hypothesis of rationality had few implications at the aggregate level. Kirman (1989) held that general equilibrium theory could not generate falsifiable propositions, given that almost any set of data seemed consistent with the theory. These views are widely shared. Bliss (1993, 227) wrote that the “near emptiness of general equilibrium theory is a theorem of the theory.” Mas-Colell, Michael Whinston, and Jerry Green (1995) titled a section of their graduate microeconomics textbook “Anything Goes: The Sonnenschein-Mantel-Debreu Theorem.”

S. Abu Turab Rizvi

And so what? Why should we care about Sonnenschein-Mantel-Debreu?

Because  Sonnenschein-Mantel-Debreu ultimately explains why New Classical, Real Business Cycles, Dynamic Stochastic General Equilibrium (DSGE) and New ‘Keynesian’ microfounded macromodels are such bad substitutes for real macroeconomic analysis!

These models try to describe and analyze complex and heterogeneous real economies with a single rational-expectations-robot-imitation-representative-agent. That is, with something that has absolutely nothing to do with reality. And — worse still — something that is not even amenable to the kind of general equilibrium analysis that they are thought to give a foundation for, since Hugo Sonnenschein (1972), Rolf Mantel (1976) and Gerard Debreu (1974) unequivocally showed that there exist no conditions by which assumptions on individuals would guarantee either stability or uniqueness of the equilibrium solution. A century and a half after Léon Walras founded neoclassical general equilibrium theory, modern mainstream economics hasn’t been able to show that markets move economies to equilibria. This, if anything, shows that the whole Bourbaki-Debreu project of axiomatizing economics was nothing but a delusion.

You enquire whether or not Walras was supposing that exchanges actually take place at the prices originally proposed when the prices are not equilibrium prices. The footnote which you quote convinces me that he assuredly supposed that they did not take place except at the equilibrium prices … All the same, I shall hope to convince you some day that Walras’ theory and all the others along those lines are little better than nonsense!

Letter from J. M. Keynes to N. Georgescu-Roegen, December 9, 1934

Opting for cloned representative agents that are all identical is of course not a real solution to the fallacy of composition that the Sonnenschein-Mantel-Debreu theorem points to. Representative agent models are — as I have argued at length in my On the use and misuse of theories and models in mainstream economics — rather an evasion whereby issues of distribution, coordination, and heterogeneity are swept under the rug.
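To see in the simplest possible terms what gets swept under the rug, here is a minimal sketch (with purely hypothetical numbers): once households have heterogeneous marginal propensities to consume, aggregate consumption depends on how a given total income is distributed, a compositional fact that a single representative agent with one average MPC cannot, by construction, register.

```python
# A minimal sketch (hypothetical numbers) of the aggregation problem a
# representative agent hides: with heterogeneous marginal propensities to
# consume (MPCs), aggregate consumption depends on how a *given* total
# income is distributed -- something a single-agent model cannot register.

def aggregate_consumption(incomes, mpcs):
    """Sum of household consumption, each household spending mpc * income."""
    return sum(y * c for y, c in zip(incomes, mpcs))

mpcs = [0.9, 0.3]          # a 'hand-to-mouth' household and a wealthy saver

equal_split  = [50, 50]    # total income 100, spread evenly
skewed_split = [20, 80]    # same total income, concentrated at the top

print(aggregate_consumption(equal_split, mpcs))    # 0.9*50 + 0.3*50 = 60.0
print(aggregate_consumption(skewed_split, mpcs))   # 0.9*20 + 0.3*80 = 42.0

# Same aggregate income, very different aggregate demand. A representative
# agent with one average MPC would predict the same consumption in both cases.
```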

Of course, most macroeconomists know that to use a representative agent is a flagrantly illegitimate method of ignoring real aggregation issues. They keep on with their business, nevertheless, just because it significantly simplifies what they are doing. It is more than a little reminiscent of the drunkard who has lost his keys in some dark place and deliberately chooses to look for them under a neighbouring street light just because it is easier to see there …

Economy Studies: A Guide to Rethinking Economics Education

23 Nov, 2021 at 18:04 | Posted in Economics | Leave a comment


La théorie du ruissellement (the trickle-down theory)

20 Nov, 2021 at 13:35 | Posted in Economics | Leave a comment


[Image: Study proves trickle-down didn't trickle | PoliticsNC]

What killed macroeconomics?

19 Nov, 2021 at 16:44 | Posted in Economics | 17 Comments

The COVID-19 pandemic impelled governments to fall back on “fiscal Keynesianism,” because there was no way that just increasing the quantity of money could lead to the reopening of businesses that were prevented by law from doing so. Fiscal Keynesianism in the big lockdown meant issuing Treasury payments to people prevented from working.

But now that the economy has reopened, the practical rationale for monetary and fiscal expansion has disappeared. Mainstream financial commentators believe the economy will bounce back as if nothing had happened. After all, economies fall into foxholes no more often than individuals normally do. So, the time has come to tighten both monetary and fiscal policy, because continued expansion of either or both will lead only to a “surge in inflation.” We can all breathe a sigh of relief; the trauma is over, and normal life without unemployment will resume.

Monetary policy works in theory but not in practice; fiscal policy works in practice but not in theory. Fiscal Keynesianism is still a policy in search of a theory. Acemoglu, Laibson, and List supply a piece of the missing theory when they note that shocks are “hard to predict.” Keynes would have said they are impossible to predict, which is why he rejected the standard view that economies are cyclically stable in the absence of shocks (which is as useless as saying that leaves don’t flutter in the absence of wind).

The supply and demand models that first-year economics students are taught can illuminate the equilibrium path of the hairdressing industry but not of the economy as a whole. Macroeconomics is the child of uncertainty. Unless economists recognize the existence of inescapable uncertainty, there can be no macroeconomic theory, only prudential responses to emergencies.

Robert Skidelsky

Modern macroeconomics — Dynamic Stochastic General Equilibrium, New Synthesis, New Classical and New ‘Keynesian’ — still follows an ‘as if’ logic of denying the existence of genuine uncertainty and treats variables as if drawn from a known ‘data-generating process’ with a known probability distribution that unfolds over time and on which we therefore have access to heaps of historical time-series. If we do not assume that we know the ‘data-generating process’ — if we do not have the ‘true’ model — the whole edifice collapses. And of course, it has to. Who really honestly believes that we have access to this mythical Holy Grail, the data-generating process?

Modern macroeconomics obviously did not anticipate the enormity of the problems that unregulated ‘efficient’ financial markets created. Why? Because it builds on the myth that we know the ‘data-generating process’ and that we can describe the variables of our evolving economies as drawn from an urn containing stochastic probability functions with known means and variances.
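A stylized sketch (all numbers invented) of what that myth costs: estimate the ‘known’ distribution from a calm historical sample, and then let the regime shift, as real economies are wont to do.

```python
# A stylized sketch (all numbers invented) of what goes wrong when we treat an
# evolving economy as draws from a fixed, known 'data-generating process'.
import numpy as np

rng = np.random.default_rng(42)

# 'History': outcomes drawn from a calm regime; we estimate mean and variance
# and -- in the spirit of rational-expectations modelling -- treat them as the
# true, time-invariant parameters of the DGP.
history = rng.normal(loc=0.02, scale=0.05, size=500)
mu_hat, sigma_hat = history.mean(), history.std()

# The 'future': the regime shifts (a crisis), which nothing in the historical
# sample could have revealed.
future = rng.normal(loc=-0.10, scale=0.20, size=50)

# How surprising does the fitted 'DGP' find the worst realized outcome?
worst = future.min()
z = (worst - mu_hat) / sigma_hat
print(f"worst realized outcome: {worst:.2f}, model z-score: {z:.1f}")
# The estimated model treats the realized outcomes as virtually impossible events.
```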

This is like saying that you are going on a holiday trip and that you know that the chance of the weather being sunny is at least 30%, and that this is enough for you to decide whether or not to bring along your sunglasses. You are supposed to be able to calculate the expected utility based on the given probability of sunny weather and make a simple either-or decision. Uncertainty is reduced to risk.

But as Keynes convincingly argued in his monumental Treatise on Probability (1921), this is not always possible. Often we simply do not know. According to one model the chance of sunny weather is perhaps somewhere around 10% and according to another — equally good — model the chance is perhaps somewhere around 40%. We cannot put exact numbers on these assessments. We cannot calculate means and variances. There are no given probability distributions that we can appeal to.
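To make the contrast concrete, here is a toy decision sketch (the payoffs are invented, purely for illustration): with one agreed probability of sun, expected utility delivers a unique answer; with two equally good models that disagree (say 10% and 40%), the ranking flips and the calculus no longer settles anything.

```python
# A toy sketch (payoffs invented) of the risk-vs-uncertainty point above.
# With one agreed probability of sun, expected utility dictates a unique
# 'rational' choice. With two equally good models that disagree, the ranking
# flips and the calculus no longer singles out an answer.

def expected_utility(p_sun, u_sun, u_not_sun):
    return p_sun * u_sun + (1 - p_sun) * u_not_sun

# invented payoffs: (utility if sunny, utility if not sunny)
payoffs = {"bring sunglasses": (4, -2), "leave them home": (-2, 0)}

for p_sun in (0.10, 0.40):   # two 'equally good' weather models
    best = max(payoffs, key=lambda a: expected_utility(p_sun, *payoffs[a]))
    for action, (u_s, u_n) in payoffs.items():
        print(f"p(sun)={p_sun:.2f}  {action:18s} EU={expected_utility(p_sun, u_s, u_n):+.2f}")
    print(f"  -> best action under this model: {best}\n")

# One model says leave the sunglasses, the other says bring them. Reducing the
# situation to a single known probability assumes away exactly this problem.
```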

In the end,​ this is what it all boils down to. We all know that many activities, relations, processes and events are of the Keynesian uncertainty-type. The data do not unequivocally single out one decision as the only ‘rational’ one. Neither the economist, nor the deciding individual, can fully pre-specify how people will decide when facing uncertainties and ambiguities that are ontological facts of the way the world works.

Some macroeconomists, however, still want to be able to use their hammer. So they decide to pretend that the world looks like a nail, and pretend that uncertainty can be reduced to risk. So they construct their mathematical models on that assumption. The result: financial crises and economic havoc.

How much better — and how much smaller the risk of lulling ourselves into the comforting thought that we know everything, that everything is measurable, and that we have everything under control — if we could instead simply admit that we often just do not know, and that we have to live with that uncertainty as best we can.

Fooling people into believing that one can cope with an unknown economic future in a way similar to playing at the roulette wheel is a sure recipe for only one thing — economic catastrophe.

More than economists

18 Nov, 2021 at 16:17 | Posted in Economics | 2 Comments

Veblen, Keynes, and Hirschman were more than economists because they practiced their economics from a standpoint outside the profession, using it to criticize not only the assumption of rational self-interest, but also the consequences of economists’ indifference to “preferences.” Veblen’s standpoint was explicitly religious; he was still of a believing generation. Keynes, too, was an ethicist. G.E. Moore’s Principia Ethica remained what he called his “religion under the surface.” Hirschman wanted a “moral social science” that would be continually sensitive to the ethical content of its analysis …

These three economists’ frequently mocking style was their way of establishing their distance from their profession. Their irony was not ornamental but actually shaped the substance of their arguments. This style limited their impact on economics, but made them highly influential outside it, because critics of economics sensed something transgressive about them.

Systematic thinkers close a subject, leaving their followers with “normal” science to fill up the learned journals. Fertile ones open up their disciplines to critical scrutiny, for which they rarely get credit.

Robert Skidelsky

Rethinking economics

15 Nov, 2021 at 17:53 | Posted in Economics | 7 Comments

The incorporation of new information makes sense only if the future is to be similar to the past. Any kind of empirical test, whatever form it adopts, will not make sense, however, if the world is uncertain because in such a world induction does not work. Past experience is not a useful guide to guess the future in these conditions (it only serves when the future, somehow, is already implicit in the present) … I believe the only way to use past experience is to assume that the world is repetitive. In a non-repetitive world in which relevant novelties unexpectedly arise testing is irrelevant …

Conceiving economic processes like sequences of events in which uncertainty reigns, where consequently there are “no laws”, nor “invariants” or “mechanisms” to discover, the kind of learning that experiments or last experience provide is of no use for the future, because it eliminates innovation and creativity and does not take into account the arboreal character and the open-ended nature of the economic process … However, as said before, we can gather precise information, restricted in space and time (data). But, what is the purpose of obtaining this sort of information if uncertainty about future events prevails? … The problem is that taking uncertainty seriously puts in question the relevance the data obtained by means of testing or experimentation has for future situations.

Marqués’ book is a serious challenge to much of mainstream economic thinking and its methodological and philosophical underpinnings. A must-read for anyone interested in the foundations of economic theory, showing how far-reaching the effects of taking Keynes’ concept of genuine uncertainty really are.

Science according to Keynes should help us penetrate to “the true process of causation lying behind current events” and disclose “the causal forces behind the apparent facts.” Models can never be more than a starting point in that endeavour. He further argued that it was inadmissible to project history on the future. Consequently, we cannot presuppose that what has worked before, will continue to do so in the future. That statistical models can get hold of correlations between different ‘variables’ is not enough. If they cannot help us get at the causal structure that generated the data, they are not really ‘identified.’
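A small simulated sketch of that identification point (generic and illustrative, not Keynes’ own example): a common cause can generate a strong correlation between two variables even when neither has any causal effect on the other, and a naive regression will cheerfully report a ‘relation’ all the same.

```python
# A small simulated sketch (purely illustrative) of why correlations alone do
# not identify causal structure: here Z causes both X and Y, X has *no* causal
# effect on Y, yet X and Y are strongly correlated.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

Z = rng.normal(size=n)              # common cause behind the 'apparent facts'
X = 2.0 * Z + rng.normal(size=n)    # X driven by Z
Y = 3.0 * Z + rng.normal(size=n)    # Y driven by Z -- not by X

print("corr(X, Y) =", round(float(np.corrcoef(X, Y)[0, 1]), 2))
# OLS slope of Y on X (ignoring Z): cov(X, Y) / var(X)
slope = float(np.cov(X, Y)[0, 1] / np.var(X))
print("naive regression slope =", round(slope, 2))   # ~1.2, though the true causal effect is 0
```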

How strange then that economics textbooks do not even touch upon these aspects of scientific methodology that seem to be so fundamental and important for anyone trying to understand how we learn and orient ourselves in an uncertain world! An educated guess on why this is so would be that Keynes’ concepts are not possible to squeeze into a single calculable numerical ‘probability.’ In the quest for quantities one turns a blind eye to qualities, looks the other way, and hopes people will forget about Keynes’ fundamental insight.

Robert Lucas once wrote — in Studies in Business-Cycle Theory — that “in cases of uncertainty, economic reasoning will be of no value.” Now, if that were true, it would put us in a tough dilemma. If we have to consider — as Lucas did — uncertainty incompatible with economics being a science, and we actually know for sure that there are several and deeply important situations in real-world contexts where we — both epistemologically and ontologically — face genuine uncertainty, well, then we would actually have to choose between reality and science.

That can’t be right. We all know we do not know very much about the future. We all know the future harbours lots of unknown unknowns. Those are ontological facts we just have to accept — and still go for both reality and science, in developing a realist and relevant economic science.

Making sense of economics

9 Nov, 2021 at 10:31 | Posted in Economics | Leave a comment

Robert Lucas, one of the most creative model-builders, tells a story about his undergraduate encounter with Gregor Mendel’s model of genetic inheritance. He liked the Mendelian model—“you could work out predictions that would surprise you”—though not the lab work breeding fruit flies to test it. (Economists are not big on mucking around in the real world.) Over the weekend, he enjoyed writing a paper comparing the model’s predictions with the class’s experimental results. When a friend returned from a weekend away without having written the required paper, Lucas agreed to let the friend borrow from his. The friend remarked that Lucas had forgotten to discuss how “crossing-over” could explain the substantial discrepancies between the model and experimental results. “Crossing-over is b—s—,” Lucas told his friend, a “label for our ignorance.” He kept his paper’s focus on the unadorned Mendelian model, and added a section arguing that experimental errors could explain the discrepancies. His friend instead appended a section on crossing-over. His friend got an A. Lucas got a C-minus, with a comment: “This is a good report, but you forgot about crossing-over.” Crossing-over is actually a fact; it occurs when a portion of one parent gene is incorporated in the other parent gene. But Lucas’s anecdote brilliantly illustrates the powerful temptation to model-builders—across the ideological spectrum—of ignoring inconvenient facts that don’t fit their models.

Economics may be an informative tool for research. But if its practitioners do not investigate and make an effort to provide a justification for the credibility of the assumptions on which they erect their building, it will not fulfil its task. There is a gap between its aspirations and its accomplishments, and without more supportive evidence to substantiate its claims, critics will continue to consider its ultimate arguments as a mixture of rather unhelpful metaphors and metaphysics.

The marginal return on its ever higher technical sophistication in no way makes up for the lack of serious under-labouring of its deeper philosophical and methodological foundations.

A rigorous application of economic methods really presupposes that the phenomena of our real-world economies are ruled by stable causal relations. Unfortunately, real-world social systems are usually not governed by stable causal relations and mechanisms. The kinds of ‘laws’ and relations that economics has established are laws and relations about entities in models that usually presuppose causal mechanisms being invariant, atomistic and additive. But when causal mechanisms operate in the real world, they do so only in ever-changing and unstable combinations where the whole is more than a mechanical sum of parts. If economic regularities obtain, they do so as a rule only because we engineered them for that purpose. Outside man-made ‘nomological machines’ they are rare, or even non-existent.

20 références d’économie incontournables (20 essential economics references)

6 Nov, 2021 at 12:26 | Posted in Economics | Leave a comment


Norbert Häring und das Endspiel des Kapitalismus

4 Nov, 2021 at 17:35 | Posted in Economics | Comments Off on Norbert Häring und das Endspiel des Kapitalismus


Wealth inequality explained

4 Nov, 2021 at 13:05 | Posted in Economics | Comments Off on Wealth inequality explained


The logic of financial markets

2 Nov, 2021 at 13:09 | Posted in Economics | 3 Comments

Professional investment may be likened to those newspaper competitions in which the competitors have to pick out the six prettiest faces from a hundred photographs, the prize being awarded to the competitor whose choice most nearly corresponds to the average preferences of the competitors as a whole; so that each competitor has to pick not those faces which he himself finds prettiest, but those which he thinks likeliest to catch the fancy of the other competitors, all of whom are looking at the problem from the same point of view. It is not a case of choosing those which, to the best of one’s judgement are really the prettiest, nor even those which average opinion genuinely thinks the prettiest. We have reached the third degree where we devote our intelligences to anticipating what average opinion expects the average opinion to be. And there are some, I believe, who practice the fourth, fifth and higher degrees.

J M Keynes General Theory

Still the best description of the logic of financial markets. Professional money management is at heart a guessing game where investors try to guess what other investors guess about what other investors guess about the future …
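Keynes’ higher ‘degrees’ have a well-known textbook formalisation in the ‘guess 2/3 of the average’ game. A minimal sketch (the numbers and the depth of reasoning are purely illustrative):

```python
# A minimal sketch of the 'beauty contest' logic in its textbook form, the
# guess-2/3-of-the-average game (illustrative numbers, not a model of any
# actual market).

def best_guess(prior_average, p=2/3, levels=1):
    """Guess after `levels` rounds of reasoning about what others will guess."""
    guess = prior_average
    for _ in range(levels):
        guess *= p          # each level anticipates the others shading their guess
    return guess

naive_average = 50          # what a crowd picking 'at random' from 0-100 averages
for k in range(6):
    print(f"level-{k} reasoning: guess {best_guess(naive_average, levels=k):5.1f}")
# 50.0, 33.3, 22.2, 14.8, ... -> ever-higher 'degrees' of anticipating average
# opinion push the guess downward; success depends on how far *others* go.
```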

On Diane Coyle’s Cogs and Monsters

30 Oct, 2021 at 19:10 | Posted in Economics | Comments Off on On Diane Coyle’s Cogs and Monsters

Macroeconomists seem to me the biggest offenders in not taking such empirical issues (of practical data handling) seriously enough. This might sound like sheer contrarianism given that macroeconomists are constantly wielding data; after all, their business is analysing the behaviour of the whole economy and forecasting its future path. My concerns are, first, that too few think about the vast uncertainty associated with the statistics they download and use; and secondly, how difficult it is to draw definitive conclusions about economy-wide phenomena, the aggregated outcomes of choice made by millions of businesses and consumers interacting in specific historical and geographic contexts, and social and political relations.

There’s a lot in this new book by Diane Coyle that I like, and I highly recommend reading it.

Unfortunately, there are also some things in it I find very hard to swallow.

A recurrent theme in the book — as in her earlier The Soulful Science (2010) — is Coyle’s view that much of the critique levelled against mainstream economics by heterodox economists like yours truly and others is more or less of a straw-man kind, and that we haven’t really understood the fact that economics “has changed a lot in two decades.”

One example she refers to — to underpin her view — is the development of the ‘new’ behavioural, ‘experimental,’ and ’empirical turn’ in economics.

So let’s take a look at that and what some of us ‘heterodox’ economists really have had to say about it.

Coyle — like many other more or less mainstream economists nowadays — seems to maintain that the empirical methods developed within economics — natural experiments, field experiments, RCTs — help us to answer important economic questions. I beg to differ. When looked at carefully, there are in fact few real reasons to share the optimism about this ‘empirical turn’ in economics.

Field studies and experiments face the same basic problem as theoretical models — they are built on rather artificial conditions and have difficulties with the ‘trade-off’ between internal and external validity. The more artificial the conditions, the greater the internal validity, but also the less the external validity. The more we rig experiments/field studies/models to avoid ‘confounding factors’, the less the conditions are reminiscent of the real ‘target system.’ You could of course discuss field studies vs. experiments vs. theoretical models in terms of realism — but the nodal issue is not about that, but basically about how economists using different isolation strategies in different ‘nomological machines’ attempt to learn about causal relationships. I have strong doubts about the generalizability of all three research strategies, because the probability is high that causal mechanisms are different in different contexts, and lack of homogeneity/stability/invariance doesn’t give us warranted export licenses to the ‘real’ societies or economies.

By this, I do not mean to say that empirical methods per se are so problematic that they can never be used. On the contrary, I am basically — though not without reservations — in favour of the increased use of experiments and field studies within economics. Not least as an alternative to completely barren, ‘bridgeless’ axiomatic-deductive theory models. My criticism is more about aspiration levels and what we believe we can achieve with our mediational epistemological tools and methods in the social sciences.

Limiting model assumptions in economic science always have to be closely examined. If we are going to be able to show that the mechanisms or causes that we isolate and handle in our models are stable, in the sense that they do not change when we ‘export’ them to our ‘target systems,’ we have to be able to show that they do not hold only under ceteris paribus conditions, since in that case they are a fortiori of limited value to our understanding, explanations or predictions of real economic systems.

Real-world social systems are not governed by stable causal mechanisms or capacities. The kinds of ‘laws’ and relations that econometrics has established, are laws and relations about entities in models that presuppose causal mechanisms being atomistic and additive. When causal mechanisms operate in real-world social target systems they only do it in ever-changing and unstable combinations where the whole is more than a mechanical sum of parts. If economic regularities obtain they do it (as a rule) only because we engineered them for that purpose. Outside man-made ‘nomological machines’ they are rare, or even non-existent.

Taking assumptions like utility maximization or market equilibrium as a matter of course leads to the ‘standing presumption in economics that, if an empirical statement is deduced from standard assumptions then that statement is reliable’ …

The ongoing importance of these assumptions is especially evident in those areas of economic research, where empirical results are challenging standard views on economic behaviour like experimental economics or behavioural finance … From the perspective of Model-Platonism, these research-areas are still framed by the ‘superior insights’ associated with early 20th century concepts, essentially because almost all of their results are framed in terms of rational individuals, who engage in optimizing behaviour and, thereby, attain equilibrium. For instance, the attitude to explain cooperation or fair behaviour in experiments by assuming an ‘inequality aversion’ integrated in (a fraction of) the subjects’ preferences is strictly in accordance with the assumption of rational individuals, a feature which the authors are keen to report …

So, while the mere emergence of research areas like experimental economics is sometimes deemed a clear sign for the advent of a new era … a closer look at these fields allows us to illustrate the enduring relevance of the Model-Platonism-topos and, thereby, shows the pervasion of these fields with a traditional neoclassical style of thought.

Jakob Kapeller

Contrary to Coyle’s optimism, I would argue that although different ’empirical’ approaches have been — more or less — integrated into mainstream economics, there is still a long way to go before economics has become a truly empirical science.

Almost all the change and diversity that takes place in mainstream economics today only takes place within the analytic-formalistic modelling strategy that makes up the core of mainstream economics. All the flowers that do not live up to the precepts of the mainstream methodological canon are pruned. You’re free to take your analytical formalist models and apply them to whatever you want – as long as you do it using a modelling methodology acceptable to the mainstream. If you do not follow this particular mathematical-deductive analytical formalism you’re not even considered to be doing economics. “If it isn’t modelled, it isn’t economics.” This isn’t pluralism. It’s a methodological reductionist straitjacket.

No matter how many thousands of models mainstream economists come up with, as long as they are just axiomatic variations of the same old mathematical-deductive ilk, they will not take us one single inch closer to giving us relevant and usable means to further our understanding and explanation of real economies.

So — in conclusion — it is not that heterodox critics haven’t noticed the development in mainstream economics that has taken place during the past 20-30 years. We have noticed — and understood that it still builds far too much on the same old neoclassical straitjacket methodology.

Why economists should prefer accurate imprecision to inaccurate precision

28 Oct, 2021 at 18:36 | Posted in Economics | Comments Off on Why economists should prefer accurate imprecision to inaccurate precision

Microfounded DSGE models standardly assume rational expectations, Walrasian market clearing, unique equilibria, time invariance, linear separability and homogeneity of both inputs/outputs and technology, infinitely lived intertemporally optimizing representative household/consumer/producer agents with homothetic and identical preferences, etc., etc. At the same time the models standardly ignore complexity, diversity, uncertainty, coordination problems, non-market clearing prices, real aggregation problems, emergence, expectations formation, etc., etc.

Behavioural and experimental economics — not to speak of psychology — show beyond any doubt that “deep parameters” — people’s preferences, choices and forecasts — are regularly influenced by those of other participants in the economy. And how about the homogeneity assumption? And if all actors are the same — why and with whom do they transact? And why does economics have to be exclusively teleological (concerned with intentional states of individuals)? Where are the arguments for that ontological reductionism? And what about collective intentionality and constitutive background rules?

These are all justified questions — so, in what way can one maintain that these models give workable microfoundations for macroeconomics? Science philosopher Nancy Cartwright gives a good hint at how to answer that question:

Our assessment of the probability of effectiveness is only as secure as the weakest link in our reasoning to arrive at that probability. We may have to ignore some issues or make heroic assumptions about them. But that should dramatically weaken our degree of confidence in our final assessment. Rigor isn’t contagious from link to link. If you want a relatively secure conclusion coming out, you’d better be careful that each premise is secure going in.

Avoiding logical inconsistencies is crucial in all science. But it is not enough. Just as important is avoiding factual inconsistencies. And without showing — or at least warrantedly arguing — that the assumptions and premises of their models are in fact true, mainstream economists aren’t really reasoning, but only playing games. Formalistic deductive ‘Glasperlenspiel’ can be very impressive and seductive. But in the realm of science it ought to be considered of little or no value to simply make claims about the model and lose sight of reality.
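Cartwright’s ‘weakest link’ point can be put in back-of-the-envelope numbers (the figures below are invented, purely to illustrate how quickly chained premises erode a conclusion’s security):

```python
# A back-of-the-envelope sketch (invented numbers) of Cartwright's point that
# rigor is not 'contagious' from link to link: even fairly secure premises,
# chained together, leave the conclusion far less secure than any single link.

premise_confidence = 0.9     # assumed credibility of each individual premise
for n_premises in (1, 3, 5, 10):
    if_independent = premise_confidence ** n_premises
    worst_case = max(0.0, 1 - n_premises * (1 - premise_confidence))  # Bonferroni-style bound
    print(f"{n_premises:2d} premises: {if_independent:.2f} if independent, "
          f">= {worst_case:.2f} in the worst case")
# Ten 'fairly secure' premises give a conclusion secured at roughly 0.35 under
# independence, and possibly not at all in the worst case.
```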

Instead of making the model the message, I think we are better served by economists who more than anything else try to contribute to solving real problems. And then the motto of John Maynard Keynes is more valid than ever:

It is better to be vaguely right than precisely wrong

Of what use are RCTs?

27 Oct, 2021 at 16:20 | Posted in Economics | Comments Off on Of what use are RCTs?

In her interesting Pufendorf lectures Nancy Cartwright presents a theory of evidence and explains why randomized controlled trials (RCTs) are not at all the “gold standard” they have lately often been portrayed as. As yours truly has repeatedly argued on this blog (e.g. here and here), RCTs usually do not provide evidence that their results are exportable to other target systems. The almost religious belief with which their advocates portray them cannot hide the fact that RCTs cannot be taken for granted to give generalizable results. That something works somewhere is no guarantee that it will work for us, or even that it works generally.
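A stylized simulation (all numbers invented) of that export-warrant problem: when the treatment effect varies with a background characteristic that is distributed differently in the trial and target populations, the RCT’s perfectly unbiased average effect ‘there’ tells us next to nothing about the effect ‘here’.

```python
# A stylized simulation (all numbers invented) of the export-warrant problem:
# an RCT gives an unbiased average effect *for its own trial population*, but
# when the effect varies with a background characteristic that is distributed
# differently in the target population, 'it worked there' says little about
# whether it will work here.
import numpy as np

rng = np.random.default_rng(1)

def average_effect(p_responders):
    """Average treatment effect in a population where a share p_responders has
    the characteristic that makes the treatment effective (+2); for the rest
    the effect is harmful (-1)."""
    responders = rng.random(100_000) < p_responders
    return np.where(responders, 2.0, -1.0).mean()

print("trial population  (80% responders):", round(average_effect(0.8), 2))  # ~ +1.4
print("target population (20% responders):", round(average_effect(0.2), 2))  # ~ -0.4
```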

