Old love never rusts …
The search-and-matching model works like this: one class of agents, “workers,” is either searching for a job or employed while another class of agents, “firms,” is either vacant, meaning it has a vacancy posted, or filled, in which case it employs a worker and together they hum along productively. In the earliest formulation of the model, the only economic decision made by either agent was whether a firm would post a vacancy in an attempt to match with a worker or would choose not to, thus remaining inactive. Otherwise, both searching workers and vacant firms mindlessly wait until they match up, then commence a productive relationship that lasts until their match spontaneously dissolves. Then the worker goes back to searching and the firm makes its decision about whether to post a vacancy or not once again.
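The waiting-and-matching story above can be sketched as a simple flow equation for the unemployment rate: separations push workers into search, matches pull them out, and the two flows balance at u* = s/(s + f). A minimal sketch, where the separation rate s, job-finding rate f, and starting unemployment rate are illustrative assumptions of mine, not values from any particular calibration:

```python
def unemployment_path(s=0.02, f=0.30, u0=0.10, periods=200):
    """Iterate u' = u + s*(1 - u) - f*u: inflows from dissolved matches,
    outflows from searchers who meet posted vacancies."""
    u = u0
    path = [u]
    for _ in range(periods):
        u = u + s * (1 - u) - f * u
        path.append(u)
    return path

path = unemployment_path()
steady_state = 0.02 / (0.02 + 0.30)  # u* = s / (s + f)
print(round(path[-1], 4), round(steady_state, 4))  # 0.0625 0.0625
```

Richer versions make the job-finding rate endogenous through firms' vacancy-posting decision, which is exactly the one economic choice the earliest formulation allowed.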
Search-and-matching labor market models have something to say about the recent history of the labor market because they are a good deal richer than the simple search-based story, which is why Federal Reserve Bank of Richmond economist Karthik Athreya, whose book sparked this discussion, said that “search is not really about searching.” Specifically, they allow for alternative theories of wage-setting, a factual timeline for unemployment spells and what determines their duration, a rich set of labor market outcomes beyond employment and unemployment, and an implementable notion of power that is often a critical missing piece of economic modeling.
Like all economic models, the search-and-matching model is a simplification, even a ludicrous one … So do we search theorists have a big problem?
Notice that my summary of the model left out one big thing: how the fruits of the productive employment relationship are split between the worker and the firm. This is by far the biggest controversy in the field of search theory. The assumption made by the earliest search-and-matching models is that the “surplus” the two agents generate is split between the parties in the optimal way, where the optimality concept is defined within the model but with some relationship to a more general intuition about what each party would want (that optimal way is known as the “Nash Bargain” after the Nobel-Prize-winning mathematical theorist John Nash) …
The problem is that this theory of wage setting is an empirical disaster. Not only is it inconsistent with investigations into how actual wages are actually set, but it generates false predictions about unemployment spells that crucially fail to line up with what happens to unemployment during recessions (it goes up, and it stays high for a long time, when the optimal theory of wages says that wages should do the adjusting). Refinements that make the theory with Nash Bargaining consistent with the data on unemployment in recessions yield their own big empirical problem: those refinements imply that being unemployed isn’t really that bad for workers, which everyone who is sentient knows to be untrue.
So the search-and-matching model has a crazy theory about how wages are set, and that makes it a crazy model of how labor markets work, right? No. What the search-and-matching theory has, and what its alternatives lack for the most part, is indeterminacy about how wages are set. The “Nash Bargain” theory is optimal, but it’s not necessary—other wage-setting assumptions can be used to resolve the indeterminacy. And if economists can get their minds around the idea that the “market solution” is not always optimal then they can make real headway with the search-and-matching approach precisely because it’s consistent with those alternatives.
[h/t Brad DeLong & Dwayne Woods]
Advocates for choice-based solutions should take a look at what’s happened to schools in Sweden, where parents and educators would be thrilled to trade their country’s steep drop in PISA scores over the past 10 years for America’s middling but consistent results. What’s caused the recent crisis in Swedish education? Researchers and policy analysts are increasingly pointing the finger at many of the choice-oriented reforms that are being championed as the way forward for American schools. While this doesn’t necessarily mean that adding more accountability and discipline to American schools would be a bad thing, it does hint at the many headaches that can come from trying to do so by aggressively introducing marketlike competition to education.
There are differences between the libertarian ideal espoused by Friedman and the actual voucher program the Swedes put in place in the early ’90s … But Swedish school reforms did incorporate the essential features of the voucher system advocated by Friedman. The hope was that schools would have clear financial incentives to provide a better education and could be more responsive to customer (i.e., parental) needs and wants when freed from the burden imposed by a centralized bureaucracy …
But in the wake of the country’s nose dive in the PISA rankings, there’s widespread recognition that something’s wrong with Swedish schooling … Competition was meant to discipline government schools, but it may have instead led to a race to the bottom …
It’s the darker side of competition that Milton Friedman and his free-market disciples tend to downplay: If parents value high test scores, you can compete for voucher dollars by hiring better teachers and providing a better education—or by going easy in grading national tests. Competition was also meant to discipline government schools by forcing them to up their game to maintain their enrollments, but it may have instead led to a race to the bottom as they too started grading generously to keep their students …
Maybe the overall message is … “there are no panaceas” in public education. We tend to look for the silver bullet—whether it’s the glories of the market or the techno-utopian aspirations of education technology—when in fact improving educational outcomes is a hard, messy, complicated process. It’s a lesson that Swedish parents and students have learned all too well: Simply opening the floodgates to more education entrepreneurs doesn’t disrupt education. It’s just plain disruptive.
[h/t Jan Milch]
James Heckman, winner of the “Nobel Prize” in economics (2000), did an interview with John Cassidy in 2010. It’s an interesting read (Cassidy’s words in italics):
What about the rational-expectations hypothesis, the other big theory associated with modern Chicago? How does that stack up now?
I could tell you a story about my friend and colleague Milton Friedman. In the nineteen-seventies, we were sitting in the Ph.D. oral examination of a Chicago economist who has gone on to make his mark in the world. His thesis was on rational expectations. After he’d left, Friedman turned to me and said, “Look, I think it is a good idea, but these guys have taken it way too far.”
It became a kind of tautology that had enormously powerful policy implications, in theory. But the fact is, it didn’t have any empirical content. When Tom Sargent, Lars Hansen, and others tried to test it using cross equation restrictions, and so on, the data rejected the theories. There was a certain section of people who really got carried away. It became quite stifling.
What about Robert Lucas? He came up with a lot of these theories. Does he bear responsibility?
Well, Lucas is a very subtle person, and he is mainly concerned with theory. He doesn’t make a lot of empirical statements. I don’t think Bob got carried away, but some of his disciples did. It often happens. The further down the food chain you go, the more the zealots take over.
What about you? When rational expectations was sweeping economics, what was your reaction to it? I know you are primarily a micro guy, but what did you think?
What struck me was that we knew Keynesian theory was still alive in the banks and on Wall Street. Economists in those areas relied on Keynesian models to make short-run forecasts. It seemed strange to me that they would continue to do this if it had been theoretically proven that these models didn’t work.
What about the efficient-markets hypothesis? Did Chicago economists go too far in promoting that theory, too?
Some did. But there is a lot of diversity here. You can go office to office and get a different view.
[Heckman brought up the memoir of the late Fischer Black, one of the founders of the Black-Scholes option-pricing model, in which he says that financial markets tend to wander around, and don’t stick closely to economics fundamentals.]
[Black] was very close to the markets, and he had a feel for them, and he was very skeptical. And he was a Chicago economist. But there was an element of dogma in support of the efficient-market hypothesis. People like Raghu [Rajan] and Ned Gramlich [a former governor of the Federal Reserve, who died in 2007] were warning something was wrong, and they were ignored. There was sort of a culture of efficient markets—on Wall Street, in Washington, and in parts of academia, including Chicago.
What was the reaction here when the crisis struck?
Everybody was blindsided by the magnitude of what happened. But it wasn’t just here. The whole profession was blindsided. I don’t think Joe Stiglitz was forecasting a collapse in the mortgage market and large-scale banking collapses.
So, today, what survives of the Chicago School? What is left?
I think the tradition of incorporating theory into your economic thinking and confronting it with data—that is still very much alive. It might be in the study of wage inequality, or labor supply responses to taxes, or whatever. And the idea that people respond rationally to incentives is also still central. Nothing has invalidated that—on the contrary.
So, I think the underlying ideas of the Chicago School are still very powerful. The basis of the rocket is still intact. It is what I see as the booster stage—the rational-expectation hypothesis and the vulgar versions of the efficient-markets hypothesis that have run into trouble. They have taken a beating—no doubt about that. I think that what happened is that people got too far away from the data, and confronting ideas with data. That part of the Chicago tradition was neglected, and it was a strong part of the tradition.
When Bob Lucas was writing that the Great Depression was people taking extended vacations—refusing to take available jobs at low wages—there was another Chicago economist, Albert Rees, who was writing in the Chicago Journal saying, No, wait a minute. There is a lot of evidence that this is not true.
Milton Friedman—he was a macro theorist, but he was less driven by theory and by the desire to construct a single overarching theory than by attempting to answer empirical questions. Again, if you read his empirical books they are full of empirical data. That side of his legacy was neglected, I think.
When Friedman died, a couple of years ago, we had a symposium for the alumni devoted to the Friedman legacy. I was talking about the permanent income hypothesis; Lucas was talking about rational expectations. We have some bright alums. One woman got up and said, “Look at the evidence on 401k plans and how people misuse them, or don’t use them. Are you really saying that people look ahead and plan ahead rationally?” And Lucas said, “Yes, that’s what the theory of rational expectations says, and that’s part of Friedman’s legacy.” I said, “No, it isn’t. He was much more empirically minded than that.” People took one part of his legacy and forgot the rest. They moved too far away from the data.
Yes indeed, they certainly “moved too far away from the data.”
In one of the better-known and most highly respected evaluation reviews, Michael Lovell (1986) concluded:
it seems to me that the weight of empirical evidence is sufficiently strong to compel us to suspend belief in the hypothesis of rational expectations, pending the accumulation of additional empirical evidence.
And this is how Nikolay Gertchev summarizes studies on the empirical correctness of the hypothesis:
More recently, it even has been argued that the very conclusions of dynamic models assuming rational expectations are contrary to reality: “the dynamic implications of many of the specifications that assume rational expectations and optimizing behavior are often seriously at odds with the data” (Estrella and Fuhrer 2002, p. 1013). It is hence clear that if taken as an empirical behavioral assumption, the RE hypothesis is plainly false; if considered only as a theoretical tool, it is unfounded and self-contradictory.
For even more on the issue, permit me to self-indulgently recommend reading my article Rational expectations — a fallacious foundation for macroeconomics in a non-ergodic world in real-world economics review no. 62.
In recent years, a great deal of political energy has gone into trying to rein in the free schools’ room for manoeuvre and, one must probably say, their harmful effects … Could it even be that the slump in school results can be tied, at least in part, to this revolution in the schools’ governance?
In a government where the Moderates are the largest party and their values and ideals dominate, there must be no doubt on this point. Nor is there. Jan Björklund is an uncompromising defender of the free-school reform …
Jan Björklund was lashed to the mast so that he would not be able to heed the sirens’ song about another world being possible. One where we start over and try to cooperate. Where the school is our common task, as it once was in a brighter society where the Liberal People’s Party (FP) took part in building the world’s best school …
It should not be impossible. But with Jan Björklund it was in fact precisely that: impossible.
Sörlin’s ten-page analysis in Magasinet Arena of Jan Björklund’s time as Swedish minister for schools is an absolute “must read”!
Frequentist hypothesis testing has come under sustained and vigorous attack in recent years … But there are a couple of good things about Frequentist hypothesis testing that I haven’t seen many people discuss. Both of these have to do not with the formal method itself, but with social conventions associated with the practice …
Why do I like these social conventions? Two reasons. First, I think they cut down a lot on scientific noise. “Statistical significance” is sort of a first-pass filter that tells you which results are interesting and which ones aren’t. Without that automated filter, the entire job of distinguishing interesting results from uninteresting ones falls to the reviewers of a paper, who have to read through the paper much more carefully than if they can just scan for those little asterisks of “significance”.
A non-trivial part of teaching statistics is made up of teaching students to perform significance testing. A problem I have noticed repeatedly over the years, however, is that no matter how careful you try to be in explicating what the probabilities generated by these statistical tests – p-values – really are, most students still misinterpret them. And a lot of researchers obviously fall prey to the same mistakes:
Are women three times more likely to wear red or pink when they are most fertile? No, probably not. But here’s how hardworking researchers, prestigious scientific journals, and gullible journalists have been fooled into believing so.
The paper I’ll be talking about appeared online this month in Psychological Science, the flagship journal of the Association for Psychological Science, which represents the serious, research-focused (as opposed to therapeutic) end of the psychology profession.
“Women Are More Likely to Wear Red or Pink at Peak Fertility,” by Alec Beall and Jessica Tracy, is based on two samples: a self-selected sample of 100 women from the Internet, and 24 undergraduates at the University of British Columbia. Here’s the claim: “Building on evidence that men are sexually attracted to women wearing or surrounded by red, we tested whether women show a behavioral tendency toward wearing reddish clothing when at peak fertility. … Women at high conception risk were more than three times more likely to wear a red or pink shirt than were women at low conception risk. … Our results thus suggest that red and pink adornment in women is reliably associated with fertility and that female ovulation, long assumed to be hidden, is associated with a salient visual cue.”
Pretty exciting, huh? It’s (literally) sexy as well as being statistically significant. And the difference is by a factor of three—that seems like a big deal.
Really, though, this paper provides essentially no evidence about the researchers’ hypotheses …
The way these studies fool people is that they are reduced to sound bites: Fertile women are three times more likely to wear red! But when you look more closely, you see that there were many, many possible comparisons in the study that could have been reported, with each of these having a plausible-sounding scientific explanation had it appeared as statistically significant in the data.
The standard in research practice is to report a result as “statistically significant” if its p-value is less than 0.05; that is, if there is less than a 1-in-20 chance that the observed pattern in the data would have occurred if there were really nothing going on in the population. But of course if you are running 20 or more comparisons (perhaps implicitly, via choices involved in including or excluding data, setting thresholds, and so on), it is not a surprise at all if some of them happen to reach this threshold.
The headline result, that women were three times as likely to be wearing red or pink during peak fertility, occurred in two different samples, which looks impressive. But it’s not really impressive at all! Rather, it’s exactly the sort of thing you should expect to see if you have a small data set and virtually unlimited freedom to play around with the data, and with the additional selection effect that you submit your results to the journal only if you see some catchy pattern. …
Statistics textbooks do warn against multiple comparisons, but there is a tendency for researchers to consider any given comparison alone without considering it as one of an ensemble of potentially relevant responses to a research question. And then it is natural for sympathetic journal editors to publish a striking result without getting hung up on what might be viewed as nitpicking technicalities. Each person in this research chain is making a decision that seems scientifically reasonable, but the result is a sort of machine for producing and publicizing random patterns.
There’s a larger statistical point to be made here, which is that as long as studies are conducted as fishing expeditions, with a willingness to look hard for patterns and report any comparisons that happen to be statistically significant, we will see lots of dramatic claims based on data patterns that don’t represent anything real in the general population. Again, this fishing can be done implicitly, without the researchers even realizing that they are making a series of choices enabling them to over-interpret patterns in their data.
Indeed. If anything, this underlines how important it is not to equate science with statistical calculation. All science entails human judgement, and using statistical models doesn’t relieve us of that necessity. When working with misspecified models, the scientific value of significance testing is actually zero – even though you’re making valid statistical inferences! Statistical models and concomitant significance tests are no substitute for doing real science. Or as a noted German philosopher once famously wrote:
There is no royal road to science, and only those who do not dread the fatiguing climb of its steep paths have a chance of gaining its luminous summits.
Statistical significance doesn’t say that something is important or true. Since there already are far better and more relevant tests that can be done (see e.g. here and here), it is high time to consider what should be the proper function of what has now really become a statistical fetish. Given that it is in any case very unlikely that any population parameter is exactly zero, and that, contrary to assumption, most samples in social science and economics are not random and do not have the right distributional shape – why continue to press students and researchers to do null hypothesis significance testing, testing that relies on a weird backward logic that students and researchers usually don’t understand?
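The 1-in-20 arithmetic behind the multiple-comparisons worry quoted above is easy to check. A minimal sketch (the 20-comparison count comes from the quoted passage; the simulation size is an arbitrary choice of mine):

```python
import random

# With 20 independent comparisons of pure noise, each "significant" with
# probability 0.05, the chance that at least one clears the bar:
analytic = 1 - 0.95 ** 20
print(round(analytic, 3))  # 0.642

# Monte Carlo check of the same number.
random.seed(1)
trials = 100_000
hits = sum(
    any(random.random() < 0.05 for _ in range(20))
    for _ in range(trials)
)
print(round(hits / trials, 2))  # close to the analytic value
```

So a dataset offering twenty implicit comparisons will throw up a “significant” pattern roughly two times in three even when nothing at all is going on in the population.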
Suppose that we as educational reformers have a hypothesis that implementing a voucher system would raise the mean test results by 100 points (null hypothesis). Instead, when sampling, it turns out it raises them by only 75 points, with a standard error (telling us how much the mean varies from one sample to another) of 20.
Does this imply that the data do not disconfirm the hypothesis? Given the usual normality assumptions on sampling distributions the one-tailed p-value is approximately 0.11. Thus, approximately 11% of the time we would expect a score this low or lower if we were sampling from this voucher system population. That means – using the ordinary 5% significance-level — we would not reject the null hypothesis although the test has shown that it is “likely” that the hypothesis is false.
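The arithmetic of this example can be reproduced directly, using the error function for the standard normal CDF (a sketch of the calculation just described, nothing more):

```python
from math import erf, sqrt

# Hypothesized effect 100, observed effect 75, standard error 20.
z = (75 - 100) / 20                          # z-score: -1.25
p_one_tailed = 0.5 * (1 + erf(z / sqrt(2)))  # Phi(-1.25): lower-tail area
print(round(p_one_tailed, 3))  # 0.106
```

Since 0.106 exceeds the conventional 0.05 cutoff, the null of a 100-point effect is not rejected, even though the 75-point estimate sits a full 1.25 standard errors below it.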
In its standard form, a significance test is not the kind of “severe test” that we are looking for in our search for being able to confirm or disconfirm empirical scientific hypotheses. This is problematic for many reasons, one being that there is a strong tendency to accept the null hypothesis since it cannot be rejected at the standard 5% significance level. In their standard form, significance tests are biased against new hypotheses by making it hard to disconfirm the null hypothesis.
And as shown over and over again when it is applied, people have a tendency to read “not disconfirmed” as “probably confirmed.” But looking at our example, standard scientific methodology tells us that since there is only 11% probability that pure sampling error could account for the observed difference between the data and the null hypothesis, it would be more “reasonable” to conclude that we have a case of disconfirmation. Especially if we perform many independent tests of our hypothesis and they all give about the same result as our reported one, I guess most researchers would count the hypothesis as even more disconfirmed.
And, most importantly, of course we should never forget that the underlying parameters we use when performing significance tests are model constructions. Our p-value of 0.11 means next to nothing if the model is wrong. As David Freedman writes in Statistical Models and Causal Inference:
I believe model validation to be a central issue. Of course, many of my colleagues will be found to disagree. For them, fitting models to data, computing standard errors, and performing significance tests is “informative,” even though the basic statistical assumptions (linearity, independence of errors, etc.) cannot be validated. This position seems indefensible, nor are the consequences trivial. Perhaps it is time to reconsider.
There is a nice YouTube video with Tony O’Hagan interviewing Dennis Lindley. Of course, Dennis is a legend and his impact on the field of statistics is huge.
At one point, Tony points out that some people liken Bayesian inference to a religion. Dennis claims this is false. Bayesian inference, he correctly points out, starts with some basic axioms and then the rest follows by deduction. This is logic, not religion.
I agree that the mathematics of Bayesian inference is based on sound logic. But, with all due respect, I think Dennis misunderstood the question. When people say that “Bayesian inference is like a religion,” they are not referring to the logic of Bayesian inference. They are referring to how adherents of Bayesian inference behave.
(As an aside, detractors of Bayesian inference do not deny the correctness of the logic. They just don’t think the axioms are relevant for data analysis. For example, no one doubts the axioms of Peano arithmetic. But that doesn’t imply that arithmetic is the foundation of statistical inference. But I digress.)
The vast majority of Bayesians are pragmatic, reasonable people. But there is a sub-group of die-hard Bayesians who do treat Bayesian inference like a religion. By this I mean:
They are very cliquish.
They have a strong emotional attachment to Bayesian inference.
They are overly sensitive to criticism.
They are unwilling to entertain the idea that Bayesian inference might have flaws.
When someone criticizes Bayes, they think that critic just “doesn’t get it.”
They mock people with differing opinions …
No evidence you can provide would ever make the die-hards doubt their ideas. To them, Sir David Cox, Brad Efron, and other giants in our field who have doubts about Bayesian inference are simply not to be taken seriously because they “just don’t get it.”
So is Bayesian inference a religion? For most Bayesians: no. But for the thin-skinned, inflexible die-hards who have attached themselves so strongly to their approach to inference that they make fun of, or get mad at, critics: yes, it is a religion.
Neoclassical economics nowadays usually assumes that agents that have to make choices under conditions of uncertainty behave according to Bayesian rules (preferably the ones axiomatized by Ramsey (1931), de Finetti (1937) or Savage (1954)) – that is, they maximize expected utility with respect to some subjective probability measure that is continually updated according to Bayes’ theorem. If not, they are supposed to be irrational, and ultimately – via some “Dutch book” or “money pump” argument – susceptible to being ruined by some clever “bookie”.
Bayesianism reduces questions of rationality to questions of internal consistency (coherence) of beliefs, but – even granted this questionable reductionism – do rational agents really have to be Bayesian? As I have been arguing elsewhere (e.g. here and here), there is no strong warrant for believing so.
In many of the situations that are relevant to economics, one could argue that there is simply not enough adequate and relevant information to ground beliefs of a probabilistic kind, and that in those situations it is not really possible, in any relevant way, to represent an individual’s beliefs in a single probability measure.
The view that Bayesian decision theory is only genuinely valid in a small world was asserted very firmly by Leonard Savage when laying down the principles of the theory in his path-breaking Foundations of Statistics. He makes the distinction between small and large worlds in a folksy way by quoting the proverbs “Look before you leap” and “Cross that bridge when you come to it”. You are in a small world if it is feasible always to look before you leap. You are in a large world if there are some bridges that you cannot cross before you come to them.
As Savage comments, when proverbs conflict, it is proverbially true that there is some truth in both – that they apply in different contexts. He then argues that some decision situations are best modeled in terms of a small world, but others are not. He explicitly rejects the idea that all worlds can be treated as small as both “ridiculous” and “preposterous” … Frank Knight draws a similar distinction between making decisions under risk and uncertainty …
Bayesianism is understood [here] to be the philosophical principle that Bayesian methods are always appropriate in all decision problems, regardless of whether the relevant set of states in the relevant world is large or small. For example, the world in which financial economics is set is obviously large in Savage’s sense, but the suggestion that there might be something questionable about the standard use of Bayesian updating in financial models is commonly greeted with incredulity or laughter.
Someone who acts as if Bayesianism were correct will be said to be a Bayesianite. It is important to distinguish a Bayesian like myself—someone convinced by Savage’s arguments that Bayesian decision theory makes sense in small worlds—from a Bayesianite. In particular, a Bayesian need not join the more extreme Bayesianites in proceeding as though:
• All worlds are small.
• Rationality endows agents with prior probabilities.
• Rational learning consists simply in using Bayes’ rule to convert a set of prior probabilities into posterior probabilities after registering some new data.
Bayesianites are often understandably reluctant to make an explicit commitment to these principles when they are stated so baldly, because it then becomes evident that they are implicitly claiming that David Hume was wrong to argue that the principle of scientific induction cannot be justified by rational argument …
Bayesianites believe that the subjective probabilities of Bayesian decision theory can be reinterpreted as logical probabilities without any hassle. They therefore hold that Bayes’ rule is the solution to the problem of scientific induction. No support for such a view is to be found in Savage’s theory—nor in the earlier theories of Ramsey, de Finetti, or von Neumann and Morgenstern. Savage’s theory is entirely and exclusively a consistency theory. It says nothing about how decision-makers come to have the beliefs ascribed to them; it asserts only that, if the decisions taken are consistent (in a sense made precise by a list of axioms), then they act as though maximizing expected utility relative to a subjective probability distribution …
A reasonable decision-maker will presumably wish to avoid inconsistencies. A Bayesianite therefore assumes that it is enough to assign prior beliefs to a decision-maker, and then forget the problem of where beliefs come from. Consistency then forces any new data that may appear to be incorporated into the system via Bayesian updating. That is, a posterior distribution is obtained from the prior distribution using Bayes’ rule.
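The mechanical step described here is easy to exhibit. The Beta-binomial case below is a standard conjugate-prior example chosen purely for illustration; the flat Beta(1, 1) prior and the 7-in-10 data are arbitrary assumptions of mine, and that arbitrariness is exactly the point: the rule itself says nothing about where the prior should come from.

```python
def beta_binomial_update(alpha, beta, successes, failures):
    """Bayes' rule in conjugate form: a Beta(alpha, beta) prior plus
    binomial data yields a Beta(alpha + successes, beta + failures) posterior."""
    return alpha + successes, beta + failures

# Assume a flat prior, then register 7 successes in 10 trials.
a, b = beta_binomial_update(1, 1, successes=7, failures=3)
posterior_mean = a / (a + b)  # (1 + 7) / (2 + 10)
print(a, b, round(posterior_mean, 3))  # 8 4 0.667
```

Consistency dictates the update; it is entirely silent on the Beta(1, 1) starting point.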
The naiveté of this approach doesn’t consist in using Bayes’ rule, whose validity as a piece of algebra isn’t in question. It lies in supposing that the problem of where the priors came from can be quietly shelved.
Savage did argue that his descriptive theory of rational decision-making could be of practical assistance in helping decision-makers form their beliefs, but he didn’t argue that the decision-maker’s problem was simply that of selecting a prior from a limited stock of standard distributions with little or nothing in the way of soul-searching. His position was rather that one comes to a decision problem with a whole set of subjective beliefs derived from one’s previous experience that may or may not be consistent …
But why should we wish to adjust our gut-feelings using Savage’s methodology? In particular, why should a rational decision-maker wish to be consistent? After all, scientists aren’t consistent, on the grounds that it isn’t clever to be consistently wrong. When surprised by data that shows current theories to be in error, they seek new theories that are inconsistent with the old theories. Consistency, from this point of view, is only a virtue if the possibility of being surprised can somehow be eliminated. This is the reason for distinguishing between large and small worlds. Only in the latter is consistency an unqualified virtue.
Say you have come to learn (based on your own experience and tons of data) that the probability of becoming unemployed in the US is 10%. Having moved to another country (where you have no experience of your own and no data), you have no information on unemployment and a fortiori nothing on which to ground any probability estimate. A Bayesian would, however, argue that you would still have to assign probabilities to the mutually exclusive alternative outcomes, and that these have to add up to 1 if you are rational. That is, in this case – and based on symmetry – a rational individual would have to assign probability 50% to becoming unemployed and 50% to becoming employed.
That feels intuitively wrong though, and I guess most people would agree. Bayesianism cannot distinguish between symmetry-based probabilities derived from information and symmetry-based probabilities derived from an absence of information. In these kinds of situations most of us would rather say that it is simply irrational to be a Bayesian and better instead to admit that we “simply do not know” or that we feel ambiguous and undecided. Arbitrary and ungrounded probability claims are more irrational than being undecided in the face of genuine uncertainty, so if there is not sufficient information to ground a probability distribution it is better to acknowledge that simpliciter, rather than pretending to possess a certitude that we simply do not possess.
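One way to make this distinction concrete is to look beyond point probabilities at the spread of a belief distribution. The sketch below (my own illustration, not part of the original argument; the Beta parameters and observation counts are hypothetical) compares two Beta beliefs that both assign probability 0.5 to unemployment: one from pure symmetry with no data, one backed by a large hypothetical sample. The point estimates are identical, so anyone who reports only a single probability erases exactly the distinction at issue; the difference survives only in the variance, which plays a role loosely analogous to Keynes’s “weight.”

```python
# Sketch: two Beta beliefs with the same point probability but different "weight".
# Beta(a, b) has mean a/(a+b) and variance a*b / ((a+b)**2 * (a+b+1)).

def beta_mean(a, b):
    """Point probability implied by a Beta(a, b) belief."""
    return a / (a + b)

def beta_var(a, b):
    """Spread of the belief -- how firmly the point probability is held."""
    return a * b / ((a + b) ** 2 * (a + b + 1))

# Ignorance prior: symmetry with no data at all.
ignorant = (1, 1)
# Evidence-based belief: the same symmetry, but backed by 1000 (hypothetical) observations.
informed = (500, 500)

for label, (a, b) in [("ignorant", ignorant), ("informed", informed)]:
    print(f"{label}: P = {beta_mean(a, b):.2f}, variance = {beta_var(a, b):.5f}")
```

Both beliefs print P = 0.50, but the ignorant prior’s variance is hundreds of times larger; a decision rule that consumes only the point probability treats the two cases as identical, which is the nub of the critique quoted above.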
I think this critique of Bayesianism is in accordance with the views of Keynes’ A Treatise on Probability (1921) and “The General Theory of Employment” (1937). According to Keynes we live in a world permeated by unmeasurable uncertainty – not quantifiable stochastic risk – which often forces us to make decisions based on anything but rational expectations. Sometimes we “simply do not know.” Keynes would not have accepted the view of Bayesian economists, according to whom expectations “tend to be distributed, for the same information set, about the prediction of the theory.” Keynes, rather, thinks that we base our expectations on the confidence or “weight” we put on different events and alternatives. To Keynes expectations are a question of weighing probabilities by “degrees of belief,” beliefs that have precious little to do with the kind of stochastic probabilistic calculations made by the rational agents modeled by Bayesian economists.
The bias toward the superficial and the response to extraneous influences on research are both examples of real harm done in contemporary social science by a roughly Bayesian paradigm of statistical inference as the epitome of empirical argument. For instance the dominant attitude toward the sources of the black-white differential in United States unemployment rates (routinely the rates are in a two to one ratio) is “phenomenological.” The employment differences are traced to correlates in education, locale, occupational structure, and family background. The attitude toward further, underlying causes of those correlations is agnostic … Yet on reflection, common sense dictates that racist attitudes and institutional racism must play an important causal role. People do have beliefs that blacks are inferior in intelligence and morality, and they are surely influenced by these beliefs in hiring decisions … Thus, an overemphasis on Bayesian success in statistical inference discourages the elaboration of a type of account of racial disadvantages that almost certainly provides a large part of their explanation.
In their latest book, Think Like a Freak, co-authors Steven Levitt and Stephen Dubner tell a story about meeting David Cameron in London before he was Prime Minister. They told him that the U.K.’s National Health Service — free, unlimited, lifetime health care — was laudable but didn’t make practical sense.
“We tried to make our point with a thought experiment,” they write. “We suggested to Mr. Cameron that he consider a similar policy in a different arena. What if, for instance…everyone were allowed to go down to the car dealership whenever they wanted and pick out any new model, free of charge, and drive it home?”
Rather than seeing the humor and realizing that health care is just like any other part of the economy, Cameron abruptly ended the meeting, demonstrating one of the risks of ‘thinking like a freak,’ Dubner says in the accompanying video.
“Cameron has been open to [some] inventive thinking but if you start to look at things in a different way you’ll get some strange looks,” he says. “Tread with caution.”
So what do Dubner and Levitt make of the Affordable Care Act, aka Obamacare, which has been described as a radical rethinking of America’s health care system?
“I do not think it’s a good approach at all,” says Levitt, a professor of economics at the University of Chicago. “Fundamentally with health care, until people have to pay for what they’re buying it’s not going to work. Purchasing health care is almost exactly like purchasing any other good in the economy. If we’re going to pretend there’s a market for it, let’s just make a real market for it.”
Portraying health care as “just like any other part of the economy” is of course nothing but total horseshit. So, instead of “thinking like a freak,” why not, e.g., read what Kenneth Arrow wrote on the issue of medical care back in 1963?
Under ideal insurance the patient would actually have no concern with the informational inequality between himself and the physician, since he would only be paying by results anyway, and his utility position would in fact be thoroughly guaranteed. In its absence he wants to have some guarantee that at least the physician is using his knowledge to the best advantage. This leads to the setting up of a relationship of trust and confidence, one which the physician has a social obligation to live up to … The social obligation for best practice is part of the commodity the physician sells, even though it is a part that is not subject to thorough inspection by the buyer.
One consequence of such trust relations is that the physician cannot act, or at least appear to act, as if he is maximizing his income at every moment of time. As a signal to the buyer of his intentions to act as thoroughly in the buyer’s behalf as possible, the physician avoids the obvious stigmata of profit-maximizing … The very word, ‘profit’ is a signal that denies the trust relation.
Kenneth Arrow, “Uncertainty and the Welfare Economics of Medical Care,” American Economic Review 53 (5), 1963.