## Probability and rationality — trickier than you may think

15 October, 2016 at 23:05 | Posted in Statistics & Econometrics | 40 Comments

The Coin-tossing Problem

My friend Ben says that on the first day he got the following sequence of Heads and Tails when tossing a coin:
H H H H H H H H H H

And on the second day he says that he got the following sequence:
H T T H H T T H T H

Which day-report makes you suspicious?
Most people I ask this question say the first day-report looks suspicious.

But actually both days are equally probable! Every time you toss a (fair) coin there is the same probability (50 %) of getting H or T. On both days Ben makes the same number of tosses, and every particular sequence is equally probable!

The Linda Problem

Linda is 40 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.

Which of the following two alternatives is more probable?

A. Linda is a bank teller.
B. Linda is a bank teller and active in the feminist movement.

‘Rationally,’ alternative B cannot be more likely than alternative A. Nonetheless Amos Tversky and Daniel Kahneman reported — ‘Judgments of and by representativeness.’ In D. Kahneman, P. Slovic & A. Tversky (Eds.), Judgment under uncertainty: Heuristics and biases. Cambridge, UK: Cambridge University Press 1982 — that more than 80 percent of respondents said that it was.

Why do we make such ‘irrational’ judgments in both these cases? Tversky and Kahneman argued that in making this kind of judgment we seek the closest resemblance between causes and effects (in The Linda Problem, between Linda’s personality and her behaviour), rather than calculating probability, and that this makes alternative B seem preferable. By using a heuristic called representativeness, statement B in The Linda Problem seems more ‘representative’ of Linda based on the description of her, although from a probabilistic point of view it is clearly less likely.

1. “My friend Ben says that on the first day he got the following sequence of Heads and Tails when tossing a coin:
H H H H H H H H H H
.
And on the second day he says that he got the following sequence:
H T T H H T T H T H
.
Which day-report makes you suspicious?
Most people I ask this question say the first day-report looks suspicious.
.
But actually both days are equally probable! Every time you toss a (fair) coin there is the same probability (50 %) of getting H or T. On both days Ben makes the same number of tosses, and every particular sequence is equally probable!”
.
.
You are conflating probability and likelihood, which are separate but complementary concepts.
.
Coin tossing is an example of a Bernoulli process, for which we can use the binomial likelihood formula to perform parameter estimation.
.
Given a completely fair (p=0.5) coin, the likelihood of observing five heads in ten trials is 24.6%, whereas the likelihood of observing ten heads in ten trials is 0.0977%. Hence “most people” are completely correct to suspect that the coin tossed on day one is not fair.
.
Your argument of equal probability of outcome only applies if the coin in question is, in demonstrable fact, fair (p=0.5), but that is not the question you are asking. The question you are asking people is, “given this observation, do you have cause to suspect the fairness of the coin,” which is a question of likelihood, not probability.
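The figures quoted above can be checked with a few lines of Python; this is a minimal sketch using only the standard library, and the function name is my own:

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k heads in n tosses of a coin with P(H) = p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Under the fair-coin hypothesis (p = 0.5):
print(binom_pmf(5, 10, 0.5))   # 0.24609375   -> the 24.6% quoted above
print(binom_pmf(10, 10, 0.5))  # 0.0009765625 -> roughly 0.098%
```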

• Psychologists often seem to think that ‘looking suspicious’ corresponds to probability, in this case conditioned on the assumption that a coin is fair. In consequence, this sort of thinking has somehow become thought of as ‘rational’.

It seems to me that the following approach, modelled on the example of a criminal court case or of good scientific practice, is ‘better than rational’.

Once we have an initial sequence of five H’s it is reasonable to think that the coin might be double-headed. After another five H’s we might note that the weight of evidence strongly favours a double-headed coin over a fair one. If the coin was given to us by a magician then this might not be surprising. But if we previously believed the coin to be fair, it would be surprising to have such strong evidence that it was not.
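That ‘weight of evidence’ can be sketched numerically as a likelihood ratio; the 1-in-100 prior below is an illustrative assumption of mine, not anything stated in the thread:

```python
# Likelihood of ten heads under each hypothesis
p_data_double_headed = 1.0   # a double-headed coin always shows heads
p_data_fair = 0.5 ** 10      # 1/1024 for a fair coin

# The Bayes factor (likelihood ratio) favouring the double-headed hypothesis
bayes_factor = p_data_double_headed / p_data_fair
print(bayes_factor)          # 1024.0

# Even a sceptical 1-in-100 prior (odds 1:99) for a trick coin
# yields posterior odds of roughly 10:1 that the coin is double-headed.
posterior_odds = bayes_factor * (1 / 99)
print(posterior_odds)
```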

For the second sequence no obvious conjecture occurs to me after the first five. If I try hard I can think of a few, but they are refuted by subsequent tosses. So the idea that the coin is fair remains the best explanation that I can think of. If the coin was given to me by a magician I might find this surprising, but normally the sequence seems typical, in that if I divide it in two there seems to be no common pattern.

• Just a quick clarification on the distinction between probability and likelihood.
.
In questions of probability, the parameter of the probability distribution is the known value, and the observations the unknown value: e.g., given a fair coin (p=0.5), how probable is a given observation?
.
In questions of likelihood, the observation is the known value and the parameter is the unknown value: e.g., given an observation of 10 heads out of 10 trials, how likely is a given value for p?
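In code, the same binomial formula serves both readings: holding the observation fixed and varying the parameter p gives the likelihood function. A minimal sketch (function name is mine):

```python
from math import comb

def likelihood(p, k=10, n=10):
    """L(p) = P(k heads in n tosses | p), read as a function of p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Given ten heads in ten tosses, larger values of p are far more likely:
for p in (0.5, 0.7, 0.9, 1.0):
    print(p, likelihood(p))
# The maximum-likelihood estimate here is p = 1.0.
```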

2. Michael.

What you have calculated and called the likelihood is just the a priori probability of a combination. However, knowing that does not at all guarantee any particular outcome, fair coin or not. Presumably a rational person would understand this and would not be surprised by whatever combination of events occurred. Whereas, an “irrational” person might look at the outcome and draw a different conclusion.

• Henry,
.
Here are some links which explain “what I have called the likelihood”:
https://en.wikipedia.org/wiki/Likelihood_function
http://warnercnr.colostate.edu/~gwhite/fw663/BinomialLikelihood.PDF
.
I hope that helps.

• Michael,

The probabilities (24.6% and 0.097%, you call likelihood) you reported can be calculated using simple combinatorics. The trial described has a possible 1024 outcomes (2^10). The five “Hs” can occur in 252 ways, hence, 252/1024=0.246. 10 straight “Hs” has 1 way of occurring, hence, 1/1024=0.000977. These are simple probability calculations. If you want to call it “likelihood” that’s fine with me. These are a priori calculations – no data required.

• Henry, I’m sorry that you didn’t find the information at the provided links helpful.

• ” The question you are asking people is, “given this observation, do you have cause to suspect the fairness of the coin,” which is a question of likelihood, not probability.”

Michael,

3. I would like to add that coin tossing, as an example of a Bernoulli process, is “IID” (independent and identically distributed, an important statistical property).
.
This means that the probability of “heads” is identical for each toss (of the same coin), and that the probability of “heads” on any given toss is not in any way influenced by the outcome of any previous toss.
.
Because there is no temporal or sequential effect in outcome (all tosses are completely independent of each other), statistical analysis treats such groups of trials and outcomes as sets, not sequences.
.
Thus, while it is narrowly (and pedantically) true that, given a perfectly fair coin, the precise sequence of trial outcomes “H T T H H T T H T H” is exactly as probable as the precise sequence “H H H H H H H H H H”, this is not statistically meaningful. There are no statistical analytic techniques which make any distinction between, e.g., “H T T H H T T H T H” and “H T T T H T H H T H” as outcomes (assuming IID). The only statistically meaningful observation is a set of ten heads on day one, and a set of five heads and five tails on day two, and as discussed earlier, those are very different observations indeed.
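One way to see the set-versus-sequence point is to enumerate all 2^10 equally probable sequences and group them by head count; a small brute-force sketch:

```python
from itertools import product
from collections import Counter

# All 1024 equally probable length-10 outcomes of a fair coin,
# grouped by the number of heads in the sequence
counts = Counter(seq.count('H')
                 for seq in (''.join(t) for t in product('HT', repeat=10)))

print(counts[10])  # 1   sequence with ten heads
print(counts[5])   # 252 sequences with five heads
```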

4. I have commented further on the RWER blog, defending my ‘irrationality’. (Actually, it depends what you mean by surprised – perhaps we mean different things?)

• Do you have a link, David?

• They are not surprised in the sense that they expect any of all possible outcomes to occur. They understand radical uncertainty. While the ability to calculate probability functions might assuage our fears about uncertain, unknowable outcomes (and that’s all they serve to do), the fact remains that any of all possible outcomes can occur.

• So, I would be ‘surprised’ in its most general sense if something quite unexpected happened, such as me winning the lottery, having bought a ticket. This is very different from the type of surprise where you are forced to review your preconceptions. In the first case Bayes’ theorem can be applied, in the second it can’t.

Part of the definition of ‘surprise’ is the idea that it evokes an emotional response. Maybe we could distinguish based on which emotion we think is appropriate?

• Isn’t it strange – if you win the lottery, you are surprised. If someone else wins the lottery, you are not surprised.

5. “But if we previously believed the coin to be fair, it would be surprising to have such strong evidence that it was not.”

I don’t think we have different ideas about what “surprising” means. However, I do think we would be surprised by different things.

You are surprised that what you believed to be a fair coin yielded unusual results – that is, results that would count as unusual if you believed in the efficacy of probability theory.

If you believed in radical uncertainty, where anything is possible, then you would not be surprised. Today is the day that five straight “Hs” were thrown – that’s all.

Possibility resides in and is a feature of reality – probability resides in the imagination of the mathematician and statistician.

• I would not be surprised (;-) if we basically agreed, but were talking a different language.

As a mathematician I see probability theory as mathematical, as distinct from the dogma propagated by some social sciences, which Keynes – appropriately, I think – called ‘pseudo mathematics’.

I do believe pretty much absolutely in probability theory as mathematics, while at the same time being almost evangelical about its limits as pseudo-mathematics. The distinction matters.

In the case of real coins, it seems to me fairly obvious that no one could ever prove a coin to be fair. My understanding is that US coins aren’t, and one would be very foolish to bet that one was.

This doesn’t mean that the mathematics of idealised coins is somehow wrong. It is just that real coins aren’t ideal.

In Lars’ example, suppose that I had previously inspected the coin and tossed it 100 times, so that it seemed to be fair. I would not be surprised if someone else could consistently get a strong bias to Heads. But I can’t imagine how they could contrive all Heads, so I would be surprised by that.

I think we agree that in Taleb’s Black Swans it is misleading to say that the event had been highly improbable. In many cases radical uncertainty seems to have actualised, so that the prior beliefs about possibilities were just wrong.

But where we may disagree is that with regard to the financial crisis of 2008ish I would say that probabilistic reasoning and its attendant rationality was just fine for certain purposes up to at least 2005, whereas by 2008 everyone ought to have been more concerned with the radical uncertainties.

It would be good to tease out some different types of ‘surprise’.

• Whether the coin is fair or not is a red herring. The point is there is no way of calculating what the outcome of the next toss will be, fair coin or not. So how can a rational person be surprised at anything?

“.. I would say that probabilistic reasoning and its attendant rationality was just fine for certain purposes up to at least 2005, whereas by 2008 everyone ought to have been more concerned with the radical uncertainties.”

It is well and good to say that with hindsight, however, I would say it is completely trivial. The point is that there is no way that probability theory could have predicted the events of 2008. There is no theory of probability which calculates when it is time to suspend the use of probability theory and be prepared for a black swan event.

I remember watching the events of early to mid 2007 in the US unfold. It brought a feeling of dread and horror to me. The kind of things that happened in 2008 were entirely predictable. And I say that having studied financial markets over a 30 year (at that time) period, a good deal of that time professionally. I didn’t need probability theory to tell me what kind of events were in the offing. I could not predict exactly how it might unfold, but I could see where it was heading.

• Henry says “There is no theory of probability which calculates when it is time to suspend the use of probability theory and be prepared for a black swan event.”

A slight distraction here is that Taleb characterises Black Swan events as having a very low probability, in which case it might seem that probability theory would be relevant. My own view (and Henry’s?) is that for some events, such as the financial crash, it is inappropriate to think in terms of [conventional] probabilities.

More importantly, as at https://djmarsay.wordpress.com/2015/06/15/decision-making-under-uncertainty-after-keynes/ what I take from Lars et al is that there are perfectly respectable mathematical theories of probability that warn us when it is time to suspend the use of conventional [pseudo-mathematical] probability theory. In brief, one pays attention to the mathematics and takes note when the pseudo-mathematics is particularly unsound.

One such theory is due to Bayes. He notes that regular behaviours imply regular mechanisms, such that one might reasonably expect the regular behaviours to continue, at least for a while. If we apply this to a coin we might determine that a coin is fair when tossed by very many people, but Bayes provides no support for the view that just because very many people are unable to bias a coin, no-one can. Similarly, prior to the financial crash there were many gross – potentially worrying – changes in the mechanisms, giving gross changes in behaviours. The pseudo-mathematical wisdom of the time was that as long as the key indicators were fine, there was no need to worry. But we can apply Bayes’ probability theory (not to be confused with Bayesian probability theory) to say that if some related behaviours were unstable, then the mechanism must be unstable, so we should look out for trouble.

I got involved in this area for Y2K. Nothing much actually happened, but clearly if the bug had not been hunted and reduced so much there could have been gross changes to financial mechanisms, which Bayes warned us could have had dire consequences for real economies.

6. “In the case of real coins, it seems to me fairly obvious that no one could ever prove a coin to be fair. My understanding is that US coins aren’t”
.
There is a very good reason why you can’t prove a real coin to be fair; namely, because you can’t prove an ideal coin to be fair. You can only progressively narrow the range of plausible values for p through successive trials.
.
Even for an ideal coin, for which p=0.5 precisely, the best you can prove is that p=0.5±ε with infinite trials. As an illustration, your chance of flipping a perfectly fair coin 10 times and getting exactly 5 heads is 1 in 4, whereas the chance of flipping the same coin 100,000 times and getting exactly 50,000 heads is 1 in 400. In other words, the greater the number of trials of a perfectly fair coin, the less probable a perfectly fair observation becomes.
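The shrinking probability of an exactly even split can be computed in log space, which avoids the astronomically large binomial coefficients for 100,000 tosses. A sketch (function name is mine):

```python
from math import lgamma, exp, log

def pmf_half(n):
    """P(exactly n/2 heads in n fair tosses), via log-gamma for large n."""
    k = n // 2
    log_comb = lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
    return exp(log_comb - n * log(2))

print(pmf_half(10))       # ≈ 0.246,  about 1 in 4
print(pmf_half(100_000))  # ≈ 0.0025, about 1 in 400
```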
.
The problem for econometrics is that for many metrics of interest, even if they were generated by a pure and ideal Bernoulli process (which they assuredly are not), the small number of trials is insufficient to arrive at a usefully narrow range of plausible values for p.
.
And that’s for the easiest possible population parameter interval estimation problem. The interval size vs. sample size challenge becomes even more problematic in the case of multivariate regressions of non-normal distributions typically found in economically-interesting real world scenarios.

• It is even worse than that.

Suppose you have done very, very many trials and found that, almost certainly, p lies in the range [0.49, 0.51]. Can one conclude that the coin is almost certainly approximately fair?

What if there is some factor X for which P(Heads|X=0) = 0.55, but X was randomised in the trials? Then, in the hands of someone who knew about X, the coin would be unfair. It arguably would not even be fair in the hands of someone who happened not to randomise X. So to argue that a coin is fair you would have to give evidence that there is no such factor, X. How would you do that?

In general, Henry’s uncertainty seems unavoidable.

• “So to argue that a coin is fair you would have to give evidence that there is no such factor, X. How would you do that?”
.
I believe the word you may be looking for here is “science”.
.
E.g., https://ocw.mit.edu/courses/brain-and-cognitive-sciences/9-07-statistical-methods-in-brain-and-cognitive-science-spring-2004/lecture-notes/12_exprimnt_dsg1.pdf
.
If it is Henry’s point that economics as a discipline does not aspire to the discipline of science, and that consequently any divergence between economic prediction and reality should not cause surprise, well, I’m not going to disagree with that.

• “If it is Henry’s point that economics as a discipline does not aspire to the discipline of science,…”

Michael,

It seems this is your point. 🙂

• May I recommend https://www.simonyi.ox.ac.uk/ as representing a UK view of science. The point is that where Henry’s source says “One needs to randomize using a computer, or Coins, dice or cards” (pg 6) one needs to be careful. I guess that it is possible for sufficiently knowledgeable and skilful people to generate sequences that will appear random when subject to specified statistical tests. But could we ever know that a sequence generated by someone else will not be biased?

The inclusion of dice in the above list seems particularly odd. Perhaps MIT have some special dice that are hard to bias? Or perhaps their students always ‘play the game’?

7. “I guess that it is possible for sufficiently knowledgeable and skilful people to generate sequences that will appear random when subject to specified statistical tests. But could we ever know that a sequence generated by someone else will not be biased?”
.
.
In any case, science is not about proving the impossibility of confounding variables; science is about proving the possibility of confounding variables (or the failure to do so).
.
The slide you reference is not saying that using dice guarantees the sample is not biased; rather it is saying that manual (i.e. without mechanical assistance) sample selection carries a presumption of bias (due to ample and robust evidence to that effect).
.
What makes good science better science than bad science is both the quantity and quality of the accumulated failures to prove the possibility of alternative hypotheses.
.
So, to your question, “what if there is some factor X for which P(Heads|X=0) = 0.55, but X was randomised in the trials”, I reply, show me the quantity and quality of the failed efforts to prove the possibility of X. If a large number of high quality efforts to prove the possibility of X have failed, that is not proof that X doesn’t exist, because that’s not how science works. However, the quality and quantity of failures may be sufficient that, on balance, the non-existence of X may be reasonably assumed for a specific practical application, which is how science works.
.
So there are two questions: should one be surprised by the occurrence of a “long tail” event, and should one be surprised that the observed thickness of the tail proves to be much greater than that of the model. The first is a question of mathematics, the second a question of science.
.
(I will observe without comment that, representative of the UK view of science, the Charles Simonyi Professor for the Public Understanding of Science chair is held by a mathematician.)

• ” ………that the observed thickness of the tail proves to be much greater than that of the model”
.
Michael,
.
I would like to be clear about what you are saying here.
.
“…observed thickness of the tail … ” – this refers to the distribution obtained by observation and you could just as equally have said : “………that the observed thickness of the tail proves to be much smaller than that of the model…”, depending on the results of the observations.
.
And “….than that of the model” refers to the distribution generated by a mathematical probability model?

• //and you could just as equally have said : “………that the observed thickness of the tail proves to be much smaller than that of the model…”, depending on the results of the observations.//
.
Well, yes, logically that is correct, but in practical terms, there is an asymmetry in that it does not typically produce great surprise when very infrequent events occur less frequently than a model predicts. Outside of particle physics, anyway.
.
//And “….than that of the model” refers to the distribution generated by a mathematical probability model?//
.
Yes.
.
This was explicitly the case in the financial crisis where tail risk was significantly and systematically mispriced because the mathematical models financial institutions were using to price the tail risk were (as later discovered) unfit for purpose.

• Michael, you may not be aware that in the run up to the financial crisis financiers defined risk in terms of volatility. As far as I can see those risks were priced reasonably, or at least I very much doubt that any mis-pricing was significant in how things turned out. This is hardly surprising, since financial risk is like what you have seen and what is now behind you as you drive, whereas what really matters is often the ‘risks’ that you are as yet unaware of.

So quite possibly the mathematical models of financial risk were fully fit for any reasonable purpose. It is just that, at least with hindsight, we now wish we had been considering some other risk.

The situation is like coin flipping, in that mathematicians and statisticians can provide good models of how we think coins behave, which might well be fully accurate models of how you have found coins to behave in the past. But, faced with a magician, should you be basing your decisions on that model? Or seeking better models?

• Michael, I take it that where your previous reference seemed to be recommending the use of a coin or dice as randomizers, we ought to realize that coins and dice may not actually be randomizers but that they are nevertheless good enough for randomized trials. More generally, scientists often use language differently from mathematicians, which is why I wasn’t assuming that I knew what MIT meant.

The important point here is about probability theory. But on the secondary issue of what it is reasonable to assume for coin flipping, https://en.wikipedia.org/wiki/Coin_flipping cites scientists (physicists) as supporting the mathematical view, that coin flipping is trickier than you may think.

• // As far as I can see those risks were priced reasonably, or at least I very much doubt that any mis-pricing was significant in how things turned out. //
.
Actually, the mispricing was in fact systematic, and played a critical role in how things turned out. For example, default risk correlation was modelled with Gaussian approximations because the maths were tidier:
.
https://www.wired.com/2009/02/wp-quant/
.
Literally trillions of dollars of risk were exchanged by people blindly plugging numbers into black box spreadsheet models, the soundness of which had never been properly scrutinized. And there was ample reason to believe such scrutiny was warranted, e.g.:
.
https://www.bis.org/cgfs/conf/mar02q.pdf
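As a toy illustration of how much probability mass a fat tail can carry: here the standard Cauchy distribution stands in for “fat-tailed” (the actual copula models were of course far more elaborate), compared against a Gaussian at a five-sigma-style threshold.

```python
from math import erfc, sqrt, atan, pi

def gaussian_tail(x):
    """P(X > x) for a standard normal distribution."""
    return 0.5 * erfc(x / sqrt(2))

def cauchy_tail(x):
    """P(X > x) for a standard Cauchy, a classic fat-tailed distribution."""
    return 0.5 - atan(x) / pi

x = 5.0  # a "five sigma" style threshold
print(gaussian_tail(x))                   # ≈ 2.9e-07
print(cauchy_tail(x))                     # ≈ 0.063
print(cauchy_tail(x) / gaussian_tail(x))  # the fat tail is >200,000x heavier
```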

• How can the kind of events evident in the lead up to the GFC be priced into these valuation models?

How many trial and error attempts can the world financial system bear?

The sub-prime fiasco was preceded in not too much time by the Long Term Capital fiasco which almost brought the world’s financial system to its knees in 1998.

It seems these models are lethal weapons in the hands of the so-called “masters of the universe”.

And the public at large pay the price and bear the burden, while the perpetrators get off scot-free, if they are not outright rewarded.

I would assert there is no reliable way of pricing risk in financial markets.

• Michael, clearly mis-pricing was endemic, and perhaps systematic. The failure of Li’s approach (your first link) could be because his model was bad, or because it was a model of the wrong thing. You seem to think the former, I think the latter.

In the mid 00s I was trying to ‘sell’ the idea of using a more general mathematical approach to finance, attempting to include more of the residual risk (including your ‘tail risk’). The reactions were either that there were no such risks, or that if there were they could not possibly be modelled, or that if you could it was the government’s responsibility. Speaking to some of those who had been involved on both sides of the negotiations over de-regulation, this seemed a reasonable view. Unfortunately by this time, however, governments had been taking the view that since financiers were making a lot of money, they must know what they were talking about, and had been recruiting experts from finance!

Your second link puts forward the view – without any evidence – that one can keep the conceptual base more or less the same and just fiddle with the details. Maybe we shall just have to disagree on that. The final link looks better to me, but even it could be read as endorsing the view that financial indexes can always be modelled purely stochastically.

A financier once told me the parable of a drunk regularly making his way home. The recognized risk may be based on his track record and that of others like him. But this doesn’t cover things like a sink hole opening up for the first time. To cover such risks you need to go looking for them, not just look at streams of data.

Hope this makes sense.

8. //How can the kind of events evident in the lead up to the GFC be priced into these valuation models?//
.
Henry, that’s a bit backward. The valuation models were the cause of the events evident in the lead up to the GFC.
.
The models enabled pooling and tranching which created a marketplace in which buyers and sellers could transact “AAA”-rated tranches of mortgage-backed securities. If the models had accurately reflected the underlying default risk in the tails of the pools, the market in “AAA”-rated MBS tranches would have been accurately priced (and consequently much smaller), and the industrial-scale risk arbitrage which drove the events leading up to the GFC would not have occurred.
.
Imagine what would happen to the economy if the government ran a lottery where every $2 ticket had a 1/1000 chance of winning $2200. That’s basically what happened to the financial industry (until, SURPRISE, it turned out there was only a 1/1200 chance of winning $2200, oops).
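The arithmetic of that analogy, spelled out as a trivial sketch:

```python
def expected_net(ticket_price, prize, win_prob):
    """Expected net gain per ticket: payout times probability, less the price."""
    return prize * win_prob - ticket_price

# Believed odds: every $2 ticket is worth buying at industrial scale...
print(expected_net(2, 2200, 1 / 1000))  # +0.20 per ticket
# Actual odds: every ticket is a small loss, at the same industrial scale.
print(expected_net(2, 2200, 1 / 1200))  # ≈ -0.17 per ticket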

• Henry, the measures of risk that I saw were being calculated correctly. Problems arose in the conversion of the measures into prices. This assumed that the main risks were the ones being measured. But events were showing that there was a lot of additional risk. In these circumstances there was a clear ‘sell’ indicator, albeit the total risk was immeasurable. Thus while I agree with Michael on much, I would say that the models were wrongly interpreted rather than being wrong in their own (obscure) terms.

Mathematically they did what it said on the tin. But did anyone read the tin? (I am rebutting any imputation that the underlying mathematics was somehow wrong. But the pseudo-mathematics certainly confused things, and I think genuine mathematicians and statisticians should have done more to explain things. But how?)

9. //I am rebutting any imputation that the underlying mathematics was somehow wrong.//
.
To be clear, I’m not imputing that the mathematics were in any way wrong. The mathematics accurately modeled an alternate universe in which the distribution of default risk is thin-tailed, symmetrical about the mean, and can be negative.
.
In the actual universe which we inhabit, meanwhile:
.
“During the credit bubble, other rating agencies avoided fat-tailed distributions, too. While some of their quants warned, the business side overruled. Acknowledging fat tails would have made it harder to award lucrative top ratings.
Apart from money there was little excuse. The rating agencies knew from experience that portfolio risks tended to be fat-tailed.”
.
https://goo.gl/L2yD98

10. “that’s a bit backward. The valuation models were the cause of the events evident in the lead up to the GFC.”
.
Michael,
.
I’m not so sure about that.
.
Sub prime lending was a problem because there was fraud in the loan agreements written with a certain class of borrower. Lending standards were not adhered to. Massive tranches of money were floating around looking for a home. Overheated property markets collapsed and the rest is history.

Are you arguing that had the models priced in fraud everything would have been OK?

The FCIC report concluded it was human action and inaction that was the cause, not models – see their first conclusion. The effect of inadequate modelling was to exacerbate the crisis.

The fact is financial modellers will never be able to anticipate and adequately model risk in financial markets. It is folly to believe anything else.

You say yourself, “If the models had accurately reflected the underlying default risk in the tails of the pools…”

If it is so easy to model financial risk, why is it that the LTCM and sub-prime crises occurred within ten years of each other?

The hubris of financial modellers astounds.

• //Sub prime lending was a problem because there was fraud in the loan agreements written with a certain class of borrower. Lending standards were not adhered to. Massive tranches of money were floating around looking for a home.//
.
This is exactly backward. The opportunity to create highly-lucrative AAA-rated tranches out of fluff resulted in a dramatic increase in the demand for fluff. The precipitous drop in underwriting standards was driven by the sudden enormous demand for no-questions-asked mortgage paper to stuff into mortgage pools, so that the AAA-tranches could be skimmed off and sold to every pension fund etc. on the planet.
.
It was the alignment of financial incentives around this deal flow which motivated everyone up and down the deal chain to avert their eyes from what exactly was going into the sausage machine.

• Michael,
.
I think we are saying the same thing.
.
Either you wrote it in one of your posts above or it was in one of the articles linked: the analysts knew that the fat-tail risk was being ignored (or is that just after-the-fact backsliding?) – the marketing agents and the rating agencies chose to ignore the advice of the modellers. So whether the models recognized fat-tail risk or not appears not to be the issue. Business motives drove the train to its wrecking point.

I have experience with equity markets rather than debt markets, but I would imagine the venality evident in the human beings who operate in equity markets is just as evident in those who operate in the markets in question. It was the madness of crowds that won out in the end.
.
The question is, even if risk was priced soundly, would that have made a difference to the outcome? Doesn’t look like it.
.
The other question is, can models account for the madness of crowds?

11. ” The rating agencies knew from experience that portfolio risks tended to be fat-tailed”
.
I find it ironic that the discussion devolves to the consideration of the obesity of a probability distribution’s tail. I imagine Kim Kardashian is now the financial modeller’s universal pin up girl.