The Big Short

Michael Taylor

2019/01/22

The Big Short: Interest Rates Explained

Suppose your bank will give out 1,000 loans of $180,000 each this year. Also suppose that your bank loses, after adding up all the costs, $200,000 per foreclosure; for simplicity, we assume that this includes all operational costs. A sampling model for this scenario can be coded like this.

n <- 1000                        # number of loans
loss_per_foreclosure <- -200000  # total loss per foreclosure, in dollars
p <- 0.02                        # probability of default
defaults <- sample(c(0, 1), n, prob = c(1 - p, p), replace = TRUE)
(ans <- sum(defaults * loss_per_foreclosure))
## [1] -3800000

Each borrower either defaults, in which case we lose money, or does not, in which case we lose nothing. If we run the simulation, we see that in this run we lose $3.8 million.

replicate(5, {
  defaults <- sample(c(0, 1), n, prob = c(1 - p, p), replace = TRUE )
  sum(defaults * loss_per_foreclosure)
})
## [1] -3400000 -3400000 -4200000 -3800000 -2400000

Note that the total loss defined by the final sum is a random variable: every time you run the code you get a different answer, because defaulting is a probability, not a certainty. We can easily construct a Monte Carlo simulation to get an idea of the distribution of this random variable.

B <- 10000

losses <- replicate(B, {
  defaults <- sample(c(0, 1), n, prob = c(1 - p, p), replace = TRUE )
  sum(defaults * loss_per_foreclosure)
})

Here’s the distribution of how much money we’re going to lose, in millions. You can see that we always lose money, because of the borrowers that default.

library(dplyr)
## 
## Attaching package: 'dplyr'
## The following objects are masked from 'package:stats':
## 
##     filter, lag
## The following objects are masked from 'package:base':
## 
##     intersect, setdiff, setequal, union
library(ggplot2)
data.frame(losses_in_millions = losses / 10^6) %>% 
  ggplot(aes(x = losses_in_millions)) +
  geom_histogram(color="black", binwidth = 0.6)

We don’t really need a Monte Carlo simulation, though. Using what we’ve learned, the CLT tells us that, because our losses are a sum of independent draws, their distribution is approximately normal, with expected value and standard error given by the following formulas.
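Writing \(S\) for the total loss and \(l\) for the loss per foreclosure, these are

\[E[S]=n\{pl+(1-p)\cdot 0\}=npl, \qquad SE[S]=\sqrt{n}\,|l|\sqrt{p(1-p)}\]

In code: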

n * (p * loss_per_foreclosure + (1 - p) * 0)  # expected value of the total loss
## [1] -4e+06
sqrt(n) * abs(loss_per_foreclosure) * sqrt(p * (1 - p))  # standard error of the total loss
## [1] 885438
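
As a quick sanity check, the Monte Carlo estimates from the losses simulation above should come out close to these theoretical values:

mean(losses)  # should be close to -4e+06
sd(losses)    # should be close to 885438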

We can now set an interest rate to guarantee that, on average, we break even. Basically, we need to add a quantity x to each loan, which in this case is represented by a draw, so that the expected value is zero; that means breaking even.

If we define l to be the loss per foreclosure, we need to set x so that the following formula holds,

\[lp+x(1-p)=0\]

which implies that x can be calculated using this R code,

(ans <- -loss_per_foreclosure * p/(1 - p))
## [1] 4082

which gives us approximately 4,081.63.

This is an interest rate of about 2%.
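To make the rate explicit, we can divide this quantity by the $180,000 loan amount from the setup:

-loss_per_foreclosure * p / (1 - p) / 180000  # implied break-even interest rate
## [1] 0.02268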

We still have a problem though. Although this interest rate guarantees that on average we break even, there’s a 50% chance that we will lose money. If our bank loses money, we have to close it down. So we need to pick an interest rate that makes it unlikely for this to happen. At the same time, if the interest rate is too high, our clients will go to another bank, so we must be willing to take some risks. Let’s say that we want our chances of losing money to be 1 in 100. What does x have to be now? This one is a bit harder. We want the sum, which we’ll call capital S, to have a probability of being less than zero of 0.01.

\[\Pr(S<0)=0.01\]

We know that S is approximately normal. The expected value of S is given by this formula, with n the number of draws, which in this case represents the number of loans.

\[\{lp+x(1-p)\}n\]

The standard error is given by this formula.

\[|x-l|\sqrt{np(1-p)}\]

Because x is positive and l is negative, we know that the absolute value of x minus l is \((x - l)\).

Now we’re going to use a mathematical trick that is very common in statistics. We’re going to subtract and divide by the same quantities on both sides of the event S less than zero, so that the probability does not change and we end up with a standard normal random variable on the left, which will then permit us to write down an equation with only x as an unknown.

We want the \(\Pr(S<0)=0.01\).

So what we do now is subtract the expected value of S and divide by the standard error of S on both sides of the inequality.

\[\Pr\left(\frac{S-E[S]}{SE[S]}<\frac{-E[S]}{SE[S]}\right)=0.01\]

All we did was subtract and divide by the same quantity on both sides. We did this because now the term on the left is a standard normal random variable, which we will rename as capital Z.

\[\Pr\left(Z<\frac{-E[S]}{SE[S]}\right)=0.01\]

Now, we will fill in the blanks with the actual formulas for expected value and standard error. The formula now looks like this. We have a Z on the left; that’s good, that’s going to make things easier later. The right side looks complicated, but remember that l, p, and n are all known quantities, so eventually we’ll turn them into numbers.

\[\Pr(Z<\frac{-\{lp+x(1-p)\}n}{(x-l)\sqrt{np(1-p)}})=0.01\]

Now, because the term on the left side is a standard normal random variable, with expected value zero and standard error one, the quantity on the right must be equal to qnorm(0.01),

qnorm(0.01)
## [1] -2.326

which is about negative 2.33.

Then the equation above will hold true. Remember that setting little z to qnorm(0.01) gives us the value of z for which the following formula is true.

\[\Pr(Z\leq z)=0.01\]
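Indeed, we can verify this in R, since pnorm is the inverse of qnorm:

pnorm(qnorm(0.01))
## [1] 0.01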

The probability of big Z being less than or equal to little z equals 0.01. So the right side of the complicated equation must be equal to qnorm(0.01), which gives us this formula.

\[\frac{-\{lp+x(1-p)\}n}{(x-l)\sqrt{np(1-p)}}=z\]

The trick works because we end up with an expression containing x that we know has to be equal to a quantity depending on things we know. Solving for x is now simply algebra.
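Concretely, multiplying both sides by \((x-l)\sqrt{np(1-p)}\) and collecting the x terms gives

\[\{lp+x(1-p)\}n=-z(x-l)\sqrt{np(1-p)}\]

\[x\{n(1-p)+z\sqrt{np(1-p)}\}=-l\{np-z\sqrt{np(1-p)}\}\]

and dividing through yields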

\[x=-l\,\frac{np-z\sqrt{np(1-p)}}{n(1-p)+z\sqrt{np(1-p)}}\]

l <- loss_per_foreclosure
z <- qnorm(0.01)
(x <- -l * (n * p - z * sqrt(n * p * (1 - p)) ) / 
   (n * (1-p) + z * sqrt(n * p * (1 - p)) )
 )
## [1] 6249
loss_per_foreclosure * p + x*(1-p)
## [1] 2124

If we do this, we get that x has to be about 6,249, which is an interest rate of about 3%, which is still pretty good. Note also that by choosing this interest rate, we now have an expected profit per loan of about $2,124, for a total expected profit of about $2 million.
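As a sanity check, we can plug this x back into the right-hand side of our equation and confirm that the approximate probability of losing money is 0.01:

pnorm(-(l * p + x * (1 - p)) * n / ((x - l) * sqrt(n * p * (1 - p))))
## [1] 0.01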

We can run a Monte Carlo simulation to check our theoretical approximation. We do that, and we indeed get values close to what the theory predicts.

B <- 10000
profit <- replicate(B, {
  draws <- sample(c(x, loss_per_foreclosure), 
                  n, 
                  prob = c(1-p, p), 
                  replace = TRUE)
  sum(draws)
})
mean(profit)
## [1] 2129313
mean(profit < 0)
## [1] 0.0114

And again, the probability of profit being less than zero according to the Monte Carlo simulation is about 1%.

One of our employees points out that since the bank is making about $2,000 per loan, you should give out more loans. Why just n? You explain that finding those n clients was hard; you need a group that is predictable and that keeps the chances of default low. He then points out that even if the probability of default is higher, as long as your expected value is positive, you can minimize your chances of losing money by increasing n, the number of loans, and relying on the law of large numbers. He claims that even if the default rate is twice as high, say 4%, setting the interest rate just a bit higher will still give you a positive expected value.

So if we set the interest rate at 5%, we are

# interest rate
r <- 0.05
# probability of default
p <- 0.04
x <- r * 180000
loss_per_foreclosure * p + x * (1 - p)
## [1] 640

guaranteed a positive expected value of $640 per loan. And we can minimize our chances of losing money by simply increasing the number of loans, since the probability of S being less than 0 is equal to the probability of Z being less than negative E[S] divided by SE[S], with Z a standard normal random variable, as we saw earlier. If we do the math, we will see that we can pick an n so that this probability is 1 in 100.

\[\Pr(S<0)=\Pr\left(Z<-\frac{E[S]}{SE[S]}\right)\]

Let’s see how. If we define μ and σ to be the expected value and standard deviation of a single draw, that is, the earnings of a single loan, then using the formulas above, the expected value of S is

\[E[S]=n\mu\]

and the standard error of S is

\[SE[S]=\sqrt{n}\,\sigma\]

So if we define little z to be qnorm(0.01), we have the following formula,

\[-\frac{n\mu}{\sqrt{n}\,\sigma}=-\frac{\sqrt{n}\,\mu}{\sigma}=z\]

which tells us

\[n\geq z^2\sigma^2/\mu^2\]

that is, if n is at least z squared times σ squared divided by μ squared, we are guaranteed to have a probability of less than 0.01 of losing money.

The implication is that as long as μ, the expected value, is positive, we can find an n, a number of loans, that minimizes the probability of a loss. This is a form of the Law of Large Numbers.

With x fixed, we can now ask: what n do we need for the probability of losing money to be 0.01? Let’s work it out for our example.
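Here a single draw equals x with probability 1 - p and l with probability p, so \(\mu=lp+x(1-p)\) and \(\sigma=|x-l|\sqrt{p(1-p)}\), and the bound becomes

\[n\geq\frac{z^2(x-l)^2p(1-p)}{\{lp+x(1-p)\}^2}\]

which is exactly what the following code computes.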

z <- qnorm(0.01)
n <- ceiling( 
  (z^2 * (x - l)^2 * p * (1 - p)) /
  (l * p + x * (1 - p))^2
  )
n
## [1] 22163

And we get that n has to be 22,163 loans. This is the number of loans we need so that our probability of losing money is about 1%.
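As a check on the approximation, plugging this n back into \(\Pr(Z<-\sqrt{n}\,\mu/\sigma)\) should give a probability just under 0.01:

mu <- loss_per_foreclosure * p + x * (1 - p)             # per-loan expected profit
sigma <- (x - loss_per_foreclosure) * sqrt(p * (1 - p))  # per-loan standard deviation
pnorm(-sqrt(n) * mu / sigma)  # just under 0.01, since n was rounded up

The total expected profit with this many loans is then: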

n* (loss_per_foreclosure * p + x * (1 - p) )
## [1] 14184320

And furthermore, we are expected to earn a total of about $14 million. We can confirm this with a Monte Carlo simulation.

p <- 0.04
x <- 0.05 * 180000
profit <- replicate(10000, {
  draws <- sample(c(x, loss_per_foreclosure),
                  n,
                  prob = c(1-p, p),
                  replace = TRUE)
  sum(draws)
})
mean(profit < 0)
## [1] 0.0112

We run the Monte Carlo simulation, and we get the probability of losing money is about 1%.

This seems like a no-brainer. Your colleague decides to leave your bank and start his own high-risk mortgage company. A few months later, your colleague’s bank has gone bankrupt. A book is written about it and, eventually, a movie is made about the mistake your colleague and many others made. What happened? Your colleague’s scheme was based in part on the following mathematical formula:

\[SE[(X_1+X_2+\cdots +X_n)/n]=\sigma/\sqrt{n}\]

By making n large, we minimize the standard error of our per-loan profit. However, for this rule to hold, the X’s must be independent draws: the fact that one person defaults must be independent of other people defaulting. Note, for example, that in the extreme case of averaging the same event over and over, the draws are certainly not independent. In that case, we get a much higher standard error.

\[SE[(X_1+X_2+\cdots +X_n)/n]=SE[nX_1/n]=\sigma >\sigma/\sqrt{n}\]

It is \(\sqrt{n}\) times larger.
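To see this effect in simulation, here is a minimal sketch comparing the average of 100 independent coin flips to the average of 100 copies of a single flip:

# independent draws: the SE of the mean shrinks like 1/sqrt(n)
ind <- replicate(10000, mean(sample(c(0, 1), 100, replace = TRUE)))
# perfectly dependent draws: averaging copies of one draw does not help
dep <- replicate(10000, mean(rep(sample(c(0, 1), 1), 100)))
sd(ind)  # close to 0.5 / sqrt(100) = 0.05
sd(dep)  # close to 0.5, i.e., sqrt(100) = 10 times larger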

To construct a more realistic simulation than the one your colleague ran, let’s assume there could be a global event that affects everybody with high-risk mortgages and changes their default probability. We will assume that, with a 50-50 chance, all the probabilities go up or down slightly, to somewhere between 0.03 and 0.05. But this happens to everybody at once, not just to one person, so these draws are no longer independent. Let’s use a Monte Carlo simulation to see what happens under this model.

p <- 0.04
x <- 0.05 * 180000
profit <- replicate(10000, {
  new_p <- 0.04 + sample(seq(-0.01, 0.01, length.out = 100), 1)
  draws <- sample(c(x, loss_per_foreclosure),
                  n,
                  prob = c(1-new_p, new_p),
                  replace = TRUE)
  sum(draws)
})

Note that our expected profit is still large.

mean(profit)
## [1] 13877550

We’re expected to make about $14 million. However, the probability of the bank having negative earnings shoots way up to about 35%.

mean(profit < 0)
## [1] 0.3546

Even scarier is the probability of losing more than $10 million, which is 24%.

mean(profit < -10000000)
## [1] 0.2444

To understand how this happens, we can look at the distribution of our random variable. It doesn’t look normal at all.

data.frame(profit = profit / 10^6) %>% 
  ggplot(aes(x = profit)) +
  geom_histogram(color="black", binwidth = 6)

The theory completely breaks down, and our random variable has much more variability than expected. The financial meltdown of 2007 was due, among other things, to financial experts assuming independence when there was none.

sessionInfo()
## R version 3.5.1 (2018-07-02)
## Platform: x86_64-pc-linux-gnu (64-bit)
## Running under: Ubuntu 18.04.1 LTS
## 
## Matrix products: default
## BLAS: /home/michael/anaconda3/lib/R/lib/libRblas.so
## LAPACK: /home/michael/anaconda3/lib/R/lib/libRlapack.so
## 
## locale:
## [1] en_CA.UTF-8
## 
## attached base packages:
## [1] stats     graphics  grDevices utils     datasets  methods   base     
## 
## other attached packages:
## [1] ggplot2_3.0.0        dplyr_0.7.6          RevoUtils_11.0.1    
## [4] RevoUtilsMath_11.0.0
## 
## loaded via a namespace (and not attached):
##  [1] Rcpp_0.12.18     pillar_1.3.0     compiler_3.5.1   plyr_1.8.4      
##  [5] bindr_0.1.1      tools_3.5.1      digest_0.6.15    evaluate_0.11   
##  [9] tibble_1.4.2     gtable_0.2.0     pkgconfig_2.0.1  rlang_0.2.1     
## [13] rstudioapi_0.7   yaml_2.2.0       blogdown_0.9.8   xfun_0.4.11     
## [17] bindrcpp_0.2.2   withr_2.1.2      stringr_1.3.1    knitr_1.20      
## [21] rprojroot_1.3-2  grid_3.5.1       tidyselect_0.2.4 glue_1.3.0      
## [25] R6_2.2.2         rmarkdown_1.10   bookdown_0.7     purrr_0.2.5     
## [29] magrittr_1.5     backports_1.1.2  scales_0.5.0     codetools_0.2-15
## [33] htmltools_0.3.6  assertthat_0.2.0 colorspace_1.3-2 labeling_0.3    
## [37] stringi_1.2.4    lazyeval_0.2.1   munsell_0.5.0    crayon_1.3.4

References

R Core Team. 2018. R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing. https://www.R-project.org/.

Wickham, Hadley, Winston Chang, Lionel Henry, Thomas Lin Pedersen, Kohske Takahashi, Claus Wilke, and Kara Woo. 2018. Ggplot2: Create Elegant Data Visualisations Using the Grammar of Graphics. https://CRAN.R-project.org/package=ggplot2.

Wickham, Hadley, Romain François, Lionel Henry, and Kirill Müller. 2018. Dplyr: A Grammar of Data Manipulation. https://CRAN.R-project.org/package=dplyr.