# Model assessment
In this chapter, you'll turn the tools of multiple models towards model assessment: learning how the model performs when given new data. So far we've focussed on models as tools for description, using models to help us understand the patterns in the data we have collected so far. But ideally a model will do more than just describe what we have seen so far - it will also help predict what will come next.
In other words, we want a model that doesn't just perform well on the sample, but also accurately summarises the underlying population.
In some industries this is the primary use of models: you spend relatively little time fitting the model compared to how many times you use it.
There are two basic ways that a model can fail with new data:
* You can under- or over-fit the model. Underfitting is where you fail
  to model an important trend: you leave too much in the residuals, and not
  enough in the model. Overfitting is the opposite: you fit a trend to
  what is actually random noise: you've put too much in the model, and not left
  enough in the residuals. Generally overfitting tends to be more of a
  problem than underfitting.
* The process that generates the data might change. There's nothing the
  model can do about this. You can protect yourself against this to some
  extent by creating models that you understand, and by applying your knowledge
  to the problem. Are these fundamentals likely to change? If you have
  a model that you are going to use again and again for a long time, you
  need to plan to maintain it, regularly checking that it still
  makes sense, i.e. that the population is still the same.
See, for example, <http://research.google.com/pubs/pub43146.html> and <http://www.wired.com/2015/10/can-learn-epic-failure-google-flu-trends/>.
The most common problem that causes a model to do poorly with new data is overfitting.

Obviously, there's a bit of a problem here: we don't have new data with which to check the model, and even if we did, we'd presumably use it to make the model better in the first place. One powerful family of approaches can help us get around this problem: resampling.
There are two main resampling techniques that we're going to cover.
* We will use __cross-validation__ to assess model quality. In
  cross-validation, you split the data into test and training sets. You fit
  the model to the training set, and evaluate it on the test set. This avoids
  the intrinsic bias of using the same data to both fit the model and assess its
  quality. However, it introduces a new bias: you're not using all the data to
  fit the model, so it's not going to be quite as good as it could be.
* We will use __bootstrapping__ to understand how stable (or how variable)
  the model is. If you sample data from the same population multiple times,
  how much does your model vary? Instead of going back to collect new data,
  you can use the best estimate of the population you have: the data you've
  collected so far. The amazing idea of the bootstrap is that you can resample
  from the data you already have. (A minimal sketch of both ideas follows
  this list.)
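To make these two ideas concrete before we meet the modelr helpers, here's a minimal sketch in base R. It uses the built-in `mtcars` data purely for illustration, and the object names (`training`, `testing`, `boot_sample`) are made up for this example:

```{r}
set.seed(1014)
n <- nrow(mtcars)

# Cross-validation: hold out roughly 20% of the rows for testing, fit on the rest
test_rows <- sample(n, size = round(0.2 * n))
training  <- mtcars[-test_rows, ]
testing   <- mtcars[test_rows, ]

# Bootstrapping: sample n rows *with replacement* to mimic drawing a new sample
boot_rows   <- sample(n, replace = TRUE)
boot_sample <- mtcars[boot_rows, ]
```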
There are lots of high-level helpers to do these resampling methods in R. We're going to use the tools provided by the modelr package because they are explicit - you'll see exactly what's going on at each step.
For other high-level approaches to predictive modelling, see <http://topepo.github.io/caret> and [Applied Predictive Modeling](https://amzn.com/1461468485), by Max Kuhn and Kjell Johnson.
If you're competing in competitions, like Kaggle, that are predominantly about creating good predictions, developing a good strategy for avoiding overfitting is very important. Otherwise you risk tricking yourself into thinking that you have a good model, when in reality you just have a model that does a good job of fitting your data.
There is a closely related family of techniques that uses a similar idea: model ensembles. However, instead of trying to find the single best model, ensembles make use of all the models, acknowledging that even models that don't fit all the data particularly well can still model some subsets well. In general, you can think of model ensemble techniques as functions that take a list of models, and return a single model that attempts to take the best part of each.
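As a quick, hypothetical sketch (not part of this chapter's toolkit), the simplest possible ensemble just averages the predictions of a list of already-fitted models. The `ensemble_predict()` helper below is made up for illustration:

```{r}
# Hypothetical helper: average the predictions from a list of fitted models
ensemble_predict <- function(models, data) {
  preds <- lapply(models, predict, newdata = data)
  Reduce(`+`, preds) / length(preds)
}
```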
### Prerequisites
```{r setup, message = FALSE}
# Standard data manipulation and visualisation
library(dplyr)
library(ggplot2)
# Tools for working with models
library(broom)
library(modelr)
library(splines)
# Tools for working with lots of models
library(purrr)
library(tidyr)
```
```{r}
# Options that make your life easier
options(
contrasts = c("contr.treatment", "contr.treatment"),
  na.action = na.exclude
)
```
## Overfitting
Both bootstrapping and cross-validation help us to spot and remedy the problem of __overfitting__, where the model fits the data we've seen so far extremely well, but does a bad job of generalising to new data.
A classic example of overfitting is using a polynomial with too many degrees of freedom.
This is the bias-variance tradeoff: a simpler model is more biased, while a more complex model is more variable. Occam's razor suggests preferring the simpler model when both do a similar job.
```{r}
# The true data generating process: a straight line, plus a little noise
true_model <- function(x) {
1 + 2 * x + rnorm(length(x), sd = 0.25)
}
df <- tibble(
x = seq(0, 1, length = 20),
y = true_model(x)
)
df %>%
ggplot(aes(x, y)) +
geom_point()
```
We can create a model that fits this data very well:
```{r, message = FALSE}
library(splines)
# A very flexible model: a degree-7 polynomial
my_model <- function(df) {
lm(y ~ poly(x, 7), data = df)
}
mod <- my_model(df)
rmse(mod, df)
grid <- df %>%
expand(x = seq_range(x, 50))
preds <- grid %>%
add_predictions(mod, var = "y")
df %>%
ggplot(aes(x, y)) +
geom_line(data = preds) +
geom_point()
```
As we fit progressively more complicated models, the model error decreases:
```{r}
fs <- list(
y ~ x,
y ~ poly(x, 2),
y ~ poly(x, 3),
y ~ poly(x, 4),
y ~ poly(x, 5),
y ~ poly(x, 6),
y ~ poly(x, 7)
)
models <- tibble(
n = 1:7,
f = fs,
mod = map(f, lm, data = df),
rmse = map2_dbl(mod, list(df), rmse)
)
models %>%
ggplot(aes(n, rmse)) +
geom_line(colour = "grey70") +
geom_point(size = 3)
```
But do you think this model will do well if we apply it to new data from the same population?
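Since we simulated the data ourselves, we can (unusually) answer that question directly by drawing a genuinely new sample from the same population and computing the error on it. This is just a quick sketch using the chapter's own `true_model()` and `rmse()`; `new_df` is a throwaway name:

```{r}
new_df <- tibble(
  x = seq(0, 1, length = 20),
  y = true_model(x)
)

# Error of the degree-7 model on data it has never seen
rmse(mod, new_df)
```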
In real life you can't easily go out and recollect your data. There are two approaches to help you get around this problem. I'll introduce them briefly here, and then we'll go into more depth in the following sections.
```{r}
# Generate 100 bootstrap resamples of df, and refit the model to each one
boot <- bootstrap(df, 100) %>%
mutate(
mod = map(strap, my_model),
pred = map2(list(grid), mod, add_predictions)
)
boot %>%
unnest(pred) %>%
ggplot(aes(x, pred, group = .id)) +
geom_line(alpha = 1/3)
```
It's a little easier to see what's going on if we zoom in on the y axis:
```{r}
last_plot() +
coord_cartesian(ylim = c(0, 5))
```
(You might notice that while each individual model varies a lot, the average of all the models seems like it might not be that bad. That gives rise to a model ensemble technique called model averaging.)
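As a quick sketch of that idea (not needed for the rest of the chapter), we can average the bootstrapped predictions at each value of `x`:

```{r}
# Average the 100 bootstrap predictions at each x: a crude "model average"
boot %>%
  unnest(pred) %>%
  group_by(x) %>%
  summarise(pred = mean(pred)) %>%
  ggplot(aes(x, pred)) +
  geom_line()
```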
Bootstrapping is a useful tool to help us understand how the model might vary if we'd collected a different sample from the population. A related technique is cross-validation, which allows us to explore the quality of the model. It works by repeatedly splitting the data into two pieces. One piece, the training set, is used to fit the model, and the other piece, the test set, is used to measure the model's quality.
The following code generates 100 test-training splits, holding out 20% of the data for testing each time. We then fit a model to the training set, and evaluate the error on the test set:
```{r}
cv <- crossv_mc(df, 100) %>%
mutate(
mod = map(train, my_model),
rmse = map2_dbl(mod, test, rmse)
)
cv
```
Obviously, a plot is going to help us see the distribution more easily. I've added our original estimate of the model error as a white vertical line (where the same dataset is used for both training and testing), and you can see it's very optimistic.
```{r}
cv %>%
ggplot(aes(rmse)) +
geom_ref_line(v = rmse(mod, df)) +
geom_freqpoly(binwidth = 0.2) +
geom_rug()
```
The distribution of errors is highly skewed: there are a few cases which have very high errors. These represent samples where the training set ended up with few (or no) cases at the low or high values of x. Let's take a look:
```{r}
filter(cv, rmse > 1.5) %>%
  mutate(train = map(train, as.data.frame)) %>%
  unnest(train) %>%
ggplot(aes(x, .id)) +
geom_point() +
xlim(0, 1)
```
All of the models that fit particularly poorly were fit to samples that missed either the first one or two or the last one or two observations. Because polynomials shoot off to positive or negative infinity outside the range of the data, they give very bad predictions for those values.
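To see that behaviour directly, here's a quick sketch (not from the original text): refit the same degree-7 polynomial to the data with the two extreme x values dropped, and look at its predictions across the full range. The names `df_inner` and `mod_inner` are made up for this example:

```{r}
df_inner <- filter(df, x > 0.05, x < 0.95)   # drop the smallest and largest x
mod_inner <- my_model(df_inner)

grid %>%
  add_predictions(mod_inner) %>%
  ggplot(aes(x, pred)) +
  geom_line() +
  geom_point(aes(y = y), data = df)
```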
Now that we've given you a quick overview of and intuition for these techniques, let's dive into more detail.
## Resamples
### Building blocks
Both the bootstrap and cross-validation are built on top of a "resample" object. In modelr, you can access these low-level tools directly with the `resample_*` functions.
These functions return an object of class "resample", which represents the resample in a memory-efficient way. Instead of storing the resampled dataset itself, it stores the integer indices of the selected rows, and a "pointer" to the original dataset. This makes resamples take up much less memory.
```{r}
x <- resample_bootstrap(as_tibble(mtcars))
class(x)
x
```
Most modelling functions call `as.data.frame()` on the `data` argument, which generates the resampled data frame. Because this happens automatically, you can just pass the resample object directly:
```{r}
lm(mpg ~ wt, data = x)
```
If you get a strange error, it's probably because the modelling function doesn't do this, and you need to do it yourself. You'll also need to do it yourself if you want to `unnest()` the data so you can visualise it. If you just want the selected rows, you can use `as.integer()`.
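For example, using the `x` resample created above:

```{r}
# Materialise the bootstrap resample as an ordinary data frame
head(as.data.frame(x))

# Or just look at which row indices were selected
as.integer(x)
```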
### Data frame API
`bootstrap()` and `crossv_mc()` are built on top of these simpler primitives. They are designed to work naturally in a model exploration environment by returning data frames. Each row of the data frame represents a single sample. They return slightly different columns:
* `bootstrap()` returns a data frame with two columns:
```{r}
bootstrap(df, 3)
```
  `strap` gives the bootstrap sample dataset, and `.id` assigns a
  unique identifier to each bootstrap sample (this is often useful for plotting).
* `crossv_mc()` returns a data frame with three columns:
```{r}
crossv_mc(df, 3)
```
  `train` contains the data that you should use to fit (train) the model,
  and `test` contains the data you should use to validate the model. Together,
  the test and train columns form an exclusive partition of the full dataset
  (a quick check of this follows below).
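We can verify that claim for one of the splits. This is just a quick sketch; `cv3` is a throwaway name:

```{r}
cv3 <- crossv_mc(df, 3)

# The train and test indices of any one split together cover every row of df
sort(c(as.integer(cv3$train[[1]]), as.integer(cv3$test[[1]])))
nrow(df)
```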
## Numeric summaries of model quality
When you start dealing with many models, it's helpful to have some rough way of comparing them so you can spend your time looking at the models that do the best job of capturing important features in the data.
One way to capture the quality of the model is to summarise the distribution of the residuals. For example, you could look at the quantiles of the absolute residuals. For this dataset, 25% of predictions are less than \$7,400 away, and 75% are less than \$25,800 away. That seems like quite a bit of error when predicting someone's income!
```{r}
heights <- as_tibble(readRDS("data/heights.RDS"))
h <- lm(income ~ height, data = heights)
h
qae(h, heights)
range(heights$income)
```
You might be familiar with the $R^2$. That's a single number summary that rescales the variance of the residuals to between 0 (very bad) and 1 (very good):
```{r}
rsquare(h, heights)
```
$R^2$ can be interpreted as the amount of variation in the data explained by the model. Here we're explaining 3% of the total variation - not a lot! But I don't think worrying about the relative amount of variation explained is that useful; instead I think you need to consider whether the absolute amount of variation explained is useful for your project.
It's called the $R^2$ because for simple models like this, it's just the square of the correlation between the variables:
```{r}
cor(heights$income, heights$height) ^ 2
```
The $R^2$ is an ok single number summary, but I prefer to think about the unscaled residuals because they're easier to interpret in the context of the original data. As you'll learn later, it's also a rather optimistic measure of the model: because you're assessing the model using the same data that was used to fit it, it really gives an upper bound on the quality of the model, not a fair assessment.
## Bootstrapping
## Cross-validation