Model accuracy and fit


Introduction

In this (optional) at-home lab, you will learn how to plot a linear regression with confidence and prediction intervals, and how to use various tools to assess model fit: calculating the MSE, making train-test splits, and writing a function for cross-validation. Solutions to this lab will be posted on Monday, May 17th on the course website.

We will use the Boston dataset, which is in the MASS package that comes with R.

library(ISLR)
library(MASS)
library(tidyverse)

  1. Inspect the Boston dataset using the View() function.

The Boston dataset contains the housing values and other information about Boston suburbs. We will use the dataset to predict housing value (the variable medv, here the outcome/dependent variable) by socio-economic status (the variable lstat, here the predictor / independent variable).

Let’s explore socio-economic status and housing value in the dataset using visualization.


  2. Create a scatter plot from the Boston dataset with lstat mapped to the x position and medv mapped to the y position. Store the plot in an object called p_scatter.


Plotting linear regression with confidence intervals

We’ll start with making and visualizing the linear model. As you know, a linear model is fitted in R using the function lm(), which returns an lm object. We are going to walk through the construction of a plot with a fit line and prediction / confidence intervals from an lm object.

First, we will create the linear model. This model will be used to predict outcomes for the current data set and, further along in this lab, for new data.


  3. Create a linear model object called lm_ses using the formula medv ~ lstat and the Boston dataset.

You have now trained a regression model with medv (housing value) as the outcome/dependent variable and lstat (socio-economic status) as the predictor / independent variable.
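
For reference, a minimal sketch of such a call (one possible way, not the only one):

lm_ses <- lm(formula = medv ~ lstat, data = Boston)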

Remember that a regression estimates β0 (the intercept) and β1 (the slope) in the following equation:

y = β0 + β1 • x1 + ε


  4. Use the function coef() to extract the intercept and slope from the lm_ses object. Interpret the slope coefficient.


  5. Use summary() to get a summary of the lm_ses object. What do you see? You can use the help file ?summary.lm.

We now have a model object lm_ses that represents the formula

medv_i = 34.55 - 0.95 • lstat_i + ε_i

With this object, we can predict a new medv value by inputting its lstat value. The predict() method enables us to do this for the lstat values in the original dataset.


  6. Save the predicted y values to a variable called y_pred.


  7. Create a scatter plot with y_pred mapped to the x position and the true y value (Boston$medv) mapped to the y position. What do you see? What would this plot look like if the fit were perfect?
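
One possible way to build this comparison plot; the geom_abline() diagonal marks where all points would fall if the fit were perfect:

ggplot(tibble(pred = y_pred, obs = Boston$medv), aes(x = pred, y = obs)) +
  geom_point() +
  geom_abline(slope = 1, intercept = 0, linetype = "dashed")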

We can also generate predictions from new data using the newdata argument in the predict() method. For that, we need to prepare a data frame with new values for the original predictors.

One way to generate such new values is with the function seq(). This function from base R generates a sequence of equally spaced numbers: you specify the range with its from and to arguments, and the number of values with length.out. For more information, call ?seq.
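
For example, five equally spaced values between 0 and 40:

seq(from = 0, to = 40, length.out = 5)
## [1]  0 10 20 30 40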


  8. Use the seq() function to generate a sequence of 1000 equally spaced values from 0 to 40. Store this vector in a data frame (using data.frame() or tibble()) with lstat as the column name. Name the data frame pred_dat.


  9. Use the newly created data frame from Question 8 as the newdata argument to a predict() call for lm_ses. Store it in a variable named y_pred_new.

Now, we’ll continue with the plotting part by adding a prediction line to the plot we previously constructed.


  10. Add the vector y_pred_new to the pred_dat data frame with the name medv.


  11. Add a geom_line() to p_scatter from Question 2, with pred_dat as the data argument. What does this line represent?
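
One possible way to add this line, assuming p_scatter maps lstat and medv in its global aes() so that geom_line() can inherit those mappings:

p_scatter + geom_line(data = pred_dat, colour = "blue")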


  12. The interval argument can be used to generate confidence or prediction intervals. Create a new object called y_pred_95 using predict() (again with the pred_dat data) with the interval argument set to “confidence”. What is in this object?
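
A minimal sketch of such a call; predict() then returns a matrix with the columns fit, lwr, and upr:

y_pred_95 <- predict(lm_ses, newdata = pred_dat, interval = "confidence")
head(y_pred_95)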


  13. Using the data from Question 12 and the sequence created in Question 8, create a data frame with 4 columns: medv, lstat, lower, and upper.


  14. Add a geom_ribbon() to the plot with the data frame you just made. The ribbon geom requires three aesthetics: x (lstat, already mapped), ymin (lower), and ymax (upper). Add the ribbon below the geom_line() and the geom_point() from before to make sure those remain visible. Give it a nice colour and clean up the plot, too!
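
A sketch of one possible layering, assuming the data frame from Question 13 is called gg_pred (a name chosen here for illustration):

ggplot(gg_pred, aes(x = lstat)) +
  geom_ribbon(aes(ymin = lower, ymax = upper), fill = "lightblue") +
  geom_point(data = Boston, aes(y = medv), colour = "grey40") +
  geom_line(aes(y = medv), colour = "darkblue") +
  theme_minimal()

Because ggplot2 draws layers in order, listing the ribbon first keeps the points and the line visible on top of it.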


  15. Explain in your own words what the ribbon represents.


  16. Do the same thing, but now with the prediction interval instead of the confidence interval.


Model fit using the mean square error

Next, we will write a function to assess the model fit using the mean square error: the average of the squared differences between our predictions and the observed values.


  17. Write a function called mse() that takes in two vectors: true y values and predicted y values, and which outputs the mean square error.

Start like so:

mse <- function(y_true, y_pred) {
  # your function here
}

Wikipedia may help for the formula.
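
If you get stuck, here is a minimal sketch of one possible implementation:

mse <- function(y_true, y_pred) {
  # average of the squared differences between observed and predicted values
  mean((y_true - y_pred)^2)
}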


  18. Make sure your mse() function works correctly by running the following code.

mse(1:10, 10:1)
## [1] 33

In the code, we state that our observed values correspond to 1, 2, ..., 9, 10, while our predicted values correspond to 10, 9, ..., 2, 1. This is graphed below, where the blue dots correspond to the observed values, and the yellow dots correspond to the predicted values. Using your function, you have now calculated the mean squared length of the dashed lines depicted in the graph below. If your function works correctly, the value returned should equal 33.


  19. Calculate the mean square error of the lm_ses model. Use the medv column as y_true and use the predict() method to generate y_pred.

You have calculated the mean squared length of the dashed lines in the plot below. As the MSE is computed using the data that was used to fit the model, we have actually obtained the training MSE. Below, we continue with splitting our data into a training, validation, and test set, such that we can calculate the out-of-sample prediction error during model building using the validation set, and estimate the true out-of-sample MSE using the test set.

Note that you can also easily obtain how much the predictions on average differ from the observed values in the original scale of the outcome variable. To obtain this, you take the square root of the mean square error. This is called the Root Mean Square Error, abbreviated as RMSE.
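
For example, using the mse() function from above:

sqrt(mse(Boston$medv, predict(lm_ses)))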


Obtaining train-validation-test splits

Now we will use the sample() function to randomly select observations from the Boston dataset to go into a training, validation, and test set. The training set will be used to fit our model, the validation set will be used to calculate the out-of-sample prediction error during model building, and the test set will be used to estimate the true out-of-sample MSE.


  20. The Boston dataset has 506 observations. Use c() and rep() to create a vector with 253 times the word “train”, 152 times the word “validation”, and 101 times the word “test”. Call this vector splits.


  21. Use the function sample() to randomly order this vector and add it to the Boston dataset using mutate(). Assign the newly created dataset to a variable called boston_master.


  22. Now use filter() to create a training, validation, and test set from the boston_master data. Call these datasets boston_train, boston_valid, and boston_test.
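
Taken together, a sketch of one possible implementation of Questions 20 to 22:

splits <- c(rep("train", 253), rep("validation", 152), rep("test", 101))

boston_master <- Boston %>% mutate(splits = sample(splits))

boston_train <- boston_master %>% filter(splits == "train")
boston_valid <- boston_master %>% filter(splits == "validation")
boston_test  <- boston_master %>% filter(splits == "test")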

We will set aside the boston_test dataset for now.


  23. Train a linear regression model called model_1 using the training dataset. Use the formula medv ~ lstat like in the first lm() exercise. Use summary() to check that this object is as you expect.


  24. Calculate the MSE with this object. Save this value as model_1_mse_train.


  25. Now calculate the MSE on the validation set and assign it to a variable called model_1_mse_valid. Hint: use the newdata argument in predict().

This is the estimated out-of-sample mean squared error.
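
A minimal sketch, assuming model_1 was fitted as lm(medv ~ lstat, data = boston_train):

model_1_mse_valid <- mse(
  y_true = boston_valid$medv,
  y_pred = predict(model_1, newdata = boston_valid)
)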


  26. Create a second model model_2 for the training data which includes age and tax as predictors. Calculate the train and validation MSE.


  27. Compare model 1 and model 2 in terms of their training and validation MSE. Which would you choose and why?

In choosing the best model, you should base your answer on the validation MSE. Using this out-of-sample mean square error, we have made a model decision: which predictors to include (only lstat, or age and tax in addition to lstat, to predict housing value). With that, we have selected a final model.


  28. For your final model, retrain the model one more time using both the training and the validation set. Then, calculate the test MSE based on the (retrained) final model. What does this number tell you?

As you will see during the remainder of the course, we usually set the test set apart at the beginning and perform the train-validation split on the remaining data multiple times, as is done, for example, in cross-validation (see below). The validation sets are used for making model decisions, such as selecting predictors or tuning model parameters; in other words, for building the model. Because the validation set is used to base model decisions on, we cannot use it to obtain a true out-of-sample MSE. That is where the test set comes in: it can be used to obtain the MSE of the final model once all model decisions have been made. At that point, we can use all data except the test set to retrain the model one last time, using as much data as possible to estimate the parameters of the final model.
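
A sketch of this final step, assuming model 2 was the chosen model (boston_tv is a name chosen here for illustration):

boston_tv   <- bind_rows(boston_train, boston_valid)
model_final <- lm(medv ~ lstat + age + tax, data = boston_tv)
mse(boston_test$medv, predict(model_final, newdata = boston_test))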


Cross-validation (advanced)

This is an advanced exercise. Some components we have seen before in this lab, but some things will be completely new. Try to complete it by yourself, but don’t worry if you get stuck. If you don’t know about for loops in R, read up on those before you start the exercise, for example by reading the Basics: For Loops tab on the course website.

Use help in this order:

  • R help files
  • Internet search & stack exchange
  • Your peers
  • The answer, which shows one solution

You may also just read the answers once they have been made available and try to understand what happens in each step.


  29. Create a function that performs k-fold cross-validation for linear models.

Inputs:

  • formula: a formula just as in the lm() function
  • dataset: a data frame
  • k: the number of folds for cross validation
  • any other arguments you deem necessary

Outputs:

  • Mean square error averaged over folds
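
If you get stuck, below is a minimal sketch of one possible approach; the fold-assignment strategy and the reuse of the mse() helper from earlier in this lab are choices made here for illustration, not the only solution.

cv_lm <- function(formula, dataset, k) {
  # shuffle row indices and assign each row to one of k folds
  n     <- nrow(dataset)
  folds <- sample(rep(1:k, length.out = n))

  mses <- numeric(k)
  for (i in 1:k) {
    # fit on all folds except fold i, validate on fold i
    data_train <- dataset[folds != i, ]
    data_valid <- dataset[folds == i, ]
    fold_model <- lm(formula, data = data_train)

    # the outcome variable is the left-hand side of the formula
    y_name  <- all.vars(formula)[1]
    mses[i] <- mse(data_valid[[y_name]],
                   predict(fold_model, newdata = data_valid))
  }
  # mean square error averaged over the k folds
  mean(mses)
}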

  30. Use your function to perform 9-fold cross-validation with a linear model with the formula medv ~ lstat + age + tax. Compare it to a model with the formula medv ~ lstat + I(lstat^2) + age + tax.