Linear regression for data science


Introduction

In this practical, you will learn how to handle many variables with regression: you will use variable selection and shrinkage techniques, and learn how to tune the hyper-parameters of these techniques. This practical is derived from chapter 6 of ISLR. In addition, you will need for loops (see also the Basics: For Loops tab on the course website under week 3) and data manipulation techniques from dplyr.

One of the packages we are going to use is glmnet. You will probably need to run install.packages("glmnet") before the library() calls below will work.

library(ISLR)
library(glmnet)
library(tidyverse)


Best subset selection

Our goal for today is to predict Salary from the Hitters dataset in the ISLR package. First, we will prepare a data frame called baseball from the Hitters dataset, removing the baseball players for which Salary is missing. Use the following code:

baseball <- Hitters %>% filter(!is.na(Salary))

We can check how many baseball players are left using:

nrow(baseball)
## [1] 263

  1. Create baseball_train (50%), baseball_valid (30%), and baseball_test (20%) datasets.

Hint: in addition to once again using the dplyr function filter() to specify which players you want in each new data frame, consider how you can split the number of players into three groups of the specified sizes. One possible approach is sketched below.
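
A minimal sketch of such a split. We assume here that you record the group assignment in a column called split (a name we choose; the code later in this practical also refers to it):

set.seed(45)

# randomly assign the 263 players to groups of roughly 50%, 30%, and 20%
baseball <- baseball %>% 
  mutate(split = sample(rep(c("train", "valid", "test"), 
                            times = c(132, 79, 52))))

baseball_train <- baseball %>% filter(split == "train")
baseball_valid <- baseball %>% filter(split == "valid")
baseball_test  <- baseball %>% filter(split == "test")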



  2. Using your knowledge of ggplot from lab 2, plot the salary information of the train, validation, and test groups using geom_histogram() or geom_density().
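
For example, assuming the split column from the sketch above, a density plot could be built like this:

baseball %>% 
  ggplot(aes(x = Salary, fill = split)) +
  geom_density(alpha = 0.5) +
  theme_minimal()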

We will use the following function, which we called lm_mse(), to obtain the MSE on the validation dataset for predictions from a linear model:

lm_mse <- function(formula, train_data, valid_data) {
  # extract the name of the outcome variable from the formula
  y_name <- as.character(formula)[2]
  y_true <- valid_data[[y_name]]
  
  # fit the model on the training data, predict on the validation data
  lm_fit <- lm(formula, train_data)
  y_pred <- predict(lm_fit, newdata = valid_data)
  
  # mean squared error on the validation set
  mean((y_true - y_pred)^2)
}

Note that the input consists of (1) a formula, (2) a training dataset, and (3) a validation dataset.


  3. Try out the function with the formula Salary ~ Hits + Runs, using baseball_train and baseball_valid.
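
This amounts to a single function call:

lm_mse(Salary ~ Hits + Runs, baseball_train, baseball_valid)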

We have pre-programmed a function for you, which generates a character vector of all formulas with a set number p of predictor variables. You can load the function into your environment by sourcing the .R file it is written in:

source("generate_formulas.R")

You can use it like so:

generate_formulas(p = 2, x_vars = c("x1", "x2", "x3", "x4"), y_var = "y")
## [1] "y ~ x1 + x2" "y ~ x1 + x3" "y ~ x1 + x4" "y ~ x2 + x3" "y ~ x2 + x4"
## [6] "y ~ x3 + x4"

  4. Create a character vector of all predictor variables from the Hitters dataset. colnames() may be of help. Note that Salary is not a predictor variable.
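
A minimal sketch (the name x_vars is our choice):

# all column names except the outcome variable Salary
x_vars <- colnames(Hitters)[colnames(Hitters) != "Salary"]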


  5. Using the function generate_formulas(), generate all formulas with Salary as the outcome and 3 predictors from the Hitters data. Assign this to a variable called formulas. There should be 969 elements in this vector.
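
Assuming the x_vars vector from the previous question, this could look as follows:

formulas <- generate_formulas(p = 3, x_vars = x_vars, y_var = "Salary")
length(formulas) # should be 969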


  6. Use a for loop to find the best set of 3 predictors in the Hitters dataset based on MSE. Use the baseball_train and baseball_valid datasets.

When creating the for loop, use the function as.formula() from the stats package to loop over all the equations contained in formulas. as.formula() transforms a character string into a formula object, so we can actually use it as a formula in our code.

To select the formula with the best MSE, use the function which.min(), which returns the index of the lowest value in the vector provided. A possible loop is sketched below.
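
A minimal sketch of such a loop, assuming the formulas vector from the previous question:

# store the validation MSE of each candidate formula
mses <- rep(0, length(formulas))
for (i in seq_along(formulas)) {
  mses[i] <- lm_mse(as.formula(formulas[i]), baseball_train, baseball_valid)
}

# the formula with the lowest validation MSE
formulas[which.min(mses)]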



  7. Do the same for 1, 2, and 4 predictors. Now select the best model from the models with the best set of 1, 2, 3, or 4 predictors in terms of its out-of-sample MSE.


  8. Calculate the test MSE for the model with the best number of predictors.


  9. Using the model with the best number of predictors, create a plot comparing predicted values (mapped to x position) versus observed values (mapped to y position) of baseball_test. A sketch is shown below.
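
One possible sketch, assuming best_formula holds the winning formula (as a character string) from the previous questions; both names are our own choices:

lm_best <- lm(as.formula(best_formula), data = baseball_train)

tibble(
  pred = predict(lm_best, newdata = baseball_test),
  obs  = baseball_test$Salary
) %>% 
  ggplot(aes(x = pred, y = obs)) +
  geom_point() +
  geom_abline(slope = 1, intercept = 0, lty = 2) + # line of perfect prediction
  theme_minimal()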

Through enumerating all possibilities, we have selected the best subset of at most 4 non-interacting predictors for the prediction of baseball salaries. This method works well for a small number of predictors, but the computational cost of enumeration grows quickly to the point where it is not feasible to enumerate all combinations of variables: with the 19 candidate predictors in Hitters there are already 2^19 = 524,288 possible subsets.


Regularization with glmnet

glmnet is a package that implements efficient (quick!) algorithms for LASSO and ridge regression, among other things.


  10. Skim through the help file of glmnet. We are going to perform a linear regression with normal (gaussian) error terms. What format should our data be in?

Again, we will try to predict baseball salary, this time using all the available variables and using the LASSO penalty to perform subset selection. For this, we first need to generate an input matrix.


  11. First generate the input matrix using (a variation on) the following code. Remember that the “.” in a formula means “all available variables”. Make sure to check that this x_train looks like what you would expect.

# drop the split-assignment column before building the design matrix
x_train <- model.matrix(Salary ~ ., data = baseball_train %>% select(-split))

The model.matrix() function takes a dataset and a formula and outputs the predictor matrix where the categorical variables have been correctly transformed into dummy variables, and it adds an intercept. It is used internally by the lm() function as well!


  12. Using glmnet(), perform a LASSO regression with the generated x_train as the predictor matrix and Salary as the response variable. Set the lambda parameter of the penalty to 15. NB: remove the intercept column from the x_train matrix, because glmnet() adds an intercept internally. A sketch is shown below.
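
A minimal sketch (the name result is our choice):

result <- glmnet(
  x = x_train[, -1],        # drop the intercept column added by model.matrix()
  y = baseball_train$Salary,
  family = "gaussian",      # linear regression with gaussian errors
  alpha = 1,                # alpha = 1 selects the LASSO penalty
  lambda = 15
)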


  13. The coefficients for the variables are in the beta element of the list generated by the glmnet() function. Which variables have been selected? You may use the coef() function.


  14. Create a predicted versus observed plot for the model you generated with the baseball_valid data. Use the predict() function for this! What is the MSE on the validation set? A sketch of the MSE computation is shown below.
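
A sketch of the validation MSE, assuming the result object from before. The validation design matrix has to be built in the same way as x_train:

x_valid <- model.matrix(Salary ~ ., data = baseball_valid %>% select(-split))[, -1]
y_pred  <- as.numeric(predict(result, newx = x_valid))
mean((baseball_valid$Salary - y_pred)^2) # validation MSE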


Tuning lambda

Like many methods of analysis, regularized regression has a tuning parameter. In the previous section, we’ve set this parameter to 15. The lambda parameter changes the strength of the shrinkage in glmnet(). Changing the tuning parameter will change the predictions, and thus the MSE. In this section, we will select the tuning parameter based on out-of-sample MSE.


  15. Fit a LASSO regression model on the same data as before, but now do not enter a specific lambda value. What is different about the object that is generated? Hint: use the coef() and plot() methods on the resulting object.
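
For example (a sketch; result_path is our own name):

result_path <- glmnet(x_train[, -1], baseball_train$Salary, family = "gaussian", alpha = 1)
plot(result_path) # coefficient paths over the automatically chosen lambda sequence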

For deciding which value of lambda to choose, we could work similarly to what we have done in the best subset selection section before. However, the glmnet package includes another method for this task: cross-validation.


  16. Use the cv.glmnet() function to determine the lambda value for which the out-of-sample MSE is lowest using 15-fold cross-validation. As your dataset, you may use the training and validation sets bound together with bind_rows(). What is the best lambda value?

Remember to remove the column that designates the train/validation/test split (created in Question 1) from your dataset, for example with select(-split) or by index with [, -21], as it is not a variable in the Hitters dataset (which you have called baseball). Then extract lambda.min from the result of cv.glmnet(). A sketch is shown below.
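
A minimal sketch, assuming the split column from Question 1:

train_valid <- bind_rows(baseball_train, baseball_valid)

x_cv <- model.matrix(Salary ~ ., data = train_valid %>% select(-split))[, -1]
result_cv <- cv.glmnet(x = x_cv, y = train_valid$Salary, nfolds = 15)
result_cv$lambda.min # the lambda with the lowest cross-validated MSE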



  17. Try out the plot() method on this object. What do you see? What does this tell you about the bias-variance tradeoff?

Note that all of the previous exercises can also be completed using ridge regression, which is not covered in much depth in this practical session. To learn more about this method, refer back to Section 6.2 of the An Introduction to Statistical Learning textbook.


Comparing methods (advanced)

This last exercise is optional. You can also opt to view the answer when it is made available and try to understand what is happening in the code.


  18. Create a bar plot comparing the test set (baseball_test) MSE of (a) linear regression with all variables, (b) the best subset selection regression model we created, (c) LASSO with lambda set to 50, and (d) LASSO with cross-validated lambda. As training dataset, use the rows in both the baseball_train and baseball_valid datasets. A skeleton sketch is shown below.
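
A skeleton sketch (not a full answer): compute the four test MSEs, collect them in a tibble, and plot them with geom_col(). The names below are our own choices.

# fill in the four test MSEs computed with the methods above
mse_results <- tibble(
  method = c("lm", "best subset", "lasso (50)", "lasso (cv)"),
  mse    = c(NA, NA, NA, NA)
)

mse_results %>% 
  ggplot(aes(x = method, y = mse)) +
  geom_col() +
  theme_minimal()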