Regularization with SciKit-Learn
Previously we created a new polynomial feature set and then applied our standard linear regression to it, but we can be smarter about model choice and utilize regularization.
Regularization minimizes the sum of the RSS (residual sum of squares) and a penalty term. This penalty term punishes models whose coefficients grow too large. Some methods of regularization will actually drive the coefficients of unhelpful features to zero, in which case the model effectively ignores those features.
Let's explore two methods of regularization, Ridge Regression and Lasso, and then Elastic Net, which combines the two. We'll apply each to the polynomial feature set (regularization wouldn't be as effective on the small original feature set of X).
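As a quick reference, here is a sketch of the penalized objectives, writing $\lambda$ for the regularization strength (called alpha in scikit-learn) and $\beta_j$ for the coefficients; scikit-learn's exact objectives also include scaling constants:

$$\text{Ridge:}\quad \min_{\beta}\ \mathrm{RSS}(\beta) + \lambda \sum_{j=1}^{p} \beta_j^2$$

$$\text{Lasso:}\quad \min_{\beta}\ \mathrm{RSS}(\beta) + \lambda \sum_{j=1}^{p} |\beta_j|$$

$$\text{Elastic Net:}\quad \min_{\beta}\ \mathrm{RSS}(\beta) + \lambda \left( \alpha \sum_{j=1}^{p} |\beta_j| + \frac{1-\alpha}{2} \sum_{j=1}^{p} \beta_j^2 \right)$$

where $\alpha$ is the mixing parameter (l1_ratio in scikit-learn).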
Imports
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
Data and Setup
df = pd.read_csv("Advertising.csv")
X = df.drop('sales',axis=1)
y = df['sales']
Polynomial Conversion
from sklearn.preprocessing import PolynomialFeatures

polynomial_converter = PolynomialFeatures(degree=3, include_bias=False)
poly_features = polynomial_converter.fit_transform(X)
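With three original features and degree=3, the converter should produce 19 polynomial terms. A quick shape check, assuming the standard 200-row Advertising data:

# 3 raw features expand to 19 polynomial terms at degree 3 (no bias column)
print(poly_features.shape)  # expected: (200, 19)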
Train | Test Split
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(poly_features, y, test_size=0.3, random_state=101)
Scaling the Data
While our particular data set has all its values in the same order of magnitude ($1000s of dollars spent), that typically won't be the case for a dataset, and since the mathematics behind regularized models sums coefficients together, it's important to standardize the features. Review the theory videos for more info, as well as a discussion of why we only fit to the training data and then transform both sets separately.
from sklearn.preprocessing import StandardScaler

# help(StandardScaler)
scaler = StandardScaler()
scaler.fit(X_train)

X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
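After scaling, the training features should have zero mean and unit variance (the test set only approximately so, since the scaler was fit on the training data alone). A quick sanity check:

# The scaler was fit on X_train only, so these should be ~0 and ~1
print(X_train.mean(axis=0).round(2))
print(X_train.std(axis=0).round(2))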
Ridge Regression
Make sure to view the video lectures for a full explanation of Ridge Regression and choosing an alpha.
from sklearn.linear_model import Ridge

ridge_model = Ridge(alpha=10)
ridge_model.fit(X_train,y_train)

test_predictions = ridge_model.predict(X_test)

from sklearn.metrics import mean_absolute_error, mean_squared_error

MAE = mean_absolute_error(y_test,test_predictions)
MSE = mean_squared_error(y_test,test_predictions)
RMSE = np.sqrt(MSE)

MAE: 0.5774404204714167
RMSE: 0.894638646131965

How did it perform on the training set? (This will be used later on for comparison.)
# Training Set Performance
train_predictions = ridge_model.predict(X_train)
MAE = mean_absolute_error(y_train,train_predictions)
MAE: 0.5288348183025304
Choosing an alpha value with Cross-Validation
Review the theory video for full details.
from sklearn.linear_model import RidgeCV

# help(RidgeCV)
# Choosing a scoring metric: https://scikit-learn.org/stable/modules/model_evaluation.html
# Negative error metrics (e.g. neg_mean_absolute_error) are used so that all
# metrics follow the convention "higher is better"
# See all options: sklearn.metrics.SCORERS.keys()

# The more alpha options you pass, the longer this will take.
# Fortunately our data set is still pretty small.
ridge_cv_model = RidgeCV(alphas=(0.1, 1.0, 10.0), scoring='neg_mean_absolute_error')
ridge_cv_model.fit(X_train,y_train)

ridge_cv_model.alpha_: 0.1
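Cross-validation selected the smallest alpha we offered. RidgeCV also stores the cross-validated score of the winning alpha in its best_score_ attribute (available in recent scikit-learn versions); since we chose negative MAE as the scorer, values closer to zero are better:

# Cross-validated score of the chosen alpha (negative MAE here)
print(ridge_cv_model.best_score_)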
test_predictions = ridge_cv_model.predict(X_test)

MAE = mean_absolute_error(y_test,test_predictions)
MSE = mean_squared_error(y_test,test_predictions)
RMSE = np.sqrt(MSE)

MAE: 0.4273774884351013
RMSE: 0.6180719926948981
# Training Set Performance
train_predictions = ridge_cv_model.predict(X_train)
MAE = mean_absolute_error(y_train,train_predictions)
MAE: 0.30941321056569865

ridge_cv_model.coef_:
array([ 5.40769392,  0.5885865 ,  0.40390395, -6.18263924,  4.59607939,
       -1.18789654, -1.15200458,  0.57837796, -0.1261586 ,  2.5569777 ,
       -1.38900471,  0.86059434,  0.72219553, -0.26129256,  0.17870787,
        0.44353612, -0.21362436, -0.04622473, -0.06441449])
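To see which polynomial term each coefficient belongs to, you can pair them with the converter's feature names. A sketch, assuming a recent scikit-learn where PolynomialFeatures exposes get_feature_names_out (older versions used get_feature_names):

# Pair each coefficient with the polynomial term it multiplies
coef_df = pd.DataFrame({
    'feature': polynomial_converter.get_feature_names_out(X.columns),
    'coef': ridge_cv_model.coef_,
})
print(coef_df)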
Lasso Regression
from sklearn.linear_model import LassoCV

# https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LassoCV.html
lasso_cv_model = LassoCV(eps=0.1, n_alphas=100, cv=5)
lasso_cv_model.fit(X_train,y_train)

lasso_cv_model.alpha_: 0.4943070909225832

test_predictions = lasso_cv_model.predict(X_test)

MAE = mean_absolute_error(y_test,test_predictions)
MSE = mean_squared_error(y_test,test_predictions)
RMSE = np.sqrt(MSE)

MAE: 0.6541723161252867
RMSE: 1.1308001022762548
# Training Set Performance
train_predictions = lasso_cv_model.predict(X_train)
MAE = mean_absolute_error(y_train,train_predictions)
MAE: 0.6912807140820709

lasso_cv_model.coef_:
array([1.002651  , 0.        , 0.        , 0.        , 3.79745279,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        ])
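Lasso drove all but two coefficients to zero (in the standard PolynomialFeatures ordering, these are the TV term and the TV * radio interaction), trading some test accuracy for a much simpler model. A quick count of the surviving features:

# Count how many polynomial features lasso actually kept
kept = np.sum(lasso_cv_model.coef_ != 0)
print(kept, "non-zero coefficients out of", len(lasso_cv_model.coef_))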
Elastic Net
Elastic Net combines the penalties of ridge regression and lasso in an attempt to get the best of both worlds!

from sklearn.linear_model import ElasticNetCV

elastic_model = ElasticNetCV(l1_ratio=[.1, .5, .7, .9, .95, .99, 1], tol=0.01)
elastic_model.fit(X_train,y_train)

elastic_model.l1_ratio_: 1.0

A chosen l1_ratio_ of 1.0 means cross-validation preferred a pure L1 (lasso) penalty here.

test_predictions = elastic_model.predict(X_test)

MAE = mean_absolute_error(y_test,test_predictions)
MSE = mean_squared_error(y_test,test_predictions)
RMSE = np.sqrt(MSE)

MAE: 0.5663262117569452
RMSE: 0.7485546215633726
# Training Set Performance
train_predictions = elastic_model.predict(X_train)
MAE = mean_absolute_error(y_train,train_predictions)
MAE: 0.43075829904723684

elastic_model.coef_:
array([ 3.78993643,  0.89232919,  0.28765395, -1.01843566,  2.15516144,
       -0.3567547 , -0.271502  ,  0.09741081,  0.        , -1.05563151,
        0.2362506 ,  0.07980911,  1.26170778,  0.01464706,  0.00462336,
       -0.39986069,  0.        ,  0.        , -0.05343757])
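Gathering the test-set metrics from the runs above into one table makes the comparison easy to read (values copied from the outputs above):

# Test-set metrics from the models trained above
results = pd.DataFrame({
    'model': ['Ridge (alpha=10)', 'RidgeCV', 'LassoCV', 'ElasticNetCV'],
    'MAE':  [0.5774, 0.4274, 0.6542, 0.5663],
    'RMSE': [0.8946, 0.6181, 1.1308, 0.7486],
})
print(results)

The cross-validated ridge model comes out ahead on this data. Interestingly, the elastic net run beats the earlier lasso run even though it settled on a pure L1 penalty; a likely reason is that ElasticNetCV searched a wider range of alpha values than our LassoCV run, whose eps=0.1 restricted it to relatively large (heavily penalizing) alphas.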