The Bayesian Information Criterion, often abbreviated BIC, is a metric that is used to compare the goodness of fit of different regression models.
In practice, we fit several regression models to the same dataset and choose the model with the lowest BIC value as the model that best fits the data.
We use the following formula to calculate BIC:
BIC = (RSS + log(n) · d · σ̂²) / n
where:
- d: The number of predictors
- n: Total observations
- σ̂²: An estimate of the variance of the error associated with each response measurement in the regression model
- RSS: Residual sum of squares of the regression model
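To make the formula concrete, here is a minimal sketch that plugs placeholder values into the expression above. The numbers are purely hypothetical (not from any real dataset), and note that statistical packages such as statsmodels compute BIC from the model log-likelihood instead, so their reported values are on a different scale than this textbook formula.
import numpy as np
#textbook BIC formula: (RSS + log(n) * d * sigma^2_hat) / n
def textbook_bic(rss, n, d, sigma2_hat):
    return (rss + np.log(n) * d * sigma2_hat) / n
#hypothetical example: RSS = 300, n = 32 observations,
#d = 2 predictors, estimated error variance = 9
print(textbook_bic(rss=300, n=32, d=2, sigma2_hat=9))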
To calculate the BIC of several regression models in Python, we can use the statsmodels.regression.linear_model.OLS() function; the fitted model it returns has a property called bic that tells us the BIC value for a given model.
The following example shows how to use this function to calculate and interpret the BIC for various regression models in Python.
Example: Calculate BIC of Regression Models in Python
Suppose we would like to fit two different multiple linear regression models using variables from the mtcars dataset.
First, we’ll load this dataset:
import statsmodels.api as sm
import pandas as pd

#define URL where dataset is located
url = "https://raw.githubusercontent.com/Statology/Python-Guides/main/mtcars.csv"

#read in data
data = pd.read_csv(url)

#view head of data
data.head()

               model   mpg  cyl   disp   hp  drat     wt   qsec  vs  am  gear  carb
0          Mazda RX4  21.0    6  160.0  110  3.90  2.620  16.46   0   1     4     4
1      Mazda RX4 Wag  21.0    6  160.0  110  3.90  2.875  17.02   0   1     4     4
2         Datsun 710  22.8    4  108.0   93  3.85  2.320  18.61   1   1     4     1
3     Hornet 4 Drive  21.4    6  258.0  110  3.08  3.215  19.44   1   0     3     1
4  Hornet Sportabout  18.7    8  360.0  175  3.15  3.440  17.02   0   0     3     2
Next, we’ll fit the following two regression models:
- Model 1: mpg = β0 + β1(disp) + β2(qsec)
- Model 2: mpg = β0 + β1(disp) + β2(wt)
The following code shows how to fit the first model and calculate the BIC:
#define response variable
y = data['mpg']
#define predictor variables
x = data[['disp', 'qsec']]
#add constant to predictor variables
x = sm.add_constant(x)
#fit regression model
model = sm.OLS(y, x).fit()
#view BIC of model
print(model.bic)
174.23905634994506
The BIC of this model turns out to be 174.239.
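As an optional sanity check (not part of the original example), we can reproduce this number from the model's log-likelihood: statsmodels computes BIC for a fitted OLS model as -2·(log-likelihood) + log(n)·k, where k is the number of estimated coefficients including the intercept. The sketch below assumes x still holds the constant plus the two predictors from the model above.
import numpy as np
#number of estimated coefficients: intercept + disp + qsec
k = x.shape[1]
#BIC computed from the log-likelihood; this should match model.bic above
manual_bic = -2 * model.llf + np.log(model.nobs) * k
print(manual_bic)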
Next, we’ll fit the second model and calculate the BIC:
#define response variable
y = data['mpg']
#define predictor variables
x = data[['disp', 'wt']]
#add constant to predictor variables
x = sm.add_constant(x)
#fit regression model
model = sm.OLS(y, x).fit()
#view BIC of model
print(model.bic)
166.56499196301334
The BIC of this model turns out to be 166.565.
Since the second model has the lower BIC value, it is the better-fitting model.
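In practice we may want to compare more than two candidate models. One way to organize that, reusing the data frame and statsmodels import already loaded above, is to loop over candidate predictor sets and collect each model's BIC. The predictor combinations below are illustrative choices, not prescribed by this tutorial.
#candidate predictor sets to compare (illustrative choices)
candidates = {
    'disp + qsec': ['disp', 'qsec'],
    'disp + wt': ['disp', 'wt'],
    'disp + wt + hp': ['disp', 'wt', 'hp']
}
#fit each model and print its BIC
for name, predictors in candidates.items():
    X = sm.add_constant(data[predictors])
    fit = sm.OLS(data['mpg'], X).fit()
    print(name, round(fit.bic, 3))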
Once we’ve identified this model as the best fit, we can analyze its results, including the R-squared value and the beta coefficients, to determine the exact relationship between the set of predictor variables and the response variable.
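For example, a short sketch of that follow-up analysis for the second model (refitting it here so the snippet stands on its own, and reusing the data frame loaded earlier) might look like this:
#refit the chosen model (Model 2: mpg ~ disp + wt)
X = sm.add_constant(data[['disp', 'wt']])
best_model = sm.OLS(data['mpg'], X).fit()
#view R-squared and beta coefficients
print(best_model.rsquared)
print(best_model.params)
#view the full regression output
print(best_model.summary())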
Additional Resources
Two other metrics that are commonly used to compare the fit of regression models are AIC and adjusted R-squared.
The following tutorials explain how to calculate each of these metrics for regression models in Python:
How to Calculate AIC of Regression Models in Python
How to Calculate Adjusted R-Squared in Python