The Mahalanobis distance measures how far a point lies from the center of a multivariate distribution, taking the correlations among the variables into account. It’s often used to find outliers in statistical analyses that involve several variables.
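Concretely, for an observation x, a mean vector μ, and a covariance matrix S estimated from the data, the squared Mahalanobis distance is d²(x) = (x − μ)ᵀ S⁻¹ (x − μ). This is the quantity that the function in Step 2 below computes for every row of the dataset.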
This tutorial explains how to calculate the Mahalanobis distance in Python.
Example: Mahalanobis Distance in Python
Use the following steps to calculate the Mahalanobis distance for every observation in a dataset in Python.
Step 1: Create the dataset.
First, we’ll create a dataset that contains the exam scores of 20 students along with the number of hours they spent studying, the number of prep exams they took, and their current grade in the course:
import numpy as np
import pandas as pd

#create a dataset of exam scores, study hours, prep exams, and current grades
data = {'score': [91, 93, 72, 87, 86, 73, 68, 87, 78, 99, 95, 76, 84, 96, 76, 80, 83, 84, 73, 74],
        'hours': [16, 6, 3, 1, 2, 3, 2, 5, 2, 5, 2, 3, 4, 3, 3, 3, 4, 3, 4, 4],
        'prep': [3, 4, 0, 3, 4, 0, 1, 2, 1, 2, 3, 3, 3, 2, 2, 2, 3, 3, 2, 2],
        'grade': [70, 88, 80, 83, 88, 84, 78, 94, 90, 93, 89, 82, 95, 94, 81, 93, 93, 90, 89, 89]}

df = pd.DataFrame(data, columns=['score', 'hours', 'prep', 'grade'])

#display first five rows of dataframe
df.head()

   score  hours  prep  grade
0     91     16     3     70
1     93      6     4     88
2     72      3     0     80
3     87      1     3     83
4     86      2     4     88
Step 2: Calculate the Mahalanobis distance for each observation.
Next, we’ll write a short function to calculate the Mahalanobis distance for each observation. (Technically, the function returns the squared distance, which is the quantity compared against the Chi-Square distribution in Step 3.)
#create function to calculate squared Mahalanobis distance
def mahalanobis(x=None, data=None, cov=None):
    #center each observation on the column means
    x_mu = x - np.mean(data, axis=0)
    #estimate the covariance matrix from the data if one isn't supplied
    if cov is None:
        cov = np.cov(data.values.T)
    inv_covmat = np.linalg.inv(cov)
    left = np.dot(x_mu, inv_covmat)
    mahal = np.dot(left, x_mu.T)
    return mahal.diagonal()

#create new column in dataframe that contains Mahalanobis distance for each row
df['mahalanobis'] = mahalanobis(x=df, data=df[['score', 'hours', 'prep', 'grade']])

#display first five rows of dataframe
df.head()

   score  hours  prep  grade  mahalanobis
0     91     16     3     70    16.501963
1     93      6     4     88     2.639286
2     72      3     0     80     4.850797
3     87      1     3     83     5.201261
4     86      2     4     88     3.828734
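As an optional sanity check (not part of the walkthrough itself), you can compare the first value in the new column against scipy.spatial.distance.mahalanobis, which returns the unsquared distance between a single observation and the mean vector; squaring its result should roughly reproduce the first value above. A minimal sketch:

from scipy.spatial.distance import mahalanobis as scipy_mahalanobis

#covariance matrix of the four original variables and its inverse
cols = ['score', 'hours', 'prep', 'grade']
inv_cov = np.linalg.inv(np.cov(df[cols].values.T))

#mean vector and first observation
mu = df[cols].mean().values
first_row = df[cols].iloc[0].values

#scipy returns the unsquared distance, so square it to compare
print(scipy_mahalanobis(first_row, mu, inv_cov)**2)  #approximately 16.501963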
Step 3: Calculate the p-value for each Mahalanobis distance.
We can see that some of the Mahalanobis distances are much larger than others. To determine if any of the distances are statistically significant, we need to calculate their p-values.
The p-value for each distance is the one that corresponds to a Chi-Square statistic equal to the Mahalanobis distance, with k-1 degrees of freedom, where k is the number of variables. Since this dataset has four variables, we’ll use 4-1 = 3 degrees of freedom.
from scipy.stats import chi2

#calculate p-value for each mahalanobis distance
df['p'] = 1 - chi2.cdf(df['mahalanobis'], 3)

#display p-values for first five rows in dataframe
df.head()

   score  hours  prep  grade  mahalanobis         p
0     91     16     3     70    16.501963  0.000895
1     93      6     4     88     2.639286  0.450644
2     72      3     0     80     4.850797  0.183054
3     87      1     3     83     5.201261  0.157639
4     86      2     4     88     3.828734  0.280562
Typically, an observation with a p-value less than .001 is considered an outlier. We can see that the first observation is an outlier in the dataset because its p-value is less than .001.
Depending on the context of the problem, you may decide to remove this observation from the dataset since it’s an outlier and could affect the results of the analysis.
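If you do decide to drop outliers, one minimal sketch (reusing the .001 cutoff from above; df_clean is just an illustrative name) is to keep only the rows whose p-value meets the threshold:

#flag the outlier rows (p-value below .001)
print(df[df['p'] < .001])

#keep the remaining rows for any follow-up analysis
df_clean = df[df['p'] >= .001].copy()
print(df_clean.shape)  #the flagged rows are no longer included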