
How to Calculate Cohen’s Kappa in R


In statistics, Cohen’s Kappa is used to measure the level of agreement between two raters or judges who each classify items into mutually exclusive categories.

The formula for Cohen’s kappa is calculated as:

k = (po – pe) / (1 – pe)

where:

  • po: Relative observed agreement among raters
  • pe: Hypothetical probability of chance agreement

Rather than just calculating the percentage of items that the raters agree on, Cohen’s Kappa attempts to account for the fact that the raters may happen to agree on some items purely by chance.
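To make the formula concrete, here is a minimal sketch that computes po, pe, and Kappa by hand for a small, made-up set of ratings (the values below are hypothetical and used only for illustration):

#two hypothetical raters classifying 10 items into two categories
r1 <- c(1, 1, 0, 1, 0, 0, 1, 1, 0, 1)
r2 <- c(1, 0, 0, 1, 0, 1, 1, 1, 0, 0)

#po: proportion of items the two raters agree on
po <- mean(r1 == r2)  #0.7

#pe: chance agreement based on each rater's marginal proportions
pe <- mean(r1 == 1) * mean(r2 == 1) + mean(r1 == 0) * mean(r2 == 0)  #0.5

#Cohen's Kappa
(po - pe) / (1 - pe)  #0.4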

The value for Cohen’s Kappa ranges from -1 to 1, where:

  • 0 indicates no more agreement between the two raters than would be expected by chance
  • 1 indicates perfect agreement between the two raters
  • Negative values indicate less agreement than would be expected by chance

The following table summarizes how to interpret different values for Cohen’s Kappa:

Cohen's Kappa        Level of Agreement
< 0.20               Poor
0.21 – 0.40          Fair
0.41 – 0.60          Moderate
0.61 – 0.80          Good
0.81 – 1.00          Very Good

The easiest way to calculate Cohen’s Kappa in R is by using the cohen.kappa() function from the psych package.
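If the psych package is not already installed, it can be installed from CRAN first (a one-time step):

#install and load the psych package
install.packages("psych")
library(psych)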

The following example shows how to use this function in practice.

Example: Calculating Cohen’s Kappa in R

Suppose two art museum curators are asked to rate 15 paintings on whether they’re good enough to be shown in a new exhibit (1 = good enough, 0 = not good enough).

The following code shows how to use the cohen.kappa() function from the psych package to calculate Cohen’s Kappa for the two raters:

library(psych)

#define vectors of ratings for both raters
rater1 <- c(0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0)
rater2 <- c(0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0)

#calculate Cohen's Kappa
cohen.kappa(x=cbind(rater1,rater2))

Cohen Kappa and Weighted Kappa correlation coefficients and confidence boundaries 
                 lower estimate upper
unweighted kappa -0.14     0.34  0.81
weighted kappa   -0.14     0.34  0.81

 Number of subjects = 15 

The estimate column displays the value for Cohen’s Kappa, and the lower and upper columns display the bounds of the 95% confidence interval for that estimate.

From the output we can see that Cohen’s Kappa turns out to be 0.34.

Based on the table from earlier, we would say that the two raters only had a “fair” level of agreement.
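As a quick check, the same estimate can be reproduced by hand from the formula shown earlier, reusing the rater1 and rater2 vectors defined above:

#po: observed agreement between the two raters
po <- mean(rater1 == rater2)  #0.667

#pe: chance agreement based on each rater's marginal proportions
pe <- mean(rater1 == 1) * mean(rater2 == 1) + mean(rater1 == 0) * mean(rater2 == 0)  #0.498

#Cohen's Kappa
(po - pe) / (1 - pe)  #0.336, which rounds to 0.34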

If you want to calculate the level of agreement between three or more raters, it’s recommended to use Fleiss’ Kappa instead.
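Fleiss’ Kappa is beyond the scope of this tutorial, but as a rough sketch, one common option is the kappam.fleiss() function from the irr package. The third rater’s ratings below are hypothetical, added only to show the expected input format:

library(irr)

#matrix of ratings with one row per painting and one column per rater
ratings <- cbind(rater1,
                 rater2,
                 rater3 = c(0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0))

#calculate Fleiss' Kappa
kappam.fleiss(ratings)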

Additional Resources

The following tutorials offer additional resources on Cohen’s Kappa:

Introduction to Cohen’s Kappa
Online Cohen’s Kappa Calculator
How to Calculate Cohen’s Kappa in Excel
How to Calculate Cohen’s Kappa in Python
