Bias in Facial Classification ML Models

Categories: python, r, r-studio, analysis, exploratory, quarto, dashboard, shiny, plotly, hypothesis-testing, machine-learning, statistics, latex, youtube
Author: Carl Klein, Patrick Cooper, Bhavana Jonnalagadda, Piya (Leo) Ngamkam, Dhairya Veera

Published: December 18, 2023

An exploratory and statistical analysis on the biases prevalent in facial recognition machine learning models.

GitHub Repository

Project Website

YouTube Video

My team performed a range of analyses to test whether facial recognition machine learning models exhibit bias. We evaluated the DeepFace and FairFace algorithms against a large and diverse dataset. Using both classic performance measurements, such as accuracy and F1-score, and categorical hypothesis testing (proportionality testing), we found some instances of bias. Ironically, perhaps our biggest discovery was that categorical hypothesis testing (proportionality testing) was not a strong indicator for identifying issues and errors in machine learning models.
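The full analysis lives in the linked repository; as a rough illustration of the two approaches mentioned above, the sketch below computes per-group accuracy and F1-score and then runs a two-proportion z-test on misclassification rates between two demographic groups. The toy dataframe, the column names (`group`, `y_true`, `y_pred`), and the choice of statsmodels' `proportions_ztest` are assumptions for illustration, not the team's actual code.

```python
# Minimal sketch: per-group accuracy/F1 plus a two-proportion z-test on error
# rates between two demographic groups. Data and column names are hypothetical.
import pandas as pd
from sklearn.metrics import accuracy_score, f1_score
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical predictions from a facial-classification model.
df = pd.DataFrame({
    "group":  ["A", "A", "A", "A", "B", "B", "B", "B"],
    "y_true": [1, 0, 1, 1, 1, 0, 0, 1],
    "y_pred": [1, 0, 1, 1, 0, 1, 0, 1],
})

# Classic performance measurements, broken out by demographic group.
for group, sub in df.groupby("group"):
    acc = accuracy_score(sub["y_true"], sub["y_pred"])
    f1 = f1_score(sub["y_true"], sub["y_pred"])
    print(f"group {group}: accuracy={acc:.2f}, F1={f1:.2f}")

# Proportionality test: are the misclassification rates of the two groups
# statistically different? (two-proportion z-test)
errors = df.assign(err=df["y_true"] != df["y_pred"]).groupby("group")["err"]
counts = errors.sum().to_numpy()   # misclassifications per group
nobs = errors.count().to_numpy()   # samples per group
stat, p_value = proportions_ztest(count=counts, nobs=nobs)
print(f"z={stat:.2f}, p={p_value:.3f}")  # small p suggests differing error rates
```

In this framing, a significant difference in error proportions across groups is taken as evidence of bias; our results suggest that, in practice, such proportionality tests were a weaker diagnostic than the per-group performance metrics themselves.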

See the links above for the complete analysis and documentation.