PhD Student in Machine Learning & AI
This project evaluates the racial biases present in popular facial expression recognition (FER) datasets and assesses how these biases impact model performance and fairness across demographic groups. The goal is to promote equitable computer vision systems through better data curation and training.
The study revealed significant performance gaps, with underrepresented groups facing higher error rates. Proposed mitigation strategies include dataset augmentation and integrating fairness constraints during training. The findings contribute to fairness awareness in facial recognition technologies.
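The per-group performance gaps described above can be quantified by comparing misclassification rates across demographic groups. A minimal sketch, using hypothetical labels and group identifiers for illustration (the actual datasets and group definitions are not specified here):

```python
from collections import defaultdict

def per_group_error_rates(y_true, y_pred, groups):
    """Compute the misclassification rate for each demographic group."""
    errors = defaultdict(int)
    counts = defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        counts[g] += 1
        if t != p:
            errors[g] += 1
    return {g: errors[g] / counts[g] for g in counts}

# Toy FER-style example with two hypothetical groups, "A" and "B"
y_true = ["happy", "sad", "happy", "angry", "sad", "happy"]
y_pred = ["happy", "happy", "happy", "angry", "happy", "sad"]
groups = ["A", "A", "A", "B", "B", "B"]

rates = per_group_error_rates(y_true, y_pred, groups)
# The spread between best- and worst-served groups is one simple fairness gap
gap = max(rates.values()) - min(rates.values())
```

A large `gap` signals the kind of disparity the project targets; mitigation (augmentation, fairness constraints) aims to shrink it without degrading overall accuracy.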