Deep Generative Views to Mitigate Gender Classification Bias Across Gender-Race Groups

Input samples are projected into a latent space, where augmentations (generative views) are produced. The loss function minimizes the distance between the embeddings of an input and its generative views, alongside the classification loss.

Abstract

Published studies have suggested that automated face-based gender classification algorithms are biased across gender-race groups. Specifically, unequal accuracy rates were obtained for women and dark-skinned people. To mitigate the bias of gender classifiers, the vision community has developed several strategies. However, the efficacy of these mitigation strategies has been demonstrated for only a limited number of races, mostly Caucasian and African-American. Further, these strategies often impose a trade-off between bias and classification accuracy. To further advance the state-of-the-art, we leverage the power of generative views, structured learning, and evidential learning towards mitigating gender classification bias. Through extensive experimental validation, we demonstrate the superiority of our bias mitigation strategy in improving classification accuracy and reducing bias across gender-racial groups, resulting in state-of-the-art performance in intra- and cross-dataset evaluations.
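The training objective described above combines a classification loss with a term that pulls the embedding of an input towards the embeddings of its generative views. A minimal per-sample sketch of such a combined loss is below; the function names (`combined_loss`, `squared_l2`) and the weighting parameter `lam` are illustrative assumptions, not the paper's exact formulation.

```python
import math

def squared_l2(a, b):
    # Squared Euclidean distance between two embedding vectors
    # (the consistency term between an input and its generative view).
    return sum((x - y) ** 2 for x, y in zip(a, b))

def cross_entropy(probs, label):
    # Standard classification loss on the predicted class probabilities.
    return -math.log(probs[label])

def combined_loss(emb_orig, emb_view, probs, label, lam=1.0):
    # Total loss = classification loss + lam * embedding-consistency loss.
    # `lam` is a hypothetical weighting hyperparameter.
    return cross_entropy(probs, label) + lam * squared_l2(emb_orig, emb_view)

# Example: identical embeddings leave only the classification term.
loss = combined_loss([1.0, 0.0], [1.0, 0.0], [0.9, 0.1], 0, lam=0.5)
```

When the embeddings of the input and its generative view coincide, the consistency term vanishes and the loss reduces to the cross-entropy alone; increasing `lam` would trade classification fit for invariance to the generated views.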

Publication
In Workshop on Understanding and Mitigating Demographic Bias in Biometric Systems, International Conference on Pattern Recognition, Montreal, Canada


