Bias

Demographic Fairness and Accountability of Audio and Video-based Unimodal and Bi-modal Deepfake Detectors

With the advances in deep generative models, facial forgery by advanced deepfake generation techniques has posed a severe societal and political threat. Recently, the new problem of synthesizing a person's voice has emerged. With the …

Leveraging Diffusion and Flow Matching Models for Demographic Bias Mitigation of Facial Attribute Classifiers

Published research highlights the presence of demographic bias in automated facial attribute classification algorithms, notably impacting women and individuals with darker skin tones. Proposed bias mitigation techniques are not generalizable, need …

GBDF: Gender Balanced DeepFake Dataset Towards Fair DeepFake Detection

Facial forgery by deepfakes has raised severe societal concerns. Several solutions have been proposed by the vision community to effectively combat the misinformation on the internet via automated deepfake detection systems. Recent studies have …

Deep Generative Views to Mitigate Gender Classification Bias Across Gender-Race Groups

Published studies have suggested the bias of automated face-based gender classification algorithms across gender-race groups. Specifically, unequal accuracy rates were obtained for women and dark-skinned people. To mitigate the bias of gender …

Is Facial Recognition Biased at Near-Infrared Spectrum as Well?

Published academic research and media articles suggest face recognition is biased across demographics. Specifically, unequal performance is obtained for women, dark-skinned people, and older adults. However, these published studies have examined the …

Investigating Fairness of Ocular Biometrics Among Young, Middle-Aged, and Older Adults

A number of studies suggest bias of face biometrics, i.e., face recognition and soft-biometric estimation methods, across gender, race, and age groups. There is a recent push to investigate the bias of different biometric modalities toward the …

A novel approach for bias mitigation of gender classification algorithms using consistency regularization

Published research has confirmed the bias of automated face-based gender classification algorithms across gender-racial groups. Specifically, unequal accuracy rates were obtained for women and dark-skinned people for face-based automated gender …

Demographic Bias Mitigation at Test-Time Using Uncertainty Estimation and Human-Machine Partnership

Facial attribute classification algorithms frequently manifest demographic biases by obtaining differential performance across gender and racial groups. Existing bias mitigation techniques are mostly in-processing techniques, i.e., implemented during …

ProActive DeepFake Detection using GAN-based Visible Watermarking

With the advances in generative adversarial networks (GAN), facial manipulations called DeepFakes have caused major security risks and raised severe societal concerns. However, popular passive DeepFake detection is an ex-post forensics …

An Examination and Comparison of Fairness of Face and Ocular Recognition Across Gender at NIR Spectrum