ABRobOcular: Adversarial benchmarking and robustness analysis of datasets and tools for ocular-based user recognition

Abstract

Ocular biometrics, which leverages the unique traits of the eye region, has gained significant adoption for identity verification and healthcare applications, including integration into products by tech giants such as Meta and Apple. Despite this widespread deployment, the vulnerability of ocular biometrics to sophisticated adversarial attacks remains inadequately addressed in the existing literature, with most security evaluations focusing on traditional physical spoofing rather than digital manipulation. Adversarial attacks use digitally altered data to deceive deep learning models into producing incorrect outputs. We investigate white-box and black-box adversarial attacks against ocular-based user recognition algorithms, selected for their prevalence in the literature, computational feasibility, and demonstrated effectiveness in related biometric domains. This research addresses critical gaps in the adversarial robustness of ocular biometrics for high-security applications. We benchmark the impact of five popular white-box adversarial attacks (FGSM, BIM, MIM, PGD, and CW) together with nine corresponding defenses, and four black-box attacks (Patch Occlusion, Monocle Blending, Transfer-Based, and GAN-Based attacks) alongside four defense strategies, including our proposed Strip and Patch Augmentations, across multiple datasets. Our experiments reveal that white-box attacks achieve average attack success rates of about 40%–50% against the ocular verification system, while black-box Monocle Blending attacks expose vulnerabilities with success rates ranging from 44.89% to 99.58%. Our proposed unified defenses, particularly Strip Augmentation, prove highly effective, reducing attack success rates by 75–97% against Patch Occlusion attacks and by up to 92% against white-box attacks such as BIM. To the best of our knowledge, this is the first comprehensive study of its kind. To facilitate further research, we release the adversarially attacked ocular image datasets, along with the evaluation code, as the ABRobOcular library, providing a valuable resource for future work in the field.
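For context, FGSM, the simplest of the white-box attacks benchmarked above, perturbs an input by a single epsilon-bounded step along the sign of the loss gradient. The following is a minimal PyTorch sketch of that idea, not the ABRobOcular API; `model`, `loss_fn`, and `epsilon` are illustrative placeholders.

```python
import torch

def fgsm_attack(model, loss_fn, image, label, epsilon=8 / 255):
    """One-step FGSM: nudge the input along the sign of the loss
    gradient, bounded by epsilon in the L-infinity norm."""
    image = image.clone().detach().requires_grad_(True)
    loss = loss_fn(model(image), label)
    loss.backward()
    # x_adv = x + epsilon * sign(grad_x L(x, y)), clipped to the valid pixel range
    adv_image = image + epsilon * image.grad.sign()
    return adv_image.clamp(0.0, 1.0).detach()
```

Iterative attacks such as BIM and PGD repeat this step with a smaller step size, projecting back into the epsilon-ball after each update.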

Project Poster
"}