Balancing Biases and Preserving Privacy on Balanced Faces in the Wild
  • Joseph Robinson,
  • Yun Fu,
  • Samson Timoner,
  • Yann Henon,
  • Can Qin
Joseph Robinson
Northeastern University

Corresponding Author: [email protected]



There are demographic biases in current facial recognition (FR) models. Our Balanced Faces in the Wild (BFW) dataset serves as a proxy for measuring bias across ethnicity and gender subgroups, allowing FR performance to be characterized per subgroup. We show that performance is non-optimal when a single score threshold is used to determine whether sample pairs are genuine or imposter. Performance varies across subgroups relative to the rating reported for the dataset as a whole, so claims of a specific error rate hold only for populations matching the validation data. We mitigate the imbalanced performance using a novel domain-adaptation learning scheme applied to facial features extracted with state-of-the-art models. This technique not only balances performance across subgroups but also boosts overall performance. A further benefit of the proposed method is that it preserves identity information in the facial features while removing demographic knowledge from the lower-dimensional features. Removing demographic knowledge prevents potential future biases from being injected into decision-making, and it addresses privacy concerns. We explore qualitatively why this works, and we show quantitatively that subgroup classifiers can no longer learn from the features mapped by the proposed method.
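The abstract's first claim, that a single score threshold yields uneven error rates across subgroups, can be illustrated with a small sketch. All scores below are synthetic stand-ins for verification similarities (the shifted imposter distribution for subgroup "B" mimics the kind of skew BFW measures; the numbers are not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical similarity scores for two demographic subgroups.
# Subgroup "B" imposters score higher on average (synthetic values).
imposter = {"A": rng.normal(0.30, 0.10, 20000),
            "B": rng.normal(0.40, 0.10, 20000)}

target_fpr = 0.01  # desired false-positive rate

def threshold_at_fpr(scores, fpr):
    """Accept a pair when score >= threshold; the (1 - fpr) quantile
    of the imposter scores gives roughly the requested FPR."""
    return np.quantile(scores, 1.0 - fpr)

def fpr_at(scores, thr):
    """Fraction of imposter pairs wrongly accepted at this threshold."""
    return float(np.mean(scores >= thr))

# One global threshold fit on the pooled imposter scores ...
pooled = np.concatenate(list(imposter.values()))
thr_global = threshold_at_fpr(pooled, target_fpr)
fpr_global = {k: fpr_at(v, thr_global) for k, v in imposter.items()}

# ... versus one threshold per subgroup.
thr_sub = {k: threshold_at_fpr(v, target_fpr) for k, v in imposter.items()}
fpr_sub = {k: fpr_at(imposter[k], thr_sub[k]) for k in imposter}
```

Under the global threshold, subgroup "B" absorbs far more false accepts than "A" even though the pooled FPR hits the target; per-subgroup thresholds restore roughly the intended FPR for each group, which is the imbalance the paper quantifies on BFW.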
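The idea of stripping demographic knowledge from features while keeping identity information can be illustrated with a deliberately simple linear stand-in for the paper's domain-adaptation scheme: project the features onto the orthogonal complement of the direction separating two subgroup means. The data and the one-axis "demographic offset" below are entirely synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
n, dim = 500, 16

# Synthetic "face embeddings": random identity signal, plus a
# demographic offset injected along axis 0 for subgroup 1.
X = rng.normal(size=(2 * n, dim))
g = np.array([0] * n + [1] * n)  # subgroup labels
X[g == 1, 0] += 4.0

# Direction separating the two subgroup means.
d = X[g == 1].mean(axis=0) - X[g == 0].mean(axis=0)
d_hat = d / np.linalg.norm(d)

def remove_direction(feats, u):
    """Project features onto the orthogonal complement of unit vector u."""
    return feats - np.outer(feats @ u, u)

def linear_acc(feats):
    """Accuracy of a 1-D threshold classifier along d_hat."""
    proj = feats @ d_hat
    return float(np.mean((proj > proj.mean()) == (g == 1)))

acc_demo = linear_acc(X)  # demographic label is easy to read out
X_clean = remove_direction(X, d_hat)

# After removal, every feature has (numerically) zero component along
# d_hat, so a classifier relying on that direction has nothing to use;
# all components orthogonal to d_hat are untouched.
residual = float(np.max(np.abs(X_clean @ d_hat)))
```

This one-direction projection is far weaker than the learned mapping in the paper, which suppresses demographic signal more broadly, but it conveys the same goal: subgroup classifiers lose their signal while the rest of the feature space, carrying identity, is preserved.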
Published in IEEE Transactions on Image Processing, vol. 32, pp. 4365-4377, 2023. DOI: 10.1109/TIP.2023.3282837