Facial Privacy Preservation Using FGSM and Universal Perturbation Attacks
Recent research has established that soft-biometric attributes such as age, gender, and race can be deduced from an individual's face image with high accuracy. Many techniques have been proposed to protect user privacy, such as applying visible distortions to images, manipulating the original image with new face attributes, and face swapping. Although these techniques achieve the goal of user privacy by fooling face recognition models, they do not help users who want to upload their original images without visible distortion or manipulation. The objective of this work is to protect the privacy of sensitive or personal data in face images by introducing minimal pixel-level distortions, generated with white-box and black-box perturbation algorithms, that fool AI models while preserving the integrity of the image, so that it appears unchanged to the human eye.
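To make the white-box idea concrete, the following is a minimal sketch of the Fast Gradient Sign Method (FGSM) on a toy linear softmax classifier. The classifier, its weights, and the epsilon value are illustrative assumptions, not the face-recognition models or parameters used in this work; the point is only that the perturbation is bounded by epsilon per pixel, so the image stays visually unchanged.

```python
import numpy as np

def fgsm_perturb(W, b, x, y, epsilon=0.03):
    """FGSM on a toy linear softmax classifier (illustrative, not the
    paper's face model): step each pixel by epsilon in the direction of
    the sign of the loss gradient, then clip back to the valid range.

    W: (classes, features) weight matrix, b: (classes,) bias,
    x: flattened image in [0, 1], y: true class index.
    """
    # Forward pass: softmax probabilities.
    logits = W @ x + b
    p = np.exp(logits - logits.max())
    p /= p.sum()
    # Gradient of cross-entropy loss w.r.t. the input x.
    onehot = np.zeros_like(p)
    onehot[y] = 1.0
    grad_x = W.T @ (p - onehot)
    # FGSM step: bounded by epsilon in the L-infinity norm.
    return np.clip(x + epsilon * np.sign(grad_x), 0.0, 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.normal(size=(2, 12))      # toy 2-class classifier
    b = np.zeros(2)
    x = rng.uniform(size=12)          # stand-in for a flattened face image
    x_adv = fgsm_perturb(W, b, x, y=0)
    print(np.max(np.abs(x_adv - x)))  # never exceeds epsilon
```

Because each pixel moves by at most epsilon, a small epsilon yields a perturbation that is imperceptible to the human eye yet can flip the model's prediction; the universal-perturbation variant instead seeks a single such perturbation that works across many images.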