Abstract
Protecting the privacy of personal information, including emotions, is
essential, and organizations must comply with relevant regulations to
ensure it. Unfortunately, some organizations fail to respect these
regulations or lack transparency, leaving individuals’ privacy at risk.
These privacy violations often occur when unauthorized organizations
misuse machine learning (ML) technology, such as facial expression
recognition (FER) systems. Therefore, researchers and practitioners must
take action and use ML technology for social good to protect human
privacy. One emerging research area that can help address privacy
violations is the use of adversarial ML for social good. Evasion
attacks, which are used to fool ML systems, can be repurposed to prevent
misused ML technology, such as ML-based FER, from recognizing true
emotions. By leveraging adversarial ML for social good, we can prevent
organizations from violating human privacy by misusing ML technology,
particularly FER systems, and protect individuals’ personal and
emotional privacy. In this work, we propose an approach called Chaining
of Adversarial ML Attacks (CAA) to create a robust attack that fools
misused technology and prevents it from detecting true emotions. To
validate our proposed approach, we conduct extensive experiments using
various evaluation metrics and baselines. Our results show that CAA
significantly contributes to emotional privacy preservation, with the
emotion fool rate increasing in proportion to the chaining length. In
our experiments, the fool rate increases by 48% with each subsequent
chaining stage of the chaining targeted attacks (CTA) while keeping the
perturbations imperceptible (ε = 0.0001).
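
To make the chaining idea concrete, the sketch below shows one possible way to chain targeted evasion attacks against a FER classifier: each stage perturbs the image toward a different (false) emotion label, and the output of one stage becomes the input of the next. This is a minimal PyTorch illustration under our own assumptions, not the paper's CAA implementation: the base attack (a single targeted FGSM step), the function names targeted_fgsm and chain_targeted_attacks, and the pixel-range clamp are illustrative choices; only the chaining structure and the ε value come from the abstract.

import torch
import torch.nn.functional as F

def targeted_fgsm(model, x, target, epsilon):
    # Single targeted FGSM step (illustrative base attack): move x toward
    # being classified as `target` by descending the cross-entropy loss
    # with respect to that target label.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), target)
    loss.backward()
    x_adv = x - epsilon * x.grad.sign()    # targeted: step against the gradient
    return x_adv.clamp(0.0, 1.0).detach()  # assumes pixels normalized to [0, 1]

def chain_targeted_attacks(model, x, target_labels, epsilon=1e-4):
    # Chain several targeted evasion stages: the adversarial output of one
    # stage is fed to the next, each pushing toward a different emotion label.
    x_adv = x
    for target in target_labels:
        x_adv = targeted_fgsm(model, x_adv, target, epsilon)
    return x_adv

# Hypothetical usage: fer_model is any differentiable FER classifier and
# face_batch a tensor of face images; the chained targets are arbitrary
# "false" emotion indices (e.g., for a 7-class FER model).
# x_adv = chain_targeted_attacks(fer_model, face_batch,
#                                [torch.tensor([2]), torch.tensor([5])])

In this sketch, each stage reuses the previous stage's adversarial output as its starting point, which is what allows the cumulative perturbation to remain small per stage (ε = 0.0001) while the fool rate grows with the chaining length.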