Building an Adversarial Attack Hat to Fool Facial Recognition
The use of deep learning for human identification and object detection is becoming increasingly prevalent in the surveillance industry. These systems are trained to identify human bodies or faces with a high degree of accuracy. However, there have been successful attempts to fool such systems using techniques known as adversarial attacks. This paper presents an adversarial attack on facial recognition systems using infrared light. The aim of this research is to exploit the physical weaknesses of deep neural networks; by demonstrating these weaknesses, we hope the findings will be used to improve the training of models for object recognition. A review of the research on infrared light and facial recognition is presented within this paper, along with a detailed analysis of the current design phase and future steps of the project, including initial testing of the device. Challenges are explored and evaluated so that the project's deliverables remain consistent with its timeline. The project specifications may change over time based on the outcomes of the testing stages.
Email Address of Submitting Author: morgan.firstname.lastname@example.org
Submitting Author's Institution: Queensland University of Technology