TechRxiv

Building an Adversarial Attack Hat to Fool Facial Recognition

preprint
posted on 2021-02-09, 18:53, authored by Morgan Frearson

The use of deep learning for human identification and object detection is becoming ever more prevalent in the surveillance industry. These systems are trained to identify human bodies or faces with a high degree of accuracy. However, there have been successful attempts to fool them using techniques known as adversarial attacks. This paper presents an adversarial attack on facial recognition systems using infrared light, exploiting the physical weaknesses of deep neural networks. This demonstration of weakness is offered in the hope that the research will be used to improve training models for object recognition. The paper presents a research outline on infrared light and facial recognition, together with a detailed analysis of the current design phase and the future steps of the project, including initial testing of the device. Challenges are explored and evaluated so that the project's deliverables remain consistent with its timeline. The project specifications may change over time based on the outcomes of the testing stages.
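For readers unfamiliar with the class of attack the abstract refers to, the sketch below shows a generic digital adversarial perturbation (the fast gradient sign method) in PyTorch. This is an illustration of the general concept only, not the paper's physical infrared attack; the `model`, `image`, and `label` inputs are assumed placeholders for any trained classifier and a correctly labeled input.

```python
# Minimal FGSM sketch: nudge an input in the direction that increases the
# classifier's loss, producing a small perturbation that can flip the prediction.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Return an adversarially perturbed copy of `image`.

    model   -- any differentiable classifier returning logits
    image   -- batched input tensor with values in [0, 1]
    label   -- tensor of true class indices
    epsilon -- perturbation magnitude per pixel
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the sign of the input gradient, then clamp to a valid pixel range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

Physical attacks like the infrared approach described here pursue the same goal, but must realize the perturbation with light projected onto the face rather than by editing pixels directly.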

History

Email Address of Submitting Author

morgan.frearson@connect.qut.edu.au

Submitting Author's Institution

Queensland University of Technology

Submitting Author's Country

Australia
