Adversarial AI threats to the digitalization of Nuclear Power Plants
The ongoing digitalization of nuclear power plants (NPPs) involves collecting sensor data in digital formats and analyzing it with AI-based techniques. The planned and existing network architectures used by U.S. nuclear facilities, particularly in high-security zones near the reactor cores, typically rely on a private intranet that connects multiple computers in a peer-to-peer (P2P) or similar configuration. These intranets are considered highly secure because they are isolated from the outside world (the Internet) and use data diodes to enforce one-way data flow. These facilities additionally employ physical security, authorized access controls, pre-screened employees, antivirus software, and supply-chain verification. While such "air-gapped" systems and their security frameworks are effective against traditional threats such as malware, ransomware, and trojans, they may overlook a growing vulnerability in the AI models operating inside the air gap. AI models or decision systems used within nuclear facilities collect sensor data over the secure network and make critical decisions. Despite the network's robust security measures, attacks aimed at the models themselves, such as trojaned-model (TrojAI-style) attacks, evasion attacks, backdoor attacks, and poisoned pre-trained models, pass unnoticed by conventional virus scanners. These attacks exploit vulnerabilities intrinsic to AI models and are difficult to detect without domain-specific knowledge of the model and its data. Because the security of nuclear power plants is paramount, the AI models used across different sectors of these facilities must be proactively scanned and monitored. Unfortunately, no established framework currently exists to monitor and scan the behaviors and architectures of AI models, and this gap constitutes a significant vulnerability for nuclear power plants.
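The evasion threat described above can be illustrated with a minimal sketch: for a linear classifier, the fast-gradient-sign method (FGSM) reduces to stepping each input feature against the sign of its weight, so a small perturbation to a sensor reading can flip a critical decision. All names, weights, and readings below are hypothetical, not drawn from any actual NPP system.

```python
# Illustrative FGSM-style evasion attack on a toy linear classifier.
# Weights and sensor values are hypothetical stand-ins.

def predict(w, b, x):
    """Linear decision: 1 (e.g. 'normal') if w.x + b > 0, else 0 ('alarm')."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

def fgsm_perturb(w, x, eps):
    """For a linear model the loss gradient w.r.t. x is proportional to w,
    so the FGSM step x' = x - eps * sign(w) drives the score toward zero."""
    sign = lambda v: (v > 0) - (v < 0)
    return [xi - eps * sign(wi) for wi, xi in zip(w, x)]

w = [0.8, -0.5, 0.3]   # hypothetical trained weights
b = -0.1
x = [0.6, 0.2, 0.4]    # a benign sensor reading, classified as 'normal'

assert predict(w, b, x) == 1
x_adv = fgsm_perturb(w, x, eps=0.3)
print(predict(w, b, x_adv))  # prints 0: the perturbed reading flips the decision
```

A signature-based virus scanner sees nothing here: the model file and the code are unmodified, and only the input is slightly perturbed, which is why detecting such attacks requires model- and data-aware tooling.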
To ensure the comprehensive security of nuclear facilities, it is necessary to address this gap by developing specialized frameworks and mechanisms to monitor and assess the security of AI models. These frameworks should be capable of detecting and mitigating adversarial attacks targeting AI models, providing an additional layer of protection alongside existing security measures. By proactively addressing this emerging threat, we can enhance the overall security posture of nuclear power plants and better safeguard against potential risks.
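One building block such a framework could include is an integrity baseline for deployed model artifacts: record a cryptographic digest of each approved model and flag any deviation. This is a minimal sketch under that assumption; the function names and byte strings are hypothetical, and a real framework would pair this with behavioral scanning, since a hash check cannot catch a model poisoned before approval.

```python
# Hash-based integrity scan for deployed AI model artifacts (sketch).
import hashlib

def fingerprint(model_bytes: bytes) -> str:
    """SHA-256 digest of a serialized model, recorded at approval time."""
    return hashlib.sha256(model_bytes).hexdigest()

def scan(model_bytes: bytes, approved_digest: str) -> bool:
    """Return True if the deployed model matches its approved baseline;
    any mismatch is a potential tampering event (e.g. a backdoored swap)."""
    return fingerprint(model_bytes) == approved_digest

approved = b"\x00approved-model-weights"   # stand-in for real serialized weights
baseline = fingerprint(approved)

assert scan(approved, baseline)               # unmodified model passes
assert not scan(approved + b"\x01", baseline)  # any byte change is flagged
```

This layer detects post-deployment tampering inside the air gap; detecting models that were trojaned or poisoned upstream would require the complementary behavioral and architectural scanning the proposed framework calls for.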
Email Address of Submitting Author: kgupta@cau.edi
Submitting Author's Institution: Clark Atlanta University
Submitting Author's Country: United States of America