SwinAnomaly: Real-Time Video Anomaly Detection Using Video Swin Transformer and SORT
Arpit Bajgoti
Department of Computer Science and Engineering, Maharaja Surajmal Institute of Technology
Rishik Gupta
Department of Computer Science and Engineering, Maharaja Surajmal Institute of Technology
Prasanalakshmi Balaji
Department of Computer Science, College of Computer Science, King Khalid University

Corresponding Author: [email protected]
Rinky Dwivedi
Department of Computer Science and Engineering, Maharaja Surajmal Institute of Technology
Meena Siwach
Department of Information Technology, Maharaja Surajmal Institute of Technology
Deepak Gupta
Department of Computer Science and Engineering, Maharaja Agrasen Institute of Technology

Abstract

Detecting anomalous events in videos is a challenging task due to their infrequent and unpredictable nature in real-world scenarios. In this paper, we propose SwinAnomaly, a video anomaly detection approach based on a conditional GAN-based autoencoder with feature extractors based on Swin Transformers. Our approach encodes spatiotemporal features from a sequence of video frames using a 3D encoder and upsamples them to predict a future frame using a 2D decoder. We utilize patch-wise mean squared error and Simple Online and Real-time Tracking (SORT) for real-time anomaly detection and tracking. Our approach outperforms existing prediction-based video anomaly detection methods and offers flexibility in localizing anomalies through several parameters. Extensive testing shows that SwinAnomaly achieves state-of-the-art performance on public benchmarks, demonstrating the effectiveness of our approach for real-world video anomaly detection. Furthermore, our proposed approach has the potential to enhance public safety and security in various applications, including crowd surveillance, traffic monitoring, and industrial safety.
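The patch-wise mean squared error mentioned above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the frames are treated as single-channel arrays, and the patch size (16) and threshold are hypothetical parameters chosen for the example. The idea is that comparing the predicted future frame with the actual frame patch by patch both scores the frame and coarsely localises where the anomaly occurred.

```python
import numpy as np

def patchwise_mse(pred, actual, patch=16):
    """Return the MSE between two frames for each non-overlapping patch.

    pred, actual: (H, W) grayscale frames; H and W are assumed to be
    multiples of `patch` (a simplifying assumption for this sketch).
    """
    H, W = pred.shape
    scores = np.zeros((H // patch, W // patch))
    for i in range(H // patch):
        for j in range(W // patch):
            p = pred[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
            a = actual[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
            scores[i, j] = np.mean((p - a) ** 2)
    return scores

def anomalous_patches(pred, actual, patch=16, thresh=0.1):
    """Indices of patches whose prediction error exceeds the threshold.

    A frame would be flagged anomalous when this is non-empty; the
    returned (row, col) patch indices give coarse localisation, which
    could then seed a tracker such as SORT.
    """
    scores = patchwise_mse(pred, actual, patch)
    return np.argwhere(scores > thresh)
```

A frame whose prediction matches reality everywhere yields no flagged patches; a localised discrepancy (e.g. an unexpected object) raises the error only in the patches it covers, which is what makes the score usable for localisation rather than just frame-level detection.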
Submitted to TechRxiv: 22 Mar 2024
Published in TechRxiv: 29 Mar 2024