
Toxic Comment Detection Using Bidirectional Sequence Classifiers
  • Amit Maity ,
  • Rishi More ,
  • Abhijit Patil ,
  • Jay Oza ,
  • Gitesh Kambli
K.J. Somaiya Institute of Technology

Corresponding Author: [email protected]


With the rising surge of online toxicity, automating the identification of abusive language is crucial for improving online discourse. This study proposes a deep learning system that classifies harmful comments across multiple labels using bidirectional Long Short-Term Memory (bi-LSTM) networks. By leveraging context from both directions of a sequence, the bi-LSTM model achieves state-of-the-art performance in classifying subtle forms of toxicity such as threats, insults, identity hate, and obscenity. The model reaches above 95% accuracy on benchmark datasets through rigorous data preprocessing, an optimized neural architecture, and FastText embeddings to handle out-of-vocabulary words. When integrated into online platforms, this technique can automatically filter varying levels of toxicity and thereby promote positive online interactions. The study outlines an end-to-end pipeline that incorporates recent NLP advances and deep contextualized language models to address contemporary challenges in AI-enabled content moderation.
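Two mechanisms mentioned in the abstract can be illustrated concretely: multi-label classification (each toxicity type gets an independent sigmoid output rather than one softmax over mutually exclusive classes) and FastText-style subword units for out-of-vocabulary words. The sketch below is illustrative only, assuming Jigsaw-style label names and standard hyperparameters; it is not the authors' implementation, and the network body (embedding layer plus bi-LSTM) is elided in favor of the output head and subword helper.

```python
import numpy as np

# Assumed Jigsaw-style toxicity labels; the paper names threats, insults,
# identity hate, and obscenity among its categories.
LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.asarray(z, dtype=float)))

def predict_labels(logits, threshold=0.5):
    """Multi-label decision: each label is scored by an independent sigmoid,
    so a comment can be e.g. both 'toxic' and 'insult' at once."""
    probs = sigmoid(logits)
    return {lab: bool(p >= threshold) for lab, p in zip(LABELS, probs)}

def binary_cross_entropy(y_true, logits):
    """Per-label binary cross-entropy, the usual loss for a multi-label head."""
    p = sigmoid(logits)
    y = np.asarray(y_true, dtype=float)
    eps = 1e-12  # numerical guard against log(0)
    return float(-np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps)))

def char_ngrams(word, n_min=3, n_max=6):
    """FastText-style subword units: an OOV word's embedding is built from
    its character n-grams (with boundary markers), so misspellings and
    unseen slang still map to meaningful vectors."""
    w = f"<{word}>"
    return [w[i:i + n] for n in range(n_min, n_max + 1)
            for i in range(len(w) - n + 1)]
```

For example, `predict_labels([2.0, -3.0, 1.5, -4.0, 0.8, -2.0])` flags `toxic`, `obscene`, and `insult` while leaving `threat` off, and `char_ngrams("cat")` yields subwords like `"<ca"` and `"at>"` from which an OOV vector can be averaged.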