
Interpretability of Fake News Detection Model
  • Likhitha Pulluru,
  • Laxmi Shravani Mamidala,
  • Ramanathan Subramanian,
  • Abhinav Dhall

Corresponding Author: [email protected]



The authenticity of information has been a longstanding issue, with its potential to impact millions of users in the blink of an eye. Recent years have seen growing development of fake news detection models. Model interpretability, especially in the NLP domain, remains challenging, yet it helps in adapting models to various domains. In this study we apply post hoc interpretation techniques, including local and global interpretations using LIME and SHAP, topic modeling, and keyword extraction. We find these to be simple yet powerful techniques for better understanding the tagging behavior of the fake news detection models used in this paper. With the help of these interpretation and data augmentation techniques, we measure model robustness and show that models built on ML algorithms are not robust to covariate shift in the input data. We also derive some of the characteristics that better represent the style of fake tweets.
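To illustrate the kind of local, post hoc interpretation the abstract describes, the following is a minimal LIME-style sketch: it perturbs a single input text by dropping words, queries a classifier on the perturbations, and fits a weighted linear surrogate whose coefficients indicate each word's local contribution. The toy corpus, labels, and model below are hypothetical stand-ins, not the paper's dataset or detection model; the real study uses the `lime` and `shap` libraries.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression, Ridge

# Toy corpus and labels (hypothetical stand-in for the paper's tweet data).
texts = [
    "breaking shocking miracle cure doctors hate",
    "you won't believe this secret trick exposed",
    "officials confirm new policy in press briefing",
    "study published in peer reviewed journal today",
]
labels = [1, 1, 0, 0]  # 1 = fake, 0 = real

vec = TfidfVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(texts), labels)

def lime_style_explain(text, n_samples=500, seed=0):
    """LIME-style local explanation: mask words at random, query the
    model on the perturbed texts, and fit a weighted linear surrogate
    on the binary masks. Coefficients approximate word importance."""
    rng = np.random.default_rng(seed)
    words = text.split()
    masks = rng.integers(0, 2, size=(n_samples, len(words)))
    masks[0] = 1  # keep the unperturbed instance in the sample
    perturbed = [" ".join(w for w, m in zip(words, row) if m) for row in masks]
    probs = clf.predict_proba(vec.transform(perturbed))[:, 1]
    # Weight each perturbation by its similarity to the original
    # (here simply the fraction of words retained).
    weights = masks.mean(axis=1)
    surrogate = Ridge(alpha=1.0).fit(masks, probs, sample_weight=weights)
    # Rank words by the magnitude of their local linear contribution.
    return sorted(zip(words, surrogate.coef_), key=lambda t: -abs(t[1]))

explanation = lime_style_explain("shocking secret cure exposed in new study")
for word, weight in explanation:
    print(f"{word:10s} {weight:+.3f}")
```

Positive coefficients push the surrogate toward the "fake" class locally; the same perturb-and-fit idea underlies the LIME explanations the study reports.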
18 Apr 2024 — Submitted to TechRxiv
24 Apr 2024 — Published in TechRxiv