Interpreting the Predictions of a Deep Network Built for Early Detection of Covid-19 in X-Ray Images
AI is a proven technology that currently serves many different industries; weather forecasting, recommendation systems, and autonomous cars are a few examples where AI-driven solutions are used successfully. The availability of intensive computing makes it possible to design and develop highly complex deep learning architectures that approach human-level accuracy. For this reason it has become possible to apply AI in the healthcare industry, where accuracy is of utmost importance. The healthcare industry generates various types of Electronic Health Records (EHR), such as patient medical history, hospital administration data, biological data, and radiological data. Such EHR data have huge potential for diagnosing various diseases and potentially avoiding critical risks. AI also contributes significantly to drug discovery, understanding genetic disorders, cancer detection, and much more. Such complex use cases require complex Deep Neural Networks (DNNs). Due to their complex architecture, these DNN models are considered black boxes, and it is difficult to explain their outcomes; trust in such solutions is therefore a major concern. Various techniques have evolved that try to explain the reasoning behind the outcome of a DNN. In this paper, two such explanation techniques, LIME and LRP, are used to explain the predictions made by a custom-built CNN model trained on X-ray images of Covid-19 patients. The main objective of this paper is to present the differences between the explanations produced by LIME and LRP, so that domain experts can analyze the model's predictions and help improve the explanation techniques.
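To make the perturbation-based idea behind LIME concrete, the following is a minimal sketch in NumPy. It is illustrative only: the paper applies the actual LIME library to a CNN trained on X-ray images, whereas here a tiny hypothetical `toy_model` stands in for the classifier so the sketch stays self-contained.

```python
import numpy as np

def toy_model(batch):
    # Hypothetical "classifier": its score is the mean brightness of the
    # top-left 4x4 quadrant of each 8x8 image.
    return batch[:, :4, :4].mean(axis=(1, 2))

def lime_explain(image, model, grid=4, n_samples=200, seed=0):
    """Score each grid cell's contribution to the model output.

    1. Split the image into grid x grid superpixel-like cells.
    2. Sample binary masks that switch cells on or off.
    3. Query the model on the perturbed images.
    4. Fit a locality-weighted linear surrogate; its coefficients
       are the per-cell importance scores.
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape
    ch, cw = h // grid, w // grid
    masks = rng.integers(0, 2, size=(n_samples, grid * grid))
    preds = np.empty(n_samples)
    for i, mask in enumerate(masks):
        perturbed = image.copy()
        for cell, keep in enumerate(mask):
            if not keep:  # "remove" the cell by zeroing it out
                r, c = divmod(cell, grid)
                perturbed[r * ch:(r + 1) * ch, c * cw:(c + 1) * cw] = 0.0
        preds[i] = model(perturbed[None])[0]
    # Locality weights: perturbations close to the original image count more.
    dist = 1.0 - masks.mean(axis=1)
    weights = np.exp(-dist ** 2 / 0.25)
    # Weighted least squares for the linear surrogate (intercept appended).
    X = np.hstack([masks, np.ones((n_samples, 1))])
    sw = np.sqrt(weights)
    coef, *_ = np.linalg.lstsq(X * sw[:, None], preds * sw, rcond=None)
    return coef[:-1].reshape(grid, grid)

image = np.zeros((8, 8))
image[:4, :4] = 1.0  # bright top-left quadrant drives the toy model
scores = lime_explain(image, toy_model)
# The four cells covering the top-left quadrant receive the highest scores.
```

LRP, by contrast, does not perturb the input at all: it propagates the prediction score backward through the network's own layers, redistributing relevance from each layer to the one below until pixel-level relevance values are obtained.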