Explainable Intelligent Fault Diagnosis for Nonlinear Dynamic Systems: From Unsupervised to Supervised Learning
  • Hongtian Chen,
  • Zhigang Liu,
  • Cesare Alippi,
  • Biao Huang,
  • Derong Liu
Corresponding Author: Hongtian Chen, University of Alberta, [email protected]

Abstract

The increased complexity and intelligence of automation systems require the development of intelligent fault diagnosis (IFD) methodologies. By relying on the concept of a suspected space, this study develops explainable data-driven IFD approaches for nonlinear dynamic systems. More specifically, we parameterize nonlinear systems through a generalized kernel representation used for system modeling and the associated fault diagnosis. An important result obtained is a unified form of kernel representations, applicable to both unsupervised and supervised learning. More importantly, through a rigorous theoretical analysis we discover the existence of a bridge (i.e., a bijective mapping) between some supervised and unsupervised learning-based entities. Notably, by exploiting this bridge, the designed IFD approaches achieve the same performance. To better understand the results obtained, unsupervised and supervised neural networks are chosen as the learning tools to identify generalized kernel representations and design the IFD schemes; an invertible neural network is then employed to build the bridge between them. This study is a perspective article whose contribution lies in proposing and detailing the fundamental concepts of explainable intelligent learning methods, contributing to system modeling and data-driven IFD design for nonlinear dynamic systems.
Published in IEEE Transactions on Neural Networks and Learning Systems, 2022, pp. 1-14. DOI: 10.1109/TNNLS.2022.3201511
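
To make the idea of the bridge more concrete, below is a minimal, illustrative sketch (written in PyTorch, not taken from the paper) of an affine-coupling invertible neural network, the kind of architecture that can realize a bijective mapping between two feature spaces of equal dimension, such as features produced by an unsupervised model and those produced by a supervised one. All class names, dimensions, and hyperparameters here are assumptions chosen for illustration only.

# Minimal illustrative sketch (assumed PyTorch implementation; not the authors' code):
# an affine-coupling invertible network realizing an exactly invertible (bijective)
# mapping between two feature spaces of the same dimension.
import torch
import torch.nn as nn


class AffineCoupling(nn.Module):
    """One coupling block: y1 = x1,  y2 = x2 * exp(s(x1)) + t(x1)."""

    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.half = dim // 2
        # Small network predicting scale s and shift t from the first half of the input.
        self.net = nn.Sequential(
            nn.Linear(self.half, hidden), nn.Tanh(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.net(x1).chunk(2, dim=1)
        y2 = x2 * torch.exp(s) + t           # invertible affine transform of x2
        return torch.cat([x1, y2], dim=1)

    def inverse(self, y: torch.Tensor) -> torch.Tensor:
        y1, y2 = y[:, :self.half], y[:, self.half:]
        s, t = self.net(y1).chunk(2, dim=1)
        x2 = (y2 - t) * torch.exp(-s)        # closed-form inverse, no approximation
        return torch.cat([y1, x2], dim=1)


class InvertibleBridge(nn.Module):
    """Stack of coupling blocks acting as a bijective 'bridge' between feature spaces.
    (Practical invertible networks also permute dimensions between blocks; omitted here.)"""

    def __init__(self, dim: int, n_blocks: int = 4):
        super().__init__()
        self.blocks = nn.ModuleList([AffineCoupling(dim) for _ in range(n_blocks)])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for blk in self.blocks:
            x = blk(x)
        return x

    def inverse(self, y: torch.Tensor) -> torch.Tensor:
        for blk in reversed(self.blocks):
            y = blk.inverse(y)
        return y


if __name__ == "__main__":
    bridge = InvertibleBridge(dim=8)
    z_unsupervised = torch.randn(16, 8)       # stand-in for features of an unsupervised model
    z_supervised = bridge(z_unsupervised)     # mapped into the "supervised" feature space
    recovered = bridge.inverse(z_supervised)  # exact inverse mapping
    print(torch.allclose(recovered, z_unsupervised, atol=1e-5))  # expected: True

Because each coupling block has a closed-form inverse, features mapped from one space to the other and back are recovered exactly (up to floating-point error), which is the property a bijective bridge between unsupervised and supervised representations relies on.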