Abstract
Unsupervised neural networks (NNs) specialize in mining latent
patterns from unlabeled data in a self-organizing manner. Recently, they
have also been employed as observers for process monitoring, using the
residual signals they generate. However, owing to their multilayer
nonlinear structure, few studies have explained the behavior of
unsupervised NNs or analyzed their monitoring performance.
Interpretability analysis of an NN examines its working principles in
order to guide better network design and achieve improved performance.
Thus, this paper develops explainable residual generators based on
unsupervised NNs, applicable to both the deep autoencoder (DAE) and the
variational autoencoder (VAE). Through Taylor expansion, the residual
deviation caused by the fault signal, known as the fault-affected term,
is proven not to vanish when the Hessian matrix is non-zero. The
consistency between minimizing the NN training loss and achieving
optimal monitoring performance is then established. A new indicator
function is constructed based on a sum martingale, a representative
weakly dependent stochastic process. Freedman's inequality is applied,
for the first time, to characterize the reliability of the learned
thresholds during fault evaluation, which reduces the sample size
required for training. Finally, simulations on a continuous stirred
tank reactor verify the effectiveness of the proposed methods.
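To make the residual-generation idea concrete, the following is a minimal sketch of autoencoder-based monitoring: the residual is the gap between an input and its reconstruction, and a scalar statistic on that residual is compared against a threshold. All specifics here (the random untrained weights, the tanh encoder, the threshold `tau`) are illustrative stand-ins, not the paper's actual DAE/VAE models or learned thresholds.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 6, 2  # input dimension, latent dimension

# Stand-in encoder/decoder weights; in practice these would be trained
# on fault-free data so that reconstruction is accurate in normal operation.
W_e = rng.standard_normal((m, n)) * 0.3
W_d = rng.standard_normal((n, m)) * 0.3

def reconstruct(x):
    """Pass x through the (illustrative) autoencoder: nonlinear encode, linear decode."""
    z = np.tanh(W_e @ x)
    return W_d @ z

def residual(x):
    """Residual signal r = x - x_hat, used as the monitoring signal."""
    return x - reconstruct(x)

def statistic(x):
    """Squared-norm evaluation statistic on the residual."""
    r = residual(x)
    return float(r @ r)

tau = 5.0  # hypothetical threshold; the paper learns thresholds from data
x_normal = rng.standard_normal(n) * 0.1  # near-zero operating point
x_faulty = x_normal + 3.0                # additive fault on every channel

print("normal alarm:", statistic(x_normal) > tau)
print("faulty alarm:", statistic(x_faulty) > tau)
```

Because the decoder output is bounded here, a large additive fault cannot be reconstructed, so the fault-affected term survives in the residual and drives the statistic over the threshold.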