Interpretable Neural Networks and Their Application to Inferring Inter-well Connectivity
Posted 2020-04-15 (GMT)
The demand for understandable and accountable machine learning models continues to grow. In this paper, we propose a sparsity-based interpretable neural network model and a constrained interpretable neural network model. Both are easier to interpret than standard networks, providing a more accurate and comprehensive overview of the relationships between the inputs and the outputs. We use effective evaluation measures to assess the contribution of each input to each output. Clear interpretations of the learned models are obtained, along with intuitive heat maps that visualize the connection weights. Furthermore, the proposed methods are applied to infer the inter-well connectivity between injectors and producers in reservoir engineering.
After training the networks on Water Injection Rate (WIR) and Liquid Production Rate (LPR) data, the reservoir connectivity is efficiently characterized with dynamic parameters. To our knowledge, this is the first work to apply dedicated interpretable neural networks to this problem. The empirical results demonstrate the effectiveness of the proposed methods and validate their interpretations.
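The core idea can be sketched in a minimal form. The following is a hypothetical illustration (not the paper's actual architecture or data): a single linear layer mapping injector rates to producer rates, trained with an L1 sparsity penalty via proximal gradient descent, after which the magnitudes of the learned weights are read as an injector-to-producer connectivity map. All names, dimensions, and the synthetic data are assumptions for the sketch.

```python
import numpy as np

# Hypothetical sketch of a sparsity-based interpretable model:
# a single linear layer y = x @ W with an L1 penalty on W,
# trained by proximal gradient descent (ISTA-style soft-thresholding).
# Inputs stand in for injector rates (WIR), outputs for producer
# rates (LPR); |W[i, j]| is read as the connectivity from
# injector i to producer j and could be shown as a heat map.

rng = np.random.default_rng(0)
n_inj, n_prod, n_t = 4, 3, 200

# Synthetic data with a sparse ground-truth connectivity pattern.
W_true = np.zeros((n_inj, n_prod))
W_true[0, 0] = 0.9
W_true[1, 2] = 0.7
W_true[3, 1] = 0.5
X = rng.standard_normal((n_t, n_inj))            # standardized injection-rate features
Y = X @ W_true + 0.01 * rng.standard_normal((n_t, n_prod))

W = np.zeros((n_inj, n_prod))
lr, lam = 0.01, 0.05                             # step size, L1 strength
for _ in range(2000):
    grad = X.T @ (X @ W - Y) / n_t               # gradient of mean squared loss
    W -= lr * grad
    # Soft-threshold: proximal step for the L1 penalty, drives small weights to 0.
    W = np.sign(W) * np.maximum(np.abs(W) - lr * lam, 0.0)

connectivity = np.abs(W)                         # input-to-output contribution map
```

Because the L1 step zeroes out weak weights, the surviving entries of `connectivity` align with the true injector-producer links, which is what makes the weight matrix directly interpretable.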