Chun-Kai Hwang


For classification problems, neural networks are well known for achieving higher accuracy than traditional statistical methods such as logistic regression and discriminant analysis, and they often outperform other algorithms such as decision trees and Bayesian networks. However, the knowledge learned by a neural network is encoded in the hierarchical functional mapping of its structure and in its weight and bias parameters, so its black-box decision process is difficult for people to understand. In this research, we extract probabilistic Boolean classification rules from neural networks. The ruleset model can be tuned to a specified sensitivity by choosing different thresholds. In addition, we compute a weighted importance factor for each attribute that composes the Boolean rules. The weighted importance factor is a number between 0 and 1; a factor of 0 indicates that the corresponding attribute is a noise signal. Hence, low-importance attributes can be filtered out with a given threshold. On linearly and nonlinearly separable simulated datasets, we find that PBCR1 and PBCR2 achieve higher accuracy than neural networks even with a 1/10 training ratio. On UCI machine learning datasets, the AUC of PBCR1 and PBCR2 is slightly lower than that of the neural networks; however, on the accuracy metric for the red wine and white wine datasets, PBCR1 and PBCR2 are nearly identical to the neural networks. The accuracies of PBCR1 and PBCR2 are superior to decision trees (DT) by a statistically significant margin. For the F1 score, PBCR1 and PBCR2 are statistically significantly better than DT on the red wine, white wine, and PID datasets.
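The attribute-filtering idea above can be illustrated with a minimal sketch. All names here (`filter_attributes`, the example factor values) are hypothetical illustrations, not the paper's actual implementation: each attribute carries a weighted importance factor in [0, 1], and attributes whose factor falls below a chosen threshold are dropped from the ruleset.

```python
# Hypothetical sketch of filtering rule attributes by weighted importance
# factor. The function name and example values are assumptions for
# illustration; the paper's actual computation of the factor is not shown.

def filter_attributes(importance: dict[str, float], threshold: float) -> dict[str, float]:
    """Keep attributes whose weighted importance factor meets the threshold."""
    return {attr: w for attr, w in importance.items() if w >= threshold}

# Example: a factor of 0 marks a noise attribute, so 'x3' is dropped
# at any positive threshold.
factors = {"x1": 0.82, "x2": 0.35, "x3": 0.0}
kept = filter_attributes(factors, threshold=0.1)
print(kept)  # prints {'x1': 0.82, 'x2': 0.35}
```

Raising the threshold prunes more attributes, trading rule simplicity against coverage; a similar threshold on rule probability could tune the ruleset to a specified sensitivity.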