
On Neyman-Pearson optimality of binary neural net classifiers

Preprint
Posted on 2021-06-03, 03:29, authored by Raymond Veldhuis and Dan Zeng

In classical binary statistical pattern recognition, optimality in the Neyman-Pearson sense, achieved by a (log-)likelihood-ratio-based classifier, is often desirable. A drawback of a Neyman-Pearson-optimal classifier is that it requires full knowledge of the (quotient of the) class-conditional probability densities of the input data, which is often unrealistic. The design of neural-net classifiers is data-driven, meaning that no explicit use is made of the class-conditional probability densities of the input data. This paper presents a proof that a neural net can also be trained to approximate a log-likelihood ratio and can therefore be used as a Neyman-Pearson-optimal, prior-independent classifier. Properties of the approximation of the log-likelihood ratio are discussed. Examples of neural nets trained on synthetic data, with known log-likelihood ratios as ground truth, illustrate the results.
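The experiments described in the abstract can be illustrated with a small sketch. The code below is our own illustration, not the authors' code: the network architecture, training settings, and the 1-D Gaussian class-conditional densities are all assumptions chosen so that the exact log-likelihood ratio is known in closed form. It trains a small network with binary cross-entropy on class-balanced synthetic data and compares the learned pre-sigmoid logit to the analytic log-likelihood ratio.

```python
# Hypothetical sketch (not the paper's code): a net trained with binary
# cross-entropy on class-balanced data learns a pre-sigmoid logit that
# approximates the log-likelihood ratio (LLR). Here the class-conditional
# densities are 1-D Gaussians, so the exact LLR is available as ground truth.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic data: x | class 0 ~ N(-1, 1), x | class 1 ~ N(+1, 1), equal priors.
mu0, mu1, sigma = -1.0, 1.0, 1.0
n = 5_000
x0 = torch.randn(n, 1) * sigma + mu0
x1 = torch.randn(n, 1) * sigma + mu1
x = torch.cat([x0, x1])
y = torch.cat([torch.zeros(n, 1), torch.ones(n, 1)])

# Small MLP whose raw output is a logit; BCEWithLogitsLoss applies the sigmoid.
net = nn.Sequential(
    nn.Linear(1, 32), nn.Tanh(),
    nn.Linear(32, 32), nn.Tanh(),
    nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1500):
    opt.zero_grad()
    loss = loss_fn(net(x), y)
    loss.backward()
    opt.step()

# For equal Gaussian variances the exact LLR is linear in x:
#   log p(x|1)/p(x|0) = ((mu1 - mu0) * x + (mu0**2 - mu1**2) / 2) / sigma**2.
with torch.no_grad():
    xt = torch.linspace(-3.0, 3.0, 7).unsqueeze(1)
    llr_true = ((mu1 - mu0) * xt + (mu0**2 - mu1**2) / 2) / sigma**2
    llr_net = net(xt)
    for xi, t, p in zip(xt.squeeze(), llr_true.squeeze(), llr_net.squeeze()):
        print(f"x = {xi:+.1f}   exact LLR = {t:+.3f}   net logit = {p:+.3f}")
```

The key design choice is training on equal numbers of samples per class: the Bayes-optimal logit is the log posterior odds, which equals the LLR plus the log prior ratio, so with balanced classes the learned logit approximates the prior-independent LLR directly. With unbalanced training data the logit would be offset by the log ratio of the training priors.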

History

Email Address of Submitting Author

r.n.j.veldhuis@utwente.nl

ORCID of Submitting Author

https://orcid.org/0000-0002-0381-5235

Submitting Author's Institution

University of Twente, Enschede, the Netherlands

Submitting Author's Country

• Netherlands