Error Bounds for a Matrix-Vector Product Approximation with Deep ReLU Neural Networks
Abstract—Inspired by the depth and breadth of developments in the theory of deep learning, we pose these fundamental questions: can we accurately approximate an arbitrary matrix-vector product using deep rectified linear unit (ReLU) feedforward neural networks (FNNs)? If so, can we bound the resulting approximation error? To answer these questions, we derive error bounds in Lebesgue and Sobolev norms for a matrix-vector product approximation with deep ReLU FNNs. Since a matrix-vector product models several problems in wireless communications and signal processing; network science and graph signal processing; and network neuroscience and brain physics, we discuss various applications motivated by an accurate matrix-vector product approximation with deep ReLU FNNs. To this end, the derived error bounds offer theoretical insight and guarantees for the development of algorithms based on deep ReLU FNNs.
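To make the approximated object concrete, the following minimal NumPy sketch (not taken from the paper; the matrix A, the input x, and all weight names are hypothetical) shows how a one-hidden-layer ReLU FNN with exactly chosen weights reproduces a matrix-vector product via the identity x = ReLU(x) - ReLU(-x). The paper's error bounds concern approximation in Lebesgue and Sobolev norms when such exact weights are not assumed; this sketch only illustrates the target map x ↦ Ax realized as a ReLU network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: a fixed matrix A and an input vector x.
A = rng.standard_normal((4, 6))   # m x n matrix
x = rng.standard_normal(6)        # input in R^n

relu = lambda z: np.maximum(z, 0.0)

# One hidden ReLU layer realizing x = ReLU(x) - ReLU(-x):
# W1 stacks +I and -I; W2 applies A to the recovered input.
n = x.size
W1 = np.vstack([np.eye(n), -np.eye(n)])   # shape (2n, n)
W2 = np.hstack([A, -A])                   # shape (m, 2n)

y_fnn  = W2 @ relu(W1 @ x)   # output of the ReLU FNN
y_true = A @ x               # exact matrix-vector product

# Zero up to floating-point rounding, since
# A @ ReLU(x) - A @ ReLU(-x) = A @ (ReLU(x) - ReLU(-x)) = A @ x.
print(np.linalg.norm(y_fnn - y_true))
```

The construction uses 2n hidden units per input dimension; the interesting regime, and the one the error bounds speak to, is what happens when the width, depth, or weight precision of the network is constrained.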
Funding
National Institute of Standards and Technology (NIST)
Email Address of Submitting Author
tilahun.getu@nist.gov

ORCID of Submitting Author
0000-0002-0759-4118

Submitting Author's Institution
National Institute of Standards and Technology (NIST), Gaithersburg, MD, USA and École de Technologie Supérieure (ÉTS), Montreal, QC, Canada

Submitting Author's Country
United States of America