Error Bounds for a Matrix-Vector Product Approximation with Deep ReLU Neural Networks
Tilahun Getu

Abstract

Inspired by the depth and breadth of developments in the theory of deep learning, we pose the following fundamental questions: Can we accurately approximate an arbitrary matrix-vector product using deep rectified linear unit (ReLU) feedforward neural networks (FNNs)? If so, can we bound the resulting approximation error? To answer these questions, we derive error bounds in Lebesgue and Sobolev norms for the approximation of a matrix-vector product with deep ReLU FNNs. Since a matrix-vector product models numerous problems in wireless communications and signal processing; network science and graph signal processing; and network neuroscience and brain physics, we discuss various applications motivated by an accurate matrix-vector product approximation with deep ReLU FNNs. To this end, the derived error bounds offer theoretical insight and guarantees for the development of algorithms based on deep ReLU FNNs.
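
As a concrete illustration (not the paper's construction), note that the identity x = ReLU(x) − ReLU(−x) lets a two-layer ReLU FNN represent a linear map x ↦ Ax exactly. The minimal sketch below, assuming a random matrix A and using an empirical sup-norm error over sampled inputs as a stand-in for the Lebesgue-norm error bounds studied in the paper, builds such a network and measures its error; the names relu_fnn, W1, and W2 are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

m, n = 4, 6
A = rng.standard_normal((m, n))  # arbitrary matrix whose action we approximate

def relu(z):
    return np.maximum(z, 0.0)

# Two-layer ReLU FNN realizing x -> A x exactly, via x = ReLU(x) - ReLU(-x):
#   hidden layer: W1 = [I; -I], so h = ReLU(W1 x) = [ReLU(x); ReLU(-x)]
#   output layer: W2 = [A, -A], so W2 h = A ReLU(x) - A ReLU(-x) = A x
W1 = np.vstack([np.eye(n), -np.eye(n)])  # shape (2n, n)
W2 = np.hstack([A, -A])                  # shape (m, 2n)

def relu_fnn(x):
    return W2 @ relu(W1 @ x)

# Empirical sup-norm error over random test points (a crude surrogate for
# the Lebesgue-norm approximation error); exact representation means the
# error is at floating-point level.
X = rng.standard_normal((1000, n))
errors = [np.linalg.norm(relu_fnn(x) - A @ x, ord=np.inf) for x in X]
print(f"max |FNN(x) - Ax|_inf over test set: {max(errors):.3e}")
```

For this exact construction the printed error is on the order of machine precision; the nontrivial regime analyzed in the paper arises when the network's depth, width, or weight precision is constrained, so that only an approximation of the product is achievable and the error must be bounded.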