On Neuron Activation Pattern and Applications
  • Ziping Jiang
  • Plamen Angelov
  • Dmitry Kangin
  • Zhaonian Zhang
  • Richard Jiang


Corresponding Author: [email protected]


Abstract

As deep learning applications are deployed across diverse areas, the explainability of neural networks is becoming increasingly important. Besides being desirable in its own right, explainability often helps improve the performance of deep learning models. In this work, we introduce float neurons and fixed neurons to describe neuron-level stability in a network, based on the activation pattern of its neurons for a given input. With the proposed concepts, we quantify the expressive ability and robustness of a neural network with a neuron entropy metric and illustrate their relationship by decomposing the computational graph of the network. We show theoretically that networks with better generalization have more diverse activation patterns across the input space, which results in higher neuron entropy globally. On the other hand, the predictions of a neural network are more easily affected by perturbation when there are locally more float neurons, which respond with additional impulses to local stimuli. Empirically, we show that the proposed analytical framework can be applied to downstream tasks, including network pruning and randomized smoothing of network predictions.
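The abstract does not give the paper's formal definitions, but the core objects it names can be illustrated concretely. Below is a minimal sketch, assuming (as is common for ReLU networks) that a neuron's "activation pattern" is its binary on/off state per input, and that "neuron entropy" is the Shannon entropy of that state over a set of inputs. The network, weights, and function names here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-hidden-layer ReLU network with random weights (illustration only).
W = rng.normal(size=(8, 4))  # 8 hidden neurons, 4 input features
b = rng.normal(size=8)

def activation_pattern(x):
    """Binary on/off pattern of the hidden ReLU layer for input x.

    A neuron counts as 'active' (1) when its pre-activation is positive.
    """
    return (W @ x + b > 0).astype(int)

def neuron_entropy(X):
    """Per-neuron Shannon entropy of the activation state over inputs X.

    Under this reading, a neuron that is always on or always off across X
    ('fixed') has entropy 0; a neuron that flips between states ('float')
    has entropy up to 1 bit.
    """
    patterns = np.array([activation_pattern(x) for x in X])  # shape (n, 8)
    p = patterns.mean(axis=0)  # activation frequency of each neuron
    eps = 1e-12                # guard against log2(0)
    return -(p * np.log2(p + eps) + (1 - p) * np.log2(1 - p + eps))

X = rng.normal(size=(1000, 4))
H = neuron_entropy(X)
print(H.round(3))  # one entropy value per hidden neuron
```

In this toy reading, globally higher values of `H` correspond to more diverse activation patterns across the input space, while neurons whose entropy is high in a small neighbourhood of an input are the "float" neurons the abstract associates with sensitivity to perturbation.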
26 Dec 2023: Submitted to TechRxiv
02 Jan 2024: Published in TechRxiv