
Non-linear Neurons with Human-like Apical Dendrite Activations
  • Mariana-Iuliana Georgescu
  • Radu Tudor Ionescu (University of Bucharest, corresponding author)
  • Nicolae-Catalin Ristea
  • Nicu Sebe

Abstract

In order to classify linearly non-separable data, neurons are typically organized into multi-layer neural networks equipped with at least one hidden layer. Inspired by recent discoveries in neuroscience, we propose a new neuron model along with a novel activation function that enables learning non-linear decision boundaries using a single neuron. We show that a standard neuron followed by the novel apical dendrite activation (ADA) can learn the XOR logical function with 100% accuracy. Furthermore, we conduct experiments on three benchmark data sets from computer vision and natural language processing, namely Fashion-MNIST, UTKFace and MOROCO, showing that the ADA and leaky ADA functions provide superior results to Rectified Linear Units (ReLU) and leaky ReLU for various neural network architectures, e.g. multi-layer perceptrons (MLPs) with one or two hidden layers and convolutional neural networks (CNNs) such as LeNet, VGG, ResNet and the character-level CNN. We also obtain further improvements when we replace the standard neuron model with our pyramidal neuron with apical dendrite activations (PyNADA).
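
To illustrate why a single neuron with a non-monotonic activation can separate XOR, the sketch below uses a bump-shaped function, max(0, z)·exp(1 − z), as a stand-in for ADA (the exact ADA parameterization is given in the paper, not in this abstract), together with hand-picked weights rather than trained ones. It is a minimal demonstration of the decision boundary, not the authors' implementation.

import numpy as np

def bump_activation(z):
    # Illustrative bump-shaped activation (a stand-in for ADA):
    # zero for z <= 0, peaks at z = 1, then decays toward zero.
    return np.maximum(0.0, z) * np.exp(1.0 - z)

# XOR inputs and targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])

# Hand-picked weights for a single neuron with pre-activation w·x + b:
# (0,0) -> z = -4 (output 0), (0,1) and (1,0) -> z = 1 (peak, output 1),
# (1,1) -> z = 6 (past the peak, output close to 0).
w = np.array([5.0, 5.0])
b = -4.0

outputs = bump_activation(X @ w + b)
predictions = (outputs > 0.5).astype(int)

print(outputs)      # approximately [0.00 1.00 1.00 0.04]
print(predictions)  # [0 1 1 0], matching the XOR targets

Because the activation rises and then falls, the single neuron can assign high outputs only to the middle band of pre-activation values, which is exactly what XOR requires; a monotonic activation such as ReLU cannot do this with one neuron.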