CNNs Improve Decoding of Selective Attention to Speech in Cochlear Implant Users
  • Constantin Jehn,
  • Adrian Kossmann,
  • Niki Katerina Vavatzanidis,
  • Anja Hahne,
  • Tobias Reichenbach
Corresponding Author: Constantin Jehn ([email protected])

Abstract

Understanding speech in the presence of background noise, such as competing speech streams, is a difficult problem for people with hearing impairment, and in particular for users of cochlear implants (CIs). To improve their listening experience, auditory attention decoding (AAD) aims to identify the target speaker of a listener from electroencephalography (EEG) and then use this information to steer an auditory prosthesis towards that speech signal. In normal-hearing individuals, deep neural networks (DNNs) have been shown to improve AAD compared to simpler linear models. AAD has also been shown to be feasible in CI users with linear models; however, it has not yet been demonstrated that DNNs can yield enhanced decoding accuracies for this patient group. Here we show that attention decoding in CI users can be significantly improved through the use of a convolutional neural network (CNN). To this end, we first collected an EEG dataset on selective auditory attention from 25 CI users and then implemented both a linear model and a CNN for attention decoding. The CNN outperformed the linear model across all considered decision window sizes, ranging from 1 s to 60 s. Combined with a support vector machine (SVM) as a trainable classifier, the CNN decoder achieved a maximal mean decoding accuracy of 74% at the population level for a decision window of 60 s. Our findings illustrate that the progress in AAD among normal-hearing participants, facilitated by the integration of DNNs, extends to CI users.
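To make the described pipeline concrete, the sketch below shows one way a CNN feature extractor followed by a trainable SVM classifier could be wired up for windowed attention decoding. This is a minimal illustration, not the authors' architecture: the channel count (64), window length (1 s at 128 Hz), layer sizes, and the use of an untrained network on synthetic data are all assumptions made for the example.

```python
# Illustrative CNN + SVM pipeline for auditory attention decoding (AAD).
# Hyperparameters and data are placeholders; in practice the CNN would be
# trained on real EEG windows before its features are fed to the SVM.
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVC

N_CHANNELS = 64    # assumed EEG channel count
WIN_SAMPLES = 128  # assumed 1 s decision window sampled at 128 Hz

class AADConvNet(nn.Module):
    """Small CNN mapping one EEG window (channels x time) to a feature vector."""
    def __init__(self, n_channels: int = N_CHANNELS, n_features: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=9, padding=4),  # temporal filters
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # pool activations over the time axis
            nn.Flatten(),
            nn.Linear(32, n_features),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (batch, channels, time)
        return self.net(x)

# Synthetic stand-in data: EEG windows labelled with the attended speaker (0/1).
rng = np.random.default_rng(0)
X = torch.from_numpy(
    rng.standard_normal((200, N_CHANNELS, WIN_SAMPLES)).astype("float32")
)
y = rng.integers(0, 2, size=200)

model = AADConvNet()
with torch.no_grad():  # feature extraction only; no training in this sketch
    feats = model(X).numpy()

# Trainable SVM classifier on top of the CNN features.
clf = SVC(kernel="rbf").fit(feats[:150], y[:150])
print(f"held-out decoding accuracy: {clf.score(feats[150:], y[150:]):.2f}")
```

On synthetic labels this yields chance-level accuracy, as expected; the point is only the plumbing: per-window CNN features, a separately trained classifier, and a decision made once per window, so that longer decision windows trade decoding latency for accuracy.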
Submitted to TechRxiv: 23 May 2024
Published in TechRxiv: 30 May 2024