TASLP20_flow_based_deep_latent_variable_model.pdf (3.47 MB)

A Flow-Based Deep Latent Variable Model for Speech Spectrogram Modeling and Enhancement

posted on 2020-05-27, 13:44, authored by Aditya Arie Nugraha, Kouhei Sekiguchi, Kazuyoshi Yoshii
This paper describes a deep latent variable model of speech power spectrograms and its application to semi-supervised speech enhancement with a deep speech prior. By integrating two major deep generative models, the variational autoencoder (VAE) and the normalizing flow (NF), in a mutually beneficial manner, we formulate a flexible latent variable model called the NF-VAE that, like the VAE, can extract low-dimensional latent representations from high-dimensional observations and, like the NF, does not need to explicitly represent the distribution of the observations. In this paper, we consider a variant of NF called the generative flow (GF, a.k.a. Glow) and formulate a latent variable model called the GF-VAE. We experimentally show that the proposed GF-VAE captures the fine-structured harmonics of speech spectrograms better than the standard VAE, especially in the high-frequency range. A similar finding holds when the GF-VAE and the VAE are used to generate speech spectrograms from latent variables drawn from the standard Gaussian distribution. Lastly, when these models are used as speech priors for statistical multichannel speech enhancement, the GF-VAE outperforms both the VAE and the GF.
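The core mechanism the abstract alludes to — an invertible, Glow-style transform whose parameters are conditioned on a low-dimensional latent variable — can be illustrated with a minimal NumPy sketch. This is not the paper's implementation; the dimensions, the toy conditioner weights, and the function names are all illustrative assumptions. It shows only the generic building block: an affine coupling layer that maps a "spectrogram frame" x invertibly given a latent z, with an exact, cheap log-determinant.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: D-dim spectrogram frame, K-dim latent (K << D).
D, K = 8, 2

# Toy conditioner weights (stand-ins for a small neural network).
W_scale = rng.normal(scale=0.1, size=(K + D // 2, D // 2))
W_shift = rng.normal(scale=0.1, size=(K + D // 2, D // 2))

def coupling_forward(x, z):
    """Affine coupling layer conditioned on latent z (Glow-style sketch).

    Splits x into halves; the second half is affinely transformed with a
    scale and shift computed from the first half and z, so the mapping is
    exactly invertible and its log-determinant is a simple sum.
    """
    xa, xb = x[:D // 2], x[D // 2:]
    h = np.concatenate([xa, z])
    log_s = np.tanh(h @ W_scale)   # bounded log-scale for stability
    t = h @ W_shift
    yb = xb * np.exp(log_s) + t
    logdet = log_s.sum()           # log |det Jacobian| of the transform
    return np.concatenate([xa, yb]), logdet

def coupling_inverse(y, z):
    """Exact inverse: recompute scale/shift from the untouched half and z."""
    ya, yb = y[:D // 2], y[D // 2:]
    h = np.concatenate([ya, z])
    log_s = np.tanh(h @ W_scale)
    t = h @ W_shift
    xb = (yb - t) * np.exp(-log_s)
    return np.concatenate([ya, xb])

x = rng.normal(size=D)   # stand-in for a log-power spectrogram frame
z = rng.normal(size=K)   # low-dimensional latent, e.g. a VAE encoder output
y, logdet = coupling_forward(x, z)
x_rec = coupling_inverse(y, z)
print(np.allclose(x, x_rec))  # exact invertibility of the coupling layer
```

Because the layer is invertible given z, stacking such layers yields an exact conditional likelihood p(x | z) via the change-of-variables formula, which is the sense in which a flow-based decoder avoids explicitly parameterizing the observation distribution.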



NII CRIS-Line Collaborative Research


Submitting Author's Country: Japan
