
Speech-driven Personalized Gesture Synthetics: Harnessing Automatic Fuzzy Feature Inference
  • Fan Zhang ,
  • Zhaohan Wang ,
  • Xin Lyu ,
  • Siyuan Zhao ,
  • Mengjian Li ,
  • Weidong Geng ,
  • Naye Ji ,
  • Hui Du ,
  • Fuxing Gao ,
  • Hao Wu ,
  • Shunman Li
Corresponding author: Fan Zhang, Faculty of Humanities and Arts ([email protected])


Speech-driven gesture generation is an emerging field within virtual human creation. However, a significant challenge lies in accurately determining and processing the multitude of input features (such as acoustic, semantic, emotional, personality, and even subtle unknown features). Traditional approaches, which rely on various explicit feature inputs and complex multimodal processing, constrain the expressiveness of the resulting gestures and limit their applicability. To address these challenges, we present Persona-Gestor, a novel end-to-end generative model that produces highly personalized 3D full-body gestures relying solely on raw speech audio. The model combines a fuzzy feature extractor with a non-autoregressive Adaptive Layer Normalization (AdaLN) transformer diffusion architecture. The fuzzy feature extractor uses a fuzzy inference strategy to automatically infer implicit, continuous fuzzy features. These fuzzy features, represented as a unified latent feature, are fed into the AdaLN transformer. The AdaLN transformer introduces a conditioning mechanism that applies a uniform function across all tokens, thereby effectively modeling the correlation between the fuzzy features and the gesture sequence. This module ensures a high level of gesture-speech synchronization while preserving naturalness. Finally, we employ a diffusion model for training and inference, producing a wide variety of gestures. Extensive subjective and objective evaluations on the Trinity, ZEGGS, and BEAT datasets confirm that our model outperforms current state-of-the-art approaches. Persona-Gestor improves the system's usability and generalization capabilities, setting a new benchmark in speech-driven gesture synthesis and broadening the horizon for virtual human technology. Supplementary videos and code can be accessed at https://zf223669.github.io/Diffmotion-v2-website/
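
To make the conditioning mechanism described in the abstract concrete, the sketch below shows one possible AdaLN-style transformer block in PyTorch: a single latent vector (standing in for the unified fuzzy feature) is mapped to scale, shift, and gate parameters that modulate every gesture token identically, i.e. the same function is applied across all tokens. This is an illustrative reconstruction, not the authors' implementation; the class name, layer sizes, and layout are assumptions.

import torch
import torch.nn as nn

class AdaLNTransformerBlock(nn.Module):
    # Hypothetical block: self-attention + MLP, both modulated by a conditioning latent.
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim, elementwise_affine=False)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim, elementwise_affine=False)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        # One small network regresses scale/shift/gate pairs for both sub-layers from the
        # conditioning latent (the fuzzy feature, possibly fused with the diffusion timestep).
        self.ada = nn.Sequential(nn.SiLU(), nn.Linear(dim, 6 * dim))

    def forward(self, x: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # x:    (batch, tokens, dim)  noisy gesture sequence
        # cond: (batch, dim)          unified latent condition
        shift1, scale1, gate1, shift2, scale2, gate2 = self.ada(cond).chunk(6, dim=-1)
        h = self.norm1(x) * (1 + scale1.unsqueeze(1)) + shift1.unsqueeze(1)
        x = x + gate1.unsqueeze(1) * self.attn(h, h, h, need_weights=False)[0]
        h = self.norm2(x) * (1 + scale2.unsqueeze(1)) + shift2.unsqueeze(1)
        x = x + gate2.unsqueeze(1) * self.mlp(h)
        return x

Because the modulation parameters are broadcast over the token dimension, every frame of the gesture sequence is conditioned through the same learned function of the speech-derived latent, which is the property the abstract attributes to the AdaLN conditioning.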
Submitted to TechRxiv: 27 Feb 2024
Published in TechRxiv: 27 Feb 2024