Conditional Diffusion Models for Semantic 3D Medical Image Synthesis
  • Zolnamar Dorjsembe ,
  • Hsing-Kuo Pao ,
  • Sodtavilan Odonchimed ,
  • Furen Xiao
Zolnamar Dorjsembe
National Taiwan University of Science and Technology

Corresponding Author: [email protected]

Abstract

The demand for artificial intelligence (AI) in healthcare is rapidly increasing. However, significant challenges arise from data scarcity and privacy concerns, particularly in medical imaging. While existing generative models have achieved success in image synthesis and image-to-image translation tasks, there remains a gap in the generation of semantic 3D medical images. To address this gap, we introduce Med-DDPM, a diffusion model specifically designed for semantic 3D medical image synthesis, effectively tackling data scarcity and privacy issues.
The novelty of Med-DDPM lies in its incorporation of semantic conditioning, enabling precise control during the image generation process. Our model outperforms Generative Adversarial Networks (GANs) in terms of stability and performance, generating diverse and anatomically coherent images with high visual fidelity. Comparative analysis against state-of-the-art augmentation techniques demonstrates that Med-DDPM produces comparable results, highlighting its potential as a data augmentation tool for enhancing model accuracy.
In conclusion, Med-DDPM pioneers 3D semantic medical image synthesis by delivering high-quality and anatomically coherent images. Furthermore, the integration of semantic conditioning with Med-DDPM holds promise for image anonymization in the field of biomedical imaging, showcasing the capabilities of the model in addressing challenges related to data scarcity and privacy concerns.
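To make the idea of semantic conditioning concrete, the sketch below illustrates one common way a denoising diffusion model can be conditioned on a segmentation mask: the mask is concatenated channel-wise with the noisy volume at every denoising step, so the network always sees the anatomical layout it must respect. This is a minimal, hypothetical NumPy illustration of the general technique, not the paper's actual implementation; all shapes and variable names are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: a single-channel 3D volume (D, H, W) and a one-hot
# semantic mask with C label channels (illustrative values, not from the paper).
D = H = W = 8
C = 3

x0 = rng.standard_normal((1, D, H, W))                  # clean image volume
mask = rng.integers(0, 2, (C, D, H, W)).astype(float)   # semantic condition

# Standard DDPM forward process:
#   x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)

t = 500
eps = rng.standard_normal(x0.shape)
x_t = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

# Semantic conditioning by channel-wise concatenation: the denoising network
# would receive the noisy image together with the mask at every timestep.
net_input = np.concatenate([x_t, mask], axis=0)  # shape (1 + C, D, H, W)
print(net_input.shape)
```

In a full model, `net_input` would be fed to a 3D U-Net that predicts the noise `eps`; because the mask is present at every step, the sampled volume stays anatomically consistent with the given segmentation.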