Conditional Diffusion Models for Semantic 3D Medical Image Synthesis
The demand for artificial intelligence (AI) in healthcare is growing rapidly, yet data scarcity and privacy concerns pose significant challenges, particularly in medical imaging. While existing generative models have achieved success in image synthesis and image-to-image translation, a gap remains in the generation of 3D semantic medical images. To address this gap, we introduce Med-DDPM, a diffusion model designed specifically for semantic 3D medical image synthesis. The novelty of Med-DDPM lies in its semantic conditioning, which enables precise control over the image generation process. Our model outperforms Generative Adversarial Networks (GANs) in stability and performance, generating diverse, anatomically coherent images with high visual fidelity. Comparative analysis against state-of-the-art augmentation techniques shows that Med-DDPM produces comparable results, highlighting its potential as a data augmentation tool for improving model accuracy. In conclusion, Med-DDPM pioneers 3D semantic medical image synthesis by delivering high-quality, anatomically coherent images. Its semantic conditioning also holds promise for image anonymization in biomedical imaging, further addressing data scarcity and privacy concerns. Our code and model weights are publicly available at https://github.com/mobaidoctor/med-ddpm/, facilitating reproducibility.
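To make the semantic-conditioning idea concrete, the sketch below illustrates one common way such conditioning is realized in diffusion models: the segmentation label map is concatenated channel-wise with the noisy volume at every denoising step, so the predicted anatomy follows the semantic layout. This is an illustrative NumPy toy, not the authors' implementation; the `denoiser` stand-in, tensor shapes, and variable names are all hypothetical.

```python
import numpy as np

# Hypothetical toy sizes for a 3D volume (real volumes are far larger).
rng = np.random.default_rng(0)
D = H = W = 8
x_t = rng.standard_normal((1, 1, D, H, W))                      # noisy image at step t
mask = rng.integers(0, 2, (1, 1, D, H, W)).astype(np.float64)   # binary semantic label map


def denoiser(inp):
    """Stand-in for a 3D U-Net noise predictor: it expects 2 input
    channels (noisy image + mask) and returns 1 output channel.
    A real model would be a trained network, not a channel mean."""
    assert inp.shape[1] == 2
    return inp.mean(axis=1, keepdims=True)  # placeholder prediction


# Channel-wise conditioning: the mask accompanies x_t at every
# denoising step, steering generation toward the given anatomy.
eps_pred = denoiser(np.concatenate([x_t, mask], axis=1))
assert eps_pred.shape == x_t.shape
```

In a full sampler, this concatenation would be repeated at each of the T reverse-diffusion steps with the same mask, which is what gives the user precise, voxel-level control over the generated structures.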
Email Address of Submitting Author: zolnamar@gmail.com
ORCID of Submitting Author: https://orcid.org/0000-0002-6823-7712
Submitting Author's Institution: National Taiwan University of Science and Technology