Computation Overhead Optimization Strategy and Implementation for Dual-Domain Sparse-View CT Reconstruction
  • Zihan Deng,
  • Zhisheng Wang,
  • Legeng Lin,
  • Shunli Wang,
  • Junning Cui
Corresponding Author: Zihan Deng ([email protected])

Sparse-view computed tomography (CT) significantly reduces the radiation dose delivered to the human body, but its analytical reconstruction exhibits severe streak artifacts. Recently, deep learning methods have shown promising results in CT reconstruction. The Dual-Domain (DuDo) deep learning method is one representative approach, as it processes information in both the sinogram and image domains. However, existing DuDo methods pay insufficient attention to how training costs and strategies are allocated between the two domains. In this paper, we propose a Computation-Overhead Optimization (COO) DuDo training strategy for sparse-view CT reconstruction, i.e., COO-DuDo. The training ratio between the two domains is controlled by calculating each domain's computation overhead, loss, and gradient variation of the loss. To better exploit the COO-DuDo strategy for sparse-view CT reconstruction, we adopt a DuDo-Network (COO-DDNet) structure built on two encoding-decoding subnetworks. As specific contributions, we design a Multilevel Cross-domain Connection (MCC) method that connects decoding layers of the same scale in the two subnetworks, and we adopt a two-channel upsampling method, which enables more fine-grained control of model updates and suppresses checkerboard artifacts. The evaluation results validate the effectiveness of our training strategy and methods: the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) of the reconstruction results increased by 38.8% and 0.37%, respectively, and the model convergence time decreased by 11.8%. Our research offers a broader, computation-overhead-centered perspective on dual-domain image restoration tasks.
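The abstract describes allocating the training budget between the sinogram and image domains using each domain's computation overhead, loss, and gradient variation of the loss. The paper's exact allocation rule is not given here, so the following is only a minimal illustrative sketch under an assumed weighting: a domain's share of training steps grows with its loss and loss-gradient variation and shrinks with its per-step computation overhead. All function names and the scoring formula are hypothetical.

```python
# Hypothetical sketch of a computation-overhead-aware dual-domain
# training-step allocator (NOT the paper's actual COO rule).

def coo_score(loss, grad_var, overhead, eps=1e-8):
    """Score one domain: higher loss and larger loss-gradient
    variation argue for more training; higher per-step computation
    overhead argues for less. The formula is an assumption."""
    return (loss * (1.0 + grad_var)) / (overhead + eps)

def allocate_steps(total_steps, sino, img):
    """Split total_steps between the sinogram and image domains.
    sino/img are dicts with 'loss', 'grad_var', 'overhead'."""
    s = coo_score(sino["loss"], sino["grad_var"], sino["overhead"])
    i = coo_score(img["loss"], img["grad_var"], img["overhead"])
    sino_steps = round(total_steps * s / (s + i))
    return sino_steps, total_steps - sino_steps

# Example: the sinogram branch has higher loss and is cheaper per
# step, so it receives the larger share of the budget.
sino = {"loss": 0.8, "grad_var": 0.3, "overhead": 1.0}
img = {"loss": 0.4, "grad_var": 0.1, "overhead": 1.5}
print(allocate_steps(100, sino, img))  # → (78, 22)
```

In practice such a ratio would be re-evaluated periodically during training as the per-domain losses and their gradient statistics evolve.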
06 Jan 2024: Submitted to TechRxiv
10 Jan 2024: Published in TechRxiv