
Efficient Deep-Learning-Assisted Annotation for Medical Image Segmentation
  • Lichun Zhang,
  • Zhi Chen,
  • Honghai Zhang,
  • Fahim Ahmed Zaman,
  • Andreas Wahle,
  • Xiaodong Wu,
  • Milan Sonka
Lichun Zhang
Iowa Institute for Biomedical Imaging

Corresponding Author: [email protected]


Abstract

Image segmentation is a fundamental problem in medical image analysis. Deep learning (DL) methods have achieved state-of-the-art (SOTA) results in various medical image segmentation tasks. This success is largely attributable to the use of large annotated datasets for training. However, due to the anatomical variation and complexity of medical image data, annotating large medical image datasets is not only labor-intensive and time-consuming but also demands specialty-oriented skills. In this paper, we report a novel segmentation quality assessment (SQA) framework that combines active learning and assisted annotation to dramatically reduce annotation effort, both in selecting images and in querying annotations from human experts. We propose a two-branch network that integrates a spatial and channel-wise probability attention module into the segmentation network to perform segmentation and predict potential segmentation errors simultaneously. By directly assessing the segmentation quality of unannotated images, human experts can focus on the most relevant image samples, judiciously determine the most ‘valuable’ images for annotation, and effectively employ adjudicated segmentations as the next-batch training annotations with the assistance of the automatically predicted salient erroneous areas. The model performance is thus incrementally boosted via fine-tuning on the newly annotated datasets. Extensive experiments on intravascular ultrasound (IVUS) image data demonstrate that our approach achieves SOTA segmentation performance using no more than 10% of the training data while significantly reducing the annotation effort.
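The active-learning selection step described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: it assumes each unannotated image has already been assigned a scalar segmentation-quality score by the SQA network (lower score meaning larger predicted errors), and the function name `select_for_annotation` is hypothetical.

```python
def select_for_annotation(quality_scores, k):
    """Return ids of the k images with the lowest predicted quality.

    quality_scores: dict mapping image id -> predicted SQA score,
    where a lower score means larger expected segmentation errors,
    i.e. a more 'valuable' image to send to human experts.
    (Illustrative assumption; the paper's scoring is produced by the
    two-branch network's error-prediction branch.)
    """
    ranked = sorted(quality_scores, key=quality_scores.get)
    return ranked[:k]


# Toy usage: four unannotated images with predicted quality scores.
scores = {"img_a": 0.92, "img_b": 0.41, "img_c": 0.77, "img_d": 0.35}
batch = select_for_annotation(scores, 2)
print(batch)  # the two images with the lowest predicted quality
```

In each active-learning round, the selected batch would be annotated by experts (aided by the predicted salient error areas) and used to fine-tune the model before the next round of scoring.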