Lichun Zhang

and 6 more

Image segmentation is a fundamental problem in medical image analysis. Deep learning (DL) methods have achieved state-of-the-art (SOTA) results in various medical image segmentation tasks. This success is largely attributable to the use of large annotated datasets for training. However, due to the anatomical variations and complexity of medical image data, annotating large medical image datasets is not only labor-intensive and time-consuming but also demands specialty-oriented skills. In this paper, we report a novel segmentation quality assessment (SQA) framework that combines active learning and assisted annotation to dramatically reduce annotation effort, both in selecting images and in querying annotations from human experts. We propose a two-branch network that integrates a spatial and channel-wise probability attention module into the segmentation network to perform segmentation and predict potential segmentation errors simultaneously. Because the framework directly assesses the segmentation quality of unannotated images, human experts can focus on the most relevant image samples, judiciously determine the most ‘valuable’ images for annotation, and, with the assistance of the automatically predicted salient erroneous areas, efficiently produce adjudicated segmentations that serve as the next batch of training annotations. The model performance is thus incrementally boosted by fine-tuning on the newly annotated data. Extensive experiments on intravascular ultrasound (IVUS) image data demonstrate that our approach achieves SOTA segmentation performance using no more than 10% of the training data while significantly reducing annotation effort.
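
Below is a minimal PyTorch sketch of the two-branch idea described in this abstract: a shared encoder feeds a segmentation head and an error-prediction head whose features are gated by a spatial and channel-wise attention module. All class and parameter names (SCAttention, TwoBranchSQANet, feat_ch) are illustrative assumptions for exposition, not the authors' implementation.

```python
# Hypothetical sketch of a two-branch segmentation + error-prediction network.
import torch
import torch.nn as nn

class SCAttention(nn.Module):
    """Spatial and channel-wise attention over feature maps (illustrative)."""
    def __init__(self, channels):
        super().__init__()
        # Channel gate: squeeze spatial dims, re-weight channels.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial gate: collapse channels to a single attention map.
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.channel_gate(x) * self.spatial_gate(x)

class TwoBranchSQANet(nn.Module):
    """Shared encoder with a segmentation branch and an error-prediction branch."""
    def __init__(self, in_ch=1, feat_ch=32, n_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.seg_head = nn.Conv2d(feat_ch, n_classes, 1)  # per-pixel class logits
        self.attention = SCAttention(feat_ch)
        self.err_head = nn.Conv2d(feat_ch, 1, 1)          # per-pixel error logit

    def forward(self, x):
        feats = self.encoder(x)
        seg_logits = self.seg_head(feats)
        err_logits = self.err_head(self.attention(feats))
        return seg_logits, err_logits

if __name__ == "__main__":
    net = TwoBranchSQANet()
    seg, err = net(torch.randn(2, 1, 128, 128))
    print(seg.shape, err.shape)  # (2, 2, 128, 128), (2, 1, 128, 128)
```

In an active-learning loop of the kind the abstract describes, the predicted error map could be aggregated per image to rank unannotated samples for expert review.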

Fahim Ahmed Zaman

and 4 more

Despite advances in deep learning-based semantic segmentation methods, which have achieved expert-level accuracy in many computer vision applications, the same general approaches frequently fail in 3D medical image segmentation due to complex tissue structures, noisy acquisition, disease-related pathologies, and the lack of sufficiently large datasets with associated annotations. For expeditious diagnosis and quantitative image analysis in large-scale clinical trials, there is a compelling need to predict segmentation quality without ground truth. In this paper, we propose a deep learning framework to locate erroneous regions on the boundary surfaces of segmented objects for segmentation quality control and assessment. A Convolutional Neural Network (CNN) is used to learn boundary-related image features of multiple objects that can identify location-specific segmentation inaccuracies. The predicted error locations can facilitate efficient user interaction for interactive image segmentation (IIS). We evaluated the proposed method on two datasets: Osteoarthritis Initiative (OAI) 3D knee MRI and 3D calf muscle MRI. Average sensitivity scores of $0.96\pm0.00$ and $0.96\pm0.02$, and average positive predictive values of $0.87\pm0.01$ and $0.93\pm0.03$, were achieved for erroneous surface region detection in knee cartilage segmentation and calf muscle segmentation, respectively. Our experiments demonstrate the promising performance of the proposed method for segmentation quality assessment via automated detection of erroneous surface regions in medical images.
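
As a rough illustration of the evaluation reported in this abstract, the sketch below computes sensitivity and positive predictive value for a set of detected erroneous surface regions against a reference error mask. The NumPy-based function and variable names (sensitivity_and_ppv, pred_err, true_err) are assumptions for exposition only, not the authors' code.

```python
# Hypothetical sketch: sensitivity and PPV for binary error-region masks.
import numpy as np

def sensitivity_and_ppv(pred_err: np.ndarray, true_err: np.ndarray):
    """Sensitivity = TP/(TP+FN); PPV = TP/(TP+FP), over boolean voxel masks."""
    pred_err = pred_err.astype(bool)
    true_err = true_err.astype(bool)
    tp = np.logical_and(pred_err, true_err).sum()
    fn = np.logical_and(~pred_err, true_err).sum()
    fp = np.logical_and(pred_err, ~true_err).sum()
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    ppv = tp / (tp + fp) if (tp + fp) else float("nan")
    return sensitivity, ppv

# Toy example on a small 3D mask of flagged surface regions.
pred = np.zeros((4, 4, 4), dtype=bool); pred[1:3, 1:3, 1:3] = True
true = np.zeros((4, 4, 4), dtype=bool); true[1:3, 1:3, :3] = True
print(sensitivity_and_ppv(pred, true))
```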

Yaopeng Peng

and 6 more