
YoloCurvSeg: You Only Label One Noisy Skeleton for Vessel-style Curvilinear Structure Segmentation
  • Li Lin ,
  • Linkai Peng ,
  • Huaqing He ,
  • Pujin Cheng ,
  • Jiewei Wu ,
  • Kenneth Kin-Yip Wong ,
  • Xiaoying Tang
Li Lin
Department of Electronic and Electrical Engineering

Corresponding Author:[email protected]


Abstract

Weakly-supervised learning (WSL) has been proposed to alleviate the conflict between data annotation cost and model performance by employing sparsely-grained (i.e., point-, box-, or scribble-wise) supervision, and has shown promising performance, particularly in the image segmentation field. However, it remains a very challenging problem given the limited supervision, especially when only a small number of labeled samples are available. Additionally, almost all existing WSL segmentation methods are designed for star-convex structures, which differ substantially from curvilinear structures such as vessels and nerves. In this paper, we propose a novel sparsely annotated segmentation framework for curvilinear structures, named YoloCurvSeg, based on image synthesis. A background generator delivers image backgrounds that closely match real distributions through inpainting dilated skeletons. The extracted backgrounds are then combined, via a multilayer patch-wise contrastive learning synthesizer, with randomly emulated curves produced by a Space Colonization Algorithm-based foreground generator. In this way, a synthetic dataset with both images and curve segmentation labels is obtained, at the cost of only one or a few noisy skeleton annotations.
Finally, a segmenter is trained with the generated dataset and possibly an unlabeled dataset. The proposed YoloCurvSeg is evaluated on four publicly available datasets (OCTA500, CORN, DRIVE and CHASEDB1) and the results show that YoloCurvSeg outperforms state-of-the-art WSL segmentation methods by large margins. With only one noisy skeleton annotation (respectively 0.14%, 0.03%, 1.40%, and 0.65% of the full annotation), YoloCurvSeg achieves more than 97% of the fully-supervised performance on each dataset. Code and datasets will be released at https://github.com/llmir/YoloCurvSeg.
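The abstract's foreground generator is based on the Space Colonization Algorithm, which grows branching, vessel-like curves by letting scattered attraction points pull a growing tree toward them. The sketch below is an illustrative, simplified 2-D variant (with an unbounded influence radius), not the paper's implementation; the function name and all parameter values are hypothetical choices for demonstration.

```python
import numpy as np

def generate_curve_tree(n_attractors=200, kill=0.05, step=0.02,
                        iters=300, seed=0):
    """Grow a 2-D curvilinear tree via a simplified Space Colonization
    Algorithm (illustrative sketch, not the paper's implementation).

    Attraction points are scattered uniformly in the unit square; each
    iteration, every attractor pulls its nearest tree node, each pulled
    node steps toward the mean direction of its attractors, and
    attractors within the kill distance of any node are removed.
    Returns node coordinates and parent indices (edges), which could be
    rasterized into a synthetic curve label mask.
    """
    rng = np.random.default_rng(seed)
    attractors = rng.random((n_attractors, 2))
    nodes = [np.array([0.5, 0.0])]   # root at the bottom centre
    parents = [-1]                   # root has no parent
    for _ in range(iters):
        if len(attractors) == 0:
            break                    # all attractors consumed
        pts = np.array(nodes)
        # distance of every attractor to every node
        d = np.linalg.norm(attractors[:, None] - pts[None], axis=2)
        nearest = d.argmin(axis=1)
        # group attractors by the node they influence
        grow = {}
        for a_idx, n_idx in enumerate(nearest):
            grow.setdefault(n_idx, []).append(attractors[a_idx])
        # each influenced node sprouts a child toward its attractors
        for n_idx, attrs in grow.items():
            direction = np.mean(np.array(attrs) - pts[n_idx], axis=0)
            norm = np.linalg.norm(direction)
            if norm > 1e-9:
                nodes.append(pts[n_idx] + step * direction / norm)
                parents.append(n_idx)
        # prune attractors the tree has reached
        pts = np.array(nodes)
        d = np.linalg.norm(attractors[:, None] - pts[None], axis=2)
        attractors = attractors[d.min(axis=1) > kill]
    return np.array(nodes), parents
```

Rasterizing the returned edges (e.g., drawing a line of varying width between each node and its parent) would yield a synthetic curvilinear mask of the kind the foreground generator supplies to the synthesizer.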
Published in Medical Image Analysis, volume 90, article 102937, December 2023. DOI: 10.1016/j.media.2023.102937