RSSNet: A Fine-tuned Deep Learning Network for Robotic Surgical-tool Segmentation
  • Muhammad Sadaqat Janjua,
  • Hussam Ali,
  • Qaisar Farooq

University of Sargodha

Corresponding Author: [email protected]


Abstract

Robotic systems have significantly transformed surgical practice by improving precision and minimizing invasiveness. Accurate surgical-tool segmentation is crucial to these systems, yet it remains challenging due to the complex and diverse nature of surgical scenes and equipment. This paper presents a robust architecture, the Robotic Surgical-tool Segmentation Network (RSSNet), to improve the accuracy of surgical-tool segmentation in robot-assisted surgery. The proposed RSSNet combines Atrous Spatial Pyramid Pooling (ASPP) with an efficient average-pooling branch to extract fine details through multi-scale convolution kernels, enabling accurate segmentation of instruments at diverse scales. The proposed method outperforms the base U-Net architecture at segmenting biomedical images containing surgical toolsets with intricate details. The model was rigorously tested on three primary datasets, Kvasir-Instrument, EndoVis 2017, and EndoVis 2018, demonstrating robustness and versatility. An ablation study further confirmed the effectiveness of each component of the architecture, particularly with respect to generalizability. This study offers an effective, high-performance solution for surgical-tool segmentation on low-end hardware, an essential step toward fully autonomous robotic surgery. Compared with the U-Net baseline, the proposed method improved the Dice similarity coefficient (DSC) by 5.93% and the mIoU by 6.84%, owing to multi-scale kernel processing with different dilation rates. Compared with the best-performing prior model, Surg_Net, the DSC of the proposed method increased by 1.01% and the mIoU by 1.04%.
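The multi-scale behavior the abstract attributes to ASPP comes from dilated (atrous) convolutions: a k×k kernel with dilation rate d spans an effective receptive field of k + (k−1)(d−1) along each axis, so parallel branches with different rates capture instruments at different scales. A minimal sketch of that arithmetic (the rates 1, 6, 12, and 18 are a common ASPP configuration from the literature, not values stated in this abstract):

```python
def effective_receptive_field(kernel_size: int, dilation: int) -> int:
    """Effective span of a dilated convolution kernel along one axis:
    k + (k - 1) * (d - 1). With dilation 1 this reduces to k itself."""
    return kernel_size + (kernel_size - 1) * (dilation - 1)

# Parallel ASPP branches typically share a 3x3 kernel but differ in
# dilation rate, so each branch "sees" context at a different scale.
# (Rates below are an illustrative assumption, not from the paper.)
for d in (1, 6, 12, 18):
    print(f"dilation {d:2d} -> effective field {effective_receptive_field(3, d)}")
```

Because every branch keeps the same number of 3×3 weights, the larger receptive fields come at no extra parameter cost, which is consistent with the abstract's emphasis on an efficient, low-end-friendly design.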