Performance Analysis of YOLO-NAS SOTA Models on CAL Tool Detection
  • Muhammad Adil Raja, Corresponding Author: [email protected]
  • Róisín Loughran, Regulated Software Research Center (RSRC), Dundalk Institute of Technology (DkIT), Dundalk
  • Fergal McCaffery, Regulated Software Research Center (RSRC), Dundalk Institute of Technology (DkIT), Dundalk

Abstract

Every now and then, we witness significant improvements in the performance of Deep Learning models. A typical cycle of improvement involves enhanced accuracy followed by reduced computing time. As algorithms improve, it is worthwhile to evaluate their performance on the problems they affect. Computationally intensive problems, such as object detection for Computer-Aided Laparoscopy (CAL), stand to benefit from such advances. Recently, a new set of variants of the You Only Look Once (YOLO) family of models, based on the Neural Architecture Search (NAS) technique, was released. Deci, the enterprise behind this development, claims substantially better performance in terms of both accuracy and computational efficiency. In this paper, we analyze the performance of YOLO-NAS on a well-known benchmark dataset related to CAL. We found that all of the NAS-based YOLO models performed worse than other State-of-the-Art (SoTA) YOLO models. We also compare our results against the YOLOv7 model.
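Benchmarks of this kind are conventionally scored with mean Average Precision (mAP), which counts a detection as a true positive when its Intersection-over-Union (IoU) with a ground-truth box exceeds a threshold (commonly 0.5). The following is a minimal, self-contained sketch of the IoU computation underlying such scores, not code from the paper itself:

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Intersection rectangle (empty if the boxes do not overlap)
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    # Areas of the individual boxes
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Two 10x10 boxes overlapping in a 5x5 corner:
# intersection 25, union 175, IoU ~ 0.143
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))
```

Under a 0.5 IoU threshold, this example prediction would be counted as a false positive; mAP then averages precision over recall levels and over object classes.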
02 Jan 2024: Submitted to TechRxiv
08 Jan 2024: Published in TechRxiv