Abstract
Measurements of cardiac function such as left ventricular ejection
fraction and myocardial strain are typically based on 2D ultrasound
imaging. The reliability of these measurements depends strongly on a correct transducer pose, in which the 2D imaging plane properly aligns with the heart for the standard measurement views, and thus on the operator's skill. In this work, we propose a deep
learning-based tool that provides real-time feedback on how to move the
transducer to obtain the required views. We believe this method can help less-experienced users acquire higher-quality recordings for measurement and diagnosis, and improve image standardization for more experienced users. Training data was generated by slicing 3D
ultrasound volumes, which makes it possible to simulate movements of a transducer and its 2D imaging plane. Each slice was labelled with an anatomical
reference obtained through a semi-automatic annotation procedure, which
allowed us to generate substantial amounts of training data. The method
was validated and tested on 2D images from several datasets
representative of a prospective clinical setting. We proposed a new metric to score the correctness of the transducer movement feedback according to several criteria, and achieved a success rate of 75% across all models and 95% for the rotational movement. A real-time
prototype application was developed, based on data streamed from a clinical ultrasound system, and demonstrated that the method can robustly predict the apical rotation and tilt of the 2D ultrasound image plane relative to the heart.