We present a method for classifying tasks in fetal ultrasound scans using the eye-tracking data of sonographers. The visual attention of a sonographer captured by eye-tracking data over time is defined by a scanpath. In routine fetal ultrasound, the captured standard imaging planes are visually inconsistent due to fetal position and movements and the sonographer's scanning experience. To address this challenge, we propose a scale- and position-invariant task classification method using normalised visual scanpaths. We describe a normalisation method that uses bounding boxes to give the gaze a reference to the position and scale of the imaging plane, and we use the normalised scanpath sequences to train machine learning models that discriminate between ultrasound tasks. We compare the proposed method to existing work that uses raw eye-tracking data. The best-performing model achieves an F1-score of 84% and outperforms existing models.
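
To illustrate the normalisation step described above, the following is a minimal sketch of mapping raw screen-space gaze points into the coordinate frame of the imaging plane's bounding box, which makes the scanpath invariant to where and at what scale the plane appears on screen. The function name, array shapes, and clipping behaviour are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def normalise_scanpath(gaze_xy: np.ndarray, bbox: tuple) -> np.ndarray:
    """Map raw gaze points (screen pixels) into the coordinate frame of
    the imaging plane's bounding box, so the resulting scanpath is
    invariant to the plane's position and scale on screen.

    gaze_xy : (N, 2) array of raw gaze coordinates in pixels.
    bbox    : (x0, y0, width, height) of the imaging plane in pixels.
    """
    x0, y0, w, h = bbox
    # Shift by the box origin, then scale by its width and height.
    normalised = (gaze_xy - np.array([x0, y0])) / np.array([w, h])
    # Gaze samples outside the imaging plane fall outside [0, 1];
    # clip so every sample stays within the normalised plane
    # (an assumption; other handling, e.g. discarding, is possible).
    return np.clip(normalised, 0.0, 1.0)

# Hypothetical usage: a short scanpath and a bounding box around the plane.
scanpath = np.array([[420.0, 310.0], [510.0, 355.0], [480.0, 400.0]])
plane_bbox = (300.0, 200.0, 400.0, 300.0)
print(normalise_scanpath(scanpath, plane_bbox))
```

The normalised sequences of this form would then serve as input to a time-series classifier, as the abstract describes.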

Original publication

DOI: 10.1007/978-3-030-87583-1_13
Type: Conference paper
Publication Date: 27/09/2021
Volume: 12967
Pages: 129–138
Keywords: Eye-tracking, fetal ultrasound, time-series classification, visual scanpath