
PULSE stands for Perception Ultrasound By Learning Sonographic Experience. It is an ambitious, innovative, interdisciplinary research project exploring the use of Artificial Intelligence-based technologies to reduce the need for highly trained ultrasound operators. It is a joint research project with the Department of Engineering Science at the University of Oxford.


(Image: Front cover of the MICCAI magazine, featuring PULSE).

The greatest barrier to the universal implementation of ultrasound in clinical obstetric medicine today is the need to train sonographers to the highest level to ensure diagnostic images are of consistently high quality and fit for purpose. Unfortunately, non-experts find ultrasound images very difficult to interpret by eye alone. We apply the latest ideas from machine learning and computer vision to build computational models of visual search and navigation from real-world ultrasound scanning videos together with eye-tracking and probe-movement data.

Our work is motivated by the observation that sonographers find it easier to interpret their own scans than to review those taken by others. The innovation in PULSE is to apply the latest ideas from machine learning and computer vision to build, from real-world training video data, computational models that describe how an expert sonographer performs a diagnostic study of a subject from multiple perceptual cues.

Novel machine-learning-based computational models will be derived from probe and eye-motion tracking, image processing, and knowledge of how to interpret real-world clinical images and videos acquired to a standardised protocol. By building models that more closely mimic how a human makes decisions from ultrasound images, we believe we can build considerably more powerful assistive interpretation methods than have previously been possible from still ultrasound images and videos alone. For more information, see the study website.
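The core technical idea above is multimodal fusion: combining the image stream with eye-tracking and probe-motion data into a single representation a model can learn from. The sketch below is purely illustrative and is not PULSE's actual method (which is not described in this text); all function names, feature choices, and data shapes are hypothetical assumptions, using toy statistical summaries in place of learned feature extractors.

```python
# Illustrative sketch only: combines three perceptual cues (image, gaze,
# probe motion) into one feature vector via early fusion. All names and
# feature choices are hypothetical, not the PULSE project's actual models.
import numpy as np

rng = np.random.default_rng(0)

def image_features(frame: np.ndarray) -> np.ndarray:
    """Toy stand-in for a learned image encoder: mean intensity per quadrant."""
    h, w = frame.shape
    quads = [frame[:h//2, :w//2], frame[:h//2, w//2:],
             frame[h//2:, :w//2], frame[h//2:, w//2:]]
    return np.array([q.mean() for q in quads])

def gaze_features(gaze: np.ndarray) -> np.ndarray:
    """Summarise an eye-tracking trace (N x 2 points): centroid and spread."""
    return np.concatenate([gaze.mean(axis=0), gaze.std(axis=0)])

def probe_features(motion: np.ndarray) -> np.ndarray:
    """Summarise probe movement (N x 3 velocities): mean speed and jitter."""
    speeds = np.linalg.norm(motion, axis=1)
    return np.array([speeds.mean(), speeds.std()])

def fuse(frame: np.ndarray, gaze: np.ndarray, motion: np.ndarray) -> np.ndarray:
    """Early fusion: concatenate all cues into one feature vector."""
    return np.concatenate([image_features(frame),
                           gaze_features(gaze),
                           probe_features(motion)])

# One synthetic time step of each data stream.
frame = rng.random((64, 64))        # ultrasound frame (greyscale)
gaze = rng.random((30, 2))          # 30 gaze fixations in [0, 1]^2
motion = rng.normal(size=(30, 3))   # 30 probe velocity samples

features = fuse(frame, gaze, motion)
print(features.shape)  # 4 image + 4 gaze + 2 probe features -> (10,)
```

In a real system each toy summary would be replaced by a learned encoder, and the fused vector would feed a model trained on expert scanning sessions; the sketch only shows the fusion structure.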

Funding: European Research Council (ERC).