
Researchers from the University of Oxford, including clinicians and scientists from the Nuffield Department of Women’s & Reproductive Health (NDWRH), have developed a new artificial intelligence (AI) system to assist clinicians during fetal ultrasound scans in real time.

The technology, called Sonomate, was developed by the Noble Group at the Oxford Institute of Biomedical Engineering in collaboration with the Papageorghiou Group at NDWRH. Prof Aris Papageorghiou and Jayne Lander, Research Sonographer, were key contributors to the study, which has been accepted for publication in Nature Biomedical Engineering.

Addressing the challenges of ultrasound scanning in pregnancy

Ultrasound imaging is among the most commonly used diagnostic tools in prenatal care. However, it is also highly operator-dependent, meaning the quality of scans can vary depending on the experience and confidence of the person conducting the examination.

Trainee sonographers often encounter steep learning curves and limited supervision in busy clinical environments. Even experienced clinicians may struggle to ensure that all necessary anatomical views are obtained during a time-pressured scan.

The researchers investigated whether AI could offer live, practical support during the scan itself, rather than analysing images only after the examination has concluded, as existing AI ultrasound tools do.

How Sonomate works in practice

Sonomate is an AI assistant designed to work alongside the sonographer during a live fetal ultrasound scan. It analyses real-time ultrasound video while interpreting the clinician's spoken instructions. By combining what it “sees” with what it “hears”, Sonomate can provide practical guidance that helps the sonographer deliver a more accurate scan than they could alone.


Sonomate acts as a digital ‘mate’ for the sonographer. It’s designed to guide and support clinicians, particularly those early in their training, much like having an expert supervisor by your side 24/7.

– Dr Xiaoqing Guo, technical lead and first author of the study

Differentiating Sonomate from existing AI tools

What sets Sonomate apart from existing AI tools is its ability to understand both moving ultrasound video and spoken language simultaneously. Previous AI systems have largely focused on analysing still images or reviewing scans retrospectively.

Sonomate instead operates in real time during the scan, even when speech and visuals are not perfectly synchronised.

During testing, the system showed strong ability in recognising fetal anatomy and answering clinically relevant questions at both image and video levels. It is also efficient enough to function in real time on existing hardware.

The future outlook for Sonomate

Sonomate is currently a research prototype, but the team sees significant potential for future development and clinical impact. 

While highly experienced sonographers may need minimal assistance, Sonomate could significantly transform sonographer training and support clinicians who use ultrasound less frequently. The technology has the potential to reduce unnecessary repeat scans, identify missed anatomical views, assess image quality, and offer real-time contextual guidance, helping early-career sonographers build diagnostic accuracy and confidence.


Sonomate demonstrates how pregnancy ultrasound scanning and diagnosis can be simplified using the latest advances in multi-modal AI. This proof-of-concept study highlights the potential of AI-powered tools to assist ultrasound skills training and to support non-specialists in hospital or community settings in a completely new way. While further work is needed before clinical deployment, we are excited by both the technical progress and the potential to transform ultrasound practice, including in the NHS.

– Professor Alison Noble

Collaborators and contributions

The research was funded by Professor Noble’s UKRI Turing AI World-Leading Researcher Fellowship award (EP/X040186/1) and ERC Advanced Grant (ERC-ADG-2015 694681), and the EPSRC Visual AI Programme Grant (EP/T028572/1). Professors Papageorghiou and Noble also received funding for the work from the NIHR-funded Oxford Biomedical Research Centre. Dr. Guo’s work was additionally supported by the Hong Kong Research Grants Council (RGC) Early Career Scheme grant 22203525.

Contributors: Xiaoqing Guo, Mohammad Alsharid, He Zhao, Yipei Wang, Jayne Lander, Alison Noble, Aris T. Papageorghiou.

Full Publication

A visually grounded language model for fetal ultrasound understanding.

Nature Biomedical Engineering.
