© 2020, Springer Nature Switzerland AG.

Domain adaptation is an active area of medical image analysis research. In this paper, we present a cross-device and cross-anatomy adaptation network (CCAN) for automatically annotating fetal anomaly ultrasound video. In our approach, deep learning models trained on widely available, expert-acquired and manually labeled free-hand ultrasound video from a high-end ultrasound machine are adapted to a setting in which limited, unlabeled ultrasound video is collected with a low-cost probe using a simplified sweep protocol suitable for less-experienced users. This unsupervised domain adaptation problem is interesting because there are two sources of domain variation between the datasets: (1) cross-device image appearance variation due to the different transducers; and (2) cross-anatomy variation, because the simplified scanning protocol does not necessarily contain the standard views seen in typical free-hand scanning video. Domain transfer is achieved by introducing a novel structure-aware adversarial training module to learn the cross-device variation, together with a novel selective adaptation module to accommodate the cross-anatomy variation. Learning from a dataset of high-end machine clinical video and expert labels, we demonstrate the efficacy of the proposed method for anatomy classification on unlabeled sweep data acquired with the low-cost probe under the non-expert protocol. Experimental results show that, when only cross-device variations are learned and reduced, CCAN significantly improves mean recognition accuracy by 20.8% and 10.0% compared to a method without domain adaptation and a state-of-the-art adaptation method, respectively. When both cross-device and cross-anatomy variations are reduced, CCAN improves mean recognition accuracy by a statistically significant 20% compared with these state-of-the-art adaptation methods.
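The abstract does not detail the structure-aware adversarial training module, but the cross-device part of the problem follows the standard adversarial domain-adaptation recipe. Below is a minimal, hypothetical PyTorch sketch of that recipe (a gradient reversal layer feeding a domain discriminator, after Ganin & Lempitsky, 2015); the backbone, layer sizes, and module names are illustrative assumptions, not the authors' CCAN architecture.

```python
# Minimal sketch of adversarial domain adaptation with gradient reversal.
# NOT the paper's CCAN implementation; architecture choices are assumptions.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negates (and scales) gradients backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class DomainAdversarialNet(nn.Module):
    def __init__(self, num_classes=10, feat_dim=256):
        super().__init__()
        # Shared feature extractor (stand-in for a video-frame CNN backbone;
        # single input channel for grayscale ultrasound frames).
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        # Anatomy classifier, trained on labeled source frames only.
        self.classifier = nn.Linear(feat_dim, num_classes)
        # Domain discriminator: source (high-end probe) vs. target (sweep probe).
        self.discriminator = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 2),
        )

    def forward(self, x, lambd=1.0):
        f = self.features(x)
        class_logits = self.classifier(f)
        # Reversed gradients make the features harder to classify by domain.
        domain_logits = self.discriminator(GradReverse.apply(f, lambd))
        return class_logits, domain_logits
```

Training would minimize cross-entropy on labeled source frames plus a two-way domain loss on frames from both probes; the reversed gradient pushes the shared features toward device invariance.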
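For the cross-anatomy side, the selective adaptation module implies that source views absent from the sweep protocol should not be forced to align. One common way to realize such selectivity, borrowed from partial domain adaptation and shown here purely as an assumed illustration rather than the paper's module, is to down-weight source classes that the model rarely predicts on the unlabeled target data:

```python
# Hypothetical class-weighting step in the spirit of selective adaptation:
# down-weight source classes rarely predicted on target sweeps, since the
# simplified protocol may not contain all standard views. An assumption,
# not the paper's exact method. Reuses DomainAdversarialNet from above.
import torch
import torch.nn.functional as F

@torch.no_grad()
def estimate_class_weights(model, target_loader, num_classes, device="cpu"):
    """Average softmax predictions over unlabeled target frames."""
    totals = torch.zeros(num_classes, device=device)
    count = 0
    for frames in target_loader:
        class_logits, _ = model(frames.to(device))
        totals += F.softmax(class_logits, dim=1).sum(dim=0)
        count += frames.size(0)
    weights = totals / count
    return weights / weights.max()  # normalize so the largest weight is 1
```

The returned per-class weights would then scale the source classification and adversarial losses, so that views missing from the sweep data contribute less to the alignment.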

Original publication

DOI

10.1007/978-3-030-60334-2_5

Type

Conference paper

Publication Date

01/10/2020

Volume

LNCS 12437

Pages

42–51