OBJECTIVE: Stillbirth is a potentially preventable complication of pregnancy. Identifying women at high risk of stillbirth can guide decisions on the need for closer surveillance and timing of delivery in order to prevent fetal death. Prognostic models have been developed to predict the risk of stillbirth, but none has yet been validated externally. In this study, we externally validated published prediction models for stillbirth using individual participant data (IPD) meta-analysis to assess their predictive performance.

METHODS: The MEDLINE, EMBASE, DH-DATA and AMED databases were searched from inception to December 2020 to identify studies reporting stillbirth prediction models. Studies that developed or updated prediction models for stillbirth for use at any time during pregnancy were included. IPD from cohorts within the International Prediction of Pregnancy Complications (IPPIC) Network were used to externally validate the identified prediction models whose individual variables were available in the IPD. The risk of bias of the models and cohorts was assessed using the Prediction model Risk Of Bias ASsessment Tool (PROBAST). The discriminative performance of the models was evaluated using the C-statistic, and calibration was assessed using calibration plots, the calibration slope and calibration-in-the-large. Performance measures were estimated separately in each cohort and summarized across cohorts using random-effects meta-analysis. Clinical utility was assessed using net benefit.

RESULTS: Seventeen studies reporting the development of 40 prognostic models for stillbirth were identified. None of the models had previously been validated externally, and the full model equation was reported for only one-fifth (20%, 8/40) of them. External validation was possible for three of these models, using IPD from 19 cohorts (491 201 pregnant women) within the IPPIC Network database. On evaluation of the model development studies, all three models were at overall high risk of bias according to PROBAST. In the IPD meta-analysis, the models had summary C-statistics ranging from 0.53 to 0.65 and summary calibration slopes ranging from 0.40 to 0.88, with risk predictions that were generally too extreme compared with the observed risks. The models had little to no clinical utility, as assessed by net benefit. However, the performance of some models remained uncertain because of the small available sample sizes.

CONCLUSIONS: The three validated stillbirth prediction models showed generally poor and uncertain predictive performance in new data, with limited evidence to support their clinical application. The findings suggest methodological shortcomings in their development, including overfitting. Further research is needed to validate these and other models externally, identify stronger prognostic factors and develop more robust prediction models.

© 2021 The Authors. Ultrasound in Obstetrics & Gynecology published by John Wiley & Sons Ltd on behalf of the International Society of Ultrasound in Obstetrics and Gynecology.
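For readers unfamiliar with the performance measures named in the abstract, the sketch below illustrates one common way to compute the C-statistic, calibration slope, calibration-in-the-large and net benefit in a single validation cohort, and to pool cohort-level estimates with a DerSimonian–Laird random-effects model. It is an illustrative Python sketch under assumed inputs (an outcome vector `y` and predicted risks `p`), not code from the study; the function names and the use of NumPy, scikit-learn and statsmodels are assumptions.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score


def cohort_performance(y, p):
    """Discrimination and calibration of a prediction model in one cohort.

    y : binary outcomes (1 = stillbirth, 0 = no stillbirth), illustrative only
    p : predicted risks from a previously published model
    """
    p = np.clip(np.asarray(p, float), 1e-6, 1 - 1e-6)  # avoid infinite logits
    lp = np.log(p / (1 - p))                            # linear predictor (logit of risk)

    # C-statistic: probability that a case receives a higher predicted risk
    # than a non-case (equivalent to the area under the ROC curve)
    c_stat = roc_auc_score(y, p)

    # Calibration slope: logistic regression of the outcome on the linear
    # predictor; a slope well below 1 indicates predictions that are too
    # extreme, a typical sign of overfitting during model development
    slope_fit = sm.GLM(y, sm.add_constant(lp),
                       family=sm.families.Binomial()).fit()
    cal_slope = slope_fit.params[1]

    # Calibration-in-the-large: intercept of a logistic model with the
    # linear predictor entered as an offset (slope fixed at 1)
    citl_fit = sm.GLM(y, np.ones((len(y), 1)), offset=lp,
                      family=sm.families.Binomial()).fit()
    citl = citl_fit.params[0]

    return c_stat, cal_slope, citl


def net_benefit(y, p, threshold):
    """Net benefit of acting on all pregnancies above a given risk threshold."""
    y, p = np.asarray(y), np.asarray(p)
    treat = p >= threshold
    n = len(y)
    tp = np.sum(treat & (y == 1)) / n        # true positives per person
    fp = np.sum(treat & (y == 0)) / n        # false positives per person
    return tp - fp * threshold / (1 - threshold)


def pool_random_effects(estimates, variances):
    """DerSimonian-Laird random-effects summary of cohort-level estimates."""
    est, var = np.asarray(estimates, float), np.asarray(variances, float)
    w = 1 / var
    fixed = np.sum(w * est) / np.sum(w)
    q = np.sum(w * (est - fixed) ** 2)
    tau2 = max(0.0, (q - (len(est) - 1)) /
               (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_star = 1 / (var + tau2)                # weights including between-cohort heterogeneity
    return np.sum(w_star * est) / np.sum(w_star)
```

As a usage note, C-statistics are conventionally pooled on the logit scale before back-transformation, and summary estimates are usually reported with confidence and prediction intervals; the simple pooling above omits those steps for brevity.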

Original publication

DOI

10.1002/uog.23757

Type

Journal article

Journal

Ultrasound Obstet Gynecol

Publication Date

02/2022

Volume

59

Pages

209–219

Keywords

external validation, individual participant data, intrauterine death, prediction model, stillbirth, Cohort Studies, Female, Fetal Development, Humans, Infant, Newborn, Models, Statistical, Perinatal Death, Pregnancy, Pregnancy Complications, Prognosis, Regression Analysis, Risk Assessment, Stillbirth, Ultrasonography, Prenatal