Confirmation of Status

Abstract:

Prognostication is one of the core tasks in medical practitioners’ daily work. Current evidence has demonstrated the importance of risk models in prognostication and their benefit in improving medical prescribing, but the unsatisfactory performance of established clinical models has also drawn widespread attention. Given the central role of prediction in prognostication, together with growing access to large-scale datasets such as electronic health records (EHRs) from millions of individuals, machine intelligence, especially deep learning, is likely to have transformative effects on medical care. Beyond accurate risk prediction, model explainability and the ability to quantify uncertainty are also important. These properties are essential for a model to earn trust and understanding, which is crucial in clinical decision-making. However, deep learning models are often perceived as ‘black boxes’ that offer little insight into the medical phenomena they model, which hinders their application in healthcare. We therefore discuss these challenges of risk prediction, explainability, and uncertainty quantification in prognostication.