Xplainer: From X-Ray Observations to Explainable Zero-Shot Diagnosis

Authors: Chantal Pellegrini, Matthias Keicher, Ege Özsoy, Petra Jiraskova, Rickmer Braren, Nassir Navab


Automated diagnosis prediction from medical images is a valuable resource to support clinical decision-making. However, such systems usually need to be trained on large amounts of annotated data, which is often scarce in the medical domain. Zero-shot methods address this challenge by allowing flexible adaptation to new settings with different clinical findings without relying on labeled data. Further, to integrate automated diagnosis into the clinical workflow, methods should be transparent and explainable, increasing medical professionals' trust and facilitating correctness verification. In this work, we introduce Xplainer, a novel framework for explainable zero-shot diagnosis in the clinical setting. Xplainer adapts the classification-by-description approach of contrastive vision-language models to the multi-label medical diagnosis task. Specifically, instead of directly predicting a diagnosis, we prompt the model to classify the existence of descriptive observations, which a radiologist would look for on an X-ray scan, and use the descriptor probabilities to estimate the likelihood of a diagnosis. Our model is explainable by design, as the final diagnosis prediction is directly based on the predictions of the underlying descriptors. We evaluate Xplainer on two chest X-ray datasets, CheXpert and ChestX-ray14, and demonstrate its effectiveness in improving the performance and explainability of zero-shot diagnosis. Our results suggest that Xplainer provides a more detailed understanding of the decision-making process and can be a valuable tool for clinical diagnosis.
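The descriptor-based prediction described above can be illustrated with a minimal sketch. Note that this is an assumption-laden illustration, not the paper's implementation: the descriptor names and probability values are hypothetical placeholders (in practice they would come from a contrastive vision-language model scoring positive versus negative prompts), and the mean aggregation is one simple choice of how descriptor probabilities could be combined into a diagnosis likelihood.

```python
def diagnosis_probability(descriptor_probs: dict) -> float:
    """Aggregate per-descriptor probabilities into a diagnosis likelihood.

    Uses a simple mean; the paper's exact aggregation rule may differ.
    """
    return sum(descriptor_probs.values()) / len(descriptor_probs)

# Hypothetical descriptors a radiologist might look for on an X-ray scan
# for a given finding, with placeholder probabilities standing in for the
# vision-language model's outputs.
descriptors = {
    "airspace consolidation": 0.82,
    "air bronchograms": 0.64,
    "increased opacity": 0.77,
}

prob = diagnosis_probability(descriptors)
print(f"Estimated diagnosis probability: {prob:.2f}")
```

Because the final score is computed directly from the individual descriptor probabilities, each contributing observation remains inspectable, which is the sense in which such a model is explainable by design.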


