
A human-in-the-loop explanation framework for morphologically transparent AI predictions from whole-slide images – npj Digital Medicine

  • Article
  • Open access
  • Peiliang Lou1,
  • Yitan Zhu2,
  • Nicholas Chia2,
  • Roopa Kumari3,
  • William Yang3,
  • Yan Wang4,
  • Brenna C. Novotny1,
  • Stacey J. Winham1,
  • Ruifeng Guo5,
  • Ellen L. Goode1,
  • Yajue Huang3,
  • Wenchao Han6,
  • Tianshu Feng7 &
  • Chen Wang1 

npj Digital Medicine (2026)

We are providing an unedited version of this manuscript to give early access to its findings. Before final publication, the manuscript will undergo further editing. Please note there may be errors present which affect the content, and all legal disclaimers apply.


Abstract

Deep learning models enable the prediction of clinical endpoints from whole-slide images (WSIs), but many such models function as “black boxes”: they lack transparency about whether, and which, histomorphological patterns drive their predictions, which hinders interpretability and clinical adoption. Here we propose a human-in-the-loop explanation framework, MorphoXAI, which provides both local and global interpretability for deep learning models by incorporating human-expert interpretations. At the global level, it reveals the histomorphological patterns on which the model consistently relies to distinguish between classes of WSIs, as well as the patterns associated with confusion between classes. At the local level, it indicates which of these patterns contribute to the prediction for an individual WSI and which regions within the slide correspond to them. We validated our method across multiple deep learning–based WSI analysis tasks spanning different tissue types. The results show that our framework generates explanations that accurately reflect the histomorphology underlying the model’s predictions at both global and local levels. In human evaluations of interpretability and clinical utility in diagnostic contexts, our explanations were rated easy to interpret, rich in diagnostic features, and directly helpful for diagnostic decision-making, thereby enhancing pathologist–AI collaboration. Our work highlights that unifying global and local explanations, and grounding them in expert-interpreted morphology, enhances the interpretability and verifiability of deep learning models, facilitating their transparent deployment in clinical practice.

Acknowledgements

This work has been supported by NCI R01 CA248288, an Ovarian SPORE [NCI P50 CA136393] developmental research grant, and by the generous support of Schmidt Sciences and the Susan Morrow Legacy Foundation. The funders had no role in the study design, data analysis, manuscript preparation, or the decision to submit the work for publication.

Author information

Authors and Affiliations

  1. Division of Computational Biology, Mayo Clinic, Rochester, MN, USA

    Peiliang Lou, Brenna C. Novotny, Stacey J. Winham, Ellen L. Goode & Chen Wang

  2. Division of Data Science and Learning, Argonne National Laboratory, Lemont, IL, USA

    Yitan Zhu & Nicholas Chia

  3. Department of Laboratory Medicine and Pathology, Mayo Clinic, Rochester, MN, USA

    Roopa Kumari, William Yang & Yajue Huang

  4. Department of Pathology, Second People’s Hospital of Wuhu, Anhui, China

    Yan Wang

  5. Division of Anatomic Pathology, Department of Laboratory Medicine and Pathology, Mayo Clinic, Jacksonville, FL, USA

    Ruifeng Guo

  6. Division of Computational Pathology and Informatics, Department of Laboratory Medicine and Pathology, Mayo Clinic, Rochester, MN, USA

    Wenchao Han

  7. Department of Systems Engineering and Operations Research, George Mason University, Fairfax, VA, USA

    Tianshu Feng

Authors

  1. Peiliang Lou
  2. Yitan Zhu
  3. Nicholas Chia
  4. Roopa Kumari
  5. William Yang
  6. Yan Wang
  7. Brenna C. Novotny
  8. Stacey J. Winham
  9. Ruifeng Guo
  10. Ellen L. Goode
  11. Yajue Huang
  12. Wenchao Han
  13. Tianshu Feng
  14. Chen Wang

Corresponding authors

Correspondence to Yajue Huang or Chen Wang.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Lou, P., Zhu, Y., Chia, N. et al. A human-in-the-loop explanation framework for morphologically transparent AI predictions from whole-slide images. npj Digit. Med. (2026). https://doi.org/10.1038/s41746-026-02741-z

