[Talk] Alina Jade Barnett: Inherently Interpretable Deep Learning Models

When: Thursday, March 12, 3:00 PM
Where: Avedisian 205
Abstract
AI models now perform high-stakes tasks traditionally reserved for skilled professionals, often surpassing human expert performance. Despite these advances, the “black box” (i.e., uninterpretable) nature of many machine learning algorithms poses significant challenges. Opaque models resist troubleshooting, cannot justify their decisions, and lack accountability—limitations that have slowed their adoption in critical workflows. My research addresses this challenge by developing interpretable deep learning models for both general tasks and specific clinical decisions in mammography and neurology. Through novel neural network architectures, objective functions, and training regimes, I create models that achieve accuracy comparable to conventional black box systems while remaining inherently interpretable. These models are constrained to provide faithful explanations for their predictions, functioning not merely as decision-makers but as decision aids that communicate their reasoning in human-understandable terms. This human-centered design enables expert users to scrutinize the model’s logic, appropriately calibrate their trust, and intervene when necessary. The result is a more collaborative human-AI partnership that maintains both high performance and meaningful human oversight.
Bio
Alina Jade Barnett is an assistant professor in the Department of Computer Science and Statistics at the University of Rhode Island. Previously, she held postdoctoral researcher and student positions in the Interpretable Machine Learning lab led by Cynthia Rudin at Duke University. She received her undergraduate degree in physics from McMaster University in Canada. Alina researches interpretable deep learning for computer vision with applications in clinical medicine. Outside of research, she is a classical musician, a member of the URI sea shanty social club, and a former varsity coxswain.
