This paper was born from a simple question I kept asking myself:
Can we really trust an AI model if we don’t know both why it makes a prediction and how sure it is about it?
In brain imaging, explainable AI and uncertainty quantification have often evolved in parallel worlds — one focusing on transparency, the other on reliability. I wanted to bring them together.
That’s how ICeX (Individual Conformalized Explanation) came to life: a framework that combines SHAP, for feature-level interpretability, and Conformal Prediction, for statistically valid uncertainty estimates. Together, they allow us to look at each prediction not only in terms of its causes, but also its confidence.
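To make the general recipe concrete, here is a minimal Python sketch of the idea (not the paper's implementation): fit a regression model, calibrate a distribution-free prediction interval with split conformal prediction, and attribute each individual prediction to its input features with SHAP. The synthetic data, the gradient-boosting model, and the 90% coverage level are all illustrative assumptions.

```python
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-in for the real data: feature columns play the role of
# thalamic-nuclei volumes, the target plays the role of chronological age.
X = rng.normal(size=(300, 25))
y = 25 + 3 * X[:, 0] + 2 * X[:, 1] + rng.normal(scale=2, size=300)

# Three-way split: fit the model, calibrate the interval, then test.
X_fit, X_rest, y_fit, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
X_cal, X_test, y_cal, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

# 1) Point-prediction model for "brain age".
model = GradientBoostingRegressor(random_state=0).fit(X_fit, y_fit)

# 2) Split conformal prediction: absolute residuals on the calibration set
#    give a distribution-free interval half-width with ~90% coverage.
alpha = 0.1  # target miscoverage
residuals = np.abs(y_cal - model.predict(X_cal))
q_level = min(1.0, np.ceil((1 - alpha) * (len(residuals) + 1)) / len(residuals))
q = np.quantile(residuals, q_level)

# 3) SHAP: per-subject feature attributions for each prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# 4) Report, per subject, the prediction, its conformal interval,
#    and the features that contributed most to it.
preds = model.predict(X_test)
for i in range(3):
    top = np.argsort(-np.abs(shap_values[i]))[:3]
    print(f"subject {i}: pred={preds[i]:.1f} yrs, "
          f"90% interval=[{preds[i] - q:.1f}, {preds[i] + q:.1f}], "
          f"top features={top.tolist()}")
```

A sketch like this only yields a constant-width interval around every prediction; ICeX goes further by relating each subject's feature contributions to the uncertainty itself, which is what the paper details.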
We tested ICeX on thalamic nuclei volumes from MRI scans of healthy young adults. The thalamus may not get as much attention as the cortex, but its subnuclei are incredibly sensitive to aging — and this finer anatomical detail turned out to matter.
The model reached a mean absolute error of 2.77 years and revealed the Left Lateral Geniculate, Left Paratenial, and Right Ventromedial nuclei as key contributors to brain aging. More importantly, it showed how each of these features influences not just the predicted brain age, but also the uncertainty around it.
For me, ICeX is a step toward a kind of AI that’s not just powerful, but also honest — an AI that tells you both what it thinks and how confident it is.
👉 Read the article in Computer Methods and Programs in Biomedicine