When Explainability Meets Uncertainty: The Idea Behind ICeX

This paper was born from a simple question I kept asking myself:

Can we really trust an AI model if we don’t know both why it makes a prediction and how sure it is about it?

In brain imaging, explainable AI and uncertainty quantification have often evolved in parallel worlds — one focusing on transparency, the other on reliability. I wanted to bring them together.

That’s how ICeX (Individual Conformalized Explanation) came to life: a framework that combines SHAP, for feature-level interpretability, and Conformal Prediction, for statistically valid uncertainty estimates. Together, they allow us to look at each prediction not only in terms of its causes, but also its confidence.
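To give a flavor of the SHAP side, here is a minimal, brute-force Shapley-value sketch. This is not the paper's implementation: the toy model and the three "feature" values are invented for illustration, and real SHAP uses fast approximations rather than subset enumeration. What the sketch does show is the efficiency property that makes feature-level attributions add up exactly to the prediction:

```python
from itertools import combinations
from math import factorial
import numpy as np

def shapley_values(f, x, baseline):
    """Exact Shapley attributions for predictor f at point x.
    Features outside the coalition are replaced by `baseline` values."""
    d = len(x)
    phi = np.zeros(d)
    for j in range(d):
        others = [i for i in range(d) if i != j]
        for r in range(len(others) + 1):
            for S in combinations(others, r):
                # Shapley weight of a coalition of size |S|
                weight = factorial(len(S)) * factorial(d - len(S) - 1) / factorial(d)
                z = baseline.copy()
                z[list(S)] = x[list(S)]
                v_S = f(z)            # value without feature j
                z[j] = x[j]
                v_Sj = f(z)           # value with feature j added
                phi[j] += weight * (v_Sj - v_S)
    return phi

# Hypothetical toy model over three invented feature values
f = lambda z: z[0] * z[1] + 2.0 * z[2]
x = np.array([1.0, 2.0, 3.0])
baseline = np.zeros(3)
phi = shapley_values(f, x, baseline)
# Efficiency: attributions sum to f(x) - f(baseline)
print(phi, phi.sum(), f(x) - f(baseline))
```

The efficiency check at the end is the reason SHAP plots can be read additively: each feature's contribution, summed over features, reconstructs the model's output relative to the baseline.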

We tested ICeX on thalamic nuclei volumes from MRI scans of healthy young adults. The thalamus may not get as much attention as the cortex, but its subnuclei are incredibly sensitive to aging — and this finer anatomical detail turned out to matter.

The model reached a mean absolute error of 2.77 years and revealed the Left Lateral Geniculate, Left Paratenial, and Right Ventromedial nuclei as key contributors to brain aging. More importantly, it showed how each of these features influences not just the predicted brain age, but also the uncertainty around it.
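On the uncertainty side, split conformal prediction can be sketched in a few lines of NumPy. This is a generic illustration with synthetic data and a plain least-squares predictor, assumptions of mine rather than the actual ICeX model or dataset; the idea it demonstrates is that calibration residuals yield prediction intervals with a guaranteed marginal coverage level:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data standing in for the real features and target
n, d = 300, 5
X = rng.normal(size=(n, d))
w = rng.normal(size=d)
y = X @ w + rng.normal(scale=0.5, size=n)

# Split: train / calibration / test
X_tr, y_tr = X[:150], y[:150]
X_cal, y_cal = X[150:250], y[150:250]
X_te, y_te = X[250:], y[250:]

# Point predictor: ordinary least squares
coef, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)
predict = lambda A: A @ coef

# Split conformal: the (1 - alpha) quantile of calibration residuals
# gives intervals with at least (1 - alpha) marginal coverage.
alpha = 0.1
resid = np.abs(y_cal - predict(X_cal))
k = int(np.ceil((len(resid) + 1) * (1 - alpha)))
q = np.sort(resid)[k - 1]

# Prediction intervals on the test set
y_hat = predict(X_te)
lower, upper = y_hat - q, y_hat + q
coverage = np.mean((y_te >= lower) & (y_te <= upper))
print(f"empirical coverage: {coverage:.2f}  interval half-width: {q:.2f}")
```

The appeal of this recipe is that the coverage guarantee is distribution-free: it relies only on exchangeability of calibration and test points, not on the model being correct.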

For me, ICeX is a step toward a kind of AI that’s not just powerful, but also honest — an AI that tells you both what it thinks and how confident it is.

👉 Read the article in Computer Methods and Programs in Biomedicine

🔬 Unlocking Brain Insights with AI: Three New Studies on Brain Age, Well-being, and Sex Differences! 🧠📊

🚀 New Research Alert! 🚀
Excited to share that three of my conference papers have just been published! 🎉 These studies leverage large-scale international neuroimaging datasets and cutting-edge interdisciplinary AI techniques to mine knowledge from brain structure and function.

🔍 What’s inside?
⚖️ Sex-Based Brain Morphometry Differences: Conducted by my PhD student Chiara Camastra, this research explores Explainable AI (XGBoost, SHAP, EBM) to identify sex-specific brain structural patterns.
👩‍⚕️ Psychological Well-being Prediction: Conducted by my PhD student Assunta Pelagi, this study applies Machine Learning and SHAP to reveal key emotional and social predictors of well-being.
🧠 Brain Age Estimation: Using Random Forests and Conformal Prediction for uncertainty quantification in brain aging analysis.

These works highlight how AI, neuroscience, and cognitive science converge to uncover new insights into the human brain, driving advancements in precision medicine and neurological research.

💡 The big picture?
🔬 Harnessing large neuroimaging datasets
📊 Integrating AI-driven predictions with uncertainty quantification
🧩 Advancing explainable and interpretable machine learning

🔗 Read more:
📄 Brain Age Estimation: DOI: 10.1007/978-3-031-82487-6_10
📄 Well-being Prediction (by Assunta Pelagi): DOI: 10.1007/978-3-031-82487-6_19
📄 Sex-based Morphometry Analysis (by Chiara Camastra): DOI: 10.1007/978-3-031-82487-6_17

A big thank you to my PhD students Assunta Pelagi and Chiara Camastra for their contributions to these studies 💪💪💪!

#AI #Neuroscience #MachineLearning #ExplainableAI #BrainResearch #Neuroimaging #BigData #PrecisionMedicine #ACAIN2024

Quantification of differences between feature importance rankings in Machine Learning

Quantifying differences between feature importance rankings in #machinelearning #classification can enhance #interpretability and #explainability: we show how, using the rank-biased overlap similarity measure. Take a look at my new work!
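For a feel of the measure, here is a simplified rank-biased overlap (RBO) sketch. It assumes full, equal-length rankings (the general RBO of Webber et al. also handles truncated lists), and the brain-region names are hypothetical examples, not results from the chapter:

```python
def rbo(s, t, p=0.9):
    """Rank-biased overlap for two full, equal-length rankings.
    p in (0, 1) controls how strongly the top ranks are weighted."""
    k = len(s)
    assert len(t) == k
    score, seen_s, seen_t = 0.0, set(), set()
    a_d = 0.0
    for depth in range(1, k + 1):
        seen_s.add(s[depth - 1])
        seen_t.add(t[depth - 1])
        a_d = len(seen_s & seen_t) / depth       # agreement at this depth
        score += (1 - p) * p ** (depth - 1) * a_d
    return score + p ** k * a_d                  # residual term for full lists

# Two hypothetical feature-importance rankings
r1 = ["thalamus", "hippocampus", "amygdala", "putamen"]
r2 = ["thalamus", "amygdala", "hippocampus", "putamen"]
print(rbo(r1, r1))   # identical rankings -> 1.0 (up to float rounding)
print(rbo(r1, r2))   # one swap below the top rank -> close to 1
```

Unlike rank correlations such as Kendall's tau, RBO is top-weighted: a disagreement among the most important features costs more than the same disagreement further down the ranking, which is exactly what matters when comparing feature importance lists.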

https://link.springer.com/chapter/10.1007/978-3-031-15037-1_11

Also check out my oral presentation at Brain Informatics 2022.

[BI2022] Special Session XAIB – Video recording

In case you missed the live session, here is the recording of the BI2022 Special Session on EXPLAINABLE ARTIFICIAL INTELLIGENCE FOR UNVEILING THE BRAIN: FROM THE BLACK-BOX TO THE GLASS-BOX (XAIB)

15th July 2022, 14:00-16:00 (GMT+2), with Prof. Monica Hernandez, Dr. Bojan Bogdanovic and Dr. Antonio Parziale

https://drive.google.com/file/d/1yr_tbZ-9QXTQWHrkIlRi_9bohkGCEpzI/view?usp=sharing

[BI2021] Special Session XAIB – Video recording

In case you missed the live session, here is the recording of the BI2021 Special Session on EXPLAINABLE ARTIFICIAL INTELLIGENCE FOR UNVEILING THE BRAIN: FROM THE BLACK-BOX TO THE GLASS-BOX (XAIB)
18th September 2021, 14:00-16:00 UK time (GMT+1), with Dr. Rich Caruana, Dr. Michele Ferrante and Dr. Dimitris Pinotsis:

https://drive.google.com/file/d/1YHdJu_PHXH_s9To7dZQ66q2OFG4-6VNK/view?usp=sharing