When Explainability Meets Uncertainty: The Idea Behind ICeX

This paper was born from a simple question I kept asking myself:

Can we really trust an AI model if we don’t know both why it makes a prediction and how sure it is about it?

In brain imaging, explainable AI and uncertainty quantification have often evolved in parallel worlds — one focusing on transparency, the other on reliability. I wanted to bring them together.

That’s how ICeX (Individual Conformalized Explanation) came to life: a framework that combines SHAP, for feature-level interpretability, and Conformal Prediction, for statistically valid uncertainty estimates. Together, they allow us to look at each prediction not only in terms of its causes, but also its confidence.
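
For readers who want a concrete feel for how the two ingredients fit together, below is a minimal Python sketch. It is not the actual ICeX implementation (the toy data, model, and hyperparameters are placeholders), but it shows how SHAP attributions and a split-conformal prediction interval can be computed side by side for the same prediction:

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Toy stand-in for (subjects x thalamic-nuclei volumes) and chronological age.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 10))
y = 45 + 3 * X[:, 0] + 2 * X[:, 1] + rng.normal(scale=2.0, size=500)

X_train, X_cal, y_train, y_cal = train_test_split(X, y, test_size=0.3, random_state=0)

# 1) Any point-prediction model for brain age.
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_train, y_train)

# 2) SHAP: per-subject, per-feature contributions to the predicted age.
explainer = shap.TreeExplainer(model)

# 3) Split conformal prediction: calibration residuals yield a quantile that
#    turns every point prediction into an interval with ~(1 - alpha) coverage.
alpha = 0.1
residuals = np.abs(y_cal - model.predict(X_cal))
n = len(residuals)
q = np.quantile(residuals, np.ceil((1 - alpha) * (n + 1)) / n)

x_new = X[:1]                                   # one new subject
pred = model.predict(x_new)[0]
contrib = explainer.shap_values(x_new)[0]       # feature-level attributions for this subject
print(f"predicted brain age: {pred:.1f} years, 90% interval: [{pred - q:.1f}, {pred + q:.1f}]")
print("top contributing features (indices):", np.argsort(-np.abs(contrib))[:3])
```

The article itself goes further, relating the feature-level attributions to the uncertainty estimate, so that each feature's effect on both the prediction and its confidence can be read per subject.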

We tested ICeX on thalamic nuclei volumes from MRI scans of healthy young adults. The thalamus may not get as much attention as the cortex, but its subnuclei are incredibly sensitive to aging — and this finer anatomical detail turned out to matter.

The model reached a mean absolute error of 2.77 years and revealed the Left Lateral Geniculate, Left Paratenial, and Right Ventromedial nuclei as key contributors to brain aging. More importantly, it showed how each of these features influences not just the predicted brain age, but also the uncertainty around it.

For me, ICeX is a step toward a kind of AI that’s not just powerful, but also honest — an AI that tells you both what it thinks and how confident it is.

👉 Read the article in Computer Methods and Programs in Biomedicine

XAIB2025 — Explainable AI for Unveiling the Brain

The fifth edition of XAIB – Explainable Artificial Intelligence for Unveiling the Brain was held as part of the Brain Informatics 2025 congress, continuing a journey that began five years ago with a simple yet ambitious goal: to make Artificial Intelligence not only powerful but also understandable and trustworthy.

The Focus of XAIB2025

This year’s edition explored how explainability and uncertainty quantification can be combined to build AI systems that are both interpretable and reliable — two fundamental requirements for their translation into neuroscience and clinical practice.

XAIB2025 highlighted the importance of moving beyond accuracy, emphasizing reproducibility, transparency, and trust as the pillars of next-generation AI in medicine.

Invited Speakers

The session featured three outstanding invited speakers whose work is shaping the field of explainable and trustworthy AI:

  • Valeriy Manokhin – Conformal Prediction for Trustworthy, Explainable AI
  • Vincenzo Dentamaro – Deterministic Explainability with EVIDENCE and MuPAX Theories
  • Felice Franchini – Beyond Stochastic XAI: Deterministic and Reproducible Explanations in Genomic Data

Their contributions provided complementary perspectives — from theoretical frameworks to methodological rigor and applications in biomedical data — offering a unified view of how explainability and reliability can coexist.


🎥 Watch the full recording of XAIB2025 here:

🔬 Unlocking Brain Insights with AI: Three New Studies on Brain Age, Well-being, and Sex Differences! 🧠📊

🚀 New Research Alert! 🚀
Excited to share that three of my conference proceedings papers have just been published! 🎉 These studies leverage large-scale international neuroimaging datasets and cutting-edge interdisciplinary AI techniques to mine knowledge from brain structure and function.

🔍 What’s inside?
⚖️ Sex-Based Brain Morphometry Differences: Conducted by my PhD student Chiara Camastra, this research applies Explainable AI (XGBoost, SHAP, EBM) to identify sex-specific brain structural patterns.
👩‍⚕️ Psychological Well-being Prediction: Conducted by my PhD student Assunta Pelagi, this study applies Machine Learning and SHAP to reveal key emotional and social predictors of well-being.
🧠 Brain Age Estimation: Using Random Forests and Conformal Prediction for uncertainty quantification in brain aging analysis.

These works highlight how AI, neuroscience, and cognitive science converge to uncover new insights into the human brain, driving advancements in precision medicine and neurological research.

💡 The big picture?
🔬 Harnessing large neuroimaging datasets
📊 Integrating AI-driven predictions with uncertainty quantification
🧩 Advancing explainable and interpretable machine learning

🔗 Read more:
📄 Brain Age Estimation: DOI: 10.1007/978-3-031-82487-6_10
📄 Well-being Prediction (by Assunta Pelagi): DOI: 10.1007/978-3-031-82487-6_19
📄 Sex-based Morphometry Analysis (by Chiara Camastra): DOI: 10.1007/978-3-031-82487-6_17

A big thank you to my PhD students Assunta Pelagi and Chiara Camastra for their contributions to these studies 💪💪💪!

#AI #Neuroscience #MachineLearning #ExplainableAI #BrainResearch #Neuroimaging #BigData #PrecisionMedicine #ACAIN2024

“AI-Powered Neurology: Enhancing Diagnostic and Prognostic Decision Support” at the 2025 IEEE MetroXRAINE

🚨 Call for Papers! 🚨
📢 Exciting news! I am thrilled to co-organize with Selene Tomassini and Federica Aracri the Thematic Session (TS) on “AI-Powered Neurology: Enhancing Diagnostic and Prognostic Decision Support” at the 2025 IEEE International Conference on Metrology for eXtended Reality, Artificial Intelligence, and Neural Engineering (MetroXRAINE 2025) in Ancona, Italy (October 22-24, 2025).

🔍 This session will explore cutting-edge AI applications in clinical decision support, focusing on neurological diagnostics and prognostics. We welcome submissions addressing:

✅ Neuroimage processing & analysis
✅ AI-driven early detection of neurodegenerative diseases
✅ Deep learning for neuroanatomical biomarkers
✅ Explainable & generative AI in neuroimaging
✅ Uncertainty quantification in AI models
✅ Data fusion techniques for multi-modal neurological data
…and much more!

📅 Important Deadlines:
📌 March 15, 2025 – Abstract Submission Deadline
📌 April 20, 2025 – Acceptance Notification
📌 May 15, 2025 – Full Paper Submission Deadline
📌 July 31, 2025 – Final Paper Submission

🔗 Join us in advancing AI-powered neurology! Submit your research and contribute to the future of clinical decision support.

For more details, feel free to contact us or check out the official conference page https://metroxraine.org/thematic-session-3

🌟 Join Us at XAIB 2024! 🌟

The 4th Special Session on Explainable Artificial Intelligence for Unveiling the Brain: From Black-Box to Glass-Box

📍 Location: KX Building (11th Floor), Conference Room X11.1 and ONLINE

📅 Date: 15 December 2024

⏰ Time: 4:00 PM – 5:50 PM (UTC+7)

🎓 Session Chair: Chiara Camastra

Neuroscience Research Center, Magna Graecia University of Catanzaro, Italy

Dive into cutting-edge research and discussions at XAIB 2024, where we explore how Explainable AI is transforming our understanding of the brain and advancing neuroscience.

🧠 Program Overview

4:00 PM – 4:15 PM

Opening Remark

🎙 Prof. Alessia Sarica

Neuroscience Research Center, Magna Graecia University of Catanzaro, Italy

4:15 PM – 4:45 PM

Invited Talk

Title: Supervised ML in Science – From Interpretability to Robustness

🎙 Christoph Molnar

Department of Statistics, LMU Munich, Germany; Leibniz Institute for Prevention Research, Germany

4:45 PM – 5:00 PM

Invited Talk

Title: Deep Learning and Explainability: How Far Are We?

🎙 Dr. Sanjay Ghosh

Indian Institute of Technology Kharagpur, India

5:00 PM – 5:15 PM

B278: A Convolutional Neural Network with Feature Selection for Generating Explainable 1D Image Information for Brain Disease Diagnosis (online)

🎙 Luna M. Zhang

Stony Brook University, NY, USA

5:15 PM – 5:30 PM

B244: Probing Temporal Filters of Vision via a Falsifiable Model of Flicker Fusion (online)

🎙 Keerthi S Chandran, Kuntal Ghosh

Indian Statistical Institute, India

5:30 PM – 5:45 PM

B212: A Comparison of ANN-Optimization and Logistic Regression – An Example of the Acceptance of EEG Devices (online)

🎙 Tina Zeilner, Andreas Uphaus, and Bodo Vogt

Otto-von-Guericke-Universität Magdeburg, Germany; Hochschule Bielefeld, Germany

5:45 PM – 5:50 PM

Closing Remark

🎙 Prof. Alessia Sarica

Neuroscience Research Center, Magna Graecia University of Catanzaro, Italy

For more details or inquiries, contact: chiara.camastra at unicz.it

Neurodegenerative Disease Prediction: Impact of Imputation Techniques

The challenges posed by neurodegenerative diseases like Alzheimer’s and Parkinson’s demand sophisticated technological solutions to improve early diagnosis and patient outcomes. Central to these efforts is the effective handling of missing data in longitudinal studies, a common issue that can significantly impact the performance of predictive models.

Alzheimer’s Disease: Enhancing Prediction through Imputation Strategies

Based on the article: “Comparison between External and Internal Imputation of Missing Values in Longitudinal Data for Alzheimer’s Disease Diagnosis”

In this article, Dr. Federica Aracri explored the impact of various imputation techniques on the accuracy of longitudinal deep learning models designed to predict Alzheimer’s Disease (AD) progression. Using data from the Alzheimer’s Disease Neuroimaging Initiative (ADNI), the study evaluated four models—Recurrent Neural Networks (RNN), Long Short-Term Memory (LSTM), DeepRNN, and ODE-RGRU—coupled with six imputation strategies, including advanced methods such as MissForest and Multiple Imputation by Chained Equations (MICE).

The findings revealed that models such as ODE-RGRU and DeepRNN, when paired with external imputation techniques, significantly outperformed those relying on internal imputation. For instance, the combination of ODE-RGRU with median imputation achieved an mAUC value of 0.9 ± 0.002, and DeepRNN with MissForest reached an mAUC of 0.91 ± 0.004. These results underscore the critical role that robust imputation methods play in enhancing the accuracy of AD progression models.

Based on the article: “Imputation of Missing Clinical, Cognitive and Neuroimaging Data of Dementia using MissForest, a Random Forest Based Algorithm”

Another significant contribution by Dr. Aracri is the study presented in this article, in which she assessed the reliability of the MissForest algorithm in handling missing data from Alzheimer’s Disease (AD) and Mild Cognitive Impairment (MCI) patients. The study compared MissForest with the commonly used Mean Imputation (Imean) method by simulating increasing levels of missing data in the ADNI dataset.

The research concluded that MissForest outperformed Imean in terms of overall imputation accuracy, particularly when considering the average error across all features. However, it was noted that MissForest had slightly higher errors than Imean for specific cognitive tests. These insights highlight the effectiveness of MissForest in handling missing data in dementia research, while also cautioning against its use with highly skewed variables.
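
As a rough illustration of this kind of experiment, the sketch below simulates increasing levels of missing data in a complete toy matrix and compares mean imputation with an iterative random-forest imputer (scikit-learn's IterativeImputer, used here as a stand-in for MissForest). The data and missingness rates are illustrative, not the ADNI setup:

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer, SimpleImputer
from sklearn.ensemble import RandomForestRegressor

# Correlated toy features standing in for clinical, cognitive and imaging variables.
rng = np.random.default_rng(0)
latent = rng.normal(size=(300, 2))
X_full = latent @ rng.normal(size=(2, 8)) + 0.3 * rng.normal(size=(300, 8))

for frac in (0.1, 0.2, 0.3):                    # increasing levels of simulated missingness
    mask = rng.random(X_full.shape) < frac
    X_miss = X_full.copy()
    X_miss[mask] = np.nan

    X_mean = SimpleImputer(strategy="mean").fit_transform(X_miss)
    X_rf = IterativeImputer(
        estimator=RandomForestRegressor(n_estimators=50, random_state=0),
        max_iter=5, random_state=0).fit_transform(X_miss)

    # Imputation error measured only on the entries that were masked out.
    err_mean = np.mean(np.abs(X_mean[mask] - X_full[mask]))
    err_rf = np.mean(np.abs(X_rf[mask] - X_full[mask]))
    print(f"{int(frac * 100):>2}% missing | mean-imputation MAE {err_mean:.3f} | RF-imputation MAE {err_rf:.3f}")
```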

Parkinson’s Disease: Classifying Phenotypes with Machine Learning

Based on the article: “Impact of Imputation Methods on Supervised Classification: A Multiclass Study on Patients with Parkinson’s Disease and Subjects with Scans Without Evidence of Dopaminergic Deficit”

Expanding on this work, Dr. Aracri also investigated the impact of imputation methods on supervised classification in the context of Parkinson’s Disease (PD). This study focused on the classification of PD patients, healthy controls, and a unique subgroup known as Scans Without Evidence of Dopaminergic Deficit (SWEDD). Two imputation approaches—MissForest and Mean Imputation (Imean)—were compared to assess their influence on the performance of tree-based algorithms, including Random Forest, XGBoost, and LightGBM.

The results demonstrated that while Mean Imputation occasionally led to overfitting, MissForest consistently retained more accurate information, proving to be the superior method for handling missing data in this context. This finding is particularly valuable for research into rare phenotypes of Parkinson’s Disease, where the accurate imputation of missing data is crucial for reliable classification outcomes.
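
To make the downstream effect concrete, here is a small, self-contained sketch (again with placeholder data, not the actual PD cohort) that trains the same tree-based classifier on top of the two imputation strategies and compares cross-validated accuracy; XGBoost or LightGBM would plug into the pipeline in the same way:

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer, SimpleImputer
from sklearn.ensemble import RandomForestClassifier, ExtraTreesRegressor
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

# Toy 3-class problem standing in for PD patients vs. healthy controls vs. SWEDD.
X, y = make_classification(n_samples=400, n_features=12, n_informative=6,
                           n_classes=3, random_state=0)
X[np.random.default_rng(0).random(X.shape) < 0.2] = np.nan   # 20% missing at random

imputers = {
    "Imean (mean imputation)": SimpleImputer(strategy="mean"),
    "MissForest-like (iterative RF)": IterativeImputer(
        estimator=ExtraTreesRegressor(n_estimators=50, random_state=0),
        max_iter=5, random_state=0),
}
for name, imputer in imputers.items():
    # The imputer is fitted inside each fold, so no information leaks across splits.
    pipe = make_pipeline(imputer, RandomForestClassifier(n_estimators=200, random_state=0))
    acc = cross_val_score(pipe, X, y, cv=5).mean()
    print(f"{name}: 5-fold accuracy {acc:.3f}")
```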

Broader Implications and Future Directions

These works, conducted by Dr. Federica Aracri under my supervision, contribute significantly to the optimization of machine learning models for neurodegenerative disease research. The insights gained from these studies not only advance the understanding and prediction of diseases like Alzheimer’s and Parkinson’s but also have broader implications for other fields, particularly telemedicine. As healthcare continues to evolve with the integration of telehealth platforms, the methodologies developed in these studies could greatly enhance the reliability and utility of patient data collected remotely.

Moving forward, research will focus on incorporating additional biomarkers and conducting more extensive analyses to further refine these models. The ultimate goal is to improve early detection and personalized treatment strategies for neurodegenerative diseases, thereby enhancing patient outcomes on a global scale.

🚶‍♂️ Bringing Explainability to Gait Disorder Prediction with AI 🚶‍♀️

Highlights from our recent work, presented by Dr. Vera Gramigna at the Explainable AI for Biomedical Images and Signals Special Session of the 32nd Italian Workshop on Neural Networks (WIRN) 2024!

📄 Title of the Paper: Bringing Explainability to the Prediction of Gait Disorders from Ground Reaction Force (GRF): A Machine Learning Study on the GaitRec Dataset

Our research focuses on improving the prediction of gait disorders by analyzing ground reaction force (GRF) patterns using advanced machine learning techniques. We leveraged the GaitRec dataset, which includes GRF measurements from individuals with various musculoskeletal conditions, to develop a model that can distinguish between healthy controls and those with gait disorders.

What makes our work unique is the use of Explainable Boosting Machines (EBMs). Unlike traditional “black-box” models, EBMs provide transparency by showing which specific features of the gait data contribute to the predictions. This not only enhances the model’s accuracy but also allows clinicians to understand the reasoning behind each prediction, making AI tools more trustworthy and easier to integrate into clinical practice.
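
For readers unfamiliar with EBMs, the snippet below shows the general pattern, assuming the open-source interpret package and toy data (the features here are placeholders for the GRF frames used in the paper): the model is fitted like any scikit-learn estimator, and its term importances and per-subject explanations can be read off directly.

```python
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Toy binary problem standing in for healthy controls vs. gait disorders (GRF features).
X, y = make_classification(n_samples=600, n_features=20, n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ebm = ExplainableBoostingClassifier(random_state=0).fit(X_train, y_train)
print("test accuracy:", ebm.score(X_test, y_test))

# Global explanation: which features (e.g. which GRF frames) matter most overall.
top_terms = sorted(zip(ebm.term_names_, ebm.term_importances()), key=lambda t: -t[1])[:5]
print("most important terms:", top_terms)

# Local explanation: per-feature contributions to one subject's prediction.
local = ebm.explain_local(X_test[:1], y_test[:1])
```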

Key results:

  • Our model achieved an accuracy of 88.2% in predicting gait disorders.
  • We identified that specific frames of the right vertical GRF were crucial in distinguishing between healthy individuals and those with gait disorders.
  • The model’s explainability also revealed potential areas of improvement, such as the challenge in accurately classifying healthy controls, likely due to the diversity within the gait disorder category.

This work is a step forward in combining AI with clinical expertise, paving the way for more precise and understandable diagnostic tools in healthcare.

#AI #MachineLearning #GaitAnalysis #ExplainableAI #BiomedicalEngineering #WIRN2024

Call for Papers: Unveiling the Brain with Explainable AI – XAIB2024 Session

We announce the 4th Special Session on Explainable Artificial Intelligence for Unveiling the Brain: From the Black-Box to the Glass-Box (XAIB2024). This half-day hybrid session (in-person and online) is part of the prestigious Brain Informatics Conference 2024.

📅 Date: December 13-15, 2024
📍 Location: Hybrid (In-person and Online)

About XAIB2024: Artificial Intelligence (AI) and Machine Learning (ML) have revolutionized our understanding of the brain, aiding in the analysis of neuroimages and the diagnosis of neurodegenerative diseases. However, the “black-box” nature of these algorithms poses challenges in clinical trust and decision-making. XAIB2024 focuses on transforming these black boxes into “glass-boxes” with interpretable and explainable AI.

Key Highlights:

Co-chairs: Prof. Alessia Sarica and Dr. Sanjay Ghosh.

Scope: The session invites high-quality papers and abstracts on intelligible, interpretable, and understandable AI approaches applied to Neuroscience, Neuroimaging, and Neuropsychological data.

Topics: Post-hoc explainability techniques, transparent models, global and local explanations, systematic reviews, and meta-analyses.

Notable Past Speakers: Dr. Rich Caruana, Dr. Michele Ferrante, Dr. Dimitris Pinotsis, Prof. Monica Hernandez, and more.

Important Dates:
Submission Deadline: September 30, 2024
Review Deadline: October 15, 2024
Acceptance Notification: October 30, 2024
Camera Ready: November 5, 2024
Program Ready: December 2, 2024

Call for Papers: Submit your research and contribute to the groundbreaking dialogue on explainable AI in neuroscience. All accepted papers will be included in the conference proceedings.

Don’t miss this opportunity to be part of a transformative session that bridges the gap between AI and neuroscience. For more details and to submit your work, visit Brain Informatics Conference 2024.

🔗 Stay tuned for updates and join the conversation using #XAIB2024 and #BrainInformatics2024!

For information please contact:
Chiara Camastra, chiara.camastra@unicz.it
Assunta Pelagi, assunta.pelagi@studenti.unicz.it

Advancing Alzheimer’s Risk Prediction with Explainable AI: Insights into Sex Differences

I’m thrilled to share our latest research, recently published in Brain Informatics and Brain Sciences. Our studies focus on enhancing the prediction of Alzheimer’s disease (AD) progression from mild cognitive impairment (MCI) using advanced explainable AI techniques.

In Brain Informatics, we demonstrated how Random Survival Forests (RSF) combined with SHapley Additive exPlanations (SHAP) improve the accuracy and interpretability of predicting AD conversion risk. Key biomarkers like FDG-PET, ABETA42, and the Hypometabolic Convergence Index (HCI) emerged as critical factors.

Building on this, our Brain Sciences article delves into the sex-specific differences in AD risk prediction. We found that while men and women share common influential biomarkers, significant differences exist in the importance of hippocampal volume and cognitive measures such as verbal memory and executive function. Our models revealed that females generally have a higher predicted risk of progressing to AD, emphasizing the need for sex-specific diagnostic approaches.

These studies underscore the potential of combining neuroimaging with explainable AI to enhance early diagnosis and personalized treatment for Alzheimer’s patients.

#AlzheimersDisease #AIinHealthcare #Neuroscience #SexDifferences #BrainHealth #MedicalResearch

🧠 Understanding Brain Aging in Parkinson’s Disease: A New Diagnostic Approach 🧠

I’m excited to share our latest research, presented at the Explainable AI for Biomedical Images and Signals Special Session of the 32nd Italian Workshop on Neural Networks (WIRN 2024)! Our work focuses on the crucial role of the thalamus in Parkinson’s disease (PD) and how deviations between brain age and chronological age—known as the brain-age gap (BAG)—can offer insights into disease progression.

Using MRI scans and advanced Explainable Boosting Machines (EBM), we’ve developed a novel, interpretable machine learning model that accurately estimates BAG in PD patients. Our findings reveal a complex pattern of hypertrophy and atrophy in thalamic nuclei volumes in PD patients, highlighting specific nuclei as key predictors of brain age. This approach not only improves early diagnosis and prognosis but also opens doors to personalized treatment plans for those with Parkinson’s disease.
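
As a minimal illustration of the brain-age-gap idea (with synthetic data, not the MRI-derived thalamic volumes used in the study), assuming the interpret package:

```python
import numpy as np
from interpret.glassbox import ExplainableBoostingRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in: columns mimic thalamic-nuclei volumes, target is chronological age.
rng = np.random.default_rng(1)
X = rng.normal(size=(400, 12))
age = 60 + 5 * X[:, 0] - 3 * X[:, 1] + rng.normal(scale=2.0, size=400)

X_train, X_test, age_train, age_test = train_test_split(X, age, random_state=0)

# Fit an interpretable brain-age model on the training subjects...
ebm = ExplainableBoostingRegressor(random_state=0).fit(X_train, age_train)

# ...then the brain-age gap (BAG) is simply predicted brain age minus chronological age.
bag = ebm.predict(X_test) - age_test
print(f"mean BAG: {bag.mean():+.2f} years (positive = brain 'older' than chronological age)")

# The EBM's per-feature terms indicate which nuclei push an individual's brain age up or down.
local = ebm.explain_local(X_test[:1], age_test[:1])
```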

This research underscores the potential of combining neuroimaging with cutting-edge AI to enhance our understanding of neurological disorders. Stay tuned for more updates on how this could revolutionize PD diagnosis and treatment!

#ParkinsonsDisease #BrainHealth #AIinHealthcare #Neuroscience #MRI #MedicalResearch #WIRN2024

The PowerPoint presentation is attached.