XAIB2025 — Explainable AI for Unveiling the Brain

The fifth edition of XAIB – Explainable Artificial Intelligence for Unveiling the Brain – was held as part of the Brain Informatics 2025 congress, continuing a journey that began five years ago with a simple yet ambitious goal: to make Artificial Intelligence not only powerful but also understandable and trustworthy.

The Focus of XAIB2025

This year’s edition explored how explainability and uncertainty quantification can be combined to build AI systems that are both interpretable and reliable — two fundamental requirements for their translation into neuroscience and clinical practice.

XAIB2025 highlighted the importance of moving beyond accuracy, emphasizing reproducibility, transparency, and trust as the pillars of next-generation AI in medicine.

Invited Speakers

The session featured three outstanding invited speakers whose work is shaping the field of explainable and trustworthy AI:

  • Valeriy Manokhin – Conformal Prediction for Trustworthy, Explainable AI
  • Vincenzo Dentamaro – Deterministic Explainability with EVIDENCE and MuPAX Theories
  • Felice Franchini – Beyond Stochastic XAI: Deterministic and Reproducible Explanations in Genomic Data

Their contributions provided complementary perspectives — from theoretical frameworks to methodological rigor and applications in biomedical data — offering a unified view of how explainability and reliability can coexist.
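For a hands-on feel of the uncertainty-quantification side of the session, here is a minimal sketch of split conformal prediction, the framework at the heart of Valeriy Manokhin's talk; the dataset, model, and miscoverage level alpha are illustrative assumptions, not material presented at XAIB2025.

```python
# Minimal sketch of split conformal prediction for classification.
# Dataset, model, and alpha are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
X_cal, X_test, y_cal, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Nonconformity score on the calibration set: 1 - probability of the true class.
cal_probs = model.predict_proba(X_cal)
scores = 1.0 - cal_probs[np.arange(len(y_cal)), y_cal]

# (1 - alpha) quantile of the scores, with the standard finite-sample correction.
alpha = 0.1  # target 90% coverage
n = len(scores)
q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
q_hat = np.quantile(scores, q_level, method="higher")

# Prediction set for each test point: every class whose score is <= q_hat.
test_probs = model.predict_proba(X_test)
prediction_sets = test_probs >= 1.0 - q_hat
print("Average prediction-set size:", prediction_sets.sum(axis=1).mean())
```

Because the calibration scores are exchangeable with the test scores, the resulting prediction sets cover the true label with probability at least 1 - alpha, whatever the underlying model.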


🎥 Watch the full recording of XAIB2025 here:

“AI-Powered Neurology: Enhancing Diagnostic and Prognostic Decision Support” at the 2025 IEEE MetroXRAINE

🚨 Call for Papers! 🚨
📢 Exciting news! I am thrilled to co-organize with Selene Tomassini and Federica Aracri the Thematic Session (TS) on “AI-Powered Neurology: Enhancing Diagnostic and Prognostic Decision Support” at the 2025 IEEE International Conference on Metrology for eXtended Reality, Artificial Intelligence, and Neural Engineering (MetroXRAINE 2025) in Ancona, Italy (October 22-24, 2025).

🔍 This session will explore cutting-edge AI applications in clinical decision support, focusing on neurological diagnostics and prognostics. We welcome submissions addressing:

✅ Neuroimage processing & analysis
✅ AI-driven early detection of neurodegenerative diseases
✅ Deep learning for neuroanatomical biomarkers
✅ Explainable & generative AI in neuroimaging
✅ Uncertainty quantification in AI models
✅ Data fusion techniques for multi-modal neurological data
…and much more!

📅 Important Deadlines:
📌 March 15, 2025 – Abstract Submission Deadline
📌 April 20, 2025 – Acceptance Notification
📌 May 15, 2025 – Full Paper Submission Deadline
📌 July 31, 2025 – Final Paper Submission

🔗 Join us in advancing AI-powered neurology! Submit your research and contribute to the future of clinical decision support.

For more details, feel free to contact us or check out the official conference page: https://metroxraine.org/thematic-session-3

🌟 Join Us at XAIB 2024! 🌟

The 4th Special Session on Explainable Artificial Intelligence for Unveiling the Brain: From Black-Box to Glass-Box

📍 Location: KX Building (11th Floor), Conference Room X11.1 and ONLINE

📅 Date: 15 December 2024

⏰ Time: 4:00 PM – 5:50 PM (UTC+7)

🎓 Session Chair: Chiara Camastra

Neuroscience Research Center, Magna Graecia University of Catanzaro, Italy

Dive into cutting-edge research and discussions at XAIB 2024, where we explore how Explainable AI is transforming our understanding of the brain and advancing neuroscience.

🧠 Program Overview

4:00 PM – 4:15 PM

Opening Remarks

🎙 Prof. Alessia Sarica

Neuroscience Research Center, Magna Graecia University of Catanzaro, Italy

4:15 PM – 4:45 PM

Invited Talk

Title: Supervised ML in Science – From Interpretability to Robustness

🎙 Christoph Molnar

Department of Statistics, LMU Munich, Germany; Leibniz Institute for Prevention Research, Germany

4:45 PM – 5:00 PM

Invited Talk

Title: Deep Learning and Explainability: How Far Are We?

🎙 Dr. Sanjay Ghosh

Indian Institute of Technology Kharagpur, India

5:00 PM – 5:15 PM

B278: A Convolutional Neural Network with Feature Selection for Generating Explainable 1D Image Information for Brain Disease Diagnosis (online)

🎙 Luna M. Zhang

Stony Brook University, NY, USA

5:15 PM – 5:30 PM

B244: Probing Temporal Filters of Vision via a Falsifiable Model of Flicker Fusion (online)

🎙 Keerthi S Chandran, Kuntal Ghosh

Indian Statistical Institute, India

5:30 PM – 5:45 PM

B212: A Comparison of ANN-Optimization and Logistic Regression – An Example of the Acceptance of EEG Devices (online)

🎙 Tina Zeilner, Andreas Uphaus, and Bodo Vogt

Otto-von-Guericke-Universität Magdeburg, Germany; Hochschule Bielefeld, Germany

5:45 PM – 5:50 PM

Closing Remarks

🎙 Prof. Alessia Sarica

Neuroscience Research Center, Magna Graecia University of Catanzaro, Italy

For more details or inquiries, contact: chiara.camastra at unicz.it

🚶‍♂️ Bringing Explainability to Gait Disorder Prediction with AI 🚶‍♀️

Highlights from our recent work, presented by Dr. Vera Gramigna at the Explainable AI for Biomedical Images and Signals Special Session of the 32nd Italian Workshop on Neural Networks (WIRN) 2024!

📄 Title of the Paper: Bringing Explainability to the Prediction of Gait Disorders from Ground Reaction Force (GRF): A Machine Learning Study on the GaitRec Dataset

Our research focuses on improving the prediction of gait disorders by analyzing ground reaction force (GRF) patterns using advanced machine learning techniques. We leveraged the GaitRec dataset, which includes GRF measurements from individuals with various musculoskeletal conditions, to develop a model that can distinguish between healthy controls and those with gait disorders.

What makes our work unique is the use of Explainable Boosting Machines (EBMs). Unlike traditional “black-box” models, EBMs provide transparency by showing which specific features of the gait data contribute to each prediction. This lets clinicians understand the reasoning behind the model’s output while retaining competitive accuracy, making AI tools more trustworthy and easier to integrate into clinical practice.
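As a flavor of how this looks in code, here is a minimal EBM sketch using the interpret library; the synthetic data and toy labels are illustrative assumptions, not the actual GaitRec pipeline from the paper.

```python
# Minimal Explainable Boosting Machine sketch with the interpret library.
# The synthetic data below stands in for GaitRec GRF features; it is an
# illustrative assumption, not the pipeline used in the paper.
import numpy as np
from interpret import show
from interpret.glassbox import ExplainableBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))                 # stand-in for per-frame GRF features
y = (X[:, 3] + 0.5 * X[:, 7] > 0).astype(int)  # toy healthy vs. gait-disorder label

ebm = ExplainableBoostingClassifier()
ebm.fit(X, y)

# Global explanation: which features drive predictions overall (interactive view).
show(ebm.explain_global())

# Local explanation: why the model made its prediction for one subject.
show(ebm.explain_local(X[:1], y[:1]))
```

The global explanation mirrors the kind of evidence reported below: it ranks features (here, stand-ins for GRF frames) by how much they drive the model’s predictions.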

Key results:

  • Our model achieved an accuracy of 88.2% in predicting gait disorders.
  • We identified that specific frames of the right vertical GRF were crucial in distinguishing between healthy individuals and those with gait disorders.
  • The model’s explainability also revealed potential areas for improvement, such as the difficulty of accurately classifying healthy controls, likely due to the diversity within the gait disorder category.

This work is a step forward in combining AI with clinical expertise, paving the way for more precise and understandable diagnostic tools in healthcare.

#AI #MachineLearning #GaitAnalysis #ExplainableAI #BiomedicalEngineering #WIRN2024

Call for Papers: Unveiling the Brain with Explainable AI – XAIB2024 Session

We announce the 4th Special Session on Explainable Artificial Intelligence for Unveiling the Brain: From the Black-Box to the Glass-Box (XAIB2024). This half-day hybrid session (in-person and online) is part of the prestigious Brain Informatics Conference 2024.

📅 Date: December 13-15, 2024
📍 Location: Hybrid (In-person and Online)

About XAIB2024: Artificial Intelligence (AI) and Machine Learning (ML) have revolutionized our understanding of the brain, aiding in the analysis of neuroimages and the diagnosis of neurodegenerative diseases. However, the “black-box” nature of these algorithms poses challenges in clinical trust and decision-making. XAIB2024 focuses on transforming these black boxes into “glass-boxes” with interpretable and explainable AI.

Key Highlights:

  • Co-chairs: Prof. Alessia Sarica and Dr. Sanjay Ghosh
  • Scope: The session invites high-quality papers and abstracts on intelligible, interpretable, and understandable AI approaches applied to Neuroscience, Neuroimaging, and Neuropsychological data.
  • Topics: Post-hoc explainability techniques, transparent models, global and local explanations, systematic reviews, and meta-analyses.
  • Notable Past Speakers: Dr. Rich Caruana, Dr. Michele Ferrante, Dr. Dimitris Pinotsis, Prof. Monica Hernandez, and more.

Important Dates:
Submission Deadline: September 30, 2024
Review Deadline: October 15, 2024
Acceptance Notification: October 30, 2024
Camera Ready: November 5, 2024
Program Ready: December 2, 2024

Call for Papers: Submit your research and contribute to the groundbreaking dialogue on explainable AI in neuroscience. All accepted papers will be included in the conference proceedings.

Don’t miss this opportunity to be part of a transformative session that bridges the gap between AI and neuroscience. For more details and to submit your work, visit Brain Informatics Conference 2024.

🔗 Stay tuned for updates and join the conversation using #XAIB2024 and #BrainInformatics2024!

For information please contact:
Chiara Camastra, chiara.camastra@unicz.it
Assunta Pelagi, assunta.pelagi@studenti.unicz.it

Brain Informatics 2022 – My oral presentation

Ever wondered how to quantitatively compare the feature importance rankings produced by different Machine Learning algorithms?

In this new work, presented at Brain Informatics 2022, we introduce Rank-Biased Overlap (RBO) as a similarity measure for comparing rankings of features ordered by their importance. We used the automatic classification of Parkinson’s disease as a case study.
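For context, RBO weights agreement at the top of the two rankings more heavily than agreement further down, controlled by a persistence parameter p. A minimal truncated implementation is sketched below; the two example feature rankings are hypothetical, and a production version should use the extrapolated estimate from Webber et al. (2010).

```python
# Minimal truncated Rank-Biased Overlap (RBO) sketch.
# A production version should use the extrapolated estimate from
# Webber et al. (2010); the example rankings below are hypothetical.
def rbo_truncated(list_s, list_t, p=0.9):
    """Truncated RBO: (1 - p) * sum over depths d of p^(d-1) * A_d,
    where A_d is the fraction of items the two depth-d prefixes share."""
    depth = min(len(list_s), len(list_t))
    seen_s, seen_t = set(), set()
    overlap, score = 0, 0.0
    for d in range(1, depth + 1):
        item_s, item_t = list_s[d - 1], list_t[d - 1]
        if item_s == item_t:
            overlap += 1
        else:
            overlap += (item_s in seen_t) + (item_t in seen_s)
        seen_s.add(item_s)
        seen_t.add(item_t)
        score += p ** (d - 1) * (overlap / d)
    return (1 - p) * score

# Two hypothetical feature-importance rankings from different ML models:
rf_rank  = ["putamen_vol", "caudate_vol", "age", "DAT_uptake", "UPDRS"]
svm_rank = ["putamen_vol", "DAT_uptake", "caudate_vol", "UPDRS", "age"]
print(rbo_truncated(rf_rank, svm_rank, p=0.9))
```

With p = 0.9 the top ten or so ranks receive most of the weight, which suits feature-importance lists where only the leading features matter clinically.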

Take a look at my recording if you are curious!

Special Session XAIB – Brain Informatics Congress 2021

https://www.bi2021.org
THE 14TH INTERNATIONAL CONFERENCE ON BRAIN INFORMATICS 2021

SPECIAL SESSION ON 

EXPLAINABLE ARTIFICIAL INTELLIGENCE FOR UNVEILING THE BRAIN: FROM THE BLACK-BOX TO THE GLASS-BOX (XAIB)

Half Day

+++++ CALL FOR PAPERS AND ABSTRACTS +++++

Nowadays, Artificial Intelligence (AI) and Machine Learning (ML) are widely used to explore the brain, with applications ranging from the processing and analysis of neuroimages to the automatic diagnosis of neurodegenerative diseases. However, without an explanation of ML findings, automated medical and clinical decisions remain hard to trust. Indeed, the black-box nature of most algorithms, despite providing high accuracy, makes the interpretation of predictions far from immediate. In recent years, therefore, the need for interpretable and explainable AI, especially in healthcare, has grown stronger, as has the need for glass-box models that offer a trade-off between intelligibility and optimal performance.

The aim of this Special Session is to collect scientific works devoted to the new challenge of Explainable Artificial Intelligence applied on Neuroscience, Neuroimaging and Neuropsychological data for unveiling the Brain. Researchers are encouraged to submit high quality papers or abstracts on novel or state-of-the-art intelligible, interpretable, and understandable AI approaches, such as post-hoc explainability techniques both model-agnostic (e.g., lime, shap) and model-specific (e.g., CNN, SVM, Random Forests), and transparent models (i.e., linear/logistic regression, decision trees, GAM), with special attention to global and local explanations. Systematic reviews and meta-analyses are also welcome.