Ever wondered how to quantitatively compare the feature importance rankings produced by Machine Learning algorithms?
In this new work, presented at Brain Informatics 2022, we introduce Rank-Biased Overlap (RBO) as a similarity measure for comparing rankings of features ordered by their importance. We used the automatic classification of Parkinson’s disease as a case study.
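The idea behind RBO (Webber, Moffat, and Zobel, 2010) can be sketched in a few lines: at each depth d, measure the overlap between the two ranking prefixes, and weight shallower depths more via a geometric series in a parameter p. The following is a minimal, truncated version for illustration; the function and parameter names are mine, not from the paper.

```python
def rbo(ranking_a, ranking_b, p=0.9):
    """Truncated Rank-Biased Overlap between two rankings.

    p in (0, 1) tunes top-weightedness: smaller p concentrates the
    weight on the highest-ranked features. This is the finite prefix
    sum, so two identical rankings of length k score 1 - p**k,
    approaching 1 as k grows.
    """
    depth = max(len(ranking_a), len(ranking_b))
    seen_a, seen_b = set(), set()
    score = 0.0
    for d in range(1, depth + 1):
        if d <= len(ranking_a):
            seen_a.add(ranking_a[d - 1])
        if d <= len(ranking_b):
            seen_b.add(ranking_b[d - 1])
        agreement = len(seen_a & seen_b) / d  # fraction of overlap at depth d
        score += p ** (d - 1) * agreement
    return (1 - p) * score
```

Comparing the feature rankings of two models then reduces to a single score: identical orderings score highest, and disagreements near the top of the ranking are penalized more heavily than disagreements deep down.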
A great virtual OHBM this year! I proposed the application of Interpretable Artificial Intelligence to Neuroimaging data through the Explainable Boosting Machine, applied to Alzheimer’s data. Here are my poster and a video presenting my work.
THE 14TH INTERNATIONAL CONFERENCE ON BRAIN INFORMATICS 2021
SPECIAL SESSION ON
EXPLAINABLE ARTIFICIAL INTELLIGENCE FOR UNVEILING THE BRAIN: FROM THE BLACK-BOX TO THE GLASS-BOX (XAIB)
+++++ CALL FOR PAPERS AND ABSTRACTS +++++
Nowadays, Artificial Intelligence (AI) and Machine Learning (ML) are widely used for the exploration of the Brain, with applications ranging from the processing and analysis of neuroimages to the automatic diagnosis of neurodegenerative diseases. However, without an explanation of the ML findings, automatic medical and clinical decisions are still hard to trust. Indeed, the black-box nature of most algorithms, although providing high accuracy, makes the interpretation of the predictions far from immediate. Thus, in recent years the need for interpretable and explainable AI, especially in Healthcare, has grown stronger, along with the need for glass-box models able to offer a trade-off between intelligibility and optimal performance.
The aim of this Special Session is to collect scientific works devoted to the new challenge of Explainable Artificial Intelligence applied to Neuroscience, Neuroimaging, and Neuropsychological data for unveiling the Brain. Researchers are encouraged to submit high-quality papers or abstracts on novel or state-of-the-art intelligible, interpretable, and understandable AI approaches, such as post-hoc explainability techniques, both model-agnostic (e.g., LIME, SHAP) and model-specific (e.g., for CNNs, SVMs, Random Forests), and transparent models (e.g., linear/logistic regression, decision trees, GAMs), with special attention to global and local explanations. Systematic reviews and meta-analyses are also welcome.
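To make the global/local distinction concrete, consider the transparent end of the spectrum: in a linear model, the weights themselves are a global explanation, and each sample's weight-times-feature contributions form a local one. A minimal sketch follows; the feature names and weight values are hypothetical, chosen only for illustration.

```python
# Hypothetical weights of a fitted linear model (transparent by design):
# the sign and magnitude of each weight are directly interpretable.
weights = {"hippocampal_volume": -1.8, "cortical_thickness": -1.1, "age": 0.6}
bias = 0.4

def predict_score(sample):
    """Linear decision score: bias + sum of weight * feature value."""
    return bias + sum(w * sample[f] for f, w in weights.items())

def local_explanation(sample):
    """Local explanation: each feature's contribution to this sample's score."""
    return {f: w * sample[f] for f, w in weights.items()}

# Global explanation: features ranked by absolute weight across all samples.
global_ranking = sorted(weights, key=lambda f: abs(weights[f]), reverse=True)
```

The same two views, a model-wide ranking of feature importance and per-sample attributions, are what post-hoc techniques such as LIME and SHAP aim to recover for models that are not transparent by construction.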