Quantifying differences in the feature importance rankings of #machinelearning #classification models could enhance #interpretability and #explainability: we show how with the rank-biased overlap (RBO) similarity measure. Take a look at my new work!
https://link.springer.com/chapter/10.1007/978-3-031-15037-1_11
Also check out my oral presentation at Brain Informatics 2022.
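For readers curious about the measure itself, here is a minimal Python sketch of truncated rank-biased overlap (Webber et al., 2010). It is only an illustration of the formula, not the implementation used in the paper, and the feature names in the example are made up.

```python
def rbo(ranking_a, ranking_b, p=0.9):
    """Truncated rank-biased overlap between two rankings.

    Agreement at each depth d (fraction of shared items in the two
    top-d prefixes) is weighted geometrically by p**(d-1), so
    top-ranked features dominate the score.  Truncating at the
    shorter list's length makes this a lower bound on the full,
    infinite-depth RBO.
    """
    depth = min(len(ranking_a), len(ranking_b))
    seen_a, seen_b = set(), set()
    score = 0.0
    for d in range(1, depth + 1):
        seen_a.add(ranking_a[d - 1])
        seen_b.add(ranking_b[d - 1])
        agreement = len(seen_a & seen_b) / d
        score += p ** (d - 1) * agreement
    return (1 - p) * score


# Two feature-importance rankings that agree on the top feature:
print(rbo(["age", "sex", "bmi", "glucose"],
          ["age", "bmi", "sex", "smoking"]))  # ≈ 0.28 (lower bound)
```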

Hi Alessia,
We have recently compared RBO to several other methods in similar contexts and found that it differentiates between rankings less well than our own method, Latent Personal Analysis (LPA). LPA builds an aggregation and evaluates each vector’s difference from it via the relative change in its information content. Like RBO, LPA finds the “popular” features, but it can also determine whether some of them are missing from individual vectors. Here it is explained for textual dimensions: https://link.springer.com/article/10.1007/s11257-021-09295-7, and here for B-cell distribution similarity: https://www.frontiersin.org/articles/10.3389/fimmu.2021.642673/full.
Here is the open-source code: https://github.com/ScanLab-ossi/LPA.
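To make the aggregate-and-compare idea concrete, here is a rough sketch of it as described above. This is not the code in the ScanLab-ossi/LPA repository; the function name, the KL-style divergence, and the "missing feature" rule are my own simplifications.

```python
import numpy as np

def lpa_style_scores(vectors, eps=1e-12):
    """Aggregate-and-compare sketch in the spirit of LPA (illustrative only).

    Each row of `vectors` is a non-negative feature-importance vector.
    The rows are normalised to distributions and averaged into an
    aggregation; each vector is then scored by its per-feature
    KL-style divergence from that aggregation.  Features carrying
    above-uniform mass in the aggregation but absent from a vector
    are flagged as "missing".
    """
    v = np.asarray(vectors, dtype=float)
    v = v / v.sum(axis=1, keepdims=True)             # rows -> distributions
    agg = v.mean(axis=0)                             # the aggregation
    agg = agg / agg.sum()
    contrib = v * np.log2((v + eps) / (agg + eps))   # per-feature information terms
    distance = contrib.sum(axis=1)                   # one divergence per vector
    missing = (v <= eps) & (agg > 1.0 / v.shape[1])  # popular-but-absent features
    return distance, missing


# Three toy importance vectors over four features:
d, miss = lpa_style_scores([[0.5, 0.3, 0.2, 0.0],
                            [0.4, 0.4, 0.1, 0.1],
                            [0.0, 0.6, 0.2, 0.2]])
print(d)     # divergence of each vector from the aggregation
print(miss)  # the third vector is missing the globally popular first feature
```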
I will be happy to continue the conversation offline,
Ossi.
Hi Ossi!
Thanks for your comment; it is indeed a useful insight.
I’m really curious to know how your LPA works on feature importance rankings 🙂
Let’s talk by email.