Quantification of differences between feature importance rankings in Machine Learning

Quantifying differences between feature importance rankings in #machinelearning #classification could enhance #interpretability and #explainability: we show how, using the rank-biased overlap (RBO) similarity measure. Take a look at my new work!

https://link.springer.com/chapter/10.1007/978-3-031-15037-1_11
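To make the idea concrete, here is a minimal, hedged sketch of a truncated rank-biased overlap computation between two feature importance rankings. The function name and parameter `p` (the top-weightedness persistence) are illustrative; the paper's exact formulation (e.g. the extrapolated variant) may differ:

```python
def rbo(ranking_a, ranking_b, p=0.9):
    """Truncated rank-biased overlap between two ranked lists.

    At each depth d, the agreement is the size of the overlap of the
    top-d prefixes divided by d; agreements are combined with
    geometrically decaying weights p**(d-1), so differences near the
    top of the rankings count more. This is the truncated (minimum)
    form; the extrapolated variant adds a correction for unseen tails.
    """
    depth = max(len(ranking_a), len(ranking_b))
    score = 0.0
    for d in range(1, depth + 1):
        overlap = len(set(ranking_a[:d]) & set(ranking_b[:d]))
        score += (p ** (d - 1)) * (overlap / d)
    return (1 - p) * score
```

For two identical rankings of length n, the truncated score equals 1 - p**n (it approaches 1 as n grows), while fully disjoint rankings score 0.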

Also check out my oral presentation at Brain Informatics 2022.

2 responses

  1. Hi Allesia,
We have recently compared RBO to several other methods in similar contexts and found that it differentiates between rankings less well than our method, Latent Personal Analysis (LPA). LPA creates an aggregation and evaluates each vector’s difference from the aggregation using the relative change in its information content. While LPA also identifies the “popular features”, it additionally determines whether some of them are missing from individual vectors. Here it is explained for textual dimensions: https://link.springer.com/article/10.1007/s11257-021-09295-7, and here for B-cell distribution similarity: https://www.frontiersin.org/articles/10.3389/fimmu.2021.642673/full.
    Here is the open-source code: https://github.com/ScanLab-ossi/LPA.
    I will be happy to continue the conversation offline,
    Ossi.
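Based only on the description in the comment above, the LPA idea can be sketched roughly as follows: build an aggregate distribution over all features, then score each vector by how much its per-element information content deviates from the aggregate's, with missing popular elements contributing via a smoothing constant. The function name, the smoothing value, and the exact distance terms are assumptions here; the published LPA method and the linked repository should be consulted for the real formulation:

```python
import math

EPS = 1e-12  # smoothing for elements absent from a vector (assumption)


def normalize(counts):
    """Turn a feature -> count mapping into a probability distribution."""
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}


def lpa_style_distance(vector_counts, aggregate_counts):
    """Rough sketch of an LPA-style distance: the absolute change in
    per-element information content p*log(p) between one vector and
    the population aggregate, over the union of their supports.
    Elements missing from either side are smoothed with EPS, so a
    vector lacking a popular feature still incurs a penalty."""
    p = normalize(vector_counts)
    q = normalize(aggregate_counts)
    keys = set(p) | set(q)
    return sum(
        abs(p.get(k, EPS) * math.log(p.get(k, EPS))
            - q.get(k, EPS) * math.log(q.get(k, EPS)))
        for k in keys
    )
```

A vector identical to the aggregate scores 0, and one that drops a popular feature scores strictly higher, which matches the comment's point that LPA can detect features missing from individual vectors.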

