This roundup was super helpful—especially in a space where interpretability is no longer optional, but essential. Tools like SHAP and LIME have become foundational for building trust in ML systems, and it’s great to see them highlighted alongside lesser-known gems like InterpretML. Clear, accessible explainability bridges the gap between data science and real-world impact. Thanks for putting this together—it’s a great resource for practitioners who want to build models that don’t just perform well, but communicate well too.
I’d add Dalex to the list: it includes functions to create PDP and ALE plots and wraps LIME, among other tools, so you get many interpretability methods in one library. scikit-learn, on the other hand, supports ICE and PDP plots as well as permutation feature importance, which can be applied very easily to explain models trained with it — see the sketch below.
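For anyone curious, here’s a minimal sketch of the scikit-learn side, assuming a random forest fitted on the built-in breast cancer toy dataset (the model and dataset are just placeholders):

```python
# Minimal sketch: ICE/PDP plots and permutation importance in scikit-learn.
# Assumes a RandomForestClassifier on the breast cancer toy dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import PartialDependenceDisplay, permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# PDP and ICE curves for two features; kind="both" overlays the
# averaged PDP line on the per-sample ICE curves.
PartialDependenceDisplay.from_estimator(
    model, X_test, features=["mean radius", "mean texture"], kind="both"
)

# Permutation feature importance on held-out data: shuffle each feature
# and measure the resulting drop in the model's score.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)
ranked = sorted(
    zip(X.columns, result.importances_mean), key=lambda t: -t[1]
)
for name, mean in ranked[:5]:
    print(f"{name}: {mean:.3f}")
```

Dalex offers comparable profiles (PDP/ALE) through its `Explainer` object, with the advantage of a uniform interface across model types.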