Méthode d'interprétation agnostique


under construction

Definition

XXXXXXXXX

French

XXXXXXXXX

English

Model-Agnostic Method


Interpretable models are models that explain themselves; from a decision tree, for example, you can directly extract decision rules. Model-agnostic methods, by contrast, can be applied to any machine learning model, from support vector machines to neural networks.
Most machine learning models cannot be interpreted directly. For popular models such as random forests, gradient-boosted machines and neural networks, you need model-agnostic methods. Several interesting methods are currently available, such as permutation feature importance, Partial Dependence Plots (PDP), Individual Conditional Expectation (ICE) plots, global surrogate models, Local Interpretable Model-agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP).
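To make the idea concrete, here is a minimal sketch of one of the methods listed above, permutation feature importance: shuffle one feature's values and measure how much the model's score drops. The function name, toy data and stand-in model below are illustrative assumptions, not part of the source; any fitted model could be passed in its place, which is exactly what makes the method model-agnostic.

```python
import numpy as np

def permutation_importance(model, X, y, metric, n_repeats=10, seed=0):
    """Model-agnostic importance: the average drop in score when
    one feature's column is shuffled, breaking its link to y."""
    rng = np.random.default_rng(seed)
    baseline = metric(y, model(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # permute feature j only
            drops.append(baseline - metric(y, model(Xp)))
        importances[j] = np.mean(drops)
    return importances

# Toy demo (illustrative): y depends only on the first of 3 features.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 3))
y = 3.0 * X[:, 0] + rng.normal(scale=0.1, size=200)
model = lambda X: 3.0 * X[:, 0]  # stand-in for any fitted model
r2 = lambda y, p: 1 - np.sum((y - p) ** 2) / np.sum((y - y.mean()) ** 2)

imp = permutation_importance(model, X, y, r2)
```

Shuffling the first column destroys the model's predictive power, so its importance is large, while the unused features score near zero. The same function works unchanged for a random forest, a gradient-boosted machine or a neural network, since it only calls the model's prediction function.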


Source: towardsdatascience



Contributors: Imane Meziani, wiki