Méthode d'interprétation agnostique


Revision of 20 December 2022 at 08:40 by Pitpitt

under construction

Definition

An interpretation method that can be applied to any machine learning model, regardless of the model's internal structure.

French

Méthode d'interprétation agnostique

English

Model-Agnostic Method


Interpretable models are models that explain themselves; for instance, from a decision tree you can easily extract decision rules. Model-agnostic methods, by contrast, are methods that can be applied to any machine learning model, from support vector machines to neural networks.

Most machine learning models cannot be interpreted directly. For popular models such as random forests, gradient boosted machines and neural networks, you need model-agnostic methods. Several such methods are available, including permutation feature importance, Partial Dependence Plots (PDPs), Individual Conditional Expectation (ICE) plots, global surrogate models, Local Interpretable Model-agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP).
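As a concrete illustration of one method from the list above, here is a minimal sketch of permutation feature importance using scikit-learn. The synthetic dataset and the random-forest model are illustrative choices, not part of the original entry; the point is that the same code works unchanged for any fitted estimator, which is what "model-agnostic" means.

```python
# Sketch: permutation feature importance, a model-agnostic method.
# The dataset and model below are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data: 5 features, only 2 of which are informative.
X, y = make_classification(n_samples=500, n_features=5,
                           n_informative=2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Any estimator could be swapped in here (SVM, gradient boosting, ...).
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature column in turn and measure the drop in test score;
# a large drop means the model relied heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: {imp:.3f}")
```

Because the method only needs predictions and a score, it never inspects the model's internals, which is why it applies equally to random forests, boosted trees, or neural networks.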


[https://towardsdatascience.com/model-agnostic-methods-for-interpreting-any-machine-learning-model-4f10787ef504 Source: towardsdatascience]

Contributors: Imane Meziani, wiki