Feature selection



Version of 31 December 2018 at 13:11

Domain

Definition

Preferred terms

English

Feature selection

In machine learning and statistics, feature selection, also known as variable selection, attribute selection or variable subset selection, is the process of selecting a subset of relevant features (variables, predictors) for use in model construction. Feature selection techniques are used for four reasons:

  • simplification of models to make them easier for researchers/users to interpret,[1]
  • shorter training times,
  • avoidance of the curse of dimensionality,
  • enhanced generalization by reducing overfitting[2] (formally, reduction of variance[1])

The central premise when using a feature selection technique is that the data contains many features that are either redundant or irrelevant, and can thus be removed without incurring much loss of information.[2] Redundancy and irrelevance are two distinct notions, since one relevant feature may be redundant in the presence of another relevant feature with which it is strongly correlated.[3]
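The redundancy notion above can be sketched in code. The following is a minimal pure-Python illustration, not a method from the source: the data, the feature names, and the 0.95 threshold are all invented for the example. It drops a feature when it is strongly correlated with one already kept, which is the simplest form of redundancy-based filtering.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def drop_redundant(features, threshold=0.95):
    """Keep each feature only if it is not strongly correlated
    (|r| >= threshold) with a feature already kept."""
    kept = {}
    for name, column in features.items():
        if all(abs(pearson(column, c)) < threshold for c in kept.values()):
            kept[name] = column
    return list(kept)

# Illustrative data: "temp_f" is just "temp_c" rescaled, so both are
# relevant on their own, yet one is redundant given the other.
data = {
    "temp_c": [10.0, 15.0, 20.0, 25.0],
    "temp_f": [50.0, 59.0, 68.0, 77.0],
    "humidity": [0.30, 0.45, 0.40, 0.60],
}
remaining = drop_redundant(data)  # temp_f is filtered out as redundant
```

This greedy pairwise filter is only the crudest instance of the idea; practical methods weigh redundancy against relevance to the target variable rather than looking at feature-feature correlation alone.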

Feature selection techniques should be distinguished from feature extraction. Feature extraction creates new features from functions of the original features, whereas feature selection returns a subset of the features. Feature selection techniques are often used in domains where there are many features and comparatively few samples (or data points). Archetypal cases for the application of feature selection include the analysis of written texts and DNA microarray data, where there are many thousands of features, and a few tens to hundreds of samples.
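The selection-versus-extraction distinction can be made concrete with a small sketch. The data and the derived "bmi" feature below are illustrative assumptions, not from the source: selection returns a subset of the original columns unchanged, while extraction computes new columns as functions of the originals.

```python
rows = [
    {"height": 1.70, "weight": 65.0, "shoe_size": 42.0},
    {"height": 1.80, "weight": 80.0, "shoe_size": 44.0},
]

def select(rows, keep):
    """Feature selection: return a subset of the original features, unchanged."""
    return [{k: r[k] for k in keep} for r in rows]

def extract_bmi(rows):
    """Feature extraction: derive a brand-new feature from the originals."""
    return [{"bmi": r["weight"] / r["height"] ** 2} for r in rows]

selected = select(rows, ["height", "weight"])  # original values survive
extracted = extract_bmi(rows)                  # a new feature replaces them
```

The practical consequence is interpretability: after selection, each retained column still means what it meant in the raw data, whereas an extracted feature must be interpreted through the function that produced it.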

Contributors: wiki