Bias-variance dilemma

under construction

Definition

The conflict, in statistical and machine learning models, between minimizing bias and minimizing variance: reducing the error due to overly simple assumptions (bias) tends to increase the error due to sensitivity to the particular training sample (variance), and vice versa.

French

XXXXXXXXX

English

Bias-variance dilemma

Bias-variance tradeoff

In statistics and machine learning, the bias–variance tradeoff is the property of a model whereby the variance of the parameter estimates across samples can be reduced by increasing the bias in the estimated parameters. The bias–variance dilemma, or bias–variance problem, is the conflict in trying to simultaneously minimize these two sources of error, which prevent supervised learning algorithms from generalizing beyond their training set:[1][2]
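The entry does not spell it out, but the standard way to make this tradeoff precise is the bias–variance decomposition of expected squared error. Assuming data generated as y = f(x) + ε with zero-mean noise of variance σ², and writing f̂ for the predictor learned from a random training set, the expected error at a point x splits into three terms:

```latex
\mathbb{E}\big[(y - \hat{f}(x))^2\big]
  = \underbrace{\big(\mathbb{E}[\hat{f}(x)] - f(x)\big)^2}_{\text{bias}^2}
  + \underbrace{\mathbb{E}\big[\big(\hat{f}(x) - \mathbb{E}[\hat{f}(x)]\big)^2\big]}_{\text{variance}}
  + \underbrace{\sigma^2}_{\text{irreducible error}}
```

Only the first two terms depend on the model; the dilemma is that model choices that shrink one tend to grow the other.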

The bias error is an error from erroneous assumptions in the learning algorithm. High bias can cause an algorithm to miss the relevant relations between features and target outputs (underfitting).

The variance is an error from sensitivity to small fluctuations in the training set. High variance can cause an algorithm to model the random noise in the training data, rather than the intended outputs (overfitting).
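A minimal sketch of both failure modes, not taken from this entry: refitting polynomials of different degrees on many resampled training sets and measuring bias² and variance empirically. The target function, degrees, and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_fn(x):
    # Noise-free target function the models try to recover (assumed for the demo).
    return np.sin(2 * np.pi * x)

def bias_variance(degree, n_trials=200, n_train=40, noise_sd=0.3):
    # Refit a polynomial of the given degree on many independently drawn
    # training sets, then measure, at fixed test points:
    #   bias^2   = squared gap between the average fit and the truth
    #   variance = spread of the individual fits around their average
    x_test = np.linspace(0.0, 1.0, 100)
    preds = []
    for _ in range(n_trials):
        x = rng.uniform(0.0, 1.0, n_train)
        y = true_fn(x) + rng.normal(0.0, noise_sd, n_train)
        coefs = np.polyfit(x, y, degree)       # least-squares polynomial fit
        preds.append(np.polyval(coefs, x_test))
    preds = np.array(preds)                    # shape: (n_trials, 100)
    bias_sq = np.mean((preds.mean(axis=0) - true_fn(x_test)) ** 2)
    variance = np.mean(preds.var(axis=0))
    return bias_sq, variance

for degree in (1, 3, 10):
    b2, v = bias_variance(degree)
    print(f"degree {degree:2d}: bias^2={b2:.4f}  variance={v:.4f}")
```

Running this, the degree-1 fit shows large bias² and small variance (underfitting), while the degree-10 fit shows the reverse (overfitting), matching the two error descriptions above.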

Source: Wikipedia, Bias–variance tradeoff, https://en.wikipedia.org/wiki/Bias%E2%80%93variance_tradeoff

https://www.kdnuggets.com/2020/12/20-core-data-science-concepts-beginners.html


