Denoising Autoencoders

under construction

Definition

XXXXXXXXX

French

XXXXXXXXX

English

Denoising Autoencoders

Denoising autoencoders are a stochastic variant of standard autoencoders that reduces the risk of learning the identity function. Autoencoders are a class of neural networks used for feature selection and extraction, also known as dimensionality reduction. In general, the more hidden layers an autoencoder has, the more refined this dimensionality reduction can be. However, if a hidden layer has at least as many units as the input, there is a risk that during training the network merely learns the identity function, with the output simply equal to the input, which makes it useless as a feature extractor.
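
As a concrete illustration, here is a minimal sketch of a standard autoencoder in Python using PyTorch (assumed available); the layer sizes, dummy batch, and mean-squared-error objective are illustrative assumptions, not details from the source:

import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, n_inputs=784, n_code=32):
        super().__init__()
        # Encoder compresses the input to a lower-dimensional code (dimensionality reduction).
        self.encoder = nn.Sequential(nn.Linear(n_inputs, n_code), nn.ReLU())
        # Decoder reconstructs the input from that code.
        self.decoder = nn.Sequential(nn.Linear(n_code, n_inputs), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = Autoencoder()
x = torch.rand(16, 784)                     # dummy batch of 16 inputs
loss = nn.functional.mse_loss(model(x), x)  # objective: reproduce x from x itself
# With enough hidden capacity and no other constraint, this objective can be
# minimized by a near-identity mapping, which defeats feature extraction.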

Denoising autoencoders avoid this risk of learning the identity function by introducing noise: the input is randomly corrupted, and the autoencoder must then "denoise" it, i.e. reconstruct the original, uncorrupted input.
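
Building on the sketch above, a minimal denoising training step might look as follows; the additive Gaussian corruption, noise level, and Adam optimizer are illustrative assumptions, and masking noise (zeroing random inputs) is an equally common corruption choice:

import torch
import torch.nn as nn

model = nn.Sequential(                  # same bottleneck shape as the sketch above
    nn.Linear(784, 32), nn.ReLU(),      # encoder
    nn.Linear(32, 784), nn.Sigmoid(),   # decoder
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def denoising_step(x_clean, noise_std=0.3):
    # Randomly corrupt the input (additive Gaussian noise here).
    x_noisy = x_clean + noise_std * torch.randn_like(x_clean)
    reconstruction = model(x_noisy)
    # The reconstruction is scored against the *clean* input, so simply copying
    # the corrupted input no longer minimizes the loss.
    loss = nn.functional.mse_loss(reconstruction, x_clean)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

print(denoising_step(torch.rand(16, 784)))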



Source: DeepAI.org, https://deepai.org/machine-learning-glossary-and-terms/denoising-autoencoder

Contributors: wiki