Autoencodeurs peu denses



Version of 23 October 2023 at 08:13

under construction

Definition

An autoencoder trained with a sparsity penalty on its hidden activations, so that only a small fraction of hidden neurons are active for any given input.

French

autoencodeur peu dense

English

sparse autoencoder


In a sparse autoencoder, the hidden layer can still be fully connected, with as many neurons as the input dimensionality. By adding a sparsity regularization term to the loss, however, we prevent the network from simply copying its input to its output.

There are two main ways to add sparsity constraints to deep autoencoders:
* L1 regularization, which we will use in this article.
* KL divergence, which we will address in the next article.
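The two penalties above can be sketched numerically. This is a minimal NumPy illustration, not the source's implementation; the helper names (`l1_sparsity_penalty`, `kl_sparsity_penalty`) and the hyperparameter values are assumptions for the example.

```python
import numpy as np

def l1_sparsity_penalty(activations, lam=1e-3):
    """L1 penalty: lam times the mean absolute hidden activation.

    Pushes most activations toward zero; added to the reconstruction loss.
    """
    return lam * np.abs(activations).mean()

def kl_sparsity_penalty(activations, rho=0.05, eps=1e-8):
    """KL-divergence penalty toward a target average activation rho.

    Assumes sigmoid activations in [0, 1]. rho_hat is the mean activation
    of each hidden unit over the batch; the penalty is zero when every
    unit's average activation equals rho.
    """
    rho_hat = np.clip(activations.mean(axis=0), eps, 1 - eps)
    kl = (rho * np.log(rho / rho_hat)
          + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
    return kl.sum()

# Toy usage: total loss = reconstruction loss + sparsity term
rng = np.random.default_rng(0)
hidden = rng.uniform(0.0, 1.0, size=(32, 64))  # batch of 32, 64 hidden units
recon_loss = 0.1                               # placeholder MSE value
total_loss = recon_loss + l1_sparsity_penalty(hidden)
```

In practice either penalty is simply added to the reconstruction loss during training; L1 penalizes the activations directly, while the KL version penalizes each unit's average activation over a batch.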


Source: debuggercafe

Contributors: Marie Alfaro, wiki