Propagation avant
Under construction
Definition
In the forward pass of backpropagation, each training pattern is presented to the input units of the network. The hidden unit activations are computed from the inputs and the input-to-hidden weights, and then (in the case of a 3-layer network, with a single layer of hidden units) the outputs are computed from the hidden unit activations and the current hidden-to-output weights.
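A minimal sketch of this computation, assuming NumPy, a logistic (sigmoid) activation, and weight matrices named W_ih and W_ho; these names and the activation choice are illustrative, not taken from the source:

```python
import numpy as np

def sigmoid(z):
    # Logistic activation, applied elementwise.
    return 1.0 / (1.0 + np.exp(-z))

def forward_pass(x, W_ih, W_ho):
    """Forward pass through a 3-layer network (one hidden layer).

    x    : input pattern, shape (n_in,)
    W_ih : input-to-hidden weights, shape (n_hidden, n_in)
    W_ho : hidden-to-output weights, shape (n_out, n_hidden)
    """
    # Hidden unit activations from the inputs and input-to-hidden weights.
    h = sigmoid(W_ih @ x)
    # Output activations from the hidden activations and hidden-to-output weights.
    y = sigmoid(W_ho @ h)
    # Both are returned: the backward pass reuses these activations.
    return h, y
```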
Certain statistics from this computation are kept and used in the backward pass. The target outputs for each training pattern are compared with the actual activation levels of the output units; the difference between the two is termed the error. Training may be pattern-by-pattern or epoch-by-epoch. With pattern-by-pattern training, each pattern's error is provided directly to the backward pass. With epoch-by-epoch training, the pattern errors are summed across all training patterns, and the total error is provided to the backward pass.
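Continuing the sketch above, the two training modes might accumulate the error like this; the squared-error measure, the backward_pass stub, and the toy data are assumptions for illustration, not specified by the source:

```python
def pattern_error(y, target):
    # Squared error between target and actual output activations
    # (a common choice; the source does not fix the error measure).
    return np.sum((target - y) ** 2)

def backward_pass(error):
    # Hypothetical stub for the weight-update step, which the
    # source describes separately.
    pass

# Toy weights and training patterns, purely illustrative.
rng = np.random.default_rng(0)
W_ih = rng.normal(size=(4, 3))   # 3 inputs -> 4 hidden units
W_ho = rng.normal(size=(2, 4))   # 4 hidden -> 2 output units
training_patterns = [(rng.normal(size=3), rng.normal(size=2))
                     for _ in range(5)]

# Pattern-by-pattern: each pattern's error goes directly to the backward pass.
for x, target in training_patterns:
    _, y = forward_pass(x, W_ih, W_ho)
    backward_pass(pattern_error(y, target))

# Epoch-by-epoch: pattern errors are summed across all training
# patterns, and the total error goes to the backward pass.
total_error = sum(pattern_error(forward_pass(x, W_ih, W_ho)[1], target)
                  for x, target in training_patterns)
backward_pass(total_error)
```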
French
propagation avant
propagation directe
propagation vers l'avant
English
forward pass

Source: UNSW machine learning dictionary (http://www.cse.unsw.edu.au/~billw/dictionaries/mldict.html)