« Rétropropagation récurrente d'Almeida-Pineda »: difference between versions




* [https://en.wikipedia.org/wiki/Almeida%E2%80%93Pineda_recurrent_backpropagation Source: Wikipedia]


* [https://en.wikipedia.org/wiki/Outline_of_machine_learning#Machine_learning_algorithms Source: Wikipedia, Machine learning algorithms]


[[Catégorie:vocabulary]]
[[Catégorie:Wikipedia-IA]]

Version of 4 February 2021 at 23:06

under construction

Definition

XXXXXXXXX

French

XXXXXXXXX

English

Almeida–Pineda recurrent backpropagation

Almeida–Pineda recurrent backpropagation is an extension to the backpropagation algorithm that is applicable to recurrent neural networks. It is a type of supervised learning. It was described somewhat cryptically in Richard Feynman's senior thesis, and rediscovered independently in the context of artificial neural networks by both Fernando Pineda and Luis B. Almeida.[1][2][3]

A recurrent neural network for this algorithm consists of some input units, some output units, and possibly some hidden units.

For each (input, target) pair, the input state is clamped on the input units, and the network is trained to settle into a stable activation state in which the output units match the target state.
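The training loop described above can be sketched in NumPy. This is a minimal illustration, not a reference implementation: it assumes a fully connected recurrent weight matrix, a logistic activation, and fixed-point relaxation by simple iteration (which only converges when the network dynamics have a stable fixed point, e.g. for small weights). All names and hyperparameters are illustrative. The algorithm runs two relaxations per example: a forward relaxation of the activations, then a relaxation of an adjoint (error) state, followed by a gradient-descent weight update.

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def almeida_pineda_step(W, I, target, out_mask, lr=0.1, n_relax=200):
    """One Almeida-Pineda update on a fully recurrent network.

    W        : (n, n) recurrent weight matrix (updated in place)
    I        : (n,)   external input, clamped on the input units
    target   : (n,)   desired activations (meaningful on output units only)
    out_mask : (n,)   1.0 for output units, 0.0 elsewhere
    """
    n = W.shape[0]

    # Phase 1: relax the activations to a fixed point x = sigma(W x + I).
    x = np.zeros(n)
    for _ in range(n_relax):
        x = sigmoid(W @ x + I)

    sp = x * (1.0 - x)               # sigma'(u) for the logistic function
    e = out_mask * (target - x)      # error, nonzero only on output units

    # Phase 2: relax the adjoint to a fixed point z = e + W^T (sigma'(u) * z);
    # this plays the role of backpropagated error in the settled network.
    z = np.zeros(n)
    for _ in range(n_relax):
        z = e + W.T @ (sp * z)

    # Phase 3: gradient-descent update dW[i, j] = lr * sigma'(u_i) z_i x_j.
    W += lr * np.outer(sp * z, x)
    return W, x, 0.5 * np.sum(e * e)
```

Calling `almeida_pineda_step` repeatedly on one (input, target) pair drives the squared error at the settled state downward; with several pairs, one would cycle through them, clamping each input in turn.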




Contributors: Claire Gorjux, Imane Meziani, wiki