Réseau récurrent à portes


[[Category:Vocabulary]] Vocabulary<br />
[[Category:Claude]] Claude<br />
[[Catégorie:Apprentissage profond]] Apprentissage profond


== Definition ==
A simplified variant of the LSTM unit, with fewer parameters, that uses a gating mechanism (a reset gate and an update gate) to let recurrent networks learn long-range dependencies efficiently.

== Preferred terms ==
=== réseau récurrent à portes ===
=== réseau de neurones récurrents à portes ===
=== unités récurrentes à porte ===
 


== English ==
=== Gated Recurrent Unit ===
=== GRU ===


The Gated Recurrent Unit (GRU) is a simplified version of an LSTM unit with fewer parameters. Like an LSTM cell, it uses a gating mechanism that lets RNNs learn long-range dependencies efficiently by mitigating the vanishing-gradient problem. The GRU consists of a reset gate and an update gate, which determine how much of the old memory to keep and how much to overwrite with new values at the current time step.
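For reference, the GRU update rule from Cho et al. (2014), the first paper listed below, can be written as follows. This is one common convention, added here for illustration (gate conventions vary between papers; here the update gate <math>z_t</math> weights the old memory, and bias terms are included):

<math>
\begin{align}
z_t &= \sigma(W_z x_t + U_z h_{t-1} + b_z) \\
r_t &= \sigma(W_r x_t + U_r h_{t-1} + b_r) \\
\tilde{h}_t &= \tanh(W_h x_t + U_h (r_t \odot h_{t-1}) + b_h) \\
h_t &= z_t \odot h_{t-1} + (1 - z_t) \odot \tilde{h}_t
\end{align}
</math>

where <math>\sigma</math> is the logistic sigmoid and <math>\odot</math> denotes element-wise multiplication.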
* Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation
* Recurrent Neural Network Tutorial, Part 4 – Implementing a GRU/LSTM RNN with Python and Theano
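In the spirit of the Python/Theano tutorial listed above, here is a minimal NumPy sketch of a single GRU time step, matching the equations given earlier. It is an illustration only, not taken from the entry; all names (<code>gru_step</code>, <code>W_z</code>, <code>U_z</code>, ...) are hypothetical:

<syntaxhighlight lang="python">
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, p):
    """One GRU time step: returns the new hidden state h_t."""
    # Update gate: how much of the old memory h_prev to keep.
    z = sigmoid(p["W_z"] @ x_t + p["U_z"] @ h_prev + p["b_z"])
    # Reset gate: how much of the old memory feeds the candidate state.
    r = sigmoid(p["W_r"] @ x_t + p["U_r"] @ h_prev + p["b_r"])
    # Candidate state, computed from the input and the reset-gated memory.
    h_tilde = np.tanh(p["W_h"] @ x_t + p["U_h"] @ (r * h_prev) + p["b_h"])
    # Interpolate between old memory and candidate with the update gate.
    return z * h_prev + (1.0 - z) * h_tilde

# Toy usage: run a random 5-step sequence through the cell.
rng = np.random.default_rng(0)
n_in, n_hid = 4, 3
shapes = {"W": (n_hid, n_in), "U": (n_hid, n_hid), "b": (n_hid,)}
p = {f"{kind}_{gate}": 0.1 * rng.standard_normal(shape)
     for gate in ("z", "r", "h")
     for kind, shape in shapes.items()}
h = np.zeros(n_hid)
for x_t in rng.standard_normal((5, n_in)):
    h = gru_step(x_t, h, p)
</syntaxhighlight>

Note how the cell carries only one state vector <code>h</code> between steps; compared with an LSTM there is no separate cell state, which is where the parameter savings come from.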
