Unité linéaire rectifiée

Domain

Vocabulary Deep learning

Definition

Preferred terms

unité linéaire rectifiée

English

ReLU

Short for Rectified Linear Unit(s). ReLUs are often used as activation functions in Deep Neural Networks. They are defined by f(x) = max(0, x). The advantages of ReLUs over functions like tanh include that they tend to be sparse (their activation can easily be set to 0) and that they suffer less from the vanishing gradient problem. ReLUs are the most commonly used activation function in Convolutional Neural Networks. Several variations of ReLUs exist, such as the Leaky ReLU, the Parametric ReLU (PReLU), and the smoother softplus approximation.
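
For illustration, here is a minimal NumPy sketch of the definition above and of two of the variants mentioned (Leaky ReLU and the softplus approximation); the function names, the alpha parameter, and the sample inputs are chosen for this example and do not come from any particular library.

import numpy as np

def relu(x):
    # Rectified Linear Unit: f(x) = max(0, x)
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    # Leaky ReLU: keeps a small slope alpha for negative inputs,
    # so the gradient there is small but not exactly zero
    return np.where(x > 0, x, alpha * x)

def softplus(x):
    # Smooth approximation of ReLU: f(x) = ln(1 + exp(x))
    return np.log1p(np.exp(x))

x = np.array([-2.0, -0.5, 0.0, 1.5])
print(relu(x))        # [0.  0.  0.  1.5]
print(leaky_relu(x))  # [-0.02  -0.005  0.  1.5]
print(softplus(x))    # smooth, strictly positive values

Note how relu zeroes out every negative input (the sparsity mentioned above), while leaky_relu and softplus keep a non-zero response for negative values.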

• Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification
• Rectifier Nonlinearities Improve Neural Network Acoustic Models
• Rectified Linear Units Improve Restricted Boltzmann Machines