Représentation sémantique compacte
Domain
Vocabulary, Deep learning
Definition
French
représentation sémantique compacte
English
Embedding
An embedding maps an input representation, such as a word or sentence, into a vector. A popular type of embedding is the word embedding, such as word2vec or GloVe. We can also embed sentences, paragraphs or images. For example, by mapping images and their textual descriptions into a common embedding space and minimizing the distance between them, we can match labels with images. Embeddings can be learned explicitly, as in word2vec, or as part of a supervised task, such as sentiment analysis. Often, the input layer of a network is initialized with pre-trained embeddings, which are then fine-tuned to the task at hand.
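A minimal PyTorch sketch of that last point, assuming a hypothetical 5-word vocabulary with 4-dimensional vectors standing in for pre-trained word2vec or GloVe vectors: the embedding layer is initialized from those vectors and left trainable so it can be fine-tuned on the task at hand.

    import torch
    import torch.nn as nn

    # Hypothetical pre-trained vectors: 5 words, 4 dimensions each
    # (in practice these would come from word2vec or GloVe).
    pretrained_vectors = torch.randn(5, 4)

    # Initialize an embedding layer from the pre-trained vectors;
    # freeze=False keeps it trainable so it can be fine-tuned.
    embedding = nn.Embedding.from_pretrained(pretrained_vectors, freeze=False)

    # Map a sequence of word indices to their vectors.
    word_ids = torch.tensor([0, 3, 2])
    vectors = embedding(word_ids)
    print(vectors.shape)  # torch.Size([3, 4])

Looking up indices in this table is what "mapping an input representation into a vector" means in practice; the same idea extends to sentences or images by training an encoder whose output lives in the shared embedding space.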
Contributors: Claude Coulombe, Jacques Barolet, Patrick Drouin, wiki