Revision as of 19 March 2026, 15:11
under construction
Definition
Semantic evaluation metric for machine translation that computes word-level similarity between a machine translation output and a reference translation using compact semantic vectors (embeddings). It was invented as an improvement on n-gram-based metrics (see BLEU) and addresses two common pitfalls of those methods:
- they often fail to match paraphrases and synonyms reliably;
- n-gram models fail to capture distant dependencies and penalize semantically critical ordering changes.
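The greedy-matching computation behind BERTScore can be illustrated with a minimal sketch. This is not the reference implementation: it uses small hand-made numpy vectors in place of real contextual BERT embeddings, and the function name `bertscore_f1` is chosen here for illustration. Each candidate token is matched to its most similar reference token (precision), each reference token to its most similar candidate token (recall), and the two are combined into an F1 score.

```python
import numpy as np

def bertscore_f1(cand_emb, ref_emb):
    """Greedy-matching BERTScore on precomputed token embeddings.

    cand_emb: (m, d) array of candidate token embeddings
    ref_emb:  (n, d) array of reference token embeddings
    """
    # L2-normalize so dot products become cosine similarities
    c = cand_emb / np.linalg.norm(cand_emb, axis=1, keepdims=True)
    r = ref_emb / np.linalg.norm(ref_emb, axis=1, keepdims=True)
    sim = c @ r.T                       # (m, n) cosine similarity matrix
    precision = sim.max(axis=1).mean()  # each candidate token -> best reference match
    recall = sim.max(axis=0).mean()     # each reference token -> best candidate match
    return 2 * precision * recall / (precision + recall)

# Toy embeddings standing in for contextual vectors (hypothetical values)
ref = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
paraphrase = np.array([[0.9, 0.1], [0.1, 0.9], [0.6, 0.8]])

print(round(float(bertscore_f1(ref, ref)), 3))  # identical texts score 1.0
print(float(bertscore_f1(paraphrase, ref)))     # close paraphrase scores near 1.0
```

In practice the token vectors come from a pretrained contextual model such as BERT, which is what lets the metric reward paraphrases and tolerate reordering where n-gram overlap would fail.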
French
BERTScore
English
BERTScore
A metric for automatic evaluation of machine translation that calculates the similarity between a machine translation output and a reference translation using embeddings. It was invented as an improvement on n-gram-based metrics (see BLEU) and addresses two common pitfalls of these: 1) such methods often fail to robustly match paraphrases; 2) n-gram models fail to capture distant dependencies and penalize semantically critical ordering changes.
Sources
Contributors: Arianne Arel, wiki