Verbalized Sampling
== UNDER CONSTRUCTION ==

== Definition ==
xxxxx

See also '''[[LLM-as-a-judge]]'''

== French ==
''' xxxxx '''

== Notes ==
<!--This method is orthogonal to temperature.-->

== English ==
'''Verbalized Sampling'''

''' VS'''

<!--A prompting strategy that improves LLM diversity by asking the model to generate multiple responses with their probabilities, then sampling from this distribution. It is training-free, model-agnostic, and effective across tasks, and it improves performance without sacrificing the model's factual accuracy or safety.-->

== Sources ==
[https://arxiv.org/abs/2510.01171 Source: arXiv]

[https://github.com/CHATS-lab/verbalized-sampling Source: GitHub]

[https://www.verbalized-sampling.com/ Source: Verbalized Sampling]

[[Catégorie:vocabulary]]
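The draft note in the English section describes the core mechanism: prompt the model for several candidate responses, each tagged with a verbalized probability, then sample from that distribution rather than taking the single most likely answer. The page's creation summary adds that this training-free approach reportedly achieves a 2-3x diversity improvement while maintaining quality. A minimal sketch of the sampling step, assuming a hypothetical already-parsed model reply; the prompt wording, field names, and candidate texts below are illustrative, not the paper's exact template:

```python
import random

# Illustrative Verbalized Sampling prompt: ask the model for several
# candidates, each with a self-reported probability. (Hypothetical wording,
# not the exact template from the paper.)
VS_PROMPT = (
    "Generate 5 responses to the question below. For each response, "
    "give the text and a numeric probability of how likely it is.\n"
    "Question: {question}"
)


def sample_verbalized(candidates, rng=random):
    """Pick one candidate text, weighted by its verbalized probability."""
    texts = [c["text"] for c in candidates]
    weights = [c["probability"] for c in candidates]
    return rng.choices(texts, weights=weights, k=1)[0]


# Example: a hypothetical parsed model reply to a creative-writing prompt.
parsed = [
    {"text": "a joke about time travel", "probability": 0.55},
    {"text": "a pun about clocks", "probability": 0.30},
    {"text": "an absurdist one-liner", "probability": 0.15},
]

print(sample_verbalized(parsed))
```

Sampling from the verbalized distribution, instead of always returning the highest-probability candidate, is what restores diversity across repeated calls; the probabilities need not be exactly calibrated for this to help.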
Latest revision as of 24 February 2026, 11:02
Contributors: Arianne Arel, wiki