== Under construction ==
== Definition ==
Representation fine-tuning (ReFT) is a parameter-efficient fine-tuning approach that adapts a language model by learning small, trainable interventions on its internal representations while the pretrained weights stay frozen.
== French ==
''' peaufinage de représentations '''

''' peaufinage d'espace latent '''
== English ==
''' representation fine-tuning '''

<!-- PYREFT, a representation fine-tuning (ReFT) library that supports adapting internal language model representations via trainable interventions. With fewer fine-tuning parameters and more robust performance, pyreft can boost fine-tuning efficiency and decrease fine-tuning cost, while opening the door to studying the interpretability of the adapted parameters. -->
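<!-- A minimal usage sketch, assuming the interface shown in the pyreft README (names such as ReftConfig, LoreftIntervention and get_reft_model are taken from that README and should be verified against the repository): a small trainable intervention is attached to the hidden representations of one transformer layer while the base model's weights stay frozen.

<syntaxhighlight lang="python">
import torch
import transformers
import pyreft

# Load a frozen base model (any causal LM supported by transformers).
model = transformers.AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", torch_dtype=torch.bfloat16, device_map="cuda"
)

# Attach a rank-4 LoReFT intervention to the residual-stream output of
# layer 15; only the intervention's parameters are trainable.
reft_config = pyreft.ReftConfig(representations={
    "layer": 15,
    "component": "block_output",
    "low_rank_dimension": 4,
    "intervention": pyreft.LoreftIntervention(
        embed_dim=model.config.hidden_size, low_rank_dimension=4
    ),
})
reft_model = pyreft.get_reft_model(model, reft_config)
reft_model.print_trainable_parameters()  # reports how few parameters are trained
</syntaxhighlight>
-->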
== Source ==
[https://arxiv.org/pdf/2404.03592 ''ReFT: Representation Finetuning for Language Models'', Wu et al. (2024)]

[https://blog.paperspace.com/reft-representation-finetuning-for-language-models/ ''ReFT: Representation Finetuning for Language Models'', Shaoni Mukherjee (2024)]

[https://medium.com/@techsachin/representation-fine-tuning-reft-a-powerful-parameter-efficient-way-to-fine-tune-language-models-3bc6dd14e8b5 ''Representation fine-tuning (ReFT): A Powerful Parameter-Efficient Way to Fine-tune Language Models'', Sachin Mukar]

[https://github.com/stanfordnlp/pyreft GitHub - pyreft]

[[Catégorie:Publication]]