« Peaufinage de l'espace latent »


== Under construction ==
 
== Definition ==
A parameter-efficient fine-tuning approach that adapts a language model by training small interventions on its internal hidden representations, while the original model weights stay frozen.


== French ==
''' peaufinage de représentations '''

''' peaufinage d'espace latent '''


== English ==
''' representation fine-tuning '''


PyReFT is a representation fine-tuning (ReFT) library that supports adapting a language model's internal representations via trainable interventions. With fewer fine-tuning parameters and more robust performance, pyreft can boost fine-tuning efficiency and decrease fine-tuning cost, while opening the door to studying the interpretability of the adapted parameters.
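The intervention at the heart of ReFT can be illustrated with the LoReFT update from Wu et al. (2024): a hidden vector h is edited as Φ(h) = h + Rᵀ(Wh + b − Rh), where R is a low-rank projection with orthonormal rows and W, b are trainable. The sketch below is a plain-Python toy with tiny dimensions, not the pyreft API; all names (`loreft`, `matvec`) are illustrative.

```python
# Toy sketch of the LoReFT intervention: phi(h) = h + R^T (W h + b - R h),
# with R an r x d matrix whose rows are orthonormal, W an r x d matrix,
# and b an r-dimensional bias. Plain Python lists, no libraries.

def matvec(M, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in M]

def loreft(h, R, W, b):
    """Apply the LoReFT edit to hidden vector h."""
    # Compute the r-dimensional correction: delta = W h + b - R h
    Wh = matvec(W, h)
    Rh = matvec(R, h)
    delta = [w + bi - r for w, bi, r in zip(Wh, b, Rh)]
    # Project the correction back into the d-dimensional space: h + R^T delta
    out = list(h)
    for i in range(len(h)):
        out[i] += sum(R[k][i] * delta[k] for k in range(len(R)))
    return out

# Hidden size d = 3, intervention rank r = 1; R's single row is unit-norm,
# so the edit only touches the 1-D subspace R spans (the first coordinate).
R = [[1.0, 0.0, 0.0]]
W = [[0.0, 0.0, 0.0]]
b = [2.0]
h = [1.0, 1.0, 1.0]
print(loreft(h, R, W, b))  # -> [2.0, 1.0, 1.0]
```

Because R has rank r ≪ d, the trainable parameters per intervention are only 2·r·d + r, which is where the parameter efficiency claimed above comes from.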




== Sources ==


[https://github.com/stanfordnlp/pyreft Source: ''pyreft'', GitHub]

[https://arxiv.org/pdf/2404.03592 Source: Wu et al. (2024), ''ReFT: Representation Finetuning for Language Models'']

[https://blog.paperspace.com/reft-representation-finetuning-for-language-models/ Source: Shaoni Mukherjee (2024), ''ReFT: Representation Finetuning for Language Models'']

[https://medium.com/@techsachin/representation-fine-tuning-reft-a-powerful-parameter-efficient-way-to-fine-tune-language-models-3bc6dd14e8b5 Source: Sachin Mukar, ''Representation Fine-tuning (ReFT): A Powerful Parameter-Efficient Way to Fine-tune Language Models'']




[[Catégorie:vocabulary]]
[[Catégorie:Publication]]

Revision as of 14:32, 12 June 2024