Orca

==under construction==
== Definition ==
Orca is a 13-billion-parameter model that learns explanation traces, step-by-step thought processes, and complex instructions from GPT-4, guided by teacher assistance from ChatGPT, in order to significantly improve on state-of-the-art (SOTA) instruction-tuned models.
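
As an illustration, here is a minimal Python sketch of what one "explanation tuning" training example might look like: a system message that asks the teacher for step-by-step reasoning, a user query, and the teacher's explanation trace as the target. The function name, field names, prompt format, and system message are illustrative assumptions, not Microsoft's actual data format.

<syntaxhighlight lang="python">
# Illustrative sketch (assumed format) of one Orca-style training triple.

def format_example(system_message: str, user_query: str, teacher_response: str) -> str:
    """Flatten one (system, query, explanation-trace) triple into a
    single string for supervised fine-tuning of the student model."""
    return (
        f"### System:\n{system_message}\n\n"
        f"### User:\n{user_query}\n\n"
        f"### Assistant:\n{teacher_response}"
    )

# The system message asks the teacher (e.g. GPT-4) for step-by-step
# reasoning; the student then imitates the trace, not just the answer.
example = format_example(
    system_message="You are a helpful assistant. Think step by step "
                   "and justify your answer.",
    user_query="If a train travels 120 km in 1.5 hours, what is its "
               "average speed?",
    teacher_response="Average speed is distance divided by time: "
                     "120 km / 1.5 h = 80 km/h.",
)
print(example)
</syntaxhighlight>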


== French ==
''' Orca '''


== English ==
''' Orca'''
  '''Progressive Learning from Complex Explanation Traces of GPT-4'''
  Recent research has focused on enhancing the capability of smaller models through imitation learning, drawing on the outputs generated by large foundation models (LFMs). A number of issues impact the quality of these models: limited imitation signals from shallow LFM outputs; small-scale, homogeneous training data; and, most notably, a lack of rigorous evaluation, which overestimates the small model's capability, as such models tend to learn to imitate the style, but not the reasoning process, of LFMs. To address these challenges, we develop Orca, a 13-billion-parameter model that learns to imitate the reasoning process of LFMs. Orca learns from rich signals from GPT-4, including explanation traces, step-by-step thought processes, and other complex instructions, guided by teacher assistance from ChatGPT. To promote this progressive learning, we tap into large-scale and diverse imitation data with judicious sampling and selection. Orca surpasses conventional state-of-the-art instruction-tuned models such as Vicuna-13B by more than 100% on complex zero-shot reasoning benchmarks like Big-Bench Hard (BBH) and by 42% on AGIEval. Moreover, Orca reaches parity with ChatGPT on the BBH benchmark and shows competitive performance (a 4-point gap with an optimized system message) on professional and academic examinations like the SAT, LSAT, GRE, and GMAT, in zero-shot settings without CoT, while trailing behind GPT-4. Our research indicates that learning from step-by-step explanations, whether these are generated by humans or more advanced AI models, is a promising direction to improve model capabilities and skills.
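
The imitation learning described above can be sketched as a standard supervised objective: next-token cross-entropy computed only on the teacher's response tokens, with the prompt masked out of the loss. The PyTorch sketch below is an assumption about such a training setup, not Microsoft's released code.

<syntaxhighlight lang="python">
# Minimal sketch (assumed setup) of a masked imitation-learning loss.
import torch
import torch.nn.functional as F

def imitation_loss(logits: torch.Tensor, input_ids: torch.Tensor, prompt_len: int) -> torch.Tensor:
    """Next-token cross-entropy over the teacher-response tokens only.

    logits:     (seq_len, vocab_size) student-model predictions
    input_ids:  (seq_len,) prompt tokens followed by response tokens
    prompt_len: number of leading prompt tokens to exclude from the loss
    """
    shift_logits = logits[:-1]             # position t predicts token t+1
    shift_labels = input_ids[1:].clone()
    shift_labels[: prompt_len - 1] = -100  # mask the prompt; -100 is ignored
    return F.cross_entropy(shift_logits, shift_labels, ignore_index=-100)

# Toy usage with random tensors standing in for a real model's output.
vocab_size, seq_len = 50, 12
logits = torch.randn(seq_len, vocab_size)
input_ids = torch.randint(0, vocab_size, (seq_len,))
print(imitation_loss(logits, input_ids, prompt_len=5))
</syntaxhighlight>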




[https://www.microsoft.com/en-us/research/publication/orca-progressive-learning-from-complex-explanation-traces-of-gpt-4 Source: Microsoft]


[https://syncedreview.com/2023/06/09/microsofts-orca-learns-from-complex-explanation-traces-of-gpt-4-to-significantly-enhance-smaller-models/ Source: Synced]




[[Catégorie:vocabulary]]
