Requête enrichie par graphes de connaissances

Definition

Refining the output of a large language model by enriching prompts with factual knowledge generated from rules extracted from knowledge graphs. This makes the results more accurate and more contextually relevant.
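
A minimal sketch of this kind of prompt enrichment, in Python. The toy triples, the keyword-matching retrieval step and the prompt template are illustrative assumptions, not the method of Zhang et al. (2024):

    # Toy knowledge graph stored as (subject, relation, object) triples.
    # Entities, relations and the retrieval heuristic are invented for illustration.
    KNOWLEDGE_GRAPH = [
        ("Marie Curie", "born_in", "Warsaw"),
        ("Warsaw", "capital_of", "Poland"),
        ("Marie Curie", "field", "physics"),
    ]

    def retrieve_facts(question, graph):
        """Keep the triples whose subject or object is mentioned in the question."""
        q = question.lower()
        return [(s, r, o) for (s, r, o) in graph
                if s.lower() in q or o.lower() in q]

    def build_enriched_prompt(question):
        """Prepend the retrieved facts so the model can ground its answer on them."""
        facts = retrieve_facts(question, KNOWLEDGE_GRAPH)
        fact_lines = "\n".join(f"- {s} {r.replace('_', ' ')} {o}" for s, r, o in facts)
        return ("Known facts:\n" + fact_lines + "\n\n"
                "Question: " + question + "\n"
                "Answer step by step, using the facts above whenever possible.")

    print(build_enriched_prompt("In which country was Marie Curie born?"))

The enriched prompt is then sent to the model in place of the bare question.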

Notes

This entry is a work in progress.

The rules are extracted from a knowledge graph with a classic breadth-first search algorithm.

The CHAIN-OF-KNOWLEDGE framework has two main components: dataset construction and model learning. For dataset construction, compositional rules are first mined from knowledge graphs; these rules capture patterns describing how different facts can be combined to infer new knowledge. Knowledge triples matching the rules are then selected, and large language models turn this structured knowledge into natural-language questions and reasoning steps. For model learning, simply fine-tuning LLMs on this data led to "rule overfitting", where models applied rules even without supporting facts; a trial-and-error mechanism was therefore introduced, in which the model tries different rules and backtracks when it lacks a key fact, mimicking how humans explore their internal knowledge while reasoning.
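
A minimal sketch of such a breadth-first extraction, in Python, on a toy graph; the entities, relations and depth limit are illustrative assumptions, not the construction used in the paper:

    from collections import deque

    # Toy knowledge graph: entity -> outgoing (relation, object) edges.
    GRAPH = {
        "Alice":    [("works_for", "AcmeCorp"), ("lives_in", "Lyon")],
        "AcmeCorp": [("headquartered_in", "Paris")],
        "Paris":    [("located_in", "France")],
        "Lyon":     [("located_in", "France")],
    }

    def extract_chains(start, max_hops=3):
        """Breadth-first search returning every chain of at most `max_hops`
        triples reachable from `start`.  Composed end to end, such chains play
        the role of the fact chains / compositional rules mined from the graph
        (e.g. works_for followed by headquartered_in)."""
        chains = []
        queue = deque([(start, [])])          # (current entity, triples so far)
        while queue:
            entity, path = queue.popleft()
            for relation, neighbour in GRAPH.get(entity, []):
                new_path = path + [(entity, relation, neighbour)]
                chains.append(new_path)
                if len(new_path) < max_hops:
                    queue.append((neighbour, new_path))
        return chains

    for chain in extract_chains("Alice"):
        print(" -> ".join(f"{s} {r} {o}" for s, r, o in chain))

In the framework described above, chains of this kind are then verbalised into questions and reasoning steps used to train the model.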

This technique is close to retrieval-augmented generation (in French, génération augmentée d'information applicative, GAIA).

French

requête enrichie par graphes de connaissances

requête de résolution par chaine(s) de connaissances

requête par chaine(s) de connaissances

English

Chain-of-Knowledge prompting

Chain-of-Knowledge framework prompting

CoK prompting

Chain-of-Knowledge

CoK

Source

Source: Zhang et al. (2024), Chain-of-Knowledge: Integrating Knowledge Reasoning into Large Language Models by Learning from Knowledge Graphs, https://arxiv.org/pdf/2407.00653

Source: Chain-of-Knowledge Prompting, Medium, https://cobusgreyling.medium.com/chain-of-knowledge-prompting-0285ac879ede

Source: Chain-of-Knowledge: Integrating Knowledge Reasoning into Large Language Models by Learning from Knowledge Graphs, Hugging Face, 30 June 2024, https://huggingface.co/papers/2407.00653