AutoRT




== Definition ==
A foundation model that combines a vision-language model (VLM) with a large language model and proposes diverse manipulation tasks suited to what the robots perceive. However, one of the challenges of training embodied foundation models is the lack of data grounded in the real world.


== French ==
AutoRT

== English ==
AutoRT
 Foundation models that incorporate language, vision, and more recently actions have revolutionized the ability to harness internet scale data to reason about useful tasks. However, one of the key challenges of training embodied foundation models is the lack of data grounded in the physical world.


 AutoRT is also the name of an unrelated bioinformatics tool: a peptide retention time prediction tool using deep learning. It supports retention time prediction for tryptic peptides (global proteome experiments), MHC-bound peptides (immunopeptidomics experiments) and PTM peptides (such as phosphoproteomics, ubiquitome or acetylome experiments).


  It works by combining a visual language model (VLM) to perceive the surroundings with a large language model (LLM) that proposes diverse manipulation tasks suited to what the robots see in their environment. The system directs robots, each equipped with a camera and an end effector, to perform diverse tasks in various settings. Before execution, the tasks are filtered for safety by an "LLM legislator" guided by a "Robot Constitution" inspired by Asimov's Three Laws of Robotics.
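The perceive–propose–filter loop described above can be sketched as follows. This is a minimal illustrative sketch only: every function name (`vlm_describe`, `llm_propose_tasks`, `llm_legislator_approves`) is a hypothetical placeholder, not DeepMind's actual API or implementation.

```python
# Hypothetical sketch of an AutoRT-style orchestration loop.
# All functions below are illustrative stand-ins for real models.

ROBOT_CONSTITUTION = [
    "A robot may not injure a human being.",
    "A robot must not attempt tasks involving humans, animals or sharp objects.",
]

def vlm_describe(image):
    """Placeholder: a vision-language model summarizes what the camera sees."""
    return "a table with a sponge and a can of soda"

def llm_propose_tasks(scene_description, n=5):
    """Placeholder: an LLM proposes candidate manipulation tasks for the scene."""
    return [f"pick up object {i} seen in: {scene_description}" for i in range(n)]

def llm_legislator_approves(task, constitution):
    """Placeholder: a second LLM checks a task against the Robot Constitution.
    Here approximated by simple keyword screening."""
    return "human" not in task and "sharp" not in task

def autort_step(camera_image):
    """One cycle: perceive the scene, propose tasks, keep only safe ones."""
    scene = vlm_describe(camera_image)
    candidates = llm_propose_tasks(scene)
    # Filter for safety before any task is sent to a robot for execution.
    return [t for t in candidates if llm_legislator_approves(t, ROBOT_CONSTITUTION)]

safe_tasks = autort_step(camera_image=None)
print(len(safe_tasks))  # prints 5: all proposed tasks pass the safety filter
```

The point of the sketch is the division of labor: the VLM grounds the loop in what the camera sees, the first LLM generates task diversity, and the "legislator" LLM acts as a gate before execution.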


==Sources==

[https://auto-rt.github.io/ Source: github]
[https://github.com/bzhanglab/AutoRT    Source: github]
[https://www.maginative.com/article/google-deepmind-unveils-latest-research-in-advanced-robotics-with-autort-sara-rt-and-rt-trajectory/  Source: Maginative, "Google DeepMind Unveils Latest Research in Advanced Robotics with AutoRT, SARA-RT, and RT-Trajectory"]




[[Catégorie:vocabulary]]

Latest version of 13 August 2024 at 17:04

Contributors: Arianne, wiki