One of the main problems in robotics is the need to train a separate model for each robot, task, and environment. A new project, developed by Google DeepMind together with 33 other research institutions, offers a solution: a universal AI system capable of working with a variety of physical robots and performing many different tasks.
Pannag Sanketi, a senior software engineer at Google Robotics, noted: “Robots do a great job at specialized tasks but adapt poorly to new conditions. Usually, you need to train a separate model for each task, robot, and environment.”
The project, called Open X-Embodiment, introduces two key components: a dataset containing information from many different types of robots, and a family of models capable of transferring skills across a wide range of tasks. These models were tested in robotics laboratories on different types of robots and showed excellent results compared to conventional training methods.
The Open X-Embodiment project was inspired by large language models (LLMs): when trained on large, general datasets, they can match or even outperform smaller models trained on narrow, specialized datasets. The researchers found that the same principle applies to robotics.
The RT-1-X model was tested on various tasks in five research laboratories on five commonly used robots. Compared to specialized models developed for each robot, RT-1-X achieved a 50% higher success rate on tasks such as moving objects and opening doors.
Sergey Levine, associate professor at UC Berkeley and a co-author of the paper, wrote: “Models like this usually ‘never’ work on the first attempt, but this one did.”
The researchers are considering combining these advances with innovations from DeepMind's RoboCat model. In addition, the team has publicly released the Open X-Embodiment dataset along with a smaller version of the RT-1-X model.
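For readers who want to experiment with the released data, here is a minimal sketch of how one might load a slice of it, assuming the data is distributed in RLDS format via TensorFlow Datasets; the specific dataset name and storage path below are illustrative assumptions, not details confirmed in this article.

```python
# Hedged sketch: loading one of the Open X-Embodiment robot datasets.
# The builder directory path below is an assumption for illustration only.
import tensorflow_datasets as tfds

# Each per-robot dataset is assumed to be published as an RLDS builder directory.
builder = tfds.builder_from_directory(
    builder_dir="gs://gresearch/robotics/fractal20220817_data/0.1.0"  # assumed path
)
ds = builder.as_dataset(split="train[:10]")  # load a small slice of episodes

for episode in ds:
    # RLDS episodes contain a nested "steps" dataset of observations and actions.
    for step in episode["steps"]:
        obs = step["observation"]
        action = step["action"]
        # ... (obs, action) pairs could feed a cross-embodiment training pipeline
```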
Sanketi concluded: “We hope that making the data and models available will accelerate research. The future of robotics depends on robots learning from each other and, more importantly, on researchers learning from each other.”