Artificial Intelligence Researchers Challenge Robot to Skateboard in Simulation
Artificial intelligence researchers claim to have created a four-legged robot control framework that promises better energy efficiency and adaptability than more traditional model-based control of robotic leg gaits. To demonstrate the framework's robustness and ability to adapt to conditions in real time, the researchers slid the system across a frictionless surface meant to mimic a banana peel, had it ride a skateboard, and had it climb over a ramp while walking on a treadmill. An Nvidia spokesperson told VentureBeat that only the frictionless surface test was performed in real life, due to COVID-19 limits on the number of staff allowed in the office. The spokesperson said all other challenges took place in simulation. (Simulations are often used to train robotic systems before those systems are deployed in real life.)
“Our framework learns a controller that can adapt to challenging environmental changes on the fly, including novel scenarios not seen during training. The learned controller is up to 85% more energy efficient and is also more robust than baseline methods,” the paper states. “At inference time, the high-level controller only needs to evaluate a small neural network consisting of a few fully connected layers, avoiding the expensive model predictive control (MPC) strategy that might otherwise be needed to optimize long-term performance.”
The quadruped model is trained in simulation using a split-belt treadmill with two tracks that can change speed independently. This simulation training is then transferred to a Laikago robot in the real world. Nvidia released a video of the simulations and lab work on Monday, when it also unveiled the AI-powered video conferencing service Maxine and a beta of Omniverse, a simulated environment for engineers.
A paper detailing the quadruped leg control framework was published a week ago on the arXiv preprint repository. AI researchers from Nvidia; Caltech; the University of Texas at Austin; and the Vector Institute at the University of Toronto contributed to the paper. The framework combines a high-level controller that uses reinforcement learning with a lower-level, model-based controller.
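To illustrate the hierarchical idea the paper describes, here is a minimal sketch: a small hand-rolled neural network acts as the high-level controller, choosing a contact plan (which feet are in stance) from an observation, and a stand-in "model-based" low-level controller turns that plan into per-leg targets. All names, dimensions, and the low-level logic are illustrative assumptions, not details from the paper.

```python
import math
import random

def mlp_forward(obs, w1, b1, w2, b2):
    """Tiny two-layer network: observation -> tanh hidden layer -> contact logits."""
    hidden = [math.tanh(sum(w * o for w, o in zip(row, obs)) + b)
              for row, b in zip(w1, b1)]
    return [sum(w * h for w, h in zip(row, hidden)) + b
            for row, b in zip(w2, b2)]

def high_level_contact_plan(obs, params):
    """High-level policy: decide stance (1) or swing (0) for each of four feet."""
    logits = mlp_forward(obs, *params)
    return [1 if logit > 0 else 0 for logit in logits]

def low_level_controller(contact_plan):
    """Stand-in for a model-based leg controller: stance legs hold position,
    swing legs track a fixed swing-height target (values are arbitrary)."""
    return [0.0 if in_stance else 0.5 for in_stance in contact_plan]

# Illustrative observation (e.g. body velocity and terrain features) and
# randomly initialized weights -- in practice these would come from training.
random.seed(0)
obs = [0.1, -0.2, 0.05, 0.0]
w1 = [[random.uniform(-1, 1) for _ in obs] for _ in range(8)]
b1 = [0.0] * 8
w2 = [[random.uniform(-1, 1) for _ in range(8)] for _ in range(4)]
b2 = [0.0] * 4

plan = high_level_contact_plan(obs, (w1, b1, w2, b2))
targets = low_level_controller(plan)
print(plan, targets)
```

The point of the split is the one the paper emphasizes: at inference time only the small network runs, while the cheaper low-level controller handles the physics, avoiding a full MPC solve at every step.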
“By taking advantage of the strengths of both paradigms, we obtain a contact-adaptive controller that is more robust and energy efficient than controllers that use a fixed contact sequence,” the paper says.
The researchers say that many robotic leg controllers rely on fixed gait sequences and are therefore unable to adapt to new circumstances, while adaptive controllers are often energy-intensive. They add that locomotion systems trained with reinforcement learning alone are often less robust than model-based approaches, require large numbers of training samples, or depend on complicated reward schemes for the agent.
Earlier this year at the International Conference on Robotics and Automation (ICRA), AI researchers from ETH Zurich detailed DeepGait, an AI trained with reinforcement learning to do things like cross unusually long gaps and walk over rough terrain.