
NVIDIA is accelerating robotics research and development with new open models and simulation libraries. Credit: NVIDIA
Today, NVIDIA Corp. announced the beta release of Newton, an open-source, GPU-accelerated physics engine managed by the Linux Foundation. Built on the NVIDIA Warp and OpenUSD frameworks, and co-developed by Google DeepMind, Disney Research, and NVIDIA, the beta version of Newton is now available to all robotics developers.
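For readers unfamiliar with the underlying stack, here is a minimal sketch of the Warp programming model that Newton builds on: a GPU kernel that integrates point masses under gravity. This is not the Newton API, and the particle-integration example is illustrative only; it is plain Warp code showing the kernel-level GPU execution that a physics engine like Newton is described as layering multibody dynamics, contact solving, and OpenUSD scene interchange on top of.

```python
# Minimal sketch of GPU-accelerated physics stepping with NVIDIA Warp.
# This is NOT the Newton API -- just a plain Warp kernel integrating point
# masses under gravity, to illustrate the programming model Newton builds on.
import warp as wp

wp.init()

@wp.kernel
def integrate(positions: wp.array(dtype=wp.vec3),
              velocities: wp.array(dtype=wp.vec3),
              dt: float):
    tid = wp.tid()
    gravity = wp.vec3(0.0, -9.81, 0.0)
    # semi-implicit Euler step
    velocities[tid] = velocities[tid] + gravity * dt
    positions[tid] = positions[tid] + velocities[tid] * dt

n = 1024
pos = wp.zeros(n, dtype=wp.vec3)
vel = wp.zeros(n, dtype=wp.vec3)

for _ in range(100):                           # 100 simulation steps at 60 Hz
    wp.launch(integrate, dim=n, inputs=[pos, vel, 1.0 / 60.0])

print(pos.numpy()[0])                          # first particle after ~1.7 simulated seconds
```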
The Conference on Robot Learning (CoRL) 2025 is taking place this week in Seoul, South Korea, bringing together experts in robotics and machine learning to discuss cutting-edge research and applications. Alongside the Newton beta, NVIDIA announced the latest release of its open Isaac GR00T N1.6 robot foundation model, which will be available shortly on Hugging Face.
This latest generation of GR00T will integrate NVIDIA Cosmos Reason, an open, customizable reasoning vision language model (VLM) built for physical AI. “Acting as the robot’s deep-thinking brain, Cosmos Reason turns vague instructions into step-by-step plans, using prior knowledge, common sense, and physics to handle new situations and generalize across many tasks,” said NVIDIA.
Newton to simulate bodies in physical AI
NVIDIA Jetson Thor, powered by the Blackwell GPU architecture, provides the onboard compute for real-time reasoning. Cosmos Reason enhances a robot's ability to handle ambiguous or novel instructions through multi-step inference, the company asserted.
When a robot encounters a new scene or task, Cosmos Reason helps it extrapolate from previous experiences, break down complex instructions, and construct a plan using prior knowledge and common sense. Similar to how language models reason about text, Cosmos Reason applies reasoning techniques to physical scenarios, allowing robots to understand and adapt to unfamiliar situations by using reasoning as a tool to extend beyond their initial training data.
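To make that idea concrete, here is a hedged sketch of querying a reasoning VLM through the Hugging Face transformers image-text-to-text pipeline. The model ID, image URL, and prompt below are placeholders rather than confirmed Cosmos Reason usage; the model card on Hugging Face is the authoritative reference for supported inputs and chat formatting.

```python
# Hypothetical sketch: asking a reasoning VLM to turn a vague instruction into a plan.
# The model ID and inputs are placeholders, not confirmed Cosmos Reason usage.
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="nvidia/Cosmos-Reason1-7B")  # placeholder ID

messages = [{
    "role": "user",
    "content": [
        {"type": "image", "url": "https://example.com/kitchen_scene.jpg"},  # robot camera frame
        {"type": "text", "text": "Tidy the counter. List the steps you would take and why."},
    ],
}]

outputs = pipe(text=messages, max_new_tokens=256, return_full_text=False)
print(outputs[0]["generated_text"])  # the model's step-by-step plan
```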
“Humanoids are the next frontier of physical AI, requiring the ability to reason, adapt, and act safely in an unpredictable world,” said Rev Lebaredian, vice president of Omniverse and simulation technology at NVIDIA. “With these latest updates, developers now have the three computers to bring robots from research into everyday life — with Isaac GR00T serving as the robot’s brains, Newton simulating their body, and NVIDIA Omniverse as their training ground.”
Cosmos world foundation models cut complexity
Leading robot makers such as AeiROBOT, Franka Robotics, LG Electronics, Lightwheel, Mentee Robotics, Neura Robotics, Solomon, Techman Robot, and UCR are evaluating Isaac GR00T N models for building general-purpose robots.
At CoRL, NVIDIA also announced new updates to its open Cosmos world foundation models (WFMs), which let developers generate diverse data to accelerate robot training at scale using text, image, and video prompts.
Cosmos Predict 2.5, coming soon, combines three Cosmos WFMs into a single model, cutting complexity, saving time, and boosting efficiency. It supports longer video generation, creating clips of up to 30 seconds, as well as multi-view camera outputs for richer world simulations.
Cosmos Transfer 2.5 will deliver faster, higher-quality results than previous models while being 3.5x smaller, according to NVIDIA. It can generate photorealistic synthetic data from ground-truth 3D simulation scenes or from spatial control inputs such as depth, segmentation, edges, and high-definition maps.
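As a rough illustration of what "spatial control inputs" means in practice, the sketch below packages per-frame ground truth from a simulator into depth, segmentation, and edge maps. The function name, array shapes, and output dictionary are hypothetical and are not the Cosmos Transfer API; they only show the kind of conditioning data such a model consumes.

```python
# Hypothetical data-prep sketch -- not the Cosmos Transfer API. It shows the kind of
# per-frame spatial control inputs (depth, segmentation, edges) a conditioned WFM consumes.
import numpy as np

def make_control_inputs(depth: np.ndarray, segmentation: np.ndarray) -> dict:
    """Normalize ground-truth simulator outputs into control maps for one frame.

    depth:        (H, W) float32 metric depth from the simulated camera
    segmentation: (H, W) uint8 class IDs from the simulated renderer
    """
    depth_norm = (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)  # 0..1 depth map
    # crude boundary map: mark pixels where the class ID changes horizontally
    edges = np.abs(np.diff(segmentation.astype(np.int16), axis=1, prepend=0)) > 0
    return {
        "depth": depth_norm.astype(np.float32),
        "segmentation": segmentation,
        "edges": edges.astype(np.uint8),
    }

# Stand-in for one rendered 480x640 simulation frame.
frame_depth = np.random.rand(480, 640).astype(np.float32) * 10.0
frame_seg = np.random.randint(0, 5, size=(480, 640), dtype=np.uint8)
controls = make_control_inputs(frame_depth, frame_seg)
print({name: (arr.shape, arr.dtype) for name, arr in controls.items()})
```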
New workflow helps teach robot grasping
Teaching a robot to grasp an object is one of the most difficult challenges in robotics. It is not just about moving an arm but turning a thought into a precise action — a skill robots must learn through trial and error, said NVIDIA.
The new dexterous grasping workflow in the developer preview of Isaac Lab 2.3, built on the NVIDIA Omniverse platform, trains multi-fingered hand and arm robots in a virtual world using an automated curriculum. It starts with simple tasks and gradually ramps up the complexity. The workflow changes aspects like gravity, friction, and the weight of an object, training robots to learn skills even in unpredictable environments.
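The sketch below illustrates the general technique described here, curriculum-style domain randomization, in which the randomization ranges for physics parameters widen as training progresses. It is not Isaac Lab code; the class and parameter names are hypothetical and only show the shape of the idea.

```python
# Illustrative sketch of curriculum-style domain randomization. Not Isaac Lab code;
# the parameter names and nominal values here are hypothetical.
import random
from dataclasses import dataclass

@dataclass
class PhysicsRanges:
    gravity: tuple       # m/s^2, sampled around Earth gravity
    friction: tuple      # coefficient of friction for the object surface
    object_mass: tuple   # kg

def ranges_for(progress: float) -> PhysicsRanges:
    """Widen randomization ranges as training progress goes from 0.0 to 1.0."""
    spread = 0.05 + 0.45 * progress          # 5% spread early, 50% spread late
    return PhysicsRanges(
        gravity=(-9.81 * (1 + spread), -9.81 * (1 - spread)),
        friction=(0.6 * (1 - spread), 0.6 * (1 + spread)),
        object_mass=(0.3 * (1 - spread), 0.3 * (1 + spread)),
    )

def sample_env_params(progress: float) -> dict:
    r = ranges_for(progress)
    return {
        "gravity": random.uniform(*r.gravity),
        "friction": random.uniform(*r.friction),
        "object_mass": random.uniform(*r.object_mass),
    }

# Early in training the robot sees near-nominal physics; later it must grasp
# heavier objects on slipperier surfaces under perturbed gravity.
for step, total in [(0, 1000), (500, 1000), (1000, 1000)]:
    print(step, sample_env_params(step / total))
```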
Boston Dynamics used this workflow to teach grasping to its Atlas humanoid, significantly improving the robot's manipulation capabilities. Scott Kuindersma, vice president of robotics research at Boston Dynamics, was a guest on a recent episode of The Robot Report Podcast, where he discussed the development and testing of large behavior models (LBMs) for Atlas.
The company's team collected 20 hours of teleoperation data to train LBMs that can generalize across manipulation tasks. The team demonstrated the LBMs with Atlas performing bi-manual manipulation tasks, such as picking and placing parts for the company's Spot quadruped. The process involved data collection, annotation, model training, and evaluation.
Simulation helps evaluate learned robot skills
Getting a robot to master a new skill — like picking up a cup or walking across a room — is incredibly difficult, and testing these skills on a physical robot is slow and expensive.
The solution lies in simulation, which NVIDIA said offers a way to test a robot’s learned skills against countless scenarios, tasks, and environments. But even in simulation, developers tend to build fragmented, simplified tests that do not reflect the real world. A robot that learns to navigate a perfect, simple simulation will fail the moment it faces real-world complexity.
To let developers run complex, large-scale evaluations in a simulated environment without having to build the system from scratch, NVIDIA and Lightwheel are co-developing Isaac Lab – Arena, an open-source policy evaluation framework for scalable experimentation and standardized testing. The framework will be available soon.
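The sketch below shows the general idea of scenario-based policy evaluation: scoring one learned policy across many randomized tasks rather than a single hand-built test. It is not the Isaac Lab - Arena API, which has not yet been released; the toy environment, scenario definitions, and success metric are all hypothetical.

```python
# Generic sketch of scenario-based policy evaluation in simulation. Not the
# Isaac Lab - Arena API; the toy environment and scenarios are hypothetical.
import random
from statistics import mean

class ToyGraspEnv:
    """Stand-in environment: success gets less likely as mass and friction drift from nominal."""
    def __init__(self, name: str, object_mass: float, friction: float):
        self.p_success = max(0.0, 1.0 - abs(object_mass - 0.3) - abs(friction - 0.6))
        self.t = 0
    def reset(self):
        self.t = 0
        return [0.0]                                   # dummy observation
    def step(self, action):
        self.t += 1
        done = self.t >= 10
        success = done and (random.random() < self.p_success)
        return [0.0], 0.0, done, {"task_success": success}

def evaluate(policy, scenarios, episodes_per_scenario=20):
    """Success rate of a policy (obs -> action) for each scenario definition."""
    results = {}
    for sc in scenarios:
        outcomes = []
        for _ in range(episodes_per_scenario):
            env = ToyGraspEnv(**sc)
            obs, done, success = env.reset(), False, False
            while not done:
                obs, _, done, info = env.step(policy(obs))
                success = success or info["task_success"]
            outcomes.append(success)
        results[sc["name"]] = mean(outcomes)
    return results

scenarios = [
    {"name": "nominal",      "object_mass": 0.3, "friction": 0.6},
    {"name": "heavy_object", "object_mass": 0.8, "friction": 0.6},
    {"name": "low_friction", "object_mass": 0.3, "friction": 0.2},
]
print(evaluate(lambda obs: 0, scenarios))              # dummy policy that ignores observations
```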
Humanoid robotics will be a featured track at the upcoming RoboBusiness event on Oct. 15 and 16 in Santa Clara, Calif. Deepu Talla, vice president of robotics and edge AI at NVIDIA, will kick off the event with a keynote titled "Physical AI for the New Era of Robotics."
Jim Fan, director of AI and distinguished scientist at NVIDIA, and Amit Goel, director of product management for autonomous machines at NVIDIA, are also featured keynote speakers. There is still time to register and attend the event.