Robotic Helper Making Mistakes? MIT and NVIDIA’s New Framework Lets You Nudge It in the Right Direction

Robots are increasingly becoming our helpers, from cleaning dishes to assisting in factories. But what happens when these robotic helpers make mistakes? Imagine a robot trying to grab a soapy bowl but slightly missing the mark. A new framework developed by researchers at MIT and NVIDIA offers an intuitive solution: simply nudge it in the right direction.

This innovative method allows users to correct a robot’s behavior through simple interactions. Users can point to the desired object on a screen, trace a trajectory, or even physically nudge the robot’s arm. The key advantage is that this technique doesn’t require retraining the machine-learning model, saving time and resources.

Unlike other approaches to correcting robot behavior, this technique does not require users to collect new data or retrain the machine-learning model that powers the robot's "brain." Instead, the robot uses intuitive, real-time human feedback to choose a feasible action sequence that comes as close as possible to satisfying the user's intent.

According to the research, the framework’s success rate was 21% higher compared to alternative methods that didn’t involve human intervention. This framework could eventually allow users to easily guide robots to perform a variety of household tasks, even in unfamiliar environments.

“We can’t expect laypeople to perform data collection and fine-tune a neural network model. The consumer will expect the robot to work right out of the box, and if it doesn’t, they would want an intuitive mechanism to customize it. That is the challenge we tackled in this work,” says Felix Yanwei Wang, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on this method.

The research is a collaborative effort between MIT and NVIDIA, with key contributions from Lirui Wang PhD ’24 and Yilun Du PhD ’24; senior author Julie Shah, an MIT professor of aeronautics and astronautics and the director of the Interactive Robotics Group in the Computer Science and Artificial Intelligence Laboratory (CSAIL); as well as Balakumar Sundaralingam, Xuning Yang, Yu-Wei Chao, Claudia Perez-D’Arpino PhD ’19, and Dieter Fox of NVIDIA. The findings are set to be presented at the International Conference on Robotics and Automation (ICRA).

Generative AI models are increasingly used to train robots, teaching them a "policy," or set of rules, for completing tasks. These models learn feasible robot motions during training, ensuring valid trajectories. However, a valid trajectory might not always align with a user’s intent. For example, a robot trained to pick boxes off a shelf might fail to retrieve one from a bookshelf oriented differently from anything it saw during training.

Instead of expensive retraining, the MIT researchers focused on enabling users to correct the robot’s behavior in real-time. The framework offers three correction methods: pointing to the object in the robot’s camera view, tracing a trajectory, or physically moving the robot’s arm.
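The three correction modes above could be represented as simple data types that a steering system consumes. This is a minimal, hypothetical sketch; the class names and fields are illustrative and not taken from the researchers' codebase.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical containers for the three correction modes described above.

@dataclass
class PointCorrection:
    """User clicks the desired object in the robot's camera view."""
    pixel: Tuple[int, int]              # (u, v) image coordinates

@dataclass
class TraceCorrection:
    """User sketches a rough path on the screen."""
    pixels: List[Tuple[int, int]]       # ordered (u, v) waypoints

@dataclass
class NudgeCorrection:
    """User physically guides the arm; recorded as joint configurations."""
    joint_positions: List[List[float]]  # one configuration per timestep
```

Each mode carries progressively more information: a point gives a single 2D target, a trace gives a 2D path, and a physical nudge records the arm's full configuration over time.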

“When you are mapping a 2D image of the environment to actions in a 3D space, some information is lost. Physically nudging the robot is the most direct way of specifying user intent without losing any of the information,” says Wang.

To prevent these interactions from steering the robot into invalid actions, the researchers employ a specific sampling procedure: rather than executing the user's correction directly, the model draws from the actions it has learned are feasible and selects the one that best aligns with the user's goal.
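The idea of sampling only feasible actions and picking the one closest to the user's intent can be sketched as a simple best-of-N loop. This is a hedged illustration under assumed interfaces (`sample_trajectory`, `is_valid`, `intent_cost` are placeholders), not the authors' actual algorithm:

```python
def steer_policy(sample_trajectory, is_valid, intent_cost, num_samples=64):
    """Sketch of inference-time steering: draw candidate trajectories from
    the frozen policy, discard infeasible ones, and return the candidate
    that best matches the user's correction.

    sample_trajectory: () -> trajectory   # one action sequence from the policy
    is_valid:   trajectory -> bool        # e.g. collision / kinematics check
    intent_cost: trajectory -> float      # distance to the user's point/trace/nudge
    """
    best, best_cost = None, float("inf")
    for _ in range(num_samples):
        traj = sample_trajectory()
        if not is_valid(traj):
            continue                      # never execute an infeasible plan
        cost = intent_cost(traj)
        if cost < best_cost:
            best, best_cost = traj, cost
    return best                           # None if no valid candidate was found
```

Because every candidate comes from the trained policy itself, the robot only ever executes motions it already knows are valid; the user's feedback merely reranks them.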

This sampling method enabled the researchers’ framework to outperform the other methods they compared it to during simulations and experiments with a real robot arm in a toy kitchen.

The researchers plan to improve the speed of the sampling procedure and explore robot policy generation in new environments. This could pave the way for robots that continuously learn and adapt to user preferences, making them more effective and user-friendly.
