Friday, December 27, 2024

Can robots learn from videos?

Researchers at Carnegie Mellon University have enabled robots to learn household chores by observing home videos depicting people engaging in everyday tasks.

A team from Carnegie Mellon University’s Robotics Institute used affordances to teach robots how to interact with objects. Credit: Carnegie Mellon University

Current robot training methods rely on human demonstrations or simulated environments, both of which are time-consuming and prone to failure. Earlier work showed that robots can learn by watching humans perform tasks, but that method, known as In-the-Wild Human Imitating Robot Learning (WHIRL), required a human to complete the task in the same environment as the robot.

The researchers say the advance expands what home robots can usefully do, from cooking to cleaning. In their experiments, two robots learned 12 tasks, including opening drawers, oven doors, and lids; removing pots from stoves; and picking up telephones, vegetables, and cans of soup.

The latest model removes both the requirement for human demonstrations and the need for the robot to operate in an identical environment. As with WHIRL, the robot still needs practice to master a task; the team’s research showed it can pick up a new task in as little as 25 minutes. The model also lets robots explore their surroundings on their own. To teach robots how to interact with objects, the team drew on the concept of affordances. The term comes from psychology, where an affordance is an opportunity for action that an environment offers an individual; the idea has since been adopted in design and human-computer interaction to describe the actions a person perceives as possible.

In the Vision-Robotics Bridge (VRB) model, affordances define where and how a robot can interact with an object, based on observed human behaviour. When the robot watches a human open a drawer, for instance, it identifies the contact points, such as the handle, and the direction of the drawer’s movement, typically straight out from its starting position. After analysing several videos of humans opening drawers, the robot can learn to open any drawer. The team drew on large video datasets such as Ego4D and Epic Kitchens for the research.
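
To make the idea concrete, here is a minimal sketch of what such an affordance representation might look like in Python. Everything in it is illustrative rather than taken from the CMU system: the `Affordance` record, the `aggregate_affordances` helper, and the toy pixel coordinates are assumptions, used only to show how contact points and post-contact motion directions from several clips could be fused into a single grasp-and-pull cue.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Affordance:
    """One observation extracted from a human video clip (hypothetical structure)."""
    contact_point: np.ndarray  # (x, y) pixel where the hand first touches, e.g. a handle
    trajectory: np.ndarray     # (T, 2) hand positions after contact

def aggregate_affordances(observations):
    """Fuse observations of the same task into an average contact point
    and a unit vector for the dominant post-contact motion direction."""
    contacts = np.stack([o.contact_point for o in observations])
    # Motion direction: displacement from the first to the last trajectory point.
    directions = np.stack([o.trajectory[-1] - o.trajectory[0] for o in observations])
    mean_contact = contacts.mean(axis=0)
    mean_dir = directions.mean(axis=0)
    return mean_contact, mean_dir / np.linalg.norm(mean_dir)

# Toy example: three clips of a drawer pulled straight out (toward +x in image coordinates).
clips = [
    Affordance(np.array([320.0, 240.0]), np.array([[320.0, 240.0], [380.0, 242.0]])),
    Affordance(np.array([318.0, 238.0]), np.array([[318.0, 238.0], [376.0, 239.0]])),
    Affordance(np.array([322.0, 241.0]), np.array([[322.0, 241.0], [384.0, 240.0]])),
]
contact, direction = aggregate_affordances(clips)
print(f"grasp near pixel {contact}, then pull along {direction}")
```

A learned system such as VRB would predict these quantities from raw images with neural networks rather than averaging hand-labelled points, but the output handed to the robot, a place to make contact and a direction to move, has essentially this shape.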

The researchers believe this work could eventually let robots learn from the vast number of internet and YouTube videos available to them.

Reference: More information is available on the project’s website and in a paper presented in June at the Conference on Computer Vision and Pattern Recognition (CVPR).

Nidhi Agarwal
Nidhi Agarwal is a journalist at EFY. She is an Electronics and Communication Engineer with over five years of academic experience. Her expertise lies in working with development boards and IoT cloud. She enjoys writing because it lets her share her knowledge and insights about electronics with like-minded techies.
