Conference on Robot Learning: Teaching Robots to Cook, Navigate and Learn From Mistakes

December 7, 2020

From robots that help with everyday tasks and move objects in complex environments to robots that learn "on the job," USC computer scientists presented their research at the 4th Conference on Robot Learning.

Researchers from labs across the Department of Computer Science presented their work virtually at the 4th Conference on Robot Learning (CoRL).

USC computer science faculty and students virtually presented their research in the field of intelligent robotics at the 4th Conference on Robot Learning (CoRL), which ran Nov. 16-18. Robot learning is a research field at the intersection of machine learning and robotics that aims to build more intelligent systems.

Since launching in 2017, CoRL has quickly become one of the world’s top academic gatherings at the intersection of robotics and machine learning, described as “a selective, single-track conference for robot learning research, covering a broad range of topics spanning robotics, machine learning and control, and including theory and applications.” In total, seven USC-affiliated papers were presented.

Learning from past experience

A team of USC Viterbi computer scientists, led by Joseph Lim, assistant professor of computer science, received the Best Paper Presentation award.

The team explored how robots can learn everyday tasks, like setting a table or cooking, by leveraging experience from solving other related tasks. As a result, the team’s agent was able to quickly learn complex manipulation tasks, which prior work struggled with.

“Think of it as something similar to how you can learn to cook a new dish very quickly when you are already an experienced chef,” said lead author Karl Pertsch, a Ph.D. student in the Cognitive Learning for Vision and Robotics Lab (CLVR), supervised by Lim. “In particular, we showed how the robot can extract reusable subskills from its prior experience, like opening microwaves or turning on stoves, and efficiently transfer them to solve new tasks.”
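For readers curious how a skill prior might plug into a learning algorithm, below is a minimal, hypothetical sketch in PyTorch: the policy's distribution over latent skills is pulled toward a prior learned from past experience via a KL penalty, rather than toward uniform exploration. The network sizes, skill dimension, and weighting term `alpha` are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: a KL penalty keeps the task policy close to a skill prior
# that was learned from prior experience. Sizes and weights are illustrative.
import torch
import torch.nn as nn
from torch.distributions import Normal, kl_divergence

STATE_DIM, SKILL_DIM = 32, 10

class GaussianHead(nn.Module):
    """Maps a state to a Gaussian distribution over latent skills."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, 2 * SKILL_DIM))
    def forward(self, state):
        mean, log_std = self.net(state).chunk(2, dim=-1)
        return Normal(mean, log_std.exp())

policy = GaussianHead()       # adapted to the new task
skill_prior = GaussianHead()  # pretrained on past experience, kept frozen here

def regularized_objective(state, q_value, alpha=0.1):
    """Value term minus a KL penalty toward the learned skill prior."""
    pi = policy(state)
    with torch.no_grad():
        prior = skill_prior(state)
    kl = kl_divergence(pi, prior).sum(-1)
    return (q_value - alpha * kl).mean()

# One dummy update step on a random batch.
state, q_value = torch.randn(8, STATE_DIM), torch.randn(8)
loss = -regularized_objective(state, q_value)
loss.backward()
```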


A robot that never forgets its training

Building machines that can learn from their mistakes is one of the great promises of robot learning research. “If we can build robots which automatically adapt themselves to ever-changing environments and requests from humans, we can greatly expand where and for whom robots can improve everyday life,” said Ryan Julian, a Ph.D. student in computer science.

But most of today's learning robots don't do this: they are trained once in the lab and do not learn new things once deployed in the real world. In a new paper, Julian and a team of researchers from the USC Viterbi School of Engineering and Google demonstrate how a real-world robot can learn to do new things, or work in new places, by combining its experiences from training with its successes and failures from the real world. "Using our method, the robot can learn new things over and over again, without ever forgetting its training," said Julian.
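As a rough illustration of that recipe (a sketch under assumptions, not the paper's exact algorithm), a deployed robot could keep one replay buffer that mixes its original training experience with newly collected real-world transitions, so each update refines the old behavior instead of overwriting it. The buffer layout and mixing ratio below are assumptions.

```python
# Hypothetical sketch: mix pre-training experience with new real-world experience
# so fine-tuning updates do not discard what the robot already learned.
import random

class MixedReplayBuffer:
    def __init__(self, pretraining_data, mix_ratio=0.5):
        self.pretraining_data = list(pretraining_data)  # frozen lab experience
        self.new_data = []                              # grows during deployment
        self.mix_ratio = mix_ratio

    def add(self, transition):
        self.new_data.append(transition)

    def sample(self, batch_size):
        n_new = int(batch_size * self.mix_ratio) if self.new_data else 0
        batch = random.choices(self.new_data, k=n_new) if n_new else []
        batch += random.choices(self.pretraining_data, k=batch_size - n_new)
        return batch

# Usage: the robot keeps adding deployment experience and sampling mixed batches.
buffer = MixedReplayBuffer(pretraining_data=[("s_lab", "a_lab", 1.0, "s_lab_next")])
buffer.add(("s_real", "a_real", 0.0, "s_real_next"))
print(buffer.sample(4))
```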


Learning from demonstrations

Learning from demonstrations is an increasingly popular way to obtain effective robot control policies for complex tasks. But the approach is susceptible to imperfections in the demonstrations and raises safety concerns, since robots may learn unsafe or undesirable actions.

A trio of researchers, including assistant professors Stefanos Nikolaidis and Jyotirmoy Deshmukh, developed a method that could allow robots to learn new tasks from observing a small number of demonstrations, even imperfect ones, using signal temporal logic (STL). While current state-of-the-art methods need at least 100 demonstrations to master a specific task, this new method allows robots to learn from only a handful of demonstrations.

Above: Using the USC researchers' method, an autonomous driving system would still be able to learn safe driving skills from "watching" imperfect demonstrations, such as this driving demonstration on a racetrack. Source: Driver demonstrations were provided through the Udacity Self-Driving Car Simulator.
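To make the idea concrete, here is a simplified, hypothetical example of scoring demonstrations with a temporal-logic-style robustness measure: a lap that keeps the car inside the lane at every timestep gets a positive score, while a lap that strays gets a negative one, so imperfect demonstrations can be identified and down-weighted. The "always stay in the lane" specification, the signals, and the threshold are illustrative, not the authors' exact STL formulas.

```python
# Hypothetical sketch: rank driving demonstrations by the robustness of an
# "always stay within the lane" requirement. Values and threshold are illustrative.
def robustness_always_below(signal, threshold):
    """Robustness of 'at every timestep, value < threshold':
    positive when satisfied with margin, negative when violated."""
    return min(threshold - v for v in signal)

# Each demonstration is a sequence of distances from the lane center.
demos = {
    "careful_lap": [0.1, 0.2, 0.15, 0.1],
    "sloppy_lap":  [0.3, 0.9, 1.4, 0.5],  # briefly leaves the lane
}
lane_half_width = 1.0

scores = {name: robustness_always_below(traj, lane_half_width)
          for name, traj in demos.items()}
ranked = sorted(scores, key=scores.get, reverse=True)  # most trustworthy first
print(scores, ranked)
```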

Moving objects in complex environments

How can a robot learn to move objects around as humans do? In contrast to a controlled lab setting, in the real world, robots often encounter cluttered and obstructed environments, such as a messy kitchen with food, dishes and other cooking tools.

To ensure a robot can function in unorganized environments, a team of USC computer science researchers combined motion planning and reinforcement learning to safely navigate through obstructed environments and learn sophisticated object manipulation by trial and error. The proposed algorithm can efficiently and safely learn to pick up an object hidden inside a deep box and assemble a table in a cluttered environment. Researchers from Joseph Lim's Cognitive Learning for Vision and Robotics Lab and Gaurav Sukhatme's Robotic Embedded Systems Laboratory joined forces on this work.
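One way to picture the combination, as a hypothetical sketch rather than the team's implementation: small commanded displacements are executed directly by the learned policy, while large ones are handed off to a motion planner that returns a collision-free path. The threshold and the straight-line stand-in for the planner are assumptions.

```python
# Hypothetical sketch: route small actions to direct control and large actions
# to a motion planner. The interpolating "planner" is a stand-in for a real one.
import numpy as np

ACTION_THRESHOLD = 0.1  # illustrative cutoff between "small" and "large" motions

def plan_path(start, goal, n_waypoints=5):
    """Stand-in for a collision-free motion planner: interpolate start -> goal."""
    return [start + t * (goal - start) for t in np.linspace(0.0, 1.0, n_waypoints)]

def augmented_step(joint_pos, action):
    """Return the joint positions to execute for one commanded action."""
    if np.linalg.norm(action) <= ACTION_THRESHOLD:
        return [joint_pos + action]                    # direct, contact-rich control
    return plan_path(joint_pos, joint_pos + action)    # large motion: use the planner

print(augmented_step(np.zeros(3), np.array([0.02, 0.0, 0.01])))  # one small step
print(augmented_step(np.zeros(3), np.array([0.5, -0.3, 0.2])))   # planned waypoints
```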

Planning motions that follow constraints

In order for robots to be useful in the real world, they must be able to plan motions that satisfy various constraints, for instance holding an object, maintaining an orientation, or staying within a certain distance of an object.

Usually, these constraints are hand-designed by humans, which can be tedious. A paper by USC computer scientists in the Robotic Embedded Systems Laboratory, led by Sukhatme, the Fletcher Jones Foundation Endowed Chair in Computer Science, introduces a method to extract these constraints automatically from demonstrations of a task.

“Our method uses manifold learning, a subfield of machine learning, to produce an artificial neural network that represents the constraint and whose local characteristics are enforced by the dataset of demonstrations,” said co-author Isabel Rayas, a computer science Ph.D. student.

“It can then automatically detect whether or not a robot pose adheres to the constraint, and the robot can produce a valid motion plan.”
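As a loose, hypothetical illustration of that idea (not the authors' training procedure), a small network can stand in for the learned constraint: it would be trained so its output is near zero on demonstrated poses, and a candidate pose is accepted when the output falls below a tolerance. The pose dimension, architecture, and tolerance are assumptions.

```python
# Hypothetical sketch: a learned constraint network c(x) whose output should be
# (approximately) zero on poses that satisfy the constraint. Untrained here;
# in practice it would be fit to the demonstration data first.
import torch
import torch.nn as nn

constraint_net = nn.Sequential(nn.Linear(7, 64), nn.Tanh(), nn.Linear(64, 1))

def satisfies_constraint(pose, tol=1e-2):
    """True if the learned constraint value at this pose is close enough to zero."""
    with torch.no_grad():
        return constraint_net(pose).abs().item() < tol

pose = torch.randn(7)  # e.g. a 7-DoF arm configuration (assumption)
print(satisfies_constraint(pose))
```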


Translating natural language commands into action, and bridging the deep reinforcement learning reality gap

In the area of language and movement, incoming USC professor Jesse Thomason, who will join the Department of Computer Science in Spring 2021, explores vision-and-language navigation (VLN), the task of translating natural language commands—like “go into the hallway and take a left at the second door to find the master bedroom”—into sequences of movement actions. Thomason and his colleagues present the “RobotSlang Benchmark,” a corpus of human-human dialogs to localize and control a physical robot car in a search for target objects in a maze.

In an Amazon Science publication, USC Information Sciences Institute computer scientist Luis Garcia studied the reality gap between simulations and real robots in deep reinforcement learning (RL), with the goal of improving the robustness of deep RL policies.

List of papers:

Accelerating Reinforcement Learning with Learned Skill Priors
Karl Pertsch (University of Southern California)*; Youngwoon Lee (University of Southern California); Joseph J Lim (USC)

Motion Planner Augmented Reinforcement Learning for Robot Manipulation in Obstructed Environments
Jun Yamada (University of Southern California); Youngwoon Lee (University of Southern California)*; Gautam Salhotra (University of Southern California); Karl Pertsch (University of Southern California); Max Pflueger (University of Southern California); Gaurav Sukhatme (University of Southern California); Joseph J Lim (USC); Peter Englert (University of Southern California)

Sim2Real Transfer for Deep Reinforcement Learning with Stochastic State Transition Delays
Sandeep Singh Sandha (UCLA)*; Luis Garcia (USC Information Sciences Institute); Bharathan Balaji (Amazon); Fatima Anwar (University of Massachusetts, Amherst); Mani Srivastava (UC Los Angeles)

Never Stop Learning: The Effectiveness of Fine-Tuning in Robotic Reinforcement Learning
Ryan C Julian (University of Southern California)*; Benjamin Swanson (Google); Gaurav Sukhatme (University of Southern California); Sergey Levine (Google); Chelsea Finn (Google Brain); Karol Hausman (Google Brain)

Learning from Demonstrations using Signal Temporal Logic
Aniruddh G Puranic (University of Southern California)*; Jyotirmoy Deshmukh (USC); Stefanos Nikolaidis (University of Southern California)

Learning Equality Constraints for Motion Planning on Manifolds
Isabel M Rayas Fernández (University of Southern California)*; Giovanni Sutanto (USC); Peter Englert (University of Southern California); Ragesh Kumar Ramachandran (University of Southern California); Gaurav Sukhatme (University of Southern California)

The RobotSlang Benchmark: Dialog-guided Robot Localization and Navigation
Shurjo Banerjee (University of Michigan)*; Jesse Thomason (University of Washington, incoming USC Spring 2021); Jason J Corso (University of Michigan)
