USC at the Robotics: Science and Systems (RSS) Conference

USC Viterbi Staff | July 7, 2023 

USC researchers will present four papers on topics including scalable robot data collection, simulation of complex granular materials, dexterous manipulation, and transferring assembly tasks from simulation to reality.

This year’s Robotics: Science and Systems (RSS) conference, held in Daegu, Republic of Korea, will showcase four research papers from USC highlighting the latest breakthroughs in robotics, from improving data collection efficiency and simulating complex granular materials, to enabling dexterous object manipulation and transferring assembly tasks from simulation to reality.

RSS is an annual conference that brings together research on the scientific foundations and systems of robotics, covering a broad range of topics including robot design, perception, planning, and control. It has a strong reputation for showcasing cutting-edge work and attracting leading researchers in the robotics community.

Accepted papers with USC affiliation:

PATO: Policy Assisted TeleOperation for Scalable Robot Data Collection
Poster session: Tuesday, July 11

Shivin Dass (University of Southern California), Karl Pertsch (University of Southern California)*, Hejia Zhang (University of Southern California), Youngwoon Lee (University of California, Berkeley), Joseph J Lim (University of Southern California), Stefanos Nikolaidis (University of Southern California)

Abstract: Large-scale data is an essential component of machine learning as demonstrated in recent advances in natural language processing and computer vision research. However, collecting large-scale robotic data is much more expensive and slower as each operator can control only a single robot at a time. To make this costly data collection process efficient and scalable, we propose Policy Assisted TeleOperation (PATO), a system which automates part of the demonstration collection process using a learned assistive policy. PATO autonomously executes repetitive behaviors in data collection and asks for human input only when it is uncertain about which subtask or behavior to execute. We conduct teleoperation user studies both with a real robot and a simulated robot fleet and demonstrate that our assisted teleoperation system reduces human operators’ mental load while improving data collection efficiency. Further, it enables a single operator to control multiple robots in parallel, which is a first step towards scalable robotic data collection. For code and video results, see https://clvrai.com/pato
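
The core idea, a learned assistive policy that acts autonomously and hands control back to the human only when it is uncertain, can be pictured with a short sketch. The ensemble-disagreement uncertainty proxy and all names below are illustrative assumptions, not the authors' released code (see the project page above for that).

```python
# Illustrative sketch of uncertainty-gated assisted teleoperation.
# All names are hypothetical; this is not the PATO implementation.
import numpy as np

class AssistedTeleop:
    def __init__(self, policy_ensemble, uncertainty_threshold=0.1):
        # policy_ensemble: list of callables mapping observation -> action
        self.ensemble = policy_ensemble
        self.threshold = uncertainty_threshold

    def step(self, obs, request_human_action):
        # Query each ensemble member and use their disagreement as a
        # simple proxy for the policy's uncertainty.
        actions = np.stack([pi(obs) for pi in self.ensemble])
        uncertainty = actions.std(axis=0).mean()
        if uncertainty > self.threshold:
            # The policy is unsure which subtask or behavior to execute:
            # hand control back to the human operator.
            return request_human_action(obs), "human"
        # Otherwise execute the autonomous action (here, the ensemble mean).
        return actions.mean(axis=0), "policy"
```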

GranularGym: High-Performance Simulation for Robotic Tasks with Granular Materials
Poster session: Wednesday, July 12

David R Millard (University of Southern California)*, Daniel Pastor (Jet Propulsion Laboratory), Joseph Bowkett (Jet Propulsion Laboratory), Paul Backes (Jet Propulsion Laboratory), Gaurav S Sukhatme (University of Southern California, Amazon)

Abstract: Granular materials are of critical interest to many robotic tasks in planetary science, construction, and manufacturing. However, the dynamics of granular materials are complex and often computationally very expensive to simulate. We propose a set of methodologies and a system for the fast simulation of granular materials on Graphics Processing Units (GPUs), and show that this simulation is fast enough for basic training with Reinforcement Learning algorithms, which currently require many dynamics samples to achieve acceptable performance. Our method models granular material dynamics using implicit timestepping methods for multibody rigid contacts, as well as algorithmic techniques for efficient parallel collision detection between pairs of particles and between particles and arbitrarily shaped rigid bodies, and programming techniques for minimizing warp divergence on Single-Instruction, Multiple-Thread (SIMT) chip architectures. We showcase our simulation system on several environments targeted toward robotic tasks, and release our simulator as an open-source tool.
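
As a rough illustration of the parallel collision-detection component, the sketch below implements a simple spatial-hash broad phase for particle-particle contacts in NumPy on the CPU. The paper's GPU implementation, its handling of arbitrarily shaped rigid bodies, and its warp-divergence optimizations are far more involved; function and variable names here are assumptions.

```python
# Minimal spatial-hash broad phase for particle-particle contacts.
# Illustrative only; not the GranularGym GPU implementation.
import numpy as np
from collections import defaultdict
from itertools import product

def find_contacts(positions, radius):
    """Return index pairs of particles closer than 2 * radius."""
    cell_size = 2.0 * radius
    cells = np.floor(positions / cell_size).astype(int)

    # Bin particles by grid cell.
    grid = defaultdict(list)
    for i, cell in enumerate(map(tuple, cells)):
        grid[cell].append(i)

    # Check only the neighboring cells of each particle (27 cells in 3D).
    offsets = list(product((-1, 0, 1), repeat=positions.shape[1]))
    contacts = []
    for i, cell in enumerate(map(tuple, cells)):
        for off in offsets:
            for j in grid.get(tuple(np.add(cell, off)), []):
                if j > i and np.linalg.norm(positions[i] - positions[j]) < cell_size:
                    contacts.append((i, j))
    return contacts
```

For example, `find_contacts(np.random.rand(1000, 3), radius=0.01)` returns candidate contact pairs for a small particle cloud; on a GPU, the per-particle loop would instead run with one thread per particle.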

DexPBT: Scaling up Dexterous Manipulation for Hand-Arm Systems with Population Based Training
Poster session: Wednesday, July 12

Aleksei Petrenko (University of Southern California)*, Arthur Allshire (University of Toronto), Gavriel State (NVIDIA), Ankur Handa (NVIDIA), Viktor Makoviychuk (NVIDIA)

Abstract: In this work, we propose algorithms and methods that enable learning dexterous object manipulation using simulated one- or two-armed robots equipped with multi-fingered hand end-effectors. Using a parallel GPU-accelerated physics simulator (Isaac Gym), we implement challenging tasks for these robots, including regrasping, grasp-and-throw, and object reorientation. To solve these problems we introduce a decentralized Population-Based Training (PBT) algorithm that allows us to massively amplify the exploration capabilities of deep reinforcement learning. We find that this method significantly outperforms regular end-to-end learning and is able to discover robust control policies in challenging tasks. Video demonstrations of learned behaviors and the code can be found at https://sites.google.com/view/dexpbt
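
The exploit/explore cycle at the heart of Population-Based Training can be sketched as follows. This toy, synchronous version is meant only to convey the idea; the decentralized PBT algorithm in the paper differs in its details, and all names are assumptions.

```python
# Toy exploit/explore step of Population-Based Training (PBT).
# Illustrative only; not the decentralized PBT from the paper.
import copy
import random

def pbt_step(population, exploit_frac=0.25, perturb=0.2):
    """population: list of dicts with 'score', 'weights', and 'hyperparams'."""
    ranked = sorted(population, key=lambda agent: agent["score"], reverse=True)
    n = max(1, int(exploit_frac * len(ranked)))
    top, bottom = ranked[:n], ranked[-n:]

    for agent in bottom:
        donor = random.choice(top)
        # Exploit: copy weights and hyperparameters from a top performer.
        agent["weights"] = copy.deepcopy(donor["weights"])
        agent["hyperparams"] = {
            # Explore: randomly perturb each hyperparameter.
            key: value * random.uniform(1.0 - perturb, 1.0 + perturb)
            for key, value in donor["hyperparams"].items()
        }
    return population
```

Between such steps, every member of the population continues its own reinforcement learning training; the periodic copy-and-perturb is what amplifies exploration across the population.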

IndustReal: Transferring Contact-Rich Assembly Tasks from Simulation to Reality
Poster session: Wednesday, July 12

Bingjie Tang (University of Southern California)*, Michael A Lin (Stanford University), Iretiayo A Akinola (NVIDIA), Ankur Handa (NVIDIA), Gaurav S Sukhatme (University of Southern California, Amazon), Fabio Ramos (NVIDIA), Dieter Fox (NVIDIA), Yashraj S Narang (NVIDIA)

Abstract: Robotic assembly is a longstanding challenge, requiring contact-rich interaction and high precision and accuracy. Many applications also require adaptivity to diverse parts, poses, and environments, as well as low cycle times. In other areas of robotics, simulation is a powerful tool to develop algorithms, generate datasets, and train agents. However, simulation has had a more limited impact on assembly. We present IndustReal, a set of algorithms, systems, and tools that solve assembly tasks in simulation with reinforcement learning (RL) and successfully achieve policy transfer to the real world. Specifically, we propose 1) simulation-aware policy updates, 2) signed-distance-field rewards, and 3) sampling-based curricula for robotic RL agents. We use these algorithms to enable robots to solve contact-rich pick, place, and insertion tasks in simulation. We then propose 4) a policy-level action integrator to minimize error at policy deployment time. We build and demonstrate a real-world robotic assembly system that uses the trained policies and action integrator to achieve repeatable performance in the real world. Finally, we present hardware and software tools that allow other researchers to reproduce our system and results. For videos and additional details, please see our project website at https://sites.google.com/nvidia.com/industreal.
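
One of these contributions, the policy-level action integrator, can be pictured with a minimal sketch of one way such an integrator might work: the policy's incremental actions are accumulated into a running absolute target for the low-level controller, rather than each delta being applied relative to the measured pose. The class and variable names below are assumptions, not the IndustReal code.

```python
# Minimal sketch of a policy-level action integrator for delta-position
# actions. Illustrative only; not the IndustReal implementation.
import numpy as np

class ActionIntegrator:
    def __init__(self, initial_target_pos):
        # Start the running target at the robot's current end-effector position.
        self.target_pos = np.asarray(initial_target_pos, dtype=float)

    def update(self, delta_action):
        # Integrate the policy's delta action into the running target,
        # then send that target to the low-level controller.
        self.target_pos = self.target_pos + np.asarray(delta_action, dtype=float)
        return self.target_pos
```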

