
The Robotics: Science and Systems (RSS) Conference returned to Los Angeles, with the five-day event kicking off at the University of Southern California (USC) on June 21.
This year, the conference featured 32 workshops, seven poster sessions, more than 160 paper presentations, two keynote talks, award ceremonies, lab tours and daily live demonstrations.
RSS 2025 was the largest in the conference’s 21-year history, with 594 papers submitted and almost 1,300 attendees from around the world.
USC has played a central role in RSS history — including this year’s conference leadership — as well as in the rapidly growing field of robotics. In 2004, Professor Gaurav Sukhatme, now director of the USC School of Advanced Computing, co-founded RSS and its governing nonprofit, the Robotics: Science and Systems Foundation (RSSF). He was the program chair for the first edition of the conference in 2005 and the local arrangements chair when RSS was first held at USC in 2011.
The 2025 edition of RSS was led by Professor Luca Carlone of the Massachusetts Institute of Technology, who served as program chair. Together with Sukhatme, six faculty members from the USC Viterbi School of Engineering and the USC School of Advanced Computing served as local arrangements chairs: Professors Erdem Bıyık, Stefanos Nikolaidis, Quan Nguyen, Feifei Qian, Daniel Seita and Yue Wang, whose involvement helped bring the conference to life. USC faculty, researchers and students also presented papers, chaired sessions and led workshops, while several startups founded by USC faculty and alumni contributed as sponsors, further positioning the university as a global leader in robotics.

RSS 2025 Organizing Committee (RSS co-founder, RSS Foundation president, and local arrangements chairs from USC)
Robots in action: live demos and poster sessions took center stage
From robots doing backflips to life-size humanoids running across campus, the demo and poster sessions brought RSS to life. Held at Epstein Family Plaza and USC Associates Park, the sessions featured hands-on research showcases from leading labs and sponsors, including Amazon Robotics, Google and the Toyota Research Institute, highlighting the latest in academic and industry innovation.
USC opened its cutting-edge robotics labs to RSS attendees
RSS attendees had the opportunity to tour robotics labs across campus, each showcasing exclusive live demonstrations and presentations on its latest research breakthroughs. Led by faculty from USC Viterbi’s departments, including the Ming Hsieh Department of Electrical and Computer Engineering (ECE), the Thomas Lord Department of Computer Science and the Department of Aerospace and Mechanical Engineering, the tours highlighted innovations across the following labs: RoboLAND, ICAROS, GVL, SLURM, HaRVI, the Dynamic Robotics and Control Laboratory, the Lira Lab, the Interaction Lab and the Center for Advanced Manufacturing.
The conference featured 32 workshops covering a wide range of emerging topics in robotics, from learning-based control and physical intelligence to human-robot interaction and embodied artificial intelligence. Held on the first and last days of RSS, the workshops brought together researchers for in-depth discussions and collaboration on future directions in the field. Many workshops were organized by USC faculty and students, such as the Space Robotics Workshop organized by Keenan Albee, a robotics technologist at NASA’s Jet Propulsion Laboratory and an incoming USC assistant professor.

RSS attendees participate in this year’s conference workshops, presented and organized by robotics industry leaders.
USC researchers presented breakthroughs in humanoids and terrestrial robots at RSS 2025
Two USC-led research papers were accepted to this year’s conference: Professor Qian and her RoboLAND lab presented work on terrain-aware locomotion over deformable mud, and Professor Nguyen and his Dynamic Robotics and Control Lab presented work on variable-frequency humanoid locomotion. Nguyen is also the founder and chairman of VinMotion — one of the conference sponsors and a Vietnam-based startup focused on scalable humanoid deployment through Physical AI. A third paper featured authors from Professor Bıyık’s Lira Lab. Additionally, Professor Jesse Thomason served as chair of the Vision, Language, and Action (VLA) Models session.
Gait-Net-Augmented Implicit Kino-Dynamic MPC for Dynamic Variable-Frequency Humanoid Locomotion
Junheng Li, Ziwei Duan, Junchao Ma, Quan Nguyen
Session: Humanoids
Abstract:
Current optimization-based control techniques for humanoid locomotion struggle to adapt step duration and placement simultaneously in dynamic walking gaits due to their reliance on fixed-time discretization, which limits responsiveness to terrain conditions and results in suboptimal performance in challenging environments. In this work, we propose a Gait-Net-augmented implicit kino-dynamic model-predictive control (MPC) to simultaneously optimize step location, step duration, and contact forces for natural variable-frequency locomotion. The proposed method incorporates a Gait-Net-augmented Sequential Convex MPC algorithm to solve multi-linearly constrained variables by their step sizes iteratively. At its core, a lightweight Gait-frequency Network (Gait-Net) determines the preferred step duration in terms of variable MPC sampling times, simplifying step duration optimization to the parameter level. Additionally, it enhances and updates the spatial momentum reference trajectory estimation within each sequential iteration by incorporating local solutions, allowing the projection of kinematic constraints to the design of reference trajectories. We validate the proposed algorithm in high-fidelity simulations and on in-house humanoid hardware, demonstrating its capability for variable-frequency and 3-D discrete terrain locomotion with only a one-step preview of terrain data.
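For readers curious how a learned gait-frequency network might plug into a variable-timestep MPC, the sketch below is our own illustration, not the authors’ implementation: a tiny network with placeholder weights picks the sampling time, and a condensed least-squares MPC over a toy 1-D point-mass model is re-solved with that timestep in a short sequential loop. The model, network size, and constants are all assumptions.

```python
# Minimal sketch (not the authors' code): a "gait-frequency network" picks a variable
# MPC sampling time, and a condensed least-squares MPC solves for forces of a 1-D
# point-mass model re-discretized with that timestep.
import numpy as np

rng = np.random.default_rng(0)

def gait_net(features, dt_min=0.2, dt_max=0.6):
    """Toy MLP (random placeholder weights) mapping gait features -> step duration."""
    W1, b1 = rng.normal(size=(8, features.size)), rng.normal(size=8)
    W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)
    h = np.tanh(W1 @ features + b1)
    s = 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))          # sigmoid -> (0, 1)
    return float(dt_min + (dt_max - dt_min) * s[0])

def solve_mpc(x0, x_ref, dt, N=10, mass=30.0, u_weight=1e-3):
    """Condensed unconstrained MPC: stack the prediction model, solve least squares."""
    A = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([[0.5 * dt**2 / mass], [dt / mass]])
    Phi = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
    Gamma = np.zeros((2 * N, N))
    for r in range(N):
        for c in range(r + 1):
            Gamma[2*r:2*r+2, c:c+1] = np.linalg.matrix_power(A, r - c) @ B
    X_ref = np.tile(x_ref, N)
    # Tracking objective with a small input-regularization term.
    H = np.vstack([Gamma, np.sqrt(u_weight) * np.eye(N)])
    y = np.concatenate([X_ref - Phi @ x0, np.zeros(N)])
    U, *_ = np.linalg.lstsq(H, y, rcond=None)
    return U

x = np.array([0.0, 0.0])                 # CoM position, velocity
x_ref = np.array([0.5, 0.0])             # desired state per step
features = np.array([x_ref[0] - x[0], x[1], 0.1])   # e.g. error, velocity, terrain preview
for it in range(3):                      # sequential re-solve with an updated timestep
    dt = gait_net(features)
    U = solve_mpc(x, x_ref, dt)
    print(f"iter {it}: dt={dt:.3f}s, first force={U[0]:.2f} N")
```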
Adaptive Locomotion on Mud through Proprioceptive Sensing of Substrate Properties
Shipeng Liu, Jiaze Tang, Siyuan Meng, Feifei Qian
Session: Mobile Manipulation and Locomotion
Abstract:
Muddy terrains present significant challenges for terrestrial robots, as subtle changes in composition and water content can lead to large variations in substrate strength and force responses, causing robots to slip or get stuck. This paper presents a method to estimate mud properties using proprioceptive sensing, enabling a flipper-driven robot to adapt its locomotion through muddy substrates of varying strength. First, we characterize mud reaction forces through actuator current and position signals from a statically-mounted robotic flipper, and use the measured force to determine key coefficients that characterize intrinsic mud properties. The proprioceptively estimated coefficients match closely with measurements from a lab-grade load cell, validating the effectiveness of the proposed method. Next, we extend the method to a locomoting robot to estimate mud properties online as it crawls across different mud mixtures. Experimental data reveal that mud reaction forces depend sensitively on robot motion, requiring joint analysis of robot movement with proprioceptive force to correctly determine mud properties. Lastly, we deploy this method on a flipper-driven robot moving across muddy substrates of varying strengths, and demonstrate that the proposed method allows the robot to use the estimated mud properties to adapt its locomotion strategy and successfully avoid locomotion failures. Our findings highlight the potential of proprioception-based terrain sensing to enhance robot mobility in complex, deformable natural environments, paving the way for more robust field exploration capabilities.
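To give a flavor of proprioception-based terrain estimation, the sketch below is an illustration under assumed constants, not the paper’s model: it converts actuator current to an estimated flipper force and fits simple stiffness and damping coefficients by least squares. The motor constant, lever arm, and the linear force model are all assumptions.

```python
# Illustrative sketch (not the paper's model): estimate mud "stiffness" and "damping"
# coefficients from proprioceptive signals by fitting F ≈ k_depth * z + k_vel * z_dot.
import numpy as np

TORQUE_PER_AMP = 0.12   # motor torque constant [N·m/A] (assumed)
LEVER_ARM = 0.15        # flipper lever arm [m] (assumed)

def estimate_mud_coefficients(current_A, depth_m, dt):
    """Fit F ≈ k_depth * z + k_vel * z_dot from actuator current and flipper depth."""
    force = np.asarray(current_A) * TORQUE_PER_AMP / LEVER_ARM   # proprioceptive force estimate
    depth = np.asarray(depth_m)
    depth_rate = np.gradient(depth, dt)
    X = np.column_stack([depth, depth_rate])                     # regressors
    coeffs, *_ = np.linalg.lstsq(X, force, rcond=None)
    return {"k_depth": coeffs[0], "k_vel": coeffs[1]}

# Synthetic example: stiffer mud -> larger k_depth, which a gait-adaptation rule could key off.
dt = 0.01
t = np.arange(0, 2, dt)
depth = 0.03 * (1 - np.cos(2 * np.pi * 0.5 * t))        # flipper intrusion profile
true_force = 800.0 * depth + 40.0 * np.gradient(depth, dt)
current = true_force * LEVER_ARM / TORQUE_PER_AMP + np.random.default_rng(1).normal(0, 0.05, t.size)
print(estimate_mud_coefficients(current, depth, dt))
# A simple adaptation rule might slow the gait or deepen flipper intrusion when
# k_depth falls below a threshold indicating weak, flowable mud.
```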
NaVILA: Legged Robot Vision-Language-Action Model for Navigation
An-Chieh Cheng, Yandong Ji, Zhaojing Yang, Zaitian Gongye, Xueyan Zou, Jan Kautz, Erdem Biyik, Hongxu Yin, Sifei Liu, Xiaolong Wang
Session: VLA Models
Abstract:
This paper proposes to solve the problem of Vision-and-Language Navigation with legged robots, which not only provides a flexible way for humans to command but also allows the robot to navigate through more challenging and cluttered scenes. However, it is non-trivial to translate human language instructions all the way to low-level leg joint actions. We propose NaVILA, a two-level framework that unifies a Vision-Language-Action model (VLA) with locomotion skills. Instead of directly predicting low-level actions from the VLA, NaVILA first generates mid-level actions with spatial information in the form of language (e.g., “moving forward 75cm”), which serve as input to a visual locomotion RL policy for execution. NaVILA substantially improves on previous approaches on existing benchmarks. The same advantages are demonstrated in our newly developed benchmarks with IsaacLab, featuring more realistic scenes, low-level controls, and real-world robot experiments.
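The two-level design can be pictured with a small sketch of our own (not NaVILA’s code): a mid-level language action produced by the VLA, such as “moving forward 75cm”, is parsed into a timed velocity command that a low-level locomotion policy would track. The command format, nominal speeds, and parsing rules are assumptions for illustration.

```python
# Minimal sketch of a two-level interface: language action -> timed velocity command
# for a low-level locomotion policy. Formats and constants are illustrative assumptions.
import math
import re
from dataclasses import dataclass

@dataclass
class VelocityCommand:
    vx: float        # forward velocity [m/s]
    yaw_rate: float  # turn rate [rad/s]
    duration: float  # seconds to hold the command

FORWARD_SPEED = 0.5   # assumed nominal walking speed
TURN_RATE = 0.6       # assumed nominal turn rate

def parse_midlevel_action(action: str) -> VelocityCommand:
    """Translate a mid-level language action into a velocity command."""
    action = action.lower()
    m = re.search(r"forward\s+(\d+(?:\.\d+)?)\s*cm", action)
    if m:
        distance_m = float(m.group(1)) / 100.0
        return VelocityCommand(vx=FORWARD_SPEED, yaw_rate=0.0,
                               duration=distance_m / FORWARD_SPEED)
    m = re.search(r"turn\s+(left|right)\s+(\d+(?:\.\d+)?)\s*degree", action)
    if m:
        angle_rad = math.radians(float(m.group(2)))
        sign = 1.0 if m.group(1) == "left" else -1.0
        return VelocityCommand(vx=0.0, yaw_rate=sign * TURN_RATE,
                               duration=angle_rad / TURN_RATE)
    return VelocityCommand(vx=0.0, yaw_rate=0.0, duration=0.0)  # default: stop

print(parse_midlevel_action("moving forward 75cm"))
print(parse_midlevel_action("turn left 30 degrees"))
```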
Note: Every effort was made to include all USC Viterbi-affiliated papers at RSS 2025. If you believe your work was inadvertently left out, please let us know at ece-comms@usc.edu so we can update the list.
Published on July 2nd, 2025
Last updated on July 2nd, 2025