As robots increasingly work alongside humans, from nursing care homes to warehouses to factories, they must be able to offer support proactively. But first, robots have to learn something we know instinctively: how to anticipate people’s needs.
With that goal in mind, researchers at the USC Viterbi School of Engineering have created a new robotic system that accurately predicts how a human will build an IKEA bookcase, and then lends a hand—providing the shelf, bolt or screw necessary to complete the task. The research was presented at the International Conference on Robotics and Automation on May 30, 2021.
“We want to have the human and robot work together: a robot can help you do things faster and better by doing supporting tasks, like fetching things,” said the study’s lead author Heramb Nemlekar. “Humans will still perform the primary actions, but can offload simpler secondary actions to the robot.”
Nemlekar, a PhD student in computer science, is supervised by Stefanos Nikolaidis, an assistant professor of computer science, and co-authored the paper with Nikolaidis and SK Gupta, a professor of aerospace and mechanical engineering and computer science who holds the Smith International Professorship in Mechanical Engineering.
Above: Human-robot IKEA bookcase assembly demonstration showcases the team’s algorithm in a real-world setting. The robot “sees” the human pick up the long and short boards and connect them. Based on this, the robot matches the user with a dominant preference previously learned by observing humans. The robot infers that the user prefers to connect the boards first, and then starts supplying the boards according to the user’s preference. Video/USC Viterbi ICAROS Lab.
Adapting to variations
In 2018, a robot created by researchers in Singapore famously learned to assemble an IKEA chair on its own. In this new study, the USC research team focused instead on human-robot collaboration.
There are advantages to combining human intelligence and robot strength. In a factory, for instance, a human operator can control and monitor production while the robot performs the physically strenuous work. Humans are also more adept at fiddly, delicate tasks, like wiggling a screw around to make it fit.
The key challenge to overcome: humans tend to perform actions in different orders. For instance, imagine you’re building a bookcase—do you tackle the easy tasks first, or go straight for the difficult ones? How does the robot helper quickly adapt to variations in its human partners?
“Humans can verbally tell the robot what they need, but that’s not efficient,” said Nikolaidis. “We want the robot to be able to infer what the human wants, based on some prior knowledge.”
It turns out robots can gather knowledge much as humans do: by “watching” people and seeing how they behave. While we all tackle tasks in different ways, people tend to cluster around a handful of dominant preferences. If the robot can learn these preferences, it has a head start on predicting what you might do next.
A good collaborator
Based on this knowledge, the team developed an algorithm that uses artificial intelligence to classify people into dominant “preference groups,” or types, based on their actions. The robot was fed a kind of “manual” on humans: data gathered from annotated videos of 20 people assembling the bookcase. The researchers found that people fell into four dominant preference groups.
For instance, do you connect all the shelves to the frame on just one side first, or do you connect each shelf to the frame on both sides before moving on to the next shelf? Depending on your preference category, the robot should bring you a new shelf or a new set of screws. In a real-life IKEA furniture assembly task, a human stayed in a “work area” and assembled the bookcase, while the robot, a Kinova Gen 2 robot arm, learned the human’s preferences and brought the required materials from a storage area.
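The article doesn’t spell out the algorithm itself, but a minimal sketch of the idea in Python might look like the following. Here each demonstration is assumed to be a sequence of discrete action labels, encoded as counts of consecutive action pairs and grouped with off-the-shelf k-means into four clusters standing in for the four dominant preference groups. The action names, helper functions and featurization are illustrative, not taken from the paper.

```python
# Minimal sketch (not the authors' code): cluster demonstrators into
# dominant preference groups from annotated assembly-action sequences.
from itertools import product

import numpy as np
from sklearn.cluster import KMeans

# Illustrative action vocabulary for the bookcase task.
ACTIONS = ["attach_long_board", "attach_short_board",
           "insert_shelf_one_side", "insert_shelf_both_sides", "screw"]
PAIRS = list(product(ACTIONS, repeat=2))  # every ordered action pair

def transition_features(sequence):
    """Encode an action sequence as counts of consecutive action pairs."""
    counts = np.zeros(len(PAIRS))
    for a, b in zip(sequence, sequence[1:]):
        counts[PAIRS.index((a, b))] += 1
    return counts

def learn_preference_groups(demos, n_groups=4, seed=0):
    """Cluster demonstration sequences into dominant preference groups."""
    X = np.stack([transition_features(d) for d in demos])
    return KMeans(n_clusters=n_groups, random_state=seed, n_init=10).fit(X)
```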
“The system very quickly associates a new user with a preference, with only a few actions,” said Nemlekar.
“That’s what we do as humans. If I want to work with you, I’m not going to start from zero. I’ll watch what you do, and then infer from that what you might do next.”
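In the same hypothetical setup, matching a new user after only a few observed actions could be a nearest-group lookup over the partial sequence, followed by fetching the part tied to that group’s most likely next action. Again a hedged sketch, reusing transition_features and the fitted model from above; the action-to-part table and the majority-vote prediction are assumptions for illustration.

```python
# Sketch of online inference (names and the part table are illustrative):
# assign the new user to the nearest learned preference group, then pick
# the part the robot should fetch for their likely next action.
PART_FOR_ACTION = {
    "attach_long_board": "long board",
    "attach_short_board": "short board",
    "insert_shelf_one_side": "shelf",
    "insert_shelf_both_sides": "screws",
    "screw": "screws",
}

def infer_group(model, observed_actions):
    """Nearest preference group for a partial action sequence."""
    x = transition_features(observed_actions).reshape(1, -1)
    return int(model.predict(x)[0])

def part_to_fetch(model, demos_by_group, observed_actions):
    """Predict the next action by majority vote within the matched group,
    then return the part it requires. demos_by_group maps each group
    index to its training sequences (built from model.labels_)."""
    group = infer_group(model, observed_actions)
    step = len(observed_actions)
    candidates = [d[step] for d in demos_by_group[group] if len(d) > step]
    if not candidates:  # no demonstration extends this far
        return None
    next_action = max(set(candidates), key=candidates.count)
    return PART_FOR_ACTION[next_action]
```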
In this initial version, the researchers entered each action into the robotic system manually, but future iterations could learn by “watching” the human partner using computer vision. The team is also working on a new test case: humans and robots working together to build, and then fly, a model airplane, a task requiring close attention to detail.
Refining the system is a step towards having “intuitive” helper robots in our daily lives, said Nikolaidis. Although the focus is currently on collaborative manufacturing, the same insights could be used to help people with disabilities, with applications including robot-assisted eating or meal prep.
“If we will soon have robots in our homes, in our work, in care facilities, it’s important for robots to infer and adapt to people’s preferences,” said Nikolaidis. “The robot needs to be a teammate and a good collaborator. I think having some notion of user preference and being able to learn variability is what will make robots more accepted.”