For years, automakers, technology companies and prognosticators have promised that fully autonomous self-driving cars were just around the corner. And for just as long, observers have reacted with justifiable anticipation and excitement.
Self-driving vehicles offer several advantages over those driven by people, who are all too often prone to distractions, like scrolling on cellphones instead of watching the road, and sometimes drinking too much before getting behind the wheel. Earlier this year, the Chamber of Progress reported that, in California alone, full deployment of driverless cars could have saved 1,300 lives and prevented nearly 5,000 serious injuries over the last three years. Similarly, The Atlantic has trumpeted that “Self-Driving Cars Could Save 300,000 Lives Per Decade in America.”
Supporters also argue that self-driving vehicles would be better for the environment because they produce fewer emissions; would allow drivers to kick back and relax during commutes; and would make it easier for the elderly and for people with disabilities to get around.
Against this backdrop, Tesla, Waymo, Ford and other companies have developed semi-autonomous self-driving vehicles. Robotaxis already operate in a handful of American cities, including Los Angeles and San Francisco. Still, experts believe that the day when fully autonomous vehicles rule the roads remains years away, despite companies having spent tens of billions of dollars on the technology. Several high-profile fatal accidents involving cars and taxis with limited self-driving capabilities have cast a pall over this much-heralded sector. Simply put, self-driving cars appear stuck in neutral.
So, what’s going on? Why haven’t self-driving vehicles hit the road en masse yet? It turns out that it’s really, really expensive and time-consuming to train a self-driving vehicle to operate safely. To do so, vehicles must rack up millions of miles on real roads to collect data that can be used to train self-driving algorithms. Even then, it’s practically impossible to encounter enough so-called “edge cases,” or rare events — like an elderly woman in an electric wheelchair chasing a duck with a broom — to ensure that a self-driving vehicle would have enough training data to react correctly in all situations.
Generative AI might help in the quest to make fully autonomous vehicles fully roadworthy, said Rahul Jain, director of the USC Center for Autonomy and AI, which held its annual fall workshop Thursday, Oct. 10 at Michelson Hall on the topic of “Autonomy in the GenAI Era.”
In fact, generative AI — the same technology that can generate text, images and music, among other things — could run millions of simulations and create thousands of models to enhance and expedite the training of self-driving vehicles, said Jain, a professor of electrical and computer engineering and of computer science.
“The No. 1 big issue with solving the autonomy problem is the lack of data,” Jain said. “If you want to train ChatGPT, you can scrape the entire internet, and you can train ChatGPT on it, right? But when you want to do something similar for autonomous vehicles or autonomous robots, there is no internet to scrape. And without data, you can’t really solve the problem.”
“And so, one of the new possibilities that has opened up with the generative AI models is the possibility of generating synthetic data for training,” he added.
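To make that idea concrete, here is a minimal sketch of synthetic scenario generation that deliberately oversamples the rare events real driving logs almost never capture. The Scenario schema, the field names and the generate_synthetic_scenarios helper are illustrative assumptions, not Jain's pipeline; a production system would synthesize full sensor data and trajectories rather than labels.

```python
import random
from dataclasses import dataclass

# Hypothetical scenario schema; real pipelines generate far richer
# representations (sensor frames, HD maps, agent trajectories).
@dataclass
class Scenario:
    weather: str
    pedestrian_behavior: str
    ego_speed_mph: float

WEATHER = ["clear", "rain", "fog", "snow"]
COMMON = ["crossing_at_light", "waiting_on_curb"]
RARE = ["wheelchair_in_lane", "chasing_animal_with_broom"]  # edge cases

def generate_synthetic_scenarios(n: int, edge_case_rate: float = 0.3) -> list:
    """Oversample rare events far beyond their real-world frequency,
    so the training set covers edge cases that road miles rarely yield."""
    scenarios = []
    for _ in range(n):
        pool = RARE if random.random() < edge_case_rate else COMMON
        scenarios.append(Scenario(
            weather=random.choice(WEATHER),
            pedestrian_behavior=random.choice(pool),
            ego_speed_mph=random.uniform(5, 45),
        ))
    return scenarios

batch = generate_synthetic_scenarios(1000)
rare = sum(s.pedestrian_behavior in RARE for s in batch)
print(f"{rare}/1000 synthetic scenarios contain rare edge cases")
```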
Researchers are also looking into leveraging generative AI to control the decision-making process of autonomous vehicles and other autonomous systems, Jain said. In this scenario, different variables are fed into ChatGPT or some other large language model, “which, in essence, acts as the brain of an autonomous vehicle, as an example, saying, ‘Go three feet straight, then turn left’ and so on, doing so on the fly.”
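A minimal sketch of that control loop follows, assuming the model can be asked to reply in machine-parseable JSON. Here, query_llm is a placeholder that returns a canned reply so the example runs end to end, and the state fields and response schema are hypothetical, not a description of any deployed system.

```python
import json

def query_llm(prompt: str) -> str:
    """Placeholder for a call to ChatGPT or another LLM; returns a canned
    reply so this sketch runs end to end. Swap in a real client library."""
    return '{"action": "go_straight", "distance_ft": 3, "then": "turn_left"}'

def plan_next_maneuver(state: dict) -> dict:
    # Feed the vehicle's state variables to the model and ask for a
    # machine-parseable plan, generated on the fly at each decision step.
    prompt = (
        "You are the planner for an autonomous vehicle.\n"
        f"State: {json.dumps(state)}\n"
        'Reply only with JSON: {"action": str, "distance_ft": number, "then": str}'
    )
    return json.loads(query_llm(prompt))

state = {
    "speed_mph": 22,
    "obstacle": "parked truck 40 ft ahead",
    "route": "left turn at next intersection",
}
print(plan_next_maneuver(state))
# -> {'action': 'go_straight', 'distance_ft': 3, 'then': 'turn_left'}
```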
Industry and academic thought leaders coming together
Generative AI, artificial intelligence and big data were the stars of the recent USC Center for Autonomy and AI’s fall workshop. At the all-day event, 10 faculty members and 10 industry representatives gave talks on everything from self-driving cars to AI in autonomous flight to integrating large language models in industrial automation systems. Sixteen USC Viterbi doctoral students shared their research in AI and autonomy during a lunchtime poster session.
The center, supported by corporate partners Siemens and Toyota, has worked closely with industry; its 2024 workshop featured speakers from companies ranging from Lockheed Martin and Nissan to Rhoman Aerospace.
A roundtable discussion featuring Prakash Sarathy, chief engineer at Northrop Grumman; Gaurav Sukhatme, founding director of the USC School of Advanced Computing; Chetan Gupta, leader of Hitachi’s Industrial AI Lab, North America; and Georgios Fainekos, senior principal scientist at Toyota, explored the promise and challenges of generative AI and autonomy.
“I’m excited about the potential for generative AI,” said Sukhatme, a robotics expert who also serves as USC Viterbi’s executive vice dean. “It has a key role to play in autonomous robotics, including drones, self-driving cars and manufacturing.”
Faculty, industry leaders and the 60-plus audience members on hand said they found the workshop informative and interesting. “It’s quite useful,” Toyota’s Fainekos said. “It’s great for networking, to see what’s happening in the industry and to interact with students and faculty.”
Added Center Co-Director Jyotirmoy Deshmukh, an associate professor of computer science and electrical and computer engineering: “Through the workshop, we saw that industry members really value the research happening at USC, and there were many side conversations that were spurred between industry members and USC faculty. Above all, the industry members got an in-depth look at the high-quality research done by our Ph.D. students and could see the value of these students as future interns and employees.”
Founded in August 2021, the Center for Autonomy and AI includes faculty from almost all USC Viterbi school departments. It aims to bring academic researchers and industry leaders across different sectors together to solve some of the biggest challenges and bottlenecks in AI and autonomy, said Jain, the center director. In addition to the annual workshop, the center holds a weekly seminar series and funds two major research projects annually.
From large language models to data distribution shifts
At the 2024 workshop on generative AI and autonomy, Yue Wang, a USC Viterbi assistant professor of computer science, discussed his work in using large language models to enable autonomous driving anywhere in the world.
“We use LLMs to bring in human-like thinking and knowledge,” he said. “Our method, called ‘Agent-Driver,’ reshapes how self-driving cars make decisions. It gives the system a set of tools to use when needed, a memory that holds general knowledge and experience, and a reasoning engine that helps with decision-making, planning, and self-improvement.”
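The skeleton below mirrors the three components Wang describes, tools, memory and a reasoning engine, arranged as a generic agent loop. It is an illustrative sketch, not the actual Agent-Driver implementation, and the class and method names are assumptions.

```python
class AgentSketch:
    """Generic tools + memory + reasoning-engine loop; an illustrative
    skeleton inspired by the description above, not Agent-Driver itself."""

    def __init__(self, tools: dict, memory: list, llm):
        self.tools = tools    # callable perception/planning utilities
        self.memory = memory  # general knowledge and past experience
        self.llm = llm        # reasoning engine: any callable LLM client

    def decide(self, observation: str) -> str:
        # Naive keyword retrieval stands in for real memory lookup.
        words = set(observation.split())
        relevant = [m for m in self.memory if words & set(m.split())]
        prompt = (
            f"Observation: {observation}\n"
            f"Relevant experience: {relevant}\n"
            f"Available tools: {list(self.tools)}\n"
            "Pick a tool if needed, then state the driving decision."
        )
        return self.llm(prompt)
```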
Michael Warren, leader of the operational autonomy group at Malibu-based HRL, a research lab co-owned by Boeing and General Motors, gave a talk titled “Data and Autonomous System Robustness.” All too often, he said, autonomous systems powered by machine learning models perform differently in different operating environments. For instance, a self-driving vehicle trained in the fog and hills of San Francisco might not fare as well in cities with flatter terrain and snow. This phenomenon, in which the data a system encounters during deployment significantly differs from that seen during training, is known as a “data distribution shift.” It can lead “machine learning models to do unexpected things,” he said.
To make these shifts easier to anticipate and possibly address, Warren and his group are exploring whether the impact of a data distribution shift can be predicted before a machine learning model is ever trained, using novel algorithms that analyze the datasets themselves. This would allow system designers to better understand the effects of their choice of training data before spending time and money on training.
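The group's algorithms aren't detailed here, but a standard baseline for spotting such shifts is a per-feature two-sample test comparing training and deployment data. This sketch uses SciPy's Kolmogorov-Smirnov test on toy data echoing the San Francisco example; the feature names and threshold are assumptions.

```python
import numpy as np
from scipy.stats import ks_2samp

def feature_shift_report(train, deploy, names):
    """Flag features whose deployment distribution drifted from training,
    using a two-sample Kolmogorov-Smirnov test per feature."""
    for i, name in enumerate(names):
        stat, p = ks_2samp(train[:, i], deploy[:, i])
        flag = "SHIFT" if p < 0.01 else "ok"
        print(f"{name:>12}: KS={stat:.3f}  p={p:.1e}  [{flag}]")

rng = np.random.default_rng(0)
# Toy training data: foggy, hilly San Francisco driving.
train = np.column_stack([rng.normal(60, 15, 5000),   # visibility (m)
                         rng.normal(8, 3, 5000)])    # road grade (%)
# Toy deployment data: a clearer, flatter city.
deploy = np.column_stack([rng.normal(250, 40, 5000),
                          rng.normal(1, 1, 5000)])
feature_shift_report(train, deploy, ["visibility", "road_grade"])
```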
Jesse Thomason, a USC Viterbi professor of computer science and a former visiting scholar with Amazon Alexa AI, discussed how large language models can apply “common sense” to suggest how a person might approach a problem. For instance, such a model might reason about how two people making mashed potatoes together can play different roles during that task. In this example, the large language model could hypothesize that one person peels potatoes while their partner boils the water to cook them in.
In a talk titled “Correcting Robots’ Mistakes with Human Feedback,” Erdem Biyik, a USC Viterbi assistant professor of computer science and electrical and computer engineering, talked about how certain human interactions with robots could help them perform tasks better.
For instance, if a robot is charged with putting fruit on a plate, a person could provide direct, comparative feedback on what it’s doing wrong and how it could improve, such as “hold the fruit looser.” This approach, Biyik said, allows researchers “to collect data from humans in a more efficient and quicker way” than other methodologies.
“While we have seen huge advancements in computer vision and natural language processing, such breakthroughs did not happen in robotics,” Biyik said. “This is mostly because the robotics datasets we have are not nearly as large as image or text datasets. Therefore, it is important to make robots learn from every possible source of feedback, including humans.”
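One common way to turn comparative feedback like “hold the fruit looser” into a learning signal is preference-based reward learning. The sketch below fits a linear reward with a Bradley-Terry model over two hypothetical grasp trajectories; it is a generic illustration of the technique, not Biyik's method, and the feature encoding is an assumption.

```python
import numpy as np

# Hypothetical trajectory features: [grip_force, speed, fruit_deformation].
traj_a = np.array([0.9, 0.4, 0.7])  # squeezes hard, bruises the fruit
traj_b = np.array([0.3, 0.4, 0.1])  # "holds the fruit looser"

def preference_update(w, preferred, rejected, lr=0.5):
    """One gradient step on the Bradley-Terry preference likelihood:
    P(preferred beats rejected) = sigmoid(w . (preferred - rejected))."""
    diff = preferred - rejected
    p = 1.0 / (1.0 + np.exp(-w @ diff))
    return w + lr * (1.0 - p) * diff  # gradient of the log-likelihood

w = np.zeros(3)
for _ in range(50):                   # the human prefers trajectory b
    w = preference_update(w, traj_b, traj_a)

print("learned reward weights:", w.round(2))
print("reward(a):", (w @ traj_a).round(2), " reward(b):", (w @ traj_b).round(2))
```

After a few dozen comparisons, the learned weights penalize grip force and fruit deformation, so the reward function now prefers gentler grasps, which is exactly the behavior the human feedback encoded.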
During the conference, many participants warned of AI’s tendency to make things up, or “hallucinate,” and stressed the need to train AI on robust datasets to root out bias. Lauren Perry, principal engineer for AI and machine learning at The Aerospace Corporation, delivered an engaging talk called “Additional Considerations for Trust in the Advent of GenAI.”
“I’m excited for what’s to come, but with great power comes great responsibility,” she said. “We have to make sure we’re using generative AI in an ethical and responsible manner and that humans continue to be [in] the loop where needed, to be able to take control when necessary.”
Published on October 25, 2024