How Do We Protect Our Future Robot-Driven Systems from Attackers?

November 5, 2024

A new $2.1 million project supported by the Office of Naval Research will provide invaluable intelligence for the robot systems that will be essential to our future lives.


A robot patrols in Singapore’s Changi Airport. Image/Wikimedia Commons.

Imagine this future: Our homes and workplaces are protected by robotic security guards that patrol the perimeter at regular intervals, keeping an eye out for, and deterring, potential intruders. But what if a nefarious actor closely studied your guard’s behavior and movements over time? Perhaps this criminal could deduce patterns in the guard’s patrolling, predicting the choices the robot makes and how it interacts with its environment. They could then harness this information to understand the robot’s vulnerabilities, bypass your security systems and break into your home or business.

A new $2.1 million research project led by Professor of Industrial and Systems Engineering Johannes Royset aims to safeguard our future intelligent physical systems, such as robots, and ensure they can evade those who would do them harm. The project is supported by the Office of Naval Research.

You may already have noticed robots gradually making their way into your day-to-day life – from food delivery robots wheeling their tasty goods to customers to information and security bots in shopping malls. For Royset, this technology will only become more widespread in the future, which is why it’s so important to ensure it can’t be compromised by bad actors.

“In the future, there will be all these robots walking around, and we’re wondering, how are they thinking? Can we observe them from the sidelines and try to predict what they will do next? We want to understand how they make decisions at a higher level, to understand the vulnerabilities of such systems,” Royset said.

Royset said the vulnerabilities of autonomous robotic systems could be particularly critical in airport, government and homeland security settings. Robotic technology is already in use in many international airports to scan travelers’ passports and cross-reference them with facial recognition before letting travelers through automated security gates. If nefarious actors were able to understand weaknesses in these systems, it could leave our entry points open to potential terrorists.


Professor in the Daniel J. Epstein Department of Industrial and Systems Engineering Johannes Royset. Image/Angel Ahabue

“Can somebody standing in the dark shadows of the airport observe how these security systems and security robots operate and then try to learn how these things are behaving — maybe even with the help of their own algorithms — and then try to fool the system? And the key thing is, they only need to fool it once,” Royset said. “That guy in the shadows doesn’t need to get it right all the time. He only needs to identify a point in time when he thinks he has a shot at it. One vulnerability could ruin the day for us.”

Royset’s approach is to address the problem like a game — thinking like the potential attacker by studying a system’s vulnerabilities and, in turn, predicting the attacker’s actions and responses.

“So the guy in the shadow at the airport that’s trying to do bad things, and we know what he knows. So we are going to go back and forth in a bit of a game, and that’s where deception can come in,” he said.
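
One textbook way to picture this back-and-forth is as a small attacker-defender game: the defender commits to a patrol strategy, and an attacker who has watched long enough simply strikes wherever coverage is thinnest. The sketch below is purely illustrative – hypothetical zones and made-up coverage numbers, not anything from the project – but it shows why a predictable schedule hands the attacker a near-certain opening, while a randomized one shrinks that edge.

```python
# Illustrative only: hypothetical zones and coverage fractions, not project data.
ZONES = ["lobby", "loading_dock", "server_room"]

def attacker_best_response(coverage):
    """Attacker strikes the zone the guard covers least; returns (zone, success prob).
    `coverage` maps each zone to the fraction of time the guard spends there."""
    target = min(ZONES, key=lambda zone: coverage.get(zone, 0.0))
    return target, 1.0 - coverage.get(target, 0.0)

# A predictable schedule that rarely visits the loading dock...
predictable = {"lobby": 0.6, "loading_dock": 0.1, "server_room": 0.3}
# ...versus a randomized schedule that spreads coverage evenly.
randomized = {zone: 1.0 / len(ZONES) for zone in ZONES}

for name, coverage in [("predictable", predictable), ("randomized", randomized)]:
    zone, prob = attacker_best_response(coverage)
    print(f"{name}: attacker targets {zone}, succeeds with probability {prob:.2f}")
```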

Royset noted that deception has long been regarded as a distinctly human art form. Take, for instance, the wily card player who can skillfully bluff to gain an advantage during a poker game, changing their expression and performance in the knowledge that their opponent is trying to read them.

“We think that with AI systems, deception can play a role too. Bluffing is not only a human thing,” Royset said. “If we are playing chess with two AI systems, and I have a bigger computer than you, I can look ten steps ahead while you can only look five steps ahead. So then I can fool you in all types of ways because I have a computational advantage.”
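
Royset’s chess analogy can be made concrete with a toy depth-limited search. The sketch below is a hypothetical illustration, not one of the project’s algorithms: two players run the same minimax (negamax) search on a simple take-away game, but one is allowed to look further ahead, and that computational edge alone decides the outcome.

```python
def negamax(stones, depth):
    """Value of a take-away position (take 1-3 stones, taking the last one wins)
    for the player to move, searching at most `depth` plies ahead."""
    if stones == 0:
        return -1          # the opponent just took the last stone: we lost
    if depth == 0:
        return 0           # horizon reached: call the position unknown/neutral
    return max(-negamax(stones - take, depth - 1)
               for take in (1, 2, 3) if take <= stones)

def best_move(stones, depth):
    """Highest-valued move within the search horizon (ties go to the larger take)."""
    moves = [take for take in (1, 2, 3) if take <= stones]
    return max(moves, key=lambda take: (-negamax(stones - take, depth - 1), take))

def play(stones, depth_a, depth_b):
    """Player A (searching depth_a plies) moves first against player B (depth_b)."""
    depths, turn = [depth_a, depth_b], 0
    while True:
        stones -= best_move(stones, depths[turn])
        if stones == 0:
            return "A" if turn == 0 else "B"
        turn = 1 - turn

# From 16 stones the side to move should lose with perfect play, yet the deeper
# searcher A still wins here because B's shallow search blunders along the way.
print(play(16, depth_a=9, depth_b=3))  # prints "A"
```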

Returning to the robot security guard, the research team’s system will put itself in the shoes of an attacker observing the guard’s patrol pattern.

“Now we know its patrol pattern — we know it always walks counterclockwise around the shopping mall, and that it changes to clockwise on Tuesdays at 2 pm. And we now know that, so we can take advantage of that,” Royset said.
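
As a rough illustration of how little it takes to exploit that kind of regularity – a toy sketch with an invented patrol log, not the team’s models – an observer could simply tally which waypoint tends to follow which and predict the guard’s next stop:

```python
from collections import Counter, defaultdict

# Hypothetical observation log of a guard that always walks the same loop.
observed = ["gate", "lobby", "dock", "office"] * 3 + ["gate"]

# Count which waypoint follows which (a first-order transition-count model).
transitions = defaultdict(Counter)
for here, nxt in zip(observed, observed[1:]):
    transitions[here][nxt] += 1

def predict_next(waypoint):
    """Most frequently observed successor of `waypoint` and its empirical frequency."""
    counts = transitions[waypoint]
    guess, hits = counts.most_common(1)[0]
    return guess, hits / sum(counts.values())

guess, freq = predict_next("lobby")
print(f"After 'lobby' the guard went to '{guess}' {freq:.0%} of the time")
# A perfectly regular patrol is perfectly predictable; randomizing the route
# is the defender's simplest counter to this kind of observer.
```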

Royset will be working closely with research collaborators at UC Berkeley, UC Santa Barbara and the Naval Postgraduate School. The project will use a testbed environment to study models and algorithms in an adversarial game format, in order to understand a physical AI system’s weaknesses and the many ways that attackers may respond to them.

Over the next five years, the insights provided by the project will benefit engineers designing our next-generation autonomous systems.

“I think this project will help us to better understand the very complicated algorithms that we find inside these AI systems,” Royset said. “Are they any good? When are they vulnerable? Is it easy to figure out if it’s vulnerable? Every day we are more and more surrounded by algorithms. Simply trying to understand them better is one of the biggest challenges we have now.”

Published on November 5th, 2024

Last updated on November 22nd, 2024
