
Artificial intelligence is reshaping nearly every corner of modern life — and at the USC Viterbi School of Engineering, four undergraduates are already contributing to that transformation in meaningful ways. This year, they earned national recognition from the Computing Research Association (CRA) for work that cuts to the heart of AI’s most pressing questions: How do we make it transparent? How do we keep it secure? How do we put its most powerful tools in the hands of everyone?
Dylan Sun, Jason Chen, Ojas Nimase, and Peiran Qiu each received honorable mentions for the CRA’s Undergraduate Researcher Award, one of the most prestigious honors in North American computing research. Their projects span robotics, censorship detection, cybersecurity, and real-time image rendering — but together they reflect something larger: a new generation of researchers engaging seriously with the opportunities and responsibilities that advanced AI presents.
Identifying censorship in large language models
Peiran Qiu is a junior double majoring in computer science and applied and computational mathematics. Qiu’s work focuses on the AI chatbot DeepSeek, proposing a framework to understand and test information suppression in large language models.
Qiu’s work was supervised by Emilio Ferrara, a professor of computer science and communication and associate chair of the Thomas Lord Department of Computer Science and the School of Advanced Computing. The study took eight months and concluded that DeepSeek retains critical internal reasoning but outputs text that “leans toward dominant ideological or state-aligned narratives.” It has been published in the journal Information Sciences.
When asked about the impact this work may have on the real world, Qiu explained: “It’s important for the general public to understand that large language models don’t always present every side of an issue, and in some cases may omit alternative perspectives in their responses.”
Teaching robots with image generation models
Jason Chen is a junior majoring in computer science. His study, titled “ROPA: Synthetic Robot Pose Generation for RGB-D Bimanual Data Augmentation,” was recently accepted to the International Conference on Robotics and Automation (ICRA).
Chen’s research was overseen by Daniel Seita, an assistant professor of computer science at the Thomas Lord Department of Computer Science and the School of Advanced Computing, as well as Gaurav Sukhatme, the Donald M. Alstadt Chair in advanced computing and professor of computer science and electrical and computer engineering at the Thomas Lord Department of Computer Science. The project uses AI image generation models to train real robots, so they can more efficiently learn real-world tasks like opening jars.
Chen explained that data collection is difficult and expensive in robotics, a problem his work hopes to rectify. “By making it more affordable and practical to train robots for complex two-handed tasks, this work could accelerate the development of generalized robotics,” Chen said.
Understanding the tension between AI transparency and security
Ojas Nimase is a sophomore majoring in mathematics and minoring in computer science. He was recognized for his research examining the intersection of AI transparency and model security. This work was published at the 2025 IEEE International Conference on Data Mining and won a second-prize CCC award.
Nimase worked under the guidance of Yue Zhao, assistant professor at the Thomas Lord Department of Computer Science and the School of Advanced Computing, and Florida State University professor Yushun Dong. The project tackles the question of AI security, examining how transparency regulation could inadvertently enable more effective cyberattacks.
Nimase’s work aims to reduce the future risks facing commercial AI models by preemptively defending them against theft by malicious actors. Nimase says applications could include AI models that handle sensitive information. He stated: “I imagine [my work] would involve medical diagnoses, financial fraud detection, and drug discovery, where a stolen model means stolen intellectual property and potentially compromised sensitive data.”
Making cinematic visuals more accessible than ever
Dylan Sun is a senior majoring in the computer science games program. Sun’s work was accepted to SIGGRAPH 2025, the “premier conference and exhibition on computer graphics and interactive techniques.”
Sun’s work was co-supervised by Yue Wang, assistant professor at the Thomas Lord Department of Computer Science, alongside Andrew Feng, research assistant professor at the Thomas Lord Department of Computer Science and the Institute for Creative Technologies. Sun’s recent work centers on Deformable Beta Splatting, a method he co-developed to render and generate cinematic images in real time.
Sun is passionate about the future of creativity and hopes to actively help shape it. “I hope to continue this journey as a scientist who builds tools that erase the distance between an idea and its realization — so that anyone, artist or engineer, can bring their vision to life. Because the future of creativity isn’t just faster or prettier — it’s more accessible, more personal, and more human,” Sun said.
Though each pursued a distinct corner of the field, Sun, Chen, Nimase, and Qiu share a common thread: a belief that the most important work in AI isn’t just about building smarter systems, but more responsible, accessible, and secure ones.
Published on March 12th, 2026

