USC at ICML 2025

Caitlin Dawson | July 15, 2025 

From health applications to theoretical breakthroughs, USC researchers contribute to a wide range of work at this year’s premier machine learning conference

USC researchers will make a strong showing at ICML 2025, held July 13–19 in Vancouver, BC. Photo/USC

USC researchers will make a strong showing at ICML (the International Conference on Machine Learning) 2025, held July 13–19 in Vancouver, BC, with contributions across the poster, spotlight, and oral tracks. Their work reflects the breadth of innovation taking place across the university, spanning theoretical advances, health applications, and the frontiers of large language models. This year’s contributions include researchers from the USC Viterbi School of Engineering, the School of Advanced Computing’s Thomas Lord Department of Computer Science and Ming Hsieh Department of Electrical and Computer Engineering, and the USC Marshall School of Business.

One prominent thread is the use of machine learning to better understand human behavior and improve health outcomes. From modeling behavioral signals in wearable data to interpreting neural imaging patterns, several papers focus on learning from complex biological and cognitive data. These efforts reflect a growing interest in merging data-driven methods with neuroscience, cognitive science and medicine.

On the technical front, USC-affiliated papers tackle core machine learning challenges, including optimization under uncertainty, causality, fairness, and the scalable training of large models. Researchers explored new strategies to enhance reasoning in language models, improve robustness in survival analysis, and enable vision-based reinforcement learning agents to generalize to novel environments. Together, these contributions underscore USC’s role in advancing both the science and real-world impact of machine learning.

Special thanks to Jing Yang, a USC computer science student and the founder of Paper Copilot, for assistance with this roundup.

USC-Affiliated Papers

(USC authors in bold)

Beyond Sensor Data: Foundation Models of Behavioral Data from Wearables Improve Health Predictions

Eray Erturk; Fahad Kamran; Salar Abbaspourazad; Sean Jewell; Harsh Sharma; Yujie Li; Sinead Williamson; Nicholas J Foti; Joseph Futoma

Session/area: applications->health / medicine


Tightening Causal Bounds via Covariate-Aware Optimal Transport

Sirui Lin; Zijun Gao; Jose Blanchet; Peter Glynn

Session/area: general machine learning->causality


Doubly Robust Conformalized Survival Analysis with Right-Censored Data (SPOTLIGHT)

Matteo Sesia; Vladimir Svetnik

Session/area: probabilistic methods


Core Knowledge Deficits in Multi-Modal Language Models

Yijiang Li; Qingying Gao; Tianwei Zhao; Bingyang Wang; Haoran Sun; Haiyun Lyu; Robert D. Hawkins; Nuno Vasconcelos; Tal Golan; Dezhi Luo; Hokin Deng

Session/area: neuroscience, cognitive science


Fully Dynamic Euclidean Bi-Chromatic Matching in Sublinear Update Time (ORAL)

Gramoz Goranci; Peter Kiss; Neel Patel; Martin P. Seybold; Eva Szilagyi; Da Wei Zheng

Session/area: general machine learning


Poly2Vec: Polymorphic Fourier-Based Encoding of Geospatial Objects for GeoAI Applications

Maria Despoina Siampou; Jialiang Li; John Krumm; Cyrus Shahabi; Hua Lu

Session/area: general machine learning


Non-Asymptotic and Non-Lipschitzian Bounds on Optimal Values in Stochastic Optimization Under Heavy Tails

Jindong Tong; Hongcheng Liu; Johannes O. Royset

Session/area: optimization->stochastic


Retraining with Predicted Hard Labels Provably Increases Model Accuracy

Rudrajit Das; Inderjit S Dhillon; Alessandro Epasto; Adel Javanmard; Jieming Mao; Vahab Mirrokni; Sujay Sanghavi; Peilin Zhong

Session/area: theory->learning theory


Integer Programming for Generalized Causal Bootstrap Designs

Jennifer Rogers Brennan; Sebastien Lahaie; Adel Javanmard; Nick Doudchenko; Jean Pouget-Abadie

Session/area: general machine learning->causality


Optimal Transport Barycenter via Nonconvex-Concave Minimax Optimization

Kaheon Kim; Rentian Yao; Changbo Zhu; Xiaohui Chen

Session/area: optimization->nonconvex


Critical Tokens Matter: Token-Level Contrastive Estimation Enhances LLM’s Reasoning Capability

Zicheng Lin; Tian Liang; Jiahao Xu; Qiuzhi Liu; Xing Wang; Ruilin Luo; Chufan Shi; Siheng Li; Yujiu Yang; Zhaopeng Tu

Session/area: deep learning->large language models


Stochastic Control for Fine-tuning Diffusion Models: Optimality, Regularity, and Convergence

Yinbin Han; Meisam Razaviyayn; Renyuan Xu

Session/area: deep learning->theory


Computing Voting Rules with Improvement Feedback

Evi Micha; Vasilis Varsamis

Session/area: theory->game theory


Asymmetric Decision-Making in Online Knowledge Distillation: Unifying Consensus and Divergence

Zhaowei Chen; Borui Zhao; Yuchen Ge; Yuhao Chen; Renjie Song; Jiajun Liang

Session/area: applications->computer vision


Smooth Interpolation for Improved Discrete Graph Generative Models

Yuxuan Song; Juntong Shi; Jingjing Gong; Minkai Xu; Stefano Ermon; Hao Zhou; Wei-Ying Ma

Session/area: deep learning->generative models and autoencoders


Improving the Variance of Differentially Private Randomized Experiments through Clustering

Adel Javanmard; Vahab Mirrokni; Jean Pouget-Abadie

Session/area: general machine learning->causality


Robust Conformal Outlier Detection under Contaminated Reference Data

Meshi Bashari; Matteo Sesia; Yaniv Romano

Session/area: probabilistic methods


Zero Shot Generalization of Vision-Based RL Without Data Augmentation

Sumeet Batra; Gaurav S. Sukhatme

Session/area: reinforcement learning->deep rl


Test-Time Training Provably Improves Transformers as In-context Learners

Halil Alperen Gozeten; Muhammed Emrullah Ildiz; Xuechen Zhang; Mahdi Soltanolkotabi; Marco Mondelli; Samet Oymak

Session/area: theory->deep learning


Dynamical Modeling of Behaviorally Relevant Spatiotemporal Patterns in Neural Imaging Data

Sayed Mohammad Hosseini; Maryam Shanechi

Session/area: applications->neuroscience, cognitive science


Distilling the Knowledge in Data Pruning

Emanuel Ben Baruch; Adam Botach; Igor Kviatkovsky; Manoj Aggarwal; Gerard Medioni

Session/area: deep learning->algorithms


DeepCrossAttention: Supercharging Transformer Residual Connections

Mike Heddes; Adel Javanmard; Kyriakos Axiotis; Gang Fu; Mohammadhossein Bateni; Vahab Mirrokni

Session/area: deep learning->attention mechanisms


Diverging Preferences: When do Annotators Disagree and do Models Know?

Michael JQ Zhang; Zhilin Wang; Jena D. Hwang; Yi Dong; Olivier Delalleau; Yejin Choi; Eunsol Choi; Xiang Ren; Valentina Pyatkin

Session/area: deep learning->large language models


Ladder-Residual: Parallelism-Aware Architecture for Accelerating Large Model Inference with Communication Overlapping

Muru Zhang; Mayank Mishra; Zhongzhu Zhou; William Brandon; Jue Wang; Yoon Kim; Jonathan Ragan-Kelley; Shuaiwen Leon Song; Ben Athiwaratkun; Tri Dao

Session/area: deep learning->large language models


End-to-End Learning Framework for Solving Non-Markovian Optimal Control

Xiaole Zhang; Peiyu Zhang; Xiongye Xiao; Shixuan Li; Vasileios Tzoumas; Vijay Gupta; Paul Bogdan

Session/area: optimization


EARL-BO: Reinforcement Learning for Multi-Step Lookahead, High-Dimensional Bayesian Optimization

Mujin Cheon; Jay H Lee; Dong-Yeun Koh; Calvin Tsay

Session/area: optimization->zero-order and blackbox optimization


FACTER: Fairness-Aware Conformal Thresholding and Prompt Engineering for Enabling Fair LLM-Based Recommender Systems

Arya Fayyazi; Mehdi Kamal; Massoud Pedram

Session/area: social aspects->fairness


Contextual Linear Bandits with Delay as Payoff

Mengxiao Zhang; Yingfei Wang; Haipeng Luo

Session/area: theory->online learning and bandits


Synthetic Text Generation for Training Large Language Models via Gradient Matching

Dang Nguyen; Zeman Li; Mohammadhossein Bateni; Vahab Mirrokni; Meisam Razaviyayn; Baharan Mirzasoleiman

Session/area: deep learning->large language models


Position paper: The Artificial Intelligence and Machine Learning Community Should Adopt a More Transparent and Regulated Peer Review Process

Jing Yang

Session/area: social ethical env impact


Note: Every effort was made to include all USC Viterbi-affiliated papers. If you believe your work was inadvertently left out, please let us know at cscomms@usc.edu so we can update the list.
