USC at the International Conference on Learning Representations (ICLR)

April 24, 2020

USC researchers will present work virtually at top AI conference, April 25-30.



USC faculty and students will present their research virtually at the International Conference on Learning Representations (ICLR), one of the world’s leading artificial intelligence (AI) conferences, April 25 to 30. Ten papers will be presented, with researchers creating five-minute videos for the virtual poster session.

Improving recommender systems, predicting climate observations

Research includes a paper co-authored by USC computer science researchers and Facebook AI exploring how to explain and improve recommender systems.

“Recommender predictions are important to the economy and society at large,” said lead author Michael Tsang, a computer science PhD student specializing in machine learning interpretability under the supervision of Yan Liu, an associate professor of computer science.

“On one hand, businesses optimize recommender systems to guide our choices of entertainment, food, housing, friendship, dating, and more. On the other hand, we may want explanations about why we receive certain recommendations. This paper addresses both demands.”

Also working under Liu’s supervision, computer science PhD students Sungyong Seo and Chuizheng Meng demonstrate that their novel architecture, Physics-aware Difference Graph Networks (PA-DGN), outperforms existing approaches for modeling natural phenomena, including effectively predicting real-world climate observations from weather stations.

Looking inside the “black box” 

In the area of natural language processing (NLP), a paper authored by a USC computer science team explores how neural networks perceive natural language sentences, providing a peek inside the “black box.”

“The explanation improves human trust in neural networks and helps researchers mitigate bias in classifiers,” said lead author Xisen Jin, a PhD student advised by Xiang Ren, an assistant professor of computer science.

“It facilitates the application of neural networks in domains where transparency and fairness are essential, such as politics and law.”

A partnership between USC, Tsinghua University and Snap Research, including Ren and his PhD students, proposes using natural language explanations, in addition to annotations, to achieve better performance in NLP tasks such as relation extraction and sentiment analysis, while reducing the annotation efforts by half.

Coordinating skills to solve challenging robotic tasks

In the area of robotics, a team led by Joseph Lim, an assistant professor of computer science, is working on developing agents that can learn to follow natural language instructions. Results have demonstrated that the proposed framework learns to reliably accomplish program instructions.

Another project authored by Lim and his students proposes a framework to efficiently coordinate skills to solve challenging collaborative robotic tasks, such as picking up a long bar, placing a box inside a container with two robot arms, and pushing a box with two ant agents.

Image restoration, large scale graph representation learning

In electrical and computer engineering, a paper co-authored by Assistant Professor Mahdi Soltanolkotabi, Andrew and Erna Viterbi Early Career Chair, shows that a neural network can be very effective at image restoration tasks, such as denoising and inpainting, without requiring training data.

Professor Viktor Prasanna and his colleagues propose a general framework for large scale graph representation learning, achieving accuracy, efficiency, flexibility and scalability. “Training a deep Graph Neural Network (GNN) on a social connection graph, we significantly improve accuracy with 100 times less computation time,” said Prasanna, the Charles Lee Powell Chair in Electrical and Computer Engineering.


USC papers at ICLR 2020

Beyond Linearization: On Quadratic and Higher-Order Approximation of Wide Neural Networks

Yu Bai, Jason Lee (USC)

Denoising and Regularization via Exploiting the Structural Bias of Convolutional Generators

Reinhard Heckel, Mahdi Soltanolkotabi (USC)

Feature Interaction Interpretability: A Case for Explaining Ad-Recommendation Systems via Neural Interaction Detection

Michael Tsang (USC), Dehua Cheng, Hanpeng Liu (USC), Xue Feng, Yan Liu (USC)

GraphSAINT: Graph Sampling Based Inductive Learning Method

Hanqing Zeng (USC), Hongkuan Zhou (USC), Ajitesh Srivastava (USC), Rajgopal Kannan, Viktor Prasanna (USC)

Learning from Explanations with Neural Execution Tree                                                                

Ziqi Wang, Yujia Qin, Wenxuan Zhou (USC), Jun Yan (USC), Qinyuan Ye (USC), Leonardo Neves, Zhiyuan Liu, Xiang Ren (USC)

Learning to Coordinate Manipulation Skills via Skill Behavior Diversification                              

Youngwoon Lee (USC), Jingyun Yang (USC), Joseph Lim (USC)

Physics-aware Difference Graph Networks for Sparsely-Observed Dynamics

Sungyong Seo (USC), Chuizheng Meng (USC), Yan Liu (USC)

Program Guided Agent                             

Shao-Hua Sun (USC), Te-Lin Wu, Joseph Lim (USC)

Rényi Fair Inference

Sina Baharlouei (USC), Maher Nouiehed (USC), Ahmad Beirami, Meisam Razaviyayn (USC)

Towards Hierarchical Importance Attribution: Explaining Compositional Semantics for Neural Sequence Models 

Xisen Jin (USC), Zhongyu Wei, Junyi Du (USC), Xiangyang Xue, Xiang Ren (USC)                                                                                  

Published on April 24th, 2020

Last updated on April 7th, 2022
