USC ISI at the 2024 EMNLP Conference

November 12, 2024

Notable research that drives forward AI capabilities in language, ethics, social science and more


Photo credit: Funtap/Getty Images

At the 2024 Conference on Empirical Methods in Natural Language Processing (EMNLP), researchers from USC Viterbi’s Information Sciences Institute (ISI) will present some of their latest work. Held from November 12th to 16th in Miami, Florida, EMNLP is one of the field’s premier conferences, showcasing leading research in artificial intelligence and natural language processing. From exploring how AI can craft human-like stories to designing models that aid social science research, ISI’s contributions reflect the breadth and impact of natural language processing on both technical and societal fronts.

Research Spotlights 2024
Can AI Tell Stories Like Humans Do?

Large language models (LLMs) can generate fluent narratives at the click of a button. But do these texts capture the complexity and emotional depth of human storytelling? Researchers at USC ISI and UCLA set out to answer this question in their paper, “Are Large Language Models Capable of Generating Human-Level Narratives?” Through a qualitative analysis of story arcs, plot progression, and emotional dynamics, they found that AI-generated stories often lack the suspense and narrative diversity of human writing.

“If you’ve ever tried to generate a story with AI, it’s pretty unsatisfying,” said Jonathan May, a principal scientist at ISI, who worked on the study. “AIs are very bad at pacing, the details are very surface-y, nothing gets too dark.” Understanding these weaknesses is what drives May’s research into improving AI creativity. “If we understand the craft of storytelling, we can systematically increase creativity,” he said.

Making Language Models Speechworthy 

Ask ChatGPT a question, and you’ll get a thorough, detailed response. Ask Siri or Alexa the same question, and you’ll get a basic answer, if any at all. At first glance, the pairing seems ideal: let LLMs power voice assistants. But there’s a problem. “AI models like ChatGPT tend to produce walls of text for even simple requests,” said Justin Cho, a research assistant at ISI supervised by Jonathan May. “It is necessary to adapt LLMs to become more suitable for the speech domain by understanding the unique constraints and features of speech.”

In “Speechworthy Instruction-tuned Language Models,” Cho and colleagues from Amazon adapt LLMs, which are trained on text, to yield responses that can be spoken rather than written. Drawing on radio broadcasting principles and the listenability literature, they developed techniques to make LLM responses more speech-friendly: shorter, clearer, and free of visual elements like bullet points that don’t work in conversation. Their approach succeeded, with humans preferring the modified responses more than 75% of the time over those from standard approaches. This work could help bridge the gap between the information quality of LLMs and the spoken delivery of assistants like Siri, laying the blueprint for better voice assistants.
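The constraints the team describes are concrete enough to sketch. The short Python example below is purely illustrative and is not code from the paper: the function name and rules are invented here to show what “speech-friendly” means in practice, stripping visual formatting and keeping an answer to a few spoken sentences, whereas the paper adapts the model itself rather than post-processing its output.

    import re

    def make_speech_friendly(response: str, max_sentences: int = 3) -> str:
        """Rewrite a text-oriented LLM response into a form better suited to speech.

        Hypothetical illustration only: it mirrors the surface constraints the
        paper targets (short, plain, no visual formatting), not the authors' method.
        """
        # Drop bullets and numbering that do not work when read aloud.
        text = re.sub(r"^\s*([-*\u2022]|\d+\.)\s+", "", response, flags=re.MULTILINE)
        # Remove markup characters such as asterisks, underscores, and hashes.
        text = re.sub(r"[*_`#]+", "", text)
        # Collapse whitespace and keep only the first few sentences.
        text = " ".join(text.split())
        sentences = re.split(r"(?<=[.!?])\s+", text)
        return " ".join(sentences[:max_sentences])

    print(make_speech_friendly(
        "**Here are some tips:**\n- Drink water.\n- Stretch often.\n- Sleep 8 hours.\nThese help most people."
    ))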

How Susceptible are Large Language Models to Ideological Manipulation?

Can LLMs be influenced to adopt particular ideological views? In “How Susceptible are Large Language Models to Ideological Manipulation?”, researchers at ISI explore this concerning vulnerability, finding that language models can be easily swayed to adopt and generalize ideological biases from only a small amount of training data. “This could have profound societal implications in terms of perpetuating biases and manipulating public opinion,” said Kai Chen, a research assistant at ISI supervised by Kristina Lerman, who worked on the study.

Even more troubling, language models demonstrated an ability to absorb ideology from one topic and apply it to unrelated ones. For example, researchers found that exposing a model to right-leaning views on race could shift its stance on scientific issues rightward as well. This susceptibility raises important questions about safeguarding AI systems from both intentional manipulation and unintended biases during training. “Being selected by EMNLP validates the significance of this work and allows us to raise awareness of these issues within the LLM research community,” Chen said. 

Community Perspectives at Scale

Social scientists often rely on surveys and focus groups to understand the opinions, needs, and concerns of diverse populations. However, this process can be slow and costly. To address the growing need for more efficient and scalable methods in social science research, a team of computer scientists at ISI developed an innovative framework that uses LLMs to create “digital twins” of online communities. This method generates human-like responses that represent a population’s language, style, and attitudes. 

Zihao He, a research assistant at ISI supervised by Kristina Lerman and a co-author of the study “COMMUNITY-CROSS-INSTRUCT: Unsupervised Instruction Generation for Aligning Large Language Models to Online Communities,” explained that the framework could help public health officials quickly identify emerging health concerns or enable social scientists to conduct longitudinal studies of community evolution without the need for repeated, intrusive data collection. “The approach not only saves time and resources but also allows for more frequent and comprehensive analyses of community mindset, which is crucial in our rapidly changing digital landscape,” He said.

Teaching AI to Be a Flavor Scientist 

Creating new flavors requires a lot of guess-and-check work. What if artificial intelligence could help? The main challenge, according to paper co-author Jonathan May: “The expression of flavor is not easy to define.”

In “FOODPUZZLE: Developing Large Language Model Agents as Flavor Scientists,” researchers from USC ISI and UC Davis introduce a novel framework for AI-assisted flavor development. The team created a benchmark dataset of food items and their molecular flavor profiles to test models’ ability to understand and predict flavor combinations. They found that predicting a flavor from a set of chemical compounds is challenging even for powerful language models, but that guiding the models to consult relevant scholarly works significantly improves their flavor identification.
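One way to picture the “consult relevant scholarly works” step is retrieval-augmented prompting. The Python sketch below is hypothetical: the molecule names, literature snippets, and prompt format are invented for illustration and are not the FOODPUZZLE benchmark’s actual data or the authors’ implementation.

    # Hypothetical sketch: ground a flavor prediction in snippets from the
    # flavor-chemistry literature before asking a language model to answer.
    CORPUS = {
        "2-acetylpyrazine": "2-Acetylpyrazine is associated with roasted, nutty, popcorn-like aromas.",
        "gamma-decalactone": "Gamma-decalactone contributes creamy, peach-like fruit notes.",
        "furaneol": "Furaneol imparts caramel-like, strawberry sweetness at low concentrations.",
    }

    def retrieve_evidence(molecules):
        """Look up literature snippets for each molecule (stand-in for real retrieval)."""
        return [CORPUS[m] for m in molecules if m in CORPUS]

    def build_prompt(food_item, molecules):
        evidence = "\n".join(f"- {snippet}" for snippet in retrieve_evidence(molecules))
        return (
            f"Food item: {food_item}\n"
            f"Detected flavor compounds: {', '.join(molecules)}\n"
            f"Relevant literature:\n{evidence}\n\n"
            "Based on the compounds and the literature above, describe the likely flavor profile."
        )

    print(build_prompt("baked peach tart", ["gamma-decalactone", "furaneol"]))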

Complete list of accepted USC ISI papers below

OATH-Frames: Characterizing Online Attitudes Towards Homelessness with LLM Assistants
Jaspreet Ranjit, Brihi Joshi, Rebecca Dorn, Laura Petry, Olga Koumoundouros, Jayne Bottarini, Peichen Liu, Eric Rice, Swabha Swayamdipta
*Winner of an Outstanding Paper award

Are Large Language Models Capable of Generating Human-Level Narratives?
Yufei Tian, Tenghao Huang, Miri Liu, Derek Jiang, Alexander Spangher, Muhao Chen, Jonathan May, Nanyun Peng

Speechworthy Instruction-tuned Language Models
Hyundong Cho, Nicolaas Jedema, Leonardo F.R. Ribeiro, Karishma Sharma, Pedro Szekely, Alessandro Moschitti, Ruben Janssen, Jonathan May

How Susceptible are Large Language Models to Ideological Manipulation?
Kai Chen, Zihao He, Jun Yan, Taiwei Shi, Kristina Lerman

COMMUNITY-CROSS-INSTRUCT: Unsupervised Instruction Generation for Aligning Large Language Models to Online Communities
Zihao He, Minh Duc Chu, Rebecca Dorn, Siyi Guo, Kristina Lerman

FOODPUZZLE: Developing Large Language Model Agents as Flavor Scientists
Tenghao Huang, Dong Hee Lee, John Sweeney, Jiatong Shi, Emily Steliotes, Matthew Lange, Jonathan May, Muhao Chen

Authorship Style Transfer with Policy Optimization
Shuai Liu, Shantanu Agarwal, Jonathan May

Do LLMs Plan Like Human Writers? Comparing Journalist Coverage of Press Releases with LLMs
Alexander Spangher, Nanyun Peng, Sebastian Gehrmann, Mark Dredze

Explaining Mixtures of Sources in News Articles
Alexander Spangher, James Youn, Matt DeButts, Nanyun Peng, Emilio Ferrara, Jonathan May

Guided Profile Generation Improves Personalization with LLMs
Jiarui Zhang

Light-weight Fine-tuning Method for Defending Adversarial Noise in Pre-trained Medical Vision-Language Models
Xu Han, Linghao Jin, Xuezhe Ma, Xiaofeng Liu

Published on November 12th, 2024

Last updated on November 15th, 2024
