
From influence campaign detection to AI policy and safety, USC’s Information Sciences Institute (ISI) is making an impact at the 39th AAAI Conference on Artificial Intelligence (AAAI-25), held February 25 to March 4, 2025, in Philadelphia.
AAAI-25 saw a record-breaking 12,957 paper submissions, surpassing last year's record by over 3,000. With an acceptance rate of 23.4% (3,032 papers accepted), researchers from USC Viterbi School of Engineering's ISI secured four spots spanning critical research areas. The four papers tackle some of AI's most urgent challenges, from detecting large-scale influence campaigns to ensuring AI systems make legally sound decisions.
Unmasking Influence Campaign Networks with Graph AI
Online influence campaigns are increasingly sophisticated, often spread by coordinated networks operating across multiple platforms. In their paper, IOHunter: Graph Foundation Model to Uncover Online Information Operations, ISI researchers Luca Luceri and Emilio Ferrara, along with their co-authors, introduce a Graph Foundation Model designed to detect orchestrated inauthentic activity. By combining Graph Neural Networks (GNNs) and Language Models (LMs), their model can analyze large-scale influence campaigns and expose hidden connections between bad actors. Tested in six countries, IOHunter outperforms existing methods in identifying covert online influence operations. “Understanding how online information operations work is key to stopping them,” said Luceri, who is a Research Assistant Professor at the USC Thomas Lord Department of Computer Science and Lead Scientist at ISI. “Our model provides a powerful tool for identifying and analyzing these coordinated networks across different countries, languages, and contexts, also in scenarios with limited data availability and minimal labeled data.”
This method advances and combines the models presented last year at The Web Conference 2024: Unmasking the Web of Deceit: Uncovering Coordinated Activity to Expose Information Operations on Twitter, and Leveraging Large Language Models to Detect Influence Campaigns in Social Media, which received the Best Paper Award.
The research will be presented by Ferrara, Professor of Computer Science & Communication at USC and Research Team Leader at ISI, in the AI for Social Impact Track at the AAAI-25 main conference. It provides a powerful tool for governments, social media platforms, and researchers working to stop the spread of deceptive content.
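The paper details the full architecture, but the core idea of fusing text and graph signals can be pictured with a brief sketch. The following is a minimal illustration, not the authors' implementation, assuming PyTorch and PyTorch Geometric; the layer sizes, feature dimensions, and graph construction are placeholders.

```python
# Minimal sketch (not the authors' implementation): fusing language-model text
# embeddings with a graph neural network over the account-interaction graph to
# classify accounts as coordinated vs. organic. Shapes and names are illustrative.
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv

class TextGraphDetector(nn.Module):
    def __init__(self, lm_dim=768, hidden_dim=128, num_classes=2):
        super().__init__()
        self.conv1 = GCNConv(lm_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, hidden_dim)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, x, edge_index):
        # x: [num_accounts, lm_dim] precomputed LM embeddings of each account's posts
        # edge_index: [2, num_edges] shared activity between accounts (retweets, co-shared URLs, ...)
        h = torch.relu(self.conv1(x, edge_index))
        h = torch.relu(self.conv2(h, edge_index))
        return self.classifier(h)  # per-account logits: coordinated vs. organic

# Toy usage: 4 accounts with 768-dim text embeddings and a small interaction graph.
x = torch.randn(4, 768)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 0, 3, 2]])
logits = TextGraphDetector()(x, edge_index)
```

Because the graph layers aggregate information from neighboring accounts, suspicious coordination patterns can surface even for accounts with little labeled data of their own, which is the regime the authors highlight.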
Preventing AI from Generating False or Harmful Content
As Large Language Models (LLMs) are increasingly adopted across various applications, ensuring their ability to assess and respond to risks effectively is critical for AI safety. In their paper, Risk and Response in Large Language Models: Evaluating Key Threat Categories, ISI researchers Bahareh Harandizadeh, Abel Salinas, and Fred Morstatter examine how LLMs perceive and categorize different types of risks. Their findings reveal that LLMs tend to underestimate the severity of Information Hazards compared to other risks, such as Malicious Uses and Discrimination/Hateful content. Moreover, the study highlights a significant vulnerability: LLMs are particularly susceptible to jailbreaking attacks in Information Hazard scenarios, exposing critical weaknesses in current AI safety mechanisms. These insights emphasize the urgent need for improved safeguards to enhance the reliability and security of LLMs. "AI safety isn't just about what models know—it's about how they evaluate and respond to risks," said Harandizadeh, a Ph.D. student and ISI Research Assistant. "Our research highlights critical weaknesses and how they can be addressed."
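To make the evaluation concrete: one way to probe how a model weighs different threat categories is to ask it to rate prompts drawn from each category and compare the results. The snippet below is an illustrative sketch, not the paper's evaluation code; the category names follow the paper, while the prompts, model name, and scoring instruction are hypothetical placeholders.

```python
# Illustrative sketch (not the paper's evaluation harness): asking a chat model
# to rate the severity of prompts from different threat categories and comparing
# how each category is scored. Prompts, model, and scale are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

categories = {
    "Information Hazards": ["<prompt probing leakage of sensitive information>"],
    "Malicious Uses": ["<prompt requesting help with a harmful activity>"],
    "Discrimination/Hateful": ["<prompt containing hateful framing>"],
}

def rate_severity(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Rate the risk of answering the user's request on a 1-5 scale. Reply with the number only."},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content

for category, prompts in categories.items():
    scores = [rate_severity(p) for p in prompts]
    print(category, scores)  # a systematically lower score for one category signals underestimation
```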
Making AI Legally Aware: Can AI Follow the Law?
As AI takes on a larger role in law and compliance, a major question arises: Can AI systems accurately interpret and apply legal rules? ISI researchers Abha Jha, Abel Salinas, and Fred Morstatter explore this issue in their paper, Knowledge Graph Analysis of Legal Understanding and Violations in LLMs, which has been accepted to the Legal and Ethical AI workshop. Their study focuses on specific US bioweapon regulations and tests whether AI models can detect legal violations, assess unlawful intent, and avoid generating unsafe outputs. Their findings reveal that while LLMs can recognize legal concepts, they often fail to apply them correctly, sometimes even producing illegal instructions despite built-in safeguards. "AI needs to do more than recognize legal text—it must apply it responsibly," said Jha. "Our research identifies key weaknesses and how to build safer legal AI systems." This research underscores the need for stronger legal AI frameworks that ensure AI follows the law as accurately as it recognizes it.
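A toy example helps illustrate the knowledge-graph framing: a regulation can be represented as prohibited (action, object, condition) triples, and a described action checked against them. The sketch below is a hypothetical simplification, not the authors' pipeline; the triples, the extraction step, and the matching rule are all illustrative.

```python
# Toy sketch (not the authors' pipeline): representing a regulation as
# knowledge-graph triples and checking a described action against them.
# The rule set, extraction step, and matching logic are hypothetical.
prohibited = {
    ("produce", "biological_agent", "without_license"),
    ("transfer", "biological_agent", "to_unauthorized_party"),
}

def extract_triple(action_description: str):
    """Placeholder for an LLM call that maps free text to a (verb, object, condition) triple."""
    # e.g. "synthesize a pathogen with no permit" -> ("produce", "biological_agent", "without_license")
    return ("produce", "biological_agent", "without_license")

def violates(action_description: str) -> bool:
    # An action violates the regulation if its extracted triple matches a prohibited one.
    return extract_triple(action_description) in prohibited

print(violates("synthesize a pathogen with no permit"))  # True under this toy rule set
```

The gap the study points to sits in the extraction and application step: a model may recite the regulation correctly yet map a concrete scenario to the wrong triple, or comply with a request it should refuse.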
Editing AI Knowledge to Prevent Misinformation
Keeping AI models factually up-to-date is essential, but retraining them is expensive and inefficient. In their paper, K-Edit: Language Model Editing with Contextual Knowledge Awareness, ISI researcher Elan Markowitz, working with collaborators from USC, Amazon, and UCLA, presents a solution to this problem. Their research introduces K-Edit, a novel method for precisely updating AI knowledge while maintaining contextual consistency.
Traditional model editing methods struggle to maintain logical coherence. For example, if an AI is updated to reflect Rishi Sunak as UK Prime Minister, it may still identify Boris Johnson's wife as the Prime Minister's spouse. K-Edit solves this issue by integrating knowledge graphs, ensuring that edits ripple through related facts, improving multi-hop reasoning and overall model reliability. This breakthrough, which will be presented at the Preventing and Detecting LLM Misinformation (PDLM) Workshop, allows AI to adapt to real-world changes while avoiding the hallucinations and inconsistencies that often arise from isolated edits. "K-Edit ensures AI models evolve with the world while maintaining accuracy and consistency," said Markowitz. "This is crucial for reducing misinformation and developing reliable and trustworthy AI."
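A small, hypothetical example shows why isolated edits break multi-hop facts and how consulting a knowledge graph helps. The entities follow the article's Sunak/Johnson example, but the code is a simplified illustration rather than the K-Edit algorithm itself.

```python
# Toy illustration (not the K-Edit algorithm): an isolated edit leaves related
# multi-hop facts stale; walking the knowledge graph surfaces them so follow-on
# edits can be generated. Entities follow the article's example; logic is simplified.
knowledge_graph = {
    ("UK", "head_of_government"): "Boris Johnson",
    ("Boris Johnson", "spouse"): "Carrie Johnson",
    ("Rishi Sunak", "spouse"): "Akshata Murty",
}

def edit_fact(kg, subject, relation, new_object):
    """Apply an edit and return the related facts that now need re-checking."""
    old_object = kg.get((subject, relation))
    kg[(subject, relation)] = new_object
    # Follow edges of the displaced entity: these multi-hop facts may now be stale.
    return [(s, r, o) for (s, r), o in kg.items() if s == old_object]

stale = edit_fact(knowledge_graph, "UK", "head_of_government", "Rishi Sunak")
print(stale)
# [('Boris Johnson', 'spouse', 'Carrie Johnson')]: a naive edit would still answer
# "Carrie Johnson" to "Who is the UK Prime Minister's spouse?". A contextually
# aware editor would generate and apply the follow-on update as well.
```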
Advancing Thought Leadership in AI for Public Good
Beyond its research contributions, ISI continues to play a leadership role at AAAI. Adam Russell, AI Division Director at ISI, is co-organizing the AI for Public Missions workshop, which brings together researchers, policymakers, and practitioners to explore how AI can support governance, disaster response, and public service. The workshop will focus on real-world AI applications, tackling ethical challenges, deployment strategies, and policy considerations for responsible AI.
Published on February 25th, 2025
Last updated on February 26th, 2025