Summer Research Projects
Summer 2025 Projects
Prof. Meisam Razaviyayn
Faculty Email: razaviya@usc.edu
Department: Industrial & Systems Engineering, Electrical Engineering, and Computer Science
Website: Meisam Razaviyayn's Website
Projects:
1. Memory-Efficient Optimization for Democratizing Large Language Model Training on Resource-Constrained Hardware: This project focuses on enhancing the accessibility of generative AI by creating resource-efficient optimization algorithms that enable large language models (LLMs) to be trained and fine-tuned on memory-limited, lower-cost hardware. Traditional LLM training demands high-performance GPUs with substantial memory, posing a significant barrier to entry. The proposed algorithms target memory reduction specifically in gradient computation, building on approaches like the Addax optimizer, which balances minimal memory usage with fast convergence. By enabling efficient model fine-tuning on affordable devices, this work facilitates broader engagement in AI research, allowing resource-constrained institutions to leverage cutting-edge LLMs and advancing sustainable AI practices by reducing the energy footprint of training large models.
2. Enhancing Differential Privacy in Large Language Models through Resource-Efficient and Zeroth-Order Optimization Techniques: This project seeks to advance privacy-preserving training of large language models (LLMs) by integrating differential privacy (DP) with resource-efficient training techniques. Although DP can theoretically safeguard private data, its implementation often degrades model performance, slows training, and imposes an additional memory burden. In this project, we explore ideas based on dimensionality reduction, sparsity, and zeroth-order optimizers to reduce the memory footprint of DP optimizers for large language models.
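For illustration only, the following is a minimal sketch of the two-point zeroth-order gradient estimate that memory-efficient optimizers of this kind build on: the loss is probed with two forward passes instead of backpropagation, so no activations need to be stored. It is not the Addax algorithm or this project's DP variant; loss_fn, batch, and the step sizes are assumed placeholders.

```python
import torch

def zeroth_order_step(model, loss_fn, batch, lr=1e-4, eps=1e-3):
    """One two-point (SPSA-style) zeroth-order update: probe the loss at
    theta + eps*z and theta - eps*z, then step along the estimated gradient.
    Only forward passes are used, so no backpropagation activations are kept."""
    params = [p for p in model.parameters() if p.requires_grad]
    zs = [torch.randn_like(p) for p in params]  # random probing direction
    with torch.no_grad():
        for p, z in zip(params, zs):
            p.add_(eps * z)
        loss_plus = loss_fn(model, batch).item()
        for p, z in zip(params, zs):
            p.sub_(2.0 * eps * z)
        loss_minus = loss_fn(model, batch).item()
        grad_scale = (loss_plus - loss_minus) / (2.0 * eps)  # directional derivative estimate
        for p, z in zip(params, zs):
            p.add_(eps * z)              # restore the original parameters
            p.sub_(lr * grad_scale * z)  # zeroth-order descent step
    return 0.5 * (loss_plus + loss_minus)
```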
Prof. Yan Liu
Faculty Email: yanliu.cs@usc.edu
Department: Thomas Lord Department of Computer Science
Projects:
1. Multimodal Large Language Model for Cancer Treatment: Cancer stands as a formidable global health challenge, emerging as one of the leading causes of death in every country in the 21st century. This research project aims to develop multimodal LLMs for cancer survival prediction, i.e., predicting the risk for cancer patients given a variety of modalities as input data. These modalities include whole slide images (high-resolution images of a patient's cancer tissue), the patient's genomic data (e.g., mRNA and miRNA), and clinical data (the patient's general information). The goal is to make the most accurate prediction of the patient's survival risk and treatment recommendation given this multimodal data. Furthermore, we will take these multimodal models a step beyond their current performance by integrating an expert's feedback, further improving the model by utilizing a domain expert's knowledge.
2. AI for Science: This project aims to develop novel AI models to enable scientific discoveries in climate science, materials science, biology, and beyond. We will work at the intersection of informed and probabilistic machine learning, integrating various forms of structured knowledge with deep learning methods to improve efficiency and generalizability. These efforts are broadly oriented toward overcoming key challenges in applying machine learning to science, with our most recent publication targeting efficient calibration of transportation models. Current projects additionally focus on methods for inverse design and discovery in materials science, climate modeling, and more.
3. Interpretable Machine Learning: Deep learning has revolutionized the field of AI and ML by achieving state-of-the-art results across practically all domains and solving problems that were unthinkable even a few decades ago. Despite these impressive results, metrics that are secondary to prediction performance have come under increased scrutiny as deep learning sees continued adoption in real-world, sensitive, or challenging application settings. Considerations such as robustness to domain shifts, invariance under adversarial attacks, algorithmic fairness, and causal reasoning have come to the forefront of many challenging AI tasks. In this lab, we are studying how to further develop interpretable and robust tools that can answer these challenging questions even in cutting-edge applications where deep learning methods are among the only successful approaches. We are exploring various approaches both from the bottom-up interpretability perspective and the top-down explainability perspective.
Prof. Lars Lindemann
Faculty Email: llindema@usc.edu
Website: Lars Lindemann's Webpage
Projects:
1. Learning-Enabled Autonomous Systems: Learning-enabled autonomous systems promise to enable many future technologies such as autonomous driving, intelligent transportation, and robotics. Accelerated by advances in machine learning and AI, there has been tremendous success in the design of learning-enabled autonomous systems. However, these exciting developments are accompanied by new fundamental challenges that arise regarding the safety and reliability of these increasingly complex control systems in which sophisticated algorithms interact with unknown dynamic environments. In this project, the student will design runtime monitors and reactive motion planning strategies for systems to safely recover from failures of learning-enabled components used within modern autonomous systems. The outcomes of the project will be implemented in a photorealistic, high-fidelity autonomous systems simulator.
2. Data-Driven Optimization for Verifiable Control Laws: Control algorithms are the backbone of autonomous systems, from mobile robots and self-driving cars to fleets of drones. Unfortunately, the design of formally verified controllers is challenging, and the complexity of modern systems often hampers the use of existing model-based techniques. For instance, the use of high-dimensional sensors, the control of large networks of systems, and the use of machine learning in autonomy pose various challenges. In this project, we aim to design data-driven optimization frameworks to learn verifiable control laws for autonomous systems. For many systems, such as self-driving cars, safe expert demonstrations in the form of system trajectories that show safe system behavior are readily available or can easily be collected. At the same time, accurate models of these systems can be identified from data or obtained from first principles. To learn verifiable control laws, the student will formulate constrained optimization problems with constraints on the expert demonstrations and the system model to learn verifiable controllers.
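As a toy illustration of this kind of demonstration-constrained fit (not the project's actual formulation), the sketch below fits a linear state-feedback gain to expert state-action data subject to input bounds. The data arrays, the bound, and the linear-feedback structure are all assumptions made for the example.

```python
import cvxpy as cp
import numpy as np

# Placeholder expert demonstrations: states X (samples x state dim) and the
# expert's control inputs U (samples x input dim).
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 4))
U = rng.standard_normal((200, 2))
u_max = 1.5  # assumed actuator limit

# Fit a linear state-feedback law u = K x that imitates the expert while
# keeping the commanded input within bounds on every demonstrated state.
K = cp.Variable((2, 4))
imitation_error = cp.sum_squares(U - X @ K.T)
constraints = [cp.abs(X @ K.T) <= u_max]
cp.Problem(cp.Minimize(imitation_error), constraints).solve()
print("Learned feedback gain K:\n", K.value)
```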
Prof. K. Leslie Gilliard-AbdulAziz
Faculty Email: kabdulaz@usc.edu
Department: Civil and Environmental Engineering
Website: The Sustainable Lab
Projects:
Materials Development for Carbon Capture: This project focuses on materials development for carbon capture, exploring innovative solutions for capturing and utilizing carbon to mitigate environmental impacts. Research areas include carbon capture and utilization, materials science, and environmental engineering.
Prof. Hossein Hashemi
Faculty Email: hosseinh@usc.edu
Projects:
Algorithmic and AI-Enabled Design of Radiofrequency Integrated Circuits (RFIC): We are working on developing algorithms that can design RFICs with performance that is superior to human designs. The algorithmically designed RFICs may not be intuitive or even comprehensible for humans. These algorithms will also expedite the RFIC design process. This research uses a combination of optimization algorithms, neural networks, artificial intelligence (AI), and radiofrequency circuit design.
Prof. Justin Haldar
Faculty Email: jhaldar@usc.edu
Website: Magnetic Resonance Engineering Laboratory
Projects:
Advancing Magnetic Resonance (MR) Imaging Technologies: Magnetic resonance (MR) imaging technologies provide unique capabilities to probe the mysteries of biological systems, and have enabled novel insights into anatomy, metabolism, and physiology in both health and disease. However, while MR imaging is decades old, is associated with multiple Nobel prizes (in physics, chemistry, and medicine), and has already revolutionized fields like medicine and neuroscience, current methods are still very far from achieving the full potential of the MR signal. Specifically, modern MR imaging methods suffer from long data acquisition times, limited signal-to-noise ratio, and various other practical and experimental factors; this limits the amount of information we can extract from living human subjects, and often precludes the use of advanced experimental methods that could otherwise increase our understanding by orders of magnitude. Our research group addresses such limitations from a signal processing perspective, developing novel methods for data acquisition, image reconstruction, and parameter estimation that combine: (1) the modeling and manipulation of physical imaging processes; (2) the use of novel constrained signal and image models; (3) novel theory to characterize signal estimation frameworks; and (4) fast computational algorithms and hardware. Our methods are often based on jointly designing data acquisition and image reconstruction methods to exploit the inherent structure that can be found within high-dimensional data, and we do our best to take full advantage of the "blessings of dimensionality" while mitigating the associated "curses." We are seeking excellent students with a strong background in signal processing, with an interest in developing methods to improve existing advanced MR methods and an interest in enabling/exploring innovative next-generation imaging approaches. More information can be found on my website.
Prof. Andrei Irimia
Faculty Email: irimia@usc.edu
Website: Andrei Irimia's Profile
Projects:
Deep Learning and Explainable AI for Neuroimage Analysis: The lab has projects available to use deep learning and explainable AI for neuroimage analysis and quantitation. We synergize magnetic resonance imaging and computed tomography with deep neural networks to identify trajectories of brain aging that can lead to neurodegenerative diseases, including Alzheimer's disease. Novel deep neural network architectures can also empower generative AI to advance the state of the art in patient-specific prognostication and disease risk assessment.
Prof. Mohammad Soleymani
Faculty Email: soleymani@ict.usc.edu
Department: Institute for Creative Technologies
Website: IHP Lab
Projects:
1. Human Motion Transfer and Synthesis: Human motion transfer is a challenging task in computer vision. This problem involves retargeting body and facial motions from one source to a target image. Such methods can be used for image stylization, editing, digital human synthesis, and possibly data generation for training perception models.
Recently, diffusion models have exhibited impressive ability in image generation. By learning from web-scale image datasets, these models provide powerful visual priors for various downstream tasks, such as image inpainting, video generation, and 3D generation.
For more info, please check our recent works (MagicPose, DIM, MagicPose4D).
2. Facial Expression Analysis: Human facial expression analysis is valuable in computer vision and human-computer interaction. It involves detecting facial action units and expressions that contribute to specific expressions and identifying the overall emotion of a given face image. Action units provide a compact representation of the face, enabling downstream tasks such as behavior analysis or even behavior generation conditioned on these representations. However, the limited availability of annotated datasets makes facial expression analysis a challenging task. With the emergence of diffusion models, known for generating high-quality, photorealistic images, we can now leverage synthetic datasets created by these models to train and improve facial expression analysis models. This project will focus on improving LibreFace, our open source tool for facial expression analysis.
Prof. Shaama Mallikarjun Sharada
Faculty Email: ssharada@usc.edu
Department: Chemical Engineering and Materials Science
Website: Sharada Lab
Projects:
1. Designing Sustainable Reaction Pathways for Greenhouse Gas Utilization: Designing sustainable reaction pathways for greenhouse gas utilization using quantum chemistry and machine learning.
2. Accelerating Quantum Chemistry Studies of Catalytic Reactions: Developing signal processing algorithms for accelerating quantum chemistry studies of catalytic reactions.
Prof. Jiachen Zhang
Faculty Email: jiachen.zhang@usc.edu
Department: Civil and Environmental Engineering
Website: Jiachen Zhang's Lab
Projects:
1. Assessing the Impact of Energy and Transportation Policies: Assess the impact of energy and transportation policies (e.g., policies promoting electric vehicles) on urban climate, air quality, health, and equity.
2. Mitigating Urban Heat Through Land Use and Property Modifications: Investigate the effects of urban land use and property modifications as a means to mitigate urban heat and lower temperatures.
3. Improving Simulations for the Long-Range Transport of Pollutants: Improve the simulation of the long-range transport of black carbon aerosols and per- and polyfluoroalkyl substances (PFAS) in atmospheric models.
4. Integrating Air Quality Modeling with Observational Data: Integrate air quality modeling output with various observational data using machine learning techniques.
5. Student-Proposed Research Ideas: Applicants are also encouraged to propose research ideas aligned with their interests that resonate with the overarching research objectives of my research group.
Prof. Zhenglu Li
Faculty Email: zhenglul@usc.edu
Website: Zhenglu Li's Group
Projects:
Development of Massively-Parallelized Computational Methods: This project focuses on developing and applying computational methods based on many-body quantum theories to investigate excited-state properties of materials, including bulk solids and two-dimensional systems. The research emphasizes first-principles approaches such as GW perturbation theory and time-dependent GW methods, aimed at understanding phenomena like electron-phonon coupling, superconductivity, and light-matter interactions.
Prof. Daniel Seita
Faculty Email: seita@usc.edu
Projects:
Developing Robots Comfortable with Contact: This project focuses on implementing algorithms that utilize Generative AI models (like GPT-4) to enable robots to make appropriate contact with obstacles in their environment. The student will create simulation environments for a robot manipulator in a cluttered setup to test these algorithms before transitioning to a physical setup that mirrors the simulations.
Prof. Constantine Sideris
Faculty Email: csideris@usc.edu
Department: Electrical and Computer Engineering
Website: ACME Research Group
Research Topics: Analog integrated circuits for biomedical and wireless applications, Computational electromagnetics for fast simulation and inverse design of nanophotonic and RF devices.
Projects:
1. Circuits: Students can design integrated circuits or printed circuit board (PCB) based circuits and experimentally validate them in the lab. The applications are flexible, likely related to biosensing or wearable devices. Students should have analog circuit design experience; familiarity with software such as Cadence and LTSpice is a plus.
2. Computational Electromagnetics: Students will develop fast techniques to simulate and inverse design high-performance electromagnetic devices. Inverse design automates the design of new devices based on performance specifications and design constraints. Proficiency in programming (C/C++ preferred) and a strong background in electromagnetics are recommended.
Prof. Evi Micha
Faculty Email: evi.micha@usc.edu
Department: Computer Science
Website: Personal Website
Research Topics: Intersection of Computer Science (AI and theory) and Economics, Computational social choice, Algorithmic fairness, Voting systems, Fair division, Matching, Strategic behaviors, AI alignment, RLHF, Multidisciplinary ideas in machine learning.
Projects:
1. Computational Social Choice and AI Alignment: Students can explore how to aggregate individual preferences into collective decisions, with applications in democratic systems and AI alignment. Projects may involve voting systems, fair division, matching, and strategic behaviors. A strong background in AI or algorithmic theory is recommended.
2. Algorithmic Fairness in AI: Students will explore fairness in AI and machine learning systems, with applications in clustering, classification, and peer review. This involves drawing on multidisciplinary ideas and requires familiarity with machine learning concepts and techniques.
Prof. Paul Bogdan
Faculty Email: pbogdan@usc.edu
Students interested in cyber-physical systems, machine learning/AI, and network science are encouraged to email.
Prof. Ruolin Li
Faculty Email: ruolinl@usc.edu
Department: Civil and Environmental Engineering
Website: https://ruolinli.me/research
Projects:
1. Re-engineering Transportation Systems for AV Integration: The enhanced controllability of autonomous vehicles opens up a myriad of possibilities. For example, autonomous vehicles can be platooned with shorter headways, which increase road capacities, or they can act on information as altruistic decision-makers, thereby improving societal benefits. Autonomous vehicles and existing transportation infrastructure are also intricately linked. The challenge lies in managing or reshaping our current infrastructure to accommodate AVs efficiently and economically. How can we adapt our roads, signaling systems, and urban planning to meet the demands of AV technology? Insights into the dynamic interaction between robotic AVs and infrastructure optimization can guide us in creating cost-effective, efficient solutions that pave the way for the future of transportation.
2. React and Interact: Harmonizing AVs with Humans: Modeling human behavior in transportation systems is crucial yet challenging. Imagine drivers mischievously cutting in front of slowly moving autonomous vehicles just for fun! The impact of autonomous vehicles is significantly influenced by human reactions and interactions. Therefore, it is essential to investigate human behaviors and design control and optimization strategies for AVs that are resilient and adaptive to such uncertainties.
Prof. Feifei Qian
Faculty Email: feifeiqi@usc.edu
Website: Robot Locomotion And Navigation Dynamics (RoboLAND) Lab
Areas of Research: Bio-inspired Robotics, Legged Locomotion, Robophysics.
Projects:
1. Obstacle-aided locomotion and navigation: This project explores how robots can exploit different features of their physical environments to achieve desired movements. Can multi-legged robots and snake-like robots intelligently collide with obstacles on purpose to robustly move towards desired directions? Can a robot effectively turn itself by jamming the soft sand with its tail? In this project we will perform robot locomotion experiments to understand the complex interactions between robots and their environments, and use these interaction models to create novel strategies that can enable effective locomotion and navigation through challenging environments.
2. Understanding the world through every step: This project focuses on developing robots that can use their legs as soil or mud sensors to help geoscientists collect and interpret information at high spatial and temporal resolution. To achieve this, we will build robot legs that can sensitively “feel” the responses of desert sand or near-shore mud. We will design different interaction-based sensing protocols for the robot legs, and test these protocols in lab experiments. Once the sensing capabilities are developed and tested, we will take the robots on field trips, where the robots work alongside human scientists and learn how humans make sampling decisions and adapt exploration strategies based on dynamic incoming measurements. Going forward, this understanding will help equip our robots with cognitive “reasoning” capabilities to flexibly support human teammates’ scientific objectives during collaborative exploration missions.
Prof. Yue Wang
Faculty Email: yue.w@usc.edu
Website: https://yuewang.xyz/
Projects:
Vision-Language-Action Models for Embodied Intelligence: Recent vision-language-action models, e.g., RT-2, OpenVLA, LLARVA, etc., have demonstrated remarkable performance in robotic manipulation. However, they directly output embodiment-specific robotic actions, making them hard to generalize to different environments and embodiments. In this project, we propose the Large Trajectory Model (LTM), which marries the vision-language-action models with an object-centric and embodiment-agnostic action representation. This approach benefits from the commonsense knowledge embedded in vision-language models while enabling generalizable robotic manipulation across various environments and robotic platforms.
The Large Trajectory Model (LTM) is a point-wise vision-language-action model with an object-centric action representation. Specifically, the LTM can be trained on large-scale robotic datasets like Droid and RT-X, large-scale human-object interaction datasets like Ego-4D, or even Internet videos collected from YouTube. The Student Researcher (SR) will be responsible for designing and implementing the LTM architecture and conducting extensive experiments to validate its effectiveness across different robotic embodiments and environments. This scope is appropriate for an SR within the given time frame, as it leverages existing datasets and foundational models for scalable training, allowing the SR to focus on innovation and empirical evaluation.
Prof. Sai Praneeth Karimireddy
Faculty Email: karimire@usc.edu
Website: Sai Praneeth Karimireddy
Co-Host Website: Swabha Swayamdipta
Projects:
1. Concept-Level Uncertainty Quantification in LLM Outputs: Current methods of quantifying uncertainty in LLM outputs often rely on token-level probabilities (e.g., softmax scores), after normalizing for the sentence length. However, these are potentially flawed. For instance, rephrasing a sentence without changing its meaning can alter the softmax scores, thus affecting the perceived uncertainty. Can we derive scores at a “concept level” which are robust to rephrasing? If our LLM is responding to a query made by a user, the answer contains “facts” and “reasoning”. Our score should directly target the specific “facts” being claimed and the reasoning made. This project will require familiarity with LLMs and prompt engineering.
Hosts: Sai Praneeth Karimireddy and Swabha Swayamdipta
2. Unified Uncertainty Quantification for Multimodal Models: Current methods for quantifying uncertainty in language model outputs rely on token-level probabilities (e.g., softmax scores) normalized by sentence length to be able to compare outputs of different lengths. However, in multimodal models that generate outputs across various modalities (text, images, audio), how can we effectively combine uncertainties from different modalities, ensuring that the uncertainty estimates are comparable regardless of the mixture of modalities involved?
Hosts: Sai Praneeth Karimireddy and Swabha Swayamdipta
3. Realistic Benchmarks for Public-Private Learning: Current methods in privacy-preserving machine learning often involve pretraining models on large public datasets before fine-tuning them on private data using techniques like Differentially Private Stochastic Gradient Descent (DP-SGD). However, many evaluations are flawed because they use highly similar public and private datasets (e.g., ImageNet and CIFAR), which don't reflect real-world scenarios. In practice, private data (such as confidential healthcare records) can be significantly different from publicly available data (like internet images). How does the performance of private learning change as we increase the dissimilarity of the public pre-training dataset?
Hosts: Sai Praneeth Karimireddy and Swabha Swayamdipta
4. Private Outlier Detection using Foundation Models: Traditional outlier detection methods often struggle with high-dimensional, complex data and may not capture subtle irregularities. This project proposes using Large Language Models (LLMs) to interpret and model intricate patterns within the data to identify outliers more effectively. By harnessing the nuanced understanding of LLMs, we aim to develop a novel outlier detection approach that works well even when data is privacy-sensitive and distributed, such as in federated learning environments.
Hosts: Sai Praneeth Karimireddy and Swabha Swayamdipta
5. Understanding Model Merging in Multi-Task Learning: Recent research has shown that the capabilities of different machine learning models, each trained on different tasks, can be combined by simply averaging their parameters, a process known as model merging (a minimal sketch of this averaging step appears after this project list). Remarkably, this merging approach for creating a multi-task model can even outperform training a model from scratch on the merged dataset. This project aims to understand and explain this phenomenon by exploring the theoretical underpinnings and practical implications of model merging in multi-task learning. This project will require some familiarity with ML theory as well as modern deep learning.
Hosts: Sai Praneeth Karimireddy
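For concreteness, here is a minimal sketch of the parameter-averaging step referenced in project 5 above. It assumes the models share an architecture; the uniform weights and the helper name are illustrative choices rather than a prescribed implementation.

```python
import copy

def merge_models(models, weights=None):
    """Merge same-architecture models (e.g., PyTorch nn.Modules) by
    (weighted) averaging of their parameters."""
    if weights is None:
        weights = [1.0 / len(models)] * len(models)  # uniform averaging by default
    merged = copy.deepcopy(models[0])
    state_dicts = [m.state_dict() for m in models]
    merged_state = merged.state_dict()
    for name in merged_state:
        if merged_state[name].is_floating_point():
            merged_state[name] = sum(w * sd[name] for w, sd in zip(weights, state_dicts))
    merged.load_state_dict(merged_state)
    return merged

# Usage sketch: combine two task-specific fine-tunes of the same base model.
# multi_task_model = merge_models([task_a_model, task_b_model])
```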
Prof. Zhaoyang Fan
Faculty Email: Zhaoyang.Fan@med.usc.edu
Department: Radiology and Biomedical Engineering
Website: https://sites.usc.edu/fan-mri-lab/
Projects:
Deep Learning-Assisted Detection and Segmentation of Brain Metastasis in MRI: Brain metastases (brain mets) are the most common type of brain tumor in adults, often arising from cancers such as lung, breast, and melanoma. Early and accurate detection of brain metastases is critical for timely treatment and improved patient outcomes. Conventional methods for detecting brain metastases rely heavily on manual interpretation of MRI scans, which is time-consuming and prone to variability among radiologists. This project aims to develop a robust and automated deep learning approach for detecting brain metastases from MRI scans, with the goal of enhancing diagnostic precision and efficiency.
Prof. Ishwar K. Puri
Faculty Email: moolayad@usc.edu
Department: Aerospace and Mechanical Engineering and Biomedical Engineering
Website: https://viterbi.usc.edu/directory/faculty/Puri/Ishwar
Projects:
1. Machine learning for drug combination optimization in sonodynamic therapy: The objective of this project is to use machine learning techniques to find the optimal drug combinations for sonodynamic therapy in brain and prostate cancer. The predicted drug combinations will be experimentally validated in vitro using an in-house cancer spheroid printing platform. The selected candidate will also receive training on mammalian cell culture, cancer spheroid printing, and low-intensity ultrasound stimulation.
2. New materials for electrochemical sensing: The objective of this project is to advance the development of novel nanomaterials specifically designed for the precise sensing of critical molecules. The selected candidate will receive comprehensive training in the synthesis and characterization of nanomaterials employing state-of-the-art instrumentation. Additionally, the candidate will acquire specialized knowledge and technical skills in the domain of electrochemical sensors, enhancing their expertise in this innovative area of research.
3. Anodes for lithium-ion batteries: The project aims to enhance the anode capacities of lithium-ion batteries. Current batteries have limited capacities, and there is a need to significantly improve them to meet industry requirements. The candidate will focus on developing new materials and will utilize state-of-the-art facilities at USC for characterization. Additionally, the candidate will receive training in developing coin cells and performing detailed characterization on them.
Prof. Seo Jin Park
Faculty Email: seojinpa@usc.edu
Personal Website: https://seojinpark.net/
Lab Website: https://nsl.usc.edu/
Projects:
1. Flash burst inference: Nowadays, AI models are prevalent in important control applications where both the accuracy and latency of inference matter for safety. For example, autonomous cars need to make decisions that are not only highly accurate but also timely. Due to the limited computing power of edge devices (cars, robots, drones, etc.), there is a significant limit on model sizes for these important applications. In this project, we will explore how to augment these low-accuracy on-edge inferences with high-accuracy inferences on the cloud. When there is a sudden need for high-accuracy inference, our system aims to finish the computationally intensive high-accuracy inference by harnessing hundreds of cloud GPUs in parallel. This project will explore the potential of many parallelization techniques, including sequence parallelism via speculative decoding and tensor parallelism.
2. Efficient multi-modality LLM inference: This project aims to optimize multi-modal Large Language Model (LLM) inference through asynchronous execution of multiple encoders and LLMs. We will explore three key aspects: diverse parallelism configurations for various models, efficient scheduling mechanisms for multiple tasks, and strategies to efficiently handle inputs of varying sizes. By addressing these challenges, we seek to enhance the performance and scalability of multi-modal inference systems, leading to more efficient processing of input data across multiple modalities.
3. Efficient and low latency XR compute offloading: Computation can be a significant source of latency for resource-constrained XR headsets, as XR devices often lack sufficient compute capabilities to run compute-intensive functions locally in real-time. Sharing a powerful computing node between many XR sessions can provide lower latency than provisioning dedicated compute nodes with slower hardware, for the same cost. However, shared compute resources introduce another source of high tail latency, server overload. To prevent server overload from degrading user-perceived latency, the student will explore two methods: (1) device-edge congestion control to dynamically adjust the computation offloading rate based on server load feedback, and (2) seamless computation migration to quickly shift overloaded computations to less congested servers.
4. Resource-aware fast migration over RDMA: Cloud computing is rapidly shifting toward FaaS-style, fine-grained computation units, enabling new possibilities for distributed systems. However, challenges like maintaining quality of service and mitigating delays from server overloads persist, especially in multi-tenant environments. Our project explores utilizing RDMA for fast, intelligent migration of computations across servers to address these issues. By considering resource dependencies, such as file availability, we aim to devise a robust solution for minimizing latency and improving overall system efficiency.
Prof. Kallirroi Georgila
Faculty Email: kgeorgila@ict.usc.edu
Department: Computer Science Department & Institute for Creative Technologies
Website: https://kgeorgila.github.io/
Project:
Exploring synergistic approaches to reinforcement learning and large language models for natural language dialogue modeling: This project seeks to combine the use of reinforcement learning (RL) and large language models (LLMs) in the context of natural language dialogue systems. It will be explored how RL can help LLMs generate dialogue system outputs that are appropriate for a given dialogue context, personalized, and/or convey emotions. It will also be investigated how LLMs can serve as a means to explore various paths for optimal RL-based dialogue system policy learning (e.g., as simulated users), and how combining RL and LLMs can potentially help make dialogue system decisions more interpretable. The visiting students can also work on other topics related to natural language dialogue processing (including spoken language processing).
Prof. Erdem Biyik
Faculty Email: erdem.biyik@usc.edu
Department: Thomas Lord Department of Computer Science
Projects:
1. Modeling human interventions and corrections: This project will investigate human interventions and corrections to robots. Specifically, the goal is to develop a computational model of when and how humans decide to intervene in the operation of a robot. The applications include tabletop manipulation, autonomous driving, and possibly large language models.
2. Uncertainty modeling and active querying for RLHF: Current implementations of reinforcement learning from human feedback (RLHF) follow the learned policy to generate new queries for the human. Recent work in statistical learning has shown that it is possible to form Bayesian distributions over neural networks using linear algebraic methods in an efficient way. The goal of this project is to investigate whether these distributions (over reward models in RLHF) can be used for uncertainty modeling and active querying for RLHF; a minimal sketch of this idea appears after this project list. The applications include tabletop manipulation and simulated dynamical environments where RLHF has been proven successful.
3. Human gaze as an inductive bias for robot learning: Human gaze contains a lot of information about the task we are interested in, e.g., what is important and what is not important in the task. In this project, we will explore whether we can use human gaze as an inductive bias for reinforcement learning and imitation learning when robot data is accompanied by gaze data that we will collect using a gaze tracker equipment. The applications include autonomous driving, autonomous gameplay and robotic manipulation.
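As a rough illustration of the Bayesian reward-model idea in project 2 above (not the project's actual method): with a linear reward head over fixed features, the posterior over the head's weights is available in closed form, and candidate query pairs can be ranked by how uncertain the posterior is about their reward difference. For simplicity the sketch assumes scalar reward labels rather than the preference comparisons typically used in RLHF; the feature dimensions, prior scale, and noise level are placeholders.

```python
import numpy as np

def reward_posterior(Phi, y, prior_var=1.0, noise_var=0.1):
    """Closed-form Gaussian posterior over linear reward weights w, given
    features Phi (n x d) and scalar reward labels y (n,)."""
    d = Phi.shape[1]
    precision = np.eye(d) / prior_var + Phi.T @ Phi / noise_var
    cov = np.linalg.inv(precision)
    mean = cov @ Phi.T @ y / noise_var
    return mean, cov

def most_informative_pair(candidate_pairs, cov):
    """Active querying: choose the pair (phi_a, phi_b) whose predicted reward
    difference has the largest posterior variance."""
    def diff_variance(pair):
        d = pair[0] - pair[1]
        return float(d @ cov @ d)
    return max(candidate_pairs, key=diff_variance)

# Usage sketch with placeholder data.
rng = np.random.default_rng(0)
Phi, y = rng.standard_normal((50, 8)), rng.standard_normal(50)
mean, cov = reward_posterior(Phi, y)
pairs = [(rng.standard_normal(8), rng.standard_normal(8)) for _ in range(20)]
query = most_informative_pair(pairs, cov)
```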
Prof. Mengyuan Li
Faculty Email: mli49061@usc.edu
Department: Computer Science Department
Lab Website: https://mengyuan-l.github.io
Project:
Optimizing the performance of machine learning (ML) systems and enhancing real-time monitoring capabilities: This project focuses on improving the performance of ML systems by exploring various parallelism techniques on GPUs, optimizations at the GPU kernel level, and leveraging GPU performance counters for analysis. Applicants should have a solid understanding of these areas to contribute effectively to the project.
Prof. Ruishan Liu
Faculty Email: ruishanl@usc.edu
Department: Computer Science Department
Areas of Research: Machine Learning and Biomedical AI
Lab Website: https://viterbi-web.usc.edu/~ruishanl/
Projects:
1. Large Language Models for Longitudinal Mental Health Analysis: This project explores the use of large language models to analyze longitudinal mental health data collected over time. The aim is to uncover patterns, predict mental health outcomes, and provide personalized intervention recommendations. The research will focus on improving the interpretability and trustworthiness of model predictions for clinical use.
2. Machine Learning for Medical Data Distillation: This project aims to develop machine learning techniques to distill complex medical datasets into essential patterns and actionable insights. The focus is on improving the interpretability of distilled information for medical practitioners and researchers. Applications include simplifying diagnostics, enhancing treatment predictions, and accelerating biomedical research.
3. Reinforcement Learning for Precision Medicine: This project applies reinforcement learning to design personalized treatment strategies that adapt to patient-specific needs. The aim is to optimize clinical decisions through data-driven models that evolve with changing patient responses. Research will address challenges such as data sparsity and balancing safety with exploration in medical settings.
4. AI for Synthesizable Drug Discovery: This project focuses on discovering new drug candidates using AI, with a specific emphasis on ensuring synthesizability. The research aims to accelerate the drug discovery pipeline while maintaining chemical feasibility for real-world pharmaceutical development. Key challenges include predicting molecular properties and guiding synthesis routes efficiently.
Prof. Danny JJ Wang
Faculty Email: dannyjwa@usc.edu
Department: Neurology and Radiology
Lab Website: Laboratory of Functional MRI Technology (LOFT)
Project:
Development of innovative and noninvasive MRI technologies for brain assessment: The Laboratory of Functional MRI Technology (LOFT) focuses on creating advanced MRI methods to assess the structure, function, and connectivity of the live human brain. The aim is to achieve (sub)millimeter spatial resolution and millisecond temporal resolution to observe human brain activity dynamically in 4D space. The lab collaborates closely with clinicians, neuroscientists, and data scientists to translate these novel MRI technologies for mapping the human brain in both health and disease.
Summer 2024 Projects
Prof. Xiang Ren
Faculty Email: xiangren@usc.edu
Department: Computer Science
Website: Intelligence and Knowledge Discovery Research Lab
Projects:
1. Cultural Fairness Challenge for Large Language Models: Large language models (LLMs) have garnered significant attention due to their far-reaching implications. For instance, ChatGPT can effectively respond to a wide range of inquiries and maintain human-like conversations. Recently, LLMs have also become able to identify explicitly sensitive or offensive requests. ChatGPT’s versatile skills are attributed to its training on a large corpus of human-written text and intentional programming to deny inappropriate requests. This, however, begs a few questions: Does ChatGPT learn from any implicit human biases present in the training process? If so, how can we identify and extract these biases? In this project, we look to 1) create a benchmark dataset by scaling up prompt-based instances of sociocultural ambiguity, 2) compare the performance of state-of-the-art LLMs, and 3) propose prompt-based solutions for mitigating implicit cultural bias.
2. Systematic generation of long-tailed knowledge statements for large language models: Since large language models (LLMs) have approached human-level performance on many tasks, it has become increasingly harder for researchers to find tasks that are still challenging to the models. Failure cases usually come from the long-tail distribution, i.e., data to which an oracle language model would assign a probability on the lower end of its distribution. Systematically finding evaluation data in the long-tail distribution is important, but current methodologies such as prompt engineering or crowdsourcing are insufficient because coming up with long-tailed examples is also hard for humans due to our cognitive biases. In this project, we look to: (1) build an algorithmic search process to systematically generate challenging knowledge statements for large language models like GPT-4; (2) use the large-scale generated dataset to build a smaller language model that is able to reason better over such long-tailed knowledge than GPT-4.
Prof. Vatsal Sharan
Faculty Email: vsharan@usc.edu
Department: Computer Science
Website: https://vatsalsharan.github.io/
Projects:
1. Area of Research: ML Theory: The student will explore foundational questions regarding machine learning. A focus is on understanding computational-statistical tradeoffs: when computational efficiency might be at odds with statistical requirements (the data needed to learn). Recent work has opened much uncharted territory, particularly with respect to the role of memory in learning, which we will explore.
Prof. Feifei Qian
Faculty Email: feifeiqi@usc.edu
Department: Electrical and Computer Engineering
Research Lab Website: Robot Locomotion And Navigation Dynamics (RoboLAND)
Projects:
1. Obstacle-aided locomotion and navigation: This project explores how robots can exploit different features of their physical environments to achieve desired movements. Can multi-legged robots and snake-like robots intelligently collide with obstacles on purpose to robustly move towards desired directions? Can a robot effectively turn itself by jamming the soft sand with its tail? In this project we will perform robot locomotion experiments to understand the complex interactions between robots and their environments, and use these interaction models to create novel strategies that can enable effective locomotion and navigation through challenging environments.
2. Understanding the world through every step: This project focuses on developing robots that can use their legs as soil or mud sensors to help geoscientists collect and interpret information at high spatial and temporal resolution. To achieve this, we will build robot legs that can sensitively “feel” the responses of desert sand or near-shore mud. We will design different interaction-based sensing protocols for the robot legs, and test these protocols in lab experiments. Once the sensing capabilities are developed and tested, we will take the robots on field trips, where the robots work alongside human scientists and learn how humans make sampling decisions and adapt exploration strategies based on dynamic incoming measurements. Going forward, this understanding will help equip our robots with cognitive “reasoning” capabilities to flexibly support human teammates’ scientific objectives during collaborative exploration missions.
Prof. Meisam Razaviyayn
Faculty Email: razaviya@usc.edu
Department: Industrial & Systems Engineering, Electrical Engineering, and Computer Science
Research Lab Website: https://realai.usc.edu
Projects:
1. Training Private Generative Models From a Combination of Public, Synthetic, and Private Data: Training generative models on individuals' data can pose the risk of the model memorizing and outputting training data, exposing sensitive information, or violating copyrights. To address this concern, Differential Privacy (DP) has emerged as a solution, ensuring that no malicious actor can glean excessive details about any specific individual's data. In particular, DP guarantees that removing any individual's training data does not change the output significantly. DP has been utilized in training small- and medium-size models at various companies (such as Google and Apple). However, despite these successes, a significant obstacle to the broader adoption of DP in training large generative models and LLMs is the reduced accuracy of DP models compared to their non-private counterparts. To bridge this accuracy gap between DP and non-private models, one promising approach involves leveraging public data (or models trained on public data), which is devoid of privacy issues. Additionally, synthetic data generation offers an alternative means of obtaining such public data. This project develops and explores various methodologies for training private generative models from a combination of private, public, and synthetic datasets.
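To make the DP mechanism concrete, the following is a minimal sketch of the per-example gradient clipping and Gaussian noise addition at the heart of DP-SGD, the source of the accuracy gap discussed above. It is an illustration, not this project's training pipeline: loss_fn and batch are assumed placeholders, the loop over examples would be vectorized in practice, and a real implementation also tracks the privacy budget with an accountant.

```python
import torch

def dp_sgd_step(model, loss_fn, batch, lr=1e-3, clip_norm=1.0, noise_multiplier=1.0):
    """One DP-SGD-style update: clip each example's gradient to a fixed norm,
    sum the clipped gradients, add Gaussian noise, then take a step."""
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]
    for example in batch:  # batch is assumed to be an iterable of single examples
        model.zero_grad()
        loss_fn(model, example).backward()
        grads = [p.grad.detach().clone() for p in params]
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = (clip_norm / (total_norm + 1e-12)).clamp(max=1.0)  # per-example clipping
        for s, g in zip(summed, grads):
            s.add_(scale * g)
    with torch.no_grad():
        for p, s in zip(params, summed):
            noise = torch.randn_like(s) * noise_multiplier * clip_norm  # Gaussian mechanism
            p.sub_(lr * (s + noise) / len(batch))
```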
Prof. Michelle Povinelli
Faculty Email: povinell@usc.edu
Department: Electrical and Computer Engineering
Research Lab Website: Povinelli Nanophotonics Laboratory
Description:
Our lab works on cutting-edge research in Nanophotonics, including the development of new materials for infrared detection and thermal regulation. Students will gain experience in electromagnetic simulation and infrared measurements.
Prof. Souti Chattopadhyay
Faculty Email: schattop@usc.edu
Department: Computer Science
Research Lab Website: ADAPTIVE COMPUTING EXPERIENCES (ACE) LAB
Projects:
1. Exploring the Transformative Influence of AI-Powered Coding Assistants on Software Development: AI-assisted coding tools have transformed the workflow of software development. From the initial stages of coding to final deployment, AI-enabled tools have significantly impacted how developers can efficiently conduct the stages of the development life cycle. In this project, we aim to study how AI tools have changed the process of refactoring, debugging, code review, documentation, and other development activities. We aim to rethink the development workflow, reimagine the capabilities of development tools, and re-evaluate the skills the next generation of developers needs to build better software.
Prof. Yu-Tsun Shao
Faculty Email: yutsunsh@usc.edu
Department: Chemical Engineering and Materials Science
Research Lab Website: Shao Materials Research Group
Projects:
1. Reconstruction and analysis of multi-modal data acquired by Scanning Transmission Electron Microscopy (STEM): Structural and chemical information can be acquired simultaneously in the STEM at sub-Angstrom resolution, offering unprecedented precision and resolution for understanding the materials’ structure-property relations.
2. Iterative reconstruction of ptychographic images for achieving super-resolution: Electron ptychography currently holds the Guinness World Record for the highest-resolution microscope, and we are actively working on improving this by advancing the experimental acquisition and reconstruction algorithms to beat the current record.
3. Machine learning methods for tackling large datasets: Four-dimensional STEM (4D-STEM) acquires 2D diffraction patterns at each position in real space, yielding rich information about the materials but also large datasets of >100 GB. Applications of machine learning algorithms help us tackle this challenge and retrieve rich structural information about the materials, such as strain, chirality, polarization, and magnetic or electric fields.
Prof. Weihang Wang
Faculty Email: weihangw@usc.edu
Department: Computer Science
Website: https://weihang-wang.github.io
Description:
Dr. Weihang Wang leads research on program analysis and software testing of web applications at USC. The group is broadly interested in software engineering, software security, machine learning, and computer systems. Our vision is to build testing and analysis techniques for improving the reliability, security, and efficiency of complex software systems. Some of our ongoing projects include reverse engineering, static/dynamic bug detection, program analysis for WebAssembly, attack investigation and detection, compiler testing, and performance profiling. We are excited to work with motivated applicants who (1) are committed to top-notch research, (2) have a solid background in system programming, and (3) have experience with building large software systems. Applicants with research experience in software engineering, security, or compilers will be given priority.
Prof. Kallirroi Georgila
Faculty Email: kgeorgila@ict.usc.edu
Department: Computer Science and Institute for Creative Technologies
Research Lab Website: Natural Language Dialogue group
Projects:
1. Exploring synergistic approaches to reinforcement learning and large language models for natural language dialogue modelling: This project seeks to combine the use of reinforcement learning (RL) and large language models (LLMs) in the context of natural language dialogue systems. It will be explored how RL can help LLMs generate dialogue system outputs that are appropriate for a given dialogue context, personalized, and/or convey emotions. It will also be investigated how LLMs can serve as a means to explore various paths for optimal RL-based dialogue system policy learning (e.g., as simulated users), and how combining RL and LLMs can potentially help make dialogue system decisions more interpretable. The visiting students can also work on other topics related to natural language dialogue processing (including spoken language processing).
Prof. Peter Yingxiao Wang
Faculty Email: ywang283@usc.edu
Department: Biomedical Engineering
Research Lab Website: Wang Lab
Description:
Chimeric antigen receptor (CAR) T cells show potential as paradigm-shifting therapeutic agents for cancer treatment by eradicating chemotherapy-resistant cancer cells. However, the potential for life-threatening activity against normal, nonmalignant cells (on-target/off-tumor effect) is a major problem that must be overcome to improve the chances of CAR-based immunotherapy for solid tumors. In my lab, we employ synthetic biology and genetic engineering to reprogram immune cells so that they can be controlled by ultrasound remotely and non-invasively for cancer immunotherapy. This spatial and temporal control of CAR expression may not only provide a safety "on" switch but also provide rest periods to the CAR T cells, which is increasingly recognized to reduce CAR T exhaustion and improve cell activity. Therefore, this approach may not only restrict CAR T cell activity to the tumor, but also enhance it within the tumor.
Prof. Andreas Molisch
Faculty Email: molisch@usc.edu
Department: Electrical and Computer Engineering
Research Lab Website: Wireless Devices and Systems Group
Projects:
1. Deep Wi-Fi Sensing for Smart Environments: We are excited to introduce a research project that combines Wi-Fi sensing with deep learning to create innovative and efficient smart environments. This project offers students a unique opportunity to work at the intersection of wireless technology and artificial intelligence, developing solutions for various applications, including indoor localization, occupancy detection, energy efficiency, and security enhancement. Join our team to contribute to creating safer and more sustainable spaces.
2. Communication Efficient Federated Learning in Wireless Edge Networks for Video Caching: Federated learning (FL) is a potential solution to many machine learning (ML) problems where clients wish to keep their local data private. In FL, the central server broadcasts the ML model to distributed clients. These clients then perform local training on their local datasets and offload their trained models to the server. While this provides a privacy-preserving learning solution, since the data stays on the local devices, the communication overhead of these model exchanges can be enormous when the links between the server and the clients are wireless. Moreover, the local training and uplink offloading time can significantly slow down the learning process when the clients are typical wireless user equipment (UE) with limited computation and battery power. It is necessary to orchestrate the resources in such a resource-constrained environment. As such, we will seek a communication-efficient FL solution with wireless clients for video caching applications in this project.
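For reference, the following is a minimal sketch of the broadcast/local-train/aggregate round described above (plain FedAvg-style averaging, not the communication-efficient scheme the project will develop); clients, local_train, and the weighting are assumed placeholders.

```python
import copy

def federated_round(global_model, clients, local_train, weights=None):
    """One FedAvg-style round with PyTorch-style models: broadcast the global
    model, let each client train a local copy on its own data, then average
    the returned parameters (e.g., weighted by local dataset size)."""
    if weights is None:
        weights = [1.0 / len(clients)] * len(clients)
    client_states = []
    for client_data in clients:
        local_model = copy.deepcopy(global_model)       # "broadcast" to the client
        local_train(local_model, client_data)           # raw data never leaves the client
        client_states.append(local_model.state_dict())  # only model parameters are uploaded
    new_state = global_model.state_dict()
    for name in new_state:
        if new_state[name].is_floating_point():
            new_state[name] = sum(w * s[name] for w, s in zip(weights, client_states))
    global_model.load_state_dict(new_state)
    return global_model
```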
Prof. Somil Bansal
Faculty Email: somilban@usc.edu
Department: Electrical and Computer Engineering
Research Lab Website: Safe and Intelligent Autonomy Lab
Projects:
1. Detecting and Mitigating Anomalies in Vision-Based Controllers: Autonomous systems, such as self-driving cars and drones, have made significant strides in recent years by leveraging visual inputs and machine learning for decision-making and control. Despite their impressive performance, these vision-based controllers can make erroneous predictions when faced with novel or out-of-distribution inputs. Such errors can cascade to catastrophic system failures and compromise system safety, as exemplified by recent self-driving car accidents. In this project, we aim to design a run-time anomaly monitor to detect and mitigate such closed-loop, system-level failures. Specifically, we leverage a reachability-based framework to stress-test the vision-based controller offline and mine its system-level failures. This data is then used to train a classifier that is leveraged online to flag inputs that might cause system breakdowns. The anomaly detector highlights issues that transcend individual modules and pertain to the safety of the overall system. We will also design a fallback controller that robustly handles these detected anomalies to preserve system safety. In our preliminary work, we validate the proposed approach on an autonomous aircraft taxiing system that uses a vision-based controller for taxiing. Our results show the efficacy of the proposed approach in identifying and handling system-level anomalies, outperforming methods such as prediction error-based detection and ensembling, thereby enhancing the overall safety and robustness of autonomous systems. Preliminary experiment videos can be found at: https://phoenixrider12.github.io/failure_mitigation
2. Vision-based navigation in new environments: Autonomous robot navigation is a fundamental and well-studied problem in robotics. However, developing a fully autonomous robot that can navigate in a priori unknown environments is difficult due to challenges that span dynamics modeling, onboard perception, localization and mapping, trajectory generation, and optimal control. Classical approaches such as the generation of a real-time globally consistent geometric map of the environment are computationally expensive and confounded by texture-less, transparent or shiny objects, or strong ambient lighting. End-to-end learning can avoid map building, but is sample inefficient. Furthermore, end-to-end models tend to be system-specific. In this project, we will explore modular architectures to operate autonomous systems in completely novel environments using onboard perception sensors. These architectures use machine learning for high-level planning based on perceptual information; this high-level plan is then used for low-level planning and control via leveraging classical control-theoretic approaches. This modular approach enables the conjoining of the best of both worlds: autonomous systems learn navigation cues without extensive geometric information, making the model relatively lightweight; the inclusion of the physical system structure in learning reduces sample complexity relative to pure learning approaches. Our preliminary results indicate a 10x improvement in sample complexity for wheeled ground robots. Our hypothesis is that this gap will only increase further as the system dynamics become more complex, such as for an aerial or a legged robot, opening up new avenues for learning navigation policies in robotics. Preliminary experiment videos can be found at: https://smlbansal.github.io/LB-WayPtNav/ and https://smlbansal.github.io/LB-WayPtNav-DH/.
3. Safe assurances for learning and vision-driven robotic systems: Machine learning-driven vision and perception components make up a core part of the navigation and autonomy stacks for modern robotic systems. On the one hand, they enable robots to make intelligent decisions in cluttered and a priori unknown environments based on what they see. On the other hand, the lack of reliable tools to analyze the failures of learning-based vision models makes it challenging to integrate them into safety-critical robotic systems, such as autonomous cars and aerial vehicles. In this project, we will explore designing a robust control-based safety monitor for visual navigation and mobility in unknown environments. Our hypothesis is that rather than directly reasoning about the accuracy of the individual vision components and their effect on robot safety, we can design a safety monitor for the overall system. This monitor detects safety-critical failures in the overall navigation stack (e.g., due to a vision component itself or its interaction with the downstream components) and provides safe corrective action if necessary. The latter is more tractable because the safety analysis of the overall system can be performed in the state-space of the system, which is generally much lower-dimensional than the high-dimensional raw sensory observations. Preliminary results on simulated and real robots demonstrate that our framework can ensure robot safety in various environments despite the vision component errors (the videos of some of our preliminary experiments can be found at https://smlbansal.github.io/website-safe-navigation/). In this project, we will extend the proposed framework to more complex and high-dimensional robotic systems, such as drones and legged robots. Other than ensuring robot safety, we will also explore using the proposed framework to mine critical failures of the system at scale, and using this failure dataset to improve the robot perception over time.
Prof. Shaama Mallikarjun Sharada
Faculty Email: ssharada@usc.edu
Department: Chemical Engineering and Materials Science
Research Lab Website: Sharada Lab
Description:
This research group uses computational chemistry and machine learning to find energy-efficient pathways for utilizing carbon dioxide as a source of fuels and chemicals. Please take a look at the above website for more information.
Prof. Daniel Seita
Faculty Email: seita@usc.edu
Department: Computer Science
Research Lab Website: SLURM Lab at USC
Projects:
1. Vision-Language Models for Deformable Object Manipulation: The last few years have seen incredible growth and interest in vision-language models (VLMs) such as CLIP and GPT-4V. In parallel, the last few years have also seen huge growth in robotic manipulation, such as deformable object manipulation, which is inherently challenging due to issues with representing the configuration of the objects and reasoning about complex dynamics. Many recent methods for deformable object manipulation have used imitation learning or reinforcement learning. In this project, we will use pre-trained VLMs for deformable object manipulation, to see if we can sidestep the process of requiring demonstrations or reinforcement learning. We will study how to use VLMs in a way that takes advantage of their strengths while enabling high-precision deformable object manipulation tasks that involve manipulating rope, cloth, fluids, and other challenging objects.
Prof. Erdem Biyik
Faculty Email: erdem.biyik@usc.edu
Department: Computer Science
Research Lab Website: USC Lira Lab
Projects:
1. Imitation learning from control-constrained demonstrations: This project will explore efficient ways of performing imitation learning and/or inverse reinforcement learning when the expert demonstrations come from a constrained control interface, e.g., due to limitations of the controller itself or the suboptimality of the human expert. The applications include tabletop manipulation and autonomous driving.
2. Self-supervised improvements over reinforcement learning with large pre-trained models: This project will explore the use of large pre-trained models (e.g., LLMs, VLMs, and VQA models) for creating a self-supervision signal in reinforcement learning. The applications include, but are not limited to, tabletop manipulation.
3. Active querying for reinforcement learning from human feedback: Current implementations of reinforcement learning from human feedback (RLHF) follow the learned policy to generate new queries for the human. In this project, we will explore alternative query-generation strategies to improve the data efficiency of training; a sketch of one such strategy appears after this list. The work will involve implementing active learning techniques.
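For project 3, the sketch below illustrates one hypothetical active querying rule: maintain an ensemble of reward models and ask the human about the trajectory pair whose preference the ensemble is most uncertain about, rather than only querying pairs rolled out from the current policy. The linear reward features and entropy criterion are illustrative assumptions, not the lab's specific method.

```python
# Hypothetical sketch of uncertainty-based query selection for preference-based RLHF.
import numpy as np

def preference_prob(reward_weights, traj_a, traj_b):
    """Bradley-Terry style probability that traj_a is preferred over traj_b under one
    reward model (here: a linear reward on trajectory feature vectors)."""
    ra, rb = reward_weights @ traj_a, reward_weights @ traj_b
    return 1.0 / (1.0 + np.exp(rb - ra))

def select_query(reward_ensemble, candidate_pairs):
    """Pick the trajectory pair whose preference the ensemble is most uncertain about,
    instead of simply querying pairs sampled from the current policy."""
    def uncertainty(pair):
        p = np.mean([preference_prob(w, *pair) for w in reward_ensemble])
        return -(p * np.log(p + 1e-8) + (1 - p) * np.log(1 - p + 1e-8))  # binary entropy
    return max(candidate_pairs, key=uncertainty)
```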
Prof. Andrei Irimia
Faculty Email: irimia@usc.edu
Department: Gerontology, Quantitative & Computational Biology, Biomedical Engineering and Neuroscience
Research Lab Website: The Irimia Lab
Description:
Please take a look at the above website to identify projects and research areas which you may be interested in.
Prof. Jieyu Zhao
Faculty Email: jieyuz@usc.edu
Department: Computer Science
Website: https://jyzhao.net
Projects:
1. Large language models (LLMs) offer remarkable capabilities, and people are incorporating these models into their daily lives more than ever. However, LLMs can inadvertently learn and perpetuate biases present in the training data, potentially leading to biased outputs. Addressing bias in LLMs is a critical aspect of responsible AI development. Leveraging state-of-the-art machine learning algorithms, this project aims to enhance the transparency and fairness of LLMs by identifying and addressing potential biases in their outputs.
Prof. Viktor K Prasanna
Faculty Email: prasanna@usc.edu
Department: Electrical Engineering and Computer Science
Research Lab Website: https://sites.usc.edu/prasanna/
Description:
Areas of research: Accelerated computing, FPGAs, Accelerators for ML and AI, Data Science applications, Adversarial AI in vision, Applied ML.
Prof. Chongwu Zhou
Faculty Email: chongwuz@usc.edu
Department: Electrical Engineering
Research Lab Website: Nano Lab
Description:
Professor Chongwu Zhou's research lab has projects on the synthesis and device applications of carbon nanotubes and two-dimensional materials such as MoS2 and WSe2. Visiting students will work on the synthesis of nanomaterials and the characterization of novel nano-electronic devices. Areas of research: microelectronics, semiconductor technology, nanotechnology.
Prof. Kandis Abdul-Aziz
Faculty Email: kabdulaz@usc.edu
Department: Civil and Environmental Engineering
Research Lab Website: The Sustainable Lab
Description:
Projects: 1. Integrated Carbon Capture and Utilization. 2. Heterogeneous Catalyst Synthesis and Optimization
Prof. Danny JJ Wang
Faculty Email: dannyjwa@usc.edu
Department: Biomedical Engineering
Research Lab Website: Laboratory of Functional MRI Technology
Description:
We develop cutting-edge magnetic resonance imaging (MRI) technologies for mapping the function and physiology of the brain and other body organs, and translate these new technologies to a range of neurological disorders. We host the first FDA-approved ultrahigh-field MRI scanner (Siemens 7T Terra) in North America, which provides ultrahigh sensitivity and spatiotemporal resolution.
Prof. Raymond L. Goldsworthy
Faculty Email: rgoldswo@usc.edu
Department: Biomedical Engineering
Research Lab Website: Bionic Ear Lab
Description:
Dr. Goldsworthy’s lab, The Bionic Ear Lab, studies how hearing loss affects music appreciation. We combine auditory neuroscience, biomedical engineering, and psychology to help people with hearing loss rediscover music. Find out more at the above website.
Prof. Vinay Duddalwar
Faculty Email: Vinay.Duddalwar@med.usc.edu
Department: Clinical Radiology
Research Lab Website: USC Radiomics Lab
Description:
Selected areas of interest include renal cell carcinoma, evaluation of renal masses, muscle-invasive bladder carcinoma, contrast-enhanced ultrasound (CEUS), prostate cancer, and radiomic QA/toolkit development. For a comprehensive list of our projects, please reach out to Professor Duddalwar.
Prof. Emilio Ferrara
Faculty Email: emiliofe@usc.edu
Department: Computer Science
Research Lab Website: http://www.emilio.ferrara.name
Description:
Looking to host students who are interested in and qualified to work in the above lab.
Prof. Hossein Hashemi
Faculty Email: hosseinh@usc.edu
Department: Electrical and Computer Engineering
Research Lab Website: Hossein Hashemi Group
Description:
Current research projects include radiofrequency and millimeter-wave integrated circuits for 5G/6G wireless communications, radar, and power beaming; chip-scale lidar for self-driving cars and 3D imaging; optical computing in silicon photonics; computational inverse design and optimization of electromagnetic structures (mm-wave and optical); and implantable biomedical integrated systems.
Prof. Peter A. Beerel
Faculty Email: pabeerel@usc.edu
Department: Electrical and Computer Engineering
Research Lab Website: Energy Efficient Secure Sustainable Computing Group
Description:
Interested in hosting two students in the general area of Energy-Efficient Trustworthy Machine Learning, covering hardware acceleration, hardware-algorithm co-design, privacy and machine learning, and security and machine learning.
Prof. Cyrus Shahabi
Faculty Email: shahabi@usc.edu
Department: Electrical & Computer Engineering
Research Lab Website: Integrated Media Systems Center
Description:
The Integrated Media Systems Center, headquartered in USC’s Viterbi School of Engineering, pursues informatics research that delivers data-driven solutions for real-world applications. We find enhanced solutions to fundamental data science problems and apply these advances to achieve major societal impact. One of our most exciting research thrusts is analyzing location and mobility data for various real-world applications while maintaining user privacy. For example, we have an ongoing project to identify anomalous behavior based on GPS tracks. We use advanced techniques such as deep neural networks, differential privacy, and massive location data to build principled solutions for important problems. We are looking for students with basic experience in Python programming, probability, and machine learning.
Prof. Hangbo Zhao
Faculty Email: hangbozh@usc.edu
Department: Aerospace and Mechanical Engineering
Research Lab Website: https://sites.usc.edu/zhaogroup/
Projects:
1. Develop novel flexible sensors and actuators for closed-loop control of dexterous soft robots. Areas of research: flexible electronics, micro/nano fabrication, sensors and actuators, soft robotics.
Prof. Qiang Huang
Faculty Email: qianghua@usc.edu
Department: Industrial and Systems Engineering
Research Lab Website: https://huanglab.usc.edu
Projects:
1. Domain-informed machine learning for smart manufacturing: Students will explore research to develop quality control theory and methods for personalized manufacturing, as well as small-sample machine learning for quality control in additive manufacturing.