Shri Narayanan Receives the 2023 ISCA Medal for Scientific Achievement

March 17, 2023

The award is the highest honor in the field of speech communication research.

Shri Narayanan adds the ISCA Medal for Scientific Achievement to his long list of honors.

Shrikanth (Shri) Narayanan, a USC University Professor and Niki and Max Nikias Chair in Engineering at the USC Viterbi School of Engineering, has received the 2023 International Speech Communication Association (ISCA) Medal for Scientific Achievement. The medal is the most prestigious award conferred by this preeminent global organization in the field of human speech communication research.

Narayanan is being honored by this interdisciplinary and international community of scientists, scholars, engineers, technologists and clinicians from academia, industry and government for his “sustained and diverse contributions to speech communication science and technologies and their application to human-centered engineering systems.”

He was named an ISCA Fellow in 2016 and has been a member of the community since he first began working in speech research as a graduate student three decades ago.

“I feel deeply honored, and humbled, especially to be in the company of many of my heroes and role models in speech science and technology research who have paved the way for all of us to follow over the last several decades,” said Narayanan, who is a professor of electrical and computer engineering holding joint appointments in the departments of computer science, linguistics, psychology, pediatrics and otolaryngology-head & neck surgery.

Research that speaks volumes

The ability to use speech and spoken language to express and understand thoughts, desires and emotions is a core human facility. Understanding speech through an engineering lens and developing technologies that can support and enhance it have been areas of central focus in Narayanan’s research.

Over the past three decades, he has developed engineering and computational methods that advance the scientific understanding of human speech. These include novel ways of measuring speech, such as real-time MRI, as well as techniques for analyzing and modeling speech production. His work has helped to illuminate the elegant structural details in how we encode rich information in the speech signal.

He has also advanced the understanding of how and why speech production varies across people and contexts, including in developing children and in those grappling with clinical conditions such as neurological disorders and cancers of the head and neck region. These findings have led to contributions in speech science, linguistics and clinical realms represented by not only hundreds of publications but also original datasets and resources used worldwide for research and teaching.

Speaking of innovation…

Leveraging such scientific knowledge, Narayanan and his students have contributed to a variety of advances in the realm of human speech and language technologies. These include advances in automatic speech and speaker recognition, prosody modeling, speech translation, speech synthesis, spoken dialogue and conversational systems, behavioral signal processing, and affective computing.

These innovations have sparked commercial inventions with broad impact. His patents on voice interfaces laid the early foundation for now-ubiquitous speech-based services and information retrieval for cloud computing and mobile devices. His series of patents on automatic speech recognition for mobile devices paved the way for adaptive on-device (e.g., smartphone) speech processing and user personalization.

Human-centered inquiry

A critical aspect of Narayanan’s work is to create technologies that are inclusive and equitable. His research takes into account the rich diversity of human experience, including developmental differences in children, the linguistic and cultural backgrounds of language speakers, and demographic variables such as psychological state and health status.

Narayanan and his students at the Signal Analysis and Interpretation Lab (SAIL) have also been investigating speech in media as a means to identify bias along ethnic, racial and gender lines, in collaboration with the Geena Davis Institute on Gender in Media. Speech has been a key component of the behavioral signal processing and machine intelligence that SAIL has been pioneering in the service of a variety of societal applications, including security, health and learning. SAIL has also made foundational contributions to human-centered computing research and to clinical applications in mental and behavioral health and wellbeing across the life span, from autism spectrum disorder to depression, suicide and dementia.

One of the major achievements of the media collaboration has been the development of the Geena Davis Inclusion Quotient (GDIQ), a tool that uses artificial intelligence to analyze media content and identify patterns of representation across dimensions such as gender and age. The GDIQ was co-developed by the Geena Davis Institute on Gender in Media in partnership with Narayanan and SAIL.

Spreading the word

Narayanan’s influence in the field of human speech communication and technology is evidenced by numerous award-winning papers, widely cited patents and the founding of two startups. He has mentored over 85 doctoral and postdoctoral scholars and delivered over 100 invited keynote and plenary presentations worldwide.

In 2016, he was also elected a fellow of the National Academy of Inventors.

Even after 30 years of sharing his research, the first address Narayanan gave to the ISCA stands out in his memory.

“I still remember presenting my first paper on speech — analyzing ‘noisy’ sounds like [s] and [sh] in speech using chaos theory — to this community in 1993 in Berlin,” said Narayanan.

He will have the opportunity to address his peers once again when the ISCA medal is presented to him at the opening ceremony of the Interspeech 2023 Conference, which will take place in August in Dublin, Ireland.
