Nathan Dennler is many things: a Viterbi PhD student studying human-robot interaction, a proud Massachusettsan, a figure skater and a textile artist. He is also a member of Queer in AI, a global, volunteer-run grassroots organization that aims to create an inclusive and equitable space for queer people working in artificial intelligence (AI).
Launched in 2017, Queer in AI seeks to promote diversity and inclusivity within AI research and ensure LGBTQ+ perspectives, experiences, and needs are included in AI research and systems. Members include not only undergraduate and PhD students, but also professors and people in research and industry. With around 870 members across 47 countries, the group coordinates much of its work online over Slack.
Co-advised by Professor Maja Matarić and Assistant Professor Stefanos Nikolaidis, Dennler first discovered Queer in AI in 2019 at the Conference on Neural Information Processing Systems (NeurIPS), where he was demonstrating a hair-combing robot developed in Nikolaidis’ lab to help people with disabilities.
Soon, he was volunteering with the group, hosting a workshop exploring potential harms from AI systems that specifically affect queer people; the results will be presented at the AAAI/ACM Conference on AI, Ethics, and Society in August. Such affinity workshops have become a critical pillar of Queer in AI’s initiatives, helping people feel more at ease discussing queer-specific issues in AI.
“I think in general the issue with the way people in power try to solve problems with AI is assuming that everything can be reduced to something quantitative, or that categories are fixed,” said Dennler. “For a lot of queer people, things like your understanding of gender identity are always changing and can’t really be described by categories or numbers and a lot of AI systems can’t adapt as their users change.”
From Algorithms to Advocacy
In June 2023, Dennler and 50 international co-authors from Queer in AI were awarded best paper at the ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT) for “Queer In AI: A Case Study in Community-Led Participatory AI.”
According to the case study, in the US, queer people are at least 20% less represented in STEM than in the national population and experience higher levels of career limitations, harassment, and professional devaluation. Queer people also face issues of bias and exclusion in AI, write the authors, from healthcare discrimination and misgendering to the censoring of queer content. It doesn’t have to be that way, said Dennler.
“Having queer people as part of these systems can help to identify use-cases that are actively harmful and help shape the way these problems are conceptualized to prevent them from actively causing harm to queer people,” said Dennler. “For example, there was previous research on predicting gender or sexuality from faces to try to recommend products to people, which runs a pretty big risk of misgendering people.”
Dennler has explored some of these ideas in his own work. Now in his fourth year, his research in Matarić’s Interaction Lab has examined the effects of voice, appearance, and task on how people perceive robot gender, as well as the use of design metaphors to understand user expectations of robot behavior, including robot gender expression.
For instance, in a paper lead-authored by Dennler titled “Using Design Metaphors to Understand User Expectations of Socially Interactive Robot Embodiments,” the researchers found that body shape was related to the expression of femininity in robots. This replicated previous work that similarly linked robots’ waist-to-hip ratios to their perceived gender expression. Dennler is also the co-author of an upcoming paper on using gender-neutral voices in robots to reduce appearance-based gender stereotypes.
This summer, Dennler is interning at Uber in San Francisco, working on the company’s promotions team to personalize the deals users receive. “At USC I study personalization for human-robot interaction, adapting how robots behave for different end users, but in this role, we’re adapting how the app behaves instead of how the robots behave,” said Dennler.
Looking to the future, Dennler hopes that marginalized communities can be celebrated in AI for their unique perspectives.
“Everyone has their own point of view from their personal experiences, and making sure these points of view are diverse ultimately makes any technical pursuit better,” said Dennler.
“Despite the setbacks that often happen, I am hopeful that the future generally gets better. I think it is becoming easier for marginalized communities to participate in AI in general, and I believe that the future will be even more inclusive.”
Published on June 29th, 2023
Last updated on May 16th, 2024