Battling Bias in AI

June 6, 2019

USC computer science PhD student takes on algorithmic bias to bring fairness and social justice to machine learning.

Hsien-Te Kao, computer science PhD student and graduate research assistant at ISI, received an NSF Graduate Research Fellowship for his work ensuring fairness in machine learning. Photo/Caitlin Dawson.

AI systems are increasingly being used for everything from predicting crime to determining insurance rates—but what happens when human bias creeps into AI? Bias in machine learning algorithms can have serious consequences: people can be denied health services, wrongly targeted for crimes, or turned away for a job because of a discriminatory algorithmic decision.

At the USC Information Sciences Institute, computer science PhD student Hsien-Te Kao is working towards a solution. Kao recently received an NSF Graduate Research Fellowship for proposing a fairness framework that could automatically identify and remove unknown bias in machine learning models, without collecting intrusive personal information.

His proposed work learns the underlying latent, or “hidden,” structure of the training data to uncover implicit biases and remove them during training. In fact, it could even debias factors that aren’t yet known to be biased.
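Kao’s framework itself is not spelled out in this story, but the general idea—learn a hidden structure of the data, then strip out components that encode group membership before a model is trained—can be sketched in a few lines. The example below is a minimal illustration only, using PCA on synthetic data; the flagged component, the data, and the classifier are all assumptions for demonstration, not Kao’s actual method.

```python
# Illustrative sketch only: learn a latent ("hidden") structure of the
# training data, then remove a suspect latent component before training.
# This is NOT Kao's framework; it just shows the general idea with PCA.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data: 1,000 samples, 5 observed features, binary labels.
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# 1. Learn the latent structure of the observed features.
pca = PCA(n_components=5).fit(X)
Z = pca.transform(X)                 # latent representation

# 2. Suppose an audit flags latent component k as encoding group membership
#    (automating this discovery is the hard part a framework would handle).
k = 2
Z_debiased = Z.copy()
Z_debiased[:, k] = 0.0               # remove the suspect component

# 3. Map back to feature space and train on the debiased data.
X_debiased = pca.inverse_transform(Z_debiased)
model = LogisticRegression().fit(X_debiased, y)
print("train accuracy:", model.score(X_debiased, y))
```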

“Under the hood, other unfair sources of information may have been picked up.” Hsien-Te Kao.

“Machine learning algorithms are more or less black boxes—it’s hard to know how they come to their decisions and, as a result, it’s hard to tell if they are biased,” said Kao.

“Current algorithmic fairness methods collect personal information like age, gender or race to conduct direct group comparison—but under the hood, other unfair sources of information may have been picked up.”

Valuable insights

Human behavior is complex, and human interaction is even more complex. But machine learning could help us better understand ourselves and others by looking for behavioral patterns in data—lots of data.

“In this era of big data, machine learning tools can provide valuable insights that are beyond human limitations,” said Kao, whose research spans computational social science and human-computer interaction.

“Machine learning can make a tremendous difference to our lives, but it is crucial to ensure our findings are not biased.” Hsien-Te Kao. Photo/Caitlin Dawson.

At ISI, Kao is currently working on TILES, or Tracking Individual Performance with Sensors, a major research study of hospital workers.

Using machine learning, the researchers aim to crunch massive amounts of data gathered from workplace sensors to better understand, and eventually reduce, workplace stress.

“This study is a great example of how machine learning can make a tremendous difference to our lives, but it is crucial to ensure our findings are not biased against any particular individuals or groups,” said Kao.

The tip of the iceberg

Kao has reason to be cautious. In 2017, Amazon abandoned a recruiting algorithm after it was shown to favor men’s resumes over women’s, and last year researchers concluded that an algorithm used in courtroom sentencing was more lenient toward white people than toward black people.

“And that’s just the tip of the iceberg,” said Kao.

The challenges of fairness originate in the inputs used to train machine learning models, which may carry human biases. In addition, even if the models don’t explicitly learn from personal information such as race or gender, other features in the data can give away this information.

For instance, an algorithm could pick up sensitive information such as gender or age through non-sensitive signals such as breathing or heart rate, inadvertently leading to biased decisions.
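A simple way to see this “proxy” effect is to test whether the non-sensitive signal on its own predicts the sensitive attribute. The sketch below is a hypothetical audit on synthetic data—the heart-rate/group relationship is invented for illustration and is not drawn from the TILES study:

```python
# Illustrative leakage audit on synthetic data (not from any real study):
# can a "non-sensitive" signal predict a sensitive attribute?
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 2000

# Hypothetical setup: average heart rate differs slightly between two groups.
group = rng.integers(0, 2, size=n)                    # sensitive attribute
heart_rate = 70 + 6 * group + rng.normal(scale=8, size=n)

# If heart rate alone predicts group membership better than chance,
# any model trained on heart rate can pick up group information.
auditor = LogisticRegression()
scores = cross_val_score(auditor, heart_rate.reshape(-1, 1), group, cv=5)
print("leakage audit accuracy: %.2f (chance = 0.50)" % scores.mean())
```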

“I see fairness in machine learning as a social movement and social movements take time.” Hsien-Te Kao.

A social movement

There is a silver lining. Because AI can learn relationships inside data sets, algorithms could help us better understand biases that haven’t yet been isolated.

But the machines can’t do it on their own. That’s why Kao’s method aims to identify unknown feature bias and debias the data without even collecting personal information.

Kao cautions that the framework is unlikely to remove bias from machine learning completely. But he sees it as a first step toward an approach that could be applied to any machine learning system at risk of inherent bias.

“I see fairness in machine learning as a social movement and social movements take time to mature and flourish,” said Kao.

“By calling for fairness in machine learning, we are making small changes today which we hope will have a big impact in the future.”
