Spark

AI's problem with disability and diversity

How machine learning is failing people who don't fit the norm.
Machine learning systems, like those in self-driving cars, gather data about the world and make predictions and decisions based on that data. (Reuters/Stephen Lam)

Machine learning is a powerful approach to artificial intelligence, and it has moved from the lab to something most of us interact with, directly or indirectly, every day.

Traditionally, computing relied on humans writing rules that a computer would then follow. In machine learning, the computer instead looks at data and finds patterns that help it understand the world. It predicts how things will behave in the future, and then makes decisions based on those predictions.
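
To make that distinction concrete, here is a minimal sketch (our own illustration with made-up data and scikit-learn, not an example from the show): a hand-written rule on one side, and a model that learns a similar pattern from examples on the other.

    # A hand-written rule versus a learned pattern (illustrative only).
    from sklearn.linear_model import LogisticRegression

    # Hypothetical training data: [hours of daylight, temperature in C],
    # labelled 1 if people were observed wearing coats, 0 otherwise.
    X = [[8, -5], [9, 0], [10, 3], [14, 18], [15, 22], [16, 25]]
    y = [1, 1, 1, 0, 0, 0]

    # Traditional computing: a human writes the rule the computer follows.
    def rule_based(daylight_hours, temperature):
        return 1 if temperature < 10 else 0

    # Machine learning: the computer derives the pattern from the data
    # itself, then predicts an unseen case based on that pattern.
    model = LogisticRegression().fit(X, y)
    print(rule_based(12, 8), model.predict([[12, 8]])[0])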

It's this data that fuels machine learning. But in raw form, the big data sets used to train AI are bound to contain information that is irrelevant or confusing, so the data needs to be refined. It is "cleaned" by removing outliers, or data that disrupts a larger pattern.
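
As a rough sketch of what "cleaning" can look like in practice (again our own example, with invented numbers, not drawn from any particular system), consider filtering response times by z-score: the dominant cluster survives, and the one person who responds differently is dropped.

    # Hypothetical "cleaning" step: drop anything far from the dominant pattern.
    import numpy as np

    # Made-up response times in seconds; 1.40 might be a real user who
    # relies on assistive technology, not a measurement error.
    times = np.array([0.31, 0.29, 0.35, 0.33, 0.30, 1.40, 0.28])

    z = np.abs((times - times.mean()) / times.std())
    cleaned = times[z < 2.0]    # the dominant cluster is kept
    dropped = times[z >= 2.0]   # the edge data is discarded: [1.4]
    print(cleaned, dropped)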

Jutta Treviranus is the director of the Inclusive Design Research Centre and a professor at OCAD University. She has been working towards more inclusive and accessible technology for much of her life. Her focus is now on improving artificial intelligence systems so they can better serve everyone, including people with disabilities, and she organizes hackathons towards that goal.

"One of the things we want to do when we create data sets for machine learning engines, is we want to increase the signal and decrease the noise," Jutta says. "Meaning that we want to emphasize the dominant patterns, or the patterns that are going to be used to allow the machines to make useful decisions.

"But in the process of doing that we clean out what we call the outliers, or the edge data. Because that makes it more difficult to draw conclusion for the machines and therefore they're not as quick to make a useful decision."

The problem is that outlier data isn't necessarily irrelevant. It can represent the experiences of real people. And while machine learning allows computers to make their own decisions, those decisions are essentially coloured by the data we provide them. "Our research is not very supportive of diversity," Jutta says, "and we have basically transferred that to machine learning."
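
Here is one hedged illustration of how that colouring plays out (our own sketch with invented numbers, using scikit-learn's IsolationForest): a model trained only on the cleaned data treats the person it never saw as an anomaly.

    # Train on "cleaned" data only; the excluded user now looks anomalous.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    cleaned = np.array([[0.31], [0.29], [0.35], [0.33], [0.30], [0.28], [0.32]])
    model = IsolationForest(random_state=0).fit(cleaned)

    # predict() returns 1 for "normal" and -1 for "anomaly". The real user
    # at 1.40 s, removed during cleaning, will likely be flagged as -1.
    print(model.predict([[0.30], [1.40]]))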

"The silver lining to this is that it's made it possible to talk about things like stereotype and prejudice and bias and blind spots with hard scientists. Previously what is called soft science was seen as very fuzzy and indeterminate and didn't have sufficient hard edges. But because the machines make it manifest it becomes a topic that is respected and people will pay attention to and engage in." 

One of the things holding back the improvement of machine learning systems for people with disabilities and other outliers is the use of proprietary software.

"We need open systems," Jutta says. "We need transparent systems in order to serve individuals or to create systems that work for people with disabilities or that work for diversity. If a system is a black box, if it is a closed system,  then there's no way to query it, there's no way to challenge who you've excluded.  And that is a really dangerous thing that happens."

"Now that we have networked systems we can design our systems for diversity. We can personalize them, customize them, make them flexible, so that they address our personal needs. And if there are gaps or things that aren't there for particular requirements someone might have, then we can call upon a larger network of individuals around the globe to help us to fill those gaps. But all of that requires a system that encourages or supports sharing."