Spark

Bad algorithms are making racist decisions

"Digital redlining" is often unintentional, but the impact is real

"Digital redlining" is often unintentional, but the impact is real

Redlining refers to the practice of dividing up cities, often to limit minorities' access to services. (Tom Rumble/Unsplash)

The way tech companies use the huge amounts of data they hold disproportionately limits access to their services for vulnerable people, including people of colour. That's according to research from Chris Gilliard, a professor of English at Macomb Community College, who describes the effect as "digital redlining."

much of that is based on training data that… probably [reflects] the biases that are already built into society. - Chris Gilliard

Gilliard compared what is happening online now to redlining: the practice of dividing up a city to make it harder for poorer areas, and areas with larger minority populations, to access banking, insurance, healthcare, or other services. Redlining was practised in both the United States and Canada, and was officially outlawed in the US in 1968 by the Civil Rights Act.

Digital redlining differs from traditional redlining in that it is often hidden. 

Chris Gilliard is a professor of English at Macomb Community College in Michigan. (Blaine Siesser)

Starting in 2016, Facebook allowed advertisers to target audiences by "ethnic affinity," but also allowed them to exclude those audiences from seeing an ad. Facebook has since ended that ability, but while it was active, the investigative news organization ProPublica was able to buy a housing ad that excluded certain ethnic groups, including African Americans, Asian Americans, and Hispanics, from seeing it.

Not only can this kind of narrowcasting prevent targeted groups from applying for a job or buying a house, it also prevents them from ever knowing the post existed. "It's much different than if someone puts in front of their house…a sign saying 'no blacks need apply,'" Gilliard said in a conversation with Spark host Nora Young. "So it's much more insidious in that way."

In 2016, an investigation by Bloomberg found that Amazon tended to withhold its same-day delivery service from minority neighbourhoods. Gilliard pointed to the example of Roxbury, a majority-black neighbourhood in Boston. In Bloomberg's reporting, Roxbury was the lone area where same-day delivery wasn't available, completely surrounded by areas Amazon would deliver to.

A map showing redlining in Philadelphia. (Wikimedia)

"I don't think necessarily that there are people at Amazon saying, 'let's not deliver to black people in Roxbury," Gilliard said. "What typically happens is there's an algorithm that determines that for some reason not delivering there made the most sense algorithmically, to maximize profit or time… And there are often very few people at companies that have the ability or the willingness or the knowledge to look at these things and say, 'hey, wait a minute.'"

While these decisions are often made by AI algorithms, that doesn't mean humans aren't responsible for the results. Gilliard said that when he sees the sort of AI algorithms Amazon and others use, "...my antenna sort of go up, because much of that is based on training data that… probably [reflects] the biases that are already built into society."
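
Gilliard's point about training data can also be shown in miniature. If the labels in historical records encode discriminatory human decisions, a model fit to that data will mimic them without anyone programming the bias in. The areas and records below are invented purely for illustration.

```python
# Hypothetical sketch: a "model" that learns from biased historical
# decisions. The records are invented; suppose past human reviewers
# systematically rejected applicants from one area.

from collections import defaultdict

history = [
    ("area_a", True), ("area_a", True), ("area_a", True),
    ("area_b", False), ("area_b", False), ("area_b", True),
]

# "Training": estimate the historical approval rate per area.
counts = defaultdict(lambda: [0, 0])  # area -> [approvals, total]
for area, approved in history:
    counts[area][0] += approved
    counts[area][1] += 1

def predict_approval(area):
    approvals, total = counts[area]
    # The model simply mimics past behaviour, bias included.
    return approvals / total >= 0.5

print(predict_approval("area_a"))  # True
print(predict_approval("area_b"))  # False: the learned bias
```

The point of the sketch is Gilliard's: nobody wrote "reject area_b." The model inferred it from data that already contained the bias built into society.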