Spark · Q&A

The dangerous myth of neutrality in tech, and how to fix it

Data scientist and journalist Meredith Broussard discusses her new book, More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech.

'The biases that we see in machine learning systems are the biases that exist out in the real world'

In her latest book, Meredith Broussard argues that inequality reinforced by technology isn't just a glitch, but coded into the system itself. (Devin Curry/MITP)

Originally published in March 2023.

When a machine learning algorithm or other advanced technological tool gives you an answer to a complicated problem, you tend to trust it. But should you?

Meredith Broussard, a data scientist, journalist and associate professor at New York University, explores this question in her new book, More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech. She argues that the ways we think about tech design create deep-seated problems, because these systems can end up reflecting pre-existing real-world biases.

Broussard spoke with Spark host Nora Young about the book and how we should approach the use of AI and other technologies to address social issues. Here is part of their conversation.

At one point in discussing healthcare you write, "It's dangerous to transfer diagnostic methods to algorithmic systems without first scrutinizing whether the diagnostic methods are disadvantaging certain groups." Can you explain what you mean by that?

We have situations like the U.S. kidney transplant list where for many years, if you were white, your kidney numbers would be measured in a certain way, and if you were Black, your kidney numbers would be measured in a different way, so that white people would get onto the kidney transplant list earlier than Black people. It was called race correction in medicine.

This, to me, is a really good illustration of why we need to really look at the underlying diagnostic systems before we start implementing them as algorithms, because obviously it's really unfair to put people onto the transplant-eligible list earlier based on their race.

When something that is unfair is encoded in an algorithm, it becomes very difficult to see and almost impossible to eradicate.
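For a concrete sense of how a correction like that operates once it's in code: the widely reported 2009 CKD-EPI equation multiplied its kidney-function estimate (eGFR) by roughly 1.16 for patients recorded as Black, and transplant waitlisting has commonly been keyed to an eGFR threshold of about 20. Here is a minimal, illustrative Python sketch; the function, input value and threshold are simplified assumptions for illustration, not taken from the book:

```python
def race_corrected_egfr(base_estimate: float, recorded_as_black: bool) -> float:
    """Toy version of a race-corrected kidney-function estimate.

    Applies only the race step: the 2009 CKD-EPI equation scaled its
    eGFR estimate by ~1.159 for patients recorded as Black.
    """
    RACE_MULTIPLIER = 1.159  # coefficient reported for the 2009 equation
    return base_estimate * RACE_MULTIPLIER if recorded_as_black else base_estimate


ELIGIBILITY_THRESHOLD = 20.0  # eGFR below ~20 is a commonly cited waitlist cutoff

# The same underlying measurement yields different "kidney numbers",
# and a different answer about who gets listed when.
for is_black in (False, True):
    egfr = race_corrected_egfr(18.0, recorded_as_black=is_black)
    status = "eligible" if egfr < ELIGIBILITY_THRESHOLD else "not yet eligible"
    print(f"recorded_as_black={is_black}: eGFR={egfr:.1f} -> {status}")
```

A higher estimate makes kidney function look healthier on paper, so the race multiplier pushed Black patients past the listing cutoff later, which is exactly the disparity Broussard describes.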

[You argue that] part of the root cause [of bias in tech] is this underlying notion of "technochauvinism," which I believe is a term you coined. So what's technochauvinism?

Technochauvinism is the idea that technological solutions are superior to all others. What I would argue is that it's not a competition; instead, we should think about using the right tool for the task. Sometimes the right tool for the task is absolutely a computer, like, 'you will pry my smartphone out of my cold dead hands.' But other times it's something simple, like a book in the hands of a child sitting on a parent's lap. One is not inherently better than the other. We don't win anything by doing everything with computers instead of doing it with people. We need to think about what gets us toward a better world.

The biases that we see in machine learning systems are the biases that exist out in the real world. One of the things people often say is that AI is a mirror, and so we really shouldn't be surprised when bias pops up in AI systems because we know that we live in an unequal world.

One of the reasons I wrote the book is I feel like we can do better. When you just see an article every few months about a facial recognition fail, you might think, 'Oh yeah, that's happening every so often, it's not really a huge problem.' But when you see all of these stories piled up together, you really get a better sense of the real harms that people are suffering right now at the hands of algorithmic systems.

Do you think these technological tools are always fixable, or are there just cases where we should just be categorically saying, no, we're not using this?

I think we really need to make space for refusal. I think we need to make space to say, 'Oh yeah, this thing is not working as expected and we're going to throw it away.' And that's really hard to do, especially when you've invested millions in developing the system or when you've spent months of your life just trying to make something work.

I think that if more people start thinking about what goes into a computational system, they'll make better decisions about what comes out of it, and put less faith in these systems when we need to be skeptical. And I want [people] to feel empowered to push back against algorithmic systems or algorithmic decisions that are bad decisions.

In one chapter, you talk about gender and how we came to have the binary male-female, pick-your-gender options on forms. Can you tell me a bit about that?

So, 1950s ideas about gender are encoded into our databases. I think about the way that I was taught to write databases in college back in the day. You had to be really stingy with storage back then because storage was expensive, right? And so one of the ways you would make your programs smaller and faster was to use the smallest variable type possible.

Well, a binary value is a zero or a one. It takes a very small unit of space inside the computer, and so I was taught to encode gender as a binary. But now we understand that gender is a spectrum. We understand that gender needs to be an editable field.
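A minimal sketch of the schema shift Broussard describes, using Python's built-in sqlite3 module; the table and column names here are hypothetical, not from the book:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# The old, storage-stingy pattern: gender squeezed into a single 0/1 flag.
conn.execute("CREATE TABLE users_old (name TEXT, gender_flag INTEGER)")

# The editable-field pattern: free text the person controls and can
# update as their self-description changes.
conn.execute("CREATE TABLE users_new (name TEXT, gender TEXT)")
conn.execute("INSERT INTO users_new VALUES ('A. Reader', 'nonbinary')")
conn.execute("UPDATE users_new SET gender = ? WHERE name = ?",
             ("genderfluid", "A. Reader"))

print(conn.execute("SELECT * FROM users_new").fetchone())
# ('A. Reader', 'genderfluid')
```

The second table costs a few more bytes per row, which mattered when storage was scarce and is negligible now.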

But are there ever tradeoffs associated with opening up these systems and making them more inclusive?

One of the concepts that was really important for me was the concept of the curb cut effect. The curb cut is the part at the edge of the sidewalk that slopes down into the street. And they didn't always make sidewalks with curb cuts, right? It was something that was implemented as a result of ages of work by disability advocates.

And curb cuts don't just benefit people in wheelchairs — they benefit people who are using walkers, they benefit people who are pushing babies in strollers, they benefit people who are wheeling a dolly down the sidewalk. And so everybody benefits from a curb cut. It's not just something that benefits people with specific disabilities. So when we design for accessibility, we are actually designing for the benefit of everybody.


This interview has been edited for length and clarity.