Asking 'why' instead of 'how' could better explain AI decisions
There was once a time when humans made important decisions about other humans.
You went to your bank manager, in person, to ask for a loan.
A human hiring committee decided which candidate would get a job.
And judges even determined what a guilty offender's sentence would be!
Now, many of those decisions are made by AIs. The machines comb through reams and reams of data related to, say, car loans and, using an algorithm, compare the patterns they find in that data against the applications they're deciding on.
That approach raises two problems. First, the algorithms the machines use are often proprietary, meaning they aren't available for the general public, or even other computer scientists, to examine.
Second, and perhaps more troubling, AIs often interpret data in ways more complex than even their programmers can understand. In those cases, nobody could explain how the "black box" in an AI works, even if they wanted to.
So Sandra Wachter, a researcher at the Oxford Internet Institute, and her colleagues have come up with a different way to try to understand why an AI may have come to a particular decision: "counterfactual explanations."
Counterfactual explanations are a way of testing the AI to figure out how it weighed a specific variable in its calculations, she explained to Spark host Nora Young.
For example, if you were denied a loan, you might ask whether your income made a difference in how the machine decided. You may find out that if you had made $5,000 more per year, you would have been granted the loan, Wachter explained.
The method requires no coding, and it gives a clear answer without having to reveal any proprietary information, she added.
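For developers curious about how such a query might work under the hood, the general idea can be sketched in a few lines: take a trained model and a rejected application, then nudge a single feature until the decision flips. The Python sketch below is purely illustrative, with made-up training data, a scikit-learn decision tree standing in for the bank's proprietary model, and a hypothetical `income_counterfactual` helper; it is not Wachter's implementation or any particular library's API.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training data: [annual_income, credit_score] -> loan approved (1) or denied (0).
# The numbers are invented for illustration only.
X_train = np.array([
    [30_000, 640], [35_000, 700], [45_000, 620],
    [55_000, 610], [65_000, 690], [80_000, 650],
])
y_train = np.array([0, 0, 0, 1, 1, 1])

# Stand-in for the lender's model.
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

def income_counterfactual(applicant, step=1_000, max_raise=50_000):
    """Find the smallest income increase that flips a denial into an approval.

    Returns the extra income needed, 0 if already approved, or None if no
    income-only change within the search range flips the decision.
    """
    if model.predict([applicant])[0] == 1:
        return 0
    for extra in range(step, max_raise + step, step):
        candidate = list(applicant)
        candidate[0] += extra  # nudge only the income feature
        if model.predict([candidate])[0] == 1:
            return extra
    return None

applicant = [48_000, 650]  # a hypothetical applicant who was denied
extra = income_counterfactual(applicant)
if extra is not None:
    print(f"Counterfactual: roughly ${extra:,} more per year would flip the decision.")
else:
    print("No income-only change in the search range flips the decision.")
```

In this toy version, the answer comes back as the smallest income increase that changes the model's output, which is exactly the kind of statement the article describes: if you had made a certain amount more per year, you would have been granted the loan.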
Perhaps more important, counterfactual explanations can reveal whether an AI is using criteria that are ethical, or even legal.
If it told you that the reason you didn't get the loan was that you are female, that would reveal a problem the coders would have to address, she said.
"Usually people that are not surrounded by this technology on a daily basis might feel very uncomfortable having a black box make decisions for them," she said. Her method would increase the trustworthiness of the AI's decision-making methods, she added.
Wachter's ideas have been well received by the AI community, and Google has incorporated a counterfactual explanation query model into its TensorFlow AI platform. Developers like it because it strikes a middle ground, making AIs more transparent without revealing or explaining proprietary code, and because no computer knowledge is required to use it.
She's hoping it gains wider acceptance. "I would like to see the ability to query an algorithm [whenever] they are being used to make very important decisions about us."