Asking 'why' instead of 'how' could better explain AI decisions

Opening up the "black box" of AI with a trick from philosophy
AIs make more and more decisions about us, yet it can be difficult to understand how their algorithms influence those decisions. (Markus Spiske/Unsplash)

There was once a time when humans made important decisions about other humans.

You went to your bank manager, in person, to ask for a loan.

A human hiring committee decided which candidate would get a job.

And judges even determined what a guilty offender's sentence would be!

Now, many of those decisions are made by AIs. The machines look at reams and reams of data related to, say, car loans and, using an algorithm, compare the patterns they find in that data with the loan applications they're deciding on.

Besides being a little terrifying, there are a couple of problems with AIs making these decisions, maintains Sandra Wachter, an assistant professor at the Oxford Internet Institute and fellow at the Alan Turing Institute.

First of all, the algorithms the machines use are often proprietary, meaning they aren't available to the general public or other computer scientists to examine.

Second, and perhaps more troubling, AIs often interpret data in ways more complex than even their programmers can understand. In those cases, nobody could explain how the "black box" in an AI works, even if they wanted to.

So Wachter and her colleagues have come up with a different way to try to understand why an AI may have come to a particular decision: "counterfactual explanations."

Counterfactual explanations are a way of testing the AI to figure out how it weighed a specific variable in its calculations, she explained to Spark host Nora Young.

For example, if you were denied a loan, you might ask whether your income made a difference in how the machine decided. You may find out that if you had made $5,000 more per year, you would have been granted the loan, Wachter explained.
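The idea can be sketched in a few lines of code. This is a toy illustration of the concept, not Wachter's actual method or any real lender's system: the `loan_model` rule, the applicant's figures, and the $500 search step are all made-up assumptions for the example.

```python
def loan_model(income, debt):
    """Stand-in 'black box': approve if income comfortably exceeds debt load.
    A real model would be far more complex and opaque."""
    return income - 2 * debt >= 50_000

def income_counterfactual(income, debt, step=500, limit=100_000):
    """Find the smallest extra income (in `step` increments) that flips
    a rejection into an approval, probing only the model's outputs."""
    if loan_model(income, debt):
        return 0  # already approved; no change needed
    for extra in range(step, limit + step, step):
        if loan_model(income + extra, debt):
            return extra
    return None  # no counterfactual found within the search limit

# A hypothetical applicant earning $55,000 with $5,000 of debt is denied;
# the counterfactual reports how much more income would change the outcome.
print(income_counterfactual(55_000, 5_000))  # → 5000
```

The point of the sketch is that the model is queried only through its inputs and outputs: the counterfactual "you would have been approved with $5,000 more income" emerges without ever inspecting the proprietary code inside.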

There is no coding required, and it gives a clear answer without having to reveal any proprietary information, she added.

Perhaps more important, counterfactual explanations can reveal whether an AI is using criteria that are ethical, or even legal.

If it told you that you didn't get the loan because you were female, that would reveal a problem the coders would have to address, she said.

"Usually people that are not surrounded by this technology on a daily basis might feel very uncomfortable having a black box make decisions for them," she said. Her method would increase the trustworthiness of the AI's decision-making methods, she added.

Wachter's ideas have been well received by the AI community, and Google has adopted a counterfactual explanation query tool into its TensorFlow AI platform. Developers like it because it strikes a middle ground, making AIs more transparent without revealing or explaining proprietary code, and because no computer knowledge is required to use it.

She's hoping it gains wider acceptance. "I would like to see the ability to query an algorithm [whenever] they are being used to make very important decisions about us."

