The Importance of Explainability

Illustration generated using DALL·E 2.

In episode 19: Explainable AI, we discussed the concept of explainability in AI. In essence, explainability (or interpretability) entails being able to explain, in terms that would make sense to a human being, why an AI system reached a particular conclusion or produced a particular output. In an increasingly automated world, where even life-and-death situations may be decided by an AI, it becomes ever more important to provide transparency and to be able to explain how these systems work.

Death and taxes

Some things in life are simply taken as given, like death and taxes. But other things need explaining, especially when they relate to how much tax we pay and to when and how we might die. As both of these are increasingly linked, in one way or another, to AI, it is exactly that AI that needs to be explained: the AI that increasingly controls many aspects of our lives.
Whether we like it or not, organisations such as banks, credit card companies, and HR departments are outsourcing critical decision-making tasks to algorithms. These days, the first stage in applying for a mortgage, loan, credit card, or job is often filling in an online form. The data on this form is then duly analysed by an algorithm with pre-set parameters. If the data you have supplied falls within these parameters, you will likely pass to the second stage.
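A first-stage screen like this can be sketched as a simple set of rule checks. This is a minimal illustration, not any real lender's system; the field names and thresholds are invented:

```python
# Hypothetical first-stage screening: an application passes only if
# every supplied value falls inside the pre-set parameters.
RULES = {
    "years_in_current_job": lambda v: v >= 3,   # assumed tenure requirement
    "annual_income": lambda v: v >= 30_000,     # assumed income floor
}

def first_stage(application: dict) -> bool:
    """Return True only if the application satisfies every rule."""
    return all(rule(application.get(field, 0)) for field, rule in RULES.items())

print(first_stage({"years_in_current_job": 5, "annual_income": 45_000}))  # True
print(first_stage({"years_in_current_job": 1, "annual_income": 90_000}))  # False
```

Note that the function returns a bare True or False: the applicant in the second call is rejected with no indication of which rule they failed.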

Decisions, decisions

But imagine there is something you include, or don't include, or simply can't include or explain on your application. When applying for a mortgage, for example, the parameters might require that you have been in the same job for at least three years. Maybe you just changed jobs, and maybe you even doubled your salary in the process. The algorithm, though, has no ability to work outside the parameters it has been set, and so it automatically declines your application. Of course, there is no comeback against the algorithm and no explanation will ever be given; you just have to suck it up.

Transparency needed

And it’s exactly the same for job applications. As we have discussed in a previous article, anything that is data driven relies entirely on those who created the formula being as open, inclusive, and unbiased as possible. See the problem? That is exactly why explainability needs to be built into the system from the very beginning. Explainability acts as a check and balance on the people developing the algorithm, and it also provides transparency for those subject to it.
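One way to build explainability in from the start is to have the screening step report which rules failed, rather than returning a bare decision. Again, this is a hedged sketch with invented field names and thresholds, not a real system:

```python
def first_stage_explained(application: dict) -> tuple[bool, list[str]]:
    """Return (passed, reasons) so a declined applicant can see why."""
    # Hypothetical pre-set parameters: threshold plus a human-readable description.
    rules = {
        "years_in_current_job": (3, "at least 3 years in current job"),
        "annual_income": (30_000, "an annual income of at least 30,000"),
    }
    reasons = [
        f"Requires {description}; application shows {application.get(field, 0)}"
        for field, (threshold, description) in rules.items()
        if application.get(field, 0) < threshold
    ]
    return (not reasons, reasons)

passed, reasons = first_stage_explained(
    {"years_in_current_job": 1, "annual_income": 90_000}
)
# passed is False, and reasons names the failed job-tenure rule,
# so a human reviewer could spot that the salary tells a different story.
```

The decision logic is unchanged; the only difference is that every rejection now comes with a human-readable reason, which is precisely the check and balance described above.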

Of course, we have to accept that any algorithm, whatever its purpose, even one developed by the broadest and most diverse team, is going to have bias; that is unavoidably human. But we have an obligation to ensure any bias is transparent and explainable, so that fair, justifiable, and inclusive decisions are made.

Written by Ian Bowie