#19: Explainable AI

In an increasingly automated world where even potentially life-and-death decisions may be outsourced to AI systems that frequently operate as black boxes, understanding how and why decisions were made becomes ever more critical. Ian and Michael discuss the concept of explainable AI.

Ian Bowie
Hello and welcome to AI Unfiltered with me, Ian Bowie, and our resident expert Michael Stormbom, where we will be talking about everything to do with AI in our modern digital society and what the future holds for all of us.

Ian Bowie
Okay, so today we thought we’d talk a little bit about something called explainable AI. And sitting next to me here in the studio is Michael, who is actually going to explain what explainable AI really is.

Michael Stormbom
That I will attempt to do, indeed, but perhaps we should start by explaining the distinction between explainable AI and regular AI, because a traditional AI system is sort of like a black box: you put in some data and it produces an outcome, but you’re not necessarily sure why it produced that particular outcome. Imagine, for example, a system that decides whether to approve a house loan or not. You enter all the applicant’s personal and economic data, and it makes a recommendation on whether to approve the loan. You might take a guess as to why it denied or approved a particular loan, but you’re not entirely sure. It’s, in that sense, a black box.
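
To make the black-box point concrete, here is a minimal sketch (not from the episode) of a loan model that outputs a decision with no reason attached. The features, numbers, and model choice are invented for illustration:

```python
# A toy "black box" loan decision, using scikit-learn.
# Features per applicant: [income, debt, years_employed]; label 1 = approve.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

X = np.array([[52000, 4000, 6], [31000, 22000, 1],
              [75000, 9000, 10], [28000, 15000, 2]])
y = np.array([1, 0, 1, 0])

model = GradientBoostingClassifier().fit(X, y)

applicant = np.array([[40000, 18000, 3]])
print(model.predict(applicant))  # e.g. [0] -- denied, with no reason attached
```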

Ian Bowie
Yeah, so it’s the same as a recruitment AI.

Michael Stormbom
Or a recruitment AI. It might decide that this person should go on to the next round of the recruitment process, or it might decide that this person gets excluded, but you are not necessarily sure why it reached the conclusion that it did. The whole concept of explainable AI tries to address this: you build into the system the ability to explain why it reached the conclusion that it did.
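
One common way to add that kind of explanation after the fact is per-feature attribution. A minimal sketch, assuming the same toy model and data as above and using the open-source SHAP library (the feature names are invented):

```python
# Per-feature attributions for the toy loan model, via the SHAP library.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

X = np.array([[52000, 4000, 6], [31000, 22000, 1],
              [75000, 9000, 10], [28000, 15000, 2]])
y = np.array([1, 0, 1, 0])
model = GradientBoostingClassifier().fit(X, y)

applicant = np.array([[40000, 18000, 3]])
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(applicant)  # signed per-feature scores

for name, value in zip(["income", "debt", "years_employed"], contributions[0]):
    print(f"{name}: {value:+.2f}")  # which features pushed toward denial?
```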

Ian Bowie
Okay, that sounds a bit complicated.

Michael Stormbom
It can be, yes. But we have also previously spoken about facial recognition and bias, for example systems misidentifying someone as a criminal. Why did it come to that conclusion?

Ian Bowie
Yeah, but you sometimes ask people to explain their actions, and they can’t give you a logical reason why.

Michael Stormbom
Yes, but in these AI systems it’s all mathematical formulas and optimization algorithms; it has reached that particular conclusion based on the data.

Ian Bowie
Isn’t there… I mean, if we think about recruitment AI, isn’t there a danger that you’re going to take the human factor out of recruiting and miss out on feeling and emotion? I used to recruit many, many years ago, back in the day, and I recruited partly on logic: this person has the right qualifications, the right background, and so on. But a lot of my reasoning was based more on feeling, that this person would be a good fit in the team.

Michael Stormbom
Yeah, absolutely. And of course, those recruitment filters, let’s call them filters for the sake of argument… if you have tons of applicants, is a human being really going to be able to go through all of them thoroughly? So there is that sort of screening, and candidates get excluded automatically without ever really standing a chance in the process. But it comes down to what’s in that filter, and as we have discussed before, there’s a great risk of bias in these AI systems. Say, for the sake of argument, that due to imperfections in the data the system has learned to exclude people with foreign-sounding names. If you build explainability into the process, you can see how the AI reached its conclusion and then actually address problems in the AI, and it also serves the interest of transparency.
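
As a hedged illustration of the kind of audit this enables, here is a toy check of whether a recruitment filter’s shortlist rate differs across name groups. The column names and data are invented:

```python
# Toy fairness check: compare the filter's shortlist rate across name groups.
import pandas as pd

applicants = pd.DataFrame({
    "name_group": ["local", "foreign", "local", "foreign", "foreign", "local"],
    "shortlisted": [1, 0, 1, 0, 0, 1],   # output of the recruitment filter
})

rates = applicants.groupby("name_group")["shortlisted"].mean()
print(rates)  # a large gap between groups is a red flag worth explaining
```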

Ian Bowie
I mean, there’s always going to be bias as long as you have a human inputting data, isn’t there?

Michael Stormbom
Well, yes, but that’s why you want the explainability there. I mean, if you apply for a loan, you probably want to know why you were denied.

Ian Bowie
Yeah, true.

Michael Stormbom
Yeah. But coming back to the recruitment process, of course it’s also not just about hard data; it’s about soft skills and all of those sorts of aspects as well.

Ian Bowie
Just going back to what you just said: if you apply for a loan and you’re unsuccessful, you would like to know why. But even the humans wouldn’t tell you why; there are certain legal issues at play as well. It’s like, oh, I didn’t get the job, can you tell me why not? So why does a machine need to explain itself when a human will refuse to do so?

Michael Stormbom
Well, let’s take another example: healthcare treatment. Let’s say you’re denied treatment based on an automated decision.

Ian Bowie
I doubt it. I mean, they’re leaving themselves wide open for a legal case, aren’t they? They’ll find a reason, which possibly won’t be the truth. The truth is: you’re too old, and we’re not going to spend on that particular treatment because it’s far too expensive. We know it’s good, we know it’ll make you better, but it’s too much money and you’re too old for it, so you get the cheap stuff. They’re never gonna tell you that, are they? I mean…

Michael Stormbom
Yeah, but shouldn’t they be compelled to tell you?

Ian Bowie
Well, in an ideal world we would all get what we need, particularly when we talk about medicine. But I remember that here in Finland there is a very, very big difference in the quality and level of care you will receive, let’s say as a cancer patient, because Helsinki has a bigger budget and can afford the better drugs. If you’re in a small community somewhere, they haven’t got the budget to afford the best, so you’ll get the second or even the third-best option, simply because it’s not within their budget to afford better.

Michael Stormbom
Yeah, well, don’t we as informed citizens deserve to know that, then?

Ian Bowie
Well, yeah, but I mean, can you imagine the chaos the truth can cause…

Michael Stormbom
Well, unleash the chaos, I say. But of course there would be a role for AI in combating discrimination; you take away the sort of non-rational reasons for…

Ian Bowie
But I mean, I think you’re going to be on a hiding to nothing here, because I think a lot of governments see part of their job as protecting the people from the truth. And then the next thing is, do we really want to know the truth? Do you really want to be told that you are too old for cancer care, you’re just not worth it, that’s the truth, and we’re not gonna do anything about it, we’re just gonna let you die? Or would you rather live through the pretense that somebody actually cares?

Michael Stormbom
Well, I would like to know. Take, for example, my father, who died of cancer from an inoperable tumor at the age of 70. Surely I would like to know if the decision was made that, you know what, he’s just too old and it’s too expensive; yes, we could have saved him, but it would have cost a million bucks. Of course I would want to know that.

Ian Bowie
I mean, we share the same fate. My father also died of cancer, and we feel it was a cancer that should have been diagnosed up to half a year before it actually was; had it been diagnosed earlier, there was a chance he could have been saved, but the doctor, in our feeling, was completely useless and missed it. Now, I don’t know what it would help us to know that. What would we do with that information afterwards? Of course, we could sue, we could demand the doctor loses their job, but I don’t know what the end result of having that extra information would be.

Michael Stormbom
I’d rather have hard data than pretense. I mean, we talk about being a data-driven society but then…

Ian Bowie
I think we are, but doesn’t somebody need to control the data, to control who knows what? I mean, surely we can’t have everybody knowing anything they want to know about everything.

Michael Stormbom
Well, fundamental aspects of our healthcare system, certainly. But surely not, you know, the password to the nuclear power plant; that one probably not everyone needs to know.

Ian Bowie
I mean, if you’re reasonably intelligent, okay, but…

Michael Stormbom
Let’s put it like this: let’s say, for the sake of argument, that healthcare is greatly unequal.

Ian Bowie
Absolutely. Everywhere. Yeah. If you’ve got the money, you’ll get anything you want.

Michael Stormbom
Yeah, but take the example that smaller places don’t have the budget for it, so they will more actively deny treatment because they simply can’t afford it. Wouldn’t that be information you would want to take into account when you decide where to live?

Ian Bowie
Well, yeah, but I think anybody who’s even reasonably intelligent can figure out that you’re better off in a bigger metropolis than living out in the countryside, miles from anywhere, where there are very few taxpayers. That’s fairly self-evident, I would say. If you’re going to live in Finland and expect a very high level of services in general, you need to live in either the greater Helsinki area, Turku, or Tampere; I don’t see that there’s really any other option. And I’m not only talking about healthcare. We’ve got a small house in Pohjanmaa, or Österbotten, and the nearest police station is an hour away. So if you need the cops, you’d better hope you can wait an hour for them to get to you, whereas in Turku you can reasonably expect that they’ll be there pretty quickly. Or take a house fire: where’s the nearest fire station? If you’ve got a house 30 kilometers from the nearest fire station and you happen to have a fire, well, you can probably kiss it goodbye, whereas in a city like Turku or Tampere or Helsinki you can reasonably hope that something gets saved.

Michael Stormbom
Yeah, but if we had hard data on it, that would also be useful for resource allocation purposes.

Ian Bowie
Well, yeah, but like I say, you can work out just using common sense that you’re going to get a better level of service. Or you should do, anyway.

Michael Stormbom
Yes, but coming back to the topic of today, namely explainable AI: if you have an AI processing data, the conclusion it reaches is not necessarily common sense. So that’s where the explainability aspect comes into it.

Ian Bowie
Yeah, that makes sense. Yeah.

Michael Stormbom
Because it might reach completely outlandish conclusions and you don’t really understand why.

Ian Bowie
Yeah. Then it becomes interesting.

Michael Stormbom
And I mean, again, there is no AI system that’s 100% accurate. That’s just an impossibility.

Ian Bowie
So in your opinion or experience, what is the highest degree of accuracy that you could ever expect?

Michael Stormbom
That very much depends on the use case, right?

Ian Bowie
Is there actually any data on, for example, these recruitment algorithms?

Michael Stormbom
I’m sure there is, but we would need to look it up, I think.

Ian Bowie
Because it would be very interesting to know, for people who have been recruited based on recruitment algorithms, how happy the companies have been with those new employees, let’s say after six months.

Michael Stormbom
Yeah. But I mean, surely no one is recruiting solely based on what the AI says; they probably use it rather to filter out people who are not going on to the next round.

Ian Bowie
Yeah. But even that would be interesting, wouldn’t it?

Michael Stormbom
Well, yeah, of course. Absolutely.

Ian Bowie
How happy are companies with the quality of applicants after they’ve gone through the filtering round, using the AI?

Michael Stormbom
Yeah, absolutely. So did it select the best candidates? Yeah, for sure.

Ian Bowie
What could be really interesting is to run a parallel test where you are recruiting: you take the same pool of candidates, run it through an AI recruitment algorithm, and also run it through simply people, and see what the end result would be.
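
As a toy illustration of that parallel test (all names invented), comparing the AI’s shortlist with the humans’ shortlist could be as simple as measuring their overlap:

```python
# Same applicant pool, two shortlists: one from the AI, one from humans.
ai_shortlist = {"anna", "bo", "chen", "dana"}
human_shortlist = {"bo", "dana", "elif", "frank"}

overlap = ai_shortlist & human_shortlist
agreement = len(overlap) / len(ai_shortlist | human_shortlist)  # Jaccard index
print(overlap, f"agreement: {agreement:.0%}")  # {'bo', 'dana'} agreement: 33%
```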

Michael Stormbom
That could be quite interesting. Yes.

Ian Bowie
Yeah, that would be very interesting. Would they come up with the same new employee, or would they have a very different outcome?

Michael Stormbom
Absolutely. Yeah, I mean, in terms of recruiting, hopefully there’s some human involved in the recruitment process.

Ian Bowie
Often there isn’t. If we go now to your bank loan example, many times there are no humans involved in deciding who gets a loan and who doesn’t.

Michael Stormbom
Yeah, then it’s all down to the algorithm: okay, denied, and that’s it.

Ian Bowie
So it would be very interesting, maybe, to run the same test with the loans as well.

Michael Stormbom
I think that would be very interesting. Yeah.

Ian Bowie
And then of course, as we know, there’s your personal relationship with the bank manager. Of course, you can have the…

Michael Stormbom
The AI can substitute for that.

Ian Bowie
That’s right, you know: we know you’re overdrawn and we know you just lost your job, but it’s not a problem, Freddie, leave it with me.

Michael Stormbom
Yeah. Well, in fairness, I guess they’ve tightened up the regulations in terms of who gets a loan and who doesn’t, and of course there are public criteria and so forth.

Ian Bowie
But still, mates will help mates; it’s just human nature again. So actually, I suppose, perhaps AI can be used to reduce corruption.

Michael Stormbom
Well, yes, it can do a sort of anomaly detection, in a way; it can find approved loans that look a little bit suspicious in terms of the criteria… Yeah, for sure.
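
A minimal sketch of that idea, using scikit-learn’s IsolationForest on invented loan data to flag approvals that look unusual relative to the rest:

```python
# Anomaly detection over approved loans: flag approvals that look unusual.
# Features per loan: [income, debt, loan_amount]; numbers are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

approved = np.array([
    [52000, 4000, 120000],
    [75000, 9000, 200000],
    [61000, 7000, 150000],
    [18000, 30000, 400000],   # suspicious relative to the others
])

detector = IsolationForest(contamination=0.25, random_state=0).fit(approved)
print(detector.predict(approved))  # -1 marks the approvals worth reviewing
```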

Ian Bowie
Yeah, or just simply bad decisions. I mean, it might be that the human being has granted a loan with the very best of intentions and just made a mistake, a loan that an algorithm would have said no to.

Michael Stormbom
Yeah, absolutely. And of course, you can apply the same to the recruitment process, for example detecting whether there is a particular bias in it, and so forth. So certainly.

Ian Bowie
I suppose you could argue that AI takes out the emotion.

Michael Stormbom
Conceivably, yes. But that doesn’t necessarily mean that the AI reaches rational or logical conclusions, due to the aforementioned issues.

Ian Bowie
But of course if we go to facial recognition, AI…

Michael Stormbom
Yeah. So there you can do this thing where you can see what part of the picture the AI honed in on. For example, when you do research on facial recognition systems, some of them tend to put a lot of emphasis on the eyes, using the eyes to determine who the person in the picture is.
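
One common technique for seeing what part of a picture a model honed in on is occlusion-based saliency: hide one patch of the image at a time and watch how much the score drops. A minimal sketch, where score_fn stands in for a hypothetical face-matching model:

```python
# Occlusion-based saliency: grey out one patch at a time and measure how
# much the model's match score drops; big drops mark the regions it relies on.
import numpy as np

def occlusion_map(image, score_fn, patch=16):
    h, w = image.shape[:2]
    base = score_fn(image)                       # score on the intact image
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.5   # grey out one patch
            heat[i // patch, j // patch] = base - score_fn(occluded)
    return heat   # high values = regions (often the eyes) the model leans on
```

If the resulting heatmap lights up around the eyes, that matches the research finding Michael mentions.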

Ian Bowie
Yeah. Does AI also kind of monitor things like eye movements? Could AI be used, for example, in lie detection?

Michael Stormbom
Yeah, I mean, to the extent that your eye movements betray anything, which I don’t know if they do.

Ian Bowie
Well, I’ve just been reading a book, as I do, and apparently yes, eye movement is a very big factor in detecting who’s telling the truth and who’s lying.

Michael Stormbom
Yeah, just hook up an eye-tracking camera, and there you go, then you have an AI.

Ian Bowie
In the book I’m reading, actually, there’s a professor who I think has said that the human face is capable of something like 270,000 individual expressions. Now, I’ve got no idea how he came up with such an enormous number, but this is apparently the number he’s come up with.

Michael Stormbom
That’s interesting.

Ian Bowie
Very interesting, yeah. So if you could get a range of people to mimic that number of expressions, then theoretically you should be able to program an AI system to detect exactly how people are feeling at any given time.

Michael Stormbom
Yeah, but I guess the question is how individual those facial expressions are…

Ian Bowie
Culturally related as well. Yeah.

Michael Stormbom
Yeah. Well, just take such a thing as nodding your head, which means different things in different cultures… Not a facial expression as such, but of course you’re using your head.

Ian Bowie
This goes into body language, of course. Yeah. The thing I was reading was obviously about facial expression, but it was also about body language, you know, how you use your hands and everything else. But then, theoretically, you could go around the world and record how people from different cultures and different backgrounds use their bodies, hand gestures, and facial expressions to communicate certain things, and put the whole thing into one massive data-driven algorithm. It’d be a hell of a job, though.

Michael Stormbom
But there was this article about Zelensky. He has of course been appearing in front of parliaments in Europe and around the world, and he has been adjusting his body language to the particular culture of each audience. So that’s kind of interesting.

Ian Bowie
So he’s been adapting. Yeah, okay. That’s interesting. Yeah.

Michael Stormbom
Well, I mean, he’s an actor, of course.

Ian Bowie
Of course. That’s right. Yeah, he’s trained. Yeah, but somebody’s obviously spotted this.

Michael Stormbom
Yeah. Someone had posted a picture of the…

Ian Bowie
All right. Okay. Interesting. Yeah.

Michael Stormbom
No AI involved in that one, but anyway, that was quite interesting.
