#28: AI and Ethics
Ian and Michael discuss ethical use of AI and the ethics of innovation – if AI and digitalisation will destroy more jobs than they create, is that a path we should continue walking? And is digitalisation the great equalizer, or will it exacerbate inequality and the darker sides of humanity? In the episode, Ian and Michael discuss the Vatican’s AI ethicist, human rights in the era of AI, anti-cheating software in education as an invasion of privacy, racial bias in healthcare, and facial recognition as a tool of oppression.
Automated Transcript
Ian Bowie
Hello and welcome to AI Unfiltered with me Ian Bowie and our resident expert, Michael Stormbom, where we will be talking about everything to do with AI in our modern digital society and what the future holds for all of us.
Ian Bowie
I also read an article recently, it was in the Financial Times actually, and it was about how the Vatican have weighed in on ethics now, and that they’ve actually got a Franciscan monk who is an adviser to the Pope on ethics and AI. And that was quite interesting because, of course, he goes into things like, you know, just because you have it, and just because it works, should you use it? And I think one of the big concerns with the question was the fear of job losses. So, you know, is it necessarily correct to implement AI if it would mean that there would be, you know, direct job losses because of it?
Michael Stormbom
It’s an interesting question. I mean, this has been the case with any technological advance we’ve had: jobs become obsolete. I mean, should we just stop innovating for the sake of saving the current batch of jobs? But I think if it is the case that AI will destroy more jobs than it creates, then of course it becomes an interesting topic, whether it’s worthwhile to continue down that path. But then I think… we have spoken about universal basic income, and I mean… well, the most valuable thing we have is time, right? So imagine not spending all that time at work and spending it on what you want to spend it on?
Ian Bowie
Yeah, but then you get the danger of, you know, okay, universal basic income, but basic being the key word there.
Michael Stormbom
Well, then there’s of course the risk of the income being so low that you can’t really…
Ian Bowie
Well no, let’s say it’s enough that you can, you know, pay the rent, buy food, some clothes when you need them. But that’s it, and we know that humans always want more than that. So if you’ve got these people living on universal basic income with all the time in the world…
Michael Stormbom
Well, yeah, because you will have a large amount of understimulated people…
Ian Bowie
With time on their hands. So yeah, I think people need to do something. Well, yeah, I mean, there are people in the world that are very happy just lying on the sofa watching daytime reality television, but I don’t think they’re the majority. So I think, you know, there is a real potential issue there that if AI takes a critical amount of jobs, even if there is a universal basic income,
Michael Stormbom
There will still be unrest due to the…
Ian Bowie
Yeah, yeah. There’s still gonna be problems, and mental health problems, you know, even if people have got enough money to satisfy…
Michael Stormbom
Well yeah, I mean, work gives you a sense of purpose. Yes. Yeah. If you take that away, then what’s left?
Ian Bowie
What’s the point? Sit around and do nothing all day. I suppose that’s most people’s retirement, isn’t it?
Ian Bowie
Make podcasts?
Ian Bowie
Yeah, yeah, but there you see, you’ve got a purpose. Yeah. You’ve got a reason. Yeah, yeah.
Michael Stormbom
Considering the aging of our population.
Ian Bowie
Absolutely. But yeah, yeah. And you know, not everybody is retiring on a big pension.
Michael Stormbom
No, no, indeed not.
Ian Bowie
So yeah, I think, you know, that’s another area that really needs to be looked at and addressed.
Michael Stormbom
Yeah. So coming back to the Vatican. Yeah. So the, well the argument was that, given the negative impact of this innovation, we shouldn’t necessarily use it then.
Ian Bowie
Yeah. Is it ethical? Yes, we have it. Yes, we can use it. But is it ethical? Is it morally correct to use it? Well, you know, this is where the Vatican are coming from.
Michael Stormbom
But, yeah. Again, I mean, will it benefit more people than it will negatively affect? I guess that’s a question as well.
Ian Bowie
That’s also a very contentious issue, isn’t it? I mean, because of course, companies will always try and manipulate the data to make it look, well, good for them.
Michael Stormbom
Lies, damned lies and statistics. Yeah, for sure. Yeah. Yeah.
Michael Stormbom
This was mentioned in the article on the Vatican. And there was also an article in Wired Magazine about racial bias in health care. So researchers had looked at one hospital in the United States, where they use a particular algorithm to help assess a person’s healthcare needs automatically as a basis for selecting people for further care by calculating a risk score for the patient, and the researchers had found that there was a clear racial bias in the algorithm, disproportionately favoring white people over black people. So a white person would get a significantly higher risk score than a black person, even though they were in roughly the same health. And as a result, the white person would be more likely to receive the extra care.
Michael Stormbom
And the reason is that the algorithm actually tried to predict the cost of health care for the patient as a way to assess healthcare needs. And it was said in the article that the algorithm works reasonably well on that score, in predicting the cost. But it also reflected the underlying inequality in healthcare in the United States. So less money is being spent on treating black patients than on white patients. And the algorithm therefore predicted higher healthcare costs for a white patient than a black patient, even though they would have the exact same ailments, and since the predicted cost is used to assess the patient’s treatment needs, it disproportionately favored white people, regardless of the actual healthcare needs. And this wasn’t a deliberate design choice, but rather the data used to train the model reflected underlying social inequality. And that aspect had not been taken into consideration.
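The cost-as-proxy failure Michael describes can be sketched in a few lines of code. The numbers below are entirely made up for illustration and are not from the study: two patient groups are simulated with the same distribution of true health need, but with historically lower spending on group B. A "risk score" that simply predicts cost then selects far fewer group B patients for extra care than ranking by actual need would.

```python
import random

random.seed(0)

# Hypothetical simulation: both groups have the SAME distribution of true
# health need, but historical spending on group B is only 60% of group A's.
def make_patients(group, spend_factor, n=500):
    patients = []
    for _ in range(n):
        need = random.uniform(0, 10)          # true health need
        cost = need * 100 * spend_factor      # historical cost reflects unequal spending
        patients.append({"group": group, "need": need, "cost": cost})
    return patients

patients = make_patients("A", 1.0) + make_patients("B", 0.6)

# The article says the algorithm predicted cost reasonably well, so assume a
# perfect cost predictor: the risk score simply IS the historical cost.
k = len(patients) // 10  # top 10% are selected for extra care

top_by_cost = sorted(patients, key=lambda p: p["cost"], reverse=True)[:k]
top_by_need = sorted(patients, key=lambda p: p["need"], reverse=True)[:k]

share_b_cost = sum(p["group"] == "B" for p in top_by_cost) / k
share_b_need = sum(p["group"] == "B" for p in top_by_need) / k

print(f"Group B share of extra care, ranked by cost proxy: {share_b_cost:.0%}")
print(f"Group B share of extra care, ranked by true need:  {share_b_need:.0%}")
```

Ranking by the cost proxy excludes group B almost entirely even though the two groups are equally sick; the bias comes from the training signal, not from any explicit use of race in the model.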
Michael Stormbom
This article was published in 2019. And the company behind that particular solution stated at the time that they were working to correct this, but by that point, this particular system had of course been widely used across the United States for quite some time already.
Ian Bowie
Crazy.
Michael Stormbom
Yeah, and I think that illustrates very well that there’s always a risk with these automated systems, and the potential for bias. And especially: what is the system designed for?
Ian Bowie
I mean, I suppose there’s, you know, we’ve talked about, is it AI for the greater good. Or is it AI for the good of those that can pay?
Michael Stormbom
Yeah, I think, as it stands, it’s leaning very much towards… I mean, we’ve been talking about haves and have-nots. I think there’s a great risk that AI will exacerbate the divisions and inequality.
Ian Bowie
Yeah, so there is actually an enormous need for ethicists to be involved.
Michael Stormbom
For sure, yeah. And I think it’s definitely something that would need to be…
Ian Bowie
Go on.
Michael Stormbom
I think it’s something that needs to be discussed more, and I mean, in this article, they’re also talking about how various governments and organizations have come up with various rules for AI or AI ethics, but there isn’t really a global equivalent of the Universal Declaration of Human Rights in the AI realm. There’s nothing of the sort.
Ian Bowie
If we want to be very, very cynical, I suppose we could say, well, there’s no money in ethics. No, no.
Michael Stormbom
I mean, if it’s profitable to kill people, then companies will kill people.
Ian Bowie
Yeah. And they do.
Michael Stormbom
So they do.
Ian Bowie
Yeah. It’s one of the most profitable businesses in the world, isn’t it, arms manufacturing and dealing.
Michael Stormbom
Cigarettes, tobacco.
Ian Bowie
Yeah, anything that kills. Yeah. So actually, if we think about that, is there any hope for…
Michael Stormbom
I mean, if we see AI as an extension of humanity, then it also is an extension of the dark sides of humanity.
Ian Bowie
Well, I think we’ve talked about this in other episodes. I mean, yeah, facial recognition being a case in point. And now of course we have, you know, the ethics of AI in the medical industry. Yeah. Very sad.
Michael Stormbom
Very sad indeed. But I mean, it also goes to show that AI is not necessarily the problem. It’s the human beings that are the problem. So if you design an AI for the profit motive, the AI just does what you ask it to do, basically.
Ian Bowie
I mean, there’s always I mean…
Michael Stormbom
AI is a tool.
Ian Bowie
It is a tool. Yeah. It’s always about profit. I mean, basically, at the end of the day, whatever AI is developed, it’s always going to have a profit motive somewhere in the background.
Michael Stormbom
I don’t see how you get around that without regulation and legislation. Yeah. But which of course will be subject to heavy lobbying.
Ian Bowie
Of course. Yeah. But then, you know, I mean, I’m actually surprised that there are so many health warnings on cigarette packets, to be quite honest.
Michael Stormbom
Well, it was quite a struggle to get there.
Ian Bowie
It was, but they got there in the end. Yeah.
Ian Bowie
So I mean, for example, this chap, the Franciscan monk who runs around Rome advising the Pope. Is it just a waste of time?
Michael Stormbom
Well, I don’t know. I think it’s important to talk about it at least so people are aware, because I mean, the first step to addressing it is to be aware of it in the first place.
Ian Bowie
Yes, I know. But I mean, you know, people are aware of lots of things, and nothing ever happens about it. I mean, I just wonder if, you know, these people sort of talk themselves into knots and nothing’s ever gonna happen.
Michael Stormbom
No, and the other thing is that, of course, things are happening so fast in the technical innovation world, in the AI world, I mean, legislation just can’t keep up.
Ian Bowie
Well, no, because I mean, legislation often takes years to enact.
Michael Stormbom
Yeah. It’s reactive in a way. Yeah.
Ian Bowie
I think there have been cases, I’m not sure, I read about something recently. It was a law which, you know, was basically enacted to prevent something which was relevant in the 1990s, and it’s only now become law. And of course, it’s completely irrelevant because the world has moved on.
Michael Stormbom
The world is just moving at a completely different pace, for sure. Yep. Yep.
Ian Bowie
And with that in mind, I mean, can ethics keep up?
Michael Stormbom
Yeah, I mean, here in the European Union, they’re working on this legislation around AI. So will it make a difference or not? We’ll see.
Ian Bowie
Some people will find a way around it. Other people will just ignore it.
Michael Stormbom
Yeah, and of course, it’s easy to just cook up a model of your own on your computer. Yes. Yeah. That would be an interesting topic: can we use AI to curb human nature? So according to this article, there have been something like 84 distinct documents containing ethical guidelines for AI in the past few years. And the majority are written by the industry itself, or by Western governments, the Western world. So obviously there’s also a huge Western cultural bias in all of this.
Ian Bowie
Yes, it’s Western ethics. Yeah.
Michael Stormbom
I saw an article about… so apparently, there’s this anti-cheating software that’s used in the United States. Apparently the students have to install some sort of web browser extension, which then basically gets access to the computer’s microphone and its camera. So if a student is doing a test or something, it can keep track of what the student is doing to make sure they’re not cheating.
Ian Bowie
Oh, this is the same kind of technology that they tried to introduce into offices to make sure that employees are working. Or actually, I think when they started working from home, they introduced the same thing onto their computers to make sure that they’re actually working.
Michael Stormbom
Yeah, for sure. And actually, I know these freelancer websites where, if you offer your services through those sites, you have to install this software that basically checks whether you’re actually working the hours that you’re reporting, and stuff like that. But yeah, apparently there was a student who took this whole thing to court. And the court said that it is not constitutional. It violated the student’s constitutional protection against unreasonable searches.
Ian Bowie
It’s basically a breach of privacy.
Michael Stormbom
It is. Apparently it can even control the camera and look around your surroundings, basically your entire home and stuff like that. Very, very 1984.
Ian Bowie
Oh, no, no, no, no.
Michael Stormbom
And the other thing that they made note of in this article is that it’s far from conclusive whether the software actually prevents or catches any cheating; there are conflicting studies there.
Ian Bowie
Actually, alright, this isn’t really so much about AI as about education. I mean, don’t you think that in the modern world, examinations are completely irrelevant?
Michael Stormbom
Definitely, to my mind, because whenever I took a test, it was always about just taking in all the knowledge you needed right before the test. Once the test is done, you probably forget all of it. For sure.
Ian Bowie
It’s very sort of short term.
Michael Stormbom
Yeah, I think the main thing I learned is just-in-time knowledge acquisition, and then once you’re done, you discard the information, basically.
Ian Bowie
Maybe you could test people’s ability to source and validate information instead. Yeah. Not what they can remember from having read a couple of books or something like that.
Michael Stormbom
Yeah. I mean, I remember specifically some high school courses where what you did is basically write down verbatim what the teacher had been saying. And then you just repeated it in the test and you got an A, or a 10 as it was at the time.
Ian Bowie
You’ve just described my history tutor. Sit down, shut up and listen, and write down everything I say. Yeah.
Michael Stormbom
So when you answered, there was none of your own analysis. You just repeated the history teacher’s analysis of the thing and that’s it. The only thing you’ve learned is basically stenography.
Ian Bowie
And not even that properly.
Michael Stormbom
No, not even that. And nowadays, you don’t need stenography. You can use speech to text.
Ian Bowie
But no, I just don’t understand why successive governments stick with this very outdated, very old-fashioned and extremely unhelpful system of examinations.
Michael Stormbom
Yeah, indeed. And as it pertains to AI, if we’re now applying AI to do exactly the same thing, that strikes me as a rather bad use of AI.
Michael Stormbom
So there was an article in The Guardian about the Iranian government using facial recognition technology, but they’re using it basically to identify women who are in public and not wearing a hijab.
Ian Bowie
Yeah. Well, of course, we’ve just had that poor girl who was basically murdered by the, what do they call themselves, the morality police in Iran, for not wearing a hijab.
Michael Stormbom
Yeah, there have been those protests there. Yeah. So we’ve been talking about the ethical use of AI, and here we have a clear-cut example of AI used as a tool of oppression. And when we’re talking about democratizing AI, that also means that literally anyone can get their hands on these types of technologies. Facial recognition technologies, for example, are very easy to get your hands on.
Ian Bowie
Well, I mean, that’s pretty much what these kinds of tinpot regimes are going to use it for, of course they are. But of course, I mean, all these governments are going to want to get their hands on it for their own purposes. They’re all doing it, of course, and not necessarily for democratizing the world.
Michael Stormbom
No, quite the opposite, one might argue.
Ian Bowie
Not for the good of the people. Yeah, absolutely. Yeah.
Michael Stormbom
Of course, there are a number of ways of using AI to censor people, not just the chilling effect of facial recognition. I mean, just censoring: look for keywords and censor what people…
Ian Bowie
Supposedly that’s what Facebook and YouTube and all of these platforms are doing all the time. Yeah, they’re hunting for these kinds of negative videos and posts and fake news and everything else.
Michael Stormbom
Well, I think Facebook isn’t, and I think that’s the problem. I mean, they like people being outraged, because then people are engaged with the platform.
Ian Bowie
Yeah, that’s true. Yeah. Yeah. What about YouTube?
Michael Stormbom
YouTube, I’m not sure how it goes with YouTube. I think they have the problem that there’s just so much content there. So I mean…
Ian Bowie
Impossible to police. You know, it’s just me and a couple of friends, we have this idea of just sitting around the table with a few beers and talking about whatever pops into our heads. Of course, you’re talking about a bunch of white, privileged, middle class, Anglo-Saxon males, so you can imagine that some of the stuff isn’t going to be the most politically correct on the planet. So we were also trying to second-guess how long it would take YouTube to cancel our account. Maybe we’re actually not the most extreme people out there.
Michael Stormbom
No, I mean, considering how long Alex Jones was on YouTube before he got kicked off.
Ian Bowie
Yeah. Is it just smoke and mirrors, them telling us that they’re using these technologies to police their platforms?
Michael Stormbom
I would say somewhat. I think it’s also very inconsistent in how they…
Ian Bowie
How they do it.
Michael Stormbom
How they do it, yes. And I think cyberbullying is a big problem on all of these platforms, which I don’t think is being meaningfully addressed.
Ian Bowie
Yeah, but a lot of these people, you know, the so-called trolls and those kinds of people, I mean, they are a minority, aren’t they? They’re just very, very loud, and very, very nasty, and very, very twisted.
Michael Stormbom
That scares people off from…
Ian Bowie
That’s it. You’ve got the well-adjusted majority, who have very reasoned arguments for why they believe certain things, but they just think, I don’t need that kind of poison in my life, I’m just gonna stay away.
Michael Stormbom
Yeah, no, absolutely.
Ian Bowie
And then these horrendous creatures sort of take over the whole platform. Yeah. No, absolutely. Yeah. I mean, don’t you think that there should be AI algorithms out there looking to shoot down these vitriolic…
Michael Stormbom
AI trollhunters?
Ian Bowie
Yes, yeah, absolutely. AI trollhunters, let’s bring them on. Yeah, exterminate. Yeah, fry their IP addresses.
Ian Bowie
You’ve been listening to me, Ian Bowie, and my colleague Michael Stormbom, on AI Unfiltered and for more episodes, please go to aiunfiltered.com. Thank you.
Transcribed by https://otter.ai