AI and Ethics

Illustration generated using DALL·E 2.

Written by Ian Bowie

The need for greed

It’s that age-old chestnut – just because you can, should you? When it comes to AI, that chestnut is probably more relevant than ever before. As AI becomes more prevalent in our everyday lives, the morality of using it becomes an ever more pressing topic of discussion.
We live in a world driven by consumption and market economics, where the desire for ever greater profits overrides all other considerations – and hang the consequences. But luckily there are individuals and organisations that do care about the potential human consequences of AI. If you haven’t heard or read about it before now, take a look at the ‘Rome Call’. The Franciscan monk we mention on the podcast is Paolo Benanti, who helped draft the document: a commitment, signed in February 2020 by global tech firms IBM and Microsoft, stating that human dignity will always be a priority as AI is rolled out worldwide.

You always get what you want

So all is good, right? Well, not necessarily, I’m afraid, as we humans just can’t help ourselves. Once we get an idea in our heads, and the smallest sign there might be a profit in it, we will push ahead and develop that idea whether it is for the greater good or not. Take autonomous vehicles as an example. An interesting idea, perhaps, but how many millions of driving jobs worldwide would disappear if they ever became a reality? It’s very easy to say that there would be other jobs for all the drivers who had just lost theirs, but would there really? I suspect not.
And of course there is still the ugly spectre of bias hanging in the air. As we have discussed, an AI model is only as good as the data used to train it, so it should come as no surprise that if the data is biased – for example, reflecting an underlying social disparity – the resulting model will make biased decisions, thereby exacerbating and perpetuating that disparity. In some instances this may not matter much, but in the case of primary healthcare, your ethnicity may well be the difference between life and death.
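The point about biased training data can be sketched in a few lines of Python. The figures below are entirely invented for illustration, and the “model” is nothing more than a per-group base-rate lookup, but it shows the mechanism: a system trained on skewed historical decisions faithfully reproduces the skew.

```python
# Hypothetical historical records: (group, approved).
# Applicants are assumed equally qualified, yet group "B" was
# historically approved far less often than group "A".
history = [("A", 1)] * 70 + [("A", 0)] * 30 + \
          [("B", 1)] * 30 + [("B", 0)] * 70

def approval_rate(records, group):
    """Fraction of historical applications from `group` that were approved."""
    outcomes = [approved for g, approved in records if g == group]
    return sum(outcomes) / len(outcomes)

# A naive model that simply learns each group's historical base rate
# will reproduce the disparity exactly in its future decisions.
learned = {g: approval_rate(history, g) for g in ("A", "B")}
print(learned)  # {'A': 0.7, 'B': 0.3}
```

Nothing in the code is malicious; the unfairness lives entirely in the data, which is exactly why biased inputs are so hard to spot after deployment.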

Technology moves faster than legislation

Individual governments are slowly waking up to the need to legislate on AI development, but at the time of writing there is no international coordination. And even if there were, given the speed of technological innovation, legislators might find it difficult to keep up with the pace. So can we trust the industry to regulate itself? Sadly, I suspect not.
We are living in a world where corporations use AI to monitor their employees, governments use it to watch and even control our behaviour, and brands use it to get us to spend ever more money on goods we don’t necessarily need. Whether this is morally or ethically right, and whether it is the future we want for ourselves, are questions that would at least benefit from some open public debate. The rest I shall leave up to you.