Facial Recognition and Bias

Illustration: a humanoid robot looking at security monitors, generated using DALL·E 2.

Smile, you’re on camera

They say the British are the most watched people in Europe, which probably means the Chinese are the most watched in the world. Yes, we're talking facial recognition. Of course, as the old adage goes, if you ain't done nothin' wrong you ain't got nothin' to worry about. Surely the software behind all the cameras out there, watching our every move, is as impartial as the Queen of England, or is it? Because the cameras aren't just monitoring us; they are also picking individuals out of the crowd using sophisticated facial recognition software. That software didn't appear out of thin air. It is a product of man, and in this case it is highly likely it really is man, or a group of men, who coded the software the cameras use. And not just any men: predominantly white, western, middle-aged men. So is a system created by such a narrow demographic of society going to be completely unbiased? Probably not. In fact, definitely not.

Your face is your passport

Facial recognition is widely used today in a variety of ways and for a variety of purposes. Put simply, it works by taking an image or video and, if there's a face in the picture, identifying who the person might be. It's part of a broader family of techniques, usually called computer vision, which is essentially the analysis of images and videos for different use cases. If, for example, you upload a picture to social media, you might notice your friends are automatically tagged in it; that's facial recognition at work. At a mundane level it is used to let employees into buildings or specific parts of buildings and, somewhat controversially, to monitor workers' efficiency and alertness in the workplace. But facial recognition is best known for its use in security, where it supports crime prevention and investigation; the question is, how accurate is it?
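To make that concrete, here is a minimal sketch of the matching step behind a tag-your-friends or door-access feature, using the open-source Python face_recognition library. The file names and the tolerance value are illustrative assumptions, not taken from any real deployment.

```python
# A minimal sketch of one-to-one face matching, assuming the open-source
# "face_recognition" library is installed (pip install face_recognition).
# The image file names here are hypothetical.
import face_recognition

# Load a reference photo (e.g. an ID picture) and a new photo to check.
known_image = face_recognition.load_image_file("employee_id_photo.jpg")
unknown_image = face_recognition.load_image_file("camera_frame.jpg")

# Encode each detected face as a 128-dimensional vector.
known_encodings = face_recognition.face_encodings(known_image)
unknown_encodings = face_recognition.face_encodings(unknown_image)

if known_encodings and unknown_encodings:
    # compare_faces reports a match when the distance between encodings
    # falls below a tolerance threshold (default 0.6; lower = stricter).
    match = face_recognition.compare_faces(
        [known_encodings[0]], unknown_encodings[0], tolerance=0.6
    )[0]
    print("Match" if match else "No match")
else:
    print("No face detected in one of the images.")
```

The whole pipeline hinges on the encoding step: if the model that produces those vectors was trained on an unrepresentative set of faces, the distance comparison becomes less reliable for underrepresented groups, which is exactly the bias problem this article is about.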

Watch the bias

If you have ever watched a Mission Impossible film, you will no doubt recall a scene where a mould is made of the bad guy's face and then used by the good guys to gain access to a vault, a building, or a mobile phone. That is Hollywood, of course, but much of today's technology was first glimpsed on the silver screen. AI facial recognition is not foolproof, and perhaps its single most worrying failing is in how it recognises, or fails to recognise, people from different ethnic backgrounds. The unconscious bias of the people who programmed the software in the first place plays a big role in how it works. Middle-aged, white, western males are more likely to hold a bias towards believing that many black people are criminals and that people of Middle Eastern origin are terrorists. When a system with that kind of inbuilt bias is used by government security services, it isn't a big leap of the imagination to foresee what the results might be.

More diversity

So should we be worried? In short, yes, we should. Independent trials of facial recognition software have shown systems consistently misidentifying people as wanted criminals or terrorists based on gender and/or ethnic background (the sketch below illustrates the kind of disparity such trials measure). It is important that those using the software understand and appreciate the limitations of the system they are relying on, especially when it comes to bias. But perhaps the real solution is simply to include more diversity in the teams that create the software in the first place.
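As a rough illustration of what those trials compute, here is a sketch of a per-group false-match audit. The records below are invented purely for illustration; real audits use thousands of labelled image pairs.

```python
# Hypothetical audit sketch: compare false-match rates across
# demographic groups. All data below is invented for illustration.
from collections import defaultdict

# Each record: (group, system_flagged_a_match, pair_is_actually_same_person)
trial_results = [
    ("group_a", True, False),   # a false match
    ("group_a", False, False),
    ("group_b", True, False),
    ("group_b", True, False),
    ("group_b", False, False),
]

false_matches = defaultdict(int)
non_matching_pairs = defaultdict(int)
for group, flagged, same_person in trial_results:
    if not same_person:              # only non-matching pairs can yield false matches
        non_matching_pairs[group] += 1
        if flagged:
            false_matches[group] += 1

# A fair system would show roughly equal rates across groups.
for group in sorted(non_matching_pairs):
    rate = false_matches[group] / non_matching_pairs[group]
    print(f"{group}: false-match rate = {rate:.0%}")
```

When one group's false-match rate is consistently higher than another's, that is the bias the trials report: the same threshold that works acceptably for one demographic wrongly flags members of another.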

Written by Ian Bowie