FACIAL recognition technology has become so good that social media sites like Facebook can recognise me and alert me whenever someone uploads a picture of me to the site. Think about it – this is a social networking site with close to 2.4 billion users, and it can identify me out of the millions upon millions of photos uploaded each day from around the world!
When a technology becomes this powerful, it can be used for good and perhaps not-so-good purposes. Law enforcement agencies, in particular, like to use it to help identify potential criminals and terrorists. But civil rights advocates say it can infringe upon citizens’ civil liberties.
San Francisco has become the first US city to ban the use of facial recognition by the police and local government agencies. This is significant, although it should be highlighted that the ban does not extend to the use of this technology at airports and ports, or by private companies.
A DISCRIMINATING TECHNOLOGY
The main complaint about facial recognition technology is that while it is quite good at identifying certain types of faces, it is not very good at identifying the faces of black people and women.
A famous study by an MIT Media Lab researcher, Joy Buolamwini, found that three leading facial recognition systems were more likely to misidentify the gender of dark-skinned people than that of light-skinned people.
She looked at the technologies developed by Microsoft, IBM and Megvii of China – all of which offered gender classification features in their facial recognition software – and found all three to be problematic.
For her analysis, Buolamwini built a data set of over 1,000 faces of politicians from countries with a high percentage of women in public office – three African countries (for dark-skinned faces) and three Nordic countries (for light-skinned faces). Microsoft’s error rate for dark-skinned women was 21 per cent, while IBM’s and Megvii’s were nearly 35 per cent. In contrast, the error rates of all three systems were below 1 per cent for light-skinned males.
In another test that highlighted the problem facial recognition software has with people of colour, the American Civil Liberties Union (ACLU) ran Amazon’s facial recognition software against photos of members of Congress and found that it misidentified 28 of them as people who had been arrested for crimes – with the false matches falling disproportionately on people of colour.
When the technology is this inaccurate for certain segments of society (in this case, black people and women), those people could end up being more closely monitored by the authorities. It could also lead to misidentification and wrongful prosecution.
Of course, the technology will improve over time – especially as it is trained on more diverse datasets – and there will come a time when facial recognition becomes so accurate that these concerns cease to exist. But even if the technology can be proven practically error-free, there would still be reasons to be concerned.
Imagine if cameras proliferated throughout public spaces, and the authorities had the ability to track your every movement and know precisely who you are and what you are doing.
A natural response from those who favour such surveillance would be: why worry if you don’t plan to do anything wrong? But even if you’re not a criminal or a terrorist, you could have reasons not to want your every movement in public to be tracked.
Let’s say you want to attend a rally of some sort. You might not necessarily want the authorities to have a record of that. You might not want certain people, such as your employer, to know that you were at a particular event or rally. As a result, you might simply end up not attending, to avoid any scrutiny. This is an example of the chilling effect that a sophisticated and highly accurate facial recognition system could have on society and free speech.
In some places, this is already happening. In Moscow and London, thousands of cameras are installed in public places, and the police are able to match surveillance photos against those on their watch lists. In Germany, the Hamburg police used facial recognition to search for individuals involved in riots at the G20 Summit. Perhaps the most egregious case of “Big Brother” watching you is happening in China, where facial recognition software is being used to identify political activists as well as to monitor the Uighur minority in the Xinjiang region.
HELPING LAW ENFORCEMENT
The flip side of all this, of course, is that facial recognition technology has been an effective aid to law enforcement. In Washington County, Oregon, the Sheriff’s Office said that Amazon’s Rekognition (the name of its facial recognition software) has “greatly increased the ability of our law enforcement officers to act quickly and decisively” by cutting the time it takes to “identify criminal suspects” from two to three days down to minutes. It was also facial recognition technology that helped to identify the suspect in the shooting deaths of five newspaper employees in Annapolis, Maryland.
But facial recognition technology isn’t used just for catching criminals. It can also serve other noble purposes, such as helping to find missing children as well as elderly people who might have wandered off and gotten lost.
It’s clear that facial recognition software, like most technologies, can be used for good or bad. Without doubt, as the technology becomes more accurate, it will become more effective at helping to catch criminals and to prevent criminal or terrorist activity. The question society has to ask is – at what price?
I don’t think anyone objects to facial recognition technology being used in airports. That’s because none of us want to be on the same plane as terrorists. So we want that kind of invasive, proactive screening to be done.
We accept that at airports, but can we accept the same thing everywhere, at every moment of our lives in public? A lot of that probably depends on how safe the public sphere is in the coming years. Would people have accepted such high levels of airport security if the infamous 9/11 terror attacks had never happened? It took that one horrific incident to change everybody’s mind about airport security.
If a big terror attack – or worse still, a series of attacks – were to happen in the future, perhaps the tide of public opinion would favour having cameras everywhere, with facial recognition technology to aid the police in identifying people on their watch lists. In contrast, if no big terror attacks happen for years, the general public might not have the appetite for “Big Brother” watching their every move.
Oon Yeoh is a consultant with experience in print, online and mobile media. Reach him at oonyeoh@gmail.com.