The misuse of facial recognition technology

The misuse of automated facial recognition technology is not limited to authoritarian countries such as China, North Korea and Myanmar. It is also happening in democracies, such as the United States or India.

Automatic facial recognition (AFR) uses cameras to record faces in a crowd. These images are then processed to create a unique biometric map of each person’s face, based on measurements of the distances between facial features such as the eyes, nose, mouth and jaw. Facial recognition can play a positive role in some situations. In India, for example, it has helped police identify and return to their families thousands of children who had been illegally trafficked.
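
In rough outline, that ‘biometric map’ is simply a list of numbers derived from distances between facial landmarks. The sketch below is a minimal illustration of the idea in Python; it is not any real system’s algorithm, the landmark coordinates are invented for illustration, and modern systems use far richer features than plain distances.

    from itertools import combinations
    from math import dist

    # Hypothetical (x, y) positions of facial landmarks in an image,
    # as a face-landmark detector might report them.
    landmarks = {
        "left_eye":  (102.0, 140.0),
        "right_eye": (168.0, 141.0),
        "nose_tip":  (135.0, 185.0),
        "mouth":     (134.0, 225.0),
        "jaw":       (133.0, 270.0),
    }

    def face_signature(points):
        # Build a crude biometric vector: every pairwise landmark
        # distance, divided by the inter-eye distance so the result
        # does not depend on how large the face appears in the frame.
        scale = dist(points["left_eye"], points["right_eye"])
        names = sorted(points)
        return [dist(points[a], points[b]) / scale
                for a, b in combinations(names, 2)]

    print(face_signature(landmarks))  # ten numbers, one per landmark pair

Two such signatures can then be compared: if the numbers are close enough, the system declares the faces a match.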

In Delhi, police have used automatic facial recognition technology to search photographs in a government database called TrackChild, a web portal holding police reports of missing children and records of children being cared for in childcare institutions. Automatic facial recognition technology has also been introduced at airports such as Shannon, Dublin, Heathrow and Amsterdam Schiphol. The main selling point, supposedly, is that it will enhance our travelling experience and ensure that we are not caught in queues at the passport desk or elsewhere in the airport.

However, facial recognition technology comes at a price. This surveillance technology tracks us domestically and internationally and poses a serious threat to our human rights and civil liberties. ‘Big Brother’ is watching all the time. During the Covid-19 pandemic, many governments declared national emergencies and discussed using technologies such as facial recognition or tracking apps as a way of following the spread of the disease.

Paul Reid, the Chief Executive of the Health Service Executive (HSE) in Ireland, said that the HSE would make a contact-tracing app available to help people diagnosed with Covid-19 to identify their close contacts. The Irish app will follow the German model, in which all the relevant data is stored on the device rather than in a government database. Here again, the issue of privacy is paramount, coupled with the question of whether ordinary citizens trust their government to respect their privacy.
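
The privacy point about where the data is stored can be made concrete. The following is a simplified sketch of the general decentralised approach the German model reflects; it is not the actual HSE or German app code, and every name in it is invented. Each phone broadcasts short-lived random identifiers and records the ones it hears, and all of this stays on the device.

    import os

    class Phone:
        def __init__(self):
            self.my_tokens = []     # random IDs this phone has broadcast
            self.heard_tokens = []  # IDs overheard from nearby phones

        def broadcast(self):
            token = os.urandom(16).hex()  # rotating, unlinkable identifier
            self.my_tokens.append(token)
            return token

        def hear(self, token):
            self.heard_tokens.append(token)

        def check_exposure(self, published_tokens):
            # When someone tests positive, only their *own* broadcast
            # tokens are published. Matching happens locally, on each
            # device, so nobody learns who met whom.
            return any(t in published_tokens for t in self.heard_tokens)

    alice, bob = Phone(), Phone()
    bob.hear(alice.broadcast())          # the two phones were near each other
    infected = set(alice.my_tokens)      # Alice tests positive, uploads her tokens
    print(bob.check_exposure(infected))  # True: Bob is alerted, privately

The design choice is that a central server only ever sees the anonymous tokens of people who volunteer a positive diagnosis, never a map of who met whom.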

In India, many are very sceptical of the request from Prime Minister Narendra Modi that people install a smartphone app to help them identify their risk of catching and spreading the virus. The Indian author Arundhati Roy said that “the coronavirus is a gift to authoritarian states, including India.” She added: “Pre-Corona, if we were sleepwalking into the surveillance state, now we are panic-running into the super-surveillance state.”

Persecution of Uighurs

There are also many uses of automatic facial recognition which are indeed very worrying. The Uighurs are a Turkic Muslim people who live in western China in the Xinjiang Uyghur Autonomous Region. They comprise about half of the region’s 26 million people. They, and other Muslim minorities, are being persecuted by the Chinese government.

According to Human Rights Watch, the Uighur people face Orwellian-style surveillance by the police in China, where information gathered by facial recognition technology is used as a means of persecuting them. In August 2018, a United Nations committee heard that up to one million Uighurs and members of other Muslim groups in China were being detained in internment camps, where they are said to be undergoing “re-education” programmes.

The journalist Colm Keena of The Irish Times interviewed people from China and from Hong Kong who believed there would be consequences for their families in China if it became known that they had spoken to the Irish media in favour of the democracy rallies in Hong Kong. Three of them agreed to be interviewed, but arrived wearing face masks, reflective sunglasses and baseball hats pulled low on their faces in order to hide their identities.

The misuse of this technology is also well documented in the United States. In an article in The Irish Times in May 2019, the journalist Madhumita Murgia recalled the surprise of the researcher Jillian York on being told by a friend that photos of her had been found in a US government database. They were being used to train facial recognition algorithms and, naturally, Ms York wondered whether this had anything to do with the fact that she worked in Berlin for a non-profit group, the Electronic Frontier Foundation. When she accessed the database, she found many other photographs of herself, dating from 2008 right up to 2015.

During further research she discovered photographs of 3,500 people on this database, which is known as the Iarpa Janus Benchmark-C (IJB-C). Iarpa is a US body which funds innovative research on security issues in order to ensure that the US stays ahead of every other country on a whole host of security technologies. Jillian also found out that Noblis was the company which had compiled the 21,294 images for the US government. Among the faces on the database were an Al Jazeera journalist, a technology writer and at least three Middle Eastern political activists. There was also a photo of an Egyptian scientist who had participated in the protests in Tahrir Square in 2011.

Once again, none of the people on the database knew their photos were being used in that way, and none would have given their consent had they been asked. The photos which Jillian found on the database were available online under Creative Commons licences. Companies often take such photos from the internet and store them in a database. Though technically this is neither illegal nor a breach of copyright, the current system is ethically quite unsatisfactory because it does not properly protect the privacy of the people whose images are stored in the database.

Karl Ricanek, Professor of Computer Science at the University of North Carolina Wilmington, is an expert on many aspects of automated facial recognition technology. He makes the point that the technology is not geared solely to recognising people’s faces. He cited a tech company in Boston, called Affectiva, which is building ‘emotional artificial intelligence.’

The company hopes to be able to determine from a webcam image whether or not someone in a shop is going to buy an item. Shoshana Zuboff, an American academic, challenged this type of advertising in her book The Age of Surveillance Capitalism, arguing that it is a fundamental attack on our privacy at the very point when we are making important decisions.

In June 2020, the CEO of IBM, Arvind Krishna, announced that the technology company was opposed to the use of any technology for mass surveillance, racial profiling and violations of basic human rights. After the murder of George Floyd in Minneapolis on 25 May 2020, he called for responsible national standards on how facial recognition systems could be used by police agencies. San Francisco was the first city in the United States to ban police and other agencies from using automatic facial recognition, on the grounds that it is an intrusion on a person’s privacy.

The ethical teaching of most religions emphasises the need to protect human privacy. Therefore, religious people should oppose any use of facial images in the databases of governments or large corporations such as Facebook, Amazon or Google without the consent of the person who has been photographed.

(Fr Seán McDonagh)
