The many biases of facial recognition tools

There has been growing concern about the use of facial recognition in the U.S.

Last month, San Francisco’s Board of Supervisors voted 8-1 to ban the use of facial recognition technology by local agencies. The liberal city is the first in America to pass such a law, which is especially significant given that San Francisco is also the beating heart of the tech universe. Concern about the use of facial recognition has been growing across the U.S., and the debate is particularly striking at a time when the country is taking China to task for the extraordinary surveillance it conducts on its Uighur Muslim minority.

More than 50 agencies across the country use, or have used, facial recognition technology for ID checks or to identify criminals, The Washington Post reported. This has raised concerns not only around privacy but also around accuracy and bias. Last year, the American Civil Liberties Union (ACLU) reported that it had tested Rekognition, Amazon’s controversial facial recognition tool, and found that it incorrectly matched 28 members of the U.S. Congress with mugshots of people who had been arrested for a crime. Racial minorities were disproportionately represented among the mismatches.
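The ACLU has not published the exact code behind its test, but the mechanics are simple to sketch: a face-matching service compares photographs and reports a match whenever its similarity score clears a threshold, and the ACLU used Rekognition’s default setting of 80%. The snippet below is a minimal, hypothetical illustration using Amazon’s CompareFaces API via the boto3 library; the file names are placeholders, and the actual test searched lawmakers’ photos against a large set of arrest photos rather than comparing single pairs.

```python
# A minimal sketch of a one-to-one face comparison with Amazon Rekognition.
# The photo file names are placeholders, not files from the ACLU test.
import boto3


def faces_match(photo_a: str, photo_b: str, threshold: float = 80.0) -> bool:
    """Return True if Rekognition reports a match at or above `threshold` similarity."""
    client = boto3.client("rekognition")
    with open(photo_a, "rb") as a, open(photo_b, "rb") as b:
        response = client.compare_faces(
            SourceImage={"Bytes": a.read()},
            TargetImage={"Bytes": b.read()},
            SimilarityThreshold=threshold,  # the default threshold is 80%
        )
    # Any entry in FaceMatches means the service judged the faces similar enough.
    return len(response["FaceMatches"]) > 0


# With a permissive threshold, two different people can still be reported as a
# match -- the kind of false positive the ACLU observed with members of Congress.
if faces_match("lawmaker_photo.jpg", "arrest_photo.jpg"):
    print("Reported as a match")
```

Amazon’s response to the test was that law enforcement should use a far stricter 99% confidence threshold, which illustrates how sensitive such systems are to how the threshold is set.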

An MIT Media Lab study found that the technology misclassified women as men 19% of the time, with the error rate rising to 31% for darker-skinned women, as per The New York Times.
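Disparity figures of this kind come from comparing a classifier’s predictions against ground-truth labels, broken down by demographic group. The toy calculation below uses made-up records, not the study’s data, and simply shows the arithmetic behind a per-group error rate.

```python
# Per-group misclassification rate on a handful of illustrative records.
from collections import defaultdict

# Each record: (true_gender, predicted_gender, skin_tone_group)
records = [
    ("female", "male", "darker"),
    ("female", "female", "lighter"),
    ("male", "male", "darker"),
    ("female", "male", "darker"),
    ("female", "female", "darker"),
    ("male", "male", "lighter"),
]

errors = defaultdict(int)
totals = defaultdict(int)
for true_label, predicted, group in records:
    totals[group] += 1
    if predicted != true_label:
        errors[group] += 1

for group, total in totals.items():
    rate = 100.0 * errors[group] / total
    print(f"{group}: {rate:.1f}% misclassified ({errors[group]}/{total})")
```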

Civil rights concerns

The ACLU and a group of similar organisations wrote to Jeff Bezos in May 2018 requesting that Amazon stop selling its facial recognition technology to law enforcement agencies. But technology companies are pushing back to varying degrees, and Amazon is chief among those resisting curbs on the technology. Last month, it was revealed that just 2.4% of Amazon shareholders voted for a proposal to stop the sale of facial recognition technology to government agencies, and just 27.5% supported a proposal to study the civil rights implications of Rekognition.

At the moment, there are no federal laws governing the use of the technology, so there is broad scope for companies and subnational governments to do as they please, unless specific laws like the one in San Francisco limit the technology’s use.

In March, BuzzFeed News reported that U.S. Customs and Border Protection (CBP) plans, within two years, to scan the faces of passengers on 16,300 outbound international flights per week, or about 100 million passengers. That would cover 100% of international passengers departing from the top 20 airports in the U.S.

BuzzFeed reported that the images of departing travellers could be retained for up to 14 days and used for “evaluation of the technology” and “assurance of the accuracy of the algorithms”. In other words, the images could be used to train the facial recognition AI tool. The images of non-citizens, meanwhile, can be retained for up to 75 years, the report said.

Congress has authorised the collection of biometrics from non-citizens, but the legality of collecting citizens’ facial data stands on much flimsier ground. This is on top of the known inaccuracies of the system.

However, the situation is beginning to change. On May 22, the U.S. House of Representatives Oversight Committee held a hearing on the impact of facial recognition technology on civil rights and liberties. The cause brought Republicans and Democrats together, a rare event in Washington these days.

“You’ve hit the sweet spot that brings progressives and conservatives together,” said Mark Meadows, a Republican Congressman.

“So what demographic is this [facial recognition] most effective on?” freshman Democrat Congresswoman Alexandria Ocasio-Cortez asked.

“White men,” said Joy Buolamwini, founder of the Algorithmic Justice League, an organisation that fights bias in code and algorithms.

“And who are the primary engineers and designers of these algorithms?” Ms. Ocasio-Cortez asked. “Definitely white men,” said Ms. Buolamwini, going on to describe the possibility of using facial recognition technology on body-worn cameras used by the police. In a system already fraught with racial bias, Ms. Buolamwini said, technology can exacerbate the challenges.

The hearing is only an early step in what will have to be a swift but substantial effort if the law is to catch up with, and regulate, the burgeoning use of facial recognition.

(Sriram Lakshman is The Hindu’s Washington correspondent.)
