Google reportedly targeted people with 'dark skin' to improve facial recognition

04 October 2019

Facial recognition technology’s failures to accurately identify people of color have been well documented and widely criticized. But an attempt by Google to improve its facial recognition algorithms by collecting data from people with dark skin is raising further concerns about the ethics of its data harvesting, the Guardian reports.

Google has been using subcontracted workers to collect face scans from members of the public in exchange for $5 gift cards, according to a report from the New York Daily News. The face scan collection project had been previously reported, but anonymous sources described unethical and deceptive practices to the Daily News.

The subcontracted workers were employed by staffing firm Randstad but directed by Google managers, according to the report. They were instructed to target people with “darker skin tones” and those who would be more likely to be enticed by the $5 gift card, including homeless people and college students.

“They said to target homeless people because they’re the least likely to say anything to the media,” a former contractor told the Daily News. “The homeless people didn’t know what was going on at all.”

“I feel like they wanted us to prey on the weak,” another contractor told the Daily News.

Randstad did not immediately respond to a request for comment. Google defended the project but said it was investigating allegations of wrongdoing.

The contractors also described using deceptive tactics to persuade subjects to agree to the face scans, including mischaracterizing the face scan as a “selfie game” or “survey”, pressuring people to sign a consent form without reading it, and concealing the fact that the phone the research subjects were handed to “play with” was taking video of their faces.

“We’re taking these claims seriously and investigating them,” a Google spokesperson said in a statement. “The allegations regarding truthfulness and consent are in violation of our requirements for volunteer research studies and the training that we provided.”

The spokesperson added that the “collection of face samples for machine learning training” was intended to “build fairness” into the “face unlock feature” for the company’s new phone, the Pixel 4.

“It’s critical we have a diverse sample, which is an important part of building an inclusive product,” the spokesperson said, adding that the face unlock feature will provide users with “a powerful new security measure”.

But the project has drawn harsh condemnation from digital civil rights and racial justice advocates.

The controversy touches on tricky questions about algorithmic bias and data privacy. Is it better to improve facial recognition for people of all skin colors – or ban the technology entirely, as a handful of US cities have done this year? And how much should users be compensated for providing companies such as Google with their personal data?

“Though research clearly shows that facial recognition systems disproportionately misidentify black and brown faces, the goal should not be to improve the accuracy of this extremely invasive system,” said Malkia Cyril, founder and executive director of MediaJustice, a national racial justice organization advancing media and technology rights. “In the context of existing racial bias in the criminal legal system and in counter-terrorism, it should be no one’s goal to make the technology easier to use against people of color – especially black, AMEMSA [Arab, Middle Eastern, Muslim and South Asian] communities and undocumented people.”

“This is totally unacceptable conduct from a Google contractor. It’s why the way AI is built today needs to change,” said Jake Snow, an attorney with the ACLU of Northern California. “The answer to algorithmic bias is not to target the most vulnerable.”

“Whether it’s racist because it’s accurate or because it’s inaccurate, facial recognition and biometric tools in general fuel racial bias,” said Cyril. “No amount of money or informed consent is enough to produce a weakly regulated technology already being used to violate the human rights of millions.”

Rashad Robinson, executive director of Color Of Change, said: “Google’s tactic of targeting economically vulnerable populations is morally reprehensible. There’s no way to put a sticker price on biometric data – nor should there be.”

“Facial recognition software has bias baked into its coding, and has primarily been used to control our movements and decide who belongs and who doesn’t, in public and private spaces,” he added. “This technology is dangerous – especially for black people – and that’s why Color Of Change is mobilizing for complete legislative bans on facial recognition across the country. We don’t need more tech experiments. We need government regulation to stop the unfettered growth of this technology.”

