Facebook secret software reveals 8.7m child abuse images on its platform

  25 October 2018

Facebook has said its moderators have removed 8.7m child abuse images in the past three months, as the company battles pressure from regulators and lawmakers worldwide to speed up removal of illicit material.

It said on Wednesday that previously undisclosed software automatically flags images that contain both nudity and a child, helping its reviewers. The company also revealed a similar machine learning tool that it said catches users engaged in “grooming” of minors for sexual exploitation.
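Facebook has not published details of how the software works, but a flag of this kind can be thought of as two detector scores combined with a simple rule: the image must appear to contain nudity and appear to contain a child. The sketch below is an illustrative stand-in, not the disclosed system; the detector functions and thresholds are hypothetical.

```python
from typing import Callable

# Hypothetical detector interfaces: each returns a probability in [0, 1].
# In a real pipeline these would be trained image classifiers.
NudityDetector = Callable[[bytes], float]
MinorDetector = Callable[[bytes], float]

def should_flag_image(image_bytes: bytes,
                      nudity_detector: NudityDetector,
                      minor_detector: MinorDetector,
                      nudity_threshold: float = 0.8,
                      minor_threshold: float = 0.8) -> bool:
    """Flag an image only when BOTH signals are strong, as the article describes."""
    p_nudity = nudity_detector(image_bytes)
    p_minor = minor_detector(image_bytes)
    return p_nudity >= nudity_threshold and p_minor >= minor_threshold

# Example with dummy detectors standing in for trained models.
print(should_flag_image(b"...", lambda b: 0.95, lambda b: 0.91))  # True: sent to reviewers
```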

Facebook has vowed to speed up removal of extremist and illicit material, and machine learning programs that sift through the billions of pieces of content users post each day are essential to its plan.


Facebook’s global head of safety, Antigone Davis, told Reuters in an interview that the “machine helps us prioritise” and “more efficiently queue” problematic content for its reviewers.
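In practice, that kind of prioritisation can be as simple as ordering flagged items by classifier score so the riskiest content is reviewed first. The sketch below illustrates the idea with a toy priority queue; the field names and scores are invented, not Facebook’s actual code.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class FlaggedItem:
    priority: float                         # negative score, so highest risk pops first
    content_id: str = field(compare=False)  # excluded from ordering

class ReviewQueue:
    """Toy review queue: content with the highest classifier score is seen first."""
    def __init__(self):
        self._heap = []

    def add(self, content_id: str, score: float) -> None:
        # score: the model's estimated probability that the content violates policy.
        heapq.heappush(self._heap, FlaggedItem(priority=-score, content_id=content_id))

    def next_for_review(self) -> str:
        return heapq.heappop(self._heap).content_id

queue = ReviewQueue()
queue.add("post_123", score=0.42)
queue.add("post_456", score=0.97)
print(queue.next_for_review())  # post_456: the higher-scoring item is reviewed first
```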

The company is exploring applying the same technology to its Instagram app.

Machine learning is imperfect, and news agencies and advertisers are among those that have complained this year about Facebook’s automated systems wrongly blocking their posts.

Davis said the child safety systems would make mistakes but users could appeal. “We’d rather err on the side of caution with children,” she said.

Before the new software, Facebook relied on users or its adult nudity filters to catch such images. A separate system blocks child abuse imagery that has previously been reported to authorities.
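That separate system is essentially a hash-matching check: previously reported images are reduced to fingerprints, and new uploads are compared against that list. The sketch below is a simplified illustration using an exact cryptographic hash; industry systems use perceptual hashes that still match resized or re-encoded copies, and the blocklist shown here is hypothetical.

```python
import hashlib

# Hypothetical blocklist of fingerprints of previously reported images.
KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def fingerprint(image_bytes: bytes) -> str:
    # Exact-match hash for illustration only; production systems use perceptual
    # hashing so that slightly altered copies of a known image still match.
    return hashlib.sha256(image_bytes).hexdigest()

def should_block(image_bytes: bytes) -> bool:
    return fingerprint(image_bytes) in KNOWN_HASHES

# Any upload whose fingerprint is on the list is blocked before it is posted.
print(should_block(b"example image bytes"))  # False for this dummy payload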

Facebook has not previously disclosed data on child nudity removals, though some would have been counted among the 21m posts and comments it removed in the first quarter for sexual activity and adult nudity.

Shares of Facebook fell 5% on Wednesday.

Facebook said the program, which learned from its collection of nude adult photos and photos of clothed children, had led to more removals. In some cases, such automated systems have caused outrage, as when Facebook censored the Pulitzer Prize-winning photo of a naked girl fleeing a napalm attack during the Vietnam war.

The child grooming system evaluates factors such as how many people have blocked a particular user and whether that user quickly attempts to contact many children, Davis said.
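Davis’s description suggests a classifier built on behavioural signals rather than image content. The sketch below shows one way such signals might be combined into a risk score; the signal names, weights and threshold are invented for illustration and are not Facebook’s.

```python
from dataclasses import dataclass

@dataclass
class AccountSignals:
    # Hypothetical behavioural signals of the kind Davis describes.
    times_blocked: int               # how many people have blocked this user
    minors_contacted_last_day: int   # distinct minors messaged in the past 24 hours

def grooming_risk_score(s: AccountSignals) -> float:
    """Toy weighted score; a real system would learn its weights from labelled data."""
    score = 0.0
    score += min(s.times_blocked, 20) * 0.03                # repeated blocks raise risk
    score += min(s.minors_contacted_last_day, 50) * 0.015   # rapid outreach to many children raises risk
    return min(score, 1.0)

def should_flag_account(s: AccountSignals, threshold: float = 0.5) -> bool:
    return grooming_risk_score(s) >= threshold

print(should_flag_account(AccountSignals(times_blocked=12, minors_contacted_last_day=30)))  # True
```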

Michelle DeLaune, chief operating officer at the National Center for Missing and Exploited Children (NCMEC), said the organisation expected to receive about 16m child abuse tip-offs worldwide this year from Facebook and other tech companies, up from 10m last year. Given the increase, NCMEC said it was working with Facebook to develop software to decide which tips to assess first.

DeLaune acknowledged a crucial blind spot: encrypted chat apps and secretive “dark web” sites, where most new child abuse images originate.

Encryption of messages on Facebook-owned WhatsApp, for example, prevents machine learning from analysing them. DeLaune said NCMEC would educate tech companies and “hope they use creativity” to address the issue.

