Facebook struggled to remove sensitive content during lockdown

  14 August 2020

Facebook has admitted it struggled to remove content that promoted suicide or exploited children after global lockdowns forced it to rely more heavily on automatic moderation, the Guardian reported. 

Facebook sent many of its content reviewers home in March and began focusing on AI-driven moderation. In its first quarterly report on its moderation practices since the coronavirus crisis took hold, the company set out some of the successes and failures of that approach.

“We rely heavily on people to review suicide and self-injury and child exploitative content, and help improve the technology that proactively finds and removes identical or near-identical content that violates these policies,” said Guy Rosen, Facebook’s vice-president of integrity.

“With fewer content reviewers, we took action on fewer pieces of content on both Facebook and Instagram for suicide and self-injury, and child nudity and sexual exploitation on Instagram.”

According to the report, Facebook removed 479,400 pieces of content from Instagram for violating rules on child exploitation and nudity between April and June, down from 1m in the previous three months.

Content that broke rules about suicide and self-harm was removed from Instagram 275,000 times over the same period, down from 1.3m the quarter before.

Not every moderation goal was similarly affected, though. Adult nudity, which is increasingly easy for companies such as Facebook to automatically flag and remove using machine vision tools, was removed from Facebook 37.5m times, down slightly from 39.5m in the first quarter of the year.

Hate speech removals, by contrast, rose sharply. In the first three months of 2020, Facebook acted on 9.6m pieces of content over hate speech violations; in the second quarter that more than doubled, to 22.5m posts, videos and photos.

Rosen said that increase came about because the company automated the process of finding and removing hate speech in three new languages – Spanish, Arabic and Indonesian – and improved its technology for finding hate speech in English and Burmese. He said these improvements meant the company now removes 95% of the hate speech it takes down proactively, without requiring a user to flag it as problematic.

As well as the standard challenges of working from home, Facebook had to deal with other problems as it gradually built up its moderators’ capacity for remote working. Mark Zuckerberg said in March that the company faced data protection issues, which meant it could not allow some contractors to work on their own devices.

Mental health concerns also limited the company’s ability to shift work fully remote, Zuckerberg said. Some of the most distressing work would only be done by full-time staff who were still able to enter the office, since the infrastructure required to provide mental health support to contractors working remotely was not in place. That constraint appears to have limited the company’s ability to respond to suicidal content and child exploitation material during the pandemic.

In the long term, Facebook has committed to embracing remote working even after the pandemic subsides. In May, Zuckerberg said he expected that about half the company’s workforce would be remote by 2030.

“We need to do this in a way that’s thoughtful and responsible, so we’re going to do this in a measured way,” he said. “But I think that it’s possible that over the next five to 10 years – maybe closer to 10 than five, but somewhere in that range – I think we could get to about half of the company working remotely permanently.”
