
TikTok’s new software automatically removes nudity. Why don’t all social media networks do this?

Most social media networks use some automation to flag content that violates their community guidelines. But allowing technology to remove it before any human eyes see it seems like a sound idea.

TikTok will start automatically detecting and removing nudity.

Kiichiro Sato, Associated Press

Social media network TikTok currently relies on a safety team to decide whether content violates the company’s community guidelines, but a decision only reaches that team after software flags something for review. That means these safety team members see a lot of distressing videos.

Over the next few weeks, TikTok will start using technology that detects such content with high accuracy and removes it automatically. TikTok said in a statement that it’ll start with content involving adult nudity, sexual activities, violent and graphic content, minor safety, illegal activities and regulated goods such as drugs and firearms.

The company is already allowing artificial intelligence to sift through and remove offending posts, but this new announcement seems to hint that more accurate technology for automatic removals could speed up the process. In TikTok’s Transparency Report for the first quarter of 2021, the company reported it removed nearly 62 million videos for violating community guidelines or terms of service. Close to 9 million of those videos were flagged and removed automatically without a human ever needing to view the material.

Does 62 million removed videos seem like an outlandish number?

Wrap your brain around the fact that those removals make up less than 1% of all videos uploaded to TikTok, which means users posted more than 6 billion videos to the platform in that quarter alone.

Creators can always appeal a removal decision and will get alerts about any violations. A first violation draws a warning. After that, the account holder will be unable to upload videos, comment or edit their profile for 24 to 48 hours, or TikTok may instead restrict the account to view-only functionality for 72 hours to a week. After several violations, TikTok will warn the user that the account could soon be banned, and if the behavior continues, it will permanently remove the account.

Some content violations, like child sex abuse material, fall under a zero-tolerance policy and TikTok will automatically remove the account. The company may also block a device so it can’t simply create another account.

The company hopes this automation will free safety team members to spend more time on nuanced areas like bullying, hateful behavior and misinformation.

This may be a step in the right direction after Facebook — and Facebook-owned Instagram — has faced harsh criticism from many of its content moderators. One former Facebook and Instagram content moderator, Josh Sklar, told NBC News that he and his colleagues viewed hundreds of posts, many of them traumatic, every day.

A story from The Verge last year reported that Facebook agreed to pay $52 million to current and former moderators. The money is intended to cover costs associated with mental health issues, such as post-traumatic stress disorder and depression, that moderators developed from viewing graphic and disturbing content on the job.

Facebook Chief Technology Officer Mike Schroepfer wrote a May 2021 blog post about the company’s new system for automatically predicting whether content violates community standards.

“Our automated tools are now doing a much better job identifying priority cases to be sent for human review,” he wrote.

And last November, The Verge quoted Ryan Barnes, a product manager with Facebook’s community integrity team, who told reporters during a press briefing, “All content violations will still receive some substantial human review, but we’ll be using this system to better prioritize (that process).”

So while Facebook’s artificial intelligence will filter violating content, real people will still have to view all of the filth. 

Snapchat says on its Brand Safety website that it’s investing in “advanced, automated methods to take action against Snapchatters who violate our terms and community guidelines.”

The Twitter Public Policy account tweeted a few weeks ago that there needs to be a balanced approach to content moderation. “We must move past the leave-up-take-down model and focus on how people discover content online,” it said.

But I’m all for an automated takedown of offensive material, especially if the tech has a high accuracy rate.

Eric Han, head of safety for TikTok, explained in a blog post that the company started using its automated removal technology in places where it needed help during the pandemic. Only 5% of the videos the technology removed turned out to be false positives. And remember, any creator can appeal a video’s removal to a human.

This software can detect, with 95% accuracy, videos that depict sexual activities, bullying, self-harm and violent content. Why not get those posts off social platforms before human eyes have to see them? I can definitely see the benefit for kids, for adults and for the content moderators who quite possibly have the toughest job in tech.