
Do you want websites to moderate content? That’s easier said than done

Rather than advocating for perfect content moderation, we should be empowering and encouraging user control.


This April 26, 2017, file photo shows the Google mobile phone icon, in Philadelphia.

Matt Rourke, Associated Press

The fight for the internet is still very much alive. 

As proof, on Sept. 15, Sen. Mike Lee, R-Utah, hosted yet another hearing on antitrust. As such hearings inevitably do, it strayed from antitrust toward questions ultimately concerned with Section 230 of the Communications Decency Act. To an extent, the hearing served as an opportunity for politicians to air grievances against Google. That’s a problem. Politicians shouldn’t weaponize antitrust laws in their attempts to grapple with Section 230.

For a long while, platforms like Facebook and Google have borne the brunt of “techlash” criticisms. Opponents of Facebook have raised concerns over the site spreading disinformation and hate speech, while Google has been accused of suppressing conservative voices across its platforms. These criticisms might have some merit, but they fail to account for the whole picture.

Sure, you could argue that these platforms should simply hire more people — but that’s easier said than done. At the moment, there is a shortage of people willing to fill the role of content moderator. Many moderators currently have to work from home, where some content may not be appropriate to vet. This only exacerbates the difficulty.

Human content moderators inevitably treat any particular post with some subjectivity. Rarely is content blatantly unfit for public consumption (e.g., child sexual abuse material or terrorist content), so moderators may disagree about what to do with a given post. No matter how many moderators a platform hires, they’ll still have implicit biases and the same underlying issues that come with human moderation. It’s an imperfect art, not the exact science some detractors imagine it to be. While some opponents argue that Section 230 enables a company to be careless about the quality of its content and gives it free license to build a “rage machine,” they’re clearly ignorant of how much bad content is actually weeded out of public consumption.

Despite claims that the website is a “rage machine” disseminating disinformation, Facebook has stepped up during the pandemic to ensure its platform is safe and that action is being taken against objectionable content. In the second quarter of 2020, Facebook took action on over 22 million pieces of hate speech content alone, up an astonishing 134% from the previous quarter. Facebook is also quick to identify harmful content — during that same time frame, the company found and flagged nearly 89% of hate speech content before users reported it.

Misinformation presents moderators with a fine line to walk. Many platforms have instituted fact-checking programs to reduce the spread of false information and to remove harmful misinformation, like voter interference rumors or COVID-19 denial. It is important to understand that no platform, regardless of how big it is or how advanced a system it uses, will ever be able to perfectly moderate content at scale. So punishing a company because it doesn’t is unfair.

Facebook, Google and the general public should balance free speech with common-sense content moderation. When looking to deal with hate speech and misinformation, it’s important to exercise caution. Rather than advocating for perfect content moderation, we should be empowering and encouraging user control. Users could very well be more vigilant, taking charge of their own online experience. Weaponizing antitrust in the name of something completely unrelated like Section 230 will leave consumers with less control over their online experience, not more.

James Czerniawski is the tech and innovation policy analyst with the Libertas Institute, a free-market think tank in Utah and an associate contributor with Young Voices. You can follow him on Twitter @JamesCz19.