Facebook and TikTok are failing to detect and block election-related misinformation ahead of next month's midterms, according to a new investigation.

During the investigation, teams from Global Witness and Cybersecurity for Democracy, part of the Center for Cybersecurity at New York University’s Tandon School of Engineering, compared the ability of Facebook, TikTok and YouTube to detect and remove misinformation ahead of the elections.

“This is no longer a new problem. For years we have seen key democratic processes undermined by disinformation, lies and hate being spread on social media platforms — the companies themselves even claim to recognise the problem,” Jon Lloyd, senior adviser at Global Witness, said in a press release.

“But this research shows they are still simply not doing enough to stop threats to democracy surfacing on their platforms,” he added.


Misinformation and delegitimization

The researchers submitted 20 ads in English and in Spanish to each of the three platforms.

Five of the submissions contained false election information and were targeted at five swing states with close races: Arizona, Colorado, Georgia, North Carolina and Pennsylvania.

Specifically, the content featured false information about the time, place and method of voting to stop people from participating in the election.

Another five ads were aimed “to delegitimize the electoral process,” such as voting by mail, per the report. The researchers deleted all submitted promotions before any could go live, so no actual voters were exposed.

Here’s how the three platforms performed.

Facebook

The researchers created a dummy account to place ads and skipped the platform’s “ad authorizations” process when submitting the promotions, in violation of Meta’s policy.

The social media site blocked 13 of the ads but approved seven, according to The Hill. Ads suggesting a changed election date were approved, while those encouraging people to vote twice were rejected.

In response to the report, Meta said that the tests “were based on a very small sample of ads, and are not representative given the number of political ads we review daily across the world,” per CNN.

The spokesperson added: “Our ads review process has several layers of analysis and detection, both before and after an ad goes live.”

TikTok

Dummy accounts were used again to post ads from within the U.S., even though political promotions are banned on the platform.

“TikTok performed the worst out of all of the platforms tested in this experiment,” the report said, noting that only two ads — both related to the COVID-19 vaccine being required for voting — were rejected.

Meanwhile, ads that publicized the wrong election day, encouraged people to vote twice or delegitimized the electoral process were all approved.

A TikTok spokesperson said that the platform was made to create “authentic and entertaining content” and that politics-related advertising is prohibited and removed.

“We value feedback from NGOs, academics, and other experts which helps us continually strengthen our processes and policies.”

YouTube

The video platform rejected half of the ads within a day and the rest shortly after. It also banned the dummy YouTube channel that was used to host the ads.


But, the report noted, the Google Ads account used wasn’t restricted.

“YouTube’s performance in our experiment demonstrates that detecting damaging election disinformation isn’t impossible,” said Laura Edelson, co-director of NYU’s C4D team, per The Hill. “But all the platforms we studied should have gotten an ‘A’ on this assignment. We call on Facebook and TikTok to do better: stop bad information about elections before it gets to voters.”

What does this mean for midterms?

The research brings up concerns about the inadequate steps some of these social media platforms are taking to combat misinformation ahead of the elections.

All three sites have plans for the midterms and strategies to tackle efforts that delegitimize the election but, as CNN noted, this “is a reminder that the platforms can differ wildly in their content enforcement actions.”
