
Instagram vows to fight misinformation but allows deepfakes. Let's examine this contradiction

Adam Mosseri, head of Instagram, told CBS on Wednesday morning that the platform does not have a policy to remove “deepfakes,” contradicting previous promises to fight misinformation.

A young woman looks at the Instagram profile of user Amalie Lee on the display of a smartphone in Berlin, Germany, on Aug. 2, 2016. Lee, who suffers from an eating disorder, uses Instagram to document her road to recovery.
Monika Skolimowska, dpa via Associated Press

SALT LAKE CITY — Do social media platforms have a responsibility to fight misinformation? The question has been fiercely debated since the 2016 elections, and Facebook CEO Mark Zuckerberg has clearly stated many times that he was working to address the problem.

But manipulated or altered video footage — sometimes referred to as “deepfakes” when the technology used is more sophisticated and convincing — is not included in that promise.

“Well, we don’t have a policy against deepfakes currently,” Adam Mosseri, head of Instagram, told CBS News Wednesday morning. Instagram is owned by Facebook.

“But it’s influencing people with things that aren’t true, that’s why it’s upsetting,” CBS host Gayle King pointed out.

Mosseri said the fake videos upset him too, especially one in which Mark Zuckerberg claims he is going to steal CBSN customers’ data. That latest viral deepfake was created by two artists and features Zuckerberg saying, “Imagine this for a second: One man, with total control of billions of people's stolen data, all their secrets, their lives, their futures," according to Vice. The creators told Vice their purpose was to educate people about deepfakes.

The comment by the head of Instagram was in keeping with previous policies, but critics say it undermines Instagram's promise to fight misinformation.

Wired called Instagram a "hotbed for disinformation and inflammatory content designed to exacerbate tensions among different demographic groups." The publication wrote that while the company had developed a tool to identify false posts, it wouldn't take them down.

Judd Legum published an article in The Guardian, “Facebook’s pledge to eliminate misinformation is itself fake news.” Legum argues, “Facebook is trying to have it both ways. The company is actively seeking credit for fighting misinformation and fake news. At the same time, its CEO is explicitly saying that information he acknowledges is fake should be distributed by Facebook.”

In a documentary-style PR video that Facebook released, “Facing Facts: An Inside Look at Facebook’s Fight Against Misinformation,” one employee explains that content deemed low in truth and high in intent to mislead is the kind of content they “have to get right if they are going to regain people's trust.” Manipulated video footage of high-profile individuals and politicians does not fall under this category.

The same employee later says that “because the problem is so complicated, we’re deploying fundamentally every resource, we’re leveraging machine learning everywhere we can, we’re creating data sets that allow us to build algorithms that detect even the most nuanced version of misinformation.”

Those tools have primarily been used for flagging false content, providing users with additional context, and demoting such posts. But according to a statement from Facebook shared with Politico, "We remove things from Facebook that violate our Community Standards, and we don't have a policy that stipulates that the information you post on Facebook must be true."

Danielle Citron, a law professor at the University of Maryland, told Axios that deepfakes “could cause a riot; it could tip an election; it could crash an IPO.”

The political implications were clear when a doctored video of House Speaker Nancy Pelosi appeared on Facebook, YouTube and Twitter, according to The Washington Post. The video slowed Pelosi’s speech, making her appear drunk. Facebook refused to remove the video, reiterating its argument, “We don’t have a policy that stipulates that the information you post on Facebook must be true.”

The House Intelligence Committee held a hearing on deepfakes on June 13, in which lawmakers said social media platforms needed to put policies in place to combat them, CBS reported. One year ago, The New York Times reported that Facebook had set up a war room for “safeguarding elections” ahead of the 2018 midterm elections.

According to Politico, newsrooms are training their journalists to detect deepfake videos, and lawmakers have pointed out that fake videos could be a huge issue in the 2020 presidential election.

Lawmakers say they will be watching carefully to see what the company decides to do. Sen. Ben Sasse has already proposed legislation that would make creating and distributing deepfakes illegal, according to The Verge.

In an article published in April, Emily Dreyfuss and Issie Lapowsky wrote in Wired, “Facebook is finally coming around to the fact that assuming the role of a supposedly neutral platform for many years has had some not-so-neutral consequences.”