SALT LAKE CITY — Immediately after President Donald Trump went public on Twitter with his COVID-19 diagnosis on Oct. 1, a wave of anti-Asian and anti-Semitic rhetoric and conspiracy theories about the spread of the novel coronavirus flooded the social media platform, according to a new report from the Anti-Defamation League.
Data analysis performed by league researchers found that “in the 12 hours after the president’s initial tweet about his and the first lady’s COVID-19 diagnosis, there was an 85% increase in anti-Asian language and conspiracy theories tracked on the platform.
“Similarly, the rate of discussions about various conspiracy theories increased 41 percent, with some of those conversations also taking on anti-Semitic overtones,” the league reported. “From Oct. 2-5, the percentage of anti-Asian language on Twitter remained higher than usual.”
The nonprofit anti-hate organization said its analysis of Twitter traffic in the days following Trump’s announcement relied on the league’s machine learning-driven Online Hate Index, along with help from the Alan Turing Institute, the United Kingdom’s national institute for artificial intelligence and data science, to analyze millions of Twitter postings.
Anti-Defamation League CEO Jonathan A. Greenblatt warned that targeted online postings can lead to acts of real-world violence.
“The level of hatred and vitriol that was aimed at Asian Americans and Chinese people on social media is simply staggering,” Greenblatt said in a statement. “The hate speech and stereotyping are irresponsible and can spill over into real-world violence. With the alarming increase in physical attacks and hate crimes against Asian Americans in recent months, it is clear that all leaders, including our president, need to stop blaming others for spreading the virus.”
The Anti-Defamation League also reported that, in addition to a large uptick in hate speech targeting ethnic and religious groups, “one of the most common beliefs expressed by conspiracy theories was that a ‘New World Order’ or ‘NWO’ would be implemented — supposedly run by secretive actors who either gave President Trump the virus or plan to assassinate him under cover of his illness.”
The report also found that other conspiracy narratives from the early days of the COVID-19 pandemic, like the widely circulated claim that Microsoft founder Bill Gates had caused the virus, resurfaced following the news of Trump’s diagnosis. And the analysis identified numerous Twitter discussions with elements of the QAnon conspiracy, using the intentionally misspelled code word “patriqts” to “signal fellow believers in Q to rescue the president.”
Rep. Judy Chu, D-Calif., was the first Chinese American woman elected to Congress when she won her Los Angeles County district in 2009. Chu said the Anti-Defamation League’s report helps highlight the negative impacts that hate speech and inaccurate information can have amid the ongoing COVID-19 public health crisis.
“Misinformation and xenophobia are dangerous,” Chu said in a statement. “That is why the (Centers for Disease Control and Prevention) and (World Health Organization) have both warned not to associate COVID-19 with a specific people or country because of the stigma it causes.
“And now, thanks to the ADL’s report, we are able to see that harmful impact in real time. As the ADL’s report shows, the alarming anti-Asian hate incidents we have witnessed in recent months are not an accident. They are the result of an atmosphere of xenophobia and bigotry that is thriving on Twitter and other online platforms.”
Ongoing concerns about how to appropriately regulate online speech extend beyond Twitter to other social media and communication platforms.
In July, the league, along with the NAACP and other groups, led a call for businesses to abstain from advertising on Facebook for an entire month in hopes of compelling CEO Mark Zuckerberg to tighten restrictions on hate speech on the social media platform he founded more than 16 years ago, which now boasts some 2.7 billion monthly active users.
On Monday, Zuckerberg announced on his own Facebook page that the company is changing its internal policies on posts about the Holocaust, including redirecting searches to reliable sources.
“We’ve long taken down posts that praise hate crimes or mass murder, including the Holocaust,” Zuckerberg wrote. “But with rising anti-Semitism, we’re expanding our policy to prohibit any content that denies or distorts the Holocaust as well.
“If people search for the Holocaust on Facebook, we’ll start directing you to authoritative sources to get accurate information.”
Zuckerberg cited his growing concern over rising real-world incidents of anti-Semitic violence.
“I’ve struggled with the tension between standing for free expression and the harm caused by minimizing or denying the horror of the Holocaust,” Zuckerberg wrote. “My own thinking has evolved as I’ve seen data showing an increase in anti-Semitic violence, as have our wider policies on hate speech. Drawing the right lines between what is and isn’t acceptable speech isn’t straightforward, but with the current state of the world, I believe this is the right balance.”
Zuckerberg has long argued that private companies like Facebook should not be put in the position of being the arbiters of First Amendment issues in general or, on a more granular level, of defining what is, or is not, hate speech. Similar concerns have been raised by former ACLU president Nadine Strossen, herself the daughter of Holocaust survivors.
In a September interview with the Jewish Telegraphic Agency, Strossen said even the highest U.S. judicial authority has avoided weighing in on hate speech designations.
“For starters, it’s really important to understand that there is no agreed upon legal definition of hate speech in the United States — the U.S. Supreme Court has consistently, unanimously refused to carve out an exception from free speech protections,” Strossen said. “The label is usually used in everyday speech to refer to speech that conveys a hateful or discriminatory message, particularly about people who belong to racial, religious, sexual or other groups that have traditionally been marginalized and oppressed.”
Strossen also offered the reminder that the U.S. Constitution’s free speech protections apply only to government restrictions and do not extend to private entities.
“The First Amendment protects us only against government restrictions on our speech,” Strossen said. “We have no free speech rights against Facebook or any private sector entity. There are a lot of people who are shocked to learn that!
“However, this is not an all-or-nothing dichotomy: Hate speech is not either completely protected or completely unprotected. Rather, it’s much more complicated in a way that actually makes good sense.”
Strossen said the U.S. Supreme Court has consistently and unanimously held that “the government may not outlaw any speech based solely on the disapproval of its content” but has also supported, in certain circumstances, establishing legal accountability for speech connected to a direct harm.
“When you get beyond the content of the speech and look at the overall context in which it is expressed, then the Supreme Court has laid out what is often summarized as the emergency principle: If speech poses a direct threat of imminent, specific and serious harm — in the particular context, facts and circumstances — then it may and should be punished,” Strossen said.