A pair of Supreme Court decisions published Thursday in cases involving tech giants Google and Twitter left intact the protections of a controversial communications law that largely shields social media companies and digital platform operators from liability for content posted by users.

In Twitter vs. Taamneh, the court ruled against the family of Jordanian citizen Nawras Alassaf, who died in the 2017 attack on the Reina nightclub in Istanbul, where a gunman affiliated with the Islamic State group killed 39 people. Alassaf’s relatives sued Twitter, Google and Facebook for aiding terrorism, arguing that the platforms helped the militant organization grow and did not do enough to curb terrorist activity on their services. A lower court had earlier allowed the case to proceed.

In the second ruling with direct implications for Section 230 of the Communications Decency Act, the court dismissed Gonzalez vs. Google, leaving a lower court ruling in place. The case stemmed from the death of American college student Nohemi Gonzalez in a terrorist attack in Paris in 2015. Her family sought to sue Google-owned YouTube under the Anti-Terrorism Act, arguing the platform helped the Islamic State group spread its message and attract new recruits. Earlier lower court rulings had sided with Google.

The decisions allow the court to avoid directly assessing the merits of Section 230’s protections, which the tech industry has defended but some elected officials and other critics have attacked as too broad.

“The court will eventually have to answer some important questions that it avoided in today’s opinions,” Anna Diakun, staff attorney at the Knight First Amendment Institute at Columbia University, said in an emailed statement to The Associated Press. “Questions about the scope of platforms’ immunity under Section 230 are consequential and will certainly come up soon in other cases.”

Section 230 of the Communications Decency Act of 1996, widely credited with helping online companies prosper since its adoption, shields platform operators from liability for content published by users. It stipulates that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

In its simplest interpretation, the section places liability for a posting squarely on the user who posted it, creating a legal firewall between that user and the website or platform operator, as well as other users. But the protection is not absolute: as the Electronic Frontier Foundation notes, it does not cover companies that violate federal criminal law, create illegal or harmful content, or illegally repurpose someone else’s intellectual property.

Ahead of a 2021 congressional hearing focused on misinformation and extremist content on social media platforms, Google CEO Sundar Pichai said in written testimony that without Section 230, online platforms would be obligated either to over-filter content or not to filter it at all, and that the provision “allows companies to take decisive action on harmful misinformation and keep up with bad actors who work hard to circumvent their policies.”

“Regulation has an important role to play in ensuring that we protect what is great about the open web, while addressing harm and improving accountability,” Pichai wrote. “We are, however, concerned that many recent proposals to change Section 230 — including calls to repeal it altogether — would not serve that objective well.

“In fact, they would have unintended consequences — harming both free expression and the ability of platforms to take responsible action to protect users in the face of constantly evolving challenges.”

In his own memo ahead of that hearing, which included Pichai as well as Facebook CEO Mark Zuckerberg and Twitter’s then-CEO Jack Dorsey as witnesses, Rep. Frank Pallone Jr., D-N.J., chairman of the U.S. House Energy and Commerce Committee, said social media operators were failing to effectively stanch the flow of misinformation and extremist postings.

“Facebook, Google, and Twitter operate some of the largest and most influential online social media platforms reaching billions of users across the globe,” Pallone wrote. “As a result, they are among the largest platforms for the dissemination of misinformation and extremist content.

“These platforms maximize their reach — and advertising dollars — by using algorithms or other technologies to promote content and make content recommendations that increase user engagement. Users of these platforms often engage more with questionable or provocative content, thus the algorithms often elevate or amplify disinformation and extremist content.”
