WASHINGTON — Sen. John Curtis, R-Utah, is challenging major tech companies to change their recommendation algorithms, accusing platforms of surfacing content that could radicalize users.
During a hearing with Big Tech executives on Wednesday, Curtis pressed top officials from companies such as Google on what drives their content amplification systems and how those systems affect users, arguing federal law should not shield platforms from liability when individuals commit harmful acts influenced by their online activity.
“We all know that Section 230 was meant to protect platforms that acted in good faith,” Curtis said, referring to the provision of the Communications Decency Act that shields providers from liability for content their users post. “But when an algorithm downranks speech or drives users towards extremism because it’s good for engagement, is that really good faith moderation? And should Section 230 immunity apply when you as a company or industry make decisions that magnify certain content and downgrade other content?”
Curtis compared the current online landscape to Senate hearings in the 1990s, during which tobacco companies testified that nicotine was not addictive and that tobacco had no harmful effects, claims that later proved false.
Curtis challenged Markham Erickson, a lawyer representing Google, asking whether future studies will show that social media algorithms harm users, with that harm driven by “internal conversations that says, ‘It’s good to have people stay on our platform longer?’”
“Senator, we want people to stay on our platforms,” Erickson replied.
Curtis also called for increased regulation of social media platforms, rejecting arguments from some advocates that the tech companies are protected by the First Amendment and insisting that executives must “own” their decisions about what content to amplify on their sites.
“Interference starts when tech companies apply an algorithm to content,” Curtis said.
