It is hard to believe that more than four years have passed since Frances Haugen, a former Facebook employee, testified before the U.S. Senate that the company’s algorithms were designed to create addictive levels of engagement, and that even though Facebook knew teens were being harmed, it did not change its practices.
The business model of social media companies is based on engagement, measured by the amount of time a person spends consuming and reacting to content on their sites. The greater the engagement, the more information a site can extract about you and sell to advertisers and others. These companies look for ways to keep you on their sites longer; they must keep your interest high.
And so they have developed algorithms that use machine learning to figure out what keeps you excited and engaged for as long as possible. The key is to elicit an emotional reaction, not a cerebral one. The strongest emotions — outrage, titillation, envy, despair — are the best bets.
No wonder, then, as Haugen noted, “The result has been a system that amplifies division, extremism and polarization — and undermining societies around the world. In some cases, this dangerous online talk has led to actual violence that harms and even kills people. In other cases, their profit-optimizing machine is generating self-harm and self-hate — especially for vulnerable groups, like teenage girls.”
Congress has failed to act, even though good bills have been proposed, such as Utah Sen. John Curtis’ Algorithm Accountability Act. The states have been forced to step into the breach, but social media companies have sought injunctions to keep state regulations from taking effect, claiming, alongside free speech arguments, that only the impotent federal government can regulate them because social media is by its very nature interstate commerce.
Of course, the American way of getting around all of this stalemated government we call democracy is to sue companies directly for the harm they cause. Even without new legislation, if companies know that those they’ve harmed can successfully sue them for megabucks, they will rein in their behavior. Suing these companies has been tricky, however, because Section 230 of the Communications Decency Act says platforms are not responsible for content posted by third parties. A winning liability strategy had not yet been tested in court.
Until, I believe, now.
Two trials have begun, one in Los Angeles and one in New Mexico. It’s the one in Los Angeles, against Meta and YouTube, that I have my eye on, because it is debuting a new strategy against social media companies’ claim that Section 230 and the free speech rights bound up with it protect them from being sued over harmful user-generated content.
Rather than attack the content itself head-on, lawyers in the Los Angeles case are attacking how the content is delivered. This is a smart move, because Section 230 shields platforms from liability for the content they host, not for the manner in which they deliver it. The lawyers will argue that Meta and YouTube deliver user-generated content in a manner guaranteed to create addiction, particularly in children, and that the companies did so purposefully. That is, the companies deliberately chose an addictive form of delivery over other possible forms.
As the lead lawyer for the plaintiffs, Mark Lanier, put it, “They didn’t just build apps, they built traps. They didn’t want users, they wanted addicts.” Lanier’s team will argue that the chosen mechanisms of addiction included the companies’ algorithmic recommendations, the infinite scroll of the personalized feed, and the automatic playing of videos in that feed. Together, these features amount to addictive design.
The internal documents are damning. In addition to what Haugen produced earlier, discovery in this and other trials shows company employees openly discussing the addictive nature of their products. Documentation from another trial reads: “Internally, Meta researchers minced no words: ‘IG (Instagram) is a drug. … We’re basically pushers.’ … ‘Teens are hooked despite how it makes them feel. Instagram is addictive, and time-spend on platform is having a negative impact on mental health.’”
Meanwhile, TikTok researched what “fuels usage and adoption” in high schools. Snap acknowledged that “the ones that have the Snapchat addiction have no room for anything else. Snap dominates their life.” And YouTube — the platform most widely used by U.S. teens — recognized “growing concerns that excessive screen time is stunting physical, emotional, and social development of teens,” according to court documentation.
There are now dozens of lawsuits, not only in the U.S. but also in Europe, seeking to stop social media companies from continuing their business strategy of addictive engagement. I have argued for years that U.S. legislation must require that users be opted out of algorithmic recommendations on social media by default.
From what we’ve heard in testimony so far, Meta is arguing that “addiction” is clinically distinct from “problematic use,” and that if the issue is problematic use, the fault lies with the user. YouTube’s strategy appears to be to deny that it is a social media company at all. If that’s the best they’ve got, I smell blood in the water. May the sharks in pin-striped suits get them where it hurts.
