The Algorithm Accountability Act (S.3193), proposed this week by Sens. John Curtis and Mark Kelly, is an incredibly important piece of legislation that would amend Section 230 of the Communications Act. For the first time, social media companies would be held accountable for the proprietary algorithms that feed users recommended and promoted content. If those algorithms result in foreseeable harm to users, as they clearly have in many cases, companies could be held liable in a court of law, even when user agreements contain mandatory arbitration clauses. Section 230 would no longer function as a shield for social media companies.

Algorithms are the if-then instructions social media sites use to infer who we are from our characteristics and behavior, and then to push content onto us that is designed to keep us engaged — that is, to keep us on the site. Algorithms are at the heart of the social media business model: by feeding us material that keeps us engaging with the platform, they allow the company to monetize our captured attention, such as by exposing us to targeted ads.
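To make the pattern concrete, here is a minimal sketch, in Python, of what engagement-driven ranking can look like. It is an illustration only: every name, signal and weight below is an invented assumption, not any platform's actual code.

```python
# A deliberately simplified, hypothetical sketch of engagement-driven ranking.
# No real platform's code or signals appear here; every name and weight is
# an illustrative assumption.

from dataclasses import dataclass

@dataclass
class Post:
    topic: str
    emotional_intensity: float  # 0.0 (neutral) to 1.0 (extreme); an assumed score

def predicted_engagement(post: Post, user_interests: dict[str, float]) -> float:
    """Estimate how long a post will hold this user's attention."""
    topic_affinity = user_interests.get(post.topic, 0.0)
    # Engagement models tend to reward content that provokes strong reactions,
    # so emotional intensity is weighted alongside topical interest.
    return 0.6 * topic_affinity + 0.4 * post.emotional_intensity

def rank_feed(posts: list[Post], user_interests: dict[str, float]) -> list[Post]:
    # The feed is simply whatever the model predicts will hold attention longest.
    return sorted(posts, key=lambda p: predicted_engagement(p, user_interests), reverse=True)

# Even with zero expressed interest in a topic, sufficiently provocative
# content can outrank what the user actually follows.
feed = rank_feed(
    [Post("sports", 0.2), Post("outrage", 0.95)],
    user_interests={"sports": 0.4},
)
print([p.topic for p in feed])  # ['outrage', 'sports']
```

Note what even this toy version makes plain: nothing in the scoring asks whether the content is good for the user, only whether it will hold attention.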

These algorithms can work on even the barest information about a user. Consider a recent experiment in which journalists created a brand-new email address to sign up for Facebook, providing only that the account’s owner was a 24-year-old male. To explore how Facebook’s secret algorithms work, they took no action that might indicate any preferences whatsoever — no “liking,” posting or interacting with any content on the site. By the third month, “highly sexist and misogynistic images … appeared in the feed without any input from the user.”

What is shown is not random information by any means. We now know from the testimony of Frances Haugen, formerly of Facebook, and other whistleblowers that the key to achieving high levels of engagement on a social media site is to elicit an emotional reaction, not an intellectual one. The strongest emotions — outrage, titillation, envy, despair — are the best bets. And so social media companies have a vested interest in algorithms that move us, step by step, toward these extreme emotions.

We have a colloquial term for that algorithmically guided path; we call it “going down the rabbit hole.” For example, perhaps you’re feeling a bit sad. As you interact with a social media site, it picks up on that mood through its own secret sensors — not only reading what you type but also tracking other cues about what you are doing. The algorithm will start resonating with that mood, recommending sad videos. And then it starts recommending really sad videos. And then it starts recommending heartbreaking videos. And then it starts recommending despairing, hopeless videos … anything to keep you clicking. It has moved beyond echoing your low mood to profoundly intensifying it, so that you keep engaging with the platform.
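Stripped to its essentials, the rabbit-hole dynamic is a feedback loop: engagement is rewarded with something slightly more extreme. The sketch below is again a hypothetical illustration; the update rule and the numbers are invented for clarity and drawn from no real recommender.

```python
# A hypothetical sketch of the rabbit-hole feedback loop described above.
# The update rule and numbers are illustrative assumptions, not a real system.

def next_recommendation_intensity(current: float, user_engaged: bool) -> float:
    """Ratchet the emotional intensity of served content based on engagement."""
    if user_engaged:
        # Engagement is rewarded with something a little more extreme.
        return min(1.0, current + 0.1)
    # Back off slightly only if the user disengages.
    return max(0.0, current - 0.05)

# Starting from mildly sad content (0.3), a handful of engaged clicks
# walks the feed to maximum intensity: despairing, hopeless material.
intensity = 0.3
for click in range(1, 8):
    intensity = next_recommendation_intensity(intensity, user_engaged=True)
    print(f"click {click}: intensity {intensity:.2f}")
```

The loop never asks why the user keeps clicking; a depressed user who cannot look away is indistinguishable, to the model, from a delighted one.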

The harm is foreseeable; the harm is even, in a sense, designed to occur: the purpose of a rabbit hole is to go down it. The case of Molly Russell has always broken my heart because this is precisely what happened to her.

Molly Russell was only 14 when she hanged herself in the U.K. in November 2017. Almost five years later, the official coroner’s inquest found that “Molly Rose Russell died from an act of self-harm whilst suffering from depression and the negative effects of online content.”

Haugen, commenting on the case, stated that the algorithms likely showed Russell harmful content before she had even searched for it. If Molly had typed even something innocuous about being blue or feeling sad, the algorithms would have pushed material on her from much further down the rabbit hole. The algorithms were designed to induce binge-watching, and they were successful: in the last six months of her life, Russell was engaging with an average of 130 posts a day.

The child psychiatrist giving testimony at the inquest, Dr. Navin Venugopal, watched the material Russell was fed. He stated, “I will talk about the effect the material had on my state of mind. I had to see it over a short period of time, and it was very disturbing, distressing. There were periods where I was not able to sleep well for a few weeks so bearing in mind that the child saw this over a period of months I can only say that she was (affected) — especially bearing in mind that she was a depressed 14-year-old. It would certainly affect her and made her feel more hopeless.”

The Russell family’s lawyer had to seek therapy after reviewing some of the content Russell had seen, commenting, “It keeps sucking you deeper, I could feel it happening to myself and I’m a resilient adult.”

We cannot continue to allow ourselves to be preyed upon by social media companies that put their economic self-interest before safety concerns. While there are numerous little hacks designed to disable algorithmically delivered content, what individuals and families need now is for our government to step up to the plate.

Utah has been a leader among the U.S. states in attempting to protect children from social media harm. Other nations, such as Australia and Denmark, go even further, moving to ban social media for children altogether. There is much that should already have been done to protect children from the psychologically destabilizing effects of living online, and legislators like Sen. Curtis are starting a long-overdue regulatory revolution.

The Algorithm Accountability Act, if passed, would be a huge victory for parents, for children and for all Americans. Sen. Curtis is to be commended for his strong action on behalf of us all, cementing Utah’s status as a true vanguard in the fight against the harms of social media and nascent AI.

I also recommend legislation mandating that the default mode on social media be no algorithmically pushed content whatsoever. That is, you should have to opt in to be shown pushed content. Massachusetts is already proposing such a default to protect teens. I suggest that algorithmic opt-out by default should be a basic right for all consumers.

We are not helpless in the face of predatory algorithms; Sen. Curtis has shown this through his bold proposal. The Algorithm Accountability Act deserves all our support.
