Rep. Celeste Maloy, R-Utah, sponsored new bipartisan legislation that would make social media and other platforms legally responsible if they fail to act on abusive deepfake images and cyberstalking.
On Monday, Maloy and Rep. Jake Auchincloss, D-Mass., introduced the Deepfake Liability Act, a bill that would change how federal law treats websites and apps that host nonconsensual AI-generated sexual images and other intimate content.
“Abusive deepfakes and cyberstalking are harming people across the country, and victims deserve real help,” Maloy said in a press release. “Our bill creates a straightforward duty of care and a reliable process to remove harmful content when victims ask for help. Companies that take this seriously will keep their protections under the law. Those that do nothing will be held accountable.”
Maloy’s office noted that women and teenage girls are the overwhelming targets of nonconsensual deepfake pornography, which now makes up the majority of deepfake content online.
Changing Section 230 rules for AI content
The bill targets Section 230 of the Communications Decency Act, the law that has long shielded online platforms from being sued over most user-generated content.
The Deepfake Liability Act would condition those protections on whether a platform meets a new “duty of care.” To keep their immunity, companies would need to:
- Take basic steps to prevent cyberstalking and abusive deepfakes
- Respond to reports from victims
- Investigate credible complaints
- Remove intimate or privacy-violating content identified by those victims
The bill also clarifies that AI-generated content is not automatically covered by Section 230 immunity — a key change as generative tools make it easier to create convincing fake images and videos.
“AI shouldn’t have special privileges and immunities that journalists don’t get,” Auchincloss said in the press release, arguing that using bots or deepfakes to violate or stalk another person “needs to be a CEO-level problem for the trillion-dollar social media corporations that platform it. Congress needs to get ahead of this growing problem, instead of being left in the dust like we were with social media.”
Speaking about his broader “UnAnxious Generation” legislative package, Auchincloss told Time magazine that the Deepfake Liability Act is meant to move platforms from a “reactive” posture to a proactive one: Section 230 protections would hinge on actively working to prevent and remove deepfake porn and cyberstalking, not just responding when forced.
How it connects to the Take It Down Act
The new proposal is designed to build on a law that passed earlier this year: the federal Take It Down Act.
The Take It Down Act was co-sponsored by Sens. Ted Cruz, R-Texas, and Amy Klobuchar, D-Minn. First lady Melania Trump also strongly advocated for its passage. It passed the Senate by unanimous consent and cleared the House on a 409–2 vote before President Donald Trump signed it into law on May 19.
That law makes it a federal crime to “knowingly publish” or threaten to publish intimate images without a person’s consent, including AI-generated deepfakes. It also requires covered websites and social media platforms to remove such material — and make efforts to delete copies — within 48 hours after a victim reports it.
Enforcement is handled by the Federal Trade Commission, and platforms have until May 2026 to fully implement the required notice-and-removal systems.
The Deepfake Liability Act uses that same basic notice-and-removal framework but goes further by tying Section 230 protections to whether companies meet a clear duty of care.
Maloy and Auchincloss say that change would ensure that platforms that ignore reports of abuse no longer have the same legal shield as those that take active steps to protect victims.
Supporters say it closes a gap — critics warn about overreach
Advocates for reforming online liability say the new bill is a needed next step after Take It Down.
“The time is now to reform Section 230,” said Danielle Keats Citron, vice president of the Cyber Civil Rights Initiative and a longtime scholar of online abuse, per the release.
Citron said the Deepfake Liability Act contains a “well-defined duty of care” that would require platforms to prevent, investigate and remove cyberstalking, nonconsensual intimate images and digital forgeries. She also argued that it would close a loophole by making platforms responsible not only for content they help create but also for harmful content they “solicit or encourage.”
The Take It Down Act drew criticism from some free speech and digital rights groups, including the Electronic Frontier Foundation, which argued that its fast takedown deadlines and broad language could pressure platforms to over-remove content, rely heavily on automated filters and potentially sweep in lawful speech — such as news reporting, protest images or LGBTQ content — in the name of avoiding liability, per The Associated Press.
This new measure is part of a broader, bipartisan push to regulate AI-related harms and tighten rules for how tech companies handle children’s safety, online abuse and emerging threats from generative tools.
