After President Joe Biden announced his campaign on Tuesday, the Republican National Committee released an ad made up of images generated from artificial intelligence.

Titled “Beat Biden,” the ad includes a small, faintly visible disclaimer in the top left corner that reads, “Built entirely with AI imagery.” The 30-second spot shows imagined scenes of Biden and Vice President Kamala Harris at a political event and shuttered buildings with text on screen asking viewers to imagine a disastrous second term under Biden.

“What if the weakest president we’ve ever had were re-elected?” the ad asks. “What if international tensions escalate? What if financial systems crumble?”

The images show a few telltale signs that they’re faked, like a campaign sign with the wrong logo and a close-up shot of Biden that looks unnatural (AI-generated images often do a poor job of rendering teeth and hands).

Warning about the worst possible outcomes of four more years under a political opponent is common in politics, but it’s unclear how voters will feel about AI-generated images in political ads. RNC chair Ronna McDaniel said the video was shared ethically because of its disclaimer.

“It is AI-generated so we’re sharing that up front, ethically, so it’s not a deepfake, every single image was AI, but we are painting a picture of a future Biden America,” McDaniel told Fox News. “Our border is overrun, crime is surging, he has not taken a stand on the national stage, he has kowtowed to China, his family is compromised by China.”

She said it was “important the American people see in a graphic way, in video, what four more years of this president would mean and the destruction it would mean for our country.”

Artificial intelligence could be useful to political professionals, for instance in drafting ad copy or scripts, because political rhetoric is rife with the kinds of clichés AI readily produces, panelists at an event hosted by the Project on Ethics in Political Communication said last month. But the technology has also sparked concerns over its ability to generate deceptive images and misinformation quickly and cheaply.


Before former President Donald Trump’s indictment, AI-generated images imagining him being tackled to the ground by officers went viral on social media, as did AI-generated images of the pope wearing a fashionable puffy coat.


During the 2016 campaign, the Boston Globe’s opinion section imagined what the U.S. would be like if Trump were elected, publishing a fake front page with headlines including “Deportations to begin” and “Markets sink as trade war looms.” At the time, Trump called the paper “stupid.”

“They pretended Trump is the president and they made up, the whole front page is a make-believe story, which is really no different from the whole paper,” he said. “The whole thing is made up.”

The Congressional Research Service wrote in a memo updated this month that deepfake images generated through AI could “be used for nefarious purposes.”

“State adversaries or politically motivated individuals could release falsified videos of elected officials or other public figures making incendiary comments or behaving inappropriately,” the report read. “Doing so could, in turn, erode public trust, negatively affect public discourse, or even sway an election.”
