The race to regulate powerful new artificial intelligence tools is in full swing as a growing number of industry experts, scientists and policymakers warn that the rising capabilities of the advanced technology could be marching down a path toward an epic future cataclysm, up to and including an as-yet-undefined extinction event.

Just last week, hundreds of distinguished scientists, academics and tech developers including Turing Award winners Geoffrey Hinton and Yoshua Bengio, and leaders of the major AI labs, including Sam Altman of OpenAI and Demis Hassabis of Google DeepMind, signed on in support of a single-sentence missive issued by the nonprofit Center for AI Safety.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement reads.

But while lawmakers and regulatory bodies in the U.S. and around the world scramble to catch up with AI developments that have leaped ahead unexpectedly in recent months, hoping to stave off some near-future calamity, AI-generated content has been fomenting concern for years and is only getting better.

And one definition of “better,” when it comes to work generated by AI systems, is that it’s increasingly difficult to detect or differentiate from the output of human creators.


The technology behind deepfake videos has been under development for decades, and many point to a series of 2017 postings on Reddit as the first widely distributed examples. These manipulations, made by a user operating under the name “deepfakes,” used face-swapping technology to replace the subjects of pornographic videos with celebrities. After the postings garnered a massive response, Reddit countered with new rules banning the explicit fakes. But the postings turned out to be an unfortunate bellwether of what was to come.

Deepfake detection firm DeepMedia reports the number of deepfake videos posted online has tripled so far in 2023 and estimates some 500,000 deepfake videos and voice recordings will be posted globally by the end of the year. The vast majority of those synthesized videos are pornographic.

But alongside the torrent of manipulated porn imagery, the output of these now ultra-advanced, AI-driven tools is showing up across the content spectrum.

Last month, an image purporting to show black smoke pouring out of a building near the Pentagon circulated widely on Twitter. Users quickly pointed out inconsistencies in the image, and the Department of Defense dismissed it as a fake in its own Twitter posting, but it generated enough concern to move the major stock indices, which dipped briefly in response to the false reports.

Spoofs tied to the upcoming 2024 presidential election cycle have also begun appearing. In April, the Republican National Committee released a political ad that used AI-generated video imagery to depict concocted post-election scenarios should President Joe Biden be reelected.

Along with the doom-and-gloom AI fakes are plenty of more lighthearted postings, including the work of Belgian visual effects specialist Chris Ume, who has posted a series of AI-created videos on TikTok featuring A-list film star Tom Cruise doing various silly and benign activities. Ume has said the work wasn’t meant to fool anyone, and he sees the spoofs as a way to raise awareness of how advanced deepfake technology has become. Experts are impressed by just how convincing the videos are.

“My first thought was they’re incredibly well done,” digital image forensics expert and University of California, Berkeley professor Hany Farid, who specializes in image analysis and misinformation, told NBC News. “They’re funny, they’re clever.”

So how do average internet users effectively navigate this fast-evolving world of fakery? And, perhaps more importantly, is there anything individuals can do to protect themselves from one day ending up as an unwitting character in an AI-generated video?

Dozens of companies are currently focused on building tools that can quickly and accurately separate the AI-generated faux from the real-world factual, and most are employing artificial intelligence itself to detect content produced by the very same kind of engines.

But a Utah startup, launched by a trio of local tech veterans, is pursuing a different angle, aiming to develop a process that enables easy certification of video, image and audio files, records that can then be used to locate and identify fakes and unauthorized uses of that material.

And the company, Salt Lake City-based Bunked, just earned its way into an exclusive accelerator program hosted by Amazon Web Services that’s working to give emerging companies in the artificial intelligence space a running start.

The system under development by Bunked aims to use what might best be described as a reverse search engine, one that can detect when and where content certified by its customers has been repurposed. If the use has been authorized, the content earns a certification badge confirming its legitimacy, along with a path back to the source. If it hasn’t, Bunked’s service will immediately flag it with a badge noting its falsehood.
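The article doesn’t detail how Bunked’s system works under the hood, but the certify-then-check flow described above can be illustrated with a minimal Python sketch. Everything below is a hypothetical stand-in, not Bunked’s actual design: the ContentRegistry class, the exact SHA-256 fingerprint and the in-memory dictionary are assumptions made purely for illustration.

```python
import hashlib
from dataclasses import dataclass


@dataclass
class Certificate:
    """Result of a lookup: a badge plus a path back to the source, if any."""
    badge: str          # "certified" or "unverified"
    source: str | None  # URL of the registered original, when known


class ContentRegistry:
    """Toy registry mapping content fingerprints to their registered sources."""

    def __init__(self):
        self._registry: dict[str, str] = {}

    @staticmethod
    def fingerprint(content: bytes) -> str:
        # An exact cryptographic hash, used here only for simplicity.
        return hashlib.sha256(content).hexdigest()

    def certify(self, content: bytes, source_url: str) -> str:
        """A creator registers original content, anchoring it to a source URL."""
        digest = self.fingerprint(content)
        self._registry[digest] = source_url
        return digest

    def check(self, content: bytes) -> Certificate:
        """A copy found in the wild is checked against the registry."""
        source = self._registry.get(self.fingerprint(content))
        if source is not None:
            return Certificate(badge="certified", source=source)
        return Certificate(badge="unverified", source=None)


# Usage: register a clip, then check an exact copy and a tampered one.
registry = ContentRegistry()
registry.certify(b"original video bytes", "https://example.com/clip.mp4")
print(registry.check(b"original video bytes"))     # certified, with source path
print(registry.check(b"manipulated video bytes"))  # unverified
```

One caveat on the design choice: an exact hash only matches byte-identical copies, so a real system in this space would more plausibly rely on perceptual hashing or embedding similarity, letting re-encoded, cropped or lightly edited copies still resolve back to the registered original.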

Bunked co-founder and CEO Cameron Bell said the rate at which AI capabilities are advancing, and the growing access to advanced generative AI tools that non-technicians can operate easily, underscore the critical need for more robust detection tools.

“I don’t think anyone saw this coming as quickly as it did,” Bell said. “We can now be scrolling through a timeline of fake content that is completely generated by a GPU, not a person.”

Bunked’s initial work is focused on protecting “personhood.” But the founders say they aim to build out the system to enable certification, and detection of reuse, of protected content such as a musician’s voice or recordings, the works of a visual artist or other original material.

Bunked is now several weeks into a generative AI accelerator program hosted by cloud computing giant Amazon Web Services. The Utah company was one of only 21 startups selected from over 1,200 applicants. Accelerator cohort participants get access to a wide range of technical support, mentorship and networking opportunities in a fast-growing sector.

The Amazon accelerator participants also get a dividend that’s extremely valuable to newly launched companies that, like Bunked, are self-financing early development efforts: $300,000 worth of AWS cloud computing time, a precious commodity for the processing-heavy tasks of AI-related businesses.

Kathryn Van Nuys, head of startup business development in North America for Amazon Web Services, said the generative AI accelerator effort grew out of the company’s recognition of a new wave of startups, a natural outgrowth of advances in foundational AI systems from the likes of OpenAI, Google DeepMind and others.

“Where we saw a need was to work with and support the best and brightest startups in the generative AI space,” Van Nuys said. “AWS is uniquely positioned to do so with our deep experience in machine learning and AI.”


Van Nuys said interest in the program far outpaced expectations, and the cohort represents a wide range of AI business ideas, including animation, a tool to customize children’s literacy programs and a company aiming to protect enterprise data sets.

On top of the developmental support, Bunked and its fellow cohort companies will wrap up the program with a demo day in July that includes an opportunity to meet with potential investors. And, much like the interest shown by program applicants, Van Nuys said, there is a fervent buzz among investors keen to be part of the explosion in AI startup ideas.

For now, Bell and fellow Bunked co-founders Cody Maughan and Sam Tanner are focused on embracing the positive potential of advanced AI tools while developing the next big thing in helping individuals and businesses face myriad new challenges around accountability and reliability in the realm of digital content.

“Now that we’re all connected to everyone, you don’t know where things come from,” Bell said. “We’ve expanded our world and trust has become an increasingly scarce commodity in today’s world. Reestablishing that trust is what we’re trying to provide.”
