SALT LAKE CITY — If there’s a throughline when it comes to talking about deepfakes, it’s that they’re getting uncomfortably easy to make.
According to The Verge, researchers from Stanford, Princeton, Adobe Research and the Max Planck Institute for Informatics have developed new software that allows users to add, delete or alter words spoken in a video. A demonstration of the software can be seen in a video on YouTube.
“To create the video fakes, the scientists combine a number of techniques. First, they scan the target video to isolate phonemes spoken by the subject,” The Verge writes. “They then match these phonemes with corresponding visemes, which are the facial expressions that accompany each sound. Finally, they create a 3D model of the lower half of the subject’s face using the target video.”
Basically, the software requires about 40 minutes of video of the subject, which it breaks down into raw data that can then be used to generate new content. The new audio and video are then inserted into the original footage to create a fairly convincing fake — certainly more accurate than the artificial intelligence that imitates comedian Joe Rogan, which is itself impressive.
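The pipeline described above — isolate phonemes, map them to visemes, then drive a face model with the new sequence — can be illustrated with a toy sketch. Everything here is an assumption for illustration: the function names, the tiny lexicon and the phoneme-to-viseme table are made up and are not the researchers' actual code or data.

```python
# Toy sketch of the editing pipeline, not the researchers' implementation.
# Many phonemes share one mouth shape (viseme), so the table is many-to-one.
PHONEME_TO_VISEME = {
    "p": "closed_lips", "b": "closed_lips", "m": "closed_lips",
    "aa": "open_jaw", "iy": "spread_lips",
    "s": "teeth_together", "z": "teeth_together",
}

def transcript_to_phonemes(words):
    """Stand-in for step 1: a real system derives phonemes from the audio.
    Here we just use a tiny hand-made lexicon (illustrative only)."""
    lexicon = {"pass": ["p", "aa", "s"], "bees": ["b", "iy", "z"]}
    return [ph for w in words for ph in lexicon[w]]

def phonemes_to_visemes(phonemes):
    """Step 2: map each phoneme to the mouth shape that accompanies it."""
    return [PHONEME_TO_VISEME[ph] for ph in phonemes]

def edit_line(words, replacements):
    """Steps 3-4, sketched: swap words in the transcript, then emit the
    viseme sequence a renderer would use to animate the 3D face model."""
    new_words = [replacements.get(w, w) for w in words]
    return phonemes_to_visemes(transcript_to_phonemes(new_words))

# Swapping "pass" for "bees" changes the mouth shapes the renderer must show.
print(edit_line(["pass"], {"pass": "bees"}))
```

The real system's hard part is the final rendering step, which this sketch omits: blending the synthesized lower face back into the original frames convincingly.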
The Verge also notes that out of 138 test participants, 60 percent thought the edited videos were real. That’s a scary statistic, considering the Guardian reports that deepfakes can be used to spread misinformation that can be hard for some to detect.
Fortunately, Ars Technica reports that as convincing as deepfakes are, they’re currently still possible to identify. Fine details tend to disappear and AIs with limited data tend to show their seams. However, improved technology will likely make fake videos more lifelike.
But deepfakes can also have a fun use — I reported last month for the Deseret News that the Dalí Museum in Florida used AI and actors to create an interactive exhibit bringing the artist Salvador Dalí back to life.