
Samsung’s new A.I. software makes generating fake videos even easier

Few-Shot Adversarial Learning of Realistic Neural Talking Head Models

A.I. is getting better and better at producing fake videos, for everything from amusingly inserting Nicolas Cage into movies to maliciously spreading fake news. Now Samsung has developed software that makes creating fake videos even easier.


The new A.I. software was developed at Samsung’s A.I. Center in Moscow. As described in a paper available on the pre-print archive arXiv, it is a new development in the technology. Previously, most deepfake software required a very large number of images of a particular person’s face in order to map that face onto a video. The new software, however, can create somewhat convincing fakes from just a few images of a person, and potentially even from a single image of a face.
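The few-shot idea described above can be sketched in miniature: a handful of source photos are compressed into a single identity embedding, which is then combined with pose landmarks taken from a driving video to render each frame. This is a conceptual toy, not the paper's actual model; the random projection standing in for a trained embedder, the concatenation standing in for a trained generator, and all dimensions are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed_identity(face_images):
    """Map each source photo of the target person to a feature vector,
    then average them into one identity embedding. With one photo this
    is the single-image case; more photos refine the estimate.
    (Toy stand-in: a fixed random projection instead of a trained CNN.)"""
    W = rng.standard_normal((64, face_images.shape[1]))  # hypothetical embedder weights
    return (face_images @ W.T).mean(axis=0)

def render_frame(identity_embedding, driving_landmarks):
    """Render one output frame by conditioning on both the person's
    identity and the pose/expression landmarks from the driving video.
    (Toy stand-in: concatenation instead of a trained generator.)"""
    return np.concatenate([identity_embedding, driving_landmarks])

# Few-shot setup: 3 photos of the target person, 1 landmark frame.
photos = rng.standard_normal((3, 128))   # three source images, flattened
landmarks = rng.standard_normal(10)      # facial keypoints from the driving video
frame = render_frame(embed_identity(photos), landmarks)
print(frame.shape)
```

The key point the sketch captures is that identity and motion are separate inputs: once the identity embedding exists, any sequence of driving landmarks can animate that face.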

The quality of fakes produced by A.I. still varies widely, and how convincing a fake will be depends on factors like the lighting of the original and target images and how similar the two are.

To demonstrate the new software, the Samsung team shared a video showing fun applications like “living portraits,” in which images of celebrities like Marilyn Monroe and Salvador Dali are brought to life. There’s even a video clip of the Mona Lisa, animated to show the abilities of the software.

But the potential for abuse of this technology is serious, as demonstrated in a doctored clip of politician Nancy Pelosi which is currently doing the rounds on Facebook.

The authors of the paper, Egor Zakharov and colleagues, are aware of this potential for abuse and seem mindful of it. “We believe that telepresence technologies in AR, VR and other media are to transform the world in the not-so-distant future,” they write on YouTube. “We realize that our technology can have a negative use for the so-called ‘deepfake’ videos. However, it is important to realize that Hollywood has been making fake videos (aka ‘special effects’) for a century, and deep networks with similar capabilities have been available for the past several years.”

The authors describe their software as democratizing special effects, and write that “the net effect of democratization on the [w]orld has been positive, and mechanisms for stemming the negative effects have been developed. We believe that the case of neural avatar technology will be no different.”

Arguably, the ability to doctor videos with this kind of software is not so different from the ability to doctor images with Photoshop. But as the software becomes more common, it’s important to remember that just because you see something in a video doesn’t mean it’s real.

Georgina Torbet
Georgina has been the space writer at Digital Trends for six years, covering human space exploration, planetary…