
Thanks to A.I., there is finally a way to spot ‘deepfake’ face swaps online


The ability to use deep learning artificial intelligence to realistically superimpose one person’s face onto another person’s body sounds like good, wholesome fun. Unfortunately, it has a sinister side, too, as evidenced by phenomena like the popularity of “deepfake” pornography starring assorted celebrities. It’s part of a wider concern about fake news and the ease with which cutting-edge tech can be put to fraudulent use.

Researchers from Germany’s Technical University of Munich want to help, and they are turning some of the same A.I. tools against the problem. They have developed an algorithm called XceptionNet that quickly spots faked videos posted online. It could be used to identify misleading videos on the internet so that they can be removed when necessary, or, at the very least, to warn users when a video has been manipulated in some way.


“Ideally, the goal would be to integrate our A.I. algorithms into a browser or social media plugin,” Matthias Niessner, a professor in the university’s Visual Computing Group, told Digital Trends. “Essentially, the algorithm [will run] in the background, and if it identifies an image or video as manipulated it would give the user a warning.”

The team started by training a deep-learning neural network on a dataset of more than 1,000 videos and 500,000 images. By showing the computer both doctored and undoctored images, the machine learning tool was able to learn the differences between the two, even in cases where they would be difficult for a human to spot.
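The idea is classic supervised learning: label images as real or manipulated, and let a classifier learn the statistical artifacts that manipulation leaves behind. The real system fine-tunes the XceptionNet convolutional network on hundreds of thousands of frames; the following is only a toy sketch of the same setup, using a simple logistic-regression classifier on synthetic feature vectors (all data here is invented for illustration).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for image features: "fake" samples carry a subtle
# statistical artifact (a shifted mean in a few dimensions), analogous to
# the compression and blending artifacts a real forgery detector learns.
n, d = 2000, 32
real = rng.normal(0.0, 1.0, size=(n, d))
fake = rng.normal(0.0, 1.0, size=(n, d))
fake[:, :8] += 0.5  # the artifact: first 8 features slightly shifted

X = np.vstack([real, fake])
y = np.concatenate([np.zeros(n), np.ones(n)])  # 0 = real, 1 = fake

# Train logistic regression with plain gradient descent.
w, b = np.zeros(d), 0.0
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted P(fake)
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

preds = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
accuracy = float(np.mean(preds == y))
print(f"training accuracy: {accuracy:.2f}")
```

Even this toy model scores well above the 50-percent chance level on its synthetic data, which is the essence of the result: a learned classifier can pick up artifacts too subtle for humans to see.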

“For compressed videos, our user study participants could not tell fakes apart from real data,” Niessner continued. The A.I., on the other hand, can easily distinguish between the two. Where humans were right only 50 percent of the time, the equivalent of random guessing, the convolutional neural network classified compressed videos correctly anywhere from 87 percent to 98 percent of the time. That is particularly impressive, since compressed images and video are harder to distinguish than uncompressed pictures.

Compared to other fraudulent image-spotting algorithms, XceptionNet is way ahead of the curve. It’s another amazing illustration of the power of artificial intelligence and, in this case, of how it can be used for good.

A paper describing the work, titled “FaceForensics: A Large-scale Video Data Set for Forgery Detection in Human Faces,” is available to read online.

Luke Dormehl
I'm a UK-based tech writer covering Cool Tech at Digital Trends. I've also written for Fast Company, Wired, the Guardian…