
Two left feet? No problem. This A.I. can turn anyone into a dancer


Are you a terrible dancer who dreams of one day starring in a toe-tapping music video that would have made Michael Jackson jealous? If so, you’ve got two options: go the Napoleon Dynamite route and put in some serious practice, or simplify the process by taking advantage of some cutting-edge artificial intelligence.


Since you’re still reading and not off YouTubing “How to dance” videos, we’re going to assume the second option is the one that appeals to you more. If so, you have researchers at the University of California, Berkeley, to thank. Using the kind of “deepfake” technology that makes it possible to carry out realistic face-swaps in videos, they have developed a tool that can make even the most bumbling and uncoordinated among us look like experts.

“We have developed a method to transfer dance motions from one individual — a professional dancer — to another, [who we’ll refer to as ‘Joe’ for this example,]” Shiry Ginosar, a Ph.D. student in Computer Vision at UC Berkeley, told Digital Trends. “In order to do this, we take a video of Joe performing all kinds of motions. We use this video to train a generative adversarial network to learn a model of how Joe looks and moves. When we have learned this model, we can take a stick figure of a body pose as input and generate a still photograph of Joe performing that body pose as output. If we have a whole video of a dancing stick figure, we can generate a whole video of Joe dancing in the same way. Now, given a video of the professional dancer, we extract the body pose of the dancer and go back to Joe and generate a video of him dancing in much the same way.”
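The pipeline Ginosar describes can be summarized as a sketch. To be clear, this is not the researchers' code: the class and function names below are illustrative stand-ins (a real system would use a pose detector such as OpenPose for `extract_pose`, and a trained generative adversarial network in place of `TargetModel`), and poses are reduced to plain joint lists for readability.

```python
def extract_pose(frame):
    """Stand-in for a pose detector: maps a video frame to a
    stick-figure representation. Here, a trivial placeholder that
    just reads off the joint positions."""
    return {"joints": frame["joints"]}


class TargetModel:
    """Stand-in for the generative model trained on videos of the
    target subject ('Joe'). Given a stick-figure pose, it renders a
    frame of the target performing that pose."""

    def __init__(self, name):
        self.name = name

    def render(self, pose):
        # A real GAN would synthesize a photorealistic image here;
        # this placeholder just records who is posed and how.
        return {"subject": self.name, "joints": pose["joints"]}


def transfer_dance(source_video, target_model):
    """For each frame of the professional dancer's video, extract the
    body pose, then render the target subject in that same pose."""
    return [target_model.render(extract_pose(f)) for f in source_video]


# A tiny two-frame 'video' of the professional dancer, with each
# pose given as a list of (x, y) joint coordinates.
dancer_video = [
    {"joints": [(0, 1), (1, 2)]},
    {"joints": [(0, 2), (2, 3)]},
]
joe = TargetModel("Joe")
output = transfer_dance(dancer_video, joe)
```

The key design point from the quote survives even in this toy form: the dancer's video contributes only pose information, while everything about appearance comes from the model trained on the target.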

Aside from the fun of being able to make anyone resemble an expert dancer, Ginosar said that dancing presents an interesting challenge for this kind of deepfake technology. That’s because it involves the entire human body moving in a fluid way, which is considerably different (and tougher) than the more static pose or face transfers which have been carried out so far.

A paper describing the work, titled “Everybody Dance Now,” is available to read on the arXiv preprint server. In addition to Ginosar, other researchers on the project included Caroline Chan, Tinghui Zhou, and Alexei Efros.

Luke Dormehl