That turtle is a gun! MIT scientists highlight major flaw in image recognition


When is a rifle actually a 3D-printed turtle? When is an espresso actually a baseball? A fascinating yet alarming new piece of research from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) shows that it’s possible to create objects that trick Google’s image recognition algorithms into thinking they are looking at something else entirely.

In their paper, the team of MIT researchers describe an algorithm that changes the texture of an object just enough to fool image classification algorithms. These “adversarial examples,” as the team calls them, baffle image recognition systems regardless of the angle from which the objects are viewed — such as the 3D-printed turtle that is consistently identified as a rifle. That’s bad news for security systems that use A.I. to spot potential threats.

“It’s actually not just that they’re avoiding correct categorization — they’re classified as a chosen adversarial class, so we could have turned them into anything else if we had wanted to,” researcher Anish Athalye told Digital Trends. “The rifle and espresso classes were chosen uniformly at random. The adversarial examples were produced using an algorithm called Expectation Over Transformation (EOT), which is presented in our research paper. The algorithm takes in any textured 3D model, such as a turtle, and finds a way to subtly change the texture such that it confuses a given neural network into thinking the turtle is any chosen target class.”
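The core idea behind EOT, as Athalye describes it, is to optimize a perturbation so that the classifier is fooled in expectation over a distribution of transformations (pose, lighting, camera noise), rather than for a single fixed image. The sketch below is purely our illustration of that idea on a toy linear softmax classifier with additive-noise "transformations" — it is not the authors' code, and the model, noise distribution, and hyperparameters are all invented for demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)

D, C = 16, 3                      # toy input dimension and class count
W = rng.normal(size=(C, D))       # stand-in "classifier": linear + softmax

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def transform(x):
    # Stand-in for the real rendering transformations (viewpoint,
    # lighting, sensor noise): here, just additive Gaussian noise.
    return x + rng.normal(scale=0.1, size=x.shape)

def eot_attack(x, target, steps=300, lr=0.05, samples=8):
    """Gradient ascent on E_t[log p(target | transform(x))]."""
    x_adv = x.copy()
    for _ in range(steps):
        grad = np.zeros_like(x)
        for _ in range(samples):
            t = transform(x_adv)
            p = softmax(W @ t)
            # For a linear+softmax model, d log p(target) / dx
            # equals W[target] - p @ W.
            grad += W[target] - p @ W
        x_adv += lr * grad / samples
    return x_adv

x = rng.normal(size=D)            # the "turtle"
target = 2                        # the chosen adversarial class ("rifle")
x_adv = eot_attack(x, target)

# Evaluate: the adversarial input should land in the target class
# across fresh, previously unseen random transformations.
hits = sum(np.argmax(softmax(W @ transform(x_adv))) == target
           for _ in range(50))
```

The key design point mirrors the quote: because the expectation is taken over the transformation distribution during optimization, the resulting perturbation survives transformations it has never seen, which is what lets a physical 3D-printed texture keep fooling the network from many angles.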


While it might be funny to have a 3D-printed turtle recognized as a rifle, the researchers point out that the implications are pretty darn terrifying. Imagine, for instance, a security system that uses A.I. to flag guns or bombs but can be tricked into thinking they are tomatoes, or cups of coffee, or even entirely invisible. It also underlines the frailty of the kind of image recognition systems self-driving cars will rely on, at high speed, to discern the world around them.

“Our work demonstrates that adversarial examples are a bigger problem than many people previously thought, and it shows that adversarial examples for neural networks are a real concern in the physical world,” Athalye continued. “This problem is not just an intellectual curiosity: It is a problem that needs to be solved in order for practical systems that use deep learning to be safe from attack.”

Luke Dormehl
I'm a UK-based tech writer covering Cool Tech at Digital Trends. I've also written for Fast Company, Wired, the Guardian…