Robot learns how to grab objects by analyzing them in simulated reality

Our hands are pretty great at picking up all manner of objects, and our brains are finely tuned for working out exactly where and how to grip something most securely. That's not so easy for a robot. Faced with a world full of oddly shaped objects to pick up and manipulate, there's no simple way to program a robot with the precise grip it should use for every single object it might encounter.

That's where researchers from the University of California, Berkeley come in. They've developed a system called DexNet 2.0 that works out how to perform this task not by endlessly practicing in real life, but by analyzing objects in simulated reality, courtesy of a deep-learning neural network.


“We construct a probabilistic model of the physics of grasping, rather than assuming the robot knows the true state of the world,” Jeff Mahler, a postdoctoral researcher who worked on the project, told Digital Trends. “Specifically we model the robustness, or probability of achieving a successful grasp, given an observation of the environment. We use a large dataset of 1,500 virtual 3D models to generate 6.7 million synthetic point clouds and grasps across many possible objects. Then we can learn to predict the probability of success of grasps given a point cloud using deep learning. Deep learning allows us to learn this mapping across such a large and complex dataset.”
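To make the idea concrete, here is a deliberately tiny sketch of the learning setup Mahler describes: generate synthetic (observation, grasp-outcome) pairs in simulation, then fit a model that predicts the probability a grasp will succeed given the observation. This is not Dex-Net's actual pipeline (which trains a convolutional network on rendered depth images); the feature vectors, logistic model, and all function names below are illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def synthesize_dataset(n_samples=2000, n_features=16):
    """Generate synthetic (observation, grasp-success) pairs, standing in
    for the millions of simulated grasps Dex-Net trains on. Each row of X
    is a toy stand-in for a point-cloud observation of a candidate grasp."""
    X = rng.normal(size=(n_samples, n_features))
    true_w = rng.normal(size=n_features)
    # Ground-truth "robustness": probability this grasp succeeds.
    p_true = 1.0 / (1.0 + np.exp(-(X @ true_w)))
    # Each simulated trial samples a binary success/failure outcome.
    y = (rng.random(n_samples) < p_true).astype(float)
    return X, y

def train_logistic(X, y, lr=0.1, epochs=200):
    """Fit the success-probability model by gradient descent on the
    logistic (cross-entropy) loss."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def predict_robustness(w, x):
    """Predicted probability that a grasp with observation x succeeds."""
    return 1.0 / (1.0 + np.exp(-(x @ w)))

X, y = synthesize_dataset()
w = train_logistic(X, y)
probs = predict_robustness(w, X)
```

A robot using a model like this would score many candidate grasps on a new object and execute the one with the highest predicted robustness; the deep-learning version replaces the logistic regression with a network that reads the raw point cloud.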


The most obvious application for DexNet would be to improve robots used in warehousing or manufacturing, enabling them to cope with new components or other objects and to manipulate them, whether packing them into boxes for shipping or performing assemblies. However, as Mahler points out, the technology could also improve the capabilities of home robots, such as those that clean up items or provide assistive care, for instance bringing items to elderly folks who can't otherwise reach them.

There’s still more work to be done, though. “The big thrust of research in the next year is related to having the robot grasp for a particular use case,” Mahler said. “For example, orienting a bottle so it can be placed standing up or flipping Lego bricks over to plug them into other bricks.”

Other items on the agenda include grasping objects in clutter and reorienting objects for assembly. The team also plans to release the necessary code later in 2017, letting users generate their own training datasets and deploy the system on their own parallel-jaw robots.

“We have some interest in commercialization, but are primarily interested in furthering research on the subject in the next 6-12 months,” Mahler concluded.

Luke Dormehl
I'm a UK-based tech writer covering Cool Tech at Digital Trends. I've also written for Fast Company, Wired, the Guardian…