Picking things up is such a fundamental skill for robots, and robots have been picking things up for so long, that it’s sometimes difficult to appreciate how challenging grasping still is. Robots that are good at grasping usually depend on high-quality sensor data along with some amount of advance knowledge about the objects they’re going to be grasping. Where grasping gets really tricky is when you’re trying to design a system that can use standardized (and affordable) grippers and sensors to reliably pick up almost anything, including that infinitely long tail of objects that are, for whatever reason, weird and annoying to grasp.
One way around this is to design grasping hardware that uses clever tricks (like enveloping grasps or adhesives) to compensate for not really knowing the best way to pick up a given object, but this may not be a long-term sustainable approach: Solving the problem in software is much more efficient and scalable, if you can pull it off. “I’ve been studying robot grasping for 30 years and I’m convinced that the key to reliable robot grasping is the perception and control software, not the hardware,” Ken Goldberg, a professor of robotics and director of the AUTOLAB at UC Berkeley, told us this week.
Today, Professor Goldberg and AUTOLAB researcher Jeff Mahler are announcing the release of an enormous dataset that provides the foundation for Dex-Net 2.0, a project that uses neural networks to develop highly reliable robot grasping across a wide variety of rigid objects. The dataset consists of 6.7 million object point clouds, each paired with a parallel-jaw gripper pose and a robustness estimate of how likely that grasp is to lift and carry the object, and now you can use it to train your own grasping system.
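To make the shape of such a dataset concrete, here is a minimal sketch of what one training example might look like and how you could filter examples by robustness before training. The field names, array shapes, and threshold here are illustrative assumptions, not the actual Dex-Net 2.0 schema:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class GraspExample:
    """One hypothetical training example: an object point cloud, a candidate
    parallel-jaw gripper pose, and an estimated robustness label."""
    point_cloud: np.ndarray   # (N, 3) array of x, y, z points on the object
    gripper_pose: np.ndarray  # (7,) array: 3D position plus quaternion orientation
    robustness: float         # estimated probability that the grasp succeeds

def filter_robust(examples, threshold=0.9):
    """Keep only grasps whose estimated robustness exceeds the threshold,
    e.g. to build a positive set for training a grasp-quality classifier."""
    return [ex for ex in examples if ex.robustness > threshold]

# Toy usage with random data standing in for the real dataset
rng = np.random.default_rng(0)
examples = [
    GraspExample(
        point_cloud=rng.standard_normal((128, 3)),
        gripper_pose=rng.standard_normal(7),
        robustness=float(rng.uniform()),
    )
    for _ in range(10)
]
robust = filter_robust(examples, threshold=0.5)
```

In a real pipeline, the labeled examples would feed a neural network that scores candidate grasps from depth-sensor data; the sketch above only shows the sample structure and a simple label-based filter.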