Point-and-Click Method Makes Robot Grasping Control Less Tedious

Teleoperating complex robots is really hard, and Georgia Tech is working to fix that

Georgia Tech researchers developed a novel point-and-click robot grasping system
Image: Georgia Tech RAIL Lab via YouTube

Until all robots everywhere are autonomous all the time, humans are going to have to take over once in a while. This is going to happen more frequently as robots that are almost but not quite fully autonomous get deployed in residential, commercial, and industrial environments. When these robots get stuck on a task (and they definitely will get stuck), a human operator can hop in via telepresence to help them out. One problem, though, is that right now this teleoperation process is awfully tedious.

For most grasping tasks, when a robot needs help, a human has to manually position every single degree of freedom of the gripper while squinting at a low-resolution 3D point cloud. Georgia Tech researchers are working on making that process significantly less painful. Their approach gets rid of all of that manual positioning in favor of a friendly, interactive interface that takes care of everything with just one or two clicks.

Between the full manual and point-and-click grasping approaches shown in the above video, the researchers, from Georgia Tech’s Robot Autonomy and Interactive Learning (RAIL) lab, led by Professor Sonia Chernova, also implemented a middle-ground “constrained positioning” method, which intelligently limits the number of degrees of freedom that a user needs to position: the user selects only a grasp point, approach angle, and grasp depth. Put these approaches together and you get a spectrum of options for teleoperated grasping, ranging from full 6-DoF manual control, to 3-DoF constrained positioning, to single-click, mostly automated grasping. You can see all of these methods in action here.

Georgia Tech's point-and-click teleoperation system offers a spectrum of options for robot grasping, from full 6-DoF manual control to 3-DoF constrained positioning grasping to single-click mostly automated grasping.
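To make that trade-off concrete, here is a minimal sketch, in Python, of how the three interaction modes might be represented on the interface side. The names (Mode, GraspRequest, operator_burden) and the exact parameters are illustrative assumptions, not the RAIL lab’s actual interface.

```python
# Illustrative sketch of the three teleoperation modes described above.
# All names and fields here are hypothetical, not the published system's API.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

import numpy as np


class Mode(Enum):
    FREE_POSITIONING = auto()         # operator sets all 6 DoF of the gripper
    CONSTRAINED_POSITIONING = auto()  # operator sets grasp point, approach angle, depth
    POINT_AND_CLICK = auto()          # operator clicks a point; the rest is automated


@dataclass
class GraspRequest:
    mode: Mode
    clicked_point: np.ndarray               # 3D point selected in the point cloud
    approach_angle: Optional[float] = None  # used only for constrained positioning
    grasp_depth: Optional[float] = None     # used only for constrained positioning
    full_pose: Optional[np.ndarray] = None  # 4x4 gripper transform, free positioning only


def operator_burden(mode: Mode) -> int:
    """Number of values the operator must specify in each mode."""
    return {
        Mode.FREE_POSITIONING: 6,
        Mode.CONSTRAINED_POSITIONING: 3,
        Mode.POINT_AND_CLICK: 1,
    }[mode]
```

The point of the spectrum is visible in the last function: every step toward point-and-click shifts work from the operator to the autonomy.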

As the autonomy of these grasping approaches increases, the system makes greater use of scene information, although it’s not necessarily doing scene or object recognition. In other words, it needs basic depth data in order to help you out with grasping, but it doesn’t need to understand what it’s looking at, which makes it easy to scale up and deploy to new environments without training. All three approaches use only a mouse and keyboard, and are accessed through a friendly-looking web interface, which helps users get comfortable with the system. Comfort is an important point, because ideally you want non-expert users to be able to control your robots without getting frustrated at the task, the robot, and (ultimately) you, the robot owner.
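As a rough illustration of how a single click plus raw depth data could yield a grasp approach without recognizing the object, one common trick is to estimate the local surface normal around the clicked point and approach along it. The sketch below assumes exactly that; it is not the paper’s pipeline, and the helper names are hypothetical.

```python
# Sketch: derive an approach direction from a clicked point in a raw point cloud,
# using only local geometry (no object recognition). Assumptions, not the paper's method.
import numpy as np


def estimate_normal(cloud: np.ndarray, clicked: np.ndarray, radius: float = 0.02) -> np.ndarray:
    """Estimate the surface normal at `clicked` from nearby points in an Nx3 cloud."""
    neighbors = cloud[np.linalg.norm(cloud - clicked, axis=1) < radius]
    centered = neighbors - neighbors.mean(axis=0)
    # The singular vector with the smallest singular value of the centered
    # neighborhood approximates the surface normal.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    # Flip the normal so it points back toward the sensor at the origin.
    if np.dot(normal, -clicked) < 0:
        normal = -normal
    return normal / np.linalg.norm(normal)


def approach_pose(clicked: np.ndarray, normal: np.ndarray, standoff: float = 0.10):
    """Place the gripper `standoff` meters off the surface, aimed along the normal."""
    position = clicked + standoff * normal
    approach_direction = -normal  # move in toward the surface to grasp
    return position, approach_direction
```

Because nothing here depends on knowing what the object is, the same logic works on any scene the depth sensor can see, which is what makes this style of system easy to deploy somewhere new.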

A study of non-expert users showed that the point-and-click interface was the most effective, helping users to “complete a greater number of tasks more quickly, complete tasks more consistently, and make fewer mistakes,” as the RAIL lab researchers explain: 

While point-and-click had the clearest advantages over the other interfaces, constrained positioning does have a significant advantage over free positioning in reducing the number of errors made by users. The most frequent type of error was missing a grasp, in which the gripper failed to make contact with any part of the environment, object or otherwise. Both constrained positioning and point-and-click require the user to focus their end-effector positioning around a point cloud by clicking on a point to initiate the interaction. Constraining the interaction to a physical surface significantly reduces the number of missed grasps, resulting in more efficient use of the arm.
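One way to picture that failure mode: before committing to a grasp, check whether any depth points actually fall inside the gripper’s closing region at the planned pose. The toy check below makes the idea concrete; the box dimensions and frame conventions are illustrative assumptions, not part of the published system.

```python
# Toy "missed grasp" predicate: does anything lie between the fingers at this pose?
# Dimensions and frames are illustrative assumptions.
import numpy as np


def would_miss(cloud: np.ndarray, gripper_pose: np.ndarray,
               width: float = 0.08, depth: float = 0.04, height: float = 0.02) -> bool:
    """Return True if no cloud points fall inside the gripper's closing region.

    `gripper_pose` is a 4x4 transform from the gripper frame to the cloud frame;
    the closing region is modeled as a box centered between the fingertips.
    """
    # Transform the cloud into the gripper frame.
    to_gripper = np.linalg.inv(gripper_pose)
    homogeneous = np.hstack([cloud, np.ones((len(cloud), 1))])
    local = (to_gripper @ homogeneous.T).T[:, :3]
    inside = (
        (np.abs(local[:, 0]) < width / 2) &
        (np.abs(local[:, 1]) < depth / 2) &
        (np.abs(local[:, 2]) < height / 2)
    )
    return not inside.any()
```

Snapping the interaction to a clicked surface point essentially guarantees this check passes, which is why the constrained and point-and-click modes produced far fewer missed grasps in the study.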

“A Comparison of Remote Robot Teleoperation Interfaces for General Object Manipulation,” by David Kent, Carl Saldanha, and Sonia Chernova from Georgia Tech, was presented at HRI 2017 in Vienna, Austria.
