Home Robot Control for People With Disabilities

Georgia Tech's augmented-reality interface gives control over complex robots to the people who need them

Henry Evans, a California man who participated in Georgia Tech's study, used a PR2 robot to shave, wipe his face, and scratch his head.
Photo: Henry Clever and Phillip Grice/Georgia Tech

Robots offer an opportunity to enable people to live safely and comfortably in their homes as they grow older. In the near future (we’re all hoping), robots will be able to help us by cooking, cleaning, doing chores, and generally taking care of us, but they’re not yet at the point where they can do those sorts of things autonomously. Putting a human in the loop can help robots be useful more quickly, which is especially important for the people who would benefit the most from this technology—specifically, folks with disabilities that make them more reliant on care.

Ideally, the people who need things done would be the people in the loop telling the robot what to do, but that can be particularly challenging for those with disabilities that limit how mobile they are. If you can’t move your arms or hands, for example, how are you going to control a robot? At Georgia Tech, a group of roboticists led by Charlie Kemp are trying to figure out how to make this work, by developing new interfaces that enable the control of complex robots through the use of a single-button mouse and nothing else.

One of the users involved in the Georgia Tech research is Henry Evans, who has been working with a PR2 (and other robotic systems) for many years through the Robots for Humanity project. Henry suffered a brain stem stroke in 2002, and is almost entirely paralyzed and unable to speak. Henry describes his condition in this way:

I had always been fiercely independent, probably to a fault. With one stroke I became completely dependent for everything—eating, drinking, going to the bathroom, scratching itches, etc. I would, to this day, literally die if someone weren’t around to help me, 24 hours a day. Most of us are able to take control over our own bodies for granted. Not me. Every single thing I want done, I have to ask someone else to do and depend on them to do it. They get tired of it. So do I, but whereas they can walk out of the room or pretend not to see my gestures, I cannot escape. People say I am very patient, and I am. It is only partly due to my nature. The basic truth is, I have no choice. 

Henry can move his eyes and click a button with his thumb, which allows him to use an eye-tracking mouse. With just this simple input device, he’s been able to control the PR2, a two-armed mobile manipulator, to do some things for himself, including scratching itches:

What the video doesn’t show is what most of this research is actually about: giving Henry, and other people, the ability to control the robot to get it to do all of this stuff. The PR2 is a very complicated robot, with an intimidating 20-plus degrees of freedom, and even for people with two hands on a game controller and a lot of experience, it’s not easy to teleoperate the robot through manipulation tasks. It becomes even more difficult if you’re restricted to controlling a very 3D robot through a very 2D computer screen. The key is a carefully designed Web interface that relies on multiple control modes and augmented reality to make even a complex robot intuitive to operate.

Our approach is to provide an augmented-reality (AR) interface running in a standard Web browser with only low-level robot autonomy. Many commercially available assistive input devices, such as head trackers, eye-gaze trackers, or voice controls, can provide single-button mouse-type input to a Web browser. The standard Web browser enables people with profound motor deficits to use the same methods they already use to access the Internet to control the robot. The AR interface uses state-of-the-art visualization to present the robot’s sensor information and options for controlling the robot in a way that people with profound motor deficits have found easy to use. 

With its autonomy limited to low-level operations, such as tactile-sensor-driven grasping and using inverse kinematics to move an arm to a commanded end-effector pose, the robot performs consistently across diverse situations, allowing the person to attempt to use the robot in diverse and novel ways.
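To make that division of labor more concrete, here’s a minimal sketch of one computation such a system needs: turning a single click on the 2D video feed into a 3D target that a low-level controller (and an inverse-kinematics solver) could act on. The deprojection math is the standard pinhole-camera model; the camera intrinsics, function names, and example values are illustrative assumptions, not taken from the Georgia Tech code.

```python
# Illustrative sketch (not the Georgia Tech code): mapping a single mouse click
# on the robot's video feed to a 3D target for a low-level controller.
# The intrinsics and depth are assumed to come from the robot's head-mounted
# RGB-D camera (a Kinect, in the PR2's case).

from dataclasses import dataclass
from typing import Tuple


@dataclass
class CameraIntrinsics:
    fx: float  # focal length in pixels (x)
    fy: float  # focal length in pixels (y)
    cx: float  # principal point (x)
    cy: float  # principal point (y)


def click_to_camera_frame(u: float, v: float, depth_m: float,
                          cam: CameraIntrinsics) -> Tuple[float, float, float]:
    """Deproject a clicked pixel (u, v) with a measured depth into a 3D point
    in the camera frame, using the standard pinhole camera model."""
    x = (u - cam.cx) * depth_m / cam.fx
    y = (v - cam.cy) * depth_m / cam.fy
    z = depth_m
    return (x, y, z)


if __name__ == "__main__":
    # Hypothetical intrinsics, roughly typical of a 640x480 RGB-D camera.
    cam = CameraIntrinsics(fx=525.0, fy=525.0, cx=319.5, cy=239.5)
    # A click slightly off-center on an object about 1.2 m away.
    target = click_to_camera_frame(u=350, v=260, depth_m=1.2, cam=cam)
    print("End-effector goal in the camera frame (m):", target)
    # A real system would transform this point into the robot's base frame
    # and hand it to an inverse-kinematics solver as an end-effector goal.
```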

A browser window shows the view through the PR2’s cameras of the environment around the robot, with superimposed augmented-reality elements. Clicking the yellow disc allows users to control the position of the arm.
Image: Phillip Grice/Georgia Tech

The interface is built around a first-person perspective, with a video feed streaming from the PR2’s head camera. Augmented-reality markers show controls in 3D space, provide visual estimates of how the robot will move when commands are executed, and relay feedback from nonvisual sensors, like tactile sensing and obstacle detection. One of the biggest challenges is adequately representing the robot’s 3D workspace on a 2D screen; a “3D peek” feature helps by overlaying a low-resolution, Kinect-based 3D model of the environment around the robot’s gripper and then simulating a camera rotation to convey depth. To keep the interface usable with only a mouse and single clicks, control is divided into different operation modes, including:

  • Looking mode: Displays the mouse cursor as a pair of eyeballs, and the robot looks toward any point where the user clicks on the video.
  • Driving mode: Allows users to drive the robot in any direction without rotating, or to rotate the robot in place in either direction. The robot drives toward the location on the ground indicated by the cursor over the video when the user holds down the mouse button, and three overlaid traces show the selected movement direction, updating in real time. “Turn Left” and “Turn Right” buttons over the bottom corners of the camera view turn the robot in place.
  • Spine mode: Displays a vertical slider over the right edge of the image. The slider handle indicates the relative height of the robot’s spine, and moving the handle raises or lowers the spine accordingly. These direct manipulation features use the context provided by the video feed to allow users to specify their commands with respect to the world, rather than the robot, simplifying operation.
  • Left-hand and right-hand modes: Allow control of the position and orientation of the grippers in separate submodes, as well as opening and closing the gripper. In either mode, the head automatically tracks the robot’s fingertips, keeping the gripper centered in the video feed and eliminating the need to switch modes to keep the gripper in the camera view.

The grippers also have submodes for position control, orientation control, and grasping. This kind of interface is not going to be the fastest way to control a robot, but for some, it’s the only way. And as Henry says, he’s patient.
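To illustrate the single-click, mode-based design described above, here is a rough Python sketch of how one click might be routed to different robot commands depending on the active mode. The mode names mirror the article, but the `RobotClient` methods and the dispatch structure are hypothetical stand-ins, not the actual Georgia Tech implementation.

```python
# Illustrative sketch of single-click, mode-based dispatch (not the actual
# Georgia Tech interface code). One input -- a click at pixel (u, v) -- is
# interpreted differently depending on the currently selected mode.

from enum import Enum, auto


class Mode(Enum):
    LOOKING = auto()
    DRIVING = auto()
    SPINE = auto()
    LEFT_HAND = auto()
    RIGHT_HAND = auto()


class RobotClient:
    """Hypothetical stand-in for the robot's low-level command interface."""

    def look_at_pixel(self, u: int, v: int) -> None:
        print(f"look toward pixel ({u}, {v})")

    def drive_toward_pixel(self, u: int, v: int) -> None:
        print(f"drive toward the ground point under pixel ({u}, {v})")

    def set_spine_height(self, fraction: float) -> None:
        print(f"set spine height to {fraction:.2f} of its range")

    def move_gripper_toward_pixel(self, side: str, u: int, v: int) -> None:
        print(f"move the {side} gripper toward pixel ({u}, {v})")


def handle_click(mode: Mode, u: int, v: int, robot: RobotClient,
                 image_height: int = 480) -> None:
    """Route a single mouse click according to the active mode."""
    if mode is Mode.LOOKING:
        robot.look_at_pixel(u, v)
    elif mode is Mode.DRIVING:
        robot.drive_toward_pixel(u, v)
    elif mode is Mode.SPINE:
        # Vertical slider: clicks higher in the image raise the spine.
        robot.set_spine_height(1.0 - v / image_height)
    elif mode in (Mode.LEFT_HAND, Mode.RIGHT_HAND):
        side = "left" if mode is Mode.LEFT_HAND else "right"
        robot.move_gripper_toward_pixel(side, u, v)


if __name__ == "__main__":
    robot = RobotClient()
    handle_click(Mode.LOOKING, 320, 240, robot)     # glance at image center
    handle_click(Mode.SPINE, 320, 120, robot)       # raise the torso
    handle_click(Mode.RIGHT_HAND, 400, 300, robot)  # nudge the right gripper
```

The point of a structure like this is that a user who can produce only one kind of input, a single click, can still express many kinds of commands by switching modes first.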

In a study of 15 participants with disabilities who took control of Georgia Tech’s PR2 over the Internet with very little training (a bit over an hour), the software interface proved both easy to use and effective. It’s certainly not fast: simple tasks like picking up objects took most participants around 5 minutes, versus perhaps 5 seconds for an able-bodied person. But as Kemp and Phillip Grice, a recent Georgia Tech Ph.D. graduate, point out in a recent PLOS ONE paper, “for individuals with profound motor deficits, slow task performance would still increase independence by enabling people to perform tasks for themselves that would not be possible without assistance.”

A separate study with Henry, considered to be an “expert user,” showed how much potential there is with a system like this:

Henry also discovered an unanticipated use for the robot. He controlled the robot to simultaneously hold out a hairbrush to scratch his head and a towel to wipe his mouth. This allowed him to remain comfortable for extended periods of time in bed without requesting human assistance (two sessions of approximately 2.5 hours and 1 hour in length). Henry stated that “it completely obviated the need for a human caregiver once the robot was turned on (always the goal),” and that “once set up, it worked well for hours and kept me comfortable for hours.” This was a task that designers had not anticipated, and was the most successful use of the robot in terms of task performance and user satisfaction, as the deployed research system provided a clear, consistent benefit to the user and reduced the need for caregiver assistance during these times.

Obviously, a PR2 is probably overkill for many of these tasks, and it’s also not likely to be available to most people who could use an assistive robot. But the interface Georgia Tech has created could be applied to many different kinds of robots, including lower-cost arms (like UC Berkeley’s Blue) that wouldn’t necessarily need a mobile base to be effective. And if a robot arm could keep someone independent and comfortable for hours at a time without a human caregiver, it’s possible that the technology could even pay for itself.

[ Georgia Tech ]
