Steven Cherry: Hi, this is Steven Cherry for IEEE Spectrum’s “Techwise Conversations.”
An article in Spectrum last year described what we called “Generation Smartphone” and claimed that “the smartphone’s role as constant companion, helper, coach, and guardian has only just begun.” And anyone who has watched a 2-year-old spend a morning with an iPad can easily picture that.
What about at the other end of the spectrum—the age spectrum, that is?
A recent article in The New York Times quoted a Carnegie Mellon University roboticist, Jim Osborn, as saying, “There are two trends that are going in opposite directions: One is the increasing number of elderly people, and the other is the decline in the number of people to take care of them.”
I thought we’d get him on the show to tell how robots can, to use his words again, “fill in the gaps.” He’s the executive director of the Robotics Institute’s Quality of Life Technology Center at CMU. His degrees are in biomedical engineering, and in the past he has sent robots into active volcanoes and into the Three Mile Island and Chernobyl nuclear accident sites. He joins us by phone.
Jim, welcome to the podcast.
Jim Osborn: Thanks for having me.
Steven Cherry: Jim, you have a robot at Carnegie Mellon that can separate an Oreo cookie and scrape out the creamy middle. Is this an important task when it comes to caring for the elderly?
Jim Osborn: It’s exemplary of what a robot that needs to do things with a lot of dexterity is all about, but I don’t think that there’s a screaming market out there for pulling Oreos apart among any age group. But the ability to manipulate everyday stuff and to do it with the precision and agility that human fingers can do it—that’s pretty important.
Steven Cherry: The Silicon Valley startup Willow Garage has a robot that’s been used for assistive care, and Georgia Tech has a robot, Cody, that can bathe a person. Are there any robots that are designed specifically for the elderly?
Jim Osborn: I wouldn’t say specifically for any age group. The point that we’re trying to hit is a need for everyday assistance or assistance with what an occupational therapist would call an “instrumental activity of daily living.” That’s cooking a meal, doing household chores, things of that nature, stuff around the house. And the more fundamental things they call “activities of daily living,” which have to do with eating and bathing, hygiene, getting yourself from one seating surface to another, in and out of bed, and so on. And to some extent, those things become more challenging for all of us as we get older.
Steven Cherry: So what are some of the specific technical challenges in caring for not necessarily the elderly, I guess, but anybody who needs a lot of care?
Jim Osborn: Well, there are a lot of ways that the robots are going to have to interact with the person, and there are certain demands on the robotic hardware and software in understanding the environment, understanding the different objects that have to be manipulated, understanding the person and their geometry and configuration, and the fact that the person can move.
There’s also a whole different set of issues that arise because people are people and have a wide range of preferences and abilities, and those preferences and abilities are often changing with time. And so we have to think in terms of the robot plus person working as a cooperative pair, and the ways in which the split of responsibility in doing any particular task, let’s say making breakfast, how that’s going to occur, how that might be defined by the person.
For example, the person might want to do all of the actual cooking but would like the robot to retrieve all the ingredients. That might be somebody who has some mobility impairment, could even be arthritis, and they don’t move around as fast, and so how they would interact would be very different.
So you could imagine that if the person needs help lifting something, then the robot cooperating to provide the extra physical force to lift that frying pan, for example, provides a much different type of a challenge than just a split of responsibilities—“You do this, I do that.”
At this point in time, where we are with the types of control modes, we’re looking at what we call “assistive teleoperation,” and that means the split between the person actually doing the guidance and the robot computing the guidance can move along a spectrum, one limit being the robot does everything and the other limit being the person does everything, and where you sit on that spectrum might shift over time. So we’re looking at it not just as a technical problem, but also as something of a social problem, in the sense of how do people want the robots to work with them?
Steven Cherry: Yeah, I’m kind of interested in, there’s kind of an ethical dimension to that, which I’m going to get to in a minute, but I’m wondering about just the sort of interaction. I wonder if there’s going to be a sort of uncanny valley in care for the elderly, and what I have in mind is, in the original uncanny valley, computer graphics produce more and more humanlike appearances until they get so good that people don’t see them as a better representation but the reverse. They see the images as a kind of uglier, deformed human. And, you know, if robots get smarter and smarter at tasks like reminding people of their medicine and bathing them, maybe we’ll expect them to do even more, including things that they can’t do.
Jim Osborn: That’s a very, very important point, that we have to respect the expectations. And I’ve been around robotics long enough to remember the days in which we produced a lot of hype. There was a lot of promise about what robots would be able to do. They actually can do a lot of the things that we were promising back in the ’80s now—it’s just taken a lot longer to get there than we said it would, and that has to be the case now, too. We’ve got to set very realistic expectations.
I think what’s going to happen is that there will be a steady infusion. The less challenging tasks, meaning the ones that don’t require physical manipulation, are things like, as you mentioned, doing reminding, being something of a social companion, following a person around the house just to check up on them, make sure they’re okay and haven’t fallen. Those sorts of noncontact operations are easy to do. They’ll be introduced first, and we have to make sure people understand that’s what the robot is there for.
And so a robot that watches and might detect a fall and be able to alert and call for help probably won’t be the one that helps a person off the floor, and we’ve got to make it very clear that that’s not what the robot’s intended to do, but it’s still serving a useful function. So just like anything else, there’s kind of a product consideration that needs to be respected in the eyes of the consumer, and we’re going to have to do a good job of being mindful of that.
Steven Cherry: Yeah, I guess there’s a little bit of data so far about some of these interactions, and the New York Times article that I mentioned quoted the noted MIT researcher Sherry Turkle. She studies the effects of technology on society in general, and she apparently said, “Giving old people robots to talk to is a dystopian view that is classified as being utopian.”
Jim Osborn: Well, I can’t dispute her contention. There’s some truth to that, but I think, again, the intention and the expectation we’re trying to set is that we’re not looking for robots to replace people; we’re looking for robots to augment and fill in gaps in what people provide in terms of care and support and social interaction.
One example we often think of is one of the machines we’re working on, which also has two arms and is also based on a wheelchair platform. This is with our colleagues at Pitt, and it would be targeted at people who have combined mobility and upper-limb impairments. People like that often have what’s referred to as an “attendant,” a human who would come and, say, in the morning get them dressed and ready for the day.
On a blizzard morning here in Pittsburgh, the traffic might be such that the attendant can’t get there until 10:00 a.m., and that person would be more or less stuck, can’t really get going at all. Alternatively, a robot that could fill in that gap, do some of those things that would have required a person on the spot, could make a huge difference in that person’s life. And it’s not to replace the attendant; it’s to fill in a gap, to provide a similar service at times when the human attendant couldn’t be there. So there’s a completely other way of looking at it, and that’s what we’re really aiming for. It’s not a replacement; it’s an addition.
Steven Cherry: I guess in some cases it’s going to end up being a replacement more than anything else, right? I mean, I quoted you in the beginning on the demographics that are driving this, and I gather the gap between care needers and caregivers is already pretty large, and it’s going to grow.
Jim Osborn: Yeah, that’s true. And, again, I think it’s not inconsistent with this idea of a robot being something that amplifies the effect of the people who are there. Whether they be professional or informal caregivers, the idea still holds. Think now of somebody who is, say, a visiting nurse, and she, or he, might be seeing between four and eight clients on any given day. With the services of robots that are more or less under her supervision, her ability to provide for those four to eight people grows: instead of the one to two hours that each of them gets, they get something more than that.
So the business models, we think, might very much involve the caregivers, in fact, the caregivers being the ones responsible for the robots. It’s a very, very reasonable way in which things could proceed, and there are analogies to that kind of model in lots of other industries, where a person’s efficiency and ability to get work done is, in fact, amplified and increased through technology.
Steven Cherry: You apparently overestimated in the past how quickly these changes would take place, and I’m going to invite you to do that again. At some point, the robots will be better than the home caregivers in pretty much every way, and, I mean, that could be, I don’t know—when will that be?
Jim Osborn: Wow, grandchildren of mine, maybe? I don’t know how old you are. That’s a ways off. Now, that said, there’s always things that robots will do better, in the sense that it’s a very repetitive task, so I could imagine a robotic window washer.
One of the things our colleagues at Georgia Tech did is study what these older people would like these robots to do, and unsurprisingly, I think, the men said they’d like it to do home maintenance: wash windows, cut grass, and things like that. And the women said indoor chores and tasks.
And for some of those things, the robot will probably eventually beat people, because it’s a very scripted kind of thing. Hang a curtain? That’s pretty hard. That takes an awful lot of finesse and reasoning that a person is probably going to be beating robots at for a long time to come. Microwaving a meal? The robots could probably do that right 100 times out of 100.
So it depends on the kind of things that people would be looking for the robots to do, and if there’s a big enough appetite or big enough market for that kind of thing, yeah, you could see robots that could outperform people. They may not be very interesting things that they do, but maybe that’s the point. There’s a lot of reasons we’ve had things automated in the past: because they’re dirty, they’re dull, in some cases dangerous. So here we’re probably talking about the dull, and robots don’t really get bored, so that’s a plus for them.
Steven Cherry: Well, Jim, it’s a huge problem we have coming up. It’s sort of the demographic equivalent of climate change, so thanks for being one of the people to take it on, and thanks for joining us today.
Jim Osborn: Glad to do it.
Steven Cherry: We’ve been speaking with Jim Osborn of Carnegie Mellon University’s Robotics Institute about caring for the elderly with robots.
For IEEE Spectrum’s “Techwise Conversations,” I’m Steven Cherry.
Photo: Yuriko Nakao/Reuters
This interview was recorded Thursday, 6 June 2013.
Segment producer: Barbara Finkelstein; audio engineer: Francesco Ferorelli
Read more “Techwise Conversations,” find us in iTunes, or follow us on Twitter.
NOTE: Transcripts are created for the convenience of our readers and listeners and may not perfectly match their associated interviews and narratives. The authoritative record of IEEE Spectrum’s audio programming is the audio version.