As I come down the hallway, heads start popping out of cubicles and offices, all eyes turning in my direction. Some of my colleagues laugh, some frown. One looks terrified and flees. That’s what happens, I suppose, when you show up at the office as a robot.
The robot is acting as my stand-in at work. For a week last spring, it roamed around IEEE Spectrum’s New York City office while I sat in my pajamas at home in Brooklyn. From my laptop, over the Net, I could steer the robot, peer through its cameras, and talk to my colleagues. It’s a bit like a video game, but instead of a virtual character, you’re controlling a real avatar.
The robot has an alien-looking head, with two big round eyes, perched atop a thin carbon-fiber pole. One eye captures high-definition video; the other shoots a green laser beam. The laser isn’t for zapping coworkers you dislike but for pointing at things. My robotic proxy rolls on a two-wheeled base that balances just like a Segway. This is no humanoid C-3PO. It looks more like a floor lamp.
The robot is called QB, though my colleagues promptly nickname it EriBot. QB is what is known as a telepresence robot. It’s the creation of Silicon Valley start-up Anybots, which will start selling the machines this month. Each will cost US $15 000—not exactly a bargain for a robot that doesn’t even have arms (or a positronic brain, for that matter). But Anybots says that as a communications platform, QB lets remote workers collaborate with others in ways that a wall-mounted monitor in a conference room could never permit.
Embodying a QB, you’d be able to join impromptu meetings, drop by a coworker’s office, even gossip at the water cooler. You could tour a distant facility or observe a live demonstration without having to hop on a plane. To paraphrase management guru Peter Drucker, why transport a whole body to work when all you need is the brain—a brain you can upload into a robot anywhere?
Indeed, becoming a robot has its advantages. Every morning, while my colleagues dragged their carbon-based bodies to the office, I’d open my laptop at home or in a coffee shop and, with a few mouse clicks, incarnate my robotic self. Call it robocommuting.
My goal was to find out how my robotic life compared to the real thing. But I also wanted to explore something more profound: Will telepresence robots eventually take people’s places at work, whether we like it or not?
Artificial intelligence pioneer Marvin Minsky extolled the promises of telepresence in a 1980 manifesto in Omni magazine. “Eventually telepresence will improve and save old jobs and create new ones,” he wrote. “Later, as we learn more about robotics, many human telepresence operators will be able to turn their tasks over to the robots and become ‘supervisors.’”
Today at least five companies are selling or will soon start selling telepresence robots. Like QB, these are still relatively simple machines—glorified laptops on wheels. But proponents say that as computers, sensors, and motors get better and cheaper, telepresence robots will advance too, revolutionizing engineering collaborations, health care, even manual labor. Could this be the future of work?
To prepare an article like this one, I’d normally travel to different places to talk to people and see things up close. But given that this story is about telepresence, why not let robots do the reporting for me?
Here’s how it started. Sitting at my computer in New York City, I log on to a QB robot at the Anybots office in Mountain View, Calif. There I meet Trevor Blackwell, Anybots’ founder and CEO, an amiable guy with gray-white hair, thin glasses, and a soul patch. An entrepreneur, he founded Anybots in 2001 because he couldn’t believe “there still weren’t robots helping around [my] home and office.”
Using QB, I follow Blackwell as he shows me around. Hanging from a crane is one of the first robots he and his small crew built. It’s a sophisticated humanoid called Monty. Slap on sensor gloves and a backpack of electronics and you can get the robot to instantly replicate your movements, whether you’re grasping a teacup or operating a power drill.
Monty also has a technology that became a key part of QB: a custom self-balancing wheel system. Blackwell says it’s better than standard three- or four-wheeled bases for driving over bumps and around tight corners. It’s also quite stable, which he once demonstrated by planting a kung fu kick on Monty’s chest. The robot held its ground.
Blackwell’s initial goal was to design a robot servant like Rosie from “The Jetsons.” But rather than build an autonomous robot, which technologically was just too difficult, his idea was to have a human worker remotely control the robot. He envisioned the machine doing chores at people’s homes or operating the fryolator at McDonald’s.
Cost issues and technical complications eventually forced Anybots to scale back its vision. The company decided to focus instead on telepresence. Teleoperated robots have long been used to extend a human’s reach into distant locations, such as space and deep under water, and in hazardous places like mines and nuclear reactors. But such robots are designed to perform specific tasks. Anybots wanted to focus on robots that let people be at a remote location. “After 100 years of advances in communications, where we discovered how to transmit text, voice, images, why not try to transmit presence?” Blackwell asks.
Building on what the company learned with Monty, it designed QA, a humanoid-looking robot with a sleek, white plastic body. That design got streamlined further, and QB was born. Behind its simple appearance is a neat combination of hardware and software. Inside the robot’s base, the engineers crammed a computer running FreeBSD—a Unix-like operating system—two Wi-Fi cards, two DC motors, a pack of four 14.4-volt lithium-ion batteries, and a set of gyroscopes and accelerometers. Blackwell and three colleagues wrote software from scratch to control the robot and handle all networking and communication functions.
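To get a feel for what that gyroscope-and-accelerometer package is doing, here is a toy sketch of the standard approach to two-wheeled balancing. This is not Anybots’ actual software (which, as noted, was written from scratch); the filter constant and gain names are illustrative assumptions. The idea is to fuse the fast-but-drifting gyroscope with the noisy-but-stable accelerometer into a tilt estimate, then command the motors to push that tilt back toward zero.

```python
# Illustrative sketch only -- not Anybots' code. Shows the common recipe
# for a self-balancing base: sensor fusion plus a PID control loop.

def complementary_filter(prev_angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """Blend the gyro's integrated rate (fast, drifts) with the
    accelerometer's tilt reading (noisy, but doesn't drift)."""
    return alpha * (prev_angle + gyro_rate * dt) + (1 - alpha) * accel_angle

class BalancePID:
    """PID controller that outputs a motor command to hold the robot upright."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, tilt_angle, dt):
        error = -tilt_angle            # setpoint: perfectly upright (0 rad)
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

Run at a few hundred hertz, a loop like this is what lets the base shrug off a bump, or a kung fu kick, by rolling the wheels under the lean.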
At the end of my tour at Anybots, I ask Blackwell if I can borrow a QB, expecting him to say, “Sure, just send me a $15 000 check.” Instead he says, “When do you want it?”
One February morning, FedEx delivered a huge, military-grade Hardigg case to Spectrum’s office. This is how QB travels, nestled amid custom-formed foam. Anybots once tried to buy QB a plane ticket and bring it into the cabin, but the airline was not keen on the idea of a mechatronic passenger.
The robot I borrowed, QB No. 7, came with a companion: a friendly Anybots engineer named Erin Rapacki. Her job was to help set up the robot—a preproduction prototype—and probably to babysit it, too, to make sure I wouldn’t drive it out of the office onto the streets of midtown Manhattan.
Setting up QB took several hours. Rapacki started by loading its base with the batteries, which can keep the robot going for 6 to 8 hours. Working out of the Anybots office in California, Daniel Casner, another engineer, logged on to the robot to tweak its network configuration so that data packets could get through our firewall. He also programmed the robot to switch seamlessly among the different Wi-Fi routers in the office. With the robot ready, my robotic existence commenced.
Sitting at my laptop at home, I launched Firefox and installed a plug-in created by Anybots to establish a link between my computer and the robot. I then went to the Anybots Web site, logged on with a password, and clicked on a button to connect to the robot. A video appeared on my screen, indicating I could start driving by tapping on the arrow keys of my keyboard.
At the office, the QB played a short jazz tune, announcing to those nearby that the robot, as Rapacki put it, “had a soul in it.”
During my tests I experienced a delay of up to a second between pressing a key and the actual movement of the robot. It took me a few minutes to adjust to these conditions. Soon, though, driving became so natural I didn’t even think about it.
The robot’s main camera is a 5-megapixel wide-angle unit, but it doesn’t have the same field of view as the human eye. To see the floor just ahead of the wheels, you have to press the shift key, which switches the video to a secondary camera that faces down. The robot also has a scanning laser range finder for detecting obstacles. It steers or stops the robot if you’re about to hit something—very helpful for avoiding furniture and feet.
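The range finder’s veto over the driver’s commands can be sketched in a few lines. This is a hypothetical illustration, not Anybots’ implementation; the distance thresholds are made-up numbers. The logic is simply to clamp forward speed based on the nearest obstacle in the direction of travel.

```python
# Illustrative sketch (not Anybots' code) of laser-range-finder safety logic:
# stop if an obstacle is too close, slow down as one approaches.

STOP_DISTANCE = 0.3   # meters -- hypothetical full-stop threshold
SLOW_DISTANCE = 1.0   # meters -- hypothetical start-slowing threshold

def guard_speed(requested_speed, ranges_ahead):
    """Clamp the operator's forward speed based on the nearest range reading."""
    nearest = min(ranges_ahead)
    if nearest < STOP_DISTANCE:
        return 0.0                    # obstacle too close: full stop
    if nearest < SLOW_DISTANCE:
        # Scale speed linearly between the stop and slow thresholds.
        factor = (nearest - STOP_DISTANCE) / (SLOW_DISTANCE - STOP_DISTANCE)
        return requested_speed * factor
    return requested_speed            # path clear: drive as requested
```

A guard like this runs regardless of what the remote operator presses, which is why the robot can protect furniture and feet even with a second of network lag between keystroke and wheels.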
QB has a small LCD on its forehead. Normally, it would show a live video feed of my face. But at the time of my experiment, Anybots was still debugging this feature, so the LCD displayed only a generic avatar. This meant that my voice was my main means of communication. The audio stream had some crackling, but otherwise I could chat normally.
That afternoon, I ran into the colleague who had fled earlier. He again tried to avoid me, but I gave chase (QB’s top speed is 5.6 kilometers per hour, enough to follow people at a fast pace). I cornered him near the pantry and told him it was just me in a robot body.
“I get nervous,” said Alan Gardner, our editorial researcher. “I have trouble relating to robots.”
Other colleagues, though, were bolder. They wanted to know what I saw and how I controlled the QB. But just as I was showing off my capabilities, pirouetting and zigzagging down the corridor, I got disconnected from the robot. EriBot was now mindless.
I clicked on my computer and reestablished the link. “Where did you go?” someone asked. I explained what had happened. But something was still wrong: I couldn’t steer the robot. I then asked senior editor Phil Ross something I’d never imagined asking anyone: “Phil, can you please reboot me?”
So what is it like to be a QB? It’s not exactly “Star Trek” teleportation, but being transplanted to another place as a robot is really cool. And if you were wondering, yes, you can get actual work done.
As a robot, I tried to interact in different ways with my colleagues. QB and I sat in on a typical editorial meeting, with two dozen people in a conference room and a few more participating via speakerphone. The editors who call in always complain that being “in the box” for hours is an ordeal. Participating as a robot, I think, makes a huge difference, mainly because when you speak, people look at the robot and you feel you have their attention. Even rolling into the room and choosing your place around the table gives you a better sense of “being there.”
But the best interactions I had as a robot were the informal ones. One day, I wheeled over to photo editor Randi Silberman Klett’s cubicle, where we held a spontaneous meeting to discuss art for this very article. Casual encounters like this, I realized, are the big advantage of telepresence robots over conventional communication systems. There are times when you just have to walk over to a colleague and look together at a screen or a piece of paper and see each other’s reactions.
Pamela Hinds, codirector of Stanford University’s Center for Work, Technology, and Organization, says there’s a “huge need” for better technologies to support such “on the ground” collaboration. “Almost all of the technology that is out there is primarily geared to support meetings,” says Hinds, who’s tested a QB. She predicts that more and more companies will embrace telepresence robots.
As a robot, I participated in social events that I wouldn’t have been able to attend while working at home. Our editorial tea was one example. Once a month, the Spectrum staff gathers in the conference room to hang out, drink tea, and eat an assortment of snacks. Attending this kind of gathering may not seem like a big deal, but a common complaint among remote workers is that they feel isolated and lonely, a problem some refer to as “water cooler withdrawal.”
When I showed up at tea as QB, my colleagues dressed the robot in a scarf and hat and then took pictures. I got back at them by having editor in chief Susan Hassler log on to the robot from her home in Connecticut, unannounced.
“Hello there,” she said.
“Who’s...who’s there?!” they answered, bewildered to be hearing her voice suddenly coming from the robot. Yep, the next time you notice a robot standing over your shoulder, it might be your boss.
Overall, I was impressed with the QB. Driving it was incredibly easy, and the communication worked well. But I also see big challenges for this technology.
The first one is cost. Telepresence robots now come in several models of varying sophistication. But nobody quite knows yet how much companies are willing to shell out. The idea of offices populated by telepresence robots “sounds like a great science fiction story,” says Dan Kara, CEO of Robotics Trends, a market research firm in Framingham, Mass. “But are companies willing to pay for these robots? What advantages do they have over conventional technologies?”
Telepresence robots will also have to compete with inexpensive PCs that have webcams and with mobile phones that offer videoconferencing capabilities, says Tandy Trower, a Microsoft veteran who led the company’s robotics efforts and recently started his own robotics company. “A mobile telepresence robot can deliver a different experience, but there will be inevitable comparisons, especially in the corporate world,” he says.
Another issue involves safety and reliability. One problem I had with my QB experiment was the spotty Wi-Fi coverage in our office (ironic given that IEEE—ahem—created the 802.11, or Wi-Fi, standard). Sometimes the video would freeze or become pixelated. Other times I’d get disconnected from the robot altogether, without even knowing whether the machine had stopped rolling.
Anybots insists that the robot, which weighs in at 16 kilograms, is safe around people. But still, what if someone else wrested away control of my QB and used it for mischief? Last year, researchers at the University of Washington reported that some teleoperated robots are indeed vulnerable to hacker attacks. The study focused on cheaper home robots like WowWee Group’s Rovio and Erector’s Spykee, but the problem may exist in the corporate sphere as well. Imagine a hacker gaining control of your robot and using it to insult your boss or escape the building.
My QB had other issues. It had to be rebooted quite a few times, a task that required manually pushing a button on its base. Having a human tend to the robot’s needs seemed strange, to say the least. Is this how robots will begin to enslave humanity?
Finally, there are the social and psychological questions that telepresence robots raise. Will people accept robotic coworkers? And what about the person behind the robot: Can the operator get used to being a robot?
To answer that question, I decided to talk to someone with more telepresence mileage than I have. Dallas Goecker is an electrical engineer who lives in Seymour, Ind., and works at Willow Garage, a robotics start-up in Menlo Park, Calif. He and a colleague have built a telepresence robot called Texai, and for over a year now Goecker has used it to robocommute nearly every workday. Probably no one else in the world has logged as many robotic hours.
Goecker tells me he’s gotten used to being a robot. It’s much better than when he had only a phone and Skype for collaborating with colleagues. Like me, he especially appreciates being able to participate in casual conversations. “That’s when a lot of design work happens,” he says. For Goecker, inhabiting a robot has become so natural that he sometimes can’t recall whether he did something—a discussion with a coworker, for example—in person or as a robot. “It can become one and the same,” he says.
Willow has built 25 Texai units. The company won’t say how much each one costs or what it plans to do with them, only that several companies are testing the robot. Google cofounder Sergey Brin reportedly appeared at a recent gala dinner as a Texai.
Given Goecker’s experience and my own, I’m convinced that telepresence robots will become popular in many offices. I’m not saying that every worker should have one or that all companies should invest in them. But for companies with multiple offices and workers whose jobs involve lots of travel or teamwork, the robots make sense. And as the technology improves, such robots will make sense not only for office workers but for other workers, too.
Even now, several companies are testing a robot by Vgo Communications to allow their employees to observe live experiments taking place in distant locales. And doctors are interacting with patients in some 200 U.S. hospitals via telepresence robots built by InTouch Health. If this type of robot becomes cheaper, safer, and more reliable, it could be used to monitor the elderly or infirm in their own homes, even performing blood pressure measurements and other simple tests.
Some roboticists speculate that future telepresence robots will also revolutionize factory jobs. Imagine equipping such robots with strong arms and dexterous hands. Remote human workers would then be able to perform tasks that autonomous manipulators can’t easily handle, like transporting materials in a warehouse or assembling circuit boards in an electronics plant.
“Manual labor could easily be done without leaving your home,” Marvin Minsky wrote in his 1980 Omni essay, adding, “One region of the world could export the specialized skills it has. Anywhere.”
If the visionaries are right, seeing the world through a robot’s eyes and roaming around in an artificial body will become a familiar experience for many people. So don’t be surprised if one day you’re asked to vacate your office and start working from home: Your avatar may be on its way.
Now please excuse me—I think I may need a reboot. Phil?