Automaton

Japanese Snake Robot Goes Where Humans Can't

Japanese robotics company HiBot has unveiled a nimble snake bot capable of moving inside air ducts and other narrow places where people can't, or don't want to, go.

The ACM-R4H robot, designed for remote inspection and surveillance in confined environments, uses small wheels to move, but it can also slither, undulate, and even raise its head like a cobra.

The new robot, which is half a meter long and weighs in at 4.5 kilograms, carries a camera and LEDs on its head for image acquisition and can be fitted with other end-effectors such as mechanical grippers or thermo/infrared vision systems.

Despite its seemingly complex motion capabilities, "the control of the robot is quite simple and doesn't require too much training," says robotics engineer and HiBot cofounder Michele Guarnieri.

"All [degrees of freedom] can be easily controlled by a game-style joystick, including the motion of recovering from an upside-down position."

The company says applications include the inspection of ducts, pipes, and ceilings, as well as remote surveillance and security. Indeed, I bet the CIA and other spy agencies could find some uses for this bot!

Watch the ACM-R4H in action:

HiBot is a spin-off of Tokyo Tech's Hirose-Fukushima Lab, which has brought to life some of the world's most amazing mechanical snakes. The company is transforming some of the research creatures into commercial-grade systems.

The ACM-R4H is smaller than other HiBot snake models, so it can easily enter and zigzag through tight spaces. The head and tail segments can move up and down and the middle joint can turn left and right.

It can negotiate 90-degree corners inside an air duct, for instance, or move inside pipes less than 14 centimeters in diameter. It can also overcome obstacles in its path.

The current version relies on a tether connected to a control unit, which provides communication and power (the control box has a rechargeable battery that lasts for over 3 hours).

The user interface shows images from the camera and a set of data from the robot, including power consumption, temperature, and the position of each joint. It also shows a 3D view of the robot's current pose that the operator can use to assist with navigation.

Another tool to help with controlling and planning missions for the robot is a 3D simulator, called V-REP, that HiBot offers with its robots or as a stand-alone program:

HiBot, which also develops power line inspection robots, says some customers using the robot -- and most won't disclose what they're using it for -- had no issues with the tether. "But we can change the robot architecture to have wireless communication," Guarnieri says.

And though the robot is resistant to water splashes, it can be made completely waterproof, he adds. You never know what people will use it for...

Below, some more snake bot videos, just because it's so cool to watch these lifelike machines. The first video shows the ACM-R3H, which is a long, wheeled machine -- watch the entertaining demonstration on a Japanese TV show!

The other video shows the ACM-R5H, capable of slithering on the ground and also swimming. Yes, this snake bot swims just like the real thing.

The Robots Podcast Interviews Futurama's David X. Cohen


Good news, everyone! In its latest episode, the Robots Podcast interviews David X. Cohen, the head writer and executive producer of Futurama.

In the year 3000, robots are an integral part of society. Futurama's anti-hero is a robot called Bender, whom Wikipedia describes as a "foul-mouthed, heavy-drinking, cigar-smoking, kleptomaniacal, misanthropic, egocentric, ill-tempered robot." Other robots include Donbot, a criminal robot heading the robot mafia, and Calculon, a hopelessly self-absorbed robot heading the robot supremacy society. There's even a "Robot Santa" who, due to a programming error, judges everyone to be naughty and goes on a yearly Christmas rampage across Futurama's universe.

Futurama is foremost a comedy show, and its flawed robots are foremost theatrical characters. But Cohen and colleagues are science buffs (Cohen himself is a Harvard and Berkeley graduate and even worked at the Harvard robotics lab for a while) and take joy and pride in providing the occasional "science relief" -- the "z-ray" on Bender's head shown in the picture to the left is one such example (more on that in our previous interview).

As becomes clear in his Robots Podcast interview, Cohen cares deeply about the way science and technology are portrayed in Futurama. It is a difficult balancing act, but an important one, given the wild success of Futurama (now in its fifth season!) and the subtle but enormous influence of science fiction on robotics: I suspect sci-fi has shaped the career choices, goals, and dreams of most roboticists I know, and it certainly has a great effect on public perception.

I, for one, love the influence and am a huge Futurama fan. Thanks for the interview, David X.!

Images: "Futurama" TM and (C) 2009 Twentieth Century Fox Film Corporation. All Rights Reserved.

Google's Autonomous Car Takes To The Streets


Remember 5 years ago when a bunch of robotic cars managed to navigate through the desert all by themselves? And remember 3 years ago when a bunch of robotic cars managed to navigate through a (fake) urban area all by themselves? Well, today it’s the future, and autonomous robotic cars from Google have already logged 140,000 miles on busy, complex city streets and highways with only occasional human intervention, and 1,000 miles without any human control whatsoever.


Humanoid Baby Diego-San Looking for Makeover Advice

I’m sure you remember Diego-San, whom we spotted in an issue of Kokoro News back in January. Reactions to these pictures were… Well, let’s just say, reactions were decidedly mixed. And by decidedly mixed, I mean predominantly negative. Diego-San’s creator, Dr. Javier Movellan, has been exploring possible alterations to Diego-San’s face, and has made this concept public:

As Dr. Movellan pointed out in one of his comments on our post, a lot of what’s relevant about designing the appearance of a humanoid robot is simply about trial and error:

“Everybody has strong opinions about why the current version generates such negative reactions: face too large, robot babies are freaky, skin texture is wrong, mixing mechanical body with biological face is scary, giganto-babies are scary … For just about every theory examples can be given that contradict the theories. The truth is nobody really knows. It is a trial and error process.”

With that in mind, Dr. Movellan is looking for some feedback (constructive feedback, please) on what you do and don’t like about this new concept for Diego-San’s face. Personally, I’d say it’s a good start, with the helmet, antenna and exposed electronics all reinforcing the fact that the robot isn’t intending to fool you into thinking it’s real. However, I’d be curious as to what the effect would be if more of the human features were removed. Like, what is strictly necessary for the robot to accomplish its research goals, which may not necessarily involve a substantial amount of expression recognition? Does Diego-San need ears, for example? A nose?

While one route might be to make it less human, the other route would be to make it much more cartoony. So basically, keep all the human features, just make it look intentionally fake… Again, the idea being that you’re reinforcing the fact that the robot isn’t trying to fool you into thinking it’s human.

Anyway, please let Dr. Movellan know what you think by posting a comment. For more background, read through some of the comments on our original post, and Plastic Pals has a very interesting interview with Dr. Movellan here.

[ UCSD Machine Perception Lab ] VIA [ Plastic Pals ]
[ Original Kokoro News Article (*.PDF) ]

Berkeley Bionics Introduces eLEGS Robotic Exoskeleton

The woman in this picture is Amanda Boxtel, who has had a T11/12 spinal injury for 18 years. She’s a paraplegic, but she’s now able to walk with the aid of eLEGS, a robotic exoskeleton system from Berkeley Bionics. You probably remember Berkeley Bionics from their cargo-carrying exoskeleton, HULC, which they’ve since licensed to Lockheed Martin for production for the military. eLEGS is largely based on HULC, except designed for (eventual) home use. The system is relatively light at 45 pounds, and you strap into it by yourself while sitting down. After only a few hours of practice, paraplegics are able to use eLEGS to stand up and walk:

eLEGS is very efficient, and allows for an entire day of walking without needing to be recharged. It’s also extremely quiet, which is very important for a device that is designed to allow you to move around and interact with people in public and social situations.


I made a point of asking how exactly the interface between the user and the system works, and was told that it was proprietary, “but nice try.” In general, however, it appears as though eLEGS senses arm movements through ‘smart crutches’ (it also looks like there’s some kind of sensor attached to each upper arm), and as the user moves one crutch forward, eLEGS moves the opposite leg. However, to some extent eLEGS learns and adapts to each user, so there must be some other stuff going on under the hood.
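
Just to make that observed behavior concrete, here’s a toy sketch of the crutch-to-leg mapping. To be clear: Berkeley Bionics’ real interface is proprietary and surely far more sophisticated (it learns and adapts, after all); this is only a guess at the alternating-gait logic described above, with invented names.

```python
# Pure speculation based on the behavior observed above -- the real
# eLEGS interface is proprietary. Names are invented for illustration.

def next_swing_leg(advanced_crutch):
    """Advancing one crutch triggers a step with the opposite leg."""
    return {"left_crutch": "right_leg", "right_crutch": "left_leg"}[advanced_crutch]

# Alternating crutch movements produce an alternating gait:
steps = [next_swing_leg(c) for c in ("left_crutch", "right_crutch", "left_crutch")]
print(steps)  # ['right_leg', 'left_leg', 'right_leg']
```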

eLEGS will be available next July to a select group of rehab centers, but from the beginning, eLEGS was designed for people to take home and use by themselves. By 2013, eLEGS should be available for purchase for something in the low six figures, although the eventual target price is in the neighborhood of $50k, comparable to a top-of-the-line wheelchair.

I especially liked what Amanda says at the end of the video:

“This is not a wave of the future. The eLEGS is right now. I don’t have to be hopeful… This is reality.”

It’s pretty awesome to realize how true that is.

[ Berkeley Bionics ]

Humanoid Robots Rise. Now, Can They Walk?


Watch out, Asimo, there are some new humanoids on your tail! Photo: Honda

Japan has long held world dominance when it comes to full-body walking humanoid robots. There are the pioneering Waseda robots, the impressive HRP series, the diminutive but nimble Sony Qrio and Toyota Partner robots, and of course, the country's most famous emissary: the charismatic, child-size, astronaut-like Honda Asimo, which ambles, runs, and climbs stairs with (almost) perfect precision. Until recently, only South Korea -- with its Hubo and Mahru robots -- had demonstrated humanoids with legs as impressive as those of their Japanese counterparts.

Now other countries are trying to catch up. Below I describe four humanoids that may give the Asian humanoids a run for their money. Or as one editor here put it, these robots may kick your Asimo.

But first, a digression. Every time I encounter a roboticist building legged humanoids, I ask the same question, Why do we need legged humanoids? Wheels appear to be easier and cheaper to implement and provide great maneuverability -- so why legs?

The answer they give me is twofold: First, they argue that robots with human-shaped bodies are better suited to navigating human environments. So if we want robots to operate in our homes and offices, where there are stairs, uneven surfaces, and shaggy rugs, we need legs. The second part of the answer is that by building walking humanoids we can better understand how humans walk, balance, and move their bodies to do things like pirouette on a toe or perform incredible kicks.

After hearing their answer, my next question to the humanoid builder is, And why is it so hard to create full-body walking humanoids? Researchers have been working on this for over three decades and it seems we're still taking, well, baby steps. When can we expect a quantum leap in humanoid legged locomotion?

The answer is too complex -- and too interesting -- to summarize here; I will have to write another post on this topic. For now, let's just say there is a preferred walking control scheme, but some researchers are betting on competing approaches, and that although dc motors are the preferred actuators, some groups are seeking alternatives such as compact, powerful linear actuators.
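
For the curious, the "preferred walking control scheme" most of these robots use is, as far as I know, Zero Moment Point (ZMP) control: keep the point where the net ground-reaction moment vanishes inside the support foot. Here's a minimal sketch based on the standard linear inverted pendulum model; the numbers are illustrative, not taken from any particular robot.

```python
G = 9.81  # gravity, m/s^2

def zmp(x_com, x_com_accel, com_height):
    """ZMP of a linear inverted pendulum: x_zmp = x - (z/g) * x_ddot."""
    return x_com - (com_height / G) * x_com_accel

def zmp_stable(x_com, x_com_accel, com_height, foot_min, foot_max):
    """The robot won't tip as long as the ZMP stays under the support foot."""
    return foot_min <= zmp(x_com, x_com_accel, com_height) <= foot_max

# Center of mass 2 cm ahead of the ankle, accelerating at 0.5 m/s^2,
# pendulum height 0.8 m, foot extending from -5 cm to +15 cm:
print(zmp_stable(0.02, 0.5, 0.8, -0.05, 0.15))  # True
```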

Okay, so here are the four humanoids. Let me emphasize that we're showing only full-body, adult-size humanoids. Yes, there are small humanoids capable of walking, of course, like the Nao; there are also robot legs that can run at full speed -- but that lack an upper body; and there are full-body humanoids that have yet to show they can take a step. So the robots below are the ones we think could take on Asimo in a robot race or a soccer match.

Did we leave out a robot you think should be here? Let us know in the comments section below.

REEM-B
Pal Robotics, Barcelona

Reem-B was designed to assist humans with everyday tasks, says Davide Faconti, founder of Pal Robotics. The 1.47-meter-high robot, unveiled two years ago, can walk at a relatively slow speed of 1.5 kilometers per hour, but thanks to powerful actuators in its legs and arms, Reem-B "is probably the strongest humanoid in the world," says Faconti, boasting that his robot can carry a 12-kilogram payload—say, a big watermelon. Try that, Asimo.


Photo: PAL Robotics

Watch Reem-B walking. The video is a bit old. I'd love to know if Pal has continued to improve the robot's mobility and see what it can do today.

JUSTIN
Institute of Robotics and Mechatronics at the German Aerospace Center (DLR), Oberpfaffenhofen-Wessling, Germany

Justin is one of the most impressive humanoids unveiled in recent years. Its lightweight, strangely shaped arms are amazingly dexterous, and the German researchers are consistently pushing the envelope in hardware and software design. At every major robotics conference you can expect to see Justin showing off a new trick.


Photos: Institute of Robotics and Mechatronics/DLR

The thing is, Justin, at this point, is not actually a full-body humanoid. It's currently an upper body with head, torso, and two arms that can be mounted on a fixed base or a four-wheeled mobile platform [see photo above].

The reason we're including it here is that DLR researchers demonstrated earlier this year a pair of legs that we suspect may become Justin's lower body.

The legs use the same powerful yet lightweight motors employed in Justin's arms. The idea was to explore joint torque-based control concepts for biped balancing and walking, according to Christian Ott, the lead researcher working on the legs.

If Justin's lower body turns out to be as nimble as its upper body, this robot will be able to do things we have never seen a robot do before.

Watch the DLR legs in action:

CHARLI
Virginia Tech's Robotics & Mechanisms Laboratory, Blacksburg, Va.


Photo: Virginia Tech

We wrote about CHARLI before. CHARLI is the first untethered, autonomous, full-size walking humanoid robot built in the United States, according to Virginia Tech roboticist Dennis Hong. Hong loves creating acronyms for his robots. CHARLI stands for Cognitive Humanoid Autonomous Robot with Learning Intelligence.

There are actually two CHARLI models. The smaller one, CHARLI-L, uses motors and a linkage system of pulleys and springs to generate movement. Hong and his team are now building a heavier version, CHARLI-H, to be equipped with custom-made linear actuators. See CHARLI-H's future leg in the photo at right.

Hong is secretive about these new actuators, saying only they will help mimic how human limbs move. (They rely on compliance, or "springiness," at the joints instead of stiff position control like most other humanoid robots use, Hong says.)

I look forward to seeing CHARLI-H play in RoboCup's humanoid league! Will it kick like Roberto Carlos?

Watch CHARLI-L taking somewhat timid steps, but steps nonetheless!

SURENA 2
University of Tehran's Advanced Vehicles Center, Tehran

Finally, we're including here the Iranian robot Surena 2, unveiled a few months ago, just because it was such a surprising development. After the first reports surfaced, some people were skeptical that the robot was anything more than an Asimo-looking plastic shell. But video finally proved that the humanoid was indeed a humanoid.

The 1.45-meter-high robot was developed to help researchers explore aspects of bipedal locomotion, Tehran University professor Aghil Yousefi-Koma told IEEE Spectrum. His team is working on a feedback control system that yields a much more humanlike motion.

Surena might be a slow walker, but it has its tricks: It can bow, stand on one leg, and according to some news reports, dance. Dance-off, Asimo?


Photo: Alireza Sotakbar/ISNA/AP

Surena 2 shows off its skills:

Read also:

NASA Ready to Send Humanoid to Space
Fri, September 03, 2010

Blog Post: In one giant leap for robotkind, NASA's Robonaut will join the crew of the International Space Station

Humanoids vs. Humans in Soccer
Mon, August 09, 2010

Blog Post: Can a team of soccer-playing robots beat the human World Cup champions by 2050?

Humanoid Justin Learning To Fix Satellites
Thu, June 17, 2010

Blog Post: Justin is a dexterous humanoid robot that can make coffee. Now it's learning to fix satellites

Humanoid Mimics Person's Movements
Tue, April 27, 2010

Blog Post: An operator wearing a sensor suit can control this Korean robot's arm and leg movements in real time

Biped Robot Petman Achieves 4.4 mph
Thu, April 22, 2010

Blog Post: Boston Dynamics has released a new video showing its Petman biped robot achieving 4.4 mph on a treadmill

Hubo II Robot Is Lighter and Faster
Tue, March 30, 2010

Blog Post: The creator of Albert Hubo is back with a new, better, sword-wielding humanoid robot

Honda on the Past, Future of Robotics
Tue, February 02, 2010

Blog Post: A short documentary film by Honda, presented at this year's Sundance Film Festival, ponders our robotic future

Researchers Using Rat-Robot Hybrid to Design Better Brain Machine Interfaces


A strange creature, half robot, half rat, has been seen scuttling across a laboratory in Japan. It's RatCar, a rat-vehicle experiment that scientists hope could lead to improved mobility for people with disabilities.

Researchers at the University of Tokyo wanted to see whether rats could control a miniature vehicle through the brain signals that move their limbs. They recently presented their project at the IEEE Engineering in Medicine and Biology Society annual conference in Buenos Aires, Argentina.

"We wanted to develop a brain-machine interface system aiming for future wheelchairs that paralyzed patients can control only with thought," says Osamu Fukayama of the university's Medical Engineering and Life Science Laboratory. "RatCar is a simplified prototype to develop better electrodes, devices, and algorithms for those systems."

Unlike some brain-machine interface, or BMI, devices that rely on noninvasive EEG to detect neural activity, the RatCar works through direct contact with the brain. Other researchers have used this technique to get rats to control robotic grippers and monkeys to control computer cursors and even advanced robot arms.

In the RatCar, tiny neural electrodes [the dark dots on the tip of the device shown in the photo, right] were implanted in the motor cortex of the rats' brains, and the animals were suspended under a lightweight, motorized "neuro-robotic platform" with wheels. The objective was to make the vehicle collaborate with the rats to achieve the locomotion they desire.

The rats were trained on the car by towing it around an enclosed area with the motors disengaged. A vision system positioned overhead tracked the rats and the vehicle by following colored markers on the rats' backs and on the car. It fed the positions into a "locomotion estimation model" program that correlated the motion of the animals with readings from the electrodes.
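
The team doesn't spell out the model's internals here, but a "locomotion estimation model" of this kind can be as simple as a linear decoder fit by least squares, mapping electrode firing rates to the tracked velocity. A minimal sketch, with synthetic data standing in for the real recordings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the real recordings: 500 time steps,
# 16 electrodes' firing rates, and the rat's tracked (vx, vy) velocity.
rates = rng.poisson(5.0, size=(500, 16)).astype(float)
hidden_map = rng.normal(size=(16, 2))
velocity = rates @ hidden_map + rng.normal(scale=0.5, size=(500, 2))

# Fit the decoder on the "towing around" training data...
weights, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

# ...then, in neuro-robotic mode, estimate velocity from neural data alone.
estimated_velocity = rates[:1] @ weights
```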

Next, the rats were suspended more snugly in the car, so that their limbs touched the floor only slightly. The researchers then switched the system into "neuro-robotic mode," with the neural signals used to help drive the car. Six of the eight rats used in the study adapted well to the car.

"The vehicle moved forward synchronously with a rat when it was placed inside," says Fukayama, but he adds that the degree to which the car was being controlled by the rat itself was unclear.

Since the rat would be forcibly moved along with the car, measuring its real intentions became a challenging problem. Another difficulty was that only a small percentage of the electrodes actually recorded neural activity, and the recorded neurons didn't necessarily correlate with target movements. 

Fukayama and colleagues Takafumi Suzuki and Kunihiko Mabuchi plan to perform more experiments to address the uncertainties. They want to confirm that the rats can drive the car in different directions and also measure the force the rats exert when trying to move under the car. That way, they could track differences between the car's motion and the rats' apparent intentions. The less force, the better the neural link is working.

Rats have helped bring about many medical breakthroughs, and we'll see whether they'll help make thought-controlled wheelchairs commonplace.

Images: University of Tokyo's Medical Engineering and Life Science Laboratory

Read also:

Cyborg Fly Pilots Mobile Robot
Thu, August 26, 2010

Blog Post: Swiss researchers have made a fruit fly steer a mobile robot through an obstacle course

Man Replaces Eye with Bionic Camera
Fri, June 11, 2010

Blog Post: Canadian filmmaker Rob "Eyeborg" Spence has replaced his false eye with a bionic camera eye

Monkey Controls Robot with Mind
Wed, June 02, 2010

Blog Post: A monkey with a brain-machine interface commands a 7-degree-of-freedom robotic arm

Robot Bacteria Builds Pyramid
Thu, March 25, 2010

Blog Post: Researchers made a swarm of bacteria perform micro-manipulations and build a tiny pyramid

Cockroach-Inspired Robot Dashes Off
Tue, October 13, 2009

Blog Post: This UC Berkeley robot can survive a 7-story fall -- and dash off at high-speed

Omniwheels Gaining Popularity in Robotics

A recent workshop that involved some rapid development using RoboCup robots, a demonstration of KUKA's omniMove platform, and my previous posts on KUKA's youBot and the ETH Zurich's Rezero ballbot have gotten me interested in omniwheels.

Omniwheels and their variations (also called omnidirectional wheels, Swedish wheels, Mecanum wheels, or Ilon wheels) are an ingenious invention that allows a platform to move in any direction while facing any other direction.

Historically, omniwheels date back to a 1919 patent by J. Grabowiecki, and they have since been built in many different designs. Since their introduction by the Cornell RoboCup team in 2000 (see Raffaello D'Andrea's research paper), they have been widely used in some RoboCup categories, where they allow the soccer robots to move in a straight line while rotating, so that they arrive with the desired orientation.
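
The math behind that trick is pleasingly simple: with three (or four) omniwheels, each wheel's speed is just the projection of the desired body velocity onto that wheel's rolling direction, plus a term for rotation, so translation and rotation decouple completely. A sketch for a symmetric three-wheel layout (wheel angles and radius are assumed values, not taken from any particular robot):

```python
import math

# Wheels mounted symmetrically around the body; each rolls along the
# tangent of the body circle. Layout and radius are assumed values.
WHEEL_ANGLES = [math.radians(a) for a in (0, 120, 240)]
ROBOT_RADIUS = 0.1  # meters from center to each wheel

def wheel_speeds(vx, vy, omega):
    """Project the body velocity onto each wheel's rolling direction,
    then add the contribution of body rotation (radius * omega)."""
    return [
        -math.sin(a) * vx + math.cos(a) * vy + ROBOT_RADIUS * omega
        for a in WHEEL_ANGLES
    ]

# Translate along +x while spinning in place -- impossible for a normal
# wheeled platform, trivial for an omnidirectional one:
print(wheel_speeds(vx=0.5, vy=0.0, omega=2.0))
```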

Apart from the RoboCup setting, omniwheels are also used in omnidirectional conveyor systems, for example for handling packages. As you can see from the top picture (showing the parts of the custom-made omniwheels for the Rezero project), the mechanical design of omniwheels is complex. This complexity, along with the inevitable vibrations and comparatively high wear and tear, has limited their use to a few niche applications.

One such niche is moving very heavy or large goods, such as aircraft wings or the rotor blades of wind turbines. The picture above shows KUKA's heavy-load carrier, the HLC 60000. Built in 2007, it is used for internal logistics with payloads of up to 60 tons. The biggest KUKA omniMove vehicle ever built is 32 meters long and 4 meters wide and has a payload of 100 tons.

Other niches are forklifts for aircraft carriers made by U.S. company Airtrax or some versions of Segway's RMP platform.

Another interesting, but still more futuristic, application of omniwheels is to drive the spheres used in ballbots like Masaaki Kumagai's BallIP and the Rezero (or in cars in the movie "I, Robot," where they allow omnidirectional movement without reorientation of the drive unit).

Will omniwheels become the wheel of choice for robotics, or will they remain a niche?

More photos:

Ray Oung's Distributed Flight Array (DFA) wheel

Omniwheel on the Rezero robot at the ETH Zurich in Switzerland

Thanks Raff, Robert, Ray and Mike!

Long Exposure Pictures of Robots Cleaning

This is a set of all of the different long exposure pics that I’ve taken while doing reviews of cleaning robots. I spent a likely excessive amount of time reviewing each robot in detail (and you can read those reviews at the links below), but really, the pictures explain the cleaning patterns better than I ever could. It’s illuminating (so to speak) to see them all right up against one another, so here they are:

iRobot Roomba 560, 45 minutes

Neato Robotics XV-11, 12 minutes

Evolution Robotics Mint (Sweeping Mode), 15 minutes

Evolution Robotics Mint (Mopping Mode), 23 minutes

Three very different robots, three very different cleaning techniques. Check out the individual reviews for more details on each.

[ iRobot ]
[ Neato Robotics ]
[ Evolution Robotics ]

Interview: Ken Goldberg Discusses Telerobots, Androids, and Heidegger


Ken Goldberg explores the interactions between technology, art, and media. Photo: Martin Sundberg

This year saw the invasion of telepresence robots—electromechanical avatars that allow you to be there without actually being there. Today, half a dozen companies are selling, or will start selling, these robots. I’ve tested two of the robots myself, discussing at length their technical merits as well as their practical shortcomings, and even helped a colleague build his own robotic self.

Although the technology behind these robots is fascinating, I’m also interested in the historical and philosophical aspects of telepresence. Telepresence robots didn’t come out of nowhere; they stem from a convergence of different technologies, each with its own history. The advent of robotic telepresence also reflects a moment in time when many of us are becoming ever more connected and available.

 


So what made these robots possible now? What’s so appealing about roaming around as a machine in a remote place? And where is this technology taking us, literally and figuratively?

To explore these themes, I spoke with Ken Goldberg, a robotics professor at the University of California, Berkeley, and a member of IEEE Spectrum’s editorial advisory board. When he’s not building robot cameras to spot wild birds or computer-controlled flexible needles that steer through soft tissue, he’s delving into the interactions between technology, art, and media.

If anyone can make a connection between robots and Heidegger, it’s Ken.

Erico Guizzo: I recently asked Marvin Minsky what he thought of current telepresence robots. He complained that they don’t have legs. And I’ve seen other people complaining that they don’t have arms. What do you think of their design?

Ken Goldberg: The main issue is cost. When you add legs or arms, you need more actuators, more sensors, more computation, and it gets expensive very fast. But at the same time, robotic parts and technologies are getting better and cheaper. I think it was Brian Carlisle [former CEO of Adept Technology] who said that if you can sell a car, which is a lot of metal and a lot of engineering together, for under 10,000 dollars, we should be able to do the same for robots. We need volume to make them cheaper. So, of course, we want robots that clean up the house and hopefully change diapers. But we have to start somewhere, and the one thing that is easier to do is communication. Today’s telepresence robots are communication tools, and they raise some interesting design questions. For instance, how tall should they be? Rovio [a home robot sold by WowWee] is small, so you probably can’t have an eye-to-eye conversation, unless you want to talk to your kids. Or your cat. I think height, and eye-to-eye interaction, is important. Another thing I consider key is the ability to point at things. And that’s easy: You need just a laser and a two-axis gimbal. So there are many design issues to consider if you want to make telepresence a powerful experience.

EG: And why do we want to physically extend ourselves to distant places anyway? Telephone and Skype aren’t enough?

KG: The idea of remote control, that you can click a button here and something happens over there, is a very powerful and satisfying experience. We love our TV and garage remotes. Robots have a very intertwined history with this idea of remote operation. It might go back to Tesla’s famous experiments with a radio-controlled boat, which he demonstrated in New York in 1898. After World War II, the first robots were master-slave telerobots used to handle radioactive substances. Today telerobots are used for exploration, in space and underwater, and more recently for bomb disposal. Now, telepresence is different because you’re not manipulating an object or performing a repair; you’re interacting with people. So there are humans on both ends. The goal is to give the remote operator a sense that he or she is closer to the people on the other end. And hopefully vice versa. Naturally, the telephone and things like Skype are more or less trying to do the same thing. But the key question is, What’s missing? How can you make the experience of “being there” more fulfilling? One of the benefits of the robots is that they enhance the sense of agency, of being an agent, in the remote environment. You are not just a passive conversationalist; you can actually move around and explore. And that really matters. The ability to control where you are is empowering and gives you a different set of possibilities. There’s more spontaneity and discovery.

EG: In 2001 you edited a collection of essays titled, “The Robot in the Garden: Telerobotics and Telepistemology in the Age of the Internet.” One of the articles, by John Canny and Eric Paulos, describes a telepresence robot [image, right] very similar to the commercial versions we’re seeing today. Why did it take nearly 10 years for these robots to become commercially viable?

KG: Back then, when people like John Canny and Eric Paulos were developing various kinds of telerobots and camera systems, the Internet and wireless networks weren’t as fast and reliable as they are today. Now networks have more bandwidth, better quality of service, lots of error correction. That makes a huge difference. The other thing that has changed is that it’s less expensive to build a robot today, because the components you need are getting better and cheaper. So you’re seeing these companies coming out, like Vgo and Anybots, trying to commercialize these robots. The big question is, When can they get the price down to a point where it’s available to a large number of people? When that happens, things will get very interesting.

EG: Today we use cellphones, e-mail, instant messaging, Twitter, Facebook—and soon some of us may be using telepresence robots. We’re staying connected in more ways and for longer periods of time. Where is this going?

KG: I explore some issues related to that in a course I teach with Dreyfus [UC Berkeley philosopher Hubert Dreyfus] on the philosophy of technology. The goal of the course is to discuss what the “essence” of technology is. We want to give students, many of whom will be creators of technology, a broader historical and social perspective to understand technology. Our starting point is a 1954 essay by Heidegger, “The Question Concerning Technology.” Let me say first that Heidegger is a problematic figure. He wasn’t the nicest guy. But we can’t dismiss everything that he wrote. He’s considered by many as a fundamental philosopher for the 21st century. So, in a nutshell, what Heidegger says in this essay is that technology is really a “mode of being,” a sort of attitude or culture we are immersed in. It’s not something we can consciously adopt. It’s all around us, we’re engulfed in it. He says we’re in a technological mode of being that is all about making the world available. The key concept here is availability. For example, he mentions the Rhine River. Rather than approaching the river as primitives, who might ponder how the gods created the river, or artists and poets, who would focus on the beauty of the river, our approach is that the river is a resource to generate power. He argues that we approach the world around us, nature in particular, as something that we should use to make other things available. Where it gets really interesting is that the availability starts to take on a momentum of its own. So we don’t necessarily want anything in itself; all we want is everything to be transformable into something that we’ll need in the future. The key thing that Heidegger hints at, and worries about, is that this worldview, if it continues in the direction it’s going, will overwhelm us, and then we’ll do the same thing to ourselves: We’ll see ourselves as resources.

EG: I guess Heidegger would have hated telepresence robots…

KG: Yeah, maybe. Telepresence is another form of availability, going beyond the forms we have. As technologists, we’re excited about it, but from Heidegger’s perspective, it’s another step along that trajectory that he is worried about. He’s not a doomsayer; he’s just saying we should beware. At the end of the essay he comes to this point where he sees this supreme danger, when we’ll be engulfed and overwhelmed and we’ll want to make ourselves constantly available. And you see that today with Facebook, Twitter, and smartphones, compared to 10 years ago. We’ve become much more instantly available to each other and to the world, and we are willing to put out all kinds of information that was previously private. And also we are, especially the younger generations, happy to be available. So we’re transforming ourselves into resources. But Heidegger also saw a bright side, a way out of this situation. As we move closer to being consumed by this technological mode of being, the positive outcome is that we’ll be jolted into realizing what we’re doing and we’ll have the capability of stepping out of it. In other words, he’s saying we have to hit bottom before we stop the madness.

EG: Almost ten years ago, you participated in a telepresence research project called the Teleactor, using people as proxies for other people. Is the Teleactor a precursor to robotic telepresence?


Teleactor Annamarie Ho. Photo: Bart Nagel

KG: The idea there was to have a person—the teleactor—who would be your proxy and go to a place when you’re unable to go yourself. The teleactor would wear a camera and microphone, transmit that data over the Net, and you’d see and hear through them. So you would finesse the robot part by having a human. The teleactor also had an earpiece and would get instructions from you about where to go and who to talk to. We did a lot of experiments, but the technology was not there yet. First we were using analog video, and we were constantly getting interference. Then we switched to Wi-Fi, and it was the first versions of Wi-Fi, and it had its own problems. The network connections were also slow, and we were constantly fighting network delays. And we were using Java applets that were slow and Web interfaces that barely had video capabilities. It was primitive. I wish we had had 4G networks to do the audio and video. That would have made things much easier!

EG: So when the operator spoke, the teleactor repeated what was said?

KG: Not in a robotic way. You can imagine that if the teleactor is just literally talking and acting like a robot—that’s sort of creepy. [Laughs.] Our idea was more like an actor taking directions from the remote person. So the teleactor can also improvise a little. Imagine it’s a meeting or a party and I’m connected to a teleactor. The teleactor can see a group of people and go near them, or just wait for you to decide where to go. We wanted to investigate the interactions that would take place and see how they’d compare to normal situations. I think someone should repeat the experiment today. Imagine you hire out-of-work actors, who love to engage with strangers, and send them as teleactors to parties and events. People who can’t attend the party in person would go as teleactors.

EG: And in the future we can replace teleactors with androids! We’re already seeing some steps in that direction. What do you think of telepresence robots that look like people, like the androids Hiroshi Ishiguro is creating?

KG: When we think about robots they are sort of generic in shape and appearance. [Ishiguro] is not doing that at all. His robots are not only very human but also very specific to individual humans. And these are not just functional machines that’d do work or entertain; these things are actually out there to act as surrogates in a very real way. It’s an idea with connections to psychology, mythology, and science fiction. It goes back to Galatea, the Golem, and later Pinocchio and Frankenstein and Blade Runner, and all the attempts to create something that’s very lifelike. He’s really pushing the limits and asking some deep questions about ourselves. It’s a really Cartesian question: Are we automatons or not? If robots become indistinguishable from humans, how are we going to treat them? It also forces us to reflect about our own mortality. It evokes this memento mori—just remember we’re going to die. Life is short: appreciate the present. How will you be remembered, if at all? Can something of me live beyond me? With these robots, just like a portrait or statue or other replica, you’ll be able to have something that conceivably could live beyond your existence. These androids help us explore these questions. Maybe we’ll discover that there are things we want to avoid. But the only way we’re going to know is by experimenting.
