Fusion: A Collaborative Robotic Telepresence Parasite That Lives on Your Back

A robot perched behind you can see what you see and control both your arms


Evan Ackerman is IEEE Spectrum’s robotics editor.

Keio University's Fusion telepresence system
Photo: Keio University

Most of the telepresence robots that you can buy are appealing because they offer you some sort of mobile agency—like, the ability to remotely drive yourself around. Robots like these are great if you want to, say, find yourself an elephant, but not all that great if you want to help other people out through collaborative tasks that require physical interaction. Collaboration, especially instruction, often depends on the physical act of one person showing another person how to do something, and even if your telepresence robot has an arm or two, it may not be at all intuitive for a remote user to have effective direct interactions.

At Keio University in Japan, roboticists have developed a new kind of telepresence robot that’s designed to (as literally as possible) allow you to remotely inhabit the body of someone else in order to assist them with manipulation tasks. (A similar idea, the Tele-Actor, was conceived by roboticist and artist Ken Goldberg and colleagues from UC Berkeley.) The Keio researchers call their system Fusion, and it lives on someone’s back, allowing you to peek over their shoulder and use a second pair of arms to either show them how tasks are done, or even to physically move their limbs for them.

Fusion enables body surrogacy by sharing the same point of view between two people: a surrogate and an operator. It extends the operator’s limb mobility and actions using two robotic arms mounted on the surrogate’s body. These arms can be used independently of the surrogate’s arms in collaborative scenarios, or can be linked to the surrogate’s arms for remote assisting and supporting scenarios.

The operator uses an off-the-shelf HMD (an Oculus Rift CV1) to see through and access the surrogate’s body. The surrogate wears a backpack consisting of a three-axis robotic head with stereo vision and binaural audio, plus two anthropomorphic robotic arms (six degrees of freedom each) with removable hands.

Usually, “surrogate” in contexts like these refers to a completely independent robot that’s controlled by a human, as with MIT’s HERMES project—the remote human sees through the robot’s eyes in VR, while controlling its limbs by moving their own. While similar in its principles of operation, Fusion instead lives on the back of a human host, where it operates with a varying amount of invasiveness:

  • Directed: Where a pair of humanoid hands can assist or instruct the surrogate host.
  • Enforced: Where the hands are replaced with, uh, let’s just go ahead and call them restraints, so that the remote user can exercise direct physical control over the surrogate host.
  • Induced: Where the remote user forcibly directs the surrogate host by yanking them around.

One big advantage of Fusion is that the remote user gets pretty much the exact same perspective as the surrogate host, making it easier to give feedback in physical tasks. Best case scenario, it’s like having a friend standing behind you, gently helping you do your best. Worst case scenario, it’s like having a friend standing behind you, asking why you’re hitting yourself as they force you to repeatedly punch yourself in the face. Either way, I’m sure it would be a unique learning experience.

“Fusion: Full Body Surrogacy for Collaborative Communication,” by MHD Yamen Saraiji, Tomoya Sasaki, Reo Matsumura, Kouta Minamizawa, and Masahiko Inami from Keio University and the University of Tokyo, was presented at SIGGRAPH Emerging Technologies 2018.

[ Keio University ] via [ Dezeen ]
