Three years ago, Juan Wachs was brainstorming engineering projects with his Ph.D. students when someone suggested a robotic nurse that could hand instruments to a surgeon during an operation.
“We started laughing,” says Wachs, an assistant professor of industrial engineering at Purdue University, in West Lafayette, Ind. “In the beginning, it was more like a joke. But then one of the students came up with some algorithms that suggested it could actually be done. We said, ‘If we don’t try this, we’ll never know.’ ”
Today, Wachs and his students are part of a multinational team developing a surgical robot, dubbed Gestonurse, designed to respond to hand gestures and verbal commands. The gestures and commands correspond to specific instruments used by surgeons.
The main goal is to reduce medical errors. “Most surgical mistakes are related to miscommunication between operating teams due to understaffing, high noise levels, long working hours, and changing operating room personnel,” says Wachs. “Misunderstandings can result in handling the wrong instruments, delays, and items left inside patients. We found that experienced teams make almost no mistakes because their interactions with one another occur by multimodal communication: speech, gestures, and gaze.”
“So far, the Gestonurse understands 10 commands issued through gestures and/or speech,” says Wachs. “In the future, it will need to understand more verbal and nonverbal commands, such as body stance.”
The Gestonurse takes the form of a robotic arm attached to a camera- and microphone-equipped computer, which translates hand gestures and verbal commands into instructions that tell the arm which instrument to hand the surgeon. Wachs’s focus is on developing the speech and gestural recognition software for the Gestonurse project.
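The article doesn’t describe Gestonurse’s software internals, but the pipeline Wachs outlines — recognize a gesture or a spoken word, map it to one of a small set of instruments, and dispatch the request to the arm — can be roughly sketched. Everything below, including the function names and the command vocabulary, is a hypothetical illustration, not the project’s actual code:

```python
from typing import Optional

# Illustrative command vocabulary; the real system understands
# 10 commands, but their exact names are not given in the article.
COMMANDS = {
    "scalpel": "scalpel",
    "forceps": "forceps",
    "scissors": "scissors",
    "retractor": "retractor",
    "clamp": "hemostat clamp",
}

def fuse(gesture: Optional[str], speech: Optional[str]) -> Optional[str]:
    """Resolve a recognized gesture and/or spoken word to one instrument.

    If both modalities fire, they must agree; a mismatch is treated as a
    likely misrecognition and rejected rather than guessed at.
    """
    candidates = {COMMANDS[c] for c in (gesture, speech) if c in COMMANDS}
    if len(candidates) == 1:
        return candidates.pop()
    return None  # ambiguous or unrecognized: ask the surgeon to repeat

print(fuse("scalpel", None))       # gesture alone suffices
print(fuse("scalpel", "scalpel"))  # modalities agree
print(fuse("scalpel", "forceps"))  # conflict -> None
```

The “reject on disagreement” rule is one plausible way to use redundant modalities for error reduction — the article’s stated goal — though the real system may weigh or combine the two recognizers quite differently.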
“The biggest challenge is incorporating human idiosyncrasies,” he says. “People use gestures in different ways. The system has to be trained to recognize individuals. We have to look at the context of the task: how the gestures and body language respond to the procedure, where in the body the instrument is being inserted, and the difference between using hands to talk versus command gestures.”
Wachs and the team are currently working toward building a version of Gestonurse to be tested in mock surgery scenarios, thanks to an anticipated three-year grant of more than US $1 million from the Qatar National Research Fund. The project will involve the collaboration of surgeons at the Hamad Medical Center in Qatar.
Crossing cultures is familiar to Wachs, who grew up in Argentina and received his higher education in Israel before moving to the United States. His fascination with robotics grew while earning a Ph.D. in industrial engineering at Ben-Gurion University of the Negev, in Israel, where he developed a system that used hand gestures to browse MRI images. After graduating in 2006, Wachs spent 18 months as a postdoc at the Naval Postgraduate School, in Monterey, Calif., applying his research toward naval training, before arriving at Purdue in 2009. A few months ago he received a U.S. Air Force Young Investigator award for interface research.
Wachs is also involved in a multidisciplinary team at Purdue collaborating with SRI International in Menlo Park, Calif., to reconfigure its Taurus military robot for medical purposes. The robot was initially designed to disassemble bombs, but it can also serve as a dexterous surgical assistant, and at 36 by 13 centimeters, it is easily transported between operating rooms.
“It takes a mixture of computing, engineering, and psychology to create man-machine interfaces that are able to read our body language,” says Wachs. “We’re having to invent a whole algorithmic system to understand context. It’s very subjective. It’s a new way of looking at context and understanding intent.”
This article originally appeared in print as “Juan Wachs.”