Using Robots to Train the Surgeons of Tomorrow

Robotic tools may lead to better methods of training and evaluating surgeons

This article is the first of a series that will explore recent advances in surgical and medical robotics and their potential impact on society. More articles, videos, and slideshows will appear throughout the year.

Da Vinci surgical system. Photo: Kelleher Guerin

How can the skill of a surgeon be measured? A patient’s body has no buzzer that alerts the surgeon when mistakes occur during an operation. There is no Yelp-like website that ranks a surgeon based on user reviews. Surprisingly, people may spend less time selecting a surgeon for an operation than they do choosing a restaurant for dinner or a mechanic to fix their car.

According to a study from the U.S. Agency for Healthcare Research and Quality, surgical complications (including postoperative infections, foreign objects left in wounds, surgical wounds reopening, and postoperative bleeding) resulted in 2.4 million extra days of hospitalization, $9.3 billion in excess charges, and 32,000 deaths, most of them surgery related, in the United States in 2000.

To what extent training is responsible for those errors is unknown. Some argue that most surgeons never achieve true expertise. One thing is certain, though: Residents need better, more effective training. It isn’t sufficient to have residents merely go through the motions; they must be able to practice deliberately. The problem is that residents already work inhumanely long hours (recent regulations limit their training to 80-hour work weeks, but they typically work more than that), and they must learn a growing number of surgical techniques and technologies, which means new generations of surgeons have less and less time for hands-on practice.

In the past few years, several research groups, including our team at Johns Hopkins University, have been working to analyze and automate the training process using modern robotic surgical tools. Our goals are to create an objective, standardized method of surgical training as well as to reduce the time and cost of having an experienced surgeon in the training loop.

Surgical skill can be broken down into theoretical skill (consisting of factual and decision-making knowledge) and practical skill (the ability to carry out manual tasks such as dissection and suturing). Theoretical skill is often taught in a classroom and is thought to be accurately tested with written examinations like the Medical College Admission Test (MCAT) and the United States Medical Licensing Examination (USMLE). Practical skill, on the other hand, is much more difficult to judge.

Practical skills, such as driving a car, swinging a golf club, or throwing a football, are most effectively taught “in the field” through demonstration. In 1889, William Halsted at Johns Hopkins University revolutionized surgical training by developing an apprentice-style technique still used in most surgical residency programs today. According to this method, a resident would “see one, do one, teach one”: after observing a procedure and then performing it once, the resident was expected to have mastered the skill and to be capable of teaching the next novice. (Residents practice certain procedures more than once, but the principle remains that one exposure is essentially all they need before going out in the field and performing on their own.) Although many talented surgeons have been trained this way, the method is time consuming, and evaluating a student’s performance is a subjective task that varies with each student/teacher pair. The method also involves a lot of yelling.

With the advent of technologies such as robotic surgical systems and medical simulators, researchers now have the tools to analyze surgical motion and evaluate surgical skill. Our group is studying human-machine interaction for surgical training and assistance in multiple contexts with increasing levels of complexity. The first level involves a system that understands what the human and environment are doing. The next level of interaction is for machines to provide assistance to a human operator through augmentation. The last level is to have a robot perform a task autonomously. We’ll describe the state of research in each of these areas.

Understanding the surgical environment

Language of surgery. Photo: Carol Reiley

There is an active effort to develop new approaches to surgical training and evaluation. Using techniques from speech recognition, our group is developing mathematical models for motion recognition and skill assessment. These models may be the key to standardizing surgical training by decomposing complex surgical tasks like suturing, blunt dissection, and cutting into elementary “chunks” of motion, thus decoding the “language of surgery.”

These motions can be compared to phonemes, the elementary units of speech. Sequences of subtasks can be constructed like words to form sentences (analogous to various surgical tasks), which can then be used to form paragraphs (analogous to surgical operations). And, just as in speech, a recognition program might call attention to poor “pronunciation” or improper “syntax” in surgical execution, and it can try to understand the surgeon’s intent from recorded motion and video data. (This research typically focuses on telepresence surgery as performed using the da Vinci system from Intuitive Surgical.) Using our skill evaluation system, trainees can have their trials evaluated offline or see their trial synchronized with a prerecorded expert trial to shorten the learning curve.
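
To make the analogy concrete, here is a deliberately simplified sketch, in Python, of how a stream of recorded tool-tip kinematics could be transcribed into motion “chunks.” The gesture vocabulary, window size, and nearest-centroid classifier are illustrative assumptions; the actual research uses richer statistical models trained on expert demonstrations.

```python
# Toy sketch of the "language of surgery" idea: label short windows of
# recorded tool-tip kinematics with elementary gesture "phonemes" using a
# nearest-centroid classifier, then merge consecutive labels into segments.
# The gesture names and features are illustrative, not the actual models.
import numpy as np

GESTURES = ["reach", "grasp", "pull", "release"]  # hypothetical vocabulary

def window_features(kinematics, win=30):
    """Split a (T x D) kinematic stream into windows and compute simple
    features (mean and standard deviation of each channel per window)."""
    feats = []
    for start in range(0, len(kinematics) - win + 1, win):
        w = kinematics[start:start + win]
        feats.append(np.concatenate([w.mean(axis=0), w.std(axis=0)]))
    return np.array(feats)

def fit_centroids(labeled_clips, win=30):
    """Average the windowed features of expert demonstration clips,
    one centroid per gesture label."""
    grouped = {g: [] for g in GESTURES}
    for kin, gesture in labeled_clips:            # one gesture label per clip
        grouped[gesture].append(window_features(kin, win).mean(axis=0))
    return {g: np.mean(v, axis=0) for g, v in grouped.items() if v}

def transcribe(kinematics, centroids, win=30):
    """Label each window with the nearest gesture centroid and merge runs,
    producing a 'sentence' of elementary surgical motions."""
    labels = [min(centroids, key=lambda g: np.linalg.norm(f - centroids[g]))
              for f in window_features(kinematics, win)]
    sentence = [labels[0]]
    for lab in labels[1:]:
        if lab != sentence[-1]:
            sentence.append(lab)
    return sentence
```

A transcription like ["reach", "grasp", "pull", "release"] can then be compared against an expert’s transcript to flag missing or out-of-order motions, which is the sense in which “syntax” errors could be detected.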

Augmenting the surgical environment

Kidney stone image overlay. Photo: Balazs Vagvolgyi

Super-surgeon performance may be achievable if human intelligence is combined with robotic accuracy and precision. Computer-integrated surgery, using equipment such as a robotic system with a video display, can enhance human senses by providing additional information. For example, the visualization can overlay a reconstructed CT scan of a tumor on the operating site, or the robot can use force feedback to prevent a surgeon’s hand from puncturing a beating heart.

Studies have shown that superimposing graphics, sounds, and forces over the real-world environment in real time can assist with training.
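
The force-feedback behavior described above is often framed as a “virtual fixture.” The sketch below uses illustrative geometry and margins rather than values from any real system, but it shows the core idea: a commanded motion that would carry the tool past a safety boundary is corrected back onto the allowed side.

```python
# Minimal sketch of a forbidden-region "virtual fixture": commanded tool
# motion that would cross a safety plane (e.g., near delicate tissue) is
# projected back onto the allowed side. Geometry and the safety margin are
# illustrative, not taken from any specific commercial or research system.
import numpy as np

def apply_virtual_fixture(tool_pos, commanded_step, plane_point, plane_normal,
                          margin=0.002):
    """Clip a commanded Cartesian step so the tool tip stays at least
    `margin` meters on the safe side of the plane defined by a point and
    an outward (safe-side) unit normal."""
    n = plane_normal / np.linalg.norm(plane_normal)
    target = tool_pos + commanded_step
    # Signed distance of the target position from the safety plane.
    dist = np.dot(target - plane_point, n)
    if dist < margin:
        # Push the target back onto the boundary of the allowed region.
        target = target + (margin - dist) * n
    return target - tool_pos  # the corrected step actually sent to the robot

# Example: a step toward the plane is shortened; motion along it passes through.
step = apply_virtual_fixture(np.array([0.0, 0.0, 0.01]),
                             np.array([0.0, 0.0, -0.02]),
                             plane_point=np.array([0.0, 0.0, 0.0]),
                             plane_normal=np.array([0.0, 0.0, 1.0]))
```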

Robots with intelligent sensors can address humans’ physiological limitations such as poor vision or hand tremor. Even the best surgeons can use intelligent assistance to improve performance. Force-sensing “smart” surgical instruments will allow for safer and more effective surgeries. For example, they can measure the local tissue oxygen saturation on the working surfaces of surgical retractors and graspers so that tissue isn’t permanently damaged.

JHU Steady-Hand Eye Robot. Photo: Marcin Balicki

The JHU Steady-Hand Eye Robot is used for retinal microsurgery; the surgeon and the robotic manipulator share control of the instrument, which reduces hand tremor and allows precise, steady motion. Shaky-handed surgeons, there’s hope for you yet!
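
Conceptually, this kind of shared control can be thought of as an admittance law: the robot moves at a velocity proportional to the low-pass-filtered force the surgeon applies to the tool handle, so deliberate motion passes through while high-frequency tremor is attenuated. The following is a simplified illustration with invented gains and filter constants, not the actual JHU controller.

```python
# Simplified sketch of steady-hand (cooperative) admittance control: the
# robot's velocity follows the low-pass-filtered force the surgeon exerts
# on the shared tool handle. Gains and time constants are illustrative only.
import numpy as np

class SteadyHandController:
    def __init__(self, gain=0.005, cutoff_hz=2.0, dt=0.001):
        self.gain = gain                      # m/s of robot motion per newton
        rc = 1.0 / (2 * np.pi * cutoff_hz)    # first-order low-pass constant
        self.alpha = dt / (dt + rc)
        self.filtered_force = np.zeros(3)

    def step(self, handle_force):
        """Return the commanded tool velocity for one control cycle, given
        the 3-axis force the surgeon exerts on the instrument handle."""
        self.filtered_force += self.alpha * (handle_force - self.filtered_force)
        return self.gain * self.filtered_force

# A 10 Hz tremor component is largely filtered out; a steady 0.5 N pull is not.
ctrl = SteadyHandController()
t = np.arange(0, 1, 0.001)
forces = np.stack([0.5 + 0.3 * np.sin(2 * np.pi * 10 * t),
                   np.zeros_like(t), np.zeros_like(t)], axis=1)
velocities = np.array([ctrl.step(f) for f in forces])
```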

The robot surgeons of the future

Researchers are now moving toward understanding how humans and machines can work together as a team to collaboratively finish a surgical task. Training models can be used to automate portions of a tedious task or to predict a surgeon’s intent and automate an instrument change. Automation might also allow a surgeon to use more than two arms of the system at the same time: although the da Vinci surgical system has four arms (three to hold tools and one for the camera), the third tool arm generally sits idle, since humans can control only two arms at any given moment.

University of Washington Raven. Photo: BioRobotics Lab

The University of Washington’s Raven system is an impressive mobile surgical robot used for telesurgery. In the next few months, seven schools will receive the system as part of a multi-institutional grant: Johns Hopkins University, UC Santa Cruz, University of Washington, UC Berkeley, Harvard, University of Nebraska, and UCLA. Orders are already in for the next iteration from schools in Florida, Toronto, and Minnesota. This standardized research platform should lead to exciting new work in telesurgery and surgical training over the next few years.

Raven is a mobile laparoscopic surgical system. Because it is modular, it is more portable than the massive surgical robots used in hospitals and can be disassembled and reassembled by a small team. And while most commercial surgical robots weigh nearly half a ton, Raven weighs only 23 kilograms (about 50 pounds). This makes it ideally suited for hazardous environments.

Telesurgery experiments with the Raven generally involve a surgeon at a safe location operating a robot in the field; for example, underwater in a submarine pod or in the desert under scorching temperatures and gusting winds. Control commands and sensor feedback are transferred over a wireless connection. Research questions include how time delays affect performance, how multiple surgeons can operate robots together to complete a surgery, and how surgeons can train on the platform most effectively.
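
The latency question in particular lends itself to simple simulation: commands from the surgeon’s console can be queued and released to the remote robot only after the link delay has elapsed, and task performance can then be measured as the delay grows. Here is a toy sketch of such a delay buffer, with all parameters hypothetical.

```python
# Toy sketch for studying latency in teleoperation: operator commands are
# held in a FIFO queue and delivered to the remote robot only after the
# simulated link delay has elapsed. The delay value and control rate are
# illustrative; real experiments measure task performance under each delay.
from collections import deque

class DelayedLink:
    def __init__(self, delay_s=0.25, dt=0.01):
        self.queue = deque()
        self.delay_steps = int(round(delay_s / dt))

    def send(self, command):
        """Queue a command sent from the surgeon's console this cycle."""
        self.queue.append(command)

    def receive(self, default=None):
        """Return the command that reaches the remote robot this cycle,
        or `default` until the pipeline has filled."""
        if len(self.queue) > self.delay_steps:
            return self.queue.popleft()
        return default

# Example: with 250 ms of one-way delay at 100 Hz, the robot lags by 25 cycles.
link = DelayedLink(delay_s=0.25, dt=0.01)
for step in range(30):
    link.send(("move", step))
    robot_cmd = link.receive(default=("hold", None))
```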

The surgical environment in the operating room is unlike any other because of the constantly moving objects, because no two procedures are identical, and because of the sterilization/FDA approval issues. The state of surgical robotics is still a long way from one-button autonomous surgery, but the future of surgical training might be undergoing a major “facelift.”

About the Authors

Carol E. Reiley is currently finishing her doctoral research in surgical robotics at Johns Hopkins University and running TinkerBelle Labs, which focuses on creating low-cost, do-it-yourself projects. Reiley, who was the student chair of the IEEE Robotics and Automation Society for 2008-2010, earned her bachelor’s degree in computer engineering at Santa Clara University and her master’s in computer science at Johns Hopkins.

Gregory D. Hager, an IEEE Fellow, is a professor in the computer science department at Johns Hopkins University, where his research interests include computer vision, robotics, medical devices, and human-machine systems. He directs the Computational Interaction and Robotics Lab and is the deputy director of the NSF Engineering Research Center for Computer-Integrated Surgical Systems and Technology (CISST).
