Computer-based simulators hone operating skills before the patient is even touched
In 1994, U.S. Army Colonel Richard M. Satava, general surgeon at the Walter Reed Army Medical Center in Washington, D.C., made a bold declaration about the future of medicine: “We are in the midst of a fundamental change in the field of medicine which is enabled by the information revolution,” he wrote in a paper entitled “The King is Dead.” Of the many disciplines arising from this new information era, “virtual reality holds the greatest promise,” he proclaimed.
A leading advocate of modernizing medical practice through information technology, Satava envisioned the day when surgeons would hone their skills on virtual reality simulators and medical students would train on virtual cadavers “nearly indistinguishable from a real person.”
Six years have passed, and in a sense, Colonel Satava was wrong: go to any hospital and medicine is practiced much as it was when he wrote his paper. Sleep-deprived medical residents still learn the ropes by working on whatever cases happen to be wheeled through the door. Patients still undergo extraordinarily painful surgery, requiring weeks of recovery in bed. And virtual reality (VR) is more likely to be found in the hospital arcade room than in the operating room.
But the technology is starting to make inroads in surgical training. VR “is just beginning to come to that threshold level where we can begin using simulators in medicine the way that the aviation industry has been using it for the past 50 years—to avoid errors,” said Satava, now a professor of surgery at Yale Medical School, in New Haven, Conn.
Rapid improvements in computing power today have opened the way for desktop virtual reality trainers that incorporate realistic graphics and, in some cases, the sense of touch. Affordable commercial simulators, for instance, are now available for practicing such tasks as threading flexible endoscopes down a virtual patient’s throat or manipulating the long surgical instruments used in laparoscopy. Companies and universities are also developing systems that simulate more complex procedures, such as suturing tissue and inserting a catheter into a vein using laparoscopic tools.
These VR trainers can be adjusted to the user, to pinpoint areas of weakness, and they can be used at any time, without the need for supervision. What's more, they prepare the student psychologically for surgical tasks, because complications can be simulated in a safe manner. They can also give objective scores of a student's ability.
Indeed, studies show that computer-based training simulations are at least as good as standard training methods. Eventually, it is hoped, medical students and practicing physicians will be able to walk into the computer laboratory of their local hospital and practice a certain kind of surgery or test their dexterity with surgical tools.
This new technology comes at a critical time. A recent report entitled “To Err Is Human: Building a Safer Health System,” released by the Institute of Medicine in Washington, D.C., estimates that medical errors may cause 100 000 patient deaths each year in the United States alone. Proponents of VR believe that incorporation of the technology into medical training will bring this grim statistic down.
Today, though, only 1 percent of medical students in the United States receive any type of virtual reality training, Satava estimates. For the most part, surgeons still train using crude, age-old methods, like peeling grapes or operating on dead human bodies or anesthetized animals.
Practice makes perfect
Prototypes of VR trainers have been around since the early 1990s, and some commercial products have come to market [see “Play time at medical school”]. For the most part, they are designed to do very specific tasks—sewing veins together, inserting catheters, and the like. It turns out, though, that modeling the myriad textures and responses of the human body is vastly complicated, and the existing data are relatively scant.
“We're good at simulating simple things, like poking needles into chests and drawing fluids out,” said Alan Liu, a VR expert at the Uniformed Services University of the Health Sciences' Medical Simulation Center, in Bethesda, Md. “We're less successful with elaborate procedures, like making incisions and going after tumors. And as for complete end-to-end simulation—prepping a patient, opening up, doing a transplant or repairing an organ, and closing up again—nobody has done anything like that in its entirety.”
Much of the work in medical simulation has been spearheaded by the U.S. military, which has, of course, an interest in ensuring proper medical care on the battlefield [see “The virtual emergency room”].
A typical trainer consists of a mono- or stereoscopic display (either a cathode ray tube or a head-mounted display), a PC or more powerful computer, and an interface device for interacting with the simulation. While the basic goal of all surgery trainers is the same, there is a multitude of styles of simulation.
A heated debate in the field today is how far developers should go to reproduce the look and feel of true surgery. Current technology allows for the simulation of touch [Fig. 1] and sound (such as a patient expressing discomfort), as well as for lifelike images. But the cost of greater realism can be prohibitive. So some companies have turned to simple graphics-only VR simulations that can be run on low-end PCs.
Seeing is believing
Simulating human tissue—be it tooth enamel, skin, or blood vessel—often starts with a sample from a flesh-and-blood person. Depending on the simulation needed, anatomical images can be derived from magnetic resonance images, video recordings, or the Visible Human Project, a computer model of a human developed by the National Library of Medicine, in Bethesda, Md. The image can then be digitally mapped onto a polygonal mesh representing whatever body part or organ is being examined. Each vertex of the polygon is assigned attributes like color and reflectivity from the image of the organ.
For the user to interact with the graphics, there must be software algorithms that can calculate the whereabouts of the virtual instrument and determine whether it has collided with a body part or anything else. Also needed are models of how various tissues behave when cut, prodded, punctured, and so on. Here, too, VR designers often portray the tissue as a polygonal mesh that reacts like an array of masses connected by springs and dampers. The parameters of these models can then be tweaked to match what a physician experiences during an actual procedure. To create graphics that move without flickering, collision detection and tissue deformation must be calculated at least 30 times per second.
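The mass-spring-damper idea described above can be sketched in a few lines. The toy model below (all parameters and the one-dimensional geometry are invented for illustration, not taken from any actual simulator) treats a strip of tissue as a chain of point masses joined by springs and dampers; a node that has been prodded out of place relaxes back toward rest spacing:

```python
import numpy as np

def step_tissue(pos, vel, rest=1.0, k=50.0, c=0.5, mass=0.1, dt=1/30):
    """One semi-implicit Euler step for a 1-D chain of masses joined by
    springs and dampers -- a toy stand-in for one row of a tissue mesh."""
    stretch = np.diff(pos) - rest          # how far each edge is stretched
    rel_vel = np.diff(vel)                 # damping acts on relative motion
    f_edge = k * stretch + c * rel_vel     # force along each edge
    force = np.zeros_like(pos)
    force[:-1] += f_edge                   # each edge pulls its left node right
    force[1:] -= f_edge                    # and its right node left
    vel = vel + (force / mass) * dt
    pos = pos + vel * dt                   # position uses the updated velocity
    return pos, vel

# Five nodes at rest spacing 1.0; "prod" the middle node out of place.
pos = np.array([0.0, 1.0, 2.2, 3.0, 4.0])
vel = np.zeros(5)
for _ in range(200):                       # several seconds of relaxation
    pos, vel = step_tissue(pos, vel)
```

Tuning `k` and `c` until the motion matches what a physician reports feeling is exactly the parameter-tweaking step mentioned above; real simulators do this over a 2-D or 3-D mesh while keeping the whole update inside the 30-Hz graphics budget.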
Advances in medical graphics may soon allow ordinary medical scans of a patient's anatomy to be enhanced into virtual three-dimensional views—a clear advantage for surgeons who are preparing to do a complicated procedure. Scans from magnetic resonance imaging (MRI) and computed tomography (CT) produce a series of thin slices of the anatomy divided into volume data points, or voxels; these slices can be restacked and turned into 3-D images by a computer [Fig. 2].
(2) A large mass is visible in this color-enhanced image, assembled from computed tomography scans of a patient's lungs and rib cage. GENERAL ELECTRIC CO.
The 3-D images can also be color-enhanced to highlight, say, bone or blood vessels, according to Ricardo Avila, a graphics scientist at General Electric Co.'s Corporate Research & Development unit, in Schenectady, N.Y., where such a system is under development. Imagery of this kind relies on a technique called ray tracing: an algorithm calculates which rays of light from the volume image would enter a virtual eye located at a point that will give the surgeon the desired perspective. The virtual eye can, for example, be induced to move down an esophagus, simulating the path an endoscopic probe would take.
“Three-dimensional renderings provide a concise way of depicting an entire data set, instead of flipping through lots and lots of [two-dimensional] images,” said Avila. A surgeon searching for, say, an abdominal aneurysm would quickly spot it in a volume-rendered display. And planning a procedure to fix it would also be easier.
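As a sketch of the idea, the fragment below (illustrative only, not GE's algorithm) casts parallel rays through a stack of restacked slices and composites voxel intensities front to back; a bright "mass" buried in the volume shows up in the rendered 2-D image:

```python
import numpy as np

def render_volume(volume, alpha_scale=0.3):
    """Toy parallel-ray volume renderer: march each ray front to back
    along axis 0, compositing voxel intensity with an opacity proportional
    to it. Real systems trace perspective rays and apply color transfer
    functions to highlight bone or vessels."""
    h, w = volume.shape[1:]
    color = np.zeros((h, w))
    transmit = np.ones((h, w))          # remaining transparency per ray
    for slice_ in volume:               # step every ray one voxel deeper
        a = np.clip(alpha_scale * slice_, 0.0, 1.0)
        color += transmit * a * slice_  # contribution of this depth
        transmit *= (1.0 - a)           # opaque voxels occlude what's behind
    return color

# Stack of 2-D "slices" (as if restacked from CT) with a dense blob inside.
vol = np.zeros((16, 32, 32))
vol[8, 10:14, 10:14] = 1.0
img = render_volume(vol)                # the buried mass appears bright
```

Moving the virtual eye down an esophagus, as described above, amounts to re-running this kind of traversal from a viewpoint inside the volume rather than outside it.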
Reach in and touch someone
Physicians rely a great deal on their sense of touch for everything from routine diagnosis to complex, life-saving surgical procedures. So haptics, or the ability to simulate touch, goes a long way to making VR simulators more lifelike.
It also adds a layer of technology that can stump the standard microprocessor. While the brain can be tricked into seeing seamless motion by flipping through 30 or so images per second, touch signals may need to be refreshed as often as once per millisecond, that is, at 1000 Hz.
The precise rate at which a computer must update a haptic interface varies, depending on what type of virtual surface is encountered, according to Greg Merril, chairman and founder of HT Medical Systems Inc., of Gaithersburg, Md., which makes simulators for ureteroscopy, bronchoscopy, and other procedures. “Softer objects require lower update rates than harder objects,” he explained. A low update rate may not prevent a user's surgical instrument from sinking into the virtual flesh, but in soft tissue, that sinking is what is expected. “If you want something to come to an abrupt stop, it requires a higher update rate than [bumping into] something a little squishy,” Merril said.
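Merril's point can be demonstrated with a toy simulation (all numbers invented for illustration). A stylus is pressed against a stiff virtual wall whose restoring force is recomputed only at the haptic update rate and held constant in between. Holding a stale force pumps energy into the contact, so a wall that settles calmly at a 2-kHz update rate chatters at 100 Hz:

```python
def wall_chatter(k, update_hz, push=1.0, b=1.0, m=0.1,
                 sim_dt=1e-5, t_end=1.0):
    """Press a stylus (mass m, damping b) into a virtual wall of stiffness
    k with constant force `push`. The wall's penalty force is recomputed
    only every 1/update_hz seconds and held in between, as a haptic loop
    would. Returns the late-time position scatter (standard deviation over
    the second half of the run): near zero if the contact settles, large
    if it chatters."""
    x, v, f_wall = 0.0, 0.0, 0.0
    hold = int(round(1.0 / (update_hz * sim_dt)))  # physics steps per update
    n = int(t_end / sim_dt)
    tail = []
    for i in range(n):
        if i % hold == 0:
            f_wall = -k * x if x > 0 else 0.0      # sampled, held wall force
        v += (push + f_wall - b * v) / m * sim_dt
        x += v * sim_dt
        if i >= n // 2:
            tail.append(x)
    mean = sum(tail) / len(tail)
    return (sum((t - mean) ** 2 for t in tail) / len(tail)) ** 0.5

slow = wall_chatter(k=500.0, update_hz=100)    # stiff wall, slow updates
fast = wall_chatter(k=500.0, update_hz=2000)   # same wall, fast updates
```

The rule of thumb this illustrates is that the stiffness a device can render grows with its update rate: "hard" objects demand fast loops, while squishy ones tolerate slower ones.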
Still, simulating squish is no easy task, either. The number of collision points between a virtual squishy object and a virtual instrument is larger and more variable than between a virtual rigid object and an instrument. “The most difficult to simulate is two floppy objects interacting with each other”—such as a colon and a sigmoidoscope, the long bendable probe used to view the colon—“because you have multiple collision points,” noted Merril. In addition, the mechanics of such an interaction are complicated, because each object may deform the other.
HT Medical's latest product, a virtual sigmoidoscope, is designed to simulate such an interaction [Fig. 3]. The user grasps the handle of the sigmoidoscope, which boasts all the bells and whistles of the real thing, including the ability to suction matter, to inflate the colon by blowing in air, and to manipulate tools like tissue-sampling instruments. The instrument leads to an “anatomical reference plate”—in this case, a model of a patient's buttocks—inside of which is a device that both measures the probe's position and delivers a feedback force to the user, by means of actuators and brakes.
(3) A virtual sigmoidoscope from HT Medical Systems Inc. trains doctors to maneuver the flexible probes used to view the colon, shown on the computer display. The system incorporates haptic feedback and video imaging synchronized to the position of the probe. The simulation software warns the user when injury to the 'patient' is imminent, and it can also rate the user's performance. HT MEDICAL SYSTEMS INC.
The rest of the system consists mostly of off-the-shelf components. The haptic device's driver card plugs into a 500-MHz PC equipped with a standard graphics card and a regular color monitor. The software includes a database of graphical and haptic information representing the colon. The graphics, including deformation of virtual objects, are calculated separately from the haptic feedback, because the latter must be updated much more frequently.
Many VR laboratories rely on a haptic interface called the Phantom [Fig. 4]. Developed by Thomas Massie and J. Kenneth Salisbury, both then at the Massachusetts Institute of Technology, in Cambridge, it is now sold by SensAble Technologies Inc., of Woburn, Mass.
(4) The Phantom haptic interface from SensAble Technologies Inc. is popular in virtual reality surgical simulators. The device has three or six degrees of freedom and uses actuators to relay resistance at about 1000 Hz. SENSABLE TECHNOLOGIES INC.
About the size of a desk lamp, the device resembles a robotic arm and has either three or six degrees of freedom and sensors for relaying the arm's position to a PC. The handle of whatever medical instrument is being used in the virtual procedure is attached to the end of the arm. A software package, aptly named Ghost, translates characteristics such as elasticity and roughness into commands for the arm, and the arm's actuators in turn produce the force needed to simulate the virtual environment.
By no means is this a perfect approximation of directly touching an object. Mandayam A. Srinivasan, director of MIT's Laboratory for Human and Machine Haptics (also known as the Touch Lab), likens the haptic system to feeling the world through a stick.
Even so, laboratories like his are working to make the haptics more sophisticated. Most of the haptics algorithms to date have been point-based, Srinivasan noted, meaning that a force is represented for only a single point of contact. This approach can lead to very unreal situations. For instance, the side of a virtual instrument with only one point of contact might end up passing through an organ it should be pressing against.
To avoid such gaffes, the Touch Lab has developed an algorithm that models virtual instruments as lines rather than points. This ray-based rendering calculates the forces from all the collisions along the line, and delivers the resulting force and torque through two three-degree-of-freedom Phantoms. The ultimate goal is to represent the virtual instrument as a 3-D object, although that could also increase the number of computations exponentially.
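The difference between the two schemes can be sketched with a simple penalty-force model (the stiffness, geometry, and sampling density below are invented for illustration, not the Touch Lab's actual algorithm). For a tool shaft that has sunk partway into a flat surface, checking only the tip underestimates the contact and produces no torque; sampling points along the shaft line, as in ray-based rendering, accumulates force from every penetrating point and yields the torque that should right the shaft:

```python
import numpy as np

def penalty_force(points, normal, offset, k=200.0):
    """Net penalty force and torque (about the first point, the handle)
    for every sample point sunk below the plane normal . x = offset."""
    normal = np.asarray(normal, float)
    force, torque = np.zeros(3), np.zeros(3)
    for p in points:
        depth = offset - np.dot(normal, p)     # > 0 means penetration
        if depth > 0:
            f = k * depth * normal             # push back along the normal
            force += f
            torque += np.cross(p - points[0], f)
    return force, torque

# A slanted tool shaft over the surface z = 0: handle above, tip below.
shaft = np.array([[0.00, 0.0, 0.05],   # handle end, above the surface
                  [0.06, 0.0, -0.01]]) # tip, poking through the surface

# Point-based rendering checks only the tip...
tip_force, _ = penalty_force(shaft[-1:], (0, 0, 1), 0.0)
# ...ray-based rendering samples along the whole shaft line.
samples = np.linspace(shaft[0], shaft[1], 20)
ray_force, ray_torque = penalty_force(samples, (0, 0, 1), 0.0)
```

Delivering both the force and the torque is why the Touch Lab's setup needs two three-degree-of-freedom devices: a single point of resistance cannot convey the twist.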
Fortunately, virtual machines need only be good enough to fool their human users. “By altering the relationship between how touch stimuli are sensed and what is shown visually, you can extend the range of these experiences,” Srinivasan said. For example, visual cues about deformation can sometimes override haptic cues, so that a user thinks an object is harder or softer than what the haptic interface is telling him. That phenomenon may allow VR designers to cut some corners in haptic calculations.
Producing realistic force feedback remains a big challenge. “The major difficulty in modeling organs is the physical behavior,” said Srinivasan. “They have all kinds of complexities you can think of: they are anisotropic, nonhomogeneous, and nonlinear.” In addition, a great deal more physical measurements of tissue will be needed to make realistic haptic maps of complicated parts of the body, such as the abdomen. As he sees it, creating VR simulators with haptic feedback will mean striking the right balance between the tissue model's complexity, the haptic interface device's capabilities, and human perception.
Currently, VR designers rely on trained physicians to make their haptic feedback feel just right. “We'll grab any available surgeon that walks within our range and say, 'Sit down, try this, and tell me what you think,' ” said Uniformed Services University's Alan Liu. “Some will say, 'Nah, too hard,' or 'Too soft,' 'Doesn't feel right,' 'Too mushy.' So we try to tweak the parameters and empirically come up with an optimal value to satisfy everybody—or make everyone equally dissatisfied.”
Teneo Computing, in Princeton, Mass., a medical simulation spin-off of SensAble Technologies, takes the same trial-and-error approach with the Phantom-based dental system it is making for Harvard University. The simulator can mimic a drill and four other dental instruments. According to Teneo's John Ranta, the simulated tooth is made up of several virtual materials with different haptic characteristics. The enamel is smooth and slick, the pulp is soft, and the tooth can be programmed to have a cavity. Ordinarily, the interaction of different types of body tissue makes calculating haptic feedback tricky. But because a tooth is not deformable, the virtual instrument need only calculate the force for the surface it is contacting. “When you're drilling at the top surface of the tooth, you don't have to feel the bottom,” said Ranta.
A clearer picture
In the coming years, VR designers hope to gain a better understanding of the true mechanical behavior of various tissues and organs in the body. If a haptic device is to give a realistic impression of, say, pressing the skin on a patient's arm, the mechanical contributions of the skin, the fatty tissue beneath it, muscle, and even bone must be summed up. The equations to solve such a complex problem are known, but so far the calculations cannot be made fast enough to update a display at 30 Hz, let alone update a haptic interface at 500-1000 Hz.
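A first-order version of that summing is the series-spring rule: when layers are stacked, their compliances (inverse stiffnesses) add, so the softest layer dominates what the finger feels. The per-layer values below are purely hypothetical, and, as Srinivasan notes, real tissue is nonlinear, anisotropic, and nonhomogeneous, so this is only the crudest starting point:

```python
def series_stiffness(layer_k):
    """Effective stiffness of tissue layers pressed in series
    (skin over fat over muscle over bone): compliances add."""
    return 1.0 / sum(1.0 / k for k in layer_k)

# Hypothetical per-layer stiffnesses in N/m: skin, fat, muscle, bone.
layers = [400.0, 150.0, 800.0, 50000.0]
k_eff = series_stiffness(layers)   # dominated by the softest layer (fat)
```

Capturing how that effective stiffness changes with indentation depth and direction is what drives researchers toward the finite-element models discussed below, which are far more faithful but far more expensive to compute.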
Simulating open surgery, such as an organ transplant, will also take vastly more computing power than simulating endoscopic procedures. In open surgery, “you have multiple squishy organs all interacting with each other and multiple hard tools interacting with each other and the squishy organs,” noted Merril of HT Medical. “That requires more horsepower than our current PCs.”
To tackle such problems, several labs are now looking to model the mechanics of human tissue. Getting tissue data from live humans is seldom easy, though: few people are willing to go under the knife solely for a researcher to see how much force it takes to pierce their pancreas.
Meanwhile, other research groups are attempting to speed the calculation of complex finite-element models of human tissue through parallel computing and other techniques. Still other labs are attempting to make spring and damper models as well as simplified finite-element models that closely match the more complex and realistic finite-element models, but require less computation.
Beyond just teaching specific skills, VR simulators could lead to better overall understanding of how surgical procedures are learned. “We don't yet have a good handle on how it is people pick up fine motor skills,” said Randy Haluck, a laparoscopic surgeon and co-director of the simulation device and cognitive science laboratory at Pennsylvania State University Medical Center, in Hershey, Pa. Designing simulators and analyzing how they are used forces physicians to recognize what is and is not important in learning a procedure, Haluck said.
VR methods may also prove useful in robotic surgery, a new technique in which surgeons remotely manipulate robotic tools inside the patient's body. Current robotic systems do not allow the surgeon to feel the patient's body. The next step, then, will be to incorporate haptic interfaces, much like those used in surgical trainers.
The Internet, too, will have an impact on the development of surgical simulators, making it easier to share computerized simulations, for example, and to create a central library of training tools. It may one day also be possible to rate the performance of medical students against a worldwide standard; recently, the European Union funded a two-year study to do just that. Coupled with VR simulators, the Internet should also speed dissemination of new and improved surgical practices. Instead of traveling to a medical center to learn a new technique, surgeons will be able to watch a Webcast and follow along on their own simulators.
To Probe Further
In “Simulation and Virtual Reality in Surgical Education: Real or Unreal” (Archives of Surgery, November 1999, pp. 1203-8), Paul Gorman and his coauthors review the state of the technology. See the Web site at archsurg.ama-assn.org. For a review of the field up to 1994, see “Medicine 2001: The King Is Dead,” by R. Satava, www.csun.edu/cod/94virt/med~1.html.
Using touch in VR simulations is discussed in “Haptics in Virtual Environments: Taxonomy, Research Status, and Challenges” by Mandayam Srinivasan and Cagatay Basdogan, Computers & Graphics, Vol. 21, no. 4, pp. 393-404.
Spectrum editor: Jean Kumagai