
"Minority Report" Tech Meets the Operating Room

A gesture-controlled system could allow surgeons to swipe through medical images without dirtying their hands

Photo: Ben J. Park/Asan Medical Center

Technology showcased in the 2002 film Minority Report, in which Tom Cruise swipes through midair images, could soon become a staple of hospital operating rooms. A new gesture-controlled computer interface aims to give surgeons easier access to medical images during marathon operations.

The experimental medical system takes advantage of Leap Motion controllers that can sense and track people’s hand gestures. South Korean researchers developed their own “GestureHook” software that can translate the gestures captured by a Leap Motion device into commands for several different types of medical software.

“We thought that using gestures as a new interface for controlling software in hospitals would provide access to computers for surgeons during procedures,” says Ben Joonyeon Park, a software developer on the Medical Information Development Team at the Asan Medical Center in Seoul, South Korea. Manipulating software programs this way, says Park, will let surgeons view 2-D or 3-D images of patients’ body parts without relying on assistants or dirtying their hands by touching a mouse or keyboard.

The system has undergone only one preliminary test, with a plastic surgeon during a double-jaw surgery. But Park and his colleagues hope that the gesture-controlled system could also help radiologists, who spend much of their time browsing X-ray, MRI, and CT scan images. Their research was detailed in the 14 January 2016 issue of the journal Computational and Mathematical Methods in Medicine.

The team, from the Asan Medical Center and the University of Ulsan College of Medicine, considered a few different motion-tracking technologies at first. Among them was the Microsoft Kinect device that first found success in video games but has also been used for other motion-control applications. But the Korean researchers eventually settled on the Leap Motion device because it provided vector coordinates that were easier to plug into the software they developed.
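To illustrate why vector coordinates are convenient to work with, here is a minimal sketch of how a palm-velocity vector of the kind a Leap Motion-style tracker reports might be mapped to a swipe direction. The function name, thresholds, and gesture labels are illustrative assumptions, not the team's actual code.

```python
# Hypothetical sketch: classify a swipe from a 3-D palm-velocity
# vector (in mm/s), as a motion tracker might report it.
# All names and thresholds here are made up for illustration.

def classify_swipe(velocity):
    """Map a (vx, vy, vz) palm-velocity vector to a swipe direction."""
    vx, vy, vz = velocity
    # Pick the axis with the largest motion component.
    axis, magnitude = max(
        (("x", vx), ("y", vy), ("z", vz)),
        key=lambda pair: abs(pair[1]),
    )
    if abs(magnitude) < 100:  # ignore slow drift below ~100 mm/s
        return None
    direction = {
        ("x", True): "swipe_right", ("x", False): "swipe_left",
        ("y", True): "swipe_up",    ("y", False): "swipe_down",
        ("z", True): "swipe_out",   ("z", False): "swipe_in",
    }
    return direction[(axis, magnitude > 0)]

print(classify_swipe((350.0, 20.0, -5.0)))  # swipe_right
```

A skeleton-based sensor like the Kinect would instead require joint-pose estimation before a comparably simple rule could be applied, which is consistent with the team's stated preference.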

The biggest innovation came on the software side. Park and his colleagues created their GestureHook software to act as a middleman between the Leap Motion hardware and the myriad medical programs currently in use. Such programs include the PACS (Picture Archiving and Communication System) software that allows surgeons and radiologists to look at 2-D scans or even manipulate 3-D images of certain organs or body parts.
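The middleman role described above can be sketched as a lookup table that binds recognized gestures to commands for whichever program has focus. The profile, gesture, and command names below are hypothetical, chosen only to show the translation step; the source does not describe GestureHook's internals.

```python
# Hypothetical sketch of a gesture-to-command bridge: one table per
# medical program, mapping gesture names to that program's commands.
# Profile, gesture, and command names are illustrative assumptions.

PROFILES = {
    "pacs_2d": {
        "swipe_left": "previous_image",
        "swipe_right": "next_image",
        "pinch": "zoom",
    },
    "volume_3d": {
        "swipe_left": "rotate_left",
        "swipe_right": "rotate_right",
        "grab": "pan",
    },
}

def dispatch(active_program, gesture):
    """Translate a gesture into a command for the active program,
    ignoring gestures that program has no binding for."""
    return PROFILES.get(active_program, {}).get(gesture)

print(dispatch("pacs_2d", "swipe_right"))    # next_image
print(dispatch("volume_3d", "swipe_right"))  # rotate_right
```

Keeping the bindings in per-program tables is one plausible way the same hand motion could drive 2-D image browsing in one program and 3-D rotation in another without modifying either program.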

Testing showed that the gesture-controlled system was a few seconds faster, on average, than a mouse and keyboard at manipulating the viewing angle of head scans from 10 multiple-fracture patients. The researchers also tested how accurately the system recognized basic hand gestures such as finger taps and pinches, hand grabs, and mouse clicks and double clicks. The system generally performed well, though it had some trouble recognizing a left-finger pinch and the mouse double-click gesture.

“I don't think the device itself is accurate enough yet to use in hospitals, but this can be corrected by finding out ways to detect outliers,” Park explains. “We will definitely look into other devices such as smart rings, but haven't found the perfect fit yet.”

The South Korean team, which plans to apply for a patent on the system, says its next step is to test how individual physicians can train the software on their own personalized gestures. The researchers also hope to keep improving the system’s gesture-recognition accuracy and the range at which it can detect hand gestures.

The gesture-control technology could even someday work with certain wearable devices. Surgeons who want to keep their hands clean might prefer something such as smart bands or rings, Park says. There is also still the possibility of trying the Microsoft Kinect, because it can recognize a much wider range of user activities and gestures than Leap Motion can.

And who knows? A Minority Report-style version of the technology might someday emerge, with images floating in midair.

“I guess the next big leap would be combining this software with holographic technologies,” Park says.

