'Skywalker' Prosthetic Hand Uses Ultrasound for Finger-Level Control

With an ultrasound sensor, this new type of prosthetic hand allows precision control over each finger


Jason Barnes lost part of his right arm in 2012. He can now play the piano by controlling each of his prosthetic fingers.
Photo: Georgia Tech

Robotic hands just keep getting better and better. They're strong, fast, nimble, and they've got sensors all over the place. Capable as the hardware is, robotic hands have the same sort of problem as every other robot: it's very tricky to make them do exactly what you want them to do. This is especially relevant for robot hands that are intended to be a replacement for human hands. Operating them effectively becomes the biggest constraint for the user.

Generally, robotic prosthetic hands are controlled in ways that one would never call easy or intuitive. Some of them sense small muscle movements in the upper arm, shoulders, or chest, for example. Others use toe switches. In either case, the user can't simply think about wiggling one of their robotic fingers and have that finger wiggle; they have to perform the challenging step of translating the movement of one muscle into the movement of another. With practice, it works, but it also makes fine motor control more difficult.

At Georgia Tech, Gil Weinberg, Minoru Shinohara, and Mason Bretan have developed a completely new way of controlling prosthetic limbs. Using ultrasound and deep learning, they've been able to make detailed maps of small muscle movements in the forearm. This has enabled intuitive, finger-level control of a robotic hand. It's so much better than any other control system that the researchers are already calling it “Luke Skywalker’s bionic hand.”

Jason, a participant in the prosthetic experiment, lost part of his arm. But he still has the forearm muscles that used to be attached to his fingers. They're not attached anymore, but those muscles are still attached to his brain. When his brain wants to move the fingers that he doesn't have, it sends messages that cause his forearm muscles to actuate in specific patterns. These patterns are too complex to discern with electromyogram (EMG) sensors except in the most superficial way. But with ultrasound, it's possible to make a much more detailed and dynamic map. Throw some deep learning in there (like everybody is doing with everything nowadays), and you can correlate the ultrasound patterns with specific movements of specific fingers with much higher fidelity than ever before.
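
To make the idea concrete, here's a rough sketch (in PyTorch, not the team's actual code) of how a small convolutional network could map a single ultrasound frame of the forearm to a predicted finger gesture. The layer sizes, input resolution, and gesture labels below are all assumptions for illustration.

```python
# Minimal sketch (not the Georgia Tech code): classify which finger is
# moving from a single grayscale ultrasound frame of the forearm.
# Layer sizes, input resolution, and gesture labels are assumptions.
import torch
import torch.nn as nn

GESTURES = ["thumb", "index", "middle", "ring", "pinky", "rest"]  # assumed label set

class UltrasoundGestureNet(nn.Module):
    def __init__(self, num_gestures: int = len(GESTURES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # pool to a 64-dimensional descriptor
        )
        self.classifier = nn.Linear(64, num_gestures)

    def forward(self, frame: torch.Tensor) -> torch.Tensor:
        # frame: (batch, 1, H, W) grayscale ultrasound image
        x = self.features(frame).flatten(1)
        return self.classifier(x)             # unnormalized gesture scores

if __name__ == "__main__":
    model = UltrasoundGestureNet()
    fake_frame = torch.randn(1, 1, 128, 128)  # stand-in for a real ultrasound frame
    scores = model(fake_frame)
    print(GESTURES[scores.argmax(dim=1).item()])
```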

For more details, we spoke with professor Gil Weinberg, who directs Georgia Tech’s Center for Music Technology.

IEEE Spectrum: According to the press release, you came up with this idea when “the team looked around the lab and saw an ultrasound machine.” There must be more to it than that, right?

Gil Weinberg: The whole story goes like this: We were trying to get finger-by-finger control from EMG, but the signal was just too noisy. So, we went to another lab at Georgia Tech in the Applied Physiology program to try a more invasive approach—needle-based EMG—with the hope that if the sensors were closer to the muscles, we could better distinguish between different finger gestures. But here too, the signal was noisy, possibly a little clearer than with surface EMG, but we still couldn't get any reliable prediction. This didn't make sense to us, because obviously if the body “knows” how to control finger by finger, why can't we sense it? Our hypothesis was that the needles were not placed in the best spots next to the correct muscles for accurate sensing. We realized that if we could just see the muscles, we would learn where it would be best to put the needles. Then, we looked around the lab and saw the ultrasound machine :)

The “eureka” moment came when we saw the ultrasound image of the muscle movements for the first time. It was immediately clear that while the level of electric activity may be similar when different fingers were moving, the trajectory and speed of the muscle movements were visibly distinct and repeatable. The movements on the screen correlated quite well with the different finger gestures. So, instead of just using the ultrasound to determine where to put EMG needles, we decided to stick with ultrasound as a sensor. And to replace our naked eye in distinguishing between the muscle movements, we implemented a deep neural network to model the movements and predict the gestures.
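
As an aside, the “visibly distinct trajectory and speed” Weinberg describes can be quantified with standard computer-vision tools. The sketch below uses OpenCV's dense optical flow between two consecutive ultrasound frames to estimate how fast the tissue is moving; it's purely illustrative and not the method the Georgia Tech group used.

```python
# Illustrative sketch only: quantify muscle motion between two consecutive
# ultrasound frames with dense optical flow (OpenCV's Farneback method).
# This mirrors the "trajectory and speed" observation; it is not the
# Georgia Tech group's method.
import cv2
import numpy as np

def muscle_motion(prev_frame: np.ndarray, next_frame: np.ndarray):
    """Return per-pixel motion magnitude and direction (both HxW arrays)."""
    flow = cv2.calcOpticalFlowFarneback(
        prev_frame, next_frame, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0,
    )
    magnitude, angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    return magnitude, angle

# Example with synthetic frames (real input would be 8-bit grayscale ultrasound):
prev = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
nxt = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
mag, ang = muscle_motion(prev, nxt)
print("mean tissue speed (pixels/frame):", mag.mean())
```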

Why hasn’t this technique been used before?

We later learned that we were not the first ones to try to detect muscle patterns from ultrasound. However, we were the first to use deep learning.

This allowed us, for the first time, to predict an amputee's continuous and simultaneous finger gestures, which makes the control for an amputee completely intuitive. The user doesn't need to learn a particular set of gestures. They can just move their muscles as they would regularly, and the prosthetic hand will move accordingly. I believe we were also the first ones to connect these deep learning models to an actual robotic hand. Also, not too many labs care about music the way we do :) And for music, you really need those continuous and simultaneous expressive gestures. And as you may remember, my motto has always been that if our robots satisfy musical demands (music being one of the most subtle and expressive human activities), they will satisfy demands in pretty much any other scenario.
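
The “continuous and simultaneous” part is what separates this from classifying a handful of discrete grips: instead of picking one gesture at a time, the network can output a flexion value for every finger on every frame. Here's a minimal sketch of what such an output head might look like; the feature dimension, layer sizes, and the [0, 1] flexion encoding are assumptions, not details from the published work.

```python
# Sketch of the "continuous and simultaneous" idea (assumed architecture,
# not the published model): instead of picking one discrete gesture, the
# network regresses a flexion value in [0, 1] for all five fingers at once,
# on every ultrasound frame, so several fingers can move together.
import torch
import torch.nn as nn

class ContinuousFingerHead(nn.Module):
    def __init__(self, feature_dim: int = 64, num_fingers: int = 5):
        super().__init__()
        self.regressor = nn.Sequential(
            nn.Linear(feature_dim, 32), nn.ReLU(),
            nn.Linear(32, num_fingers), nn.Sigmoid(),  # one flexion value per finger
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.regressor(features)  # shape: (batch, 5), values in [0, 1]

head = ContinuousFingerHead()
frame_features = torch.randn(1, 64)   # e.g., pooled CNN features from one frame
flexion = head(frame_features)        # simultaneous targets for all five fingers
print(flexion.squeeze(0).tolist())    # could be sent to the hand's motor controller
```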

To what extent has this system been created specifically for Jason [the study participant in the video]? What would be involved in adapting it to be used by other amputees?

Human muscles work very similarly across different subjects, so the system can work for anyone. After 30 to 60 seconds of training for any particular user, the network can be fine-tuned to account for any minute idiosyncratic differences.
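
A plausible way to read that 30-to-60-second calibration step is as fine-tuning: keep the network's pretrained feature layers frozen and retrain only the final layer on a short recording from the new user. The sketch below shows that pattern in PyTorch; the attribute names and training details are assumptions carried over from the earlier sketch, not the group's actual procedure.

```python
# Hedged sketch of per-user calibration (details assumed, not from the paper):
# keep the pretrained convolutional features frozen and fine-tune only the
# final layer on a short (~30-60 s) recording from the new user.
import torch
import torch.nn as nn

def fine_tune_for_user(model: nn.Module, calibration_loader, epochs: int = 5):
    # Freeze everything except the last linear layer ("classifier" is an
    # assumed attribute name, matching the earlier classification sketch).
    for param in model.parameters():
        param.requires_grad = False
    for param in model.classifier.parameters():
        param.requires_grad = True

    optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    model.train()
    for _ in range(epochs):
        for frames, labels in calibration_loader:   # ~30-60 s of labeled frames
            optimizer.zero_grad()
            loss = loss_fn(model(frames), labels)
            loss.backward()
            optimizer.step()
    return model
```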

Close-up of prosthetic fingers playing the piano. Gif: Georgia Tech

What degree of control can the user have over the motion of the hand and fingers? Is a lightsaber duel possible?

With the deep learning architecture we use, we achieved fully continuous predictions, which can allow for highly dexterous activities. We hope amputees can use this technology for activities such as bathing, grooming and feeding. For lightsaber dueling, there is of course the issue of the prosthetic hardware itself. It will require some strong and flexible motors to continue to hold the saber after Darth Vader hits you hard. As you may have seen in my video, we are currently focusing on developing a hand that would play piano, which has quite a different set of requirements regarding motors and actuation. I believe it would take quite a while before we could make a general purpose human-like hand that could do both activities well. 

What improvements are you working on, and what’s the most challenging thing about this right now?

There are two main directions we are currently pursuing: developing a dexterous and expressive piano playing prosthetic hand, and miniaturizing and improving power consumption for the ultrasound sensor so that it could become easily wearable and possibly commercialized. These are both very challenging tasks. 

Challenging, yes, but the potential here is pretty incredible—maybe not lightsaber incredible, but by the time we re-invent that technology from a long time ago in a galaxy far, far away, Jason will be ready to take advantage of it.

Update: Claudio Castellini, from DLR’s Robotics and Mechatronics Center, let us know about some similar research that he and his group published in 2014. They were able to use ultrasound to predict the amount of pressure applied with each finger by a non-amputee in a virtual piano-playing task. As Castellini describes:

We could predict the amount of pressure our subjects were applying with each single finger (including the rotation of the thumb), in real time, simultaneously and continuously. Such information was sent to a virtual-reality machine in which the subjects could play a dynamic keyboard (a piano in the first case, a harmonium in the second). We then engaged in a multi-user experiment, proving that our system could enable the subjects to reach the required key and to play the corresponding note at the required volume.

This work was presented at ICRA and BioRob.
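
For a sense of how continuous per-finger pressure estimates could drive a dynamic virtual keyboard like the one Castellini describes, here's a tiny, purely hypothetical mapping from normalized pressures to key-and-velocity events. The finger-to-key assignment, press threshold, and velocity scaling are all made up for illustration and are not taken from either group's system.

```python
# Illustrative sketch (not Castellini's code): turn per-finger pressure
# predictions into key/velocity events for a virtual keyboard. The
# finger-to-key mapping, press threshold, and velocity scaling are assumptions.
FINGER_TO_KEY = {"thumb": "C4", "index": "D4", "middle": "E4", "ring": "F4", "pinky": "G4"}
PRESS_THRESHOLD = 0.2  # minimum normalized pressure that counts as a key press

def pressures_to_notes(pressures: dict[str, float]) -> list[tuple[str, int]]:
    """Map normalized per-finger pressures (0..1) to (key, MIDI-style velocity) events."""
    events = []
    for finger, pressure in pressures.items():
        if pressure >= PRESS_THRESHOLD:
            velocity = min(127, round(pressure * 127))  # harder press -> louder note
            events.append((FINGER_TO_KEY[finger], velocity))
    return events

print(pressures_to_notes({"thumb": 0.05, "index": 0.6, "middle": 0.9, "ring": 0.0, "pinky": 0.3}))
```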
