The Human OS

Jason Barnes lost part of his right arm in 2012. He can now play the piano by controlling each of his prosthetic fingers.

'Skywalker' Prosthetic Hand Uses Ultrasound for Finger-Level Control

Robotic hands just keep getting better and better. They're strong, fast, nimble, and they've got sensors all over the place. Capable as the hardware is, robotic hands have the same sort of problem as every other robot: it's very tricky to make them do exactly what you want them to do. This is especially relevant for robot hands that are intended to be a replacement for human hands. Operating them effectively becomes the biggest constraint for the user.

Generally, robotic prosthetic hands are controlled in a way that one would never call easy or intuitive. Some of them sense small muscle movements in the upper arm, shoulders, or chest, for example. Some of them use toe switches. In either case, it's not as though the user can think about wiggling one of their robotic fingers and have that robotic finger wiggle; it requires the challenging step of translating the movement of one muscle into the movement of another. With practice, it works, but it also makes fine motor control more difficult.

At Georgia Tech, Gil Weinberg, Minoru Shinohara, and Mason Bretan have developed a completely new way of controlling prosthetic limbs. Using ultrasound and deep learning, they've been able to make detailed maps of small muscle movements in the forearm. This has enabled intuitive, finger-level control of a robotic hand. It's so much better than any other control system that the researchers are already calling it “Luke Skywalker’s bionic hand.”
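
The article doesn't spell out how the mapping from ultrasound to finger motion is built, so the sketch below is only a rough illustration of the general approach: a small convolutional network that classifies a single forearm ultrasound frame into an intended finger movement. Every layer size, input shape, and class label here is an assumption, not the Georgia Tech design.

```python
# Illustrative sketch only: a small CNN mapping one B-mode ultrasound frame of
# the forearm to a predicted finger movement. All shapes, layers, and labels
# below are assumptions made for this example, not the researchers' system.
import torch
import torch.nn as nn

FINGER_CLASSES = ["thumb", "index", "middle", "ring", "pinky", "rest"]  # assumed labels

class UltrasoundFingerNet(nn.Module):
    def __init__(self, n_classes=len(FINGER_CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):            # x: (batch, 1, height, width) grayscale frames
        return self.classifier(self.features(x).flatten(1))

model = UltrasoundFingerNet()
frame = torch.randn(1, 1, 128, 128)           # stand-in for one ultrasound frame
probs = torch.softmax(model(frame), dim=1)    # probability assigned to each movement
print(dict(zip(FINGER_CLASSES, probs[0].tolist())))
```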

Electroencephalogram (EEG) on a 27-year-old woman. Epilepsy tracing on the screen.

A Wearable Chip to Predict Seizures

One of the toughest aspects of having epilepsy is not knowing when the next seizure will strike. A wearable warning system that detects pre-seizure brain activity and alerts people to its onset could alleviate some of that stress and make the disorder more manageable. To that end, IBM researchers say they have developed a portable chip that can do the job; they described their invention today in the Lancet’s open access journal eBioMedicine.

The scientists built the system on a mountain of brainwave data collected from epilepsy patients. The dataset, reported by a separate group in 2013, included over 16 years of continuous electroencephalography (EEG) recordings of brain activity, and thousands of seizures, from patients who had had electrodes surgically implanted in their brains.

Scientists at IBM Research Australia then used that dataset to train deep learning algorithms called neural networks. The algorithms learned to identify patterns of brain activity associated with the onset of a seizure. IBM runs the neural networks on TrueNorth, its ultra-low-power neuromorphic computing chip. The size of a postage stamp, the chip could be used in a wearable device for people with epilepsy, or connected to a mobile device.
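
IBM's networks run on the spiking TrueNorth hardware, and the details of that implementation aren't given here. As a generic, conventional sketch of the underlying task, the code below classifies a fixed-length window of multichannel EEG as pre-seizure or not; the channel count, window length, and architecture are assumptions for illustration only.

```python
# Generic sketch of the task (not IBM's TrueNorth implementation): classify a
# fixed-length, multichannel EEG window as "pre-seizure" vs "interictal".
# Channel count, window length, and architecture are illustrative assumptions.
import torch
import torch.nn as nn

class EEGWindowClassifier(nn.Module):
    def __init__(self, n_channels=16, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):             # x: (batch, channels, samples)
        return self.net(x)

model = EEGWindowClassifier()
window = torch.randn(1, 16, 2560)     # stand-in for 10 s of 16-channel EEG at 256 Hz
p_preseizure = torch.softmax(model(window), dim=1)[0, 1].item()
print(f"estimated pre-seizure probability: {p_preseizure:.2f}")
```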

An up-close image shows the leg of the velvet worm, including tiny protrusions on the surface and a cross-section of the muscle fibers of its interior.

Nanometer-Scale 3D Images Show Velvet Worm Up Close

Euperipatoides rowelli is ready for its closeup.

Biomedical physicists have used a new computed tomography (CT) machine to produce 400-nanometer-scale 3D images of a tiny limb of a newborn velvet worm to test the system’s ability to produce hyper-detailed scans of biological samples.

Such images could help scientists probe evolutionary linkages among fundamental and ancient animal species. Its inventors also hope it can illuminate fleshy features for medical applications such as tumor analysis.

The tabletop tomography setup was assembled by researchers at the Technical University of Munich, who discuss its pros and cons in a recent paper in the Proceedings of the National Academy of Sciences.

A Korean teenager staring intently at a smartphone.

Internet Addiction Creates Imbalance in the Brain

New research has linked Internet addictions with a chemical imbalance in the brain. In the small study, presented today at the annual meeting of the Radiological Society of North America in Chicago, 19 participants with addictions to phones, tablets, and computers exhibited disproportionately high levels of a neurotransmitter that inhibits brain activity. 

The good news: After nine weeks of therapy, the participants’ brain chemicals normalized, and their screen time decreased, says Hyung Suk Seo, a professor of neuroradiology at Korea University in Seoul, who presented the study.

A computer and equipment in the foreground show the monitoring of vital signs for a patient in the background with black straps around his chest and wrist.

Simultaneous Touchless Monitoring of Several Patients' Vital Signs

Hospitals might soon be able to keep tabs on the health of dozens of people from afar. Engineers have come up with a new touch-free method for monitoring heart rate, blood pressure, and breathing of several people at the same time.

A small RFID tag placed near the body—say, in a shirt pocket or wrist cuff—is all the system needs in order to read these vital signs. In addition to being more comfortable for patients, the technology could make it easier, faster, and cheaper for hospitals and care facilities to monitor the health of residents.

Continuously monitoring vital signs can be a nuisance. It typically requires electrodes stuck to the skin amid a tangle of wires, or a tight arm cuff and sitting still. Wearable sensors like wristbands, soft wireless patches, and temporary tattoos can gather this information, but they are often not a good option for the elderly, infants, or people with medical conditions.

Measuring vital signs without any skin contact is already possible with radio waves. For tracking respiration, the idea is to bounce RF beams off someone’s body and measure the periodic changes in the reflected signal caused by the person’s breathing. But measuring heartbeat is more challenging. What’s more, says Edwin Kan, a professor of electrical and computer engineering at Cornell University, this method is “very vulnerable to movement in room. Another person moving on the side causes a lot of interference.”

So Kan and his graduate student Xiaonan Hui have devised a method that relies on near-field coupling. The near-field is the region of the electromagnetic field right around an RFID antenna, a distance of up to one wavelength, or 35 centimeters, away.

The system requires a small antenna and RFID tag to be placed near the chest and a wrist; they can be up to 10 centimeters away from the body. An RFID reader is placed two meters away.
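
The paper's actual signal processing isn't described here. As a generic illustration of the readout step, the sketch below synthesizes a slow periodic signal, standing in for chest motion picked up through near-field coupling, and recovers breathing and heart rates from its spectrum; every number is made up for the example.

```python
# Generic illustration (not the Cornell group's algorithm): recover breathing
# and heart rates from a slow periodic signal, such as the phase of a
# near-field tag response modulated by chest motion. All numbers are synthetic.
import numpy as np

fs = 50.0                                   # sample rate in Hz (assumed)
t = np.arange(0, 30, 1 / fs)                # 30 seconds of data
breathing_hz, heartbeat_hz = 0.25, 1.2      # 15 breaths/min and 72 beats/min, synthetic
signal = (1.0 * np.sin(2 * np.pi * breathing_hz * t)
          + 0.1 * np.sin(2 * np.pi * heartbeat_hz * t)
          + 0.05 * np.random.randn(t.size))

spectrum = np.abs(np.fft.rfft(signal * np.hanning(t.size)))
freqs = np.fft.rfftfreq(t.size, 1 / fs)

resp_band = (freqs > 0.1) & (freqs < 0.7)    # plausible breathing frequencies
heart_band = (freqs > 0.8) & (freqs < 3.0)   # plausible heart-rate frequencies
resp_rate = 60 * freqs[resp_band][np.argmax(spectrum[resp_band])]
heart_rate = 60 * freqs[heart_band][np.argmax(spectrum[heart_band])]
print(f"respiration ~{resp_rate:.0f} breaths/min, heart rate ~{heart_rate:.0f} beats/min")
```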

Two of the vertical cavity surface-emitting lasers used in a new optical cochlear implant are shown here next to a matchstick. Each laser rests within a sapphire box.

Optical Cochlear Implant Turns Light Against Hearing Loss

Blinking lights could soon serve a whole new purpose. Recent findings have led German, Swiss, and Austrian researchers to develop a prototype hearing implant based on the concept that a series of laser pulses can trigger auditory signals from hair cells located within the inner ear.

An array of near-infrared lasers can produce a soundwave using what’s called the optoacoustic effect, the researchers believe. In their device, tiny vertical cavity surface-emitting lasers, which emit pulsed light at wavelengths of 1.4 to 1.9 microns, act upon the fluid within the nautilus-shaped cochlear canals in the inner ear.

Basically, the infrared light is absorbed by the liquid inside the cochlea. A small volume of the liquid expands due to the heat. If that happens rapidly enough, it generates a soundwave inside the duct of the cochlea. This stimulates or moves the tiny hair cells located there, which in turn send a signal along the auditory nerve that the brain understands as sound.
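
As a toy numerical illustration of one piece of that chain (none of the parameters come from the study), the sketch below models each absorbed laser pulse as a brief pressure transient and shows that a pulse train repeated at an audio rate produces a pressure signal whose fundamental frequency equals the repetition rate.

```python
# Toy illustration (no parameters from the study): if each absorbed laser pulse
# launches a brief pressure transient in the cochlear fluid, then a pulse train
# repeated at an audio rate yields a pressure signal whose fundamental
# frequency equals the repetition rate.
import numpy as np

fs = 200_000.0                       # simulation sample rate in Hz (assumed)
duration = 0.05                      # 50 ms of signal
t = np.arange(0, duration, 1 / fs)
pulse_rate = 440.0                   # pulse repetition rate in Hz (assumed)
pulse_width = 20e-6                  # width of each pressure transient in s (assumed)

# Model each transient as a short Gaussian "blip" centered on a pulse time.
pressure = np.zeros_like(t)
for k in range(int(duration * pulse_rate)):
    pressure += np.exp(-0.5 * ((t - k / pulse_rate) / pulse_width) ** 2)

spectrum = np.abs(np.fft.rfft(pressure))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
fundamental = freqs[1:][np.argmax(spectrum[1:])]             # ignore the DC component
print(f"dominant acoustic component: ~{fundamental:.0f} Hz")  # ~440 Hz, the pulse rate
```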

Over the last three years, the researchers have built tiny laser arrays and completed tests on guinea pigs, finding they could generate action potentials, the signals carried by auditory nerves, using light from the vertical cavity lasers and the optoacoustic effect. They compared the stimuli produced in the guinea pigs by the laser array with those from an acoustic click. Both generated nerve signals matching in form and amplitude.

Enzymes unlock drug molecules from the architecture of the nanoparticles when magnetic forces are turned on.

Magnetic Field Controls Drug Delivery

Researchers have developed a new way to control the delivery of drugs to the body using nanoparticles and a weak magnetic field. The inventors, at the University of Georgia in Athens, tested their system with a chemotherapy drug and published the results today in the journal Nature Catalysis.

Patients undergoing treatments for cancer and other diseases often must take drugs that affect the whole body, when they really only need the medicine in a small area. Chemotherapy drugs typically “act on all cells, killing cancer cells and also healthy cells,” says Sergiy Minko, a professor at the University of Georgia and an author of the report. As a result, “a big number of patients die because of complications” from the drugs, he says.

A chest x-ray colorized by a Stanford algorithm to highlight possible areas of pneumonia

Stanford Algorithm Can Diagnose Pneumonia Better Than Radiologists

Stanford researchers have developed a machine-learning algorithm that can diagnose pneumonia from a chest x-ray better than a human radiologist can. And it learned how to do so in just about a month.

The Machine Learning Group, led by Stanford adjunct professor Andrew Ng, was inspired by a data set released by the National Institutes of Health on 26 September. The data set contains 112,120 chest X-ray images labeled with 14 different possible diagnoses, along with some preliminary algorithms. The researchers asked four Stanford radiologists to annotate 420 of the images for possible indications of pneumonia. They selected that disease because, according to a press release, it is particularly hard to spot on X-rays, and brings 1 million people to U.S. hospitals each year.

Within a week, the Stanford team had developed an algorithm, called CheXnet, capable of spotting 10 of the 14 pathologies in the original data set more accurately than previous algorithms. After about a month of training, it was ahead in all 14, the group reported in a paper released this week through the Cornell University Library. And CheXnet consistently did better than the four Stanford radiologists in diagnosing pneumonia accurately.
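
The CheXnet paper describes the model as a 121-layer DenseNet that outputs a probability for each of the 14 pathologies. The sketch below shows how such a multi-label classifier could be assembled from an off-the-shelf DenseNet-121; the preprocessing, hyperparameters, and single training step are simplified assumptions, not the authors' code.

```python
# Sketch of a CheXnet-style setup (not the authors' code): a DenseNet-121
# backbone with a 14-output head, trained with a per-label binary
# cross-entropy loss so each pathology is scored independently.
import torch
import torch.nn as nn
from torchvision import models

N_PATHOLOGIES = 14                    # the 14 labels in the NIH ChestX-ray14 data set

model = models.densenet121()          # CheXnet is reported to use a 121-layer DenseNet
model.classifier = nn.Linear(model.classifier.in_features, N_PATHOLOGIES)

criterion = nn.BCEWithLogitsLoss()    # multi-label loss: sigmoid per pathology
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)   # learning rate is an assumption

# One illustrative training step on a dummy batch; real inputs would be
# preprocessed chest X-rays with 14-element 0/1 label vectors.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8, N_PATHOLOGIES)).float()

optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"dummy training loss: {loss.item():.3f}")

# At inference time, torch.sigmoid(model(x))[:, pneumonia_index] gives the
# model's estimated probability of pneumonia for each image.
```

A per-label binary cross-entropy loss is used here because the 14 labels are not mutually exclusive; a single X-ray can show several pathologies at once.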

The researchers evaluated CheXnet’s performance in terms of sensitivity—that is, whether it correctly identified existing cases of pneumonia—and in terms of how well it avoided false positives. While some of the four human radiologists were better than others, CheXnet was better than all of them.

The Stanford approach also creates a heat map of the chest x-rays, with colors indicating areas of the image most likely to represent pneumonia; this is a tool that researchers believe could greatly assist human radiologists.
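
The article doesn't say how those heat maps are generated. One standard technique for a network of this type is class activation mapping, which weights the final convolutional feature maps by the classifier weights for the pneumonia class; the sketch below assumes that approach and the DenseNet-121 setup from the previous example, and is not necessarily the authors' exact method.

```python
# Class activation mapping (CAM) sketch: turn the final convolutional feature
# maps of a DenseNet-121 classifier into a heat map over the input X-ray.
# This assumes the setup from the previous sketch; it is an illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

def class_heatmap(model, image, class_idx):
    """image: (1, 3, H, W) preprocessed X-ray; returns an (H, W) map in [0, 1]."""
    model.eval()
    with torch.no_grad():
        feats = F.relu(model.features(image))              # (1, C, h, w) feature maps
        weights = model.classifier.weight[class_idx]       # (C,) weights for this class
        cam = (weights[:, None, None] * feats[0]).sum(0)   # weighted sum over channels
        cam = F.interpolate(cam[None, None], size=image.shape[2:],
                            mode="bilinear", align_corners=False)[0, 0]
        cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam

model = models.densenet121()
model.classifier = nn.Linear(model.classifier.in_features, 14)
image = torch.randn(1, 3, 224, 224)                 # stand-in for a preprocessed X-ray
heatmap = class_heatmap(model, image, class_idx=0)  # pneumonia's label index assumed to be 0
print(heatmap.shape)                                # same spatial size as the input image
```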

I couldn’t be more thrilled—and hopeful that all the radiologists at Stanford will embrace this technology immediately, because I know firsthand how beneficial it could be.

Last December, my then-18-year-old son went to the Stanford emergency room with an extremely high fever and cough. He had a chest x-ray for suspected pneumonia; it was read as negative, so he was given an I.V. for dehydration and medication for his fever, and was sent home.

A week later, he was back in the ER in the middle of the night, this time disoriented, with an even higher fever that wasn’t responding to medication. Again, a chest x-ray was read as negative, and he was tested for every disease one could imagine. But all he was given were fluids, and eventually he was released with no diagnosis.

Two days after that, we got a call from radiology—a routine review of x-rays from the weekend had changed the medical opinion to pneumonia—a diagnosis that had been missed twice. Antibiotics started bringing his fever down within 24 hours.

Next time I bring a kid to the Stanford ER, I’m asking for a CheXnet consult.

A woman lies in a hospital bed with the mindBEAGLE EEG headpiece on, surrounded by two family members and a doctor.

mindBEAGLE Brain-Computer Interface Gives Non-Speaking, Immobilized People a Voice

An Austrian organization is using brain-computer interface technology to help people in one of life’s most horrifying plights: being cognitively aware, but trapped in a body that can’t move, speak, blink, or communicate in any way. The system's developer, g.tec medical engineering, is commercializing it, and this week held an explanatory workshop at the Society for Neuroscience meeting in Washington, D.C.

The tool gives people in this dreaded condition, called locked-in syndrome, the ability to answer yes-or-no questions using only their thoughts. The technique can also aid communication with people in unresponsive wakefulness states, but not with people in comas, where there is no cognitive function.

An illustration shows a human brain with a glowing chip on its surface.

Q&A: The Ethics of Using Brain Implants to Upgrade Yourself

Neurotechnology is one of the hottest areas of engineering, and the technological achievements sound miraculous: Paralyzed people have controlled robotic limbs and computer cursors with their brains, while blind people are receiving eye implants that send signals to their brains’ visual centers. Researchers are figuring out how to make better implantable devices and scalp electrodes to record brain signals or to send electricity into the brain to change the way it functions.

While many of these systems are intended to help people with serious disabilities or illnesses, there’s growing interest in using neurotech to augment the abilities of everyday people. Companies like Facebook and Elon Musk’s Neuralink are developing consumer devices that may be used for brain-based communication, while some startups are exploring applications in entertainment. But what are the ethical implications of meddling with our brains? And how far will we take it?

Anders Sandberg is “not technically a philosopher,” he tells IEEE Spectrum, although it is his job to think deeply about technological utopias and dystopias, the future of AI, and the possible consequences of human enhancement via genetic tweaks or implanted devices. In fact, he has a PhD in computational neuroscience. So who better to consult regarding the ethics of neurotech and brain enhancement? 

Sandberg works as a senior research fellow at Oxford’s Future of Humanity Institute (which is helmed by Nick Bostrom, a leading AI scholar and author of the book Superintelligence that explores the AI threat). In a wide-ranging phone interview with Spectrum, Sandberg discussed today’s state-of-the-art neurotech, whether it will ever see widespread adoption, and how it could reshape society.
