Big players in software are putting their weight behind artificial intelligence as a way to improve health care decision making. New computer models push the limits of how early doctors can spot disease and how quickly molecular compounds can be screened for use as new drugs.

In the past week, news of several such models crossed IEEE Spectrum’s desk, which we’ve rounded up for you here. IBM announced a computational model that predicts heart failure, Stanford University reported a deep learning algorithm that predicts the safety of drug compounds, and Intel announced a competition to find an algorithm for early detection of lung cancer. 

IBM model predicts heart failure

IBM Research, the innovation arm of multinational tech giant IBM, and collaborators have developed a machine learning model that predicts heart failure up to two years before a patient would typically be diagnosed. The researchers trained the model using hidden signals gleaned from electronic health records and doctors’ notes.

Heart failure—a chronic condition in which the heart muscle isn’t strong enough to pump enough blood to meet the body’s needs—is hard to predict. In fact, most people don’t know they have a problem until they land in the hospital. “By the time a patient is diagnosed, very often an acute event has happened and irreversible damage has been caused,” says Jianying Hu, who led the development of the model and is a program director at IBM Research’s Center for Computational Health.

Hu’s group wondered if they could predict the problem well before a person ends up in the hospital. To do that, the group took a fresh analytical look at electronic health data that is routinely collected at doctor visits. “We found that diagnoses of other conditions, medication, and hospitalization records, in that order, provide the most valuable signal for predicting heart failure,” she says. They also mined key information from doctors’ notes using natural language processing techniques. 
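
IBM has not released code for the model, but the general shape of the task can be sketched with a toy example. The snippet below trains a simple logistic-regression classifier on synthetic, EHR-style features (counts of prior diagnoses, medications, and hospitalizations) to flag patients at risk of heart failure. The feature choices, data, and model here are illustrative only and are not IBM's.

```python
# Illustrative sketch only: a simple heart-failure risk classifier trained on
# synthetic, EHR-style features (diagnosis, medication, and hospitalization
# counts). This is NOT IBM's model; it just shows the general shape of the task.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_patients = 5000

# Hypothetical features: counts of comorbidity diagnoses, active medications,
# and hospitalizations recorded before the prediction date.
X = rng.poisson(lam=[3.0, 5.0, 1.0], size=(n_patients, 3)).astype(float)

# Synthetic label: risk rises with all three counts (for demonstration only).
logits = 0.6 * X[:, 0] + 0.3 * X[:, 1] + 0.8 * X[:, 2] - 5.0
y = rng.random(n_patients) < 1 / (1 + np.exp(-logits))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

A real system would add many more features, including terms mined from clinical notes, and would evaluate the model at different prediction horizons, as described below.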

The data came from the health records of more than 10,000 people. The model was highly accurate at predicting heart failure up to one year in advance and “degraded gracefully” in accuracy until about two years ahead of a problem, says Hu. “Clearly there’s power in the data that is routinely collected in the care process,” she says.

The model is one of the results of a three-year, $2 million research project funded by the National Institutes of Health. For the grant, IBM Research partnered with researchers at the Sutter Health hospital group and clinical experts at Geisinger Health System. The group reported its predictive model in November, and this week announced additional insights they learned from the research. 

IBM plans to continue the collaboration and research to improve the model by experimenting with a larger dataset from a more diverse patient population. “There is still a lot of work to be done” before the model could be used in clinical settings, Hu says. IBM does not yet have plans to commercialize the software, she says. 

Stanford deep learning algorithm makes meaningful predictions early in drug discovery

Stanford University researchers have developed a deep learning algorithm that can predict the properties of a potential drug compound using very few data points. The work was described Monday in the journal ACS Central Science.

When developing a new drug, companies spend enormous amounts of time screening molecular compounds—trying out different chemical structures to find the safest, most effective one for the job. In the process, they screen for properties such as toxicity, side effects, and instability—characteristics that could spell disaster in a clinical trial or a commercialized drug.

An algorithm that predicts these properties could shorten or improve the drug screening process. And deep neural networks—a type of machine learning algorithm—have been shown to be pretty good at this. The trouble is, such algorithms must be trained using hundreds to millions of data points. By the time a company has that much data on a compound, its scientists probably already know that they have a good drug candidate.

So Stanford chemist Vijay Pande and his students developed an algorithm that can make meaningful predictions about drug properties based on very little data. To do this, the researchers refined the architecture of a machine learning technique called one-shot learning, which uses related data to make predictions from just a few new data points.

Standard one-shot learning enables a computer to identify a new class—say, a giraffe—after having seen a giraffe only once. Pande and his students mathematically adapted the algorithm so that it would predict the behavior of a molecule in an experimental system, such as the body.
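
To make the idea concrete, here is a minimal sketch of the matching flavor of one-shot learning: a query molecule is labeled by comparing its learned embedding to a tiny labeled "support set," rather than by training a conventional classifier on thousands of examples. The network, features, and data below are toy placeholders written in PyTorch; they are not the architecture described in the ACS Central Science paper.

```python
# Toy, matching-network-style one-shot classifier. The embedding network is
# untrained here; in practice it would be learned across many related tasks.
import torch
import torch.nn as nn

torch.manual_seed(0)
FEATURE_DIM = 32   # stand-in for a molecular fingerprint
EMBED_DIM = 16

embed = nn.Sequential(nn.Linear(FEATURE_DIM, 64), nn.ReLU(),
                      nn.Linear(64, EMBED_DIM))

def one_shot_predict(support_x, support_y, query_x):
    """Label each query by cosine-similarity-weighted voting over the support set."""
    s = nn.functional.normalize(embed(support_x), dim=1)   # (n_support, EMBED_DIM)
    q = nn.functional.normalize(embed(query_x), dim=1)     # (n_query, EMBED_DIM)
    attn = torch.softmax(q @ s.T, dim=1)                   # similarity weights
    return attn @ support_y                                # weighted vote for class 1

# Toy episode: two labeled molecules per class, one query molecule.
support_x = torch.randn(4, FEATURE_DIM)
support_y = torch.tensor([1.0, 1.0, 0.0, 0.0])             # e.g. toxic vs. non-toxic
query_x = torch.randn(1, FEATURE_DIM)
print(one_shot_predict(support_x, support_y, query_x))
```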

They trained the algorithm on two datasets—one with information about the toxicity of different chemicals and the other with information on the side effects of medicines on the market. With that training, they found that the algorithm could predict toxicity and side effects of new compounds better than random chance alone. 

Pande says it’s too soon to know how the technology might fare commercially, “but possibilities include building a start-up around this technology or partnering with those who design drugs,” he says. In addition to his role at Stanford, Pande is a general partner at the venture capital firm Andreessen Horowitz, where he leads the firm’s investments in companies at the intersection of biology and computer science.

Intel backing AI for early detection of lung cancer

American tech giant Intel last week announced a contest, called the TianChi Healthcare AI Competition, aimed at finding an algorithm for early detection of lung cancer. Participants in the contest will use computed tomography (CT) scans and clinical records to train algorithms to screen for suspicious growths on the lung, called pulmonary nodules.
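
For context, a contest entry would typically start from something like the toy model below: a small 3D convolutional network, written in PyTorch, that classifies a cube-shaped CT patch as containing a nodule or not. The layer sizes, patch dimensions, and data here are hypothetical and are not drawn from Intel, Alibaba Cloud, or the competition itself.

```python
# Minimal sketch of a 3D CNN for classifying CT volume patches as
# nodule vs. no nodule. Hypothetical shapes and layers; not a reference
# implementation from the competition organizers.
import torch
import torch.nn as nn

class NodulePatchClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
        )
        self.classifier = nn.Linear(16 * 8 * 8 * 8, 2)  # two classes

    def forward(self, x):                 # x: (batch, 1, 32, 32, 32) CT patch
        h = self.features(x)
        return self.classifier(h.flatten(start_dim=1))

model = NodulePatchClassifier()
patch = torch.randn(4, 1, 32, 32, 32)     # four synthetic 32-voxel-cube patches
print(model(patch).shape)                  # torch.Size([4, 2])
```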

Intel, based in Santa Clara, Calif., will host the competition in collaboration with Alibaba Cloud, the cloud computing arm of the Alibaba Group in Hangzhou, China. Lung cancer is one of the most prevalent forms of cancer in China and places an enormous burden on the country’s health care system. The contest aims to relieve that burden by aiding the diagnostic process.

Intel is contributing its Xeon and Xeon Phi processors, access to its algorithm and mathematics libraries, and deep learning framework software designed for medical image analysis. Alibaba Cloud is providing cloud-based foundational infrastructure. LinkDoc, an oncology-focused big data company in Beijing, will provide high-resolution CT scans.

The winner of the contest will receive 1 million Chinese yuan—about US $145,000. Results of the competition will be announced in September of this year, according to Intel.
