Tech Talk

Injectable Electronics Give Neurology a Shot to the Brain

Tiny electronic meshes that can be injected directly into the brain might one day provide control of prosthetic limbs, repair brain damage, or make artificial eyes, according to scientists who have already inserted the devices into the brains of mice.

Read More

Implant Fights Stroke, Tinnitus by Retraining the Brain


Houston-based Microtransponder is using an implanted vagus-nerve stimulator to turbocharge one of the brain’s most fundamental functions: learning. “What we’re doing with our VNS pairing therapy is trying to reorganize the brain,” says the company’s R&D director, Navzer Engineer.

Vagus nerve stimulation (VNS) involves a device, usually implanted in the body, that sends electric signals up into the brain from a nerve in the neck. Many of the organizations pursuing vagus-nerve-based treatments are targeting the brain’s neurotransmitter-releasing centers to treat conditions as varied as epilepsy, migraine headache, and heart failure. The vagus links to deep-brain structures, such as the nucleus basalis and the locus coeruleus, which are stuffed full of neurotransmitters like acetylcholine and norepinephrine—key neurochemicals in the cellular mechanisms of learning and memory. The neurotransmitters tell the brain what to learn and when to learn it, says Engineer. Microtransponder’s system is designed to help the brain learn its way around damage.

Read More

5 Materials Innovations for New Medical Devices


The next frontier for electronics could lie inside the human body, with sensors that keep track of biomarkers and brain activity, systems to deliver drugs or monitor exercise levels, and communications networks that allow such devices to call on the processing power of your smartphone and send your data to the doctor’s office.

“We’re moving toward a world where rather than going to the doctor once a month and having a measurement made, we’re going to have continuous monitoring,” says Ifor Samuel, head of the Organic Semiconductor Optoelectronics group at the University of St. Andrews, Scotland. Göran Gustafsson, an expert in printed electronics and bioelectronics at the Swedish research institute Acreo Swedish ICT, says the idea is a sort of OnStar for the body, similar to the General Motors system that lets a technician unlock your car remotely or notify the tow truck when you’ve had a breakdown.

Wiring the body could provide more realistic measurements of everything from blood oxygenation to stress hormone levels, placing them in the context of everyday life and perhaps providing early warnings of problems. Samuel and Gustafsson were among researchers at the Materials Research Society’s December meeting in Boston who talked of efforts to develop a network of devices that would keep track of people’s health and intervene where necessary. While most of these proposals will take several years to develop, and will require regulatory approval, here are some ideas these researchers are pursuing.

Read More

Powered Prosthetic Legs Work Better by Tracking EMG

Powered prosthetic legs work better when guided by electrical signals generated by the muscles, says a report published today in the Journal of the American Medical Association (JAMA). The findings suggest that bionic legs that rely on mechanical sensors to control movements would be greatly improved by the inclusion of electromyographic (EMG) data and the algorithms that interpret them.

In the study, teams from the Rehabilitation Institute of Chicago and Northwestern University tried out their system on seven people with above-knee amputations. Each participant was outfitted with 9 EMG sensors on their thighs and hips that were connected to a computer. The participants wore a prototype powered knee-ankle prosthesis equipped with 13 mechanical sensors that measured the inertia, load, position, angle, acceleration, velocity, and torque of the knee and ankle joints. The prosthesis was developed by Michael Goldfarb, a mechanical engineering professor at Vanderbilt University.

The participants were asked to traverse varying terrain—up ramps and stairs, and across level ground. The prostheses relying on the mechanical sensors alone made errors 14.1 percent of the time. But when the EMG data was incorporated, the prostheses made errors only 7.9 percent of the time—or about half as often.
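
In software terms, the control problem is one of pattern recognition: from the sensor readings around each stride, predict which locomotion mode the user intends next and configure the knee and ankle accordingly. The sketch below shows the shape of that comparison using an off-the-shelf classifier and synthetic stand-in data; the feature set and classifier are assumptions for illustration, not the Chicago team’s actual algorithm.

```python
# Minimal sketch of locomotion-mode classification with and without EMG.
# All data here are synthetic placeholders; swap in recorded stride features.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_strides = 600
mech = rng.normal(size=(n_strides, 13))      # features from the 13 mechanical sensors
emg = rng.normal(size=(n_strides, 9))        # features from the 9 EMG channels
modes = rng.integers(0, 3, size=n_strides)   # 0 = level ground, 1 = ramp, 2 = stairs

for name, features in [("mechanical only", mech),
                       ("mechanical + EMG", np.hstack([mech, emg]))]:
    accuracy = cross_val_score(LinearDiscriminantAnalysis(), features, modes, cv=5).mean()
    print(f"{name}: error rate {1 - accuracy:.1%}")
```

With real recordings in place of the random arrays, the gap between the two feature sets is what the 14.1 percent versus 7.9 percent figures capture.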

“That’s a lot,” says Levi Hargrove, a research scientist at the Center for Bionic Medicine at the Rehabilitation Institute of Chicago, who led the study. The seven participants were not told when the EMG data was being used, but “every subject figured it out,” said Hargrove. “At the end of the experiment, we would ask: ‘Which of the two conditions did you prefer?’ And they chose the one that used the EMG signals every time.” 

The advance is good news for people with above-knee amputations. Designing prostheses for these kinds of amputations is complex because they require precise coordination of knee and ankle movements. Particularly tricky is transitioning from one type of walking to another—like from walking on flat ground to climbing stairs.

There are only two powered leg prostheses on the market. One, which provides movement in just the knee joint, is made by an Icelandic company called Ossur. The other, called Biom, has a moving ankle; it was developed by Hugh Herr’s group at MIT. Prosthetics that combine powered knee and ankle movement are all still in the prototype stage.

A problem with these mechanical prototypes is that their designs don’t offer the flexibility necessary to accommodate different gaits. The user has to stop moving and make some kind of exaggerated body motion, or use a remote control, to tell the leg what to do next—a workaround that is often awkward and potentially dangerous.

Hargrove’s team aims to make walking smoother and safer for people with prosthetic legs. “We want them to be able to approach and walk up stairs the same way you and I would,” says Hargrove. 

Arm prostheses are much more advanced than those for the legs. The “Luke Arm” developed by the DEKA Research and Development Corp., for example, harnesses EMG data from the amputee’s remaining arm muscles and interprets it to let the prosthetic limb carry out multiple, simultaneous movements of the wrist and fingers, such as pinching or gripping. The device received FDA approval last year. Hargrove and his colleagues helped develop the system, and have since commercialized algorithms that can control the Luke Arm or any other arm prosthesis that relies on EMG pattern recognition.

But systems for the leg—particularly for combined knee-ankle devices—have remained elusive. Why the difference? Arms have been a clinical focus for longer than legs. Plus, technological challenges—developing motors and actuators that are strong enough, light enough, and efficient enough to carry people throughout the day without having to recharge batteries—have been difficult to overcome. But that is changing. “All of these innovations are coming together to make these categories of devices available,” says Hargrove. “So now we need to learn to control them as best we can.”

Hargrove and his team, in a collaboration with Vanderbilt and the U.S. Army, are now testing the systems on 15 participants in home settings. “That’s the real test,” says Hargrove. “We’re trying to understand if this is useful for people in the real world.”

Fraunhofer Optics Could Make Augmented Reality Specs Thinner

The list of companies with an interest in augmented reality eyewear has grown long over the years and includes some of the biggest names: Microsoft, Google, and Sony. However, consumer acceptance has been lackluster, in part because these specs are still a bit bulky.

Researchers led by Peter Schreiber at the Fraunhofer Institute for Applied Optics and Precision Engineering IOF in Jena, Germany, say they’ve created demonstration models of  data glasses that would be much smaller and less obtrusive than the specs available now.

The key to their design is the use of a glass waveguide, both for the transmission and the display of images. "We have a very slim design for the eyepiece optics, which fits very well with the geometry of a waveguide," says Schreiber. A simple, elongated rectangular sheet of glass serves both as the waveguide and as the support for a small, transparent display measuring 8 by 15 millimeters near the eye.

A two-megapixel microdisplay produces the image at one end of the glass sheet. The image is coupled into the glass by a diffraction grating—a microlattice deposited on the waveguide as a thin, transparent plastic film. The image travels through the glass sheet and is coupled out of the waveguide at the other end, close to the user's eye, by a similar microlattice. Held close to the eye, it can be viewed through an array of 1-millimeter microlenses.

Unfortunately, using a waveguide to transmit the image limits the resolution of the display to about 800 x 600 pixels. "We have a very slim design, and we pay for this slimness in terms of projected pixels," says Schreiber. But this would not drastically limit the applications of the specs, he claims. "If you want to use a simple navigation system, which only says, '500 meters to the right,' or … 'there is a call on your smartphone,' or for simple graphics in a repair manual, then it will work well."
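
The coupling Schreiber describes follows the standard grating equation: the microlattice bends incoming light steeply enough that it stays trapped in the glass by total internal reflection. The short calculation below illustrates that condition; the grating pitch, wavelength, and refractive index are assumed values for illustration, not figures from the Fraunhofer design.

```python
# Grating-coupling sketch: does first-order diffraction exceed the TIR angle?
import math

n_glass = 1.5        # typical glass refractive index (assumed)
wavelength = 530e-9  # green light, in meters (assumed)
pitch = 450e-9       # grating period, in meters (assumed)

# Grating equation for normally incident light diffracted into the glass:
#   n_glass * sin(theta_m) = m * wavelength / pitch
m = 1
theta = math.degrees(math.asin(m * wavelength / (pitch * n_glass)))
critical = math.degrees(math.asin(1.0 / n_glass))  # glass/air critical angle

print(f"first-order angle inside glass: {theta:.1f} degrees")
print(f"critical angle for TIR:         {critical:.1f} degrees")
print("image is guided" if theta > critical else "image leaks out")
```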

Up to now the researchers have produced two demonstration prototypes of their data glasses. One is a real near-to-eye display with a micro imager, but it isn’t see-through. The other system cannot display moving images, but is see-through, says Schreiber. "What we have to do in the future is to put both systems together, with a real micro imager and a waveguide for the see-through option," he says.

An additional benefit of the Fraunhofer system is that image processing can correct for a user’s farsightedness. The display connects wirelessly to your smartphone, and an app lets you enter your degree of vision defect, such as +1 or +2, says Schreiber. The smartphone then processes the images so that you see a sharp display. He argues that moving most of the image processing and operating software to a smartphone allows a more compact structure for the data glasses, and a longer battery life.
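
The article doesn’t say how the app turns a prescription like +1 or +2 into a sharper view; one simple, purely software-side possibility is to pre-sharpen each frame in proportion to the entered value. The snippet below is an assumed illustration of that idea, not the Fraunhofer algorithm.

```python
# Hypothetical pre-correction: unsharp-mask a frame, scaled by the prescription.
import numpy as np
from scipy.ndimage import gaussian_filter

def precorrect(frame: np.ndarray, prescription_dioptres: float) -> np.ndarray:
    strength = 0.5 * abs(prescription_dioptres)   # assumed mapping from dioptres to sharpening
    blurred = gaussian_filter(frame.astype(float), sigma=1.5)
    sharpened = frame + strength * (frame - blurred)
    return np.clip(sharpened, 0, 255).astype(np.uint8)
```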

The technology they are developing might also be of use in several optical instruments, such as microscopes and micromanipulators, where transparent displays could be used for placing cross hairs or a controllable cursor in the view. 

 "For future optical solutions, we are now looking for partners, like startups, to develop products with them," says Schreiber.

Fail: Computerized Clinical Decision Support Systems for Medical Imaging

Computerized systems that help physicians make clinical decisions fail two-thirds of the time, according to a study published today in the Journal of the American Medical Association (JAMA). With the use of such systems expanding—and becoming mandatory in some settings—developers must work quickly to fix the programs and their algorithms, the authors said. The two-year study, which is the largest of its kind, involved over 3,300 physicians. 

Read More

Green Microchips Created on Cellulose Nanofibril Paper

In September 2007, while leafing through his copy of IEEE Spectrum, Zhenqiang "Jack" Ma, an engineer at the University of Wisconsin-Madison whose research focus is microwave electronics, came across a news item that left him baffled: “I was shocked by the number of cell phones that are discarded daily in the U.S. and that are still in working order—426,000 per day. That is a huge number, and as a researcher, I was concerned,” he says. And rightly so; each cell phone contains chips made of poisonous gallium arsenide (GaAs).

In the 26 May issue of Nature Communications, Ma and his colleague, materials scientist Shaoqin “Sarah” Gong, along with collaborators at UW-Madison and the Madison-based U.S. Department of Agriculture Forest Products Laboratory (FPL), published research describing a technique for making biodegradable semiconductor chips out of wood. What’s more, they demonstrated that microwave transmitter and receiver chips made this way perform as well as their silicon or GaAs counterparts. “Actually, our work was inspired by the IEEE Spectrum article,” says Ma.

Unlike the silicon, GaAs, and petroleum-based plastic substrates used in electronics—none of which are biodegradable—the substrate for Ma and company’s "green" chips is made of a type of paper. But unlike ordinary paper, which typically consists of wood fibers 10 micrometers thick or more—making it rough and fairly easy to tear—theirs is built from much smaller fibers. “If you chop down the wood into nanosize fibers, you find that the fibers are single crystals. If you put this material together to make a substrate, it becomes very strong—stronger than the paper we use,” says Ma. “It also becomes transparent and has low RF energy loss,” he adds. The cellulose nanofibril (CNF) “paper” they used is about 200 micrometers thick. Although the researchers coated it with a thin epoxy layer to protect it from moisture, this does not affect its biodegradability. "If we put it in a fungus environment, the fungus can still eat it," says the Wisconsin researcher.

To create the green chips, the researchers started out with silicon or GaAs devices sitting atop substrates made of the same material. Then they released the circuits from their original substrates and transfer-printed them onto the nanofibril substrates. Using this technique, the researchers created several microwave GaAs devices, such as arrays of GaInP/GaAs heterojunction bipolar transistors, as well as circuits containing capacitors, RF inductors and Schottky diodes.  The performance of these flexible devices is exactly the same as that of rigid circuits, reports Ma.

The group also demonstrated several silicon-based digital logic circuits on paper substrates. However, these substrates may have a wider range of applications. Nanofibril films may be used in photovoltaic cells and also in displays because they have better light-transmission properties than glass, says Ma.

Using paper substrates would allow a reduction in the amount of GaAs used in chips by a factor of 3000, which would make chips conform to the pollution standards for arsenic set by the U.S. Environmental Protection Agency. Additionally, this technique would help cut costs, reducing the amount of expensive materials, such as gallium arsenide and highly purified silicon, that are packed into electronic gadgets.
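
The article doesn’t spell out where the factor of 3000 comes from, but a back-of-envelope comparison of layer thicknesses shows how replacing the bulk GaAs substrate with paper gets into that range. The thicknesses below are assumed, typical values, not numbers from the paper.

```python
# Rough volume comparison: bulk GaAs chip vs. thin GaAs layer on CNF paper.
substrate_thickness_um = 500.0    # conventional GaAs wafer thickness (assumed)
device_layer_thickness_um = 0.17  # transfer-printed active GaAs layer (assumed)

reduction = substrate_thickness_um / device_layer_thickness_um
print(f"approximate reduction in GaAs used: {reduction:.0f}x")  # on the order of 3000x
```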

"What we are looking at are future applications,” says Ma.  The paper includes a market survey comparing today's production of rigid electronics with the projected flexible electronics production. The volume of flexible electronics is expected to largely exceed rigid electronics. 

CESAsia 2015: Stephen's Show Floor Sightings, Part II

The Arduboy


A runaway success on Kickstarter, the Arduboy is a hackable-by-design, credit-card-size system for classic arcade-style games. It’s powered by the same Atmel chip that runs the Arduino. Unlike some Kickstarter projects that have struggled to cross over from prototype to production, the Arduboy is already lining up its ducks with Chinese manufacturers in Shenzhen. If you want one, the Kickstarter campaign is still active; US $30 will get you early delivery, scheduled for October.

The WenPod

Behold! The next-generation selfie stick! Similar to the gyroscopic stabilization rigs available for video cameras, the WenPod X1 can keep your camera pointed in your direction, even as the selfie stick it’s attached to moves around. It will be available in the United States in June for US $160.

IBM Watson's Recent Acquisitions Might Make It a Knowledge Machine You Can Actually Use

IBM Watson has been buying some interesting companies and technologies lately:

  • Cognea, a company that had developed a “conversational artificial intelligence program” meant to provide more natural interactions than current voice-controlled assistants like Siri and Cortana.
  • AlchemyAPI, a company that provides natural language processing and image recognition as an on-demand service.
  • Blekko, an alternative search engine known (to the extent it was known at all) for its classification and filtering of results.

Looking at these capabilities all together provides some hints about Watson’s current weaknesses as well as IBM’s long-term plan for the system. All three are customer-facing technologies which provide different ways to interact with Watson’s collection of data and analytics capabilities.

In January 2014, IBM set aside US $1 billion to turn Watson into an actual business, because it’s generally been better at generating publicity than revenue. Despite all the potential that Watson-the-general-purpose-AI might have, it’s still a bit unclear what Watson-the-business will do. For a large company that is having difficulty commercializing its own internal research, buying up smaller companies that already have users or customers is a common approach (Marissa Mayer’s buying spree at Yahoo comes to mind). 

“IBM is notorious for producing a lot of 80-percent solutions,” says Seth Grimes, an industry analyst and consultant who organizes the Sentiment Analysis Symposium. “The technology is there, but you can't just open up a box, download the software, and have it run. I see Watson in that situation right now.” That’s in contrast with AlchemyAPI, which Grimes describes as a “market tested” platform that has had customers for years. 

In addition to its web interface, AlchemyAPI also provides software development kits to support text analysis and image recognition in several programming languages. Watson does not have that kind of interface available, says Grimes. “They could build one, but for a company like IBM, it's faster, cheaper, and perhaps more feasible to buy it than to try to productize something out of their research organization,” he adds.
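
For context, offering natural language processing “as an on-demand service” means a developer posts text to a web endpoint and gets structured results back. The snippet below sketches that pattern; the URL, parameters, and response fields are placeholders, not AlchemyAPI’s or Watson’s actual interface.

```python
# Sketch of an on-demand text-analysis call; endpoint and fields are hypothetical.
import requests

def extract_keywords(text: str, api_key: str) -> list[str]:
    resp = requests.post(
        "https://api.example.com/text/keywords",   # placeholder endpoint
        data={"apikey": api_key, "text": text, "outputMode": "json"},
        timeout=10,
    )
    resp.raise_for_status()
    return [kw["text"] for kw in resp.json().get("keywords", [])]

# Usage, with a placeholder key:
# print(extract_keywords("IBM's Watson answered the question correctly.", "MY_KEY"))
```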

But for Watson to live up to IBM’s hopes, it will also have to be usable for non-developers. Cognea’s technology might be a step in that direction. Apple and IBM already had a deal to provide Siri with Watson data and analytics, but it’s clear that digital assistants have plenty of room for improvement. Current question-answering systems (including the ubiquitous Google search) don’t have much context or state, says Grimes—each question is answered independently. But IBM aspires for Watson to offer an interactive learning experience, so Cognea’s conversational model might help users give Watson better contextualized and refined questions.

Finally, the purchase of a search engine might reignite the idea that Watson will be a Google search competitor. “IBM aspires to assist knowledge workers,” says Grimes, “and Google is arguably in that same field.” After all, Google search results have evolved beyond simply listing links to web pages—the Google Now app on Android devices can provide all sorts of contextual information without ever opening a browser. 

But the two companies have very different ideas about how to make money with such knowledge. As Chuck McMannis, Blekko’s VP of Engineering, explained in his comment on Hacker News:

Blekko's key mission has always been to try to find the needles in this exponentially growing pile of hay. And it is something that the folks at Watson really liked about our technology when we first met at their outreach program to connect with startups. That is what lead to their asking us to join them, and no, they weren't particularly interested in the stuff we had done to provide more topical advertising signals.

CESAsia 2015: China’s Maker Scene Is Exploding

On the last day of CESAsia, nervous representatives from maker spaces and universities in Shanghai and surrounding cities trooped on stage to make presentations to a panel of judges. It was a scene that would be familiar to makers and hobbyists in other countries, but it was something new to China: this was the country’s first maker contest. It won’t be the last, though, if the rapid growth in the number of maker spaces there is any indication. That growth is now being officially supported by the Chinese Ministry of Industry and Information Technology (MIIT).

“Domestic maker spaces started in 2010, and it was a very niche activity,” said Rocky Zheng, head of the Nanjing Maker Space, through a translator before the judging began. “Until 2013, China had only 15 spaces, then it began booming. By April 2014, there were over 100 spaces.”  

Projects presented during the event included: a headset for bathing dry eyeballs in vapor containing traditional Chinese medicines; a device for harvesting energy from straphangers as they are swung back and forth while buses and subway trains lurch along their routes; a system that lets a person’s text messages appear at the bottom of a television screen; potter’s clay reformulated so it can be extruded from a modified 3-D printer capable of rendering complex ceramic designs; and an electronic lock opened by coded light signals from a smartphone app. The winner was a team from Shanghai that developed a system for warning a driver when he or she is straying out of the lane.

The MIIT has started supporting maker spaces and small tech startups, in part by creating an online platform for makers to exchange information about projects. It has also begun raising funds via crowdfunding to defray the costs of commercializing projects. The ultimate goal is boosting China’s ability to be technologically creative. Some of the government representatives speaking at the contest seemed to regard makers as nascent business entrepreneurs, and maker spaces as commercial incubators that would feed directly into industry. This was reflected in a second round of the contest, in which teams from small—and not-so-small—companies presented new product ideas such as an automated frying pan. We also saw smart systems for reducing office energy use—more the kind of thing that elsewhere would be seen at a Y Combinator-style event than at a maker contest.

But several speakers were quick to make the point that while commercial products can come out of maker spaces, it would be a mistake to view them, and the people working in them, so narrowly: “We try to best turn our creativity and hobbies into reality. We may be able to commercialize in some areas, but in others we are simply playful in our effort. Makers go beyond the scope of entrepreneurs in our effort,” says Nanjing Maker Space’s Zheng.

Currently, the goal is to create maker spaces in the provinces, spreading them out from their current concentration in and near cities such as Beijing, Shanghai, and Shenzhen. If this can be done with the same maker spirit espoused by Zheng, China could indeed be seeding a creative tech renaissance.



