Tech Talk

Ultrasound Microscopy Helps Image Tiny Blood Vessels

Researchers have used super-resolution ultrasound imaging to capture pictures of microscopic blood vessels in the brain of a live rat.

Such research could one day help investigate diseases that modify blood vessels, such as cancer, stroke and thickening of artery walls in the heart and elsewhere, scientists add.

Current techniques for imaging microscopic blood vessels in living organisms are limited by how deeply they can penetrate tissue, how quickly they can take pictures, and the resolution of the images they capture. Although conventional medical ultrasound can image both deeply and quickly, it has, at best, offered a resolution of several hundred micrometers. Because waves diffract, or spread out, as they travel, radiation such as ultrasound cannot directly image features smaller than half its wavelength.
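To put that half-wavelength limit in numbers, here is a rough sketch. The frequency and sound speed are illustrative assumptions (a typical 5-MHz clinical probe and roughly 1,540 m/s in soft tissue), not values from the study:

```python
# Diffraction-limited resolution of ultrasound in soft tissue (illustrative sketch).
c = 1540.0              # assumed speed of sound in soft tissue, m/s
f = 5e6                 # assumed clinical ultrasound frequency, Hz
wavelength = c / f      # wavelength in metres
limit = wavelength / 2  # half-wavelength diffraction limit

print(f"wavelength: {wavelength * 1e6:.0f} micrometers")   # ~308 um
print(f"resolution limit: {limit * 1e6:.0f} micrometers")  # ~154 um
```

The result, on the order of a few hundred micrometers, is consistent with the resolution figure quoted above.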


Measuring Tiny Magnetic Fields With an Intelligent Quantum Sensor

We know that electrons have spin. And researchers think that the spin of single electrons trapped in nitrogen vacancy centers in diamond might be used to store qubits in future quantum computers. They would use light and microwave pulses to control the electron’s spin—either up or down—and read it out. However, because spin is a quantum property, as soon as any measurement is made to determine the strength of a magnetic field, the spin collapses from superposition into one of the two possible spin states. But by repeating the measurement many times and looking at the distribution of up and down spins, it is possible to estimate the magnetic field strength statistically.

Researchers at Delft University of Technology in the Netherlands and Macquarie University in Sydney, Australia, have developed a method that boosts the sensitivity of the repeated measurements by using a feedback loop. After each spin determination, it adjusts the measurement settings for the next measurement, resulting in a sensitivity that is 100 times as high as that of earlier experiments. The researchers published the design of an “intelligent” quantum sensor controlled by a microprocessor in last week's Nature Nanotechnology.

If an electron is brought into a magnetic field, it undergoes a Zeeman interaction, says Machiel Blok, a physicist at Delft University of Technology who participated in the research. Blok explains that it’s a phenomenon similar to the splitting of spectral lines observed in the sun's atmosphere, which is caused by the sun's magnetic field. He added:

To measure this interaction, we use a technique called Ramsey interferometry, where we first prepare with a microwave pulse a superposition between the two spin levels. The energy difference between these spin levels depends on the static magnetic field that is present. This can be read out by the spin—that is, how much is in one state, and how much is in the other state.  This we do optically; we can get a resonant excitation of the spin, depending on the spin state. As a result, we get fluorescence if it was in one spin state, and no fluorescence if it was in the other spin state. 

The experiment is repeated multiple times with a preset series of different sensing times whereby the electron is coaxed into a different quantum state each time and allowed to interact with the magnetic field. The magnetic field affects the proportion of these two states, which the researchers can indicate with a series of ones and zeros. The ratio of zeros to ones is an indication of the strength of the magnetic field. 

Up to that point, the methodology of the Delft researchers is no different than earlier research using spins to measure magnetic field strength. But where the methods diverge is that, with each subsequent readout of the electron’s quantum state, the measurement is further refined using Bayesian statistics based on the spin readouts that came before. “By looking at the outcome of each experiment, using Bayesian statistics, you can tune the next step of the experiment, which allows the experimental setup to focus quickly on a better estimate of the magnetic field,” says Blok.
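The estimation loop described above can be sketched as follows. This is an illustrative toy, not the Delft group's actual protocol: for clarity it uses idealized expected counts instead of noisy single-shot readouts, and a preset doubling schedule of sensing times rather than per-shot adaptive feedback; all numerical values are assumptions:

```python
# Sketch: estimating a magnetic field from spin-up/spin-down counts with a
# Bayesian update over a grid of candidate field values.
import numpy as np

GAMMA = 28e9      # electron gyromagnetic ratio, ~28 GHz per tesla
B_TRUE = 20e-6    # the "unknown" field to be estimated, tesla (assumed)

def p_up(B, t):
    """Ramsey fringe: probability of reading out spin-up after sensing time t."""
    return 0.5 * (1.0 + np.cos(2 * np.pi * GAMMA * B * t))

B_grid = np.linspace(0.0, 50e-6, 5001)   # candidate fields, 10 nT steps
log_post = np.zeros_like(B_grid)         # flat prior, kept in the log domain

shots = 10
for k in range(8):                       # sensing times from 25 ns to 3.2 us
    t = 25e-9 * 2**k
    ups = round(shots * p_up(B_TRUE, t))                 # idealized up-count
    p = np.clip(p_up(B_grid, t), 1e-12, 1 - 1e-12)
    log_post += ups * np.log(p) + (shots - ups) * np.log(1 - p)  # Bayes update

B_est = B_grid[np.argmax(log_post)]
print(f"estimated field: {B_est * 1e6:.2f} uT (true: {B_TRUE * 1e6:.2f} uT)")
```

Short sensing times give a coarse, unambiguous estimate; longer times sharpen it, which is the same intuition behind adapting the measurement settings shot by shot.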

Blok and his collaborators used a commercially available programmable microprocessor that collects the measurement results in real time and performs the Bayesian update itself. “We programmed the chip so that it can do this efficiently, and this increased the accuracy of the measurement substantially,” says Blok.

Although very weak magnetic fields can be measured with superconducting quantum interference devices (SQUIDs), using electron spins has an interesting advantage: spatial resolution. The spatial resolution of SQUIDs, Blok explains, is limited by the size of the loop, which is on the order of a micrometer or larger. “Our single-spin sensors, since they are an atomic defect, can, in principle, offer nanometer or even subnanometer resolution,” says Blok. To put that size in context, diamond nanocrystals containing nitrogen vacancy centers can be introduced into living single cells, says Blok.

Despite Its Status as a Chip Powerhouse, Taiwan Neglects Supercomputing

A quick glance at the new ranking of top supercomputers reveals a surprising absence among the world’s technological powerhouses: Taiwan does not possess a single machine powerful enough to make the list. Many nations don’t make the list, but Taiwan stands out because of its outsized grip on the computer chip industry. What’s more, its political rival, China, not only holds the world’s top machine, it now has more ranked supercomputers than any nation except the United States.

It has been a long decline. Taiwan’s most powerful supercomputer, the Advanced Large-scale Parallel Supercluster, known as ALPS or Windrider, ranked 42nd in June 2011, shortly after its launch.

But the process of upgrading Taiwan’s supercomputing infrastructure has been slowed by ineffective government budget allocation. Since 2013, the National Center for High-performance Computing (NCHC), which operates Windrider, has twice failed to get its budget boosted enough to strengthen its supercomputing capability. While other countries poured money into powerful new supercomputers as a way to project national power, Windrider fell to 305th in June 2014 and to 445th in June 2015.

“If our three-year budget proposal is approved early next year, Taiwan would gain a much better position on the Top 500 in 2018, when a 2 petaflops system is launched,” says Jyun-Hwei Tsai, Deputy Director General of NCHC. If such a system were launched today, it would rank 36th.

The Ministry of Science and Technology says it understands the importance of supercomputing and prioritized it in its latest budget proposal, as it did in 2013 and 2014. However, the decision ultimately rests with the Cabinet.

Cabinet spokesman Lih-chyun Sun says the government fully understands the importance of supercomputing and points out that Taiwan has promoted cloud computing and big-data projects. “It remains uncertain when sufficient budget would be made available for new systems. We’re still reviewing the budget proposal. The decision has not yet been made,” Sun says.

“The Cabinet will make a final decision early next year,” adds Tzong-chyuan Chen, Director General of the Department of Foresight and Innovation under the ministry. “In economic recession years, it’s difficult to gain budget for important science and technology projects with long-term impacts, which are not yet felt.”

It wasn't always like this. In June 2002, an IBM system at the NCHC center ranked 60th. In June 2007, the center’s newest system, called IRIS, ranked 35th. 

Taiwan’s IRIS, however, was eventually kicked off the Top 500 list in November 2009 due to a boom in supercomputer installations in many other countries, such as China. The world’s most powerful system, China’s Tianhe-2, or Milky Way-2, has held the top of the biannual Top 500 list six times in a row. And it is one of 109 systems in China that made the list. The huge increase in China's supercomputing power in recent years can be attributed in part to government-backed companies, such as Sugon and Inspur, which together manufactured 64 of the ranked systems.

According to NCHC’s Tsai, the big strides taken by other countries are a sore point in Taiwan. “We don’t compare ourselves with big countries, such as China, Japan, and the United States. What frustrates us more is that, in South Korea, the momentum of national supercomputing is now stronger than ours,” he says. Currently, South Korea’s two fastest systems rank 29th and 30th.

It’s not for lack of demand for supercomputing in Taiwan. Currently, Windrider’s utilization exceeds 80 percent. “It’s like a crowded superhighway. And we’ve heard complaints from some users,” NCHC’s Tsai says.

According to Tsai, Windrider is most heavily used in basic physics, chemistry, and biomedical imaging. But certain key fields get prioritized access. Those include environmental studies, climate change, earth science, natural disasters, and water resources management.

“Taiwan is prone to natural disasters, such as typhoons, floods and earthquakes. A powerful database, backed by powerful supercomputing systems, is essential for conducting better predictions of typhoons,” Tsai says.

Due to the limits of Taiwan's supercomputing capability, some scientists have taken to building their own computer clusters and speeding up existing resources with graphics-processing-unit-based accelerators.

Tzihong Chiueh, a theoretical astrophysicist at National Taiwan University, says his team has not relied on NCHC’s system for years. Since 2013, it has taken advantage of a self-built system that can reach tens of teraflops.

“A petaflop-scale system, should it be funded by the government, would certainly be useful to researchers,” he says. “The investment should indeed be prioritized. I hope it can work at least 10 times faster than the current system.”

Testing Einstein's Theories With Satellites Stuck in Eccentric Orbits

In August of last year, when ESA launched its fifth and sixth Galileo navigation satellites, things went wrong. Because of a fault in the upper stage, both spacecraft ended up in elongated elliptical orbits instead of circular ones, making them unusable for navigation. Subsequent corrections of their orbits restored their function as navigation satellites, but their orbits still remained highly elliptical, with a difference of about 8,000 km between their closest and most distant points from Earth.

To the satellite navigation engineers, this was a nuisance requiring changes in software and technology. But for physicists, the eccentric orbits offered an unexpected opportunity. Researchers at both Systèmes de Référence Temps Espace, or SYRTE (a department of Paris Observatory), and ZARM (the Center of Applied Space Technology and Microgravity) at the University of Bremen, Germany, convinced ESA to use the satellites for a more extensive test of an effect predicted by Einstein's general relativity: the extent to which clocks tick faster as the gravitational field weakens with distance from Earth.

This effect, also called gravitational redshift, or time dilation, has been observed before; correcting the timing signals transmitted from navigation satellites because their clocks run slightly faster than those on the Earth's surface is a matter of routine. And the relationship between a clock's rate and its distance from Earth was tested in 1976 with the one-shot Gravity Probe A experiment, which reached a height of 10,000 kilometers. Using a two-way microwave link between the ground station and Gravity Probe A, researchers directly compared the frequency of the maser clock aboard the spacecraft to that of one on the ground, and confirmed the relativistic prediction with an accuracy of 140 parts per million.
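For a sense of scale, the weak-field formula for the gravitational rate difference between a clock at altitude h and one on the ground is Δf/f ≈ (GM/c²)(1/R − 1/(R+h)). A back-of-the-envelope sketch, ignoring the velocity time-dilation term:

```python
# Fractional frequency gain of a clock at altitude over a ground clock
# (weak-field approximation; velocity term neglected).
GM = 3.986004e14       # Earth's gravitational parameter, m^3/s^2
C = 2.99792458e8       # speed of light, m/s
R_EARTH = 6.371e6      # mean Earth radius, m

def redshift(h):
    """Gravitational part of the fractional rate difference for altitude h (m)."""
    return (GM / C**2) * (1.0 / R_EARTH - 1.0 / (R_EARTH + h))

# At Gravity Probe A's ~10,000 km apogee the effect is a few parts in 10^10
print(f"{redshift(1.0e7):.2e}")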

Maser clocks are large and complicated instruments that measure time with extreme precision, and are carried only by navigation satellites. Because their precision allows accurate measurement of the relativistic shift, the researchers at SYRTE and ZARM (ZARM had even proposed building a dedicated satellite in the past) jumped at the opportunity. “There are not many opportunities where you have a good clock on an eccentric orbit in space,” says Sven Herrmann, a physicist at ZARM. “This is a combination that happened by chance; it was bad luck but also good luck.”

Unlike Gravity Probe A, the Galileo satellites don’t have a microwave link that allows direct access to the maser clock frequency. However, the researchers will be able to use the existing spacecraft-to-ground communication infrastructure, including the GNSS ground stations, to perform the test.

ESA decided to go ahead with the test after a workshop in February 2015. The data taking, which will last for a year, will start in 2016.  “An important consideration for this decision was that the general relativity tests we are going to perform may be done in a transparent way, without any interference in the nominal operations of the satellites,” says ESA Global Navigation Satellite Systems Senior Advisor Xavier Ventura Traveset.

The Galileo satellites continuously send messages about their position and the time on their clocks, explains Herrmann.

We will reconstruct the data, the clock frequency, from the measured travel times. The satellite transmits a time stamp when the message leaves the satellite, and the receiver on the ground also produces a time stamp, and you calculate the travel time. If you know the position of the satellite, you can reconstruct the behavior of the clock on board the satellite. 
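The bookkeeping Herrmann describes—subtracting the geometric light time from the timestamp difference to expose the clock's behavior—can be sketched with hypothetical numbers (the function and values below are illustrative, not the actual Galileo processing):

```python
# Recovering an apparent satellite-clock offset from transmit/receive timestamps,
# given an independently known satellite range (e.g. from laser ranging).
C = 299792458.0  # speed of light, m/s

def clock_offset(t_transmit, t_receive, range_m):
    """Apparent clock offset in seconds after removing geometric light time."""
    return (t_receive - t_transmit) - range_m / C

# Hypothetical case: 23,000 km slant range, clock running 1 microsecond ahead
tof = 2.3e7 / C
print(f"{clock_offset(100.0, 100.0 + tof + 1e-6, 2.3e7):.2e}")  # ~1e-06 s
```

In practice the clock offset is tracked over many orbits, and its periodic variation with altitude is what carries the relativity signal.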

Precise knowledge of the distance to the satellites is crucial to determining the maser frequencies. The researchers plan to use optical laser ranging, bouncing laser beams off retroreflectors mounted on the satellites, to get an accurate measurement.

The researchers expect their measurement of the relativistic shift to be four times as accurate as the Gravity Probe A result. And the large number of repeated measurements, compared with Gravity Probe A’s single shot, will give researchers far more data to analyze. “We have this large modulation in gravitation now, with 8,000 kilometers change twice per day...that will help us with the statistics,” concludes Herrmann.

Rewired Rose Plant Becomes Living Cyborg

“Every rose has its thorn,” the song goes, but not every rose has electronic wires running through its body. The futuristic idea of plant cyborgs is making the leap from science fiction to real-world science.

What’s the big deal? 

Swedish researchers have been working on ways to regulate plant growth, using electronic wires grown inside the plants’ own nutrient channels to host sensors and drug-delivery systems. The aim is to provide just the right amount of plant hormones at just the right time. Such efforts could provide even more precise human control over plant production and agriculture.

A separate but no less exciting project involves embedded biofuel cells that could literally turn plants into solar power plants. If all goes well, sensors and other devices could someday harvest electricity from the natural process of photosynthesis that enables plants to turn sunlight into chemical energy. It’s not often that such a sweet-smelling prospect begins with a humble garden rose. But that’s where the first successful steps toward electronic plants have been taken. A team at Linköping University in Sweden has taken a huge step forward with the first experiments demonstrating electronic circuits within the living bodies of plant stems and leaves. Their research is detailed in the 20 November 2015 issue of the journal Science Advances.

They grew electronic wires as long as 10 centimeters within garden rose stems and turned leaves into patchy electronic displays capable of changing colors between light and dark on demand. They also built working transistors—the basic switches at the heart of modern electronics—based on the wires embedded within the plants.

“In a sense, we are then introducing a nervous system into the plants,” says Magnus Berggren, a professor of organic electronics at Linköping University in Sweden.

But the researchers didn’t perform Frankenstein-style surgery to implant the wires. Instead, they made use of the xylem, plants’ natural system of channels that typically carry water and nutrients from the roots to stems, leaves, and flowers.

The team’s early attempts to thread conductive polymer wires through the xylem led to the xylem being clogged or the plants exhibiting severe toxic reactions. But the researchers eventually discovered that a liquid solution containing a polymer called poly(3,4-ethylenedioxythiophene), or PEDOT, could readily be taken up by the xylem and distributed evenly throughout. What’s more, they found, it would eventually form a solid wire capable of conducting electricity. The presence of such “xylem wires” still allows the channels to carry the necessary water and nutrients for plant survival.

Berggren explained how the liquid solution containing dissolved chains of PEDOT-S:H—a chemical variation of PEDOT—was able to form solid wires with the help of both the xylem’s vascular channels and the plants’ delayed immune response: 

After some time, the plant reacts against this unknown material. A common reaction against pathogens or toxic materials involves exchange of monovalent ions with divalent ones. The increase of divalent ions promotes self-organization and formation of the actual conducting wires along the inner walls of the xylem channels. In a sense, the plant is helping us to separate the event of distribution of the conducting and electronic materials from the event of film formation along the xylem walls.

Successful creation of the xylem wires also allowed the researchers to create “organic electrochemical transistors” within the plants; these transistors convert chemical signals into electronic outputs. Such transistors could form the basic hardware for more sophisticated plant cyborg devices. The team even used the plant circuitry to demonstrate digital logic gates—the building blocks for performing more complex electronic and computing operations.

Other experiments turned the leaves of roses into living electronic displays. The Swedish researchers accomplished this by encapsulating a leaf in a syringe filled with a different PEDOT solution. When the syringe’s plunger was pulled up, it created a vacuum that sucked gas out of the leaf through the “stomata” pores on the leaf surface. Once the syringe plunger was pushed down, the PEDOT solution rushed into the pores to fill the spaces between the leaf’s veins.

The result was a patchy network of conductive material within the leaf. Researchers sandwiched the leaves between PEDOT films to create electrical contacts with the PEDOT inside the leaves. That enabled the team to remotely manipulate the material within the leaves, changing their color between lighter and darker patterns. The switch between light and dark typically took about 20 seconds. The researchers observed that a pattern, whether light or dark, would remain visible for about 10 minutes.

The researchers mostly experimented with cut rose stems and leaves, but what works in garden roses could also help create other electronic plants, Berggren said. The basic structure of roses resembles that of larger plants such as trees, which means trees could also theoretically become living plant cyborgs or “e-plants.”

Femtosecond Lasers Drive a New Generation of Network Vector Analyzers

Researchers at the Physikalisch-Technische Bundesanstalt (PTB, the German national standards institute in Braunschweig) have used lasers to improve on a widely used and extremely precise instrument for measuring signal strength and phase in electronic components. The new laser-driven terahertz vector network analyzer (VNA) could offer broader frequency coverage and higher resolution at a lower cost than today's instruments.

By using a VNA to precisely plot the signal amplitude and phase over time, designers and troubleshooters can see how signals are transmitted and reflected by components and connectors. This lets them characterize signal scattering and timing shifts that can affect a design’s performance.  In addition, national standards authorities use VNAs to define voltage pulse standards for calibrating oscilloscopes, photodiodes, electrical pulse generators, and the like.

In research published in IEEE Transactions on Microwave Theory and Techniques, PTB’s Mark Bieler, Heiko Füser, and Klaus Pierz note that though existing VNAs are extremely precise, they have grown more complex and more expensive as operating frequencies have topped 1 THz.

The PTB team constructed a prototype of a simplified vector network analyzer using a femtosecond laser to generate test signals—short, precise voltage pulses. The pulse travels down a short strip of gold (4 mm long and 0.5 micrometers thick) deposited on a gallium arsenide (GaAs) chip. The pulse’s electric field changes the gallium arsenide’s index of refraction, so another laser beam can track and measure phase and amplitude as the signal travels down the strip. The method can resolve signals traveling both up and down the conductor, so researchers can also measure the signal reflected when the pulse encounters another material in the circuit. By generating pulses over a range of frequencies, the PTB scientists can thoroughly characterize the performance of the “device under test.”

Thanks to its relative simplicity and the falling size and cost of femtosecond laser sources, the researchers say, the new VNA is “relatively cheap,” and offers an “incredibly broad frequency bandwidth spanning three orders of magnitude” with a single piece of hardware. The prototype does have at least one drawback, say its originators: so far, its 40-decibel dynamic range (able to detect signals whose amplitudes differ by two orders of magnitude) is considerably smaller than that of the best conventional VNAs, which offer 120 dB (six orders of magnitude) or more.
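For reference, dynamic range in decibels maps to an amplitude ratio via 20·log₁₀, which is where the "two orders of magnitude" and "six orders of magnitude" figures come from:

```python
# Convert a dynamic range in decibels to the corresponding amplitude ratio.
# For amplitudes, dB = 20 * log10(A1 / A0), so each 20 dB is one order of magnitude.
def db_to_amplitude_ratio(db):
    return 10 ** (db / 20.0)

print(db_to_amplitude_ratio(40))    # 100.0  (two orders of magnitude)
print(db_to_amplitude_ratio(120))   # 1000000.0  (six orders of magnitude)
```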

The new device uses a flash of 800-nanometer light (produced by frequency-doubling the 1600-nm light of an erbium-doped pump laser) to activate a “biased photoconductive gap”—essentially an open switch that closes when hit with laser light—at the beginning of the waveguide. This allows current to flow across the gap, but the switch opens again almost immediately. The resulting 90-fs pulse zips down the circuit. A probe beam, a 1600-nm flash from the same erbium-doped laser, shines onto the GaAs substrate. If the passing pulse has changed the GaAs’s index of refraction, the probe beam’s polarization changes accordingly. This can be measured and (with a little manipulation) reveals the shape, amplitude, phase, and direction of the pulse, as well as of any signals from internal reflections.

To See Deep Inside a Volcano, Use a Telescope Built for Quasars

How do you peer into the hidden heart of a volcano? Use a leading-edge telescope designed to track quasars and supernovae.

The next-generation Imaging Atmospheric Cherenkov Telescope, now under construction in Serra La Nave near Mount Etna in Sicily, is designed to track the highest of high-energy celestial events—objects spitting out cosmic gamma radiation, high-frequency photons with energies from 1 teraelectronvolt to 100 TeV. (100 TeV is 10¹⁴ electron volts; compare that to the 5 × 10³ to 150 × 10³ eV of a medical X-ray.)


New Supercomputer Ranking Shows China Still on Top

The TOP500 supercomputer ranking, published today, shows China’s Tianhe-2 supercomputer remaining at the top of the heap, with its 33.86 petaflops. The number-two spot also remains unchanged: Oak Ridge National Laboratory’s Titan supercomputer, which can run at 17.59 petaflops.

Indeed, there is little change in the top 10 positions, with only two new names appearing in that elite group: Trinity (number 6, managed and operated by Los Alamos and Sandia National Laboratories) and Hazel-Hen (number 8, at Höchstleistungsrechenzentrum Stuttgart). These systems were both installed in 2015, as was Saudi Arabia’s Shaheen II supercomputer, which ranks 9th. The seven other supercomputers in this grouping date from 2013 or earlier.

As you’d expect with such little turnover at the top of the list, the overall rate of growth in performance of the world’s top supercomputers has been slowing in recent years. But the summed performance of the world’s 500 fastest supercomputers is still up by 55 petaflops over the TOP500’s June 2015 ranking.

If there’s any take-home message coming through from today’s ranking, it’s the growing dominance of Chinese supercomputers on the world’s stage. China now has 109 supercomputers in the top 500, up from just 37 in July. At the same time, the U.S. share has dropped from 231 to 200. And the European component is also down, from 141 in July to 108 now.

If China ends up using its computing behemoths to significantly advance its industrial prowess, companies in other parts of the world might well worry about the trajectory evidenced in today’s rankings. But that, of course, is a big “if.”

Not much change was reported in the world's 10 most powerful supercomputers:

1. Tianhe-2, National Super Computer Center in Guangzhou (China)
2. Titan, DOE/SC/Oak Ridge National Laboratory (United States)
3. Sequoia, DOE/NNSA/Lawrence Livermore National Laboratory (United States)
4. K computer, RIKEN Advanced Institute for Computational Science (AICS) (Japan)
5. Mira, DOE/SC/Argonne National Laboratory (United States)
6. Trinity, DOE/NNSA/Los Alamos and Sandia National Laboratories (United States)
7. Piz Daint, Swiss National Supercomputing Centre (CSCS) (Switzerland)
8. Hazel Hen, HLRS (Höchstleistungsrechenzentrum Stuttgart) (Germany)
9. Shaheen II, King Abdullah University of Science and Technology (Saudi Arabia)
10. Stampede, Texas Advanced Computing Center/Univ. of Texas (United States)

Flexible Optogenetics Implants Hack the Sense of Pain

A stretchy electronic implant as small as your fingertip can control the feeling of pain in mice. That proof of concept could pave the way for future medical implants that hack the human nervous system and offer relief for people living with chronic pain.


Better Brain-Reading Software Compensates for Squishy Brains and Vacuum Cleaner Noise

What will it take for brain implants to become standard-issue tools for people who are paralyzed? When will they be able to use neural commands to type words or drive motorized wheelchairs?

Research published today in the journal Science Translational Medicine might point the way. Scientists and engineers who are part of the BrainGate project reveal that they have designed a better decoder to make sense of electric signals from the brain. Their crucial advance: software that compensates for the irregular nature of those neural signals.

The team is working to make their neural implant not only a functional mind-reading device, but also a practical one that paralyzed people could use in their homes. The system's electrodes are implanted in the motor cortex, where they pick up electric signals from neurons involved in issuing movement commands to the body. In experiments over the last decade, the project’s volunteers have imagined moving their paralyzed arms to control external devices like a robotic arm and a computer cursor.

In the newest set of experiments, the researchers showed off their improved decoding software that turns the brain’s electrical signals into commands. Previously, researchers had to stop their experiments frequently to recalibrate the software, because the electrical signals that the electrodes pick up can vary dramatically over the course of an hours-long session.

In prior sessions using the old software, the researchers would spend the first 10 to 30 minutes calibrating the system, essentially teaching it which neural signals translated into which movement commands.

“Then we’d let the participant use it for something practical for 30 minutes or maybe an hour, but then the signal would degrade,” explains Beata Jarosiewicz, lead author of the new paper and an assistant professor at Brown University.

The researchers would then have to make a decision: Should they spend another 10 to 30 minutes recalibrating the system, or call a halt? For BrainGate to become a practical home-use technology, clearly it can’t require users to stop what they’re doing every half hour for recalibration, says Jarosiewicz.

The electrical signals change during a session for two main reasons. Here’s the first reason in highly technical terms: “The brain is kinda squishy,” Jarosiewicz says. Neural tissue shifts slightly when people move their bodies and even as their hearts beat, so stiff electrodes implanted in the tissue come into contact with different brain cells, which are producing different electrical signals. “Even movements on the order of a few microns is enough to change the signal that we’re recording,” says Jarosiewicz.

Signal instability also stems from the environment in which recording takes place. The BrainGate team often conducts experiments in participants’ homes to see how their gear functions in real-world settings, so the system can pick up electromagnetic noise from nearby electronics. “Someone might turn on the vacuum cleaner in the other room,” Jarosiewicz says. Suddenly, a signal that used to indicate a certain cursor movement could be obscured.

The primary trick behind the improved decoding software: Each time the user pauses—say at the end of a sentence—the system recalibrates itself, matching the words and letters selected in the sentence to the set of neural recordings from that time span. With this technique, called “retrospective target inference,” it’s constantly relearning which signals translate into which commands. As the signals change, it adjusts accordingly.
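A toy illustration of that retrospective refit, using a hypothetical linear decoder and synthetic data (BrainGate's actual decoder is more sophisticated): after a block of use, each neural sample is labeled with the intent inferred after the fact, and the decoder weights are refit by least squares:

```python
# Sketch of retrospective recalibration: refit a linear neural decoder on a
# recent block of data whose target labels were inferred after the fact.
import numpy as np

rng = np.random.default_rng(1)

n_channels, n_samples = 16, 400
W_true = rng.normal(size=(2, n_channels))     # (drifted) true mapping: neurons -> 2D intent

X = rng.normal(size=(n_samples, n_channels))  # recorded neural features for the block
# Retrospectively inferred intended directions (with some labeling noise)
Y = X @ W_true.T + 0.1 * rng.normal(size=(n_samples, 2))

# Least-squares refit on the retrospectively labeled block
W_est, *_ = np.linalg.lstsq(X, Y, rcond=None)

# The refit decoder should closely recover the current mapping
err = np.abs(W_est.T - W_true).max()
print(f"max coefficient error: {err:.3f}")
```

Because the refit happens at every natural pause, the decoder keeps tracking the drifting signals without the explicit recalibration sessions described above.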

One participant with Lou Gehrig’s disease used this improved decoder with the typing interface, and showed that it provided good control over the course of six sessions spaced out over 42 days. Jarosiewicz says the next step is to use the decoder not just to control a cursor for the typing program, but to control a computer mouse. With that ability, users could control just about anything that’s connected to the internet. They could find autonomy thanks to the Internet of Things.

There’s still one big stumbling block before someone who’s locked-in can use the BrainGate system to communicate freely or operate robotic assistants. The current implant must be physically connected via cables to a computer, so a technician has to help the user get jacked in. But Jarosiewicz notes that another neural engineer at Brown is now working on a wireless system. “We want people to have the system available 24-7,” she says.   


Tech Talk

IEEE Spectrum’s general technology blog, featuring news, analysis, and opinions about engineering, consumer electronics, and technology and society, from the editorial staff and freelance contributors.
