Tech Talk


Seismic Tools to Map Earthly Microquakes and Martian Dust Devils

Two new papers demonstrate analytical tools that greatly enhance scientists’ ability to interpret low-amplitude seismic signals…though to very different ends. One shows how to efficiently tease signals of very low-energy events out of the overwhelming flood of data pouring out of seismometer stations around the world. The other shows how weather tilts the land below it, offering a tool for tracking small thermal vortices—dust devils—here and on Mars.

For most of us, seismology means the Big Ones: significant earthquakes and wild jumps of the seismograph. Those spikes are reflected numerically in ratios of short-term to long-term average ground motion (STA/LTA) that pass a threshold value.
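A minimal version of such a trigger can be sketched in a few lines (the window lengths and threshold here are illustrative, not any network's actual settings):

```python
import numpy as np

def sta_lta_trigger(signal, fs, sta_win=1.0, lta_win=30.0, threshold=4.0):
    """Flag samples where the ratio of short-term to long-term average
    signal energy (STA/LTA) exceeds a detection threshold."""
    energy = signal ** 2
    sta_n = int(sta_win * fs)   # short window, e.g. 1 s
    lta_n = int(lta_win * fs)   # long window, e.g. 30 s
    sta = np.convolve(energy, np.ones(sta_n) / sta_n, mode="same")
    lta = np.convolve(energy, np.ones(lta_n) / lta_n, mode="same")
    ratio = sta / np.maximum(lta, 1e-12)  # guard against division by zero
    return ratio > threshold

# A quiet trace with one sharp burst trips the detector at the burst.
fs = 100.0
trace = np.random.default_rng(0).normal(0, 0.1, 6000)
trace[3000:3100] += 5.0  # simulated event
print(sta_lta_trigger(trace, fs)[3050], sta_lta_trigger(trace, fs)[500])
```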

This doesn’t work quite as well for subtler events: those masked by noise, overlapping signals, long-developing signals, and the rumbles and thumps of human activity. Many seismic phenomena, such as the slipping of a particular fault at a particular place, recur over periods ranging from weeks to years, generating a characteristic seismic-wave signature each time. To find miniquakes that haven’t stepped over the detector threshold, seismologists use a template matching technique, reducing the known quake signature to a “correlation coefficient” and then searching the geological database for a match.
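The matched-filter idea behind template matching looks roughly like this in code (a simplified sketch; the waveform and threshold are made up):

```python
import numpy as np

def match_template(trace, template, threshold=0.8):
    """Slide a known event waveform along a continuous trace and return
    the offsets where the normalized cross-correlation (the
    'correlation coefficient') exceeds a detection threshold."""
    n = len(template)
    t = (template - template.mean()) / (template.std() * n)
    hits = []
    for i in range(len(trace) - n + 1):
        w = trace[i:i + n]
        s = w.std()
        if s == 0:
            continue  # dead stretch of trace: skip to avoid dividing by zero
        cc = np.dot(t, (w - w.mean()) / s)  # Pearson correlation coefficient
        if cc > threshold:
            hits.append((i, cc))
    return hits

# Bury a half-amplitude copy of the template in noise and recover it.
rng = np.random.default_rng(1)
template = np.sin(np.linspace(0, 20, 200)) * np.hanning(200)
trace = rng.normal(0, 0.05, 2000)
trace[700:900] += 0.5 * template
print(match_template(trace, template)[0][0])  # an offset at or near 700
```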

Template matching, however, requires a signature for a known event. Finding unknown leitmotifs in the flood of seismic data is more challenging. In its most extreme form, it requires comparing every 10-second snippet of data to every other 10-second snippet of data in the terabyte deluge from the world’s seismograph stations (totaling some 21,899, according to the International Seismological Centre registry).

That fine-toothed-comb matching, called autocorrelation, has until now been the most accurate way of doing these comparisons, but it consumes a great deal of time and computing power. In a test, an autocorrelation analysis of just one week’s worth of data from a single station near San Jose, Calif., took 9 days and 13 hours.


To speed things up, researchers at Stanford University’s Department of Geophysics and Institute of Computational and Mathematical Engineering have taken a cue from apps like Shazam, which help users identify snatches of melody that they just can’t put a name to. These programs identify the main characteristics of a complex wave spectrum, compress them, and represent the result as an ordered series of values—a vector “fingerprint.” The fingerprint is used to assign the snippet to a particular bin (locality sensitive hashing). This operation is repeated until the entire data set has been fingerprinted and categorized. The analysts then compare each snippet to all of the other snippets in that bin; when the fingerprints agree and the source waveforms match, the two events can be identified as originating from the same sort of tectonic movement.

Lead Stanford author Clara Yoon and her colleagues built their Fingerprint and Similarity Thresholding (FAST) system not on Shazam’s analytical engine, but on Google’s WavePrint fingerprinting package (originally developed for processing images). The team started with the same week-long data file they used for the autocorrelation test. This time, they filtered the signal to concentrate on the 4-to-10 Hertz band (the earthquake’s voice range) and compressed it from 100 samples per second to 20. They then broke the whole dataset into tens of thousands of overlapping 10-second segments, reduced each segment to a 4096-bit binary fingerprint, and assigned it to a bin.
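A toy version of the pipeline conveys the idea (the real FAST fingerprints come from wavelet transforms of spectrogram images and are binned with MinHash-style locality-sensitive hashing; the spectral fingerprint below is a much cruder stand-in):

```python
import numpy as np
from collections import defaultdict

def fingerprint(segment, n_bits=64):
    """Compress a waveform segment into a short binary fingerprint by
    keeping only the signs of the slopes of its magnitude spectrum."""
    spec = np.abs(np.fft.rfft(segment))[: n_bits + 1]
    return tuple((np.diff(spec) > 0).astype(int))

def bin_segments(trace, fs, seg_sec=10.0, hop_sec=5.0):
    """Cut the trace into overlapping segments and hash each fingerprint
    into a bucket; repeating waveforms collide in the same bucket, so the
    expensive waveform comparison only runs within buckets."""
    seg_n, hop_n = int(seg_sec * fs), int(hop_sec * fs)
    buckets = defaultdict(list)
    for start in range(0, len(trace) - seg_n + 1, hop_n):
        fp = fingerprint(trace[start : start + seg_n])
        buckets[hash(fp)].append(start)
    return buckets

# Two identical copies of an event land in the same bucket.
fs = 20.0  # samples per second, matching the decimated FAST rate
trace = np.zeros(3000)
event = np.sin(np.linspace(0, 30, 200))
trace[0:200] = event
trace[1000:1200] = event
buckets = bin_segments(trace, fs)
print(buckets[hash(fingerprint(event))])  # [0, 1000]
```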

Confining the final time-consuming autocorrelation comparison to a few closely related segments in the same bin slashes computer run-time. The job that had required 9.5 days of autocorrelation took just 96 minutes of FAST processing. The two methods detected about the same number of events (86 and 89, respectively), and each found 43 previously unreported events. FAST did have a weakness: autocorrelation correctly identified all 24 of the previously reported events, while FAST missed 3. Then again, FAST located 25 events that standard template matching missed (versus 19 for autocorrelation).

FAST’s run-time advantage becomes more marked as the amount of data increases. Analyzing 6 months of seismometer data would take FAST a couple of days. Autocorrelating the same data would eat up about 20 years.

The Dust Devil is in the Details

Elsewhere, an international team has figured out how to use seismometer data to identify and track dust devils, the swirling vortices that seem to come out of nowhere to lift sand and debris into a wild dance. They look like small tornadoes, but they are, in fact, a sort of anti-tornado. Tornadoes draw their energy from a storm system, and are usually associated with thunder, clouds, and rain. Dust devils are born from hot air rising from the surface, in weather that is clear, blazing, and dry.

But, like tornadoes, dust devils form around a core of low pressure. Besides sucking in surrounding air to make a spinning funnel, that pressure drop also reduces the load on the ground beneath. The ground responds by rising up, creating a detectable tilt that extends out beyond the wall of the vortex.

A group of researchers (from the Johns Hopkins University Applied Physics Lab in Laurel, Md.; the Jet Propulsion Laboratory in Pasadena, Calif.; the Institut Supérieur de l'Aéronautique et de l'Espace in Toulouse, France; and the Institut de Physique du Globe de Paris) found that the 1-to-2-millibar pressure drop in a typical 5-meter-diameter dust devil would reduce the downward force on the ground by the equivalent of 810 kilograms (just a little less than the curb weight of a Mitsubishi Mirage, cited by Motor Trend as perhaps the lightest vehicle now sold in the U.S.). The load reduction for a really big dust devil, on the other hand, could amount to some 300 tons. Their work is reported in the Bulletin of the Seismological Society of America (with open access on arXiv).
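A back-of-the-envelope version of that load calculation, treating the pressure deficit as uniform over the devil's footprint (the paper integrates a radial pressure profile extending beyond the funnel wall, which is why its published figures come out larger), shows why the Earth and Mars cases discussed below are comparable:

```python
import math

def load_reduction_newtons(dp_mbar, diameter_m):
    """Naive estimate of the downward force a dust devil's pressure
    deficit removes from the ground: the deficit (converted to pascals)
    times the area of the devil's footprint."""
    dp_pa = dp_mbar * 100.0                    # 1 mbar = 100 Pa
    area = math.pi * (diameter_m / 2.0) ** 2   # core footprint, m^2
    return dp_pa * area

earth = load_reduction_newtons(1.5, 5.0)   # typical 1-to-2 mbar, 5-m devil
mars = load_reduction_newtons(0.1, 15.0)   # weaker drop, wider Martian devil
print(round(earth), round(mars))  # both a few thousand newtons
```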

In an experiment on a dry lake bed in the California desert (on a site operated by JPL and “within sight of the 70-meter Deep Space Network antenna”), Ralph Lorenz of Johns Hopkins and his collaborators confirmed their calculations using a network of eight pressure loggers set up along the 60-meter arms of a cross centered on an existing seismic station. They matched the seismic and barometric data, and found that the seismograph did indeed record the earth tilting up toward the center of a dust devil. The degree of tilt, moreover, corresponded to the size and distance of the vortex…and the seismograph traces of acceleration in the north-south and east-west directions showed the direction of the wind’s closest approach to the center of the cross.

The objective of this research is directed not so much at happenings on Earth as at those on Mars, where dust devils can often be seen crossing the red deserts in caravans. (See Lorenz’s blog post on Martian dust devils on the Planetary Society website.) In Mars’s thinner atmosphere, a 0.1-millibar pressure drop in a 15-meter-wide dust devil would produce about the same Mitsubishi Mirage-sized decrease in ground force as a typical California devil. Thirty meters away, the ground would tilt by about 5 × 10⁻⁹ radians, enough to register on the seismometers planned for NASA’s Mars InSight (Interior Exploration using Seismic Investigations, Geodesy and Heat Transport) mission, scheduled to launch in March 2016.


Camera That Tracks Hidden Moving Objects Could Aid Rescue Missions and Avoid Vehicle Collisions

A new camera system can locate objects hidden around a corner and track their movements with centimeter precision. The camera, which captures images outside its field of view in real time—which is no mean feat—could be used to look for survivors in search-and-rescue scenarios and in vehicle collision avoidance systems.

Cameras that can see stationary objects through walls mostly rely on radar-based imaging technology. But radar systems are typically big, expensive, low-resolution and limited to short distances.

So researchers have switched to other parts of the spectrum. Earlier this year, an MIT team reported a low-cost, high-resolution camera that uses low-power microwaves to create 3-D images of objects hidden behind walls. Optical techniques based on lidar (laser illuminated detection and ranging) have also been used to image hidden objects. But both the microwave and lidar techniques take an hour or more. “That’s not going to work if you want to know whether a car is coming around the corner,” says Genevieve Gariepy, a physics doctoral student at Heriot-Watt University in Edinburgh, UK.

Gariepy, Daniele Faccio, and their colleagues designed a new system that is similar to lidar, but much faster and more sensitive. It can precisely detect a hidden object’s position in just a few seconds, “so we can track the object’s motion in real time,” she says.


How Supercomputing Can Survive Beyond Moore's Law

Today’s technology makes a 1-exaflop supercomputer, capable of performing a million trillion floating-point operations per second, almost inevitable. But pushing supercomputing beyond that point to 10 exaflops or more will require major changes in both computing technologies and computer architectures.

A clock displays 60 seconds during a leap second event

The Eight-Year Leap Second Delay Might Not Be As Bad As It Seems

After I posted a curtain-raiser about the debate over the fate of the leap second at the World Radiocommunication Conference in Geneva last month, I settled in for a wait. 

The leap second, if you haven’t come across it before, is the stray second that is added intermittently to atomic-clock–based Coordinated Universal Time (UTC) to keep it in sync with the unsteady rotation of the Earth.

The question of whether to keep or drop the leap second from UTC has a long and contentious history, and several people I interviewed said they expected negotiations to last through most of the four-week-long meeting. 

Instead, “everything was really settled at the end of the second week,” says Vincent Meens of France’s National Center for Space Studies. And the decision was to delay the decision: the question was placed on hold until the 2023 World Radiocommunication Conference, which will be the meeting after the next WRC meeting.

That might sound like kicking the proverbial can down the road—and especially bad news for those who think that adding leap seconds threatens modern networks and systems. But the eight-year delay might not be as bad as it sounds. Had the leap second been dropped this year, there would likely have been a grace period to allow systems to adjust to the new order; the proposal submitted this year by the Inter-American Telecommunication Commission, for example, would have waited until 2022 to make the change to UTC active.

Meens expects that if a decision is made to eliminate the leap second in 2023, it would be accompanied by swift action. “The idea is not to wait. So if it’s decided [to eliminate the leap second] it should be right when the new radio regulation is put into force. The new time scale would be in the beginning of 2024,” Meens says. So what looks like an eight-year delay right now might only wind up being a couple of years.

Of course, that outcome will likely depend on what’s done in the meantime (i.e., a good amount of consensus-building and legwork). There is a long list of organizations (see paragraph five in that link) that are expected to take part in studies leading up to WRC-23. And in the midst of all that, Nature’s Elizabeth Gibney reports, responsibility for the definition of UTC will be shifting away from the International Telecommunication Union and toward the international body that already manages International Atomic Time as well as the SI units of measure. She says the change in responsibility is unlikely to accelerate the decision.

In fact, says Brian Patten of the U.S. National Telecommunications and Information Administration, the International Telecommunication Union can’t make the change by itself. “The ITU cannot alone make a decision about leap seconds,” he says, as the organization is responsible for distributing the time scale, not making it. As for a speedy resolution in 2023, Patten says it’s too early to call: “We will have to see what happens in the joint work and discussions,” he says. “We can’t speculate on what the outcome will be when a report is delivered to WRC-23 on the status of the work.”

Although Meens predicts swift implementation if the leap second is eliminated, he can’t predict which way the decision will go. He’s had a role for years in international deliberations over the leap second, but even he was surprised by the outcome of this meeting. “I thought this was going to go until the end of the conference,” Meens says. “This was a particular subject where it was hard to find gray between white and black.”

He theorizes the decision to delay might have come about in part because the international participants of the WRC wanted to focus on other difficult subjects—in particular, the allocation of radio-frequency bands for mobile devices. It’s hard to imagine we won’t be demanding even more spectrum in eight years’ time. But perhaps it will be less of a distraction the next time around.

The Final Acts (pdf) of the conference are now available (the UTC decision is in RESOLUTION COM5/1).


A Blast of Plasma Makes Plants Grow Faster

According to a United Nations forecast, demand for food and animal feed will nearly double over the next 25 years as the world adds another two billion people. Scientists at Kyushu University in southwest Japan think what will help save us is the fourth state of matter—plasma.

Researchers around the world are investigating how blasting seeds with ionized gas can help boost plant growth. Now the Kyushu research team has developed a much simpler plasma technology that it claims can both significantly increase crop yields and shorten harvest time. Kazunori Koga, a plasma engineer and associate professor at Kyushu University, described the technique at the American Vacuum Society’s 62nd International Symposium & Exhibition in San Jose, Calif., last month.


Mobile Phone Data Predicts Poverty in Rwanda

A picture of how wealthy or poor people are can be reconstructed from anonymized data generated by mobile phones, according to researchers analyzing cell phone data from Rwanda.

Personal information that mobile devices gather, such as a person's location, often gets anonymized by stripping it of names, home addresses, phone numbers, and other obvious identifying details. Such metadata is often shared and underlies popular services such as Google's real-time monitoring of road traffic.

However, anonymized data can still divulge a great deal about individuals, suggesting that the process does not protect privacy as well as often thought. For instance, anonymized credit card data can easily be used to identify credit card users, and analyzing the movements of your social contacts can help generate a relatively complete picture of your movements.


Ultrasound Microscopy Helps Image Tiny Blood Vessels

Researchers say super-resolution imaging has helped them capture ultrasound pictures of microscopic blood vessels in the brain of a live rat.

Such research could one day help investigate diseases that modify blood vessels, such as cancer, stroke and thickening of artery walls in the heart and elsewhere, scientists add.

Current techniques for imaging microscopic blood vessels in living organisms are limited by how deep they can penetrate into tissues, the speed with which they can take pictures, and the resolution of the images they can capture. Although conventional medical ultrasound can image both deeply and quickly, it has, at best, offered a resolution of several hundred micrometers. Because waves diffract, or spread out, as they travel, radiation such as ultrasound cannot directly image features smaller than half its wavelength.
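That half-wavelength limit is easy to put into numbers, assuming the standard soft-tissue sound speed of about 1,540 meters per second:

```python
def diffraction_limit_um(freq_hz, sound_speed_m_s=1540.0):
    """Half-wavelength resolution limit for ultrasound in soft tissue."""
    wavelength_m = sound_speed_m_s / freq_hz
    return wavelength_m / 2.0 * 1e6  # in micrometers

# A 5-MHz clinical probe resolves nothing finer than about 150 um,
# consistent with the few-hundred-micrometer figure above.
print(round(diffraction_limit_um(5e6)))  # 154
```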


Measuring Tiny Magnetic Fields With an Intelligent Quantum Sensor

We know that electrons have spin. And researchers think that the spin of single electrons trapped in nitrogen vacancy centers in diamond might be used to store qubits in future quantum computers. They’ll use light and microwave pulses to control the electron’s spin—either up or down—and read it out. However, because spin is a quantum property, as soon as any measurement is made to determine the strength of the magnetic field, the spin collapses from a superposition into one of the two possible spin states. But by repeating the measurement multiple times and looking at the distribution of up and down spins, it is possible to estimate the magnetic field strength using statistics.

Researchers at Delft University of Technology in the Netherlands and Macquarie University in Sydney, Australia, have developed a method that boosts the sensitivity of the repeated measurements by using a feedback loop. After each spin determination, it adjusts the measurement settings for the next measurement, resulting in a sensitivity that is 100 times as high as that of earlier experiments. The researchers published the design of an “intelligent” quantum sensor controlled by a microprocessor in last week's Nature Nanotechnology.

If an electron is brought into a magnetic field, it undergoes a Zeeman interaction, says Machiel Blok, a physicist at Delft University of Technology who participated in the research. Blok explains that it’s a phenomenon similar to the splitting of spectral lines observed in the sun's atmosphere that is caused by the sun's magnetic field. He adds:

To measure this interaction, we use a technique called Ramsey interferometry, where we first prepare with a microwave pulse a superposition between the two spin levels. The energy difference between these spin levels depends on the static magnetic field that is present. This can be read out by the spin—that is, how much is in one state, and how much is in the other state.  This we do optically; we can get a resonant excitation of the spin, depending on the spin state. As a result, we get fluorescence if it was in one spin state, and no fluorescence if it was in the other spin state. 

The experiment is repeated multiple times with a preset series of different sensing times whereby the electron is coaxed into a different quantum state each time and allowed to interact with the magnetic field. The magnetic field affects the proportion of these two states, which the researchers can indicate with a series of ones and zeros. The ratio of zeros to ones is an indication of the strength of the magnetic field. 
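In code, that statistical inversion looks roughly like the sketch below. The gyromagnetic ratio is the standard electron value; the field strength, sensing time, and shot count are made-up illustrations, and the inversion is only unambiguous within one fringe of the Ramsey oscillation:

```python
import numpy as np

rng = np.random.default_rng(42)
GAMMA = 28.0e9  # Hz per tesla, approximate electron gyromagnetic ratio

def ramsey_prob_up(b_tesla, tau_s):
    """After a Ramsey sequence of sensing time tau, the spin has precessed
    by phase 2*pi*gamma*B*tau; the chance of reading 'up' oscillates
    with that phase."""
    return 0.5 * (1.0 + np.cos(2 * np.pi * GAMMA * b_tesla * tau_s))

def estimate_field(b_true, tau_s=5e-6, n_shots=10_000):
    """Repeat the projective single-spin measurement many times and invert
    the observed up-fraction to estimate B."""
    ups = rng.random(n_shots) < ramsey_prob_up(b_true, tau_s)
    phase = np.arccos(2.0 * ups.mean() - 1.0)   # in [0, pi]
    return phase / (2 * np.pi * GAMMA * tau_s)

print(estimate_field(1.0e-6))  # close to the true 1e-6 T
```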

Up to that point, the methodology of the Delft researchers is no different than earlier research using spins to measure magnetic field strength. But where the methods diverge is that, with each subsequent readout of the electron’s quantum state, the measurement is further refined using Bayesian statistics based on the spin readouts that came before. “By looking at the outcome of each experiment, using Bayesian statistics, you can tune the next step of the experiment, which allows the experimental setup to focus quickly on a better estimate of the magnetic field,” says Blok.

Blok and his collaborators used a commercially available programmable microprocessor that collects the measurement results in real time and performs the Bayesian update itself. “We programmed the chip so that it can do this efficiently, and this increased the accuracy of the measurement substantially,” says Blok.

Although very weak magnetic fields can be measured with SQUIDs, using electron spins has an interesting advantage: spatial resolution. The spatial resolution of SQUIDs, Blok explains, is limited by the size of the loop, which is in the order of a micrometer or even larger.  “Our single-spin sensors, since they are an atomic defect, can, in principle, be of nanometer or subnanometer resolution,” says Blok. To put that size in context, diamond nanocrystals containing nitrogen vacancy centers can be introduced in living single cells, says Blok.

The Windrider supercomputer

Despite Its Status as a Chip Powerhouse, Taiwan Neglects Supercomputing

A quick glance at the new ranking of top supercomputers reveals a surprising showing by one of the world’s technological powerhouses: Taiwan does not possess a single machine powerful enough to make the list. While many nations don’t make the list, Taiwan is peculiar in that it has such an outsized grip on the computer chip industry. What’s more, its political rival, China, not only holds the world’s top machine, it now has more ranking supercomputers than any nation except the United States.

It has been a long decline. Taiwan’s most powerful supercomputer, the Advanced Large-scale Parallel Supercluster, also known as ALPS or Windrider, ranked 42nd in June 2011, shortly after its launch.

But the process of upgrading Taiwan’s supercomputing infrastructure has been slowed by ineffective government budget allocation. Since 2013, the National Center for High-performance Computing (NCHC), which operates Windrider, has twice failed to get its budget boosted enough to strengthen its supercomputing ability. While other countries poured money into the installation of powerful supercomputers as a way to show national power, Windrider fell to 303rd in June 2014 and then to 445th in June 2015.

“If our three-year budget proposal is approved early next year, Taiwan would gain a much better position on the Top 500 in 2018, when a 2 petaflops system is launched,” says Jyun-Hwei Tsai, Deputy Director General of NCHC. If such a system were launched today, it would rank 36th.

The Ministry of Science and Technology says it understands the importance of supercomputing and prioritized it in its budget proposal, as it did in 2013 and 2014. However, the final decision is up to the Cabinet.

Cabinet spokesman Lih-chyun Sun says the government fully understands the importance of supercomputing and points out that Taiwan has promoted cloud computing and big-data projects. “It remains uncertain when sufficient budget would be made available for new systems. We’re still reviewing the budget proposal. The decision has not yet been made,” Sun says.

“The Cabinet will make a final decision early next year,” adds Tzong-chyuan Chen, Director General of the Department of Foresight and Innovation under the ministry. “In economic recession years, it’s difficult to gain budget for important science and technology projects with long-term impacts, which are not yet felt.”

It wasn't always like this. In June 2002, an IBM system at the NCHC center ranked 60th. In June 2007, the center’s newest system, called IRIS, ranked 35th. 

Taiwan’s IRIS, however, was eventually kicked out of the Top 500 list in November 2009 due to a boom in supercomputer installations in many other countries, such as China. The world’s most powerful system, China’s Tianhe-2, or Milky Way-2, has held the top of the biannual Top 500 list six times in a row. And it is one of 109 systems in China that made the list. The huge increase in China's supercomputing power in recent years can be attributed in part to some government-backed companies, such as Sugon and Inspur, which together manufactured 64 of the ranked systems.

According to NCHC’s Tsai, the big strides taken by other countries are a sore point in Taiwan. “We don’t compare ourselves with big countries, such as China, Japan, and the United States. What frustrates us more is that, in South Korea, the momentum of national supercomputing is now stronger than ours,” he says. Currently, South Korea’s two fastest systems rank 29th and 30th.

It’s not as if there isn’t much demand for supercomputing in Taiwan. Currently, Taiwan’s Windrider utilization exceeds 80 percent.  “It’s like a crowded superhighway. And we’ve heard complaints from some users,” NCHC’s Tsai says.

According to Tsai, Windrider is most heavily used in basic physics, chemistry, and biomedical imaging. But certain key fields get prioritized access. Those include environmental studies, climate change, earth science, natural disasters, and water resources management.

“Taiwan is prone to natural disasters, such as typhoons, floods and earthquakes. A powerful database, backed by powerful supercomputing systems, is essential for conducting better predictions of typhoons,” Tsai says.

Because of the limits of Taiwan's supercomputing capability, some scientists have taken to building their own computer clusters and speeding up existing resources with graphics-processing-unit-based accelerators.

Tzihong Chiueh, a theoretical astrophysicist at National Taiwan University, says his team has not relied on NCHC’s system for years. Since 2013, it has been taking advantage of a self-built system that can reach tens of teraflops.

“A petaflop-scale system, should it be funded by the government, would certainly be useful to researchers,” he says. “The investment should indeed be prioritized. I hope it can work at least 10 times faster than the current system.”

This story was corrected on 17 December. Windrider was the 303rd ranked supercomputer in June 2014.


Testing Einstein's Theories With Satellites Stuck in Eccentric Orbits

In August of last year, when ESA launched its fifth and sixth Galileo navigation satellites, things went wrong. Because of a fault in the upper stage, both spacecraft ended up in elongated elliptical orbits instead of circular ones, making them unusable for navigation. Subsequent corrections of their orbits restored their function as navigation satellites, but their orbits still remained highly elliptical, with a difference of about 8,000 km between their closest and most distant points from Earth.

To the satellite navigation engineers, this was a nuisance requiring changes in the software and the technology. But for physicists, the eccentric orbits offered an unexpected opportunity. Researchers at both Systèmes de Référence Temps Espace, or SYRTE (a department of Paris Observatory), and ZARM (the Center of Applied Space Technology and Microgravity) at the University of Bremen, Germany, convinced ESA to use the satellites to test more extensively an effect predicted by Einstein's general relativity. They hope to learn more about the extent to which time speeds up as the gravitational field weakens with distance from Earth.

This effect, also called gravitational redshift, or time dilation, has been observed before; correcting timing signals transmitted from navigation satellites because their clocks run at a slightly different rate than those on the Earth's surface is a matter of routine. And the relationship between this time shift and the distance from Earth was tested in 1976 with a one-shot experiment, Gravity Probe A, which reached a height of 10,000 kilometers. Using a two-way microwave link between the ground station and Gravity Probe A, researchers directly compared the speed of the maser clock aboard the spacecraft with that on the ground, and confirmed the predicted shift with an accuracy of 140 parts per million.

Maser clocks are large and complicated instruments that measure time with extreme precision, and they are carried only by navigation satellites. Because their precision allows accurate measurement of the relativistic shift, the researchers at SYRTE and ZARM (ZARM had even proposed building a dedicated satellite in the past) jumped at the opportunity. “There are not many opportunities where you have a good clock on an eccentric orbit in space,” says Sven Herrmann, a physicist at ZARM. “This is a combination that happened by chance; it was bad luck but also good luck.”

Unlike Gravity Probe A, the Galileo satellites don’t have a microwave link that allows direct access to the maser clock frequency. However, the researchers will be able to use the existing spacecraft-to-ground communication infrastructure, including the GNSS signals themselves, to perform the test.

ESA decided to go ahead with the test after a workshop in February 2015. The data taking, which will last for a year, will start in 2016.  “An important consideration for this decision was that the general relativity tests we are going to perform may be done in a transparent way, without any interference in the nominal operations of the satellites,” says ESA Global Navigation Satellite Systems Senior Advisor Xavier Ventura Traveset.

The Galileo satellites continuously send messages about their position and the time on their clock, explains Herrmann.

We will reconstruct the data, the clock frequency, from the measured travel times. The satellite transmits a time stamp when the message leaves the satellite, and the receiver on the ground also produces a time stamp, and you calculate the travel time. If you know the position of the satellite, you can reconstruct the behavior of the clock on board the satellite. 

Precise accounting of the distance of the satellites is crucial to the determination of the maser frequencies. The researchers plan to use optical laser ranging, bouncing laser beams off retroreflectors mounted on the satellites, to get an accurate measurement.
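The bookkeeping Herrmann describes, and the size of the relativistic signal it is chasing, can be sketched in a few lines (the orbital radii below are illustrative stand-ins roughly 8,000 kilometers apart, not the satellites' actual elements, and velocity effects are left out):

```python
GM = 3.986004418e14   # m^3/s^2, Earth's gravitational parameter
C = 299_792_458.0     # m/s, speed of light

def clock_offset_s(t_transmit, t_receive, range_m):
    """One-way timing: the broadcast time stamp plus the light travel time
    should equal the receive stamp; the residual is the satellite clock's
    offset from ground time."""
    return t_receive - t_transmit - range_m / C

def redshift_modulation(r_perigee_m, r_apogee_m):
    """Fractional frequency change of an orbiting clock between perigee and
    apogee from gravity alone: delta(U)/c^2."""
    return GM / C**2 * (1.0 / r_perigee_m - 1.0 / r_apogee_m)

# With radii about 8,000 km apart, the clock rate swings by a few
# parts in 10^11 between the low and high points of each orbit.
print(f"{redshift_modulation(24.0e6, 32.0e6):.2e}")
```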

The researchers expect their measurement of the relativistic slowing of time to be four times as accurate as the Gravity Probe A results. The large number of repeated measurements, as compared with Gravity Probe A’s single measurement, will give the researchers a wealth of numbers to analyze. “We have this large modulation in gravitation now, with 8,000 kilometers change twice per day...that will help us with the statistics,” concludes Herrmann.


Tech Talk

IEEE Spectrum’s general technology blog, featuring news, analysis, and opinions about engineering, consumer electronics, and technology and society, from the editorial staff and freelance contributors.
