It is easy to track how much the U.S. federal government invests in research every year. Tracking the output of that investment is a whole lot harder.
A research team led by economists at the National Bureau of Economic Research has now tried to measure the impact of public and private research investments by looking at the job outcomes of funded PhD recipients. They found that nearly 40% of federally and non-federally funded PhDs went to work in industry. These graduates were more likely to land high-paying jobs, to work in high-tech and professional service industries, and to join firms that perform R&D.
All of these are characteristics correlated with higher productivity, the researchers say in a paper published in the December 10 issue of the journal Science. This shows that doctoral recipients transfer their knowledge out to the economic marketplace, boosting local and national economic growth.
Digital doppelgängers of anyone can be generated from hundreds of images of them collected from the Internet, from celebrities such as Tom Hanks and Arnold Schwarzenegger to family members and historical figures, say researchers at the University of Washington in Seattle.
Their work suggests that creating 3-D digital models of people based on anything from family photo albums and videos to historical collections could one day be possible. Such models could be controlled like puppets and made to do and say anything.
In fact, it could be relatively easy to create such models of “anyone in the future, when there are tons of digital photos,” says study lead author Supasorn Suwajanakorn, a computer scientist at the University of Washington. “Photos on Facebook, for some users, are already enough to reconstruct their controllable models.” Suwajanakorn and his colleagues Steven Seitz and Ira Kemelmacher-Shlizerman will detail their findings on 16 December at the International Conference on Computer Vision in Chile. Their research was funded by Samsung, Google, Intel and the University of Washington.
Two new papers demonstrate analytical tools that greatly enhance scientists’ ability to interpret low-amplitude seismic signals…though to very different ends. One shows how to efficiently tease signals of very low-energy events out of the overwhelming flood of data pouring out of seismometric stations around the world. The other shows how weather tilts the land below it, offering a tool for tracking small thermal vortices—dust devils—here and on Mars.
For most of us, seismology means the Big Ones, significant earthquakes and wild jumps of the seismograph. Those spikes are reflected numerically in the ratios of short-term and long-term average ground motions (STA/LTA) that pass a threshold value.
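In code, the classic trigger is only a few lines. Here is a minimal STA/LTA sketch in Python; the window lengths and threshold are illustrative, not values from any particular network:

```python
import numpy as np

def sta_lta(x, n_sta, n_lta):
    # Ratio of short-term to long-term average signal energy, with both
    # averaging windows ending at the same sample.
    energy = x ** 2
    csum = np.concatenate(([0.0], np.cumsum(energy)))
    sta = (csum[n_sta:] - csum[:-n_sta]) / n_sta
    lta = (csum[n_lta:] - csum[:-n_lta]) / n_lta
    m = min(sta.size, lta.size)
    return sta[-m:] / (lta[-m:] + 1e-12)  # guard against division by zero
```

A quake-like burst riding on background noise drives the ratio far past a typical trigger threshold of 2 to 5, which is how the Big Ones announce themselves.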
This doesn’t work quite as well for subtler events: those masked by noise, overlapping signals, long-developing signals, and the rumbles and thumps of human activity. Many seismic phenomena, such as the slipping of a particular fault at a particular place, recur over periods ranging from weeks to years, generating a characteristic seismic-wave signature each time. To find miniquakes that haven’t stepped over the detector threshold, seismologists use a template matching technique, reducing the known quake signature to a “correlation coefficient” and then searching the geological database for a match.
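Sketched in Python, template matching amounts to a normalized cross-correlation scan of the continuous record; the 0.8 detection threshold here is illustrative:

```python
import numpy as np

def correlation_coefficient(template, window):
    # Normalized cross-correlation: 1.0 is a perfect waveform match.
    t = template - template.mean()
    w = window - window.mean()
    return float(t @ w / (np.linalg.norm(t) * np.linalg.norm(w)))

def template_match(template, stream, threshold=0.8):
    # Slide the known quake signature along the record and flag matches.
    n = template.size
    return [i for i in range(stream.size - n + 1)
            if correlation_coefficient(template, stream[i:i + n]) >= threshold]
```

Because the coefficient is normalized, even a miniquake much weaker than the template still scores close to 1.0 if its waveform has the same shape.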
Template matching, however, requires a signature for a known event. Finding unknown leitmotifs in the flood of seismic data is more challenging. In its most extreme form, it requires comparing every 10-second snippet of data to every other 10-second snippet of data in the terabyte deluge from the world’s seismograph stations (totaling some 21,899, according to the International Seismological Centre registry).
That fine-toothed-comb matching, called autocorrelation, has until now been the most accurate way of doing these comparisons, but it consumes a great deal of time and computing power. In a test, an autocorrelation analysis of just one week’s worth of data from a single station near San Jose, Calif., took 9 days and 13 hours.
To speed things up, researchers at Stanford University’s Department of Geophysics and Institute of Computational and Mathematical Engineering have taken a cue from apps like Shazam, which help users identify snatches of melody that they just can’t put a name to. These programs identify the main characteristics of a complex wave spectrum, compress them, and represent the result as an ordered series of values—a vector “fingerprint.” The fingerprint is used to assign the snippet to a particular bin (locality sensitive hashing). This operation is repeated until the entire data set has been fingerprinted and categorized. The analysts then compare each snippet to all of the other snippets in that bin; when the fingerprints agree and the source wave-forms match, the two events can be identified as originating from the same sort of tectonic movement.
Lead Stanford author Clara Yoon and her colleagues built their Fingerprint and Similarity Thresholding (FAST) system not on Shazam’s analytical engine, but on Google’s WavePrint fingerprinting package (originally developed for processing images). The team started with the same week-long data file they used for the autocorrelation test. This time, they filtered the signal to concentrate on the 4-to-10 Hertz band (the earthquake’s voice range) and compressed it from 100 samples per second to 20. They then broke the whole dataset into tens of thousands of overlapping 10-second segments, reduced each segment to a 4096-bit binary fingerprint, and assigned it to a bin.
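The pipeline can be caricatured in a few lines of Python. This sketch substitutes random-projection sign bits for FAST’s wavelet-based fingerprints and uses a single slice of bits as the hash key; the sizes are toy values:

```python
import numpy as np

rng = np.random.default_rng(1)

fs = 20                  # samples per second after downsampling
win = 10 * fs            # 10-second windows
hop = win // 2           # overlapping segments

# Random hyperplanes: the sign of each projection yields one fingerprint bit.
n_bits = 64              # toy size; FAST used 4096-bit fingerprints
planes = rng.standard_normal((n_bits, win // 2 + 1))

def fingerprint(segment):
    spec = np.abs(np.fft.rfft(segment))      # coarse spectral features
    return (planes @ spec > 0).astype(np.uint8)

# Locality-sensitive hashing: a slice of fingerprint bits picks the bin,
# so near-identical waveforms land in the same bucket.
data = rng.standard_normal(20 * win)         # stand-in for continuous data
bins = {}
for start in range(0, data.size - win + 1, hop):
    fp = fingerprint(data[start:start + win])
    bins.setdefault(fp[:16].tobytes(), []).append(start)
```

The expensive waveform comparison then runs only within a bucket, never across the whole archive.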
Confining the final time-consuming autocorrelation comparison to a few closely related segments in the same bin slashes computer run-time. The same analysis that required 9.5 days of autocorrelation analysis took 96 minutes of FAST processing. Both the autocorrelation and the FAST analyses detected about the same number of events (86 and 89, respectively), and each of them found 43 previously unreported events. FAST did have a weakness: autocorrelation correctly identified all 24 of the previously reported events, while FAST missed 3. Then again, FAST located 25 events that standard template matching missed (versus 19 for autocorrelation).
FAST’s run-time advantage becomes more marked as the amount of data increases. Analyzing 6 months of seismometer data would take FAST a couple of days. Autocorrelating the same data would eat up about 20 years.
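That 20-year figure squares with simple scaling arithmetic: autocorrelation compares every window with every other window, so run-time grows roughly with the square of the data length.

```python
# One week of data took about 9 days 13 hours of autocorrelation run-time.
week_runtime_days = 9.54
weeks = 26                                       # roughly 6 months of data
autocorr_days = week_runtime_days * weeks ** 2   # quadratic scaling
autocorr_years = autocorr_days / 365.25          # ≈ 18 years
```

FAST’s binning sidesteps the quadratic blowup, which is why its advantage widens as the dataset grows.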
The Dust Devil is in the Details
Elsewhere, an international team has figured out how to use seismometer data to identify and track dust devils, the swirling vortices that seem to come out of nowhere to lift sand and debris into a wild dance. They look like small tornadoes, but they are, in fact, a sort of anti-tornado. Tornadoes draw their energy from a storm system, and are usually associated with thunder, clouds, and rain. Dust devils are born from hot air rising from the surface, in weather that is clear, blazing, and dry.
But, like tornadoes, dust devils form around a core of low pressure. And, in addition to sucking in surrounding air to make a spinning funnel, the reduction in air pressure also reduces the load on the ground beneath. The ground responds by rising up, creating a detectable tilt that extends out beyond the wall of the vortex.
A group of researchers (from the Johns Hopkins University Applied Physics Lab in Laurel, Md., the Jet Propulsion Laboratory in Pasadena, Calif., the Institut Supérieur de l'Aéronautique et de l'Espace in Toulouse, France, and the Institut de Physique du Globe de Paris) found that the 1 to 2 millibar pressure drop in a typical 5-meter-diameter dust devil would reduce downward force on the ground by the equivalent of 810 kilograms (just a little less than the curb weight of a Mitsubishi Mirage, cited by Motor Trend as perhaps the lightest vehicle now sold in the U.S.). The pressure drop for a really big dust devil, on the other hand, could amount to some 300 tons. Their work is reported in the Bulletin of the Seismological Society of America (with open access on arXiv).
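A naive check of that figure, treating the pressure deficit as uniform over the visible core, lands in the right range; the paper’s larger 810-kilogram number presumably reflects the fact that the pressure deficit extends well beyond the core:

```python
import math

dp = 200.0                        # Pa: a 2-millibar core pressure drop
diameter = 5.0                    # m: typical dust devil
area = math.pi * (diameter / 2) ** 2
force = dp * area                 # newtons of lost downward force, core only
kgf = force / 9.81                # ≈ 400 kilograms-equivalent
```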
In an experiment on a dry lake bed in the California desert (on a site operated by JPL and “within sight of the 70-meter Deep Space Network antenna”), Ralph Lorenz of Johns Hopkins and his collaborators confirmed their calculations using a network of eight pressure loggers set up along the 60-meter arms of a cross centered on an existing seismic station. They matched the seismic and barometric data, and found that the seismograph did indeed record the earth tilting up toward the center of a dust devil. The degree of tilt, moreover, corresponded to the size and distance of the vortex…and the seismograph traces of acceleration in the north-south and east-west directions showed the direction of the wind’s closest approach to the center of the cross.
This research is directed not so much at happenings on Earth as at those on Mars, where dust devils can often be seen crossing the red deserts in caravans. (See Lorenz’s blog post on Martian dust devils on the Planetary Society website.) In Mars’s thinner atmosphere, a 0.1-millibar pressure drop in a 15-meter-wide dust devil would produce about the same Mitsubishi Mirage-sized decrease in ground force as a typical California devil. Thirty meters away, the ground would tilt by about 5 × 10⁻⁹ radians, enough to register on the seismometers planned for NASA’s Mars InSight (Interior Exploration using Seismic Investigations, Geodesy and Heat Transport) mission, scheduled to launch in March 2016.
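The quoted tilt is consistent with a textbook elastic half-space (Boussinesq) point-load model; the ground stiffness and Poisson’s ratio below are assumptions for illustration, not figures from the paper:

```python
import math

# Mars case: a 0.1-millibar pressure drop over a 15-meter-wide dust devil.
dp = 10.0                              # Pa
radius = 7.5                           # m
force = dp * math.pi * radius ** 2     # ~1.8 kN of unloading

# Boussinesq surface tilt at distance r from a point unload:
#   tilt = F * (1 - nu**2) / (pi * E * r**2)
E = 100e6       # Pa: assumed Young's modulus of the regolith
nu = 0.25       # assumed Poisson's ratio
r = 30.0        # m from the vortex center
tilt = force * (1 - nu ** 2) / (math.pi * E * r ** 2)   # ~6e-9 radians
```

That is the same order as the article’s 5 × 10⁻⁹ radians, comfortably within reach of InSight’s planned seismometers.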
A new camera system can locate objects hidden around a corner and track their movements with centimeter precision. The camera, which captures images outside its field of view in real time—which is no mean feat—could be used to look for survivors in search-and-rescue scenarios and in vehicle collision avoidance systems.
Cameras that can see stationary objects through walls mostly rely on radar-based imaging technology. But radar systems are typically big, expensive, low-resolution and limited to short distances.
So researchers have switched to other parts of the spectrum. Earlier this year, an MIT team reported a low-cost, high-resolution camera that uses low-power microwaves to create 3-D images of objects hidden behind walls. Optical techniques based on lidar (laser illuminated detection and ranging) have also been used to image hidden objects. But both the microwave and lidar techniques take an hour or more. “That’s not going to work if you want to know whether a car is coming around the corner,” says Genevieve Gariepy, a physics doctoral student at Heriot-Watt University in Edinburgh, UK.
Gariepy, Daniele Faccio, and their colleagues designed a new system that is similar to lidar, but much faster and more sensitive. It can precisely detect a hidden object’s position in just a few seconds, “so we can track the object’s motion in real time,” she says.
Today’s technology makes a 1-exaflop supercomputer capable of performing 1 million trillion floating-point operations per second almost inevitable. But pushing supercomputing beyond that point to 10 exaflops or more will require major changes in both computing technologies and computer architectures.
The leap second, if you haven’t come across it before, is the stray second that is added intermittently to atomic-clock–based Coordinated Universal Time (UTC) to keep it in sync with the unsteady rotation of the Earth.
The question of whether to keep or drop the leap second from UTC has a long and contentious history, and several people I interviewed said they expected negotiations to last through most of the four-week-long meeting.
Instead, “everything was really settled at the end of the second week,” says Vincent Meens of France’s National Center for Space Studies. And the decision was to delay the decision: the question was placed on hold until the 2023 World Radiocommunication Conference, which will be the meeting after the next WRC meeting.
That might sound like kicking the proverbial can down the road—and especially bad news for those who think that adding leap seconds threatens modern networks and systems. But the eight-year delay might not be as bad as it sounds. If the leap second were dropped this year, there would likely have been a grace period to allow systems to adjust to the new order; the proposal submitted this year by the Inter-American Telecommunication Commission, for example, would have waited until 2022 to make the change to UTC active.
Meens expects that if a decision is made to eliminate the leap second in 2023, it would be accompanied by swift action. “The idea is not to wait. So if it’s decided [to eliminate the leap second] it should be right when the new radio regulation is put into force. The new time scale would be in the beginning of 2024,” Meens says. So what looks like an eight-year delay right now might only wind up being a couple of years.
Of course, that outcome will likely depend on what’s done in the meantime (i.e. a good amount of consensus-building and leg work). There is a long list of organizations (see paragraph five in that link) that are expected to take part in studies leading up to WRC-23. And in the midst of all that, Nature’s Elizabeth Gibney reports, responsibility for the definition of UTC will be shifting away from the International Telecommunication Union and toward the international body that already manages International Atomic Time as well as the SI units of measure. She says the change in responsibility is unlikely to accelerate the decision.
In fact, says Brian Patten of the U.S. National Telecommunications and Information Administration, the International Telecommunication Union can’t make the change by itself. “The ITU cannot alone make a decision about leap seconds,” he says, as the organization is responsible for distributing the time scale, not making it. As for a speedy resolution in 2023, Patten says it’s too early to call: “we will have to see what happens in the joint work and discussions,” he says. “We can’t speculate on what the outcome will be when a report is delivered to WRC-23 on the status of the work.”
Although Meens predicts swift implementation if the leap second is eliminated, he can’t predict which way the decision will go. He’s had a role for years in international deliberations over the leap second, but even he was surprised by the outcome of this meeting. “I thought this was going to go until the end of the conference,” Meens says. “This was a particular subject where it was hard to find gray between white and black.”
He theorizes the decision to delay might have come about in part because the international participants of the WRC wanted to focus on other difficult subjects—in particular, the allocation of radio-frequency bands for mobile devices. It’s hard to imagine we won’t be demanding even more spectrum in eight years’ time. But perhaps it will be less of a distraction the next time around.
The Final Acts (pdf) of the conference are now available (the UTC decision is in RESOLUTION COM5/1).
Researchers around the world are investigating how blasting seeds with ionized gas can help boost plant growth. Now a research team at Kyushu University has developed a much simpler plasma technology that it claims can both significantly increase crop yields and shorten harvest time. Kazunori Koga, a plasma engineer and associate professor at Kyushu University, described the technique at the American Vacuum Society’s 62nd International Symposium & Exhibition in San Jose, Calif., last month.
A picture of how wealthy or poor people are can be reconstructed from anonymized data generated by mobile phones, according to researchers analyzing cell phone data from Rwanda.
Personal information that mobile devices gather such as a person's location often gets anonymized by stripping it of names, home addresses, phone numbers, and other obvious identifying details. Such metadata often get shared, and underlies popular services such as Google's real-time monitoring of road traffic.
Super-resolution imaging has helped researchers get pictures of microscopic blood vessels in the brain of a live rat using ultrasound, researchers say.
Such research could one day help investigate diseases that modify blood vessels, such as cancer, stroke and thickening of artery walls in the heart and elsewhere, scientists add.
Current techniques for imaging microscopic blood vessels in living organisms are limited by how deep they can penetrate into tissue, the speed with which they can take pictures, and the resolution of the images they can capture. Although conventional medical ultrasound can image both deeply and quickly, it has, at best, offered a resolution of several hundred micrometers. Because waves diffract, or spread out, as they travel, radiation such as ultrasound cannot be used to directly image features smaller than half its wavelength.
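The half-wavelength limit is easy to work out for a typical clinical setup; the 5-megahertz frequency below is an assumed example, not a figure from the study:

```python
c_tissue = 1540.0                      # m/s: typical speed of sound in soft tissue
f = 5.0e6                              # Hz: a common clinical imaging frequency
wavelength = c_tissue / f              # ≈ 0.3 mm
diffraction_limit = wavelength / 2     # ≈ 150 micrometers
```

That is consistent with the several-hundred-micrometer resolution of conventional ultrasound, and it is exactly the bound super-resolution imaging aims to beat.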
We know that electrons have spin. And researchers think that the spin of single electrons trapped in nitrogen vacancy centers in diamond might be used to store qubits in future quantum computers. They use light and microwave pulses to control the electron’s spin—either up or down—and to read it out. However, because spin is a quantum property, as soon as a measurement is made to determine the strength of the magnetic field, the spin collapses from superposition into one of the two possible spin states. But by repeating the measurement multiple times and looking at the distribution of up and down spins, it is possible to estimate the magnetic field strength using statistics.
Researchers at Delft University of Technology in the Netherlands and Macquarie University in Sydney, Australia, have developed a method that boosts the sensitivity of the repeated measurements by using a feedback loop. After each spin determination, it adjusts the measurement settings for the next measurement, resulting in a sensitivity that is 100 times as high as that of earlier experiments. The researchers published the design of an “intelligent” quantum sensor controlled by a microprocessor in last week's Nature Nanotechnology.
If an electron is brought into a magnetic field, it undergoes a Zeeman interaction, says Machiel Blok, a physicist at Delft University of Technology who participated in the research. Blok explains that it’s a phenomenon similar to the splitting of spectral lines observed in the sun's atmosphere that is caused by the sun's magnetic field. He added that:
To measure this interaction, we use a technique called Ramsey interferometry, where we first prepare with a microwave pulse a superposition between the two spin levels. The energy difference between these spin levels depends on the static magnetic field that is present. This can be read out by the spin—that is, how much is in one state, and how much is in the other state. This we do optically; we can get a resonant excitation of the spin, depending on the spin state. As a result, we get fluorescence if it was in one spin state, and no fluorescence if it was in the other spin state.
The experiment is repeated multiple times with a preset series of different sensing times, whereby the electron is coaxed into a different quantum state each time and allowed to interact with the magnetic field. The magnetic field affects the proportion of these two states, which the researchers record as a series of ones and zeros. The ratio of zeros to ones indicates the strength of the magnetic field.
Up to that point, the methodology of the Delft researchers is no different than earlier research using spins to measure magnetic field strength. But where the methods diverge is that, with each subsequent readout of the electron’s quantum state, the measurement is further refined using Bayesian statistics based on the spin readouts that came before. “By looking at the outcome of each experiment, using Bayesian statistics, you can tune the next step of the experiment, which allows the experimental setup to focus quickly on a better estimate of the magnetic field,” says Blok.
Blok and his collaborators used a commercially available programmable microprocessor that not only collects the measurement results in real time but also performs the Bayesian update itself. “We programmed the chip so that it can do this efficiently, and this increased the accuracy of the measurement substantially,” says Blok.
Although very weak magnetic fields can be measured with SQUIDs, using electron spins has an interesting advantage: spatial resolution. The spatial resolution of SQUIDs, Blok explains, is limited by the size of the loop, which is on the order of a micrometer or even larger. “Our single-spin sensors, since they are an atomic defect, can, in principle, be of nanometer or subnanometer resolution,” says Blok. To put that size in context, diamond nanocrystals containing nitrogen vacancy centers can be introduced into living single cells, says Blok.
IEEE Spectrum’s general technology blog, featuring news, analysis, and opinions about engineering, consumer electronics, and technology and society, from the editorial staff and freelance contributors.