Tech Talk

Why Do Adult Community College Grads Pursue Engineering?

The STEM crisis might be debatable, but the White House Office of Science and Technology Policy claims that the United States will need one million additional STEM graduates within the next decade to stay competitive. Community colleges are an important resource to tap in order to meet that goal.

Getting more community college students to pursue four-year STEM degrees would also boost the diversity of the STEM workforce, since these colleges have a history of enrolling underrepresented students.

In a new study, education professors Taryn Allen and Yi Zhang at the University of Texas at Arlington address how to encourage the transfer of adult students from community colleges to four-year engineering institutions.

Read More

Freeze Ray! (Almost)

Zapping something with a laser usually means heating it up. After all, you’re hitting it with a focused beam of high-energy radiation. But scientists at the University of Washington have used a laser to cool a liquid, reducing it from room temperature to just above 0°C.

Read More

NIST Unveils “All-in-One” Robotic Millimeter-Wave Antenna Test Facility

Millimeter-wave communication is coming. Along with it come mm-wave antennas and greater challenges in testing. Gone are the days when antenna calibration for far-field characterization revolved around football-field-size installations and towers tens of meters tall. By the 1960s, antenna testing for near-field measurements moved indoors; those results could then be extrapolated to real-world far-field values.

Properly testing today’s antennas requires measurements at thousands of positions, each accurate to within one-hundredth of a wavelength. For signals at 183 gigahertz (the emission line for atmospheric water vapor absorption), which have a wavelength of 1,638 micrometers, the probe must be within 33 μm of its ideal position in every dimension on every measurement. (Properly calibrating an antenna at 500 GHz will require positioning accurate to within 15 μm.)
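The wavelength arithmetic behind those tolerances is simple to check. Here is a minimal Python sketch; the frequencies come from the article, but the tolerance fractions shown (λ/50 and λ/100) are illustrative comparisons, not NIST specifications:

```python
# Illustrative only: free-space wavelength and positioning-tolerance arithmetic
# for millimeter-wave antenna scanning. The tolerance fractions are shown for
# comparison and are not NIST specifications.
C = 299_792_458.0  # speed of light, m/s

def wavelength_um(freq_ghz):
    """Free-space wavelength in micrometers for a frequency given in GHz."""
    return C / (freq_ghz * 1e9) * 1e6

for f_ghz in (183.0, 500.0):
    lam = wavelength_um(f_ghz)
    print(f"{f_ghz:5.0f} GHz: wavelength = {lam:7.1f} um, "
          f"lambda/50 = {lam / 50:5.1f} um, lambda/100 = {lam / 100:5.1f} um")
```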

With that in mind, researchers at the Communications Technology Laboratory of the National Institute of Standards and Technology (NIST) in Boulder, Colo., have coupled off-the-shelf components with innovative feedback controls to build a Configurable Robotic Millimeter-Wave Antenna (CROMMA) test facility—a breakthrough for millimeter-wave research and, possibly, the prototype for an “all-in-one” antenna tester. NIST researcher Joshua A. Gordon and his colleagues describe CROMMA in IEEE Transactions on Antennas and Propagation.

In place of the traditional array of rotary tables and individual actuators used to jockey probe and test antennas into position, CROMMA uses industrial robots: a six-axis Yaskawa Motoman MH50-35 robot arm and a six-axis controller to position the test probe, and a hexapod robot (Physik Instrumente M-840) and rotating stage to position the antenna under test. Together, these components control position along three axes, plus angular pitch, yaw, and roll.

The robot arm can position a 35-kilogram probe almost anywhere in a 1-meter-radius working volume. The test-antenna support can maneuver a 30-kg load within a 25-by-25-by-50-mm box, and can vary the tilt within a range of 15 to 30 degrees.

The hexapod’s positioning accuracy, on the order of a single micrometer, is comfortably within what mm-wave testing requires. The probe arm, however, comes out of the box with a 70-μm limit on position repeatability.

To gain greater precision, the developers added several layers of feedback control. They started with a laser tracking system comprising an array of spherical mirror reflectors mounted on the test floor, the robot arm, and the test antenna table. They further increased precision by mounting “more sophisticated” commercial laser targets on the probe arm. Finally, they added a set of three machine-vision cameras to double-check the probe’s position from outside the moving robot-hexapod system.

To integrate these overlapping sets of position and rotation information and attain final accuracies within 25 μm in position and 0.01 degrees of rotation, the NIST team developed a Coordinated Metrology Space (CMS). The CMS combines the separate reference frames of the probe, the test antenna, the hexapod, and the rotating stage under the hexapod. The result is an extraordinarily precise picture of the relative positions and orientations of the probe and test antenna.
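The Coordinated Metrology Space is, at heart, bookkeeping for chained coordinate frames. As a rough illustration only (the frame names and numbers below are invented for the example, not taken from NIST), homogeneous transforms make that chaining explicit:

```python
import numpy as np

def transform(rotation_deg_z, translation_xyz):
    """4x4 homogeneous transform: rotation about z (degrees), then translation (mm)."""
    t = np.radians(rotation_deg_z)
    T = np.eye(4)
    T[:2, :2] = [[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]]
    T[:3, 3] = translation_xyz
    return T

# Invented example frames: lab <- rotating stage <- hexapod <- antenna under test (AUT),
# and lab <- robot arm / laser tracker <- probe.
lab_T_stage = transform(30.0, [0.0, 0.0, 0.0])        # rotating-stage angle in the lab frame
stage_T_hex = transform(0.0, [0.0, 0.0, 120.0])       # hexapod platform above the stage
hex_T_aut   = transform(0.0, [0.0, 0.0, 80.0])        # antenna under test on the hexapod
lab_T_probe = transform(0.0, [250.0, 40.0, 200.0])    # probe pose reported by arm + laser tracker

# Express the probe in the AUT frame by composing the chain and inverting it.
lab_T_aut = lab_T_stage @ stage_T_hex @ hex_T_aut
aut_T_probe = np.linalg.inv(lab_T_aut) @ lab_T_probe
print("probe position in AUT frame (mm):", aut_T_probe[:3, 3])
```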

The group used a commercial 50-GHz vector network analyzer (with a frequency extender) to generate the signal from the test antenna and measure the signal amplitude and phase at the probe antenna. The tight control allowed the engineers to move the probe accurately through almost any path—including the spherical, cylindrical, and planar configurations commonly used to calibrate antennas. The 100-mm-radius spherical near-field test, for example, required 76,000 separate probe locations. The root mean square (rms) radius actually measured was 99.977 mm, with an rms error of 22 μm.
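Figures like the rms radius and rms radial error fall out of the measured probe coordinates in a few lines. This is a generic sketch using synthetic data, not the NIST analysis code:

```python
import numpy as np

def rms_radius_and_error(points_mm, nominal_radius_mm):
    """points_mm: (N, 3) measured probe positions relative to the sphere center."""
    radii = np.linalg.norm(points_mm, axis=1)                       # radius at each probe stop
    rms_radius = np.sqrt(np.mean(radii ** 2))                       # rms of measured radii
    rms_error = np.sqrt(np.mean((radii - nominal_radius_mm) ** 2))  # rms radial deviation
    return rms_radius, rms_error

# Synthetic example: 76,000 points near a 100-mm sphere with ~20 um of radial scatter.
rng = np.random.default_rng(0)
directions = rng.normal(size=(76_000, 3))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)
points = directions * (100.0 + rng.normal(scale=0.020, size=(76_000, 1)))

print(rms_radius_and_error(points, 100.0))
```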

The NIST researchers checked the system by extrapolating far-field signal values from the 100-mm near-field data, and then comparing the extrapolations with data actually collected from far-field runs at radii of 1000 mm from the test antenna. The agreement was very close (with some increase in noise at wide angles between transmitter and probe).

In sum, the authors say, CROMMA makes it possible to have a single facility capable of doing tests that previously required multiple antenna setups, “thus making possible a truly all-in-one antenna characterization facility.” More work is required to reach that goal, of course—particularly an increase in positioning accuracy to meet the demands of 500-GHz components.

Computer Learns to Write Its ABCs

A new computer model can mimic the human ability to learn new concepts from a single example, instead of the hundreds or thousands of examples that other machine-learning techniques require, researchers say.

The new model learned how to write invented symbols from the animated show Futurama as well as dozens of alphabets from across the world. It also showed it could invent symbols of its own in the style of a given language.

The researchers suggest their model could also learn other kinds of concepts, such as speech and gestures.

Read More

Investing in PhD Research Pays Off

It is easy to track how much the U.S. federal government invests in research every year. Tracking the output of that investment is a whole lot harder.

A research team led by economists at the National Bureau of Economic Research has now tried to measure the impact of public and private research investments by looking at the job outcomes of funded PhD recipients. They found that nearly 40% of federally and non-federally funded PhDs went to work in industry. These graduates were more likely to land high-paying jobs, to work in high-tech and professional-service industries, and to join firms that perform R&D.

All of these are characteristics correlated with higher productivity, the researchers say in a paper published in the December 10 issue of the journal Science. This shows that doctoral recipients transfer their knowledge out to the economic marketplace, boosting local and national economic growth.

Read More

Celebrity Digital Doppelgängers

Digital doppelgängers of anyone can be generated from hundreds of images of them collected from the Internet, from celebrities such as Tom Hanks and Arnold Schwarzenegger to family members and historical figures, say researchers at the University of Washington in Seattle.

Their work suggests that creating 3-D digital models of people based on anything from family photo albums and videos to historical collections could one day be possible. Such models could be controlled like puppets and made to do and say anything.

In fact, it could be relatively easy to create such models of “anyone in the future, when there are tons of digital photos,” says study lead author Supasorn Suwajanakorn, a computer scientist at the University of Washington. “Photos on Facebook, for some users, are already enough to reconstruct their controllable models.” Suwajanakorn and his colleagues Steven Seitz and Ira Kemelmacher-Shlizerman will detail their findings on 16 December at the International Conference on Computer Vision in Chile. Their research was funded by Samsung, Google, Intel and the University of Washington.

Read More

Seismic Tools to Map Earthly Microquakes and Martian Dust Devils

Two new papers demonstrate analytical tools that greatly enhance scientists’ ability to interpret low-amplitude seismic signals…though to very different ends. One shows how to efficiently tease signals of very low-energy events out of the overwhelming flood of data pouring out of seismometric stations around the world. The other shows how weather tilts the land below it, offering a tool for tracking small thermal vortices—dust devils—here and on Mars.

For most of us, seismology means the Big Ones: significant earthquakes and wild jumps of the seismograph. Those spikes register numerically when the ratio of the short-term to the long-term average ground motion (STA/LTA) passes a threshold value.
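The STA/LTA trigger itself is easy to express in code. A minimal sketch follows, with typical (not article-specified) window lengths and threshold:

```python
import numpy as np

def sta_lta_trigger(signal, fs, sta_s=1.0, lta_s=30.0, threshold=4.0):
    """Return sample indices where the short-term/long-term average ratio of the
    squared signal exceeds the threshold. `signal` is a 1-D NumPy array sampled at fs Hz.
    Simple centered-window averages; production detectors use causal or recursive variants."""
    sta_n, lta_n = int(sta_s * fs), int(lta_s * fs)
    energy = np.asarray(signal, dtype=float) ** 2
    sta = np.convolve(energy, np.ones(sta_n) / sta_n, mode="same")
    lta = np.convolve(energy, np.ones(lta_n) / lta_n, mode="same")
    ratio = sta / np.maximum(lta, 1e-12)   # avoid division by zero in quiet stretches
    return np.where(ratio > threshold)[0]
```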

This doesn’t work quite as well for subtler events: those masked by noise, overlapping signals, long-developing signals, and the rumbles and thumps of human activity. Many seismic phenomena, such as the slipping of a particular fault at a particular place, recur over periods ranging from weeks to years, generating a characteristic seismic-wave signature each time. To find miniquakes that haven’t stepped over the detector threshold, seismologists use a template-matching technique: they slide the known quake signature through the recorded data and flag any stretch whose correlation coefficient with the template is high enough to count as a match.
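In code, template matching amounts to computing the normalized cross-correlation of the known waveform against every window of the continuous record and keeping the windows that clear a threshold. A minimal sketch (the 0.8 threshold is illustrative):

```python
import numpy as np

def template_match(record, template, threshold=0.8):
    """Slide a known event template along a continuous record and return
    (start index, correlation coefficient) for windows above the threshold."""
    m = len(template)
    t = (template - template.mean()) / (template.std() * m)
    hits = []
    for i in range(len(record) - m + 1):
        window = record[i:i + m]
        cc = np.dot(t, (window - window.mean()) / max(window.std(), 1e-12))
        if cc > threshold:
            hits.append((i, cc))
    return hits
```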

Template matching, however, requires a signature for a known event. Finding unknown leitmotifs in the flood of seismic data is more challenging. In its most extreme form, it requires comparing every 10-second snippet of data to every other 10-second snippet in the terabyte deluge from the world’s seismograph stations (totaling some 21,899, according to the International Seismological Centre registry).

That fine-toothed-comb matching, called autocorrelation, has until now been the most accurate way of making these comparisons, but it consumes a great deal of time and computing power. In a test, an autocorrelation analysis of just one week’s worth of data from a single station near San Jose, Calif., took 9 days and 13 hours.

Shazam

To speed things up, researchers at Stanford University’s Department of Geophysics and Institute of Computational and Mathematical Engineering have taken a cue from apps like Shazam, which help users identify snatches of melody they just can’t put a name to. These programs identify the main characteristics of a complex wave spectrum, compress them, and represent the result as an ordered series of values—a vector “fingerprint.” The fingerprint is used to assign the snippet to a particular bin (a technique called locality-sensitive hashing). This operation is repeated until the entire data set has been fingerprinted and categorized. The analysts then compare each snippet to all of the other snippets in its bin; when the fingerprints agree and the source waveforms match, the two events can be identified as originating from the same sort of tectonic movement.

Lead Stanford author Clara Yoon and her colleagues built their Fingerprint and Similarity Thresholding (FAST) system not on Shazam’s analytical engine, but on Google’s WavePrint fingerprinting package (originally developed for processing images). The team started with the same week-long data file they used for the autocorrelation test. This time, they filtered the signal to concentrate on the 4-to-10 Hertz band (the earthquake’s voice range) and compressed it from 100 samples per second to 20. They then broke the whole dataset into tens of thousands of overlapping 10-second segments, reduced each segment to a 4096-bit binary fingerprint, and assigned it to a bin.
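A much-simplified sketch of that pipeline appears below: band-pass to the 4-to-10-hertz band, downsample from 100 to 20 samples per second, cut overlapping 10-second windows, hash each window to a short binary fingerprint, and group identical hashes into bins. The sign-random-projection hash here is only a stand-in for the Waveprint-style fingerprints the team actually computes, and 64 bits replaces the paper’s 4,096:

```python
import numpy as np
from scipy.signal import butter, sosfilt, decimate
from collections import defaultdict

def fast_like_bins(trace, fs=100.0, win_s=10.0, step_s=1.0, n_bits=64, seed=0):
    """Toy FAST-style binning: band-pass, downsample, window, hash each window
    to a binary fingerprint, and group windows whose hashes are identical."""
    sos = butter(4, [4.0, 10.0], btype="bandpass", fs=fs, output="sos")
    filtered = sosfilt(sos, trace)
    down = decimate(filtered, int(fs // 20))        # 100 samples/s -> 20 samples/s
    fs_d = 20.0
    win, step = int(win_s * fs_d), int(step_s * fs_d)

    rng = np.random.default_rng(seed)
    planes = rng.normal(size=(n_bits, win))         # sign-random-projection hash planes
    bins = defaultdict(list)
    for start in range(0, len(down) - win + 1, step):
        seg = down[start:start + win]
        seg = (seg - seg.mean()) / max(seg.std(), 1e-12)
        bits = planes @ seg > 0                     # n_bits-long binary fingerprint
        bins[bits.tobytes()].append(start / fs_d)   # group windows with identical hashes
    return bins
```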

Confining the final time-consuming autocorrelation comparison to a few closely related segments in the same bin slashes computer run-time. The same analysis that required 9.5 days of autocorrelation analysis took 96 minutes of FAST processing. Both the autocorrelation and the FAST analyses detected about the same number of events (86 and 89, respectively), and each of them found 43 previously unreported events. FAST did have a weakness: autocorrelation correctly identified all 24 of the previously reported events, while FAST missed 3. Then again, FAST located 25 events that standard template matching missed (versus 19 for autocorrelation).

FAST’s run-time advantage becomes more marked as the amount of data increases: comparing every snippet against every other grows roughly with the square of the record length, while fingerprinting and binning grow far more slowly. Analyzing 6 months of seismometer data would take FAST a couple of days; autocorrelating the same data would eat up about 20 years.

The Dust Devil is in the Details

Elsewhere, an international team has figured out how to use seismometer data to identify and track dust devils, the swirling vortices that seem to come out of nowhere to lift sand and debris into a wild dance. They look like small tornadoes, but they are, in fact, a sort of anti-tornado. Tornadoes draw their energy from a storm system, and are usually associated with thunder, clouds, and rain. Dust devils are born from hot air rising from the surface, in weather that is clear, blazing, and dry.

But, like tornadoes, dust devils form around a core of low pressure. And, in addition to sucking in surrounding air to make a spinning funnel, the reduction in air pressure also reduces the load on the ground beneath. The ground responds by rising up, creating a detectable tilt that extends out beyond the wall of the vortex.

A group of researchers (from the Johns Hopkins University Applied Physics Lab in Laurel, Md., the Jet Propulsion Laboratory in Pasadena, Calif., the Institut Supérieur de l'Aéronautique et de l'Espace in Toulouse, France, and the Institut de Physique du Globe de Paris) found that the 1-to-2-millibar pressure drop in a typical 5-meter-diameter dust devil would reduce the downward force on the ground by the equivalent of 810 kilograms (just a little less than the curb weight of a Mitsubishi Mirage, cited by Motor Trend as perhaps the lightest vehicle now sold in the U.S.). For a really big dust devil, on the other hand, the lightened load could amount to some 300 tons. Their work is reported in the Bulletin of the Seismological Society of America (with open access on arXiv).

In an experiment on a dry lake bed in the California desert (on a site operated by JPL and “within sight of the 70-meter Deep Space Network antenna”), Ralph Lorenz of Johns Hopkins and his collaborators confirmed their calculations using a network of eight pressure loggers set up along the 60-meter arms of a cross centered on an existing seismic station. They matched the seismic and barometric data, and found that the seismograph did indeed record the earth tilting up toward the center of a dust devil. The degree of tilt, moreover, corresponded to the size and distance of the vortex…and the seismograph traces of acceleration in the north-south and east-west directions showed the direction of the wind’s closest approach to the center of the cross.

This research is aimed not so much at happenings on Earth as at those on Mars, where dust devils can often be seen crossing the red deserts in caravans. (See Lorenz’s blog post on Martian dust devils on the Planetary Society website.) In Mars’s rarer atmosphere, a 0.1-millibar pressure drop in a 15-meter-wide dust devil would produce about the same Mitsubishi Mirage-sized decrease in ground force as a typical California devil. Thirty meters away, the ground would tilt by about 5 × 10⁻⁹ radians, enough to register on the seismometers planned for NASA’s Mars InSight (Interior Exploration using Seismic Investigations, Geodesy and Heat Transport) mission, scheduled to launch in March 2016.

Camera That Tracks Hidden Moving Objects Could Aid Rescue Missions and Avoid Vehicle Collisions

A new camera system can locate objects hidden around a corner and track their movements with centimeter precision. The camera captures images outside its field of view in real time—no mean feat—and could be used to look for survivors in search-and-rescue scenarios and in vehicle collision-avoidance systems.

Cameras that can see stationary objects through walls mostly rely on radar-based imaging technology. But radar systems are typically big, expensive, low-resolution and limited to short distances.

So researchers have switched to other parts of the spectrum. Earlier this year, an MIT team reported a low-cost, high-resolution camera that uses low-power microwaves to create 3-D images of objects hidden behind walls. Optical techniques based on lidar (laser illuminated detection and ranging) have also been used to image hidden objects. But both the microwave and lidar techniques take an hour or more. “That’s not going to work if you want to know whether a car is coming around the corner,” says Genevieve Gariepy, a physics doctoral student at Heriot-Watt University in Edinburgh, UK.

Gariepy, Daniele Faccio, and their colleagues designed a new system that is similar to lidar, but much faster and more sensitive. It can precisely detect a hidden object’s position in just a few seconds, “so we can track the object’s motion in real time,” she says.

Read More

How Supercomputing Can Survive Beyond Moore's Law

Today’s technology makes a 1-exaflop supercomputer, capable of performing 1 million trillion floating-point operations per second, almost inevitable. But pushing supercomputing beyond that point to 10 exaflops or more will require major changes in both computing technologies and computer architectures.

Read More

The Eight-Year Leap Second Delay Might Not Be As Bad As It Seems

After I posted a curtain-raiser about the debate over the fate of the leap second at the World Radiocommunication Conference in Geneva last month, I settled in for a wait. 

The leap second, if you haven’t come across it before, is the stray second that is added intermittently to atomic-clock–based Coordinated Universal Time (UTC) to keep it in sync with the unsteady rotation of the Earth.

The question of whether to keep or drop the leap second from UTC has a long and contentious history, and several people I interviewed said they expected negotiations to last through most of the four-week-long meeting. 

Instead, “everything was really settled at the end of the second week,” says Vincent Meens of France’s National Center for Space Studies. And the decision was to delay the decision: the question was placed on hold until the 2023 World Radiocommunication Conference, which will be the meeting after the next WRC meeting.

That might sound like kicking the proverbial can down the road—and especially bad news for those who think that adding leap seconds threatens modern networks and systems. But the eight-year delay might not be as bad as it sounds. Had the leap second been dropped this year, there would likely have been a grace period to allow systems to adjust to the new order; the proposal submitted this year by the Inter-American Telecommunication Commission, for example, would have waited until 2022 to make the change to UTC active.

Meens expects that if a decision is made to eliminate the leap second in 2023, it would be accompanied by swift action. “The idea is not to wait. So if it’s decided [to eliminate the leap second] it should be right when the new radio regulation is put into force. The new time scale would be in the beginning of 2024,” Meens says. So what looks like an eight-year delay right now might only wind up being a couple of years.

Of course, that outcome will likely depend on what’s done in the meantime (i.e., a good amount of consensus building and legwork). There is a long list of organizations (see paragraph five in that link) that are expected to take part in studies leading up to WRC-23. And in the midst of all that, Nature’s Elizabeth Gibney reports, responsibility for the definition of UTC will be shifting away from the International Telecommunication Union and toward the international body that already manages International Atomic Time as well as the SI units of measure. She says the change in responsibility is unlikely to accelerate the decision.

In fact, says Brian Patten of the U.S. National Telecommunications and Information Administration, the International Telecommunication Union can’t make the change by itself. “The ITU cannot alone make a decision about leap seconds,” he says, as the organization is responsible for distributing the time scale, not making it. As for a speedy resolution in 2023, Patten says it’s too early to call: “We will have to see what happens in the joint work and discussions,” he says. “We can’t speculate on what the outcome will be when a report is delivered to WRC-23 on the status of the work.”

Although Meens predicts swift implementation if the leap second is eliminated, he can’t predict which way the decision will go. He’s had a role for years in international deliberations over the leap second, but even he was surprised by the outcome of this meeting. “I thought this was going to go until the end of the conference,” Meens says. “This was a particular subject where it was hard to find gray between white and black.”

He theorizes that the decision to delay might have come about in part because the international participants of the WRC wanted to focus on other difficult subjects—in particular, the allocation of radio-frequency bands for mobile devices. It’s hard to imagine we won’t be demanding even more spectrum in eight years’ time. But perhaps it will be less of a distraction the next time around.

The Final Acts (pdf) of the conference are now available (the UTC decision is in RESOLUTION COM5/1).
