Tech Talk

3D-Printed Plastic Sensor Could Make Siri a Better Listener

Apple’s Siri and Microsoft’s Xbox video game consoles still sometimes struggle to hear their owners in a noisy room. A 3-D printed sensor prototype could solve that problem by giving electronic devices the sensitivity to pick out a single voice or sound.


4-D Light Field Displays Are Exactly What Virtual Reality Needs

Ever since our first experience with a prototype of the Oculus Rift, we’ve been getting more and more excited about high-quality consumer virtual reality hardware. The first production version of the Rift is almost here, and when it arrives (probably in early 2016), you might even be able to justify its rumored $1,500 cost.

Good as the Rift is (and it’s very, very good), it’s taken this long for Oculus to get the hardware ready because fooling your eyes and brain to the extent that the Rift (or any other piece of VR hardware) does is a very tricky thing to pull off. Most of us have a lifetime of experience looking at the world in 3-D, and we notice immediately when things aren’t quite right. That can lead to headaches, nausea, and a general desire never to try VR again. A big part of what makes VR so difficult is that it’s continually trying to convince your eyes that they should be focusing on a scene in the distance when, really, they’re looking at a screen just a few inches away.

The current generation of VR displays uses a few different techniques to artificially generate images that appear to have depth despite being displayed on flat screens. But there’s one technique they’re missing out on—one that could make VR displays much more comfortable to use. The same sort of 4-D light field technology that allows the Lytro camera to work its magic could solve this problem for VR as well.


Microresonators: Transmitting 40 Communication Channels with One Laser

Interconnections in powerful computers and links between "blades" in data centers will increasingly rely on optical communication. Today, that still requires an individual laser, with its own control circuitry, for each channel. Now researchers at Purdue University have developed a technology that allows a single laser to transmit data over a number of individually controlled channels, at different frequencies, simultaneously. They published the research online in the 10 August edition of the journal Nature Photonics.

The key component of this technology is a tiny microresonator: a 100-micrometer-wide optical waveguide loop, or microring, made from silicon nitride. Because it is as thin as a sheet of paper, it can easily be integrated on silicon chips. The microresonator replaces a whole tabletop’s worth of optical components and resonators that are now required to create a mode-locked laser.

In the experimental setup, a pump laser is connected to the resonator. The researchers pump the resonator with a continuous-wave laser at one frequency, explains Minghao Qi, an associate professor of electrical and computer engineering at Purdue. The resonator, though small, can hold a huge amount of power, which leads to nonlinear interaction. “Normally, if we pump anything into the resonator, and the interaction is linear, the input and output frequencies are the same,” says Qi. “When the interaction is non-linear, it basically generates higher-order harmonics—new frequencies.”

Qi adds that, because the spacing between the different frequency peaks is the same, the resonator is called a frequency comb. The frequencies can be tuned by changing the resonance frequency of the resonator, which is done with an electric heater: a tiny gold wire overlaying the resonator. Changing the temperature changes the resonator’s refractive index, which in turn changes the resonance frequency.
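
The numbers behind that tuning are easy to sketch. The snippet below uses generic textbook values for a silicon nitride microring, not the Purdue device’s published parameters, to estimate the comb-line spacing and the resonance shift per kelvin of heating; treating the 100-micrometer figure above as the ring diameter is our own assumption.

```python
import numpy as np

# Rough microring arithmetic (generic Si3N4 values, not Purdue's parameters).
# Resonances occur where a whole number of wavelengths fits the ring:
#   m * lambda = n_eff * L,  so  f_m = m * c / (n_eff * L),
# and adjacent comb lines are spaced by the free spectral range c / (n_g * L).

c = 3.0e8                  # speed of light, m/s
L = np.pi * 100e-6         # circumference of a 100-um-diameter ring, m (assumed)
n_eff = 2.0                # effective index of a Si3N4 waveguide (typical)
n_g = 2.1                  # group index (typical)
dn_dT = 2.45e-5            # thermo-optic coefficient of Si3N4, per kelvin

print(f"Comb-line spacing ~ {c / (n_g * L) / 1e9:.0f} GHz")

# Resonance nearest a 193-THz (1550-nm) pump, and how far the gold
# microheater drags it by raising the ring temperature dT kelvin:
m = round(193e12 * n_eff * L / c)              # longitudinal mode number
f_res = m * c / (n_eff * L)
for dT in (1.0, 5.0, 10.0):
    f_hot = m * c / ((n_eff + dn_dT * dT) * L)
    print(f"dT = {dT:4.1f} K: shift {(f_hot - f_res) / 1e9:+.2f} GHz")
```

With these assumed values the comb lines sit several hundred gigahertz apart, and each kelvin of heating pulls a resonance by a few gigahertz, which is why a tiny gold wire suffices as the tuning element.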

While the experimental setup works well with discrete light pulses, the researchers also noted the presence of “dark pulses,” or very short intervals where no light is transmitted. These intervals can occur every one or two picoseconds, which is a hundred times faster than the switching speed of the most advanced microprocessors. “The advantage of a dark pulse is that this can be repeatedly generated and that means it is very reliable and we can control it. If you want a bright pulse, then it is a very tricky process,” says Qi.

The Purdue researchers also showed that dark pulses can be converted into bright pulses. “So by creating a dark pulse first, you have a process that is robust and controllable,” says Qi.

Besides facilitating high-volume optical communications in computers, microresonators could also be used in optical sensors and in spectroscopy. If you want to probe a compound at many different wavelengths, you can use a tunable laser to excite the molecule at those wavelengths. With conventional lasers, you have to tune the laser to a different frequency for every measurement, which takes time. What’s more, tunable lasers are expensive, explains Qi. But with the Purdue team’s improved laser, “If your probe light itself has many, many frequencies, you are basically doing a spectral scanning, with all the frequencies in one shot,” Qi adds.

For the moment, the scientists have yet to put the microresonator on a chip with all the other components. “This will be our next step,” says Qi.

Model Predicts Whether NFL Teams Will Run or Pass

National Football League (NFL) playbooks are the size of telephone books. They’re filled with dozens and dozens of plays, each designed so that a team can play to its strengths while exploiting its opponents’ weaknesses. Despite the endless variations, they all basically boil down to two options for the offense: pass or run. No matter how intricately designed an offensive play is, if the defense can sniff out whether the ball will be tossed downfield or toted along the ground, it gains a tremendous advantage. (Yes, we know that teams punt and kick field goals and extra points after touchdowns. But we’re not talking about that right now.)

Earlier this week, a pair of statisticians from North Carolina State University showed off a model they built that predicts with a high degree of accuracy whether a specific team will call a passing or running play. They presented the model in Seattle at JSM 2015, a joint conference of statistical and mathematical societies.

William Burton, an undergraduate majoring in industrial engineering and minoring in statistics, and Michael Dickey, who graduated in May with a degree in statistics, used a listing of actual NFL offensive plays from the 2000 through 2014 seasons, compiled by a company called Armchair Analysis, to figure out the ratio of passes to runs. They showed empirically what fans already understood anecdotally: the aerial attack is being used ever more frequently. Pass plays were called 56.7 percent of the time in 2014, compared with 54.4 percent in 2000.

But what makes a team decide whether to run or throw? Burton and Dickey looked at a host of factors that affect a team’s play selection, among them the distance to the first-down marker; whether it’s first, second, third, or fourth down; how much time is left on the game clock; the team’s score relative to its opponent’s; and field position. For example, there’s a high probability that the coach will opt for a passing play if the other team is leading by three points, there’s a minute left in the fourth quarter, and the offense is facing third down at its own 30-yard line with 7 yards to go for a fresh set of downs. On the other hand, a team that’s leading by 7 points, facing the same down and distance at the same point in the game, might very likely run the ball (to avoid an interception and to take time off the clock so the other team can’t mount a score-tying drive before time runs out).

For their system, Burton and Dickey developed logistic regression models (a statistical technique used, for example, to predict whether someone will default on a mortgage) and random forest models (a machine-learning method). But they quickly realized that teams’ strategies differ significantly in each of a game’s quarters. To account for that, they produced six separate logistic regression models: one each for the first, second, and third quarters, plus one for the fourth quarter if the offensive team is winning, another if it is losing, and a third for when the score is tied. They tested their models on 20 randomly selected games. Overall, the models accurately predicted pass or run on 75 percent of downs. The models’ best performance came in a 2014 game between the Jacksonville Jaguars and the Dallas Cowboys, where their predictions proved correct on 109 out of 119 offensive plays—a 91.6 percent accuracy rate.
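
As a concrete sketch of that six-way split, the snippet below partitions synthetic plays by game situation and fits one scikit-learn logistic regression per partition. The feature list mirrors the factors above, but the data, the toy labeling rule, and the `situation` helper are our own stand-ins, not Burton and Dickey’s code or the Armchair Analysis dataset.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for play-by-play data (NOT the Armchair Analysis set).
# Columns: down, yards to go, seconds left in the game, score margin
# (offense minus opponent), and yard line. Label: 1 = pass, 0 = run.
n = 5000
X = np.column_stack([
    rng.integers(1, 5, n),        # down (1-4)
    rng.integers(1, 21, n),       # yards to the first-down marker
    rng.integers(0, 3600, n),     # seconds remaining in regulation
    rng.integers(-21, 22, n),     # score margin
    rng.integers(1, 100, n),      # yard line
])
# Toy ground truth: long yardage and a deficit both push toward passing.
y = rng.random(n) < 1 / (1 + np.exp(-(0.25 * X[:, 1] - 0.12 * X[:, 3] - 2.0)))

# Six models, mirroring the paper's split: one per quarter for Q1-Q3, plus
# separate fourth-quarter models for winning, losing, and tied offenses.
def situation(seconds_left, margin):
    quarter = 4 - seconds_left // 900
    if quarter < 4:
        return f"Q{quarter}"
    return "Q4_" + ("win" if margin > 0 else "lose" if margin < 0 else "tie")

keys = np.array([situation(s, m) for s, m in zip(X[:, 2], X[:, 3])])
for key in sorted(set(keys)):
    mask = keys == key
    model = LogisticRegression(max_iter=1000).fit(X[mask], y[mask])
    print(f"{key}: in-sample accuracy {model.score(X[mask], y[mask]):.2f}")
```

Splitting the data this way lets each model learn coefficients suited to its game situation, such as the fourth-quarter-trailing model weighting the clock far more heavily than the first-quarter model does.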

Burton and Dickey say that anyone, including NFL coaches and fans rooting for their teams at home, can use the tool to make educated guesses about what will happen each time the ball is snapped.

DARPA Funds Stage 2 of XS-1 Spaceplane Design Competition

The Space Shuttle was originally intended to make getting to space easy, inexpensive, and routine, with an initial goal of a launch nearly every week. It didn't quite play out that way, and we’re now back to tossing things into orbit on top of massively expensive rockets that are good for only a single one-way trip. It’s a system that works (most of the time), but it's not a system that’s efficient.

Just because the Space Shuttle didn’t magically open orbit up to everyone doesn’t mean that getting to space on a reusable platform is a bad idea. In commercial spaceflight circles, the thinking is that reusability is the best way to drive down costs. DARPA, the U.S. Defense Department’s blue-sky research arm, wants a piece of the action, but in typical DARPA fashion, the agency is looking for something even more futuristic. It has just awarded nearly $20 million to three aerospace companies to continue design work on a reusable spaceplane that will be able to launch a satellite into space every single day.


See Through Walls by the Glow of Your Wi-Fi

It used to be that a bad guy besieged by police could just shoot out the lights and hide in the dark. As if it weren’t enough that today’s cornered malefactors have to worry about night vision goggles, tomorrow’s thugs may also have to worry about the soft radio glow of wireless routers and mobile communications towers.

Researchers at University College London (UCL) have devised a system for detecting the Doppler shifts of ubiquitous Wi-Fi and mobile telephone signals to “see” people moving, even behind masonry walls 25 centimeters thick. The method, which could be useful in situations from hostage-takings to traffic control, won the Engineering Impact Award in the RF and Communications category at National Instruments’ NI Week 2015 meeting (which convened in Austin, Texas, 3–6 August).

Other researchers—notably Dina Katabi and Fadel Adib of MIT—have built through-wall radars in the household communication bands, but these are active radars that transmit as well as receive. The UCL technique uses only passive radiation—from Wi-Fi routers (using emissions from any of the IEEE 802.11 b, g, n, or ac variants), ambient GSM and LTE mobile signals, and other sources—so there is nothing to betray the surveillance. The system calculates the position of a hidden target by comparing two signals: a reference channel, receiving the baseline signal from the Wi-Fi access point or other RF source, and a surveillance channel, which picks up Doppler-shifted waves reflecting from the moving subject.

Tan and company built their “high Doppler resolution passive Wi-Fi radar” on two multi-frequency, software-defined, FPGA-based transceivers (National Instruments’ USRP, or Universal Software Radio Peripheral). The system compares the reference and surveillance signals, interprets the very small frequency shifts, and reveals the hidden subject’s location and motion.
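
The heart of that comparison can be shown in a few lines. In the toy simulation below (our own sketch, not the UCL processing chain), a walker’s echo shifts a 2.4-GHz carrier by a couple dozen hertz, and mixing the surveillance channel against the conjugate of the reference channel exposes that offset; all the signal parameters are assumed.

```python
import numpy as np

# Toy passive-Doppler illustration (not the UCL team's code).
# A reflector moving at v m/s shifts a carrier f_c by roughly
# f_d = 2 * v * f_c / c, which at Wi-Fi frequencies is only tens of hertz.
c, f_c, v = 3.0e8, 2.4e9, 1.4           # light speed; carrier; walking speed (assumed)
f_d = 2 * v * f_c / c
print(f"Expected Doppler shift: {f_d:.1f} Hz")

# One second of simulated baseband data on the two receiver channels.
fs = 1000                               # sample rate after downconversion, Hz
t = np.arange(fs) / fs
rng = np.random.default_rng(1)
ref = np.ones(fs, dtype=complex)        # reference channel: direct path
surv = 0.1 * np.exp(2j * np.pi * f_d * t) + 0.05 * rng.standard_normal(fs)

# Mixing against the conjugate reference leaves only the Doppler offset;
# a 1-second FFT then resolves it to about 1 Hz.
spectrum = np.abs(np.fft.fft(surv * np.conj(ref)))
freqs = np.fft.fftfreq(fs, d=1 / fs)
print(f"Recovered shift: {freqs[np.argmax(spectrum)]:.1f} Hz")
```

A longer integration window narrows the FFT bins, which is exactly the knob the engineers turn below to resolve subtler motions.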

By tweaking the processing parameters—increasing signal-integration time and lowering sensitivity thresholds—the engineers could coax the passive radar into “seeing” quite subtle movements, even hand gestures. At this stage, the device doesn’t produce anything resembling a photograph of the subject. It delivers a radar-style scatter plot, a flare of color that says, “Here he is!” along with a variety of signal data. The system is described in more detail in a paper that Tan and UCL colleagues Qingchao Chen, Karl Woodbridge, and Kevin Chetty presented at the 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), held 19–24 April in South Brisbane, Australia.

Samsung, Nokia Show 5G Tech at NI Week

Two on-stage demonstrations at National Instruments’ NI Week, which was held in Austin, Texas, from 3–6 August, showed off working infrastructure prototypes for the 5G networks that will be the backbone of Big Data applications and the Internet of Things (the conference’s themes). 5G’s goals include hundred-fold performance improvements over today’s 4G systems, allowing the transmission of tens of megabits per second to thousands of mobile users, and offering peak rates well north of a gigabit per second within an office.

To open Day 1, researchers from Samsung showed off full-dimensional multiple-input, multiple-output (FD-MIMO), one of several parallel techniques for squeezing more information through the airwaves. FD-MIMO sculpts the signals fed to an array of transmission antennas to form virtual beams that can lock in on multiple receivers in three dimensions. This cuts interference from overlapping simultaneous transmissions to other receivers and increases the power of the signal that reaches the target.

The Samsung demo base station transmitted simultaneously at different data rates to four separate receivers. For demonstration purposes, the base station transmitted at 3.5 gigahertz (~86-millimeter wavelength), though production transmitters will likely use carriers in the tens-of-gigahertz range. (The receivers were configured on NI USRP RIOs, or Universal Software Radio Peripheral Reconfigurable IOs, transceivers that can be programmed to reproduce the characteristics of a variety of RF devices over a range of frequencies.)

Initially, in conventional broadcast mode, interference between the four streams garbled the signals and kept any of the data from getting through. Switching to FD-MIMO, however, modulated the signals produced by each of the base station’s 32 antennas to allow beamforming in three dimensions. The transmitter homed in on each of the four receivers to push through separate, clear signals. Throughputs measured at the receivers jumped from essentially zero to as much as 28 megabits per second. The Samsung engineers cautioned, though, that the demonstration was intended to show how much FD-MIMO can improve signal quality, not to showcase a full-blown 5G concept.
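
To get a feel for how those beams stay separate, here is a generic conjugate-beamforming sketch for a 32-antenna array. It is a simplified stand-in under assumed conditions: a one-dimensional, line-of-sight array and made-up user angles, whereas Samsung’s FD-MIMO precoder is proprietary and steers in both azimuth and elevation (the “full-dimensional” part).

```python
import numpy as np

# Generic conjugate-beamforming sketch for a 32-antenna base station.
# One-dimensional, line-of-sight geometry with assumed user angles;
# it illustrates beam separation, not Samsung's actual precoder.
n_ant = 32
user_angles = [-40.0, -10.0, 15.0, 50.0]      # degrees from broadside (assumed)

def steering(theta_deg):
    """Phase profile across a half-wavelength-spaced linear array."""
    n = np.arange(n_ant)
    return np.exp(1j * np.pi * n * np.sin(np.deg2rad(theta_deg)))

H = np.column_stack([steering(a) for a in user_angles])  # channel per user
W = H / n_ant                                            # conjugate beamformer

# Entry (j, k): gain of beam k at user j, in dB. The diagonal (~0 dB) is
# each intended user; off-diagonal entries are the residual interference
# that would otherwise garble overlapping transmissions.
G = 20 * np.log10(np.abs(H.conj().T @ W) + 1e-12)
print(np.round(G, 1))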

(For a quick look under the hood of Samsung’s FD-MIMO, see the Xcell blog of Xilinx’s Steve Leibson.)

On Day 2, engineers from Nokia demonstrated another cornerstone of 5G: higher frequency. In what Nokia’s head of North American radio research Amitabha Ghosh called the first public demonstration of their 5G base station (it debuted at an invitation-only event at the Brooklyn 5G Summit last April), Ghosh and his colleagues sent two MIMO streams across the stage. The 73-GHz (~4 mm) signals used 2 GHz of bandwidth to achieve a combined throughput greater than 10 gigabits per second with a latency under 1 millisecond. (To see the video, go to the NI Week web site, click the “Wednesday” tab, and select “Nokia.”)

Nokia’s proof-of-concept system is the latest iteration of a 1 GHz demonstration displayed last year that used fully programmable FPGA components. Ghosh also reported that 73-GHz signals had been successfully transmitted outdoors to fast-moving receivers and over distances of more than 200 meters.

The results are significant. Some parts of the millimeter-wave spectrum are open in part because they span the resonant frequencies of atmospheric water and oxygen, which produce spikes in atmospheric absorption. While there is a local attenuation trough around 73 GHz (between flanking spikes at 60 and 120 GHz), atmospheric losses are still about 20 times higher than they would be for a 1-GHz carrier. That circumstance had bred widespread doubt that useful signals could be carried at all in this part of the spectrum; these results have helped to quiet those doubts.

For more background, see the 5G vision white papers from Samsung and Nokia.

A Computer That Can Sniff Out Septic Shock

Dr. David Hagar treats dozens of patients each day in the intensive care unit at Johns Hopkins Hospital in Maryland. One of his patients was almost perfectly healthy except for having low blood pressure. Within four hours, the patient died of septic shock.

A group of computer scientists at Johns Hopkins University partnered with Hagar and created an algorithm that can predict septic shock, giving clinicians more time to treat someone at risk.


FCC Sets Rules for Copper Phase Out

The U.S. Federal Communications Commission set new ground rules for carriers seeking to replace their old copper telephone networks. Approved by a 3-2 vote at an open meeting yesterday, the rules require carriers to notify customers in advance and to seek FCC approval before reducing services. 

Home landline service has dropped dramatically with the spread of mobile phones. In 2000, almost every U.S. household had a landline phone. Since then, many have dropped landline service, and nearly 50 million of the remaining lines have switched to Voice over IP, which sends voice calls in the user's broadband data stream rather than over traditional telephony’s copper wire pairs. FCC chairman Tom Wheeler and others have been pushing to shift telephone traffic to fiber optics and the Internet.

Critics have charged that phone companies are allowing their old copper networks to decay to force customers to shift to fiber service. But some 37 million households—many of them headed by elderly people—remain on legacy copper, commissioner Mignon Clyburn noted at the hearing. Other holdouts live in rural areas that lack cellular and broadband service. Some prefer copper connections because they are independent of local power lines and offer better 911 emergency service.

The FCC ruling requires that carriers notify retail customers at least three months before shutting down a copper network, and provide six months’ notice to interconnecting carriers using the old lines. (Clyburn complained that that’s much less time than the FCC gave before shutting down analog broadcast television, but voted for the measure anyway.) Carriers also must seek FCC approval if the telephone changeover would "discontinue, reduce or impair" service. Details remain to be worked out, but key issues are voice quality and support for 911 emergency calls, alarms, and medical monitors, as well as assistive technology for the disabled.

Two dissenting commissioners complained that the new rules would extend regulations and slow adoption of new technology. But Wheeler said that changing technology should not be "an opportunity to erase the historical reality of the responsibility of network [operators] to the users of those services." 

In a separate vote, all five commissioners agreed to require carriers to offer customers backup power supplies that maintain phone service during prolonged power outages. Traditional copper phone lines are independent of local power and have a reputation for being more reliable than power grids. But that hasn’t stopped landline users from buying cordless phones that go down with the grid.

