Tech Talk

DARPA Funds Stage 2 of XS-1 Spaceplane Design Competition

The Space Shuttle was originally intended to make getting to space easy, inexpensive, and routine, with an initial goal of a launch nearly every week. It didn't quite play out that way, and we’re now back to tossing things into orbit on top of massively expensive rockets that are good for only a single one-way trip. It’s a system that works (most of the time), but it’s far from an efficient one.

Just because the Space Shuttle didn’t magically open orbit up to everyone doesn’t mean that getting to space on a reusable platform is a bad idea. In commercial spaceflight circles, the thinking is that reusability is the best way to drive down costs. DARPA, the U.S. Defense Department’s blue-sky research arm, wants a piece of the action, but in typical DARPA fashion, the agency is looking for something even more futuristic. It has just awarded nearly $20 million to three aerospace companies to continue design work on a reusable spaceplane that would be able to launch a satellite into space every single day.

See Through Walls by the Glow of Your Wi-Fi

It used to be that a bad guy besieged by police could just shoot out the lights and hide in the dark. As if it weren’t enough that today’s cornered malefactors have to worry about night vision goggles, tomorrow’s thugs may also have to worry about the soft radio glow of wireless routers and mobile communications towers.

Researchers at University College London (UCL) have devised a system for detecting the Doppler shifts of ubiquitous Wi-Fi and mobile telephone signals to “see” people moving, even behind masonry walls 25 centimeters thick. The method, which could be useful in situations from hostage-takings to traffic control, won the Engineering Impact Award in the RF and Communications category at National Instruments’ NI Week 2015 meeting (which convened in Austin, Tex., 3–6 August).

Other researchers—notably Dina Katabi and Fadel Adib of MIT—have built through-wall radars in the household communication bands, but these are active radars that transmit as well as receive. The UCL technique uses only passive radiation—from Wi-Fi routers (using emissions under any of the IEEE 802.11 b, g, n, or ac standards), ambient GSM and LTE mobile signals, and other sources—so there is nothing to betray the surveillance. The system calculates the position of a hidden target by comparing two signals: a reference channel, receiving the baseline signal from the Wi-Fi access point or other RF source, and a surveillance channel, which picks up Doppler-shifted waves reflecting from the moving subject.

Tan and company built their “high Doppler resolution passive Wi-Fi radar” on two multi-frequency, software-defined, FPGA-based transceivers (National Instruments’ USRP, or Universal Software Radio Peripheral). The system compares the reference and surveillance signals, interprets the very small frequency shifts, and reveals the hidden subject’s location and motion.
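The core of that comparison can be sketched in a few lines. The toy example below (illustrative only; the sample rates, signal models, and names are assumptions, not details of the UCL system) correlates the surveillance channel against Doppler-shifted copies of the reference channel (a zero-delay cut of the cross-ambiguity function used in passive radar) and picks the shift that correlates best:

```python
import numpy as np

def doppler_spectrum(ref, surv, fs, max_doppler=50.0, n_bins=201):
    """Correlate the surveillance channel against Doppler-shifted copies
    of the reference channel (a zero-delay cut of the cross-ambiguity
    function). ref, surv: complex baseband samples; fs: sample rate in Hz.
    Returns (doppler_bins_hz, correlation_magnitude)."""
    t = np.arange(len(ref)) / fs
    bins = np.linspace(-max_doppler, max_doppler, n_bins)
    mag = np.empty(n_bins)
    for i, fd in enumerate(bins):
        shifted = ref * np.exp(2j * np.pi * fd * t)  # trial Doppler shift
        mag[i] = np.abs(np.vdot(shifted, surv))      # coherent correlation
    return bins, mag

# Toy demo: an echo Doppler-shifted by +7 Hz, buried in noise.
fs, n = 1000.0, 4096
rng = np.random.default_rng(0)
ref = rng.standard_normal(n) + 1j * rng.standard_normal(n)  # stand-in for a Wi-Fi waveform
t = np.arange(n) / fs
surv = 0.1 * ref * np.exp(2j * np.pi * 7.0 * t)             # reflection off a moving subject
surv = surv + 0.01 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
bins, mag = doppler_spectrum(ref, surv, fs)
print(f"Estimated Doppler shift: {bins[np.argmax(mag)]:.1f} Hz")  # ~7.0 Hz
```

Longer integration windows sharpen the Doppler resolution, which is why increasing the signal-integration time, as described below, lets the radar register subtler motions.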

By tweaking the processing parameters—increasing signal-integration time and lowering sensitivity thresholds—the engineers could coax the passive radar into “seeing” quite subtle movements, even hand gestures. At this stage, the device doesn’t produce anything resembling a photograph of the subject. It delivers a radar-style scatter plot, a flare of color that says, “Here he is!” along with a variety of signal data. The system is described in more detail in a paper that Tan and UCL colleagues Qingchao Chen, Karl Woodbridge, and Kevin Chetty presented at the 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), held 19–24 April in South Brisbane, Australia.

Samsung, Nokia Show 5G Tech at NI Week

Two on-stage demonstrations at National Instruments’ NI Week, which was held in Austin, Texas, from 3–6 August, showed off working infrastructure prototypes for the 5G networks that will be the backbone of Big Data applications and the Internet of Things (the conference’s themes). 5G’s goals include hundred-fold performance improvements over today’s 4G systems, allowing the transmission of tens of megabits per second to thousands of mobile users, and offering peak rates well north of a gigabit per second within an office.

To open Day 1, researchers from Samsung showed off full-dimensional multiple-input, multiple-output (FD-MIMO), one of several parallel techniques for squeezing more information through the airwaves. FD-MIMO sculpts the signals fed to an array of transmission antennas to form virtual beams that can lock in on multiple receivers in three dimensions. This cuts interference from overlapping simultaneous transmissions to other receivers and increases the power of the signal that reaches the target.

The Samsung demo base station transmitted simultaneously at different data rates to four separate receivers. For demonstration purposes, the base station transmitted at 3.5 gigahertz (~86 millimeter wavelength), though production transmitters will likely use carriers in the tens-of-gigahertz range. (The receivers were configured on NI USRP RIOs, or Universal Software Radio Peripheral Reconfigurable IO, a transceiver that can be programmed to reproduce the characteristics of a variety of RF devices over a range of frequencies.)

Initially, in conventional broadcast mode, interference between the four streams garbled the signals and kept any of the data from getting through. Switching to FD-MIMO, however, modulated the signals produced by each of the base station’s 32 antennas to allow beamforming in three dimensions. The transmitter homed in on each of the four receivers to push through separate, clear signals. Throughputs measured at the receivers jumped from essentially zero to as much as 28 megabits per second. The Samsung engineers cautioned, though, that the demonstration was intended to show how much FD-MIMO can improve signal quality, not to showcase a full-blown 5G concept.
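Samsung has not published the precise precoding used in the demo, but the idea can be illustrated with zero-forcing precoding, a standard multi-user MIMO technique that places a null toward every unintended receiver. This sketch (hypothetical random channels; only the 32-antenna, four-receiver dimensions come from the demo) shows how the precoder drives inter-user interference toward zero:

```python
import numpy as np

rng = np.random.default_rng(1)
n_tx, n_users = 32, 4            # antenna and receiver counts from the demo

# H[k, :] is user k's channel to the array (estimated in a real system).
H = (rng.standard_normal((n_users, n_tx))
     + 1j * rng.standard_normal((n_users, n_tx))) / np.sqrt(2)

# Zero-forcing precoder: W = H^H (H H^H)^-1, so that H @ W is diagonal and
# each user's beam puts a null toward every other user.
W = H.conj().T @ np.linalg.inv(H @ H.conj().T)
W = W / np.linalg.norm(W, axis=0)   # normalize per-user transmit power

s = np.array([1, -1, 1, -1], dtype=complex)  # one data symbol per user
x = W @ s                                    # waveform across the 32 antennas
y = H @ x                                    # each receiver hears only its own symbol

leakage = H @ W                     # effective channel after precoding
print(np.round(np.abs(leakage), 6)) # ~diagonal: inter-user interference is nulled
```

The demo’s jump from near-zero throughput to 28 Mb/s per receiver reflects this kind of interference nulling, whatever precoder Samsung actually used.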

(For a quick look under the hood of Samsung’s FD-MIMO, see the Xcell blog of Xilinx’s Steve Leibson.)

On Day 2, engineers from Nokia demonstrated another cornerstone of 5G: higher frequency. In what Nokia’s head of North American radio research Amitabha Ghosh called the first public demonstration of their 5G base station (it debuted at an invitation-only event at the Brooklyn 5G Summit last April), Ghosh and his colleagues sent two MIMO streams across the stage. The 73-GHz (~4 mm) signals used 2 GHz of bandwidth to achieve a combined throughput greater than 10 gigabits per second with a latency under 1 millisecond. (To see the video, go to the NI Week web site, click the “Wednesday” tab, and select “Nokia.”)
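As a quick back-of-the-envelope check on those numbers (this arithmetic is illustrative, not from Nokia’s materials):

```python
c = 299_792_458.0                 # speed of light, m/s

wavelength_mm = c / 73e9 * 1e3
print(f"{wavelength_mm:.1f} mm")  # ~4.1 mm, the "~4 mm" carrier above

throughput = 10e9                 # combined rate: > 10 Gb/s
bandwidth = 2e9                   # spectrum used: 2 GHz
streams = 2                       # two MIMO streams
print(throughput / bandwidth)             # ~5 bits/s/Hz overall
print(throughput / bandwidth / streams)   # ~2.5 bits/s/Hz per stream
```

The aggregate rate comes less from exotic modulation than from the sheer width of the 2-GHz channel, which is the point of moving to millimeter waves.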

Nokia’s proof-of-concept system is the latest iteration of a 1-GHz demonstration shown last year that used fully programmable FPGA components. Ghosh also reported that 73-GHz signals had been successfully transmitted outdoors to fast-moving receivers and over distances of more than 200 meters.

The results are significant. Some parts of the millimeter-wave spectrum are open partly because they span the resonant frequencies of atmospheric water and oxygen, which produce spikes in atmospheric absorption. While there is a local attenuation trough around 73 GHz (between flanking spikes at 60 and 120 GHz), atmospheric losses there are still about 20 times higher than they would be for a 1-GHz carrier. This circumstance had bred widespread doubt that useful signals could be carried at all in that part of the spectrum, doubt that these results have helped to quiet.

For more background, see the 5G vision white papers from Samsung and Nokia.

A Computer That Can Sniff Out Septic Shock

Dr. David Hagar treats dozens of patients each day in the intensive care unit at Johns Hopkins Hospital in Maryland. One of his patients was almost perfectly healthy except for low blood pressure; within four hours, the patient had died of septic shock.

A group of computer scientists at Johns Hopkins University partnered with Hagar to create an algorithm that can predict septic shock and give clinicians more time to treat patients at risk.

FCC Sets Rules for Copper Phase Out

The U.S. Federal Communications Commission set new ground rules for carriers seeking to replace their old copper telephone networks. Approved by a 3-2 vote at an open meeting yesterday, the rules require carriers to notify customers in advance and to seek FCC approval before reducing services. 

Home landline service has dropped dramatically with the spread of mobile phones. In 2000, almost every U.S. household had a landline phone. Since then, many have dropped landline service, and nearly 50 million of the remaining lines have switched to Voice over IP, which sends voice calls in the user's broadband data stream rather than over traditional telephony’s copper wire pairs. FCC chairman Tom Wheeler and others have been pushing to shift telephone traffic to fiber optics and the Internet.

Critics have charged that phone companies are allowing their old copper networks to decay to force customers to shift to fiber service. But some 37 million households—many of them headed by elderly people—remain on legacy copper, commissioner Mignon Clyburn noted at the hearing. Other holdouts live in rural areas that lack cellular and broadband service. Some prefer copper connections because they are independent of local power lines, and offer better 911 emergency service.

The FCC ruling requires that carriers notify retail customers at least three months before shutting down a copper network, and give six months' notice to interconnecting carriers using the old lines. (Clyburn complained that that's much less time than the FCC gave before shutting down analog broadcast television, but voted for the measure anyway.) Carriers also must seek FCC approval if the telephone changeover would "discontinue, reduce or impair" service. Details remain to be worked out, but key issues are voice quality and support for 911 emergency calls, alarms, and medical monitors, as well as assistive technology for the disabled.

Two dissenting commissioners complained that the new rules would extend regulations and slow adoption of new technology. But Wheeler said that changing technology should not be "an opportunity to erase the historical reality of the responsibility of network [operators] to the users of those services." 

In a separate vote, all five commissioners agreed to require carriers to offer customers backup power supplies that maintain their phone service during prolonged power outages. Traditional copper phone lines are independent of local power, and have a reputation of being more reliable than power grids. But that hasn't stopped landline users from buying cordless phones that go down with the grid. 

Simple Device Could Convert DC Electric Field To Terahertz Radiation

Terahertz radiation has long been touted as opening up many wondrous possibilities. T-ray technology could allow security officials to detect concealed weapons from a distance, provide accurate medical imaging, and enable high-speed wireless data communication.

One of the challenges in making the technology viable, though, has been developing a compact, efficient, and powerful terahertz source. The sources used today are bulky and costly. Some, such as quantum cascade lasers, require cryogenic temperatures.

A team of physicists now proposes a way to convert DC electric fields into terahertz radiation. They have come up with a seemingly simple nanoscale device—it relies on complex physics, mind you—that consists of a pair of two-dimensional material layers placed on top of a thicker conductor. When a DC electric current is passed through the conductor or the 2-D layer, the device should spontaneously emit terahertz radiation, the researchers say. They report the design this week in the Journal of Applied Physics.

Compared to most other THz sources, which only emit one frequency, the device could be tuned to emit different frequencies in the THz range, says Danhong Huang, a principal research physicist at the Air Force Research Lab in New Mexico. And while this is only a proposed design right now, Huang says that it should, in theory, be possible to make a THz emitter that is several millimeters to a few centimeters in size and emits milliwatts of power.

The 2-D layers could be sheets of any two-dimensional material, such as graphene or its more newly discovered cousins silicene and germanene. Graphene should be ideal because of its high conductivity, Huang says. The conductor, meanwhile, would be a semiconductor such as silicon or gallium arsenide that is doped to make it more conductive. The higher the doping, and hence the conductivity of the conductor, the higher the frequency of the output radiation. Using a metal conductor, for instance, would yield high-frequency infrared radiation.

The device’s underlying mechanism is surface plasmon resonance: the collective oscillation of conducting electrons. The DC field causes plasmon resonance at the thick conductor’s surface and at the interface between the two 2-D layers. The two plasmons couple together and cause an instability in the oscillations, which induces the emission of THz radiation. Terahertz waves range in frequency from 300 GHz to 3 THz, corresponding to wavelengths between 1 mm and 0.1 mm.

By adjusting parameters such as the density of conduction electrons in the material or the strength of the DC electric field, it should be possible to tune the frequency of the resulting terahertz radiation. The device should have a very wide tuning range, from the higher microwave frequencies—microwave radiation spans 300 MHz to 300 GHz—up into the lower THz, Huang says. Making a source that emits higher THz frequencies is challenging because it requires very high voltages that can cause the material to break down.
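For a rough sense of the scaling behind that tunability: in the textbook model of a two-dimensional electron gas, the plasmon frequency grows as the square root of the carrier density. The sketch below uses that generic dispersion relation with assumed, GaAs-like parameters; it is not the coupled-plasmon model from the Journal of Applied Physics paper, but it shows why denser (more heavily doped) conductors push the emission to higher frequencies, and how plausible densities land in the terahertz range:

```python
import numpy as np

e = 1.602176634e-19        # electron charge, C
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
m_e = 9.1093837015e-31     # electron mass, kg

def plasmon_freq_2deg(n_sheet, q, m_eff, eps_r):
    """Textbook 2-D electron-gas plasmon dispersion (Stern, 1967):
    omega^2 = n e^2 q / (2 m* eps0 eps_r). Returns frequency in Hz.
    This is the generic 2DEG formula, not the paper's coupled-plasmon
    model, but it captures the key scaling: f proportional to sqrt(n)."""
    omega = np.sqrt(n_sheet * e**2 * q / (2 * m_eff * eps0 * eps_r))
    return omega / (2 * np.pi)

# Illustrative GaAs-like parameters (assumptions, not from the paper):
n = 1e16          # sheet density: 1e12 electrons per cm^2
q = 1e7           # plasmon wavenumber, 1/m
print(plasmon_freq_2deg(n, q, 0.067 * m_e, 13.0) / 1e12)      # ~2.15 THz
print(plasmon_freq_2deg(4 * n, q, 0.067 * m_e, 13.0) / 1e12)  # ~4.3 THz
# Quadrupling the carrier density doubles the frequency.
```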

The group is now working with experimental researchers to design a prototype THz emitter. The challenge will be to find or develop the best materials and the optimal combination of 2-D layers and a compatible conductor substrate on which the 2-D layers can be grown or deposited, Huang says.

NASA Commissions Ultra High Temp Chips for Venus Landsailing Rover

If you’re going to absolutely insist on exploring the surface of Venus, there are two enormous problems that need to be dealt with. Problem number one is the enormous pressure, and problem number two is the enormous heat. At 90 atmospheres of pressure and just under 500 degrees Celsius at the surface, very little is going to survive down there for long. The best we’ve managed so far is about two hours, in the case of the Soviet Union’s Venera 13 lander.

For a Venus lander mission, active cooling of most of the electronics would be necessary, but it would also need sensors, actuators, and microcontrollers that can stand up to Venus’ surface conditions. Trying to keep this stuff from immediate “puddleification” isn’t easy, but NASA has just thrown a quarter of a million dollars at a University of Arkansas spinoff to develop Venus-resistant chips for a weird little rover.

Google Asks France Not to Require Global Right To Be Forgotten

Google has asked France’s data protection agency, CNIL, to retract an order to apply French right-to-be-forgotten rulings to all Google search results. Since a European court ruling last spring, Google has handled right-to-be-forgotten requests only in country-specific versions of its search results (see IEEE Spectrum’s story, “Google’s Year of Forgetting”). In a blog post last week, Google’s Global Privacy Counsel, Peter Fleischer, wrote that the company’s representatives had asked CNIL “to withdraw” the June order.

European Union residents unhappy with search results for their name can ask search engine providers to remove links from the results by making the case that the links infringe on their privacy and the information is not in the public interest. A web slip-up by Google revealed last month that 95 percent of the requests so far have been by private citizens, not politicians and criminals, The Guardian reports. If the provider doesn’t grant such a request (almost 60 percent of the time for Google, which handles over nine in ten web searches in Europe), individuals can appeal to their country’s data protection authority for a definitive decision.

Yet last year’s court ruling only confirmed that national data protection agencies have the authority to rule in such cases. It did not specify the scope of such decisions. A comment in a February 2015 report by Google’s privacy advisory council hinted at the present conflict. Council member and German federal justice minister Sabine Leutheusser-Schnarrenberger wrote: “Since EU residents are able to research globally, the EU is authorized to decide that the search engine has to delete all the links globally.”

That, Fleischer wrote last week, could set a troubling precedent: “there are innumerable examples around the world where content that is declared illegal under the laws of one country, would be deemed legal in others: Thailand criminalizes some speech that is critical of its King, Turkey criminalizes some speech that is critical of Ataturk, and Russia outlaws some speech that is deemed to be ‘gay propaganda.’ ”

A CNIL representative said it would make a decision on Google’s request in two months, reports the BBC.

Google to Startups: (Mostly) Free Patents Here!

It’s hard to keep up with Google’s adventures in patents these days. No sooner had its Patent Purchase Promotion ended—July 22 was the date by which Google mailed out final purchase contracts—than it announced another experiment, the Patent Starter Program, the very next day. The starter program could be over even more quickly than the purchase promotion, which gave interested parties only three weeks to decide whether to participate (see “Google’s Patent Portal is Closing Fast”).

Rather than offering to buy patents, however, the new program gives them away.

It’s meant to target startups and developers that may be defenseless when it comes to patent protection. From the program description [pdf]:

All too often these days, the first time a startup has to deal with a patent issue is when a patent troll attacks them. Or when a prospective investor may ask them how they are protecting their ideas.

At no cost, participants can choose two patents from a group of three to five that Google identifies as relevant to their businesses. Startups can also search Google’s portfolio of other patents the company has purchased (although not patents granted to Google itself) for assets the company might be willing to sell.

The catch is that participants must join the License on Transfer (LOT) network, a royalty-free cross-licensing arrangement launched by Google and several other companies in July 2014 to combat trolls. Under LOT, each company grants a license to other members, but that license only goes into effect if the patents in question are transferred to a company outside the LOT. That way, if a troll gets hold of the IP, it finds that there’s nobody (in the network) to sue. Google says that LOT network membership fees for the startups in the Patent Starter Program will be waived for two years.

One other catch: Google will accept only the first 50 applicants that meet its revenue criteria—2014 revenue of $500,000 to $20 million. Asked if that fact would create a mad rush by applicants that don’t want to miss a chance at Google’s largesse, the company’s senior product licensing manager, Kurt Brasch, said that such a result wasn’t the intent. Rather, the company wanted to limit the number so it could properly gauge interest in the idea. “We’ll evaluate it afterward, just like our other experiments,” he said.
