Tech Talk

Our Mars Colonization Plan Is Feasible, Mars One CEO Insists

When you start talking about big splashy space exploration plans—say sending the first humans to Mars on a private mission supported by a reality TV deal, eager (theoretical) billionaires, and burials in space—things can get surreal pretty fast. 

So it was last week, when Bas Lansdorp, CEO of Mars One, set out to debate two MIT aerospace engineers on what should have been a simple question: is the company’s plan to put humans on Mars feasible? By the end, it was not at all clear how Mars One defines the word “plan,” or why, after publicly admitting it won’t stick to the schedule outlined on its website, the company has been so specific about timelines and budgets.

In case you haven’t been following the saga, Mars One is a private campaign to send volunteers on one-way trips to the Red Planet, where they will live out the rest of their lives in a permanent settlement and send video dispatches of their activities back to Earth. Announced in 2012, the company has very publicly hunted for volunteers for this mission and recently whittled down that list to 100 candidates.

The Mars One plan begins with robotic missions to help set up the habitat and deliver supplies to the Red Planet. The first crew of four would arrive on Mars in 2027 (originally pegged for 2023). Additional four-person crews would follow every two years after that. To sustain this growing settlement, the mission would likely rely on a mix of supply missions and in situ resource utilization, baking Martian soil to extract water and oxygen and pulling nitrogen from the Martian atmosphere to add to the settlement’s supply.

Skepticism has been a running theme since Mars One was announced. In 2012, Wired gave the company’s plan a plausibility rating of 2 out of 10. And last year, researchers from MIT’s department of aeronautics and astronautics performed an independent technical analysis of the Mars One plan. They found multiple problems. For one thing, spare parts would take up an increasingly large fraction of the available launch mass. And if the astronauts grew all their own food, the plants could create unsafe oxygen levels and rapidly deplete the habitat’s nitrogen gas supplies within a couple of months. News stories led with the suffocation angle. Mars One didn’t react well to the criticism; one team member called the findings “made up and fake.”

Last week’s event took place at an annual meeting of The Mars Society in Washington, DC. The debate pitted two of the MIT study authors, Sydney Do and Andrew Owens, against Lansdorp and aerospace consultant Barry Finger of Paragon Space Development Corporation, which recently conducted a study (pdf) of Mars One’s life support needs.

The company has said it will need $6 billion to get the first humans to Mars by 2027. Do and Owens focused their analysis on those numbers, asking whether the company’s plan could be accomplished by that date with that amount of money, in adherence to the “iron triangle” of project management.

Mars One claims the major technology needed to accomplish the company’s plan already exists. But Do and Owens laid out a rather daunting list of things that still need to be developed in order for the mission to succeed. The company will need to land masses at least twice as heavy as NASA’s Curiosity rover, the heaviest thing yet landed on the surface of the planet. The crew’s habitat must have life support that can survive for the 26 months between resupply missions, a level of endurance they say is 23 months beyond that of the International Space Station’s systems. And the robotic spacecraft that will arrive before humans must have an unprecedented level of capability. Mars One aims to use an intelligent rover to set up the habitat for the humans to follow. “Right now we can’t do this on Earth and this is expected to be done on Mars,” Do said.

Here’s the slide showing the team’s comparison between the Mars One plan and the first 8 years of the Apollo program. Their full presentation can be found here.

In response to these criticisms, Lansdorp said that “Mars One’s goal is not to send humans to Mars in 2027 with a $6 billion budget and 14 launches. Our goal is to send humans to Mars, period.” Then, more cryptically, he added, “For that reason I actually consider the study that Andrew and Sydney did a confirmation of Mars One’s plan.”

Lansdorp’s presentation contained a single slide, showing how the concept for Apollo launch vehicles changed over time.

“We’re not going to do, I think, the current design of the mission,” Lansdorp said. He noted that the organization’s plans were based on preliminary work and would change with additional study findings. As an example, Lansdorp cited the recent Paragon study, which found that the mass of the life support system would be higher than expected.

The company is currently seeking $15 million to finance the buildup of its team and commission additional studies—in particular, one by Lockheed Martin on the entry, descent, and landing stages of the mission. Lansdorp added that it’s not impossible that a billionaire might call up and offer to finance the whole endeavor, which would speed the work along.

It was hard not to come away from this debate thinking the two sides were talking at right angles to one another. Owens and Do took Mars One’s numbers seriously in their analysis. Lansdorp seems to consider the company’s cost estimates and launch dates as notional, or aspirational, figures. 

In discussions with others at the meeting, I’d wondered aloud what repeated delays might do to the image of the company. But Do voiced an even bigger concern after the debate: if Mars One deflates, what will happen when the next plan to go to Mars comes along? Even if the new effort is deemed technically sound and eminently accomplishable, will anyone pay it any mind?

3D-Printed Plastic Sensor Could Make Siri a Better Listener

Apple’s Siri and Microsoft’s Xbox video game consoles still sometimes struggle to hear their owners in a noisy room. A 3-D printed sensor prototype could solve that problem by giving electronic devices the sensitivity to pick out a single voice or sound.

Read More

4-D Light Field Displays Are Exactly What Virtual Reality Needs

Ever since our first experience with a prototype of the Oculus Rift, we’ve been getting more and more excited about high quality consumer virtual reality hardware. The first production version of the Rift is almost here, and when it arrives (probably in early 2016), you might even be able to justify its rumored $1,500 cost.

Good as the Rift is (and it’s very, very good), it’s taken this long for Oculus to get the hardware ready because fooling your eyes and brain to the extent that the Rift (or any other piece of VR hardware) does is a very tricky thing to pull off. The vast majority of us have an entire lifetime of experience of looking at the world in 3-D, and we notice immediately when things aren’t quite right. This can lead to headaches, nausea, and a general desire never to try VR ever again. A big part of what makes VR so difficult is that it’s continually trying to convince your eyes that they should be focusing on a scene in the distance, when really, they’re looking at a screen just a few inches away.

The current generation of VR displays uses a few different techniques to artificially generate images that appear to have depth despite being displayed on flat screens. But there’s one that they're missing out on—one that could make VR displays much more comfortable to use. The same sort of 4-D light field technology that allows the Lytro camera to work its magic could solve this problem for VR as well.

Read More

Microresonators: Transmitting 40 Communication Channels with One Laser

Interconnections within powerful computers and links between "blades" in data centers will increasingly rely on optical communication. Currently, each channel still requires its own laser with its own control circuitry. Now researchers at Purdue University have developed a new technology that allows a single laser to transmit data over a number of individually controlled channels, at different frequencies, simultaneously. They published this research online in the 10 August edition of the journal Nature Photonics.

The key component of this technology is a tiny microresonator: a 100-micrometer-wide optical waveguide loop, or microring, made from silicon nitride. Because it is as thin as a sheet of paper, it can easily be integrated on silicon chips. The microresonator replaces a whole tabletop studded with the optical components and resonators now required to create a mode-locked laser.

In the experimental setup, a pump laser is connected to the resonator. The researchers pump the resonator with a continuous-wave laser at one frequency, explains Minghao Qi, an associate professor of electrical and computer engineering at Purdue. The resonator, though small, can hold a huge amount of power, which leads to non-linear interaction. “Normally, if we pump anything into the resonator, and the interaction is linear, the input and output frequencies are the same,” says Qi.  “When the interaction is non-linear, it basically generates higher-order harmonics—new frequencies.”

Qi adds that, because the spacing between the different frequency peaks is the same, the resonator is called a frequency comb. The frequencies can be tuned by changing the resonance frequency of the resonator. This is achieved with an electric heater, a tiny gold wire overlaying the resonator. Changing the temperature changes the resonator’s refractive index, which in turn changes the resonance frequency.
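The comb picture is easy to sketch numerically: lines sit at the pump frequency plus integer multiples of the ring’s free spectral range (the line spacing). The pump wavelength and spacing below are illustrative assumptions for a ring of roughly this size, not values reported by the Purdue team.

```python
# Toy model of a microresonator frequency comb: equally spaced lines
# around the pump. Pump wavelength and spacing are assumptions for
# illustration, not figures from the Purdue paper.

C = 299_792_458.0  # speed of light, m/s

def comb_frequencies(pump_hz, fsr_hz, n_lines_each_side):
    """Comb line n sits at f_pump + n * FSR, for n = -N..N."""
    return [pump_hz + n * fsr_hz
            for n in range(-n_lines_each_side, n_lines_each_side + 1)]

# Assume a pump near 1550 nm (the usual telecom band) and a ~200 GHz
# free spectral range, plausible for a ring of this scale.
pump = C / 1550e-9              # ~193.4 THz
fsr = 200e9                     # assumed line spacing, Hz
lines = comb_frequencies(pump, fsr, 20)   # 41 lines -> 40+ channels

spacings = [b - a for a, b in zip(lines, lines[1:])]
print(f"{len(lines)} comb lines, spacing {fsr / 1e9:.0f} GHz")
```

Tuning the resonance (via the heater) simply shifts `pump`’s nearest resonance and, with it, every line of the comb together.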

While the experimental setup works well with discrete light pulses, the researchers also noted the presence of “dark pulses,” or very short intervals where no light is transmitted. These intervals can occur every one or two picoseconds, which is a hundred times faster than the switching speed of the most advanced microprocessors. “The advantage of a dark pulse is that this can be repeatedly generated and that means it is very reliable and we can control it. If you want a bright pulse, then it is a very tricky process,” says Qi.

According to the Purdue researchers, they showed that dark pulses can be converted into bright pulses. “So by creating a dark pulse first, you have a process that is robust and controllable,” says Qi.

Besides facilitating high-volume optical communications in computers, microresonators could also be used in optical sensors and in spectroscopy. If you want to probe a compound at many different wavelengths you can use a tunable laser to excite the molecule at those different wavelengths. With conventional lasers, you have to tune the laser to a different frequency for every measurement, which takes time. What’s more, tunable lasers are expensive, explains Qi. But with the Purdue team’s improved laser, “If your probe light itself has many, many frequencies, you are basically doing a spectral scanning, with all the frequencies in one shot,” Qi adds.

For the moment, the scientists have yet to put the microresonator on a chip with all the other components. “This will be our next step,” says Qi.

Model Predicts Whether NFL Teams Will Run or Pass

National Football League (NFL) playbooks are the size of telephone books. They’re filled with dozens and dozens of plays, each designed so that a team can play to its strengths while taking advantage of its opponents’ weaknesses. Despite the endless variations, they all basically boil down to two options for the offense: pass or run. No matter how intricately designed an offensive play is, if the defense can sniff out whether the ball will be tossed down field or toted along the ground, it gains a tremendous advantage. (Yes, we know that teams punt and kick field goals and extra points after touchdowns. But we’re not talking about that right now.) 

Earlier this week, a pair of statisticians from North Carolina State University showed off a model they built that predicts whether a specific team will call a passing or running play with a high degree of accuracy. They presented the model in Seattle at JSM 2015, a joint conference of statistical and mathematical societies.

William Burton, an undergraduate who is majoring in industrial engineering and minoring in statistics, and Michael Dickey, who graduated in May with a degree in statistics, used a listing of actual NFL offensive plays from the 2000 through 2014 seasons that had been compiled by a company called Armchair Analysis to figure out the ratio of passes to runs. They showed empirically what fans already understood anecdotally: the aerial attack is being utilized ever more frequently. Pass plays were called 56.7 percent of the time in 2014, compared with 54.4 percent in 2000.

But what makes a team decide whether to run or throw? Burton and Dickey looked at a host of factors that affect a team's play selection. Among these are: the distance to the first-down marker, whether it’s first, second, third or fourth down, how much time is left on the game clock, the team’s score in relation to its opponent’s, and field position. For example, there’s a high probability that the coach will opt for a passing play if the other team is leading by three points, there’s a minute left in the fourth quarter, the offense is facing third down at its own 30-yard line, and needs to advance 7 yards to pick up a fresh set of downs. On the other hand, a team that’s leading by 7 points, facing the same down and distance at the same point in the game, might very likely run the ball (to avoid an interception and to take time off the clock so the other team can’t mount a score-tying drive before time runs out).

For their system, Burton and Dickey developed logistic regression models—methods used to, for example, predict if someone will default on a mortgage—and random forest models—a machine learning method. But they quickly realized that teams’ strategies differ significantly in each of a game’s quarters. To account for that, they produced six separate logistic regression models: one each for the first, second, and third quarters, plus one for the fourth quarter if the offensive team is winning, another if it is losing, and a third for when the score is tied. They tested their models on 20 randomly selected games. Overall, the models accurately predicted pass or run on 75 percent of downs. The models performed best on a 2014 game between the Jacksonville Jaguars and Dallas Cowboys. Their predictions proved correct on 109 out of 119 offensive plays—a 91.6-percent accuracy rate.
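A minimal sketch of what one of these quarter-specific logistic regression models looks like in practice. The feature set follows the article; the weights are invented for illustration and are not the fitted values from Burton and Dickey’s models.

```python
import math

# Hypothetical pass/run logistic regression using the features the
# article lists. The weights are made up for illustration only.
WEIGHTS = {
    "intercept":      0.3,
    "yards_to_first": 0.12,    # farther to go -> more likely to pass
    "down":           0.25,    # later downs -> more likely to pass
    "seconds_left":  -0.0008,  # less time left -> more likely to pass
    "score_margin":  -0.05,    # trailing (negative margin) -> pass
    "field_position": 0.0,     # yards from own goal line
}

def pass_probability(features):
    """P(pass) = sigmoid(w . x) for one play situation."""
    z = WEIGHTS["intercept"] + sum(
        WEIGHTS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# The article's example: trailing by 3, a minute left in the fourth,
# third-and-7 at the offense's own 30. Expect a pass call.
p = pass_probability({
    "yards_to_first": 7, "down": 3, "seconds_left": 60,
    "score_margin": -3, "field_position": 30,
})
print(f"P(pass) = {p:.2f}")   # comfortably above 0.5
```

A fitted model would learn such weights from the Armchair Analysis play data rather than having them set by hand.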

Burton and Dickey say that anyone, including NFL coaches and fans rooting for their teams at home, can use the tool to make educated guesses about what will happen each time the ball is snapped.

DARPA Funds Stage 2 of XS-1 Spaceplane Design Competition

The Space Shuttle was originally intended to make getting to space easy, inexpensive, and routine, with an initial goal of a launch nearly every week. It didn't quite play out that way, and we’re now back to tossing things into orbit on top of massively expensive rockets that are good for only a single one-way trip. It’s a system that works (most of the time), but it's not a system that’s efficient.

Just because the Space Shuttle didn’t magically open orbit up to everyone doesn’t mean that getting to space through a reusable platform is a bad idea. In commercial spaceflight circles, the thinking is that reusability is the best way to help drive down costs. DARPA, the U.S. Defense Department’s blue-sky research arm, wants a piece of the action; but in typical DARPA fashion, the agency is looking for something even more futuristic. It has just awarded nearly $20 million to three aerospace companies to continue design work on a reusable spaceplane that will be able to launch a satellite into space every single day.

Read More

See Through Walls by the Glow of Your Wi-Fi

It used to be that a bad guy besieged by police could just shoot out the lights and hide in the dark. As if it weren’t enough that today’s cornered malefactors have to worry about night vision goggles, tomorrow’s thugs may also have to worry about the soft radio glow of wireless routers and mobile communications towers.

Researchers at University College London (UCL) have devised a system for detecting the Doppler shifts of ubiquitous Wi-Fi and mobile telephone signals to “see” people moving, even behind masonry walls 25 centimeters thick. The method, which could be useful in situations from hostage-takings to traffic control, won the Engineering Impact Award in the RF and Communications category at National Instruments’ NI Week 2015 meeting (which convened in Austin, Tex., 3-9 August).

Other researchers—notably Dina Katabi and Fadel Adib of MIT—have built through-wall radars in the household communication bands, but these are active radars that transmit as well as receive. The UCL technique uses only passive radiation—from Wi-Fi routers (using emissions under any of the IEEE 802.11b, g, n, or ac standards), ambient GSM and LTE mobile signals, and other sources—so there is nothing to betray the surveillance. The system calculates the position of a hidden target by comparing two signals: a reference channel, receiving the baseline signal from the Wi-Fi access point or other RF source, and a surveillance channel, which picks up Doppler-shifted waves reflecting from the moving subject.

Tan and company built their “high Doppler resolution passive Wi-Fi radar” on two multi-frequency, software-defined, FPGA-based transceivers (National Instruments’ USRP, or Universal Software Radio Peripheral). The system compares the reference and surveillance signals, interprets the very small frequency shifts, and reveals the hidden subject’s location and motion.
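The core reference-versus-surveillance comparison can be sketched in a few lines: conjugate-mix the two channels, then find the spectral peak of the slowly varying product to read off the Doppler shift. This is a toy simulation with made-up baseband samples, not the UCL implementation.

```python
import cmath
import math

def doppler_estimate(ref, surv, fs):
    """Estimate the Doppler shift by conjugate-mixing the surveillance
    channel against the reference, then locating the DFT peak of the
    product. A naive O(n^2) DFT keeps this stdlib-only."""
    mixed = [s * r.conjugate() for r, s in zip(ref, surv)]
    n = len(mixed)
    best_bin, best_mag = 0, 0.0
    for k in range(n):
        acc = sum(m * cmath.exp(-2j * math.pi * k * i / n)
                  for i, m in enumerate(mixed))
        if abs(acc) > best_mag:
            best_bin, best_mag = k, abs(acc)
    return best_bin * fs / n  # Hz

# A person walking toward the receiver at ~1 m/s shifts a 2.4 GHz
# Wi-Fi signal by roughly 2*v*f/c ~ 16 Hz. Simulate that at baseband:
fs, n, f_d = 400.0, 400, 16.0
ref  = [complex(1.0, 0.0) for _ in range(n)]  # unshifted reference
surv = [cmath.exp(2j * math.pi * f_d * t / fs) for t in range(n)]
print(doppler_estimate(ref, surv, fs))  # ~16 Hz
```

The real system faces the harder parts this sketch omits: suppressing the direct-path signal that leaks into the surveillance channel and integrating long enough to resolve shifts of a few hertz.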

By tweaking the processing parameters—increasing signal-integration time and lowering sensitivity thresholds—the engineers could coax the passive radar into “seeing” quite subtle movements, even hand gestures. At this stage, the device doesn’t produce anything resembling a photograph of the subject. It delivers a radar-style scatter plot, a flare of color that says, “Here he is!” along with a variety of signal data. The system is described in more detail in a paper that Tan and UCL colleagues Qingchao Chen, Karl Woodbridge, and Kevin Chetty presented at the 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), held 19-24 April in South Brisbane, Australia.

Samsung, Nokia Show 5G Tech at NI Week

Two on-stage demonstrations at National Instruments’ NI Week, which was held in Austin, Texas, from 3–6 August, showed off working infrastructure prototypes for the 5G networks that will be the backbone of Big Data applications and the Internet of Things (the conference’s themes). 5G’s goals include hundred-fold performance improvements over today’s 4G systems, allowing the transmission of tens of megabits per second to thousands of mobile users, and offering peak rates well north of a gigabit per second within an office.

To open Day 1, researchers from Samsung showed off full-dimensional multiple-input, multiple-output (FD-MIMO), one of several parallel techniques for squeezing more information through the airwaves. FD-MIMO sculpts the signals fed to an array of transmission antennas to form virtual beams that can lock in on multiple receivers in three dimensions. This cuts interference from overlapping simultaneous transmissions to other receivers and increases the power of the signal that reaches the target.
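The beam-steering idea behind FD-MIMO can be illustrated with a one-dimensional toy: weight each antenna of a uniform linear array so the signals add in phase toward the intended user and largely cancel toward the others. This is a simplified 1-D stand-in for Samsung’s 3-D system; the array size matches the demo’s 32 antennas, but the user angles are invented.

```python
import cmath
import math

def steering_vector(n_ant, angle_rad, spacing=0.5):
    """Per-antenna phase ramp for a uniform linear array;
    spacing is in wavelengths."""
    return [cmath.exp(-2j * math.pi * spacing * k * math.sin(angle_rad))
            for k in range(n_ant)]

def array_gain(weights, angle_rad):
    """Magnitude of the array response in a given direction."""
    v = steering_vector(len(weights), angle_rad)
    return abs(sum(w.conjugate() * x for w, x in zip(weights, v)))

# Matched-filter (conjugate) beamforming toward one of four users,
# at invented angles, with the demo's 32 antennas.
n_ant = 32
users = [math.radians(a) for a in (-40, -10, 15, 50)]
w = steering_vector(n_ant, users[2])      # beam aimed at user 2

gains = [array_gain(w, u) for u in users]
print([round(g, 2) for g in gains])       # strong peak at user 2
```

The coherent gain toward the target is the full 32, while the response toward the other users is orders of magnitude weaker, which is exactly the interference suppression the demo showed.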

The Samsung demo base station transmitted simultaneously at different data rates to four separate receivers. For demonstration purposes, the base station transmitted at 3.5 gigahertz (~86 millimeter wavelength), though production transmitters will likely use carriers in the tens-of-gigahertz range. (The receivers were configured on NI USRP RIOs, or Universal Software Radio Peripheral Reconfigurable IO, a transceiver that can be programmed to reproduce the characteristics of a variety of RF devices over a range of frequencies.)

Initially, in conventional broadcast mode, interference between the four streams garbled the signals and kept any of the data from getting through. Switching to FD-MIMO, however, modulated the signals produced by each of the base station’s 32 antennas to allow beamforming in three dimensions. The transmitter homed in on each of the four receivers to push through separate, clear signals. Throughputs measured at the receivers jumped from essentially zero to as much as 28 megabits per second. The Samsung engineers cautioned, though, that the demonstration was intended to show how much FD-MIMO can improve signal quality, not to showcase a full-blown 5G concept.

(For a quick look under the hood of Samsung’s FD-MIMO, see the Xcell blog of Xilinx’s Steve Leibson.)

On Day 2, engineers from Nokia demonstrated another cornerstone of 5G: higher frequency. In what Nokia’s head of North American radio research Amitabha Ghosh called the first public demonstration of their 5G base station (it debuted at an invitation-only event at the Brooklyn 5G Summit last April), Ghosh and his colleagues sent two MIMO streams across the stage. The 73-GHz (~4 mm) signals used 2 GHz of bandwidth to achieve a combined throughput greater than 10 gigabits per second with a latency under 1 millisecond. (To see the video, go to the NI Week web site, click the “Wednesday” tab, and select “Nokia.”)

Nokia’s proof-of-concept system is the latest iteration of a 1 GHz demonstration displayed last year that used fully programmable FPGA components. Ghosh also reported that 73-GHz signals had been successfully transmitted outdoors to fast-moving receivers and over distances of more than 200 meters.

The results are significant. Some parts of the millimeter-wave spectrum are open in part because they span the resonant frequencies of atmospheric water and oxygen, which produce spikes in atmospheric absorption. While there is a local attenuation trough around 73 GHz (between flanking spikes at 60 and 120 GHz), atmospheric losses are still about 20 times higher than they would be for a 1-GHz carrier. This circumstance had bred widespread doubt that useful signals could be carried at all in that part of the spectrum, doubt that these results have helped to quiet.

For more background, see the 5G vision white papers from Samsung and Nokia.

A Computer That Can Sniff Out Septic Shock

Dr. David Hagar treats dozens of patients each day in the intensive care unit at Johns Hopkins Hospital in Maryland. One of his patients was almost perfectly healthy except for having low blood pressure. Within four hours, the patient died of septic shock.

A group of computer scientists at Johns Hopkins University partnered with Hagar and created an algorithm that can predict septic shock and give clinicians more time to treat someone at risk.

Read More

