Tech Talk


Forever Data in Quartz: The Quest for the Immortal Bit

Researchers in the UK have prototyped the ultimate data backup technology—a monocle-sized transparent disc that stores 360 terabytes, withstands fire and temperatures of up to 1,000 °C, and retains its data for billions of years.

As IEEE Spectrum reported in 2013, the tiny quartz discs store their data as laser-etched nanostructures in the crystal. The nanostructures change the polarization of light passing through them. These changes are read out by an automated microscope algorithm that translates the polarization signals back into the stream of stored data.

The system is called a “5D” data storage system, named for the two extra dimensions, beyond the standard three spatial dimensions, in which polarization allows data to be encoded in the disc. And, says Peter Kazansky, professor at the Optoelectronics Research Centre at the University of Southampton, the idea began as a challenge.
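For a rough sense of what encoding data in those two extra dimensions could look like, here is a toy sketch in Python: each voxel carries a few bits in its position plus an orientation and a retardance value. The levels and bit packing below are invented for the illustration and are not the Southampton team's actual scheme.

```python
# Toy sketch of "5D" voxel encoding (not the Southampton scheme): each
# laser-etched voxel sits at an (x, y, z) position and carries two extra,
# optically readable values -- the nanograting's orientation and the strength
# of its birefringence (retardance). Quantizing each into four levels lets a
# single voxel store four bits.

ORIENTATIONS = [0, 45, 90, 135]               # degrees; encodes 2 bits
RETARDANCE_LEVELS = [0.25, 0.50, 0.75, 1.00]  # arbitrary units; encodes 2 bits

def encode_nibble(nibble):
    """Pack 4 bits into one voxel as an (orientation, retardance) pair."""
    hi, lo = (nibble >> 2) & 0b11, nibble & 0b11
    return ORIENTATIONS[hi], RETARDANCE_LEVELS[lo]

def decode_voxel(orientation, retardance):
    """Recover 4 bits from a measured (orientation, retardance) pair."""
    hi = min(range(4), key=lambda i: abs(ORIENTATIONS[i] - orientation))
    lo = min(range(4), key=lambda i: abs(RETARDANCE_LEVELS[i] - retardance))
    return (hi << 2) | lo

nibble = 0b1011
voxel = encode_nibble(nibble)          # (90, 1.00)
assert decode_voxel(*voxel) == nibble  # readout recovers the original bits
```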


Autonomous Nanosatellites: Satellites that Make Up Their Mind

Nanosatellites, small satellites ranging in size from a shoebox to a small suitcase, are popular because they are cheap (just some communication gear and a few instruments) and because they can piggyback on other space missions. NASA is now preparing to send two CubeSats into orbit around Mars; the small satellites will piggyback on InSight (Interior Exploration using Seismic Investigations, Geodesy and Heat Transport), NASA’s mission to study the interior structure of Mars, now likely to launch in 2018.

Once in orbit, the CubeSats will communicate with the lander and with NASA’s Mars Reconnaissance Orbiter (MRO), which will relay communications to ground control on Earth. Sending a message to Earth can take up to 20 minutes, which is too long to report a problem and then receive urgent corrections or adjustments for such things as misaligned solar panels or antennas.

To avoid such mishaps in space, the satellites should be equipped with autonomous fault correction, something already available in certain drones and driverless cars, argues Hakan Kayal, a researcher at the University of Würzburg in Germany, whose expertise combines space technology and computer science. “These technologies, as found in smartphones with their apps, are more advanced than what we normally find in space,” says Kayal, who adds that in space one turns to more secure devices and software. “We thought it would be a good idea to develop this technology for nanosatellites, allow them to function autonomously, and allow error correction and autonomous control of mission scenarios,” says Kayal. And in future planetary missions, such autonomous satellites will be able to focus their cameras and instruments on transient events without having to wait for ground control.

The Würzburg scientists have now started two space projects, technology missions that will lead to the development and testing of autonomous nanosatellites. The first is the construction of a nanosatellite called SONATE, for “Solutus (independent) Nano Satellite,” which will carry two systems as payload: an “Autonomous Sensor and Autonomous Planning System,” or ASAP, and an “Autonomous Diagnosis System for Satellites,” or ADIA. Strictly a technology mission, it will be launched into Earth orbit in 2019. By simulating faults in the satellite’s systems, the researchers will test how ADIA diagnoses the root cause of a problem. It will also be able to anticipate future errors by analyzing trends within the satellite’s systems. In a follow-up step or project, Kayal says, they hope to be able to correct the errors as well, but that is not yet part of the project.
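As a rough illustration of the kind of diagnosis ADIA is meant to perform, the sketch below checks telemetry channels against limits and flags slow drifts that point toward future faults. The channel names, limits, and drift rule are assumptions for the example, not details of the Würzburg design.

```python
# Minimal sketch of limit checking plus trend analysis, in the spirit of what
# ADIA is described as doing onboard. The channel names, limits, and the
# crude one-step drift projection are illustrative assumptions.

LIMITS = {
    "bus_voltage": (11.0, 13.0),     # volts
    "panel_temp_c": (-40.0, 85.0),   # degrees Celsius
}

def diagnose(history):
    """history maps each telemetry channel to its recent samples (oldest first)."""
    findings = []
    for channel, samples in history.items():
        low, high = LIMITS[channel]
        current = samples[-1]
        if not low <= current <= high:
            findings.append((channel, "out of limits", current))
            continue
        # Trend check: if the drift over this window continued, would the
        # channel leave its limits soon?
        projected = current + (samples[-1] - samples[0])
        if not low <= projected <= high:
            findings.append((channel, "drifting toward limit", projected))
    return findings

telemetry = {
    "bus_voltage": [12.4, 12.1, 11.7, 11.3],   # slowly sagging
    "panel_temp_c": [20.0, 21.0, 22.0, 23.0],  # warming, but comfortably in range
}
print(diagnose(telemetry))  # flags bus_voltage as drifting toward its lower limit
```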

A second project, called NACOMI, for “Nano Satellite Communication Interplanetary,” will focus on advanced communications technologies for nanosatellites that will orbit moons and planets. “NACOMI will be a prototype study in a laboratory setting; we will look at different techniques and decide which are the most promising,” says Kayal.

For planetary astronomers, the autonomous operation of instruments, such as cameras, aboard the nanosatellites will open up exciting new possibilities. “The cameras will be analyzing the scenery all the time, trying to detect changes or movements, such as meteor impacts, lightning, geysers, chemical eruptions, and transient lunar phenomena,” says Kayal. Typically, a satellite will catch these events—many shorter than a second—and store them on board. Communication bottlenecks will not allow it to send images and video continuously, so the satellite will have to autonomously analyze these records and decide what to do with each of them—for example, keep a recording and then tweet to Earth: “What do you want to do with this recording?”
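A minimal sketch of that onboard triage loop, in Python, might look like the following; the frame format and change threshold are invented for the example, and real flight software would of course be far more sophisticated.

```python
# Sketch of the onboard triage loop described above: compare successive
# frames, keep only the moments that show a sudden change (a candidate
# transient event), and downlink short notifications instead of raw video.
# The frame format and change threshold are assumptions for the example.

import numpy as np

CHANGE_THRESHOLD = 12.0  # mean absolute pixel difference that counts as an event

def detect_events(frames):
    """frames: iterable of 2-D numpy arrays from the onboard camera."""
    kept, previous = [], None
    for index, frame in enumerate(frames):
        if previous is not None:
            change = np.mean(np.abs(frame.astype(float) - previous.astype(float)))
            if change > CHANGE_THRESHOLD:
                kept.append({"frame_index": index, "change_score": float(change)})
        previous = frame
    return kept  # only these short records compete for the downlink

rng = np.random.default_rng(0)
frames = [rng.integers(0, 8, size=(64, 64)) for _ in range(10)]
frames[5] = frames[5] + 100            # a simulated flash, e.g. a meteor impact
print(detect_events(frames))           # reports events around frame 5
```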


Paper Skin Mimics the Real Thing

Human skin’s natural ability to feel sensations such as touch and temperature difference is not easily replicated with artificial materials in the research lab. That challenge did not stop a Saudi Arabian research team from using cheap household items to make a “paper skin” that mimics many sensory functions of human skin.


MIT Turns Splashing Water into an Interactive Display

If you get very, very lucky when you're doing the dishes, the water streaming out of your tap might occasionally splash onto a bowl or spoon in just the right way and spread out into a flawless hemispherical water curtain, so perfect that you stop doing the dishes for a while just to admire it.

Students at MIT have learned to do it on purpose. Members of MIT’s 2014 Tangible Interfaces class, taught by Professor Hiroshi Ishii, have created HydroMorph: a “dynamic spatial water membrane” that can turn this pleasingly curvy splash into a flapping bird, form it into an interactive countdown timer, direct it into a cup, and do all kinds of other things that water shouldn’t really be able to do.


Silicon Cochlea Mimics Human Hearing

Cameras and audio equipment are getting better all the time, but mostly through brute force: more pixels, more sensors, and better post-processing. Mammalian eyes and ears beat them handily when it comes to efficiency and the ability to focus only on what’s interesting or important.

Neuromorphic engineers, who try to mimic the strengths of biological systems in manmade ones, have made big strides in recent years, especially with vision. Researchers have made machine-vision systems that only take pictures of moving objects, for example. Instead of taking many images at a steady, predetermined rate, these kinds of cameras monitor for changes in a scene and only record those. This strategy, called event-based sampling, saves a lot of energy and can also enable higher resolution.
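The sketch below shows event-based sampling in its simplest form, assuming a plain frame-differencing model rather than any particular neuromorphic chip.

```python
# Sketch of event-based sampling: rather than storing whole frames at a fixed
# rate, emit only (x, y, timestamp, polarity) events for pixels whose
# brightness changed by more than a threshold. The threshold is an assumption.

import numpy as np

def frame_to_events(prev, curr, timestamp, threshold=15):
    diff = curr.astype(int) - prev.astype(int)
    ys, xs = np.nonzero(np.abs(diff) > threshold)
    return [(int(x), int(y), timestamp, 1 if diff[y, x] > 0 else -1)
            for y, x in zip(ys, xs)]

prev = np.zeros((4, 4), dtype=np.uint8)
curr = prev.copy()
curr[1, 2] = 200                                     # one pixel brightens, e.g. a moving edge
print(frame_to_events(prev, curr, timestamp=0.001))  # [(2, 1, 0.001, 1)] -- a single event
```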

One example is a silicon retina made by Tobi Delbrück of the Institute of Neuroinformatics in Zurich; it was used as the eyes in a robotic soccer goalie. This design, made in 2007, has a 3-millisecond reaction time.

Last week, at the IEEE International Solid-State Circuits Conference in San Francisco, another group showed how this approach can also work for hearing. Shih-Chii Liu, co-leader of the Sensors Group at the Institute of Neuroinformatics, described a silicon cochlea that uses just 55 microwatts of power (three orders of magnitude less than previous versions of the system) to detect sound in a humanlike way.

The neuromorphic auditory system uses two “ears,” each of which can be moved independently of the other. The difference in timing between sound waves reaching the two ears makes it possible to locate the origin of a sound, says Liu. Each silicon ear has 64 channels, each of which responds to a different frequency band, from low pitches to high. These channels mimic the cells in the human cochlea, which also respond to different frequencies (there are about a thousand of them in the real thing).
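To see why two ears are enough to localize a sound, here is a minimal sketch of the interaural-time-difference idea. The sample rate, microphone spacing, and cross-correlation approach are assumptions for the illustration, not a description of Liu's circuit.

```python
# Sketch of localization from an interaural time difference (ITD):
# cross-correlate the two ear signals to find the delay between them, then
# convert the delay to an angle using the speed of sound and the spacing of
# the "ears." All numbers here are assumptions for the illustration.

import numpy as np

FS = 48_000             # sample rate, Hz
EAR_SPACING_M = 0.20    # distance between the two microphones
SPEED_OF_SOUND = 343.0  # m/s

def direction_from_itd(left, right):
    """Return the estimated source angle, in degrees off the midline."""
    corr = np.correlate(left, right, mode="full")
    lag_samples = np.argmax(corr) - (len(right) - 1)
    itd_seconds = lag_samples / FS
    sin_theta = np.clip(itd_seconds * SPEED_OF_SOUND / EAR_SPACING_M, -1.0, 1.0)
    return float(np.degrees(np.arcsin(sin_theta)))

# Simulate a source off to the right: the left ear hears the tone 10 samples late.
t = np.arange(0, 0.02, 1 / FS)
tone = np.sin(2 * np.pi * 150 * t)       # ~150 Hz, like the speech band in Liu's demo
left, right = tone[:-10], tone[10:]
print(round(direction_from_itd(left, right)))   # roughly 21 (degrees toward the right)
```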

Liu connects the silicon cochlea to her laptop and shows what it’s recording with a graph of frequency over time. When we’re quiet, there’s no activity. When one of us speaks into the microphone, there are spikes around the 100-to-200-hertz range. The other channels, ranging from 20 Hz to 20 kilohertz, are not recording.

Liu says her group and Delbrück’s are now working to integrate the silicon cochlea and retina. This could give a humanoid robot a lot more low-power smarts. Besides being more humanlike, multimodal sensing means machines will miss less of what’s going on. This is evident in the way human senses support each other. When you talk to someone in a noisy restaurant, for example, you can’t always hear their every word. But your brain fills in the missing auditory pieces with visual information gathered when you watch their lips.

The neuromorphic researchers want to integrate these smart, low-power sensors with processors running deep learning algorithms. This kind of artificial intelligence does a good job of recognizing what’s going on in an image; some versions can even generate a surprisingly accurate sentence describing a scene. Neural networks excel at understanding and generating speech, too. Combining neuromorphic engineering with deep learning could yield computers that mimic human sensory perception better than ever before.

Asked whether this advance would someday help humans who are deaf or hard of hearing, Liu said the current design wouldn’t work for cochlear implants, so that is not an application her group is pursuing. She notes that although it could work in theory, it would probably involve some fundamental changes in hearing aid design that might cost so much to implement that the ultimate payoff would not be worth the effort.

This post was corrected on 4 April to clarify Liu’s name and affiliation.


Metallic Mesh Becomes Invisible to Antenna Signals

Most of modern science’s attempts to recreate the invisibility cloaks found in TV’s Star Trek and the wizarding world of Harry Potter have focused on bending light waves around the object meant to be hidden. A team of U.S. and Chinese researchers has taken a very different direction by creating the first practical “invisible” material that allows certain electromagnetic signals to pass unimpeded, as they would through air.


Digital Baby Project's Aim: Computers That See Like Humans

Can artificial intelligence evolve as a human baby does, learning about the world by seeing and interacting with its surroundings? That’s one of the questions driving a huge cognitive psychology experiment that has revealed crucial differences in how humans and computers see images.

The study has tested the limits of human and computer vision by examining each one’s ability to recognize partial or fuzzy images of objects such as airplanes, eagles, horses, cars, and eyeglasses. Unsurprisingly, human brains proved far better than computers at recognizing these “minimal” images even as they became smaller and harder to identify. But the results also offer tantalizing clues about the quirks of human vision—clues that could improve computer vision algorithms and eventually lead to artificial intelligence that learns to understand the world the way a growing toddler does.


Gaming: Amazon's New Developer Tools Are a Launch Into "Blue Ocean"

Online retail giant Amazon announced a pair of game developer tools this week. One service, called Lumberyard, is a game engine and developer environment built around Amazon Web Services’ cloud computing environment that developers can use for free. The other, a cloud-based game hosting service called GameLift, may be downplayed in press coverage because it’s buried in Amazon’s press release. But don’t sleep on GameLift, says one game industry expert, because its introduction may wind up being one of the more significant pieces of gaming news in some time.

GameLift could represent Amazon’s play for a whole new gaming marketplace—or two.

“Amazon may be trying to create their own little section of a blue ocean, because the rest of the game market has become so much of a red ocean,” says Michael Sellers, professor of practice at Indiana University’s Media School. Sellers refers to industry shorthand for saturated and already shark-infested waters (red ocean) and new marketplaces in which there are as yet no clear winners or losers (blue ocean).

Amazon designed GameLift, as its press release notes, “to host many different types of shared, connected, regularly-synchronized games including first-person shooters, survival & sandbox games, racing games, sports games, and MOBA (multiplayer online battle arena) games.”

The emerging market GameLift serves, often called e-sports, represents a kind of virtual arena in which whole teams of players can go up against whole teams on the other side. What’s more, thousands or even millions of spectators can not only observe but also interact in creative ways that the best e-sports game designers will no doubt exploit in ways we may not even be able to imagine today.

“It allows for active [spectatorship] as well, so people feel like they can root for their team,” Sellers says. “The growth in [e-sports] is projected to just skyrocket.” In South Korea, for instance, e-sports tournaments in the game League of Legends are a national pastime; the country’s top players enjoy the cultural status of rock stars.

Yet, today, South Korea’s e-sports obsession makes it an outlier compared with many other countries around the world. Sellers predicts, however, that in this sense, the rest of the world may look more like South Korea in five years’ time.

“[Setting up these services] puts Amazon potentially in a very good position,” he says. “There’s the old saying that in a gold rush, you want to be the one selling shovels. This is them, manufacturing really good shovels.”

By contrast, Sellers says, the ocean of mobile games is crimson red. Every day, he says, some 500 new mobile games are placed on the virtual shelves of the Android and iOS app stores. And while the top 10 games frequently rake in a million dollars or more in revenue per day, the average mobile gaming company sees practically none of that financial downpour. The differences are stark: the tenth-most-popular game might bring in $150,000 per day, while the game ranked 100th on an app store’s most-popular list could enjoy a $15,000 daily take. From there, the revenue cliff drops off even more precipitously, and there are thousands of games further down the list.
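To put those figures in perspective, here is a back-of-the-envelope extrapolation that assumes the revenue curve roughly follows a power law fitted to the two numbers quoted above; the lower-rank estimates are illustrative, not data from the article.

```python
# Back-of-the-envelope view of the revenue cliff, assuming the chart follows
# a power law fitted to the two figures quoted above (rank 10 -> $150,000 per
# day, rank 100 -> $15,000 per day). The lower-rank numbers are extrapolation,
# not data from the article.

import math

b = math.log(150_000 / 15_000) / math.log(100 / 10)  # slope: 1.0
a = 150_000 * 10 ** b                                 # scale: 1.5 million

for rank in (10, 100, 1_000, 5_000):
    print(f"rank {rank:>5}: ~${a * rank ** -b:,.0f} per day")
# rank    10: ~$150,000 per day
# rank   100: ~$15,000 per day
# rank  1000: ~$1,500 per day
# rank  5000: ~$300 per day
```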

“I think the combination of Lumberyard and especially GameLift may give people who make a particular kind of game—the short-session, multiplayer, e-sports–like games—a new avenue from which players can discover their games,” Sellers says. “And discoverability is where it’s at right now. That’s the pain point.”

The two new Amazon game developer environments also seem well suited to the high-throughput, low-latency computing that virtual reality games demand—the kinds of games for which Oculus Rift and other VR headsets will serve as the platforms. And that, too, could be another blue ocean that Amazon is looking to colonize with this announcement, Sellers says.

If the name of the game in a gold rush is selling shovels, there are few shovel makers in either e-sports or VR gaming capable of competing with the online retail megalith.

“With the kind of breadth that Amazon is providing with this… they have [an] unassailable advantage if they price it right,” he says.

So for game developers and game consumers, Amazon’s news this week might not change the gaming world overnight. But it might well be a force for democratizing gaming by providing scalable servers and development environments that scrappy game startups can set up on the cheap so they can ramp up to Candy Crush levels—if they’re ever so lucky as to have a hit on their hands.

“They’re playing a long game here,” Sellers says of Amazon’s strategy. “If your time horizon is six months to a year, this is not really relevant. And it’s not going to cause anyone to stop their development and jump over to what Amazon is doing. On the other hand, if your time horizon is two to five years, this is going to be really significant. I would not bet against Amazon on a five-year time scale.”


New App Could Improve Earthquake Warning Using GPS

A new Android app uses smartphone GPS sensors to detect earthquakes and estimate their locations and magnitudes in real time. The technology could lead to a dense global seismic network that can warn people who are a few kilometers away from a quake’s epicenter, giving them a few seconds to find a safe spot before the strongest tremors hit. Scientists at the University of California, Berkeley, have released the free app, called MyShake, on Google Play. 

Earthquake early-warning systems exist today only in Japan and Mexico. The U.S. Geological Survey (USGS) is currently testing a system called ShakeAlert for the western United States. These systems use data from networks of tens to a few hundred seismic stations spaced kilometers apart.

GPS sensors and accelerometers, which take continuous measurements at a particular location, can detect the long-term ground movement at a geological fault that, with a sudden release of built-up forces, results in an earthquake. And since most smartphones today come with these sensors, they could become a free, crowdsourced seismic monitoring network.

“A smartphone network will be very dense, with a sensor or two on every block,” says Qingkai Kong, a graduate student who developed the algorithm at the heart of the MyShake app. “It can supplement the current seismic network. And in places like Haiti or Nepal, where there is no traditional seismic networking but millions of smartphones, this could be a low-cost system to issue warnings and save lives.”

Researchers at the USGS recently reported that the measurements gleaned from commercial GPS devices could indeed improve quake warning.

But the new app is the first practical way to tap into the data that smartphone sensors provide. For one, its clever algorithm can differentiate between quake tremors and normal human activity. It does this by analyzing the frequency and amplitude of the accelerometer signals, Kong says. In simulated tests, the algorithm accurately distinguished quakes from other movement 93 percent of the time. The researchers detailed their algorithm in the journal Science Advances.
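The snippet below sketches the general idea of screening accelerometer traces by frequency and amplitude. It is not MyShake's actual classifier, and the sample rate and thresholds are assumptions.

```python
# Sketch of screening an accelerometer trace by frequency and amplitude --
# not MyShake's actual classifier. The toy rule keeps strong, sustained,
# low-frequency shaking and rejects higher-frequency jiggling; the sample
# rate and thresholds are illustrative assumptions.

import numpy as np

FS = 50  # accelerometer sample rate, Hz (assumed)

def looks_like_quake(accel, max_dominant_hz=5.0, min_amplitude_g=0.02):
    """accel: a few seconds of acceleration samples, in g, as a 1-D array."""
    accel = accel - np.mean(accel)                 # remove gravity and bias
    spectrum = np.abs(np.fft.rfft(accel))
    freqs = np.fft.rfftfreq(len(accel), d=1 / FS)
    dominant = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
    amplitude = np.max(np.abs(accel))
    return amplitude > min_amplitude_g and dominant < max_dominant_hz

t = np.arange(0, 4, 1 / FS)
quake_like = 0.05 * np.sin(2 * np.pi * 2.0 * t)    # strong shaking near 2 Hz
hand_jiggle = 0.05 * np.sin(2 * np.pi * 10.0 * t)  # similar amplitude, 10 Hz
print(looks_like_quake(quake_like), looks_like_quake(hand_jiggle))  # True False
```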

The other key feature of the app is that it runs in the background on a handset and draws a minuscule amount of power. Because GPS is power-hungry, the app uses it only when absolutely necessary. It will briefly activate the phone’s GPS and send information on time, amplitude of the shaking, and the phone’s GPS coordinates only when the handset's accelerometers detect motion that fits an earthquake profile. For most users, a phone running MyShake wouldn’t need to be charged any more frequently than phones without the app, Kong says.
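A simplified version of that duty-cycling logic might look like the sketch below; the sensor, classification, and reporting functions are hypothetical stand-ins, not real Android APIs.

```python
# Sketch of the power-saving flow: watch the accelerometer continuously, and
# power up GPS only long enough to stamp a confirmed trigger with coordinates.
# read_accelerometer, classify, gps_fix, and send_report are hypothetical
# stand-ins for platform services, not real Android APIs.

import time

def monitor_loop(read_accelerometer, classify, gps_fix, send_report):
    window = []
    while True:
        window.append(read_accelerometer())
        window = window[-200:]                # keep roughly the last 4 seconds
        if len(window) == 200 and classify(window):
            lat, lon = gps_fix()              # GPS is switched on only here
            send_report({
                "time": time.time(),
                "amplitude": max(abs(sample) for sample in window),
                "lat": lat,
                "lon": lon,
            })
            window = []                       # avoid sending duplicate reports
        time.sleep(1 / 50)                    # ~50 Hz polling
```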

The data is sent from phones to a processing center at the Berkeley lab. There, a network detection algorithm calculates the location, origin time, and magnitude of the earthquake based on triggers from multiple phones. Then this information can be used to estimate the tremor intensity and the remaining time until damaging waves arrive at a target location.

In a series of proof-of-concept simulations using data from various California earthquakes, the researchers show that smartphones equipped with the app can record magnitude 5 earthquakes at distances of 10 kilometers or less. The earthquakes were first identified 5 seconds after the tremors started. This performance was similar to that of the ground-based ShakeAlert warning system that the USGS is currently testing, which issues alerts 5.3 seconds after quake origin. “In most cases these phones can only detect very strong parts of shaking, not the early portion of the wave,” Kong says. 
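For a sense of scale, the back-of-the-envelope sketch below turns a detection delay like the one quoted above into the warning time remaining at a nearby city, assuming a typical crustal shear-wave speed.

```python
# Back-of-the-envelope warning-time estimate: once the network has an origin
# time and epicenter, the warning remaining at a target site is roughly the
# S-wave travel time minus the seconds already spent detecting the quake.
# The wave speed is an assumed typical value for the crust.

S_WAVE_SPEED_KM_S = 3.5

def warning_seconds(epicenter_distance_km, detection_delay_s):
    travel_time = epicenter_distance_km / S_WAVE_SPEED_KM_S
    return max(0.0, travel_time - detection_delay_s)

# With the roughly 5-second detection delay quoted above, a city 60 km from
# the epicenter would get about 12 seconds of warning before strong shaking.
print(round(warning_seconds(60, 5)))  # 12
```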

A denser network of app-equipped phones would yield a network capable of detecting an earthquake faster and better, Kong says. “Usually within a 110-by-110-km area and [with] more than 300 smartphones, we could make a relatively accurate estimate of location, magnitude, and origin time.”

Once enough people are using the app and the bugs are worked out, the Berkeley team plans to build a real-time warning system. They’re also working on an iPhone app.


How LIGO Found a Gravitational Wave in a Haystack

Washington, D.C.—The wait is over. After months of rumors, the Laser Interferometer Gravitational-Wave Observatory (LIGO) today announced the first direct detection of a gravitational wave.

Standing before a packed room at the National Press Club on Thursday, LIGO executive director David Reitze made a declaration that was decades in the making: “Ladies and gentlemen, we have detected gravitational waves. We did it.”

The signal, which hit LIGO’s two detectors on 14 September, before the revamped observatory had even begun its official science run, was created by two black holes that spiraled into one another and merged. Computational modeling suggests the two objects—29 and 36 times the mass of the sun—coalesced to create a black hole of 62 solar masses.
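The arithmetic behind those masses is striking: roughly three solar masses disappeared, radiated away as gravitational-wave energy. A quick calculation with standard constants:

```python
# The bookkeeping behind those masses: 29 + 36 solar masses went in, 62 came
# out, so about 3 solar masses were converted to gravitational-wave energy
# via E = mc^2.

SOLAR_MASS_KG = 1.989e30
SPEED_OF_LIGHT = 2.998e8  # m/s

radiated_mass_kg = (29 + 36 - 62) * SOLAR_MASS_KG
energy_joules = radiated_mass_kg * SPEED_OF_LIGHT ** 2
print(f"{energy_joules:.1e} J")  # ~5.4e+47 joules
```

By LIGO’s estimate, for a fraction of a second the merger radiated more power than the light output of all the stars in the observable universe combined.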

