Tech Talk

Remote Mappers Enable Relief to Reach Filipino Typhoon Victims

The typhoon that hit the Philippines last week killed thousands of people and displaced hundreds of thousands more, according to the BBC. The damage to infrastructure and the dislocation of all those people make it difficult for relief agencies to know where to route aid. Yet this typhoon, called Yolanda in the Philippines and Haiyan elsewhere, hit just as the technology to update maps and sift through social media is maturing. Filipinos and their faraway friends may be in the best position yet to respond to a typhoon, thanks to better digital coordination among volunteers.

Crowd-sourced crisis mapping dates back to at least the 2007-2008 Kenyan election violence. That prompted a group now called Ushahidi ("testimony" in Swahili) to build a platform for people to report violence through text messages or via the Web. The idea spread: The OpenStreetMap community responded to a 2009 cyclone in the Philippines, and again helped volunteers find their way around after the 2010 Haiti earthquake, says Kate Chapman, executive director of the Humanitarian OpenStreetMap Team (HOT).

Those first attempts, however, did not make the best use of all the available volunteers. "We didn't have a way to coordinate it," Chapman recalls. "We just sort of said, 'Go map somewhere that's blank,' so you had to be a seasoned mapper to help." Since then, with funding from the Australian aid agency, HOT has developed an online task manager that allows even non-experts to chip in. "The difference is now we can say, 'Just log into the server and you can go through and it'll give you a square to work on,'" Chapman says. One volunteer has compiled just such a map of updates to the Philippines OpenStreetMap, so volunteers can now see where their work would be most useful. The latest iteration was built with input from over 600 volunteers who have contributed over 1.3 million updates.

Some of the work, done ahead of the typhoon's landfall, involved routine things such as tracing roads. But the group obtained its first post-disaster imagery, from a U.S. government agency, just today, says Chapman. Imagery from European agencies and other sources is also beginning to appear. "We can look at what roads are closed and where damage is worse," Chapman says. HOT has already sent PDF maps to volunteers and agencies such as the American Red Cross for printing before traveling to disconnected parts of the Philippines.

Volunteers can enrich maps with information from social media, too, reports IRIN. A Wall Street Journal article notes that doing so requires filtering through tens of thousands of tweets—a task that can be aided by software, but still requires many volunteers to verify information. Another disaster-recovery tool maturing now is Google's Person Finder, which the company tested after the Japanese tsunami of 2011.

Readers interested in helping can consult the Digital Humanitarian Network for a list of participating groups. The Red Cross also has a recent big-picture look at the role technology plays in disaster relief.

Prosthetic Hands Trigger Uncanny Valley Sense of Creepiness

Creepy humanoid robots and animated Hollywood characters have been accused of triggering the so-called "uncanny valley" phenomenon—that eerie or unsettling feeling people experience in response to not-quite-human figures. But a new study shows how even a prosthetic hand can trigger a similar sense of unease.


Superconducting Video Camera Sees the Universe in Living Color

Your camera sees the world in black and white, but a new astronomical camera sees the stars in color.

Almost every imaging device on the planet (or in orbit, for that matter) sees the world in black and white: incoming photons hit the sensor, knock electrons loose, and generate a current. If the incoming photon’s energy is anywhere in the detector’s sensitivity range, the result is the same: the pixel is white.

To see color, imagers (including the human eye) integrate multiple black-and-white images made with defined parts of the spectrum. They either split the sensor field, using overlapping arrays of sensors with different filters to simultaneously make separate images—from red, green, and blue, for example—or they split the spectrum to project successive single-wavelength images on a single sensor field.
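In code, the split-sensor approach amounts to stacking three co-registered monochrome exposures, one per filter, into a single color frame. A minimal sketch using NumPy (the pixel values are stand-ins, not real sensor data):

```python
import numpy as np

def combine_filtered_frames(red, green, blue):
    """Stack three co-registered monochrome exposures, each taken
    through a different color filter, into one H x W x 3 color image."""
    frames = [np.asarray(f, dtype=float) for f in (red, green, blue)]
    if not (frames[0].shape == frames[1].shape == frames[2].shape):
        raise ValueError("all three frames must have the same shape")
    return np.stack(frames, axis=-1)

# Stand-in 2x2 "exposures" through red, green, and blue filters
r = [[0.9, 0.1], [0.0, 0.5]]
g = [[0.1, 0.8], [0.0, 0.5]]
b = [[0.0, 0.1], [0.9, 0.5]]
color = combine_filtered_frames(r, g, b)
print(color.shape)  # (2, 2, 3)
```

The split-spectrum approach is the same idea transposed in time: the three filtered frames arrive sequentially on one sensor instead of simultaneously on three.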

The Array Camera for Optical to Near IR Spectrophotometry (ARCONS) approaches the problem from a different angle, simultaneously capturing time and energy (and so wavelength) information from a single photon.

"What we have made is essentially a hyperspectral video camera with no intrinsic noise," says Ben Mazin, a physics professor at the University of California, Santa Barbara. Mazin—with UCSB colleagues and collaborators at NASA’s Jet Propulsion Laboratory, Oxford University, and Fermilab—is developing the ARCONS device for astronomical observation.  "On a pixel-per-pixel basis, it's a quantum leap from semiconductor detectors; it's as big a leap going from film to semiconductors as it is going from semiconductors to these superconductors. This allows all kinds of really interesting instruments based on this technology."

The heart of ARCONS is a 60-nanometer-thick layer of titanium nitride (TiN) carried on a silicon base. Depending on the ratio of nitrogen to titanium, the layer becomes superconducting at about 1 Kelvin. (As the proportion of nitrogen decreases, the superconducting transition temperature and band-gap energies get lower; consequently, the imager's sensitivity to incoming photons increases. At its tiniest, the band gap of the superconducting TiN is about three orders of magnitude smaller than in a typical semiconductor.)

The TiN layer is etched into a 44 × 46 pixel array, and each pixel gets its own individually tuned microwave resonator and a microlens. The ensemble is enclosed in a lens-topped Dewar flask cooled to 0.1 K. When a photon strikes the sensor surface, it sends a ripple through the superconductor, breaking up the paired electrons—the Cooper pairs—that carry superconducting currents. The more energetic the photon, the more Cooper pairs are broken. Disrupting these pairs alters the impedance of the pixel. This electrical change, in turn, shifts the amplitude and phase of the pixel’s resonance in proportion to the number of Cooper-pair disruptions.
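The arithmetic behind this pair-breaking is easy to sketch. With a BCS-style gap of roughly 1.76·kB·Tc, a visible photon carries thousands of times the gap energy, which is what makes the resonance shift measurable and energy-resolved. The gap formula and pair-breaking efficiency below are generic textbook-style assumptions, not the ARCONS team's measured figures:

```python
H_PLANCK = 6.626e-34   # Planck constant, J*s
C_LIGHT = 2.998e8      # speed of light, m/s
K_B_EV = 8.617e-5      # Boltzmann constant, eV/K
J_PER_EV = 1.602e-19   # joules per electron-volt

def quasiparticles_per_photon(wavelength_nm, tc_kelvin=1.0, efficiency=0.57):
    """Rough count of quasiparticles created when one photon is absorbed.

    BCS theory puts the superconducting gap near 1.76 * kB * Tc, and only
    a fraction (eta ~ 0.57 is a common textbook value) of the photon's
    energy goes into breaking Cooper pairs. All numbers are illustrative.
    """
    photon_ev = H_PLANCK * C_LIGHT / (wavelength_nm * 1e-9) / J_PER_EV
    gap_ev = 1.76 * K_B_EV * tc_kelvin    # ~1.5e-4 eV for Tc = 1 K
    return efficiency * photon_ev / gap_ev

# A violet (400 nm) photon carries ~3.1 eV, thousands of times the gap,
# so each photon frees on the order of 10^4 quasiparticles.
print(round(quasiparticles_per_photon(400)))
```

The size of that quasiparticle burst is what encodes the photon's energy, so redder photons produce proportionally smaller resonance shifts; a semiconductor detector, with a gap near 1 eV, gets no such per-photon energy readout.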

The researchers use a microwave frequency comb to interrogate and read out all 2024 pixels over a single microwave channel. Each pixel can be read about 2500 times per second, accurately seeing colors that range from the ultraviolet (100 nm) through the visible spectrum and into the infrared (longer than 5000 nm). CCD sensors, by contrast, typically detect light from 300 to 1000 nm—and only in black and white.

The result is a video spectroscopic sensor that requires no beam-splitting, no filters, and no duplication of array fields for different wavelengths. Or, as the group’s paper for Publications of the Astronomical Society of the Pacific (also available on arXiv) describes it, “A superconducting detector can count single photon events with no false counts, while determining the energy (to several percent or better) and arrival time (to a microsecond).”

The ARCONS has been field-tested at the 200-inch Palomar and 120-inch Lick telescopes, observing the link between surges in optical and radio pulses in the Crab Nebula pulsar and a 3.07 × 10⁻¹³ (three parts in 10 trillion) change in the 28.3-minute orbital period of a compact binary star.

This shot of the double ring galaxy Arp 147 shows the ARCONS image with the Hubble Space Telescope’s version as an inset (here’s the full HST image). The ARCONS test image's resolution is not quite up to the Hubble's standard—but then again, it was shot using a prototype sensor on a 5-meter-diameter, 17-meter-focal-length, Earth-based telescope, not the 2.4-meter-diameter, 57.6-meter-focal-length, orbiting HST.

ARCONS is not the only superconducting detector under development; there are also designs based on superconducting tunnel junctions and transition-edge sensors. Overall, though, the developers think ARCONS is a contender, offering (to quote from their paper):

  • Time resolution up to six orders of magnitude better than a CCD
  • Extremely broad intrinsic bandwidth (100 to 5000 nm) with good quantum efficiency
  • No read noise or dark current, and nearly perfect cosmic-ray rejection
  • No observing time lost to reading out the array
  • Simple scaling
  • Time-domain information that allows after-the-fact use of calibration stars for monitoring atmospheric transparency, setting dynamic apertures, and applying tip/tilt corrections
  • Photon arrival times, spectral resolution, and the large pixel count, which together allow monitoring and removal of sky emissions

Photo: Spencer Buttig. Arp 147 Image: UCSB/NASA

Corrected 12 Nov. to include focal lengths and apertures for Arp 147 images.

Chelyabinsk-like Impacts More Common than Scientists Thought

The type of meteor that blasted Chelyabinsk, Russia, with the force of a nuclear weapon may hit Earth more often than estimated in previous studies. The latest study, in which researchers examined data from sensors intended to detect atmospheric nuclear explosions, appeared earlier this week in Nature, alongside a pair of related studies, one in Science and another in Nature.

Research published eleven years ago by some of the same scientists had already raised the possibility that so-called small impactors were more common than previously thought. JPL scientist Paul Chodas, who is not one of the study authors, told reporters that the latest estimate is based on small-number statistics. Yet it uses data from about 20 years of observations, twice the span of the previous study. The authors also note that even if they throw away Chelyabinsk and two other events, they get the same size-frequency curve, predicting the same higher rate of Chelyabinsk-type impacts.
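The size-frequency curve the authors refer to is conventionally modeled as a power law: the cumulative number of impactors larger than diameter D falls off as D raised to a negative exponent. A toy sketch, with a normalization and slope invented purely for illustration (not the paper's fitted values):

```python
def impacts_per_year(diameter_m, n_ref=1.0, d_ref=4.0, slope=2.7):
    """Cumulative impact rate for objects larger than diameter_m, modeled
    as a power law N(>D) = n_ref * (D / d_ref) ** (-slope).

    Defaults are toy values (roughly one ~4-m object per year), chosen
    for illustration rather than taken from the study.
    """
    return n_ref * (diameter_m / d_ref) ** (-slope)

# Under these toy numbers, a 20-m Chelyabinsk-class object arrives
# about once in several decades:
rate = impacts_per_year(20.0)
print(f"roughly 1 per {1 / rate:.0f} years")
```

Because the curve is a power law, shifting the slope or normalization even slightly changes the predicted rate several-fold, which is why the small-number-statistics caveat matters.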

Luckily for the city of Chelyabinsk, the meteor broke up into smaller pieces in the atmosphere, spreading the energy of the explosions over a larger area. The largest chunk found so far weighs 570 kilograms. Such breakup is typical of asteroids less than about 100 meters across, but these objects are difficult to detect. Existing tracking programs cover most of the thousand-odd near-Earth asteroids larger than about 1000 meters across, but there are many more smaller asteroids, and as this study illustrates, they pose a more common hazard.

On the plus side, people occupy a tiny percentage of the Earth. The Natural History Museum of London claims that no human in recorded history has been killed by a meteorite, though some have been hit. Just in case, though, the UN is proposing a plan to share information on inbound asteroids and coordinate a response that could include kamikaze robots to divert an asteroid's path, reports NPR.

Let's hope the plan doesn't involve a Cold War-style hair-trigger nuclear response. As IEEE Spectrum has noted, the mere presence of nuclear weapons means that even a low probability of their misuse could add up to unhappy odds of a nasty surprise. For those looking for morbid and gripping reading, Command and Control, a recent book by Eric Schlosser, contains terrifying anecdotes, including a nuclear scare set off by a Norwegian weather rocket and more than one accidental (but unarmed) aerial nuclear bombing. A too-complex defense may cause as much trouble as the very simple problem of very big rocks.

Electric Device That Tells You When to Pee

People who suffer severe spinal cord injuries and subsequent paralysis often lose bladder control too. But a new electronic device may restore that bodily function. British neuroscientist James Fawcett and his colleagues have developed a neuroprosthetic device that replaces damaged nerves that convey the bladder's sense of fullness. The device also blocks or triggers bladder emptying on cue through electrical stimulation. Fawcett and his team successfully demonstrated the technology in rats and published their results yesterday in Science Translational Medicine.

In a normal bladder, nerves sense when the bladder begins to fill up and electrochemically send a message to the brain. When the signals say it's time to go, the bladder contracts, the sphincter relaxes, and presto: urination. But a spinal cord injury can disrupt the signals to the brain and eliminate the fullness sensation and muscle control. A person with this kind of injury is forced to empty his bladder with a catheter.

In Fawcett's design, parts of the spinal nerves called dorsal roots are teased out into rootlets; the rootlets are placed in an implanted microchannel electrode interface. The microchannels record signals from the nerves and can determine, by the amount of activity, when the bladder is getting full. A stimulator connected to the nerves sends high-frequency stimulation to stop the bladder from emptying itself. When the user is ready to urinate, he or she can push a button that causes the device to deliver low-frequency stimulation that allows the bladder to empty. The researchers envision a handheld device that buzzes to let the user know it's time to go. (Parents: wouldn't it be great to have a buzzer like that for potty training preschoolers?)
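The control logic, as described, reduces to a small decision rule: block voiding by default, notify the user when nerve activity says the bladder is full, and switch to low-frequency stimulation on demand. A toy sketch, with the threshold and labels invented for illustration (not the published stimulation parameters):

```python
def bladder_controller(firing_rate_hz, button_pressed, full_threshold_hz=50.0):
    """Toy decision rule for the neuroprosthesis described above.

    Returns (stimulation_mode, notify_user). The threshold, rates, and
    labels are invented for illustration, not the published parameters.
    """
    if button_pressed:
        # User chose to void: low-frequency stimulation empties the bladder
        return "low-frequency drive", False
    bladder_full = firing_rate_hz >= full_threshold_hz
    # Default: high-frequency block keeps the bladder from emptying
    # reflexively; buzz the handheld once nerve activity signals fullness
    return "high-frequency block", bladder_full

print(bladder_controller(12.0, False))  # ('high-frequency block', False)
print(bladder_controller(65.0, False))  # ('high-frequency block', True)
print(bladder_controller(65.0, True))   # ('low-frequency drive', False)
```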

Of course it will be a while before the device is ready for humans. In the rat experiments, the rootlets only survived for a few months after they were placed in the microchannels. That lifespan will have to be increased considerably before the device would be useful in humans, the authors said.

In a separate approach, researchers at the University of Louisville have been experimenting with epidural stimulation of the spinal cord to restore bladder control and other functions in people with spinal cord injuries. The work has been largely successful, but experiments have only been completed in a few people. 

Illustration: Evangelos Delivopoulos

First 3-D-Printed Metal Gun Shows Tech Maturity

The world's first 3-D–printed metal gun aims to prove a point about the reliability of 3-D printing technology. But its makers don't plan on revolutionizing the manufacture of firearms by making the process available in every household.

The metal pistol made by Solid Concepts, a 3-D printing service based in Austin, Texas, represents a working 3-D–printed version of the famed 1911 pistol originally designed by John Browning. Solid Concepts created almost all parts of the classic gun through direct metal laser sintering (DMLS), an industrial 3-D printing process used to make metal parts in aerospace manufacturing and for surgical implants. (The gun's springs were made separately.)

"When we decided to go ahead and make this gun, we weren't trying to figure out a cheaper, easier, better way to make a gun," said Phillip Conner, DMLS project manager, in a video. "That wasn't the point at all. What we were trying to do is dispel the commonly held notion that DMLS parts are not strong enough or accurate enough for real-world applications."

The 3-D–printed pistol proved both sturdy and accurate during mounted and handheld firing tests showcased in a video posted on 6 November. Solid Concepts says it can 3-D print unique gun parts for any "qualifying customer" in five days—a service made legally possible by the fact that the company holds a Federal Firearms License.

But the 3-D–printed weapon that Solid Concepts built can't be replicated by any DIY gunsmith attempting to do so with a cheaper variety of 3-D printer costing less than $10,000—industry-grade DMLS machines cost hundreds of thousands of dollars. That's an important point that Solid Concepts emphasized at the very beginning of its blog post about its achievement.

"The industrial printer we used costs more than my college tuition (and I went to a private university) and the engineers who run our machines are top of the line; they are experts who know what they’re doing and understand 3-D Printing better than anyone in this business," said Alyssa Parkinson, a spokesperson for Solid Concepts, in a blog post.


In other words, the Solid Concepts gun still requires expensive, industrial-grade equipment that most DIY enthusiasts or homeowners can't afford. That makes the metal gun very different from the plastic guns of Defense Distributed, a Texas group that has developed an open-source design for guns intended for manufacture using home 3-D printers.

Defense Distributed's plastic guns haven't proven durable enough to survive more than a few shots. But the group's focus on plastic firearms produced by cheaper varieties of 3-D printers has raised fears that 3-D printers and open-source gun blueprints could theoretically allow anyone to make a gun at home—either legally or illegally.

Such fears have already driven lawmakers to propose laws that would restrict the manufacture of 3-D–printed plastic guns. Police in Manchester, UK went so far as to seize a 3-D printer and what they claimed to be 3-D–printed gun parts from a private home last month. (The police quickly backed off their claim after people pointed out that the parts displayed in police photos appeared to be printer parts.)

Photos: Solid Concepts

IBM Nairobi Lab's First Offering is a Traffic-Dodging Mobile App

Debates about how best to avoid Nairobi traffic can take nearly as long as a drive across town. The city has three dozen traffic cameras downtown, but that's not enough information for a city of over three million people. Traffic costs the city US $600 000 a day, by one estimate. IBM's Nairobi lab, which has been operating in beta for the past year, tackled traffic early on and today launched a mobile application to help drivers avoid it.

The app, called Twende Twende, meaning something like "Let's go" in Swahili, bases its recommendations on a central system that uses image recognition algorithms to process the traffic camera feeds and a separate algorithm to predict traffic on streets not covered by the cameras. Users can get recommendations via SMS or on a map interface.
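Conceptually, such a recommender can be framed as shortest-path routing over a road graph whose edge costs are inflated by estimated congestion. The sketch below is a generic illustration of that idea, not IBM's actual algorithm; the graph, congestion factors, and node names are invented:

```python
import heapq

def best_route(graph, congestion, start, goal):
    """Dijkstra over a road graph whose edge times (minutes) are scaled
    by congestion factors (>= 1.0), such as estimates derived from
    camera feeds. Graph, factors, and names are illustrative only."""
    dist, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nxt, minutes in graph.get(node, []):
            nd = d + minutes * congestion.get((node, nxt), 1.0)
            if nd < dist.get(nxt, float("inf")):
                dist[nxt], prev[nxt] = nd, node
                heapq.heappush(heap, (nd, nxt))
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

roads = {"A": [("B", 10), ("C", 15)], "B": [("D", 10)], "C": [("D", 5)]}
jam = {("A", "B"): 3.0}  # camera feed shows the A->B road badly congested
print(best_route(roads, jam, "A", "D"))  # ['A', 'C', 'D']
```

With no congestion data the same graph routes through B; the jam on A->B is what pushes the recommendation onto the longer but faster road through C.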

In answer to the obvious question of what advantage this has over Google Maps' traffic feature, the lab's chief scientist, Osamuyimen "Uyi" Stewart, explained that IBM considers observational data from cameras more reliable than Google's crowd-sourced data. Time, and perhaps Nairobi-based Spectrum readers, will tell. The app is available through Safaricom and Airtel, two Kenyan mobile service providers.

Whatever the lab learns from this and other Nairobi-grown innovations should be useful elsewhere in Africa and perhaps globally, Stewart says. The app is among the earliest results of the lab's first year in operation, and the fact that the company is promoting it at its official opening today, together with another data-analysis product, signals its emphasis on data analysis in the developing world.

Last month Stewart told journalists that IBM is trying to position its research to serve the growing number of African countries transitioning from resource extraction to service economies. "Data is the next generation of resources in Africa," he declared.

The lab is also involved in setting up systems that make it easy to collect useful economic data, such as agricultural performance, weather, and pricing, and in finding ways to capitalize on the information. Farmers, for example, could obtain access to certain data themselves in return for participating in mobile phone surveys.

With huge growth in demand for middle-class services across much of Africa, the lab has its work cut out for it. If Twende Twende works as promised, at least IBM researchers will find it easier to get to and from that work.

Photo: Nichole Sobecki/Feature Photo/IBM

India Launches First Mission to Mars, Some Oppose It

The successful launch of the Mars Orbiter Mission today marked a promising start for India's ambition to become the fourth space power in the world to reach the red planet. The mission kicks off amid what some see as a growing Asian space race and some reflection among Indians regarding their country's national priorities.

The Indian spacecraft represents one of the cheapest missions to take aim at Mars with a price tag of just US $72 million. But the relatively low cost has not stopped some Indians from wondering about the wisdom of launching a mission to Mars while the country still faces huge health and economic challenges at home, according to the BBC.


10 Things We Wish the Google Barge Was Instead of a Showroom for Google X

Intriguing update on 26 November: Our Silicon Valley editor, Tekla Perry, just received an invitation from Google that says in part: 

Google is getting ready to help Santa deliver presents with a very large, floating, nautical sled.

The blogosphere has been going absolutely bananas this week trying to figure out what Google was hiding on a barge docked at an island in the San Francisco Bay. The latest reporting says that the structure, which is made out of shipping containers, will be a party space and showroom for technologies that come out of Google X, the company skunkworks that birthed Google Glass.

Frankly, we're a bit disappointed. If you're going to have a secret lair in San Francisco Bay, it should be something really cool. Here are some ideas we thought would be cooler, or at least more fun to cover. Leave yours in the comments.

Singularity Induction Center: With Ray Kurzweil onboard, Google has moved swiftly to realize the vision of a transhuman future, in which minds and machines merge. The barge provides a scanning center to digitize and upload humans into a digital paradise, while also incorporating a mulching unit that converts discarded physical bodies into fish food, burnishing the company's environmental credentials.

Island Generators: Not generators for islands, but generators of islands. Using the latest in 3-D printing techniques, the barge can "print" an entire island in international waters up to 200 meters deep, allowing the construction of new sovereign microstates outside the reach of pesky U.S. or E.U. law.

The Holodeck: Want to pilot an F-35 fighter? Take a Formula 1 car for a spin through the streets of Monaco? Jog across the Martian deserts? Be chased by ravenous hordes of zombies? The barge will provide simulated experiences with an unprecedented degree of physical immersion, with rooms that can tilt, vibrate, and accelerate in any direction. Of course, it will also constantly rock with the swells and waves, so we hope you don’t get seasick.

The world's largest hard disk drive: Flash memory is zippy, but very pricey compared to good old-fashioned magnetic material. Still, having endless hard drive enclosures is inefficient, in terms of volume and energy. By constructing a stack of platters three stories tall and 10 meters across, suspended in a vacuum, Google plans to store the entire searchable Internet on a single drive. A barge is needed in case mechanical problems arise: The platters act like flywheels with immense angular momentum. In an emergency, a floating structure can spin and gently bleed off momentum rather than eject a giant flying disk of doom.

Ghost Containment Grid: Google's founders were inspired in their youth by the 1984 movie Ghostbusters to find a way to truly use modern science in the service of taming the supernatural. (The Google search engine actually grew out of an early attempt to catalog the information required to build unlicensed nuclear accelerators.) The movie got a lot of things wrong, but one thing it got right was the need for a secure place to put captured, or "busted," ghosts, and preferably one out of the reach of Environmental Protection Agency pencil necks.

The base for a space elevator: Supposedly, Sergey Brin and Larry Page were keen to use some of their billions to build a space elevator. Former CEO Eric Schmidt talked them out of it, but where is he these days?

The bottom of the foot of the world’s largest humanoid robot: They’ll call it the Colossus of Nodes.

Document Storage: Maybe Google Drive is actually powered by physical printouts of everyone's documents, and Google needs a place to hide all the filing cabinets.

Zombie-proof Data Center: Much of the early speculation about the barge was fueled by a patent Google filed for a floating data center. That would be neat, but why would you need one? Because zombies can't swim. We’ll all need Google Maps and the ability to do social networking to survive the zombie apocalypse, right? Alternatively, something has gone horribly wrong over at Google X and this is a zombie detention center. Again, because zombies can't swim.

Employee Office Space/Prison: Google has always tried to keep its employees on campus. (Marissa Mayer exported the principle to Yahoo! when she took over that outfit.) What better way to keep employees at work than to maroon them on a barge, in the middle of a bay, surrounded by zombies?

That’s no barge: It's a fully armed and operational battle station, manned by zombies.

Illustrations: Randi Klett; Google Barge: Justin Sullivan/Getty Images

Automating Anesthesia For Medically Induced Comas

In some scenarios it can be safer for health professionals to put certain patients, such as those with persistent seizures or brain trauma, into medically induced comas during treatment or recovery. But this requires full-time supervision from hospital staff, because the process involves watching a patient's EEG and manually adjusting the anesthesia drugs to keep brain activity stable. A new system, tested in animals and described yesterday in PLOS Computational Biology, automates the monitoring and dosage modification.

The team, which includes scientists from Massachusetts General Hospital (MGH) and the Massachusetts Institute of Technology (MIT), modified some of its existing software that measures brain activity, integrating algorithms to evaluate EEG patterns, set goals for appropriate brain activity, and control devices that administer anesthesia, all in real time. The group tested its closed-loop control in a rodent model and found that the system's EEG measurements were almost perfect.
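Closed-loop control of this kind is commonly framed as a feedback loop: measure an EEG-derived index, compare it to a target, and adjust the infusion rate accordingly. A generic proportional-integral sketch (not the authors' published controller; the target, gains, and units are invented for illustration):

```python
def make_pi_controller(target, kp=2.0, ki=0.5, dt=1.0):
    """Generic proportional-integral loop: nudge the anesthetic infusion
    rate so a measured EEG index (say, burst-suppression probability on
    a 0..1 scale) tracks a target. Gains and target are illustrative."""
    state = {"integral": 0.0}

    def step(measured, rate):
        error = target - measured
        state["integral"] += error * dt
        rate += kp * error + ki * state["integral"]
        return max(rate, 0.0)  # an infusion rate cannot go negative

    return step

# EEG reads lighter than the 0.8 target, so the controller raises the rate
deepen = make_pi_controller(target=0.8)
print(deepen(measured=0.5, rate=1.0))  # 1.75
```

The integral term is what lets such a loop hold a patient at a set depth for hours without drifting, which is the part that currently demands continuous human attention.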

In a press release, Emery Brown, an anesthesiologist and the senior author on the paper, noted: "As far as we know, these are the best results for automated control of anesthesia that have ever been published. We're now in discussions with the FDA for approval to start testing this in patients."

Anesthesiologists currently work with some computer assistance, but the FDA hasn't yet approved a fully automated system for monitoring patients' brain activity and modulating their anesthesia doses appropriately. This research is important because it is among the first, if not the very first, to show both aspects functioning at once in real time with a safe level of precision and control. Since medically induced comas require keeping patients at the correct level of brain activity for hours or often days, a system that is totally controlled by a computer would reduce the manpower needed in intensive care units and could even hold patients' brain activity stable with lower doses of anesthesia.

Photo: iStockphoto
