Tech Talk

China Builds World's Fastest Supercomputer

UPDATE: Listen to this week's IEEE Spectrum podcast to learn more about the Chinese supercomputer and the Top 500 list. Host Steven Cherry talks with Jack Dongarra, the University of Tennessee computer science professor and IEEE Fellow who is one of the organizers of the Top 500.

Take a look at the machine above. It's the world's most powerful computer and it belongs to China, according to the latest edition of the Top 500 list of fastest supercomputers, unveiled yesterday.

China's Tianhe-1A system at the National Supercomputer Center in Tianjin achieved a performance of 2.57 petaflops (quadrillion floating point operations per second), taking the title away from the Cray XT5 "Jaguar" system, rated at 1.759 petaflops, at the U.S. Department of Energy’s Oak Ridge National Laboratory.

There are two main design aspects of the Tianhe-1A system that stand out. First, it uses more than 7,000 graphics processing units, the number-crunching accelerators known as GPUs. Other supercomputers have used GPUs for a while, but this is the first time the No. 1 system is a GPU-based design. It's also interesting to note that these are not Chinese-designed chips; China got them from U.S. chip maker Nvidia. The second interesting thing about the Tianhe-1A is that it uses a custom interconnection fabric to move data among all those GPUs. It seems that the Chinese engineers were able to make this proprietary interconnect very fast. How fast?
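
For a rough sense of what those numbers imply, here's a back-of-envelope sketch in Python; the GPU count and the per-GPU throughput are illustrative assumptions about a 2010-era Tesla part, not official Tianhe-1A specifications.

```python
# Back-of-envelope peak-performance estimate for a GPU-heavy system.
# The GPU count and per-GPU figure are illustrative assumptions, not
# official Tianhe-1A specifications (and the CPUs add more on top).

gpu_count = 7_168             # "more than 7,000" GPUs (assumed exact count)
gflops_per_gpu = 515          # rough double-precision peak of a 2010-era Tesla GPU

peak_pflops = gpu_count * gflops_per_gpu / 1e6    # gigaflops -> petaflops
linpack_pflops = 2.57                             # measured result from the Top 500 list

print(f"Estimated GPU peak:      {peak_pflops:.2f} petaflops")
print(f"Measured Linpack result: {linpack_pflops} petaflops")
print(f"Fraction of GPU peak:    {linpack_pflops / peak_pflops:.0%}")
```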

I spoke to Tarek El-Ghazawi, a professor of electrical and computer engineering at George Washington University and co-director of the NSF Center for High-Performance Reconfigurable Computing. El-Ghazawi, who was recently at an NSF meeting with computer scientists in China, tells me that the Chinese interconnect appears to achieve 160 gigabits per second, roughly twice as fast as QDR InfiniBand, the fastest interconnect technology commercially available in the United States. As for how the Chinese were able to pull this off, details haven't emerged, Dr. El-Ghazawi says.

This is the second time the United States has lost the No. 1 position in the past 10 years. The first time was in June 2002, when Japan's Earth Simulator supercomputer took the top spot and remained there for more than two years. That development shook the U.S. supercomputing community. At the time, a great number of researchers believed that clusters built from cheap commodity parts would suffice for most supercomputing needs and that research in new architectures shouldn't be a priority.

The United States responded by investing heavily in high-performance computing research, particularly through DARPA's High Productivity Computing Systems program, with funds of nearly US $1 billion. At about the same time, IBM began designing its Blue Gene series, and a version of that system, the Blue Gene/L at the Lawrence Livermore National Laboratory, unveiled in 2004, reclaimed the No. 1 spot.

Now news of the Chinese supercomputer leaping ahead is again shaking the U.S. scientific community. At a post-election press conference, U.S. President Barack Obama said, “We just learned that China now has the fastest supercomputer on Earth. That used to be us. They’re making investments because they know those investments will pay off over the long-term.”

But Dr. El-Ghazawi notes that the ranking could change again soon. He says that next year IBM will unveil a powerful new system called Blue Waters, and in 2012 Japan is expected to introduce its massively parallel K Computer, built by Fujitsu. Both Blue Waters and the K Computer are designed to perform at more than 10 petaflops.

It's also important to emphasize that rankings like the Top 500 don't tell the whole story. To be sure, they are useful benchmarks, but they are not always representative of real-world applications. Some problems (like climate modeling) map well onto current supercomputers; other applications (like data mining) don't. In other words, even if you have the world's fastest supercomputer, it's going to be pretty much useless if you can't program it to solve practical problems. It turns out that writing programs for these machines remains one of the field's toughest challenges. It would be nice to see more progress in this area.

Image: Nvidia

Mystery Missile Wasn't a Missile at All

A startling act of accidental skywriting off the coast of Los Angeles Monday evening has become an aerial Rorschach test, with every blogger and ideologue spelling out their own messages in the signs in the sky. The media bandwagon quickly picked up momentum over the supposedly clueless Pentagon not even knowing who had launched a missile right in front of Los Angeles. For some it was proof of Pentagon waste; for others, proof of inadequate missile defense budgets. Government conspiracy buffs, from the "chemtrailers" to believers in alien visitation, quickly hopped on the event.

Meanwhile, quietly and calmly, an undeservedly obscure website called "Contrails Science," run by a sky expert and pilot devoted to debunking crackpot atmospheric theories, came up with a startlingly prosaic explanation—and linked to visual evidence in support of it. All that remained was to get the attention of the "mystery missile" juggernaut.

That video sure looked like a missile to me, and I've seen them on screen and with naked eyes around the world for more than forty years. I watched the video twice looking for any clues it might NOT be a missile, and couldn't find any.

But now I am persuaded by the argument that what the CBS helicopter news crew saw and videotaped was only a commercial airliner's contrail under special viewing conditions.

There was no missile. There was no military oversight, or "oops" (accidental launch). It wasn't hobbyists or space commercializers or another country or cabal or terrorist cell. It was an ordinary phenomenon seen in unusual circumstances in a region where people had for years been trained to interpret cloud-streaks in the sky as missiles—and they were almost always right. Just not this time.

The most impressive argument that airliner contrails can look like missile launches (and this video sure looked like one, to me and almost everyone else) is a set of videos where it has happened before. Here are two:


[Embedded video: an airliner contrail mistaken for a missile launch]

and

[Embedded video: a second contrail mistaken for a missile launch]

Evidence from the Defense Department was frustratingly vague, but apparently not inconsistent with the airliner theory. For example, despite how they have been quoted in the mass media, Pentagon officials never referred to the event as a missile, but as an "unexplained contrail." My own contacts at NORAD (Colorado), U.S. Strategic Command (Omaha), and the top aerospace defense control center at Vandenberg AFB all flatly stated that they were unaware of any such launches or of who might have been responsible for what was seen in the video.

There is documentary support for these disclaimers. The international database of warnings for planes and ships does not include ANY messages that cover the apparent region and time of the video. There does happen to be one NOTAM—a notice to airmen and mariners—for Tuesday that specified a naval test range being closed for the afternoon (PST) for hazardous activities. While that message was issued Monday, the time interval did not cover the CBS observation. More to the point, that message specified a maximum off-limits altitude of "FL039," "flight level 0-3-9," or 39,000 ft. For missiles, ALL U.S. DOD, NASA, and even Russian warning messages specify an off-limits range of "surface to UNLIMITED." So this message has nothing to do with a missile test and is apparently only a coincidence.

If this explanation is accurate—and I am persuaded that it is—then the real story is just how off track mass media narratives can get based on an original honest and reasonable misinterpretation of something in the sky – a misinterpretation that folks in the LA area are justifiably primed to make since they really do have front row center seating for offshore missile activity. This time, perceptual habits led the eyewitnesses astray -- and they dragged the whole country on a wild space goose chase along with them.

The "contrails science" blogger calling himself  “Cirrus Uncinus” (Latin for “Curly Hooks”), in a message posted on his website this morning, compared last night's faux-missile launch to one that had occurred recently:

“Another misidentification, from pretty much the same location, this time from a local CBS news crew. Note it's pretty much in the same location. Note also it's not exactly moving at missile speed.  Note also it's practically identical to the photos of plane contrails, above. And once again millions of people failed to notice, because from any other angle it looked like what it was, a contrail, from a plane.  Must be a slow news day, as this went all the way up to Jim Miklaszewski asking people at the Pentagon about it.”

This has been one heck of a space/missile flap! There was no danger from errant missiles. But since the rational explanation will likely never overtake the bandwagon craziness that dominated the mass media today, the incident itself can serve as a warning of a real danger—the public's panic and prejudices exacerbated by press irresponsibility. At least that's what I've come to read in the "strange signs in the sky!"

But at least “Uncinus” is getting help in getting the word out. Late in the afternoon, his quickly rehosted site was linked from the Drudge Report and the avalanche began. “I just got 21,000 hits from Drudge,” he emailed me. “My old server would be a pile of molten slag right now.” Fortunately for him, and for all of us out there looking for accurate explanations, that won’t happen now.

About the Author

James Oberg worked as an aerospace engineer at NASA for 22 years. He switched to journalism in the late 1990s and now makes his living reporting on space for such outlets as Popular Science, NBC News, and of course, IEEE Spectrum. In September 2010, he reported on the new digital Soyuz.

Escaping the Data Deluge from Brain-Machine Interfaces

So you want to build a brain-machine interface. With 100 billion neurons in the brain, you'd be right to wonder how a few measly electrodes could possibly extract enough information to learn anything interesting about what those brain cells are up to.

Most implanted brain-machine interfaces consist of a couple dozen electrodes recording the waveforms of neurons firing in one specific part of the brain. Biomedical engineers have gone so far as to collect recordings from about 100 channels in the brain, but that may well be insufficient--perhaps we need 10 000 to truly grasp why we coo over photos of chubby cats and belt out showtunes in the shower. But that presents a new type of problem. Because it can't have any wires dangling from the scalp, an implanted system will naturally have a limited bandwidth for extracting data. An interface with 1024 electrodes, for example, might end up producing about 250 MBps, estimates Andrew Mason, an electrical and computer engineering professor at Michigan State University, in East Lansing. For today's implantable transmitters, that quantity is simply too much.
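
For a rough sense of where estimates like that come from, here's a quick back-of-envelope sketch; the sampling rate and bit depth are my own illustrative assumptions, not figures from Mason's group.

```python
# Rough estimate of the raw data produced by a multi-electrode neural implant.
# The sampling rate and resolution below are illustrative assumptions.

channels = 1024
sample_rate_hz = 25_000     # 25 kHz per channel (assumed; matches the rate cited later)
bits_per_sample = 10        # assumed ADC resolution

bits_per_second = channels * sample_rate_hz * bits_per_sample
print(f"Raw output: {bits_per_second / 1e6:.0f} megabits/s "
      f"({bits_per_second / 8e6:.0f} megabytes/s)")
# With these assumptions the raw stream runs to hundreds of megabits per second,
# far more than an implantable transmitter can comfortably radio out of the skull.
```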

Neuroscientists already have methods for parsing brain data off-chip, namely for sifting out that distinctive up-spike, down-spike, undulating-tail pattern that is the action potential of a firing neuron. But those techniques--which might compare a neural recording to some existing templates of spikes--tend to be power-hungry and computationally demanding, requiring hardware that would be too large to implant in the brain. So to handle the large electrode arrays of the near future, new data reduction techniques will need to sift through the recordings on-chip and in-brain.
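
As a concrete, if toy, illustration of the template idea, here's a minimal sketch; the normalized-correlation test and the threshold are placeholders of mine, not any published spike-sorting pipeline.

```python
import numpy as np

def template_match(recording, template, threshold=0.8):
    """Slide a spike template along a recording and flag windows whose
    normalized correlation with the template exceeds a threshold.
    A toy stand-in for off-chip, template-based spike sorting."""
    w = len(template)
    t = (template - template.mean()) / (template.std() + 1e-12)
    hits = []
    for i in range(len(recording) - w + 1):
        window = recording[i:i + w]
        x = (window - window.mean()) / (window.std() + 1e-12)
        if np.dot(x, t) / w > threshold:       # correlation score in [-1, 1]
            hits.append(i)                     # start index of a likely spike
    return hits
```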

The constraints are daunting: The compression algorithm needs to run in real time, it should be simple, and it must, of course, be accurate. To tackle this problem, Mason and the students in his laboratory reported one possible solution at the IEEE Biomedical Circuits and Systems conference last week. They designed an ultra-low-power circuit for detecting spikes, then came up with a method for separating out the right segments of the waveforms to be sent off-brain. First, the circuit assesses the level of noise in the data, and then it chooses one of two ways to process it. Most of the time the data isn't overwhelmingly noisy, so a very simple technique that consumes hardly any power will do. If the data is fairly noisy, though, it goes through another method, called a stationary wavelet transform (SWT)--a bunch of math requiring 16 additions and 16 multiplications; in a nutshell, too much for both this blogger's brain and any implanted hardware. But with some minor compromises, Mason and his students were able to squeeze a version of the SWT onto the circuit. For one channel operating at 25 kHz, it consumed just 450 nanowatts on a 0.082-square-millimeter CMOS circuit, an acceptable size for an implant.
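
Here's a toy software sketch of that two-path idea; the noise estimate, the thresholds, and the single-level Haar detail step are placeholders of my own, not the circuit Mason's group actually built.

```python
import numpy as np

def detect_spikes(x, k=4.0, noisy_sigma=10.0):
    """Toy two-path spike detector in the spirit described above. The noise
    estimate, thresholds, and single-level Haar detail are placeholders of my
    own, not the circuit Mason's group built."""
    sigma = np.median(np.abs(x)) / 0.6745          # robust estimate of the noise level

    if sigma < noisy_sigma:
        # Quiet recording: plain amplitude thresholding is cheap and good enough.
        candidate = x
    else:
        # Noisy recording: one level of an undecimated Haar wavelet detail,
        # which emphasizes sharp spike edges over slow drift and hum.
        candidate = np.concatenate(([0.0], np.diff(x))) / np.sqrt(2)
        sigma = np.median(np.abs(candidate)) / 0.6745

    return np.where(np.abs(candidate) > k * sigma)[0]   # sample indices flagged as spikes
```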

Once you've found a spike, another algorithm tells you which data is worth keeping, as the authors reported in a separate paper. The most distinctive features of a neuron's spike are its amplitude (or energy), the relative positions of its positive and negative peaks, and the width of the spike. Taken together, those parameters point to a simpler derived feature--namely, where the waveform crosses the x axis before and after a spike. The stretch between those crossings is the data-rich part of a neural recording, and the algorithm backs away from each zero crossing slightly to give every spike a buffer. Using this zero-crossing method, they were able to compress the data to an impressive 2 percent of its original size.
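
And here's a toy sketch of the zero-crossing step, feeding on the spike indices from a detector like the one above; again, the details are illustrative rather than the published algorithm.

```python
import numpy as np

def extract_spike_segments(x, spike_indices, pad=5):
    """Toy version of the zero-crossing idea: around each detected spike, keep
    only the samples between the zero crossings just before and just after it,
    plus a small buffer. The padding and bookkeeping are illustrative, not the
    published algorithm."""
    crossings = np.where(np.diff(np.sign(x)) != 0)[0]   # indices where the signal changes sign
    segments = []
    for s in spike_indices:
        before = crossings[crossings < s]
        after = crossings[crossings > s]
        start = (before[-1] if before.size else s) - pad
        end = (after[0] if after.size else s) + pad
        segments.append((max(start, 0), min(end, len(x) - 1)))

    kept = sum(end - start + 1 for start, end in segments)
    print(f"Keeping {kept} of {len(x)} samples ({kept / len(x):.1%})")
    return segments
```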

If you get nothing else from this post, consider this: In contrast with much of the engineering done in a world where computing is cheap and software can get away with being woefully messy, biomedical applications are often elegant by necessity. In few other domains will you see as clear a demonstration of the value of ultra-low-power simplicity.

Who's Inside the New U.S. Cyber Command?

Last Wednesday, the Department of Defense finally announced that the "U.S. Cyber Command has achieved full operational capability."  The Cyber Command will be "responsible for directing activities to operate and defend DoD networks," the release stated.

While we've heard a lot about the threat of cyber warfare, and the need for greater security, we haven't heard much about the people on the front line.  Who are the cyberwarriors, and what will they do?  I had an early glimpse into this world when I visited the provisional Cyber Command at Barksdale Air Force Base, near Shreveport, Louisiana.

One morning, I drove in with three cyberwarriors to accompany them on the job.  Just before six thirty they filed into the Cyber Command, past the guard-dog sign, slipped on their headsets, and stared into the wall of flashing screens. They would stay there for eight hours.  Their shifts rotated, including a recurring graveyard shift.

When trouble happened, the guys snapped into action with expert precision, rapidly assessing the attack and simultaneously dispatching orders to the other cyber warriors.  Lackland Air Force Base, near San Antonio, Texas, housed a large supporting team of cyber warriors.  Often the guys’ order was the same:  shut down the computer system being attacked, and find some way to reroute it.   Mouse skills were essential.  “If you hesitate for a second,” one cyber warrior told me, “that could be the difference in a base going down.”

Or someone dying.  Though a cyber warrior isn’t dodging bullets, he knows that someone across the world could perish if he messes up.  Command and control would handle the finest details – from dispatching troops to locking down gates – over computer networks.  If a hacker went undetected, false information disseminated across a network could send troops right into harm’s way.  And that sense of service empowers the cyber warriors to stay on top of their game.  “Lives are depending on us,” another told me.

The cyber warriors weren’t just doing battle at these desks.  As young guys living their lives online, they’re constantly scanning for threats.  David McNulty, a tall and laser-eyed 24-year-old staff sergeant from Honolulu, was the team’s designated hero in another crucial domain:  World of Warcraft, the massively multiplayer online computer game.

McNulty played in a guild called Exanimus, partly for fun but also for a more patriotic reason: to embed himself with hackers and geeks who can blow the whistle on potential threats.  Players in the guild knew where McNulty worked and frequently came to him to expose malicious activity.  One day, he learned of a security hole in a firewall at Barksdale that was giving hackers free rein.  McNulty snapped to attention and closed the gap before it was too late.  “Playing World of Warcraft gives me insight into the civilian sector that I wouldn’t otherwise get from here,” he told me.

For McNulty and the rest, much of the day-to-day appeal of fighting the cyber war lay in the thrill of these sorts of meta war games.  They’re like kids in arcades with the ultimate weapons and gizmos at their disposal.  But as the military knows, failing to recruit and retain skilled geeks in the age of Silicon Valley billionaire babies is one of the biggest threats to national security of all.

The military has been trying to harness popular technologies to lure – and train – war gamers into its fold for decades.   Games from Battlezone in the 1980s to Doom in the 1990s have been modified for troop exercises.   Then the Army teamed with hotshot videogame development companies to churn out games like Full Spectrum Warrior and the shooter America’s Army.  Later the Army launched a new unit, Training and Doctrine Command’s Project Office for Gaming, which takes this to the next logical step:  equipping soldiers with the ability to develop their own videogame simulations of warfare scenarios for training.

But convincing this generation to enlist is extra tough when they could be using those same brains to make millions in Silicon Valley.  Some of the guys make no bones about their motivation for being here – to get the skills and land a high-paying job as a civilian. One of the things they get with their training is the super-sweet security clearance – Top Secret.  That clearance is highly marketable in the commercial sector, where banks and other corporations are desperate for nimble geeks who can protect them from cyber attacks.

 

Beyond Dean Kamen

I think it's fair to say that in the last 20 years the field of prosthetics has taken a sexy turn. We've come so far from Barbie-doll legs and hook arms that it boggles the mind. People now control five-fingered robotic hands with electrical impulses from muscles in their chest. And soon their prosthetics will directly interface with nerve endings. The "Luke arm" is a work of art, as are the Otto Bock arm and the Proto 2. But it's worth remembering something that I myself often forget: They aren't toys. And we can't measure their success by looking at how much fame they've bestowed upon their creators.

I was reminded of this most recently while listening to Kendra Calhoun speak yesterday at Worcester Polytechnic Institute's annual Neuroprosthetics symposium. At this highly technical conference, it was her role to remind people of the objective—to develop prosthetics that function well enough that amputees will actually wear them, and that are inexpensive enough that they can actually afford them.

Around 30 percent of people with prosthetic arms stop using them, and most say it's either because they hurt or because they just don't work well enough. Unfortunately, as prosthetics become more functional and incorporate more technology, their cost will skyrocket. It's painfully clear that not every patient with upper-limb loss will be able to acquire a Luke arm.

Calhoun called on the community to offer practical solutions in parallel with the grandiose projects. "How do we take pieces and parts of the technology that is being advanced and put it into the mainstream?" she asked.

One solution that has to be looked at is designing prosthetics that are simply simpler. A hand is an obscenely complex structure, but not every task requires such complexity.

With this in mind, I was happy to run across a new design on the cover of PNAS that isn't anatomically literal. You can see how it works by watching this video:

Engineers at Cornell University (along with the University of Chicago and iRobot) introduced the concept this week. The ball is filled with coffee grounds that mold around the object you want to pick up; a vacuum then jams the grounds together so the ball grips the object. A lot of technology is left out here, but that also means it will be less expensive.

While reproducing the elegance of our anatomy is noble, it may not always be necessary or practical. And the more noble task could very well be less sexy. Many would benefit if we channeled a bit of our adulation (and funding) to those projects that are churning out the prosthetics we'll actually see on the streets.

(Image courtesy of John Amend)

Hacking Poker

Tomorrow, nine players will face off at the final table of the World Series of Poker in Las Vegas.  The winner is expected to walk away with about $9 million.  Fans are marveling at how many young guns - many in their 20s - made the cut.  Of course it's not that amazing, really.  The new generation of players is sharpening its chops online. But, behind the scenes, poker scientists are trying to make software smart enough to beat them.

The battle between humans and machines isn't new, of course.  Part Oedipal fantasy, part Frankenstein myth, it’s the stuff of nightmares and cult flicks.   From Yul Brynner’s killer cowboy in Westworld to HAL’s laser-pointer stare in 2001, we are fascinated by our powers of creation and the possibility that the bots we make may defeat us.

In recent years, nothing has quite captured this fantasy like the clash of Garry Kasparov and Deep Blue.  The 1997 match between the reigning world champion and IBM’s supercomputer was hyped like a Geeks Gone Wild halftime show.   Newsweek called it “the brain’s last stand.”  The brain lost.  But humans, including the late author George Plimpton, rushed to its defense.  “This doesn't mean the machine is going to walk out of the hotel and start doing extraordinary things,” he said. “Over the years, it may be that the machine is likely to be able to do other things. I'm not sure what, at the moment.”

The answer, already, was poker.  Six years before IBM’s victory, a group of artificial intelligence researchers at the University of Alberta in Canada began creating software to beat people at Texas Hold ‘Em.  At the time, playing poker online for money was still a distant dream.  But the sheer challenge of mastering the game tantalized computer scientists like UA computer science professor Dr. Jonathan Schaeffer.

“In poker, you have to make decisions based on imperfect information,” Schaeffer told me. “You don’t know the other players’ cards, and that’s what real life is about.  You’re dealing with people on a daily basis and you don’t have complete information.  The best poker game I ever witnessed was the Gulf War.”

Dr. Schaeffer, who leads a team of developers working under the rubric of the Games Group, already has one severed human head on the machine’s stick.  It happened in 1994, when the Games Group’s checkers bot, named Chinook, became the first computer program ever to beat a world champion (yes, before even Deep Blue).  Then they set their sights on poker.

Since poker is a game of both skill and luck, the Games Group set about mastering the part they could control:  the skill.  Calculating poker quantities such as pot odds and winning hands is straightforward enough, Schaeffer says.  He shrugs off more ephemeral techniques, like bluffing.  “That’s very easy for computers,” he said. “You can mathematically show which class of hands you should bluff.”
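
To make the "straightforward" part concrete, here's a minimal pot-odds sketch; it's generic poker arithmetic, not code from the Games Group.

```python
def pot_odds(pot, to_call):
    """Fraction of the final pot you must put in to keep playing."""
    return to_call / (pot + to_call)

def should_call(pot, to_call, win_probability):
    """Call when your estimated chance of winning beats the pot odds."""
    return win_probability > pot_odds(pot, to_call)

# Example: $20 to call into an $80 pot requires winning more than 20% of the time.
print(pot_odds(80, 20))            # 0.2
print(should_call(80, 20, 0.35))   # True: a 35% hand is a profitable call here
```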

The biggest challenge is what he calls opponent modeling:  detecting betting patterns in other players that can help the computer make better decisions.  “Most of the time, in poker, you don’t get the information you need from other players,” Schaeffer said. “If they did a perfect bluff and you folded, you don’t know they bluffed.”
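
As a toy illustration of what opponent modeling means in practice, here's a crude frequency-counting sketch; it's a stand-in of my own, far simpler than anything Schaeffer's group builds.

```python
from collections import Counter

class OpponentModel:
    """Crude opponent model: track how often a player folds, calls, or raises,
    and treat the observed frequencies as a prediction of future behavior.
    A toy stand-in for the far richer models Schaeffer describes."""

    def __init__(self):
        self.actions = Counter()

    def observe(self, action):            # action is "fold", "call", or "raise"
        self.actions[action] += 1

    def probability(self, action):
        total = sum(self.actions.values())
        return self.actions[action] / total if total else 1 / 3   # uniform prior

# After watching a tight player fold eight of ten hands...
model = OpponentModel()
for a in ["fold"] * 8 + ["raise"] * 2:
    model.observe(a)
print(model.probability("fold"))   # 0.8 -- a raise from this player likely means strength
```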

Despite the difficulties, UA’s poker bots reached a turning point in 2003, when one stood up against a seasoned online pro.  They are now being used commercially in poker-training software called Poker Academy.  No wonder this year's World Series of Poker pros seem like machines.

 

Flexible, Unbreakable Displays Coming Soon to an E-Reader Near You

Sometimes, it pays to break for lunch.

That’s what researchers at Taiwan’s Industrial Technology Research Institute (ITRI) found out. They’d been struggling for years to figure out a way to manufacture thin, flexible, flat-screen displays. These are the stuff dreams are made of—futurists have long envisioned newspapers and maps we can fold and roll, displays that don’t break. They just haven’t made it to the mass market yet, because they’ve been tough to manufacture cheaply.

ITRI’s been working on that manufacturing issue for years. Its goal was to use standard manufacturing equipment—for OLED, LCD, or electronic-paper displays—but sneak a thin layer of film in on top of the standard glass panel. Detach the film from the glass at the end of the process, and bingo: flexible display. ITRI Display Technology Center Director John Chen told me yesterday that after successfully coming up with a transparent polymer film that could take the high heat of semiconductor manufacturing, researchers struggled to figure out a way to attach the film to glass so that it would stick tight through all the manufacturing processes, but be easy to peel off at the end.

“We failed 63 times in four years,” he said.

And then one of the researchers involved in the effort went out to lunch, to a little creperie in a local market. He watched the crepe maker prepare a crepe, noticing that he didn’t spread the layer of oil to the edges of the pan. The crepe clung tightly to the edges while cooking, but, when it was done, the chef had an easy time popping it loose.

That technique, of adding what Chen calls a “debonding” layer but not bringing it all the way to the edges of the substrate, worked just as well in display manufacturing as it did in crepe making. ITRI is being honored next week with one of R&D Magazine’s R&D 100 Awards for the innovation. And Taiwanese company AU Optronics Corp. will be releasing a line of e-readers using this flexible display technology in 2011; Chen thinks these will be of particular interest to K-12 educators, because of their durability.

Holographic Video Brings Star Wars-Style 3D Telepresence a Step Closer

A holographic display shows a 3D image of a man [center image above]. The image's three-dimensional nature becomes apparent when viewed from different angles [left and right images]. The display can update the hologram every 2 seconds. Faster refresh rates would make 3D telepresence possible. Images: University of Arizona

A holographic video system like the one Princess Leia uses in Star Wars is now one step closer to reality.

Researchers report today that they've built a holographic display that can show three-dimensional color images of a person in a remote location, with the images updated in almost real time -- a precursor to holographic telepresence.

This is the first time researchers have demonstrated an optical material that can display "holographic video," as opposed to the static holograms found on credit cards and product packaging. The prototype looks like a chunk of acrylic, but it's actually an exotic material, called a photorefractive polymer, with remarkable holographic properties.

Nasser Peyghambarian and colleagues at the University of Arizona, in Tucson, designed the material -- a complex compound referred to as PATPD/CAAN, or polyacrylic tetraphenyldiaminobiphenyl/carbaldehyde aniline -- to refract and modulate light in a specific way useful for holography.

The breakthrough, which the researchers report in this week's Nature, is that the material can refresh a hologram every two seconds and give the effect of near real time updating. An early prototype built by the same group two years ago could refresh holograms only every 4 minutes.

Last year's blockbuster movie Avatar generated a lot of interest in 3D technologies. Several electronics manufacturers have demonstrated TV sets with 3D capabilities. But the technology used in movie theaters and TV sets, known as polarization stereoscopy, is different from holography.

Holograms are made with lasers and they use special materials capable of diffracting light in a way that looks to an observer as if it had been scattered by the real object itself.


Image of a fighter jet created on a new holographic display. Image: University of Arizona

Like ordinary displays, the new device is a matrix of picture elements, though in this case holographic pixels, known as hogels. As opposed to 2D pixels, hogels contain 3D information from various perspectives. Each hogel is written with a single 6-nanosecond laser pulse.

The technique is known as holographic stereography. It's been around for some time and is in fact used in large 3D static prints used in marketing materials. But the ability to dynamically update the image has eluded researchers -- until now.

Since its appearance in the original Star Wars film in 1977, 3D telepresence has been a source of fascination. But the absence of a large, updatable holographic recording medium prevented researchers from realizing the concept.

In their 3D telepresence demonstration, the researchers use 16 cameras to take two-dimensional pictures of a person's face at multiple angles. The cameras fire simultaneously every second, and a standard desktop PC converts the 16 views into hogel data and sends it to the laser recording system through an Ethernet link. Each image is 4 x 4 inches (about 10 x 10 centimeters), with 120 hogels. Once a hologram has been written, the laser uses the next available hogels to refresh the images every 2 seconds.
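
Just to put those numbers side by side, here's a quick back-of-envelope sketch; the figures come from the paragraph above, but the conclusion about where the time goes is my own reading, not the researchers'.

```python
# Putting the article's numbers side by side. The values are taken from the
# text above; the closing observation is my own back-of-envelope reading.

hogels_per_image = 120
pulse_duration_s = 6e-9          # one 6-nanosecond laser pulse per hogel
refresh_period_s = 2.0           # the display updates roughly every 2 seconds
cameras = 16                     # 2D views captured for each refresh

laser_on_time = hogels_per_image * pulse_duration_s
print(f"Total laser 'on' time per frame: {laser_on_time * 1e6:.2f} microseconds")
print(f"Refresh period:                  {refresh_period_s} seconds")
print(f"Views to convert into hogels:    {cameras}")
# The writing pulses themselves account for well under a millisecond, so the
# 2-second refresh appears to be dominated by capture, hogel computation, and
# the laser's repetition rate rather than by the pulses that write the hologram.
```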

"The development seems like a nice and potentially important contribution to holography," says Paul Debevec, a computer scientist who leads the Graphics Laboratory at the University of Southern California's Institute for Creative Technologies, in Playa Vista. He was not involved in the project. "It seems that the authors have significantly improved the speed with which holograms can be written."

But he adds that the prototype is still small and "two orders of magnitude slower than useful video rates." Other 3D technologies, much simpler and cheaper, could work as telepresence systems, he says. His group developed one such system based on a spinning mirror that was capable of showing a full-size human face, with the image updated at 30 frames per second.

(My colleague Sally Adee tested the prototype; see her "holographified" face here.)

"With respect to 3D telepresence," Debevec says, "what they claim to have taken a step towards appears to be something our 3D teleconferencing system demonstrated two years ago."


Color holograms recorded on the new device. Images: Nature

Peyghambarian and his colleagues acknowledge that their device needs to be bigger and refresh faster. But they say that other 3D telepresence technologies demonstrated before required moving parts, complex projection systems, or floating particles, or were simply optical tricks or computer-generated special effects. In other words, they aren't true holograms.

Advanced holographic systems, the researchers say, could find applications in telemedicine, prototyping, advertising, updatable 3D maps, and entertainment.

"As an example, in telemedicine and especially for brain surgery, surgeons at different locations around the world could use the technique to observe in three dimensions, in real time, and to participate in the surgical procedure," they write in the Nature paper.

The project also included scientists from Nitto Denko Technical Corp., in Oceanside, Calif., the research arm of a Japanese company that makes semiconductor and optical products.

Is this the technology that is going to bring Star Wars holographic telepresence to our homes?

After watching videos of the prototype, I'm underwhelmed. The researchers claim this is getting us close to holographic telepresence, but we're clearly still far, far away. Lightsabers might come first.

So for now I'm sticking with robotic telepresence. I guess I feel more comfortable in a solid mechatronic body than as a flimsy hologram. But see for yourself and let us know what you think.

According to the researchers, the movie below "shows the concept of 3D telepresence." The device is displaying holograms of individuals located in a different room. Note that in the first hologram the man is smiling; the next time the hologram is refreshed, after several seconds, he's not smiling anymore. (Don't ask me about the second guy with the glasses at the end -- I don't understand why they show him if it's just a static image...)

This movie shows a series of 6-nanosecond laser pulses writing an image in about 2 seconds.

Videos: University of Arizona

Intel First: Making Advanced Chips for Third Parties

Achronix Semiconductor Corp. announced yesterday that it will churn out a new line of its chips in Intel's forthcoming 22-nm factory. This is a first: Intel has put its older manufacturing lines at other companies' disposal, but it has never before opened one of its newest factories to an outsider.

Some have said that this signals Intel's entrance into the foundry business--chip manufacturing for hire--now dominated by Taiwanese and Chinese companies. Gus Richard, a microprocessor industry analyst with Piper Jaffray, said as much to The New York Times:

“Manufacturing is [Intel's] crown jewel, and they’re finding new ways to monetize it.”

However, Intel's Bill Kircos, Director, Product and Technology Media Relations, wrote in a company blog post that the agreement is "not currently viewed as financially material to Intel’s earnings."

He echoed that sentiment in an email to IEEE Spectrum. "Our factories are our prized possession," Kircos says. "Still, this agreement with Achronix would only make up significantly less than one percent of our capacity."

Achronix develops field-programmable gate arrays (FPGAs)--customizable chips that users can tap for a variety of telecommunications, military, and aerospace applications. They can be manufactured in smaller quantities than application-specific integrated circuits (ASICs), which, as The Register describes, require large production runs to make economic sense:

"It might take $30m or $40m to develop an ASIC to do a particular job. . . . For very high volume products—with hundreds of thousands to millions of units where the cost per unit has to be low—you want an ASIC. But in places where you need a chip that might only require thousands to tens of thousands of units to satisfy an entire market, an FPGA, while more expensive to buy, is better because it is less expensive to make and is correctable in a way that an ASIC is not."

Two other FPGA companies, Xilinx and Altera, currently control 85 percent of the market, EE Times reports.

Achronix suggests in a press release that the Speedster22i it will manufacture in Intel's factory will help it compete, writing that the chip will "eclipse other FPGA solutions expected to hit the market in the next few years." The release also says the chip will be the FPGA equivalent of an ASIC of over 20 million gates. Since Intel's plants are in the United States, the company says, the device will be ideal for military applications. IEEE Spectrum's Sally Adee discusses why "onshore" silicon manufacturing is important for security in her feature "The Hunt for the Kill Switch."

What this means for Intel isn't clear. EE Times reports that some had recently speculated that Intel, which made its own programmable logic chips in the 1980s, would acquire Xilinx or Altera. Achronix CEO John Lofton Holt told EE Times that the arrangement

"[S]peaks to how important they [Intel] see FPGAs to the future of the semiconductor industry." But, Holt said, "If Intel wanted to be in the FPGA business they would be already. They certainly have the cash."

I asked Intel's Kircos whether Intel had any plans for manufacturing their own FPGAs. "We're not going to speculate on that," he says.

Image: Achronix

Energy Consumption Labels Coming Soon to a TV Near You

You've seen them on refrigerators; you've carefully peeled them off washing machines. And now they're coming to a TV near you--they being EnergyGuide labels. This means shoppers can compare energy consumption--which can vary greatly between TVs, even those with the same technology and screen size--as easily as they compare price. (One model of 42-inch Philips LCD, for example, sucks up $40.35 worth of electricity in a year, according to CNET, while a 42-inch LG LCD will use just $20.66 worth of electricity in that same year.) The U.S. Federal Trade Commission announced the new rule today; it goes into effect on 10 May 2011.
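
For the curious, here's roughly how those annual dollar figures get computed; the wattages, viewing hours, and electricity rate below are placeholders of mine, not the FTC's official test assumptions.

```python
def annual_energy_cost(watts, hours_per_day, cents_per_kwh):
    """Yearly electricity cost of a device, given its average power draw."""
    kwh_per_year = watts * hours_per_day * 365 / 1000
    return kwh_per_year * cents_per_kwh / 100

# Illustrative only: the labels assume a fixed viewing pattern and electricity
# rate; the wattages and the 11-cent rate here are placeholder values.
for name, watts in [("42-in LCD A", 200), ("42-in LCD B", 105)]:
    cost = annual_energy_cost(watts, hours_per_day=5, cents_per_kwh=11)
    print(f"{name}: ${cost:.2f} per year")
```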
